From Digital Age to Nano Age. WorldWide.


Robotic Automations

It's a sunny day for Google Cloud | TechCrunch


Google Cloud, Google’s cloud computing division, had a blockbuster fiscal quarter, blowing past analysts’ expectations and sending Google parent company Alphabet’s stock soaring 13%+ in after-hours trading.

Google Cloud revenue jumped 28% to $9.57 billion in Q1 2024, bolstered by the demand for generative AI tools that rely on cloud infrastructure, services and apps. That continues a positive trend for the division, which in the previous quarter (Q4 2023) notched year-on-year growth of 25.66%.

Google Cloud’s operating income grew nearly 5x to $900 million, up from $191 million a year earlier. No doubt investors were pleased about this tidbit, along with Alphabet’s first-ever dividend (of 20 cents per share) and a $70 billion share repurchase program.

Elsewhere across Alphabet, Google Search and other revenue climbed 14.4% to $46.15 billion in the first fiscal quarter. YouTube revenue was up 20% year-over-year to $8.09 billion (a slight dip from Q4 2023 revenue of $9.2 billion), and Google’s overall advertising business gained 13% year-on-year to reach $61.6 billion.

Alphabet’s Other Bets category, which includes the company’s self-driving vehicle subsidiary Waymo, was the notable loser. Revenue grew 72% to $495 million in Q1, but Other Bets lost $1.02 billion — about the same as it lost in Q4 2023. (Other Bets typically isn’t profitable.)

Alphabet’s companywide revenue came in at $80.5 billion, an increase of 15% year-over-year, with net income of $23.7 billion (up 57%). Beyond Google Cloud’s performance, a reduced headcount might’ve contributed to the winning quarter; Alphabet reported a 5% drop in workforce to 180,895 employees.

On a call with investors, Alphabet CEO Sundar Pichai said that YouTube’s and Google’s cloud businesses are projected to reach a combined annual run rate of over $100 billion by the end of 2024. Last year, the divisions’ combined revenue was $64.59 billion, with Google Cloud raking in $33.08 billion and YouTube generating $31.51 billion.

“Taking a step back, it took Google more than 15 years to reach $100 billion in annual revenue,” Pichai said. “In just the last six years, we’ve gone from $100 billion to more than $300 billion in annual revenue. … This shows our track record of investing in and building successful new growing businesses.”


Software Development in Sri Lanka


Google Cloud Next 2024: Watch the keynote on Gemini AI, enterprise reveals right here | TechCrunch


It’s time for Google’s annual look up to the cloud, this time with a big dose of AI.

At 9 a.m. PT Tuesday, Google Cloud CEO Thomas Kurian kicked off the opening keynote for this year’s Google Cloud Next event, and you can watch the archive of the company’s reveals above, or right here.

After this week we’ll know more about Google’s attempts to help the enterprise enter the age of AI. From a deeper dive into Gemini, the company’s AI-powered chatbot, to securing AI products and implementing generative AI into cloud applications, Google will continue to cover a wide range of topics.

We’re also keeping tabs on everything Google’s announcing at Cloud Next 2024, from Google Vids to Gemini Code Assist to Google Workspace updates.

And for those more interested in Google’s reveals for developers, the Developer Keynote kicked off at 11:30 a.m. PT Wednesday, and you can catch up on that full stream right here or via the embed below.




Google's Gemini Pro 1.5 enters public preview on Vertex AI | TechCrunch


Gemini 1.5 Pro, Google’s most capable generative AI model, is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week.

Gemini 1.5 Pro launched in February, joining Google’s Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context it can process: from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”).

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. It’s about four times the amount of data that Anthropic’s flagship model, Claude 3, can take as input and about eight times the maximum context of OpenAI’s GPT-4 Turbo.
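A quick sanity check on those comparisons (using a rough rule of thumb of ~0.7 English words per token, which is an assumption on our part, not a published Gemini tokenizer figure):

```python
# Back-of-the-envelope check on the context-window figures above.
# WORDS_PER_TOKEN is a common rule of thumb for English text, not an
# official statistic for Gemini's tokenizer.
WORDS_PER_TOKEN = 0.7

gemini_15_pro_ctx = 1_000_000   # tokens
gpt4_turbo_ctx = 128_000        # tokens

approx_words = gemini_15_pro_ctx * WORDS_PER_TOKEN
ratio_vs_gpt4_turbo = gemini_15_pro_ctx / gpt4_turbo_ctx

print(f"~{approx_words:,.0f} words")                # ~700,000 words
print(f"~{ratio_vs_gpt4_turbo:.1f}x GPT-4 Turbo")   # ~7.8x, i.e. "about eight times"
```

The word estimate lands right on Google's "around 700,000 words," and the GPT-4 Turbo ratio rounds to the "about eight times" in the text.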

A model’s context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, email, essay or e-book.
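In practice, the context window is a hard budget: a chat app must fit the running conversation inside it, dropping the oldest turns when it overflows. A minimal sketch of that truncation step (the whitespace split is a crude stand-in for a real tokenizer):

```python
def fit_to_context(turns, max_tokens):
    """Keep the most recent conversation turns that fit within max_tokens.

    Whitespace splitting is a crude stand-in for a real tokenizer;
    production code would count tokens with the model's own tokenizer.
    """
    kept = []
    budget = max_tokens
    for turn in reversed(turns):        # walk from newest to oldest
        cost = len(turn.split())
        if cost > budget:
            break                       # everything older gets dropped
        kept.append(turn)
        budget -= cost
    return list(reversed(kept))         # restore chronological order

history = [
    "user: Who won the 2020 U.S. presidential election?",
    "assistant: Joe Biden won the 2020 election.",
    "user: And who came second?",
]
# With a tiny window, only the newest turn survives the cut.
print(fit_to_context(history, max_tokens=6))
```

With a 6-token budget only the last question fits, which is exactly the "forgetting" behavior described next: the model never sees the earlier turns at all.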

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic. This isn’t necessarily so with models with large contexts. And, as an added upside, large-context models can better grasp the narrative flow of data they take in, generate contextually richer responses and reduce the need for fine-tuning and factual grounding — hypothetically, at least.

So what specifically can one do with a 1 million-token context window? Lots of things, Google promises, like analyzing a code library, “reasoning across” lengthy documents and holding long conversations with a chatbot.

Because Gemini 1.5 Pro is multilingual — and multimodal in the sense that it’s able to understand images and videos and, as of Tuesday, audio streams in addition to text — the model can also analyze and compare content in media like TV shows, movies, radio broadcasts, conference call recordings and more across different languages. One million tokens translates to about an hour of video or around 11 hours of audio.

Thanks to its audio-processing capabilities, Gemini 1.5 Pro can generate transcriptions for video clips, as well, although the jury’s out on the quality of those transcriptions.

In a prerecorded demo earlier this year, Google showed Gemini 1.5 Pro searching the transcript of the Apollo 11 moon landing telecast (which comes to about 400 pages) for quotes containing jokes, and then finding a scene in movie footage that looked similar to a pencil sketch.

Google says that early users of Gemini 1.5 Pro — including United Wholesale Mortgage, TBS and Replit — are leveraging the large context window for tasks spanning mortgage underwriting; automating metadata tagging on media archives; and generating, explaining and transforming code.

Gemini 1.5 Pro doesn’t process a million tokens at the snap of a finger. In the aforementioned demos, each search took between 20 seconds and a minute to complete — far longer than the average ChatGPT query.

Google previously said that latency is an area of focus, though, and that it’s working to “optimize” Gemini 1.5 Pro as time goes on.

Of note, Gemini 1.5 Pro is slowly making its way to other parts of Google’s corporate product ecosystem, with the company announcing Tuesday that the model (in private preview) will power new features in Code Assist, Google’s generative AI coding assistance tool. Developers can now perform “large-scale” changes across codebases, Google says, for example updating cross-file dependencies and reviewing large chunks of code.






Google releases Imagen 2, a video clip generator | TechCrunch


Google doesn’t have the best track record when it comes to image-generating AI.

In February, the image generator built into Gemini, Google’s AI-powered chatbot, was found to be randomly injecting gender and racial diversity into prompts about people, resulting in images of racially diverse Nazis, among other offensive inaccuracies.

Google pulled the generator, vowing to improve it and eventually re-release it. As we await its return, the company’s launching an enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform — albeit a tool with a decidedly more enterprise bent. Google announced Imagen 2 at its annual Cloud Next conference in Las Vegas.

Image Credits: Frederic Lardinois/TechCrunch

Imagen 2 — which is actually a family of models, launched in December after being previewed at Google’s I/O conference in May 2023 — can create and edit images given a text prompt, like OpenAI’s DALL-E and Midjourney. Of interest to corporate types, Imagen 2 can render text, emblems and logos in multiple languages, optionally overlaying those elements onto existing images — for example, business cards, apparel and products.

After launching first in preview, image editing with Imagen 2 is now generally available in Vertex AI along with two new capabilities: inpainting and outpainting. Inpainting and outpainting, features other popular image generators such as DALL-E have offered for some time, can be used to remove unwanted parts of an image, add new components and expand the borders of an image to create a wider field of view.
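Conceptually, inpainting composites freshly generated pixels into an image only where a mask marks a region as editable, while outpainting does the same on a canvas enlarged past the original borders. A toy illustration of that masked-compositing step (not Imagen 2's actual algorithm, which generates the new pixels with a diffusion model conditioned on the prompt and the unmasked content):

```python
import numpy as np

def inpaint_composite(image, generated, mask):
    """Blend generated pixels into image wherever mask == 1.

    Toy version of the compositing behind inpainting; a real system
    produces `generated` with a diffusion model, not a constant fill.
    """
    mask = mask[..., None]              # broadcast over the color channel
    return np.where(mask == 1, generated, image)

def outpaint_canvas(image, pad):
    """Grow the canvas by `pad` pixels per side; the new border region
    (mask == 1) is what a generator would then be asked to fill in."""
    h, w, c = image.shape
    canvas = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    canvas[pad:pad + h, pad:pad + w] = image
    mask = np.ones(canvas.shape[:2], dtype=np.uint8)
    mask[pad:pad + h, pad:pad + w] = 0  # original pixels stay untouched
    return canvas, mask

img = np.full((4, 4, 3), 255, dtype=np.uint8)   # white 4x4 image
gen = np.zeros_like(img)                         # stand-in "generated" pixels
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                               # editable 2x2 region
out = inpaint_composite(img, gen, mask)
```

Only the masked 2x2 region changes; everything outside the mask is guaranteed to survive the edit, which is the property that makes these features safe for touching up real assets.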

But the real meat of the Imagen 2 upgrade is what Google’s calling “text-to-live images.”

Imagen 2 can now create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. True to Imagen 2’s corporate focus, Google’s pitching live images as a tool for marketers and creatives, such as a GIF generator for ads showing nature, food and animals — subject matter that Imagen 2 was fine-tuned on.

Google says that live images can capture “a range of camera angles and motions” while “supporting consistency over the entire sequence.” But they’re in low resolution for now: 360 pixels by 640 pixels. Google’s pledging that this will improve in the future. 

To allay (or at least attempt to allay) concerns around the potential to create deepfakes, Google says that Imagen 2 will employ SynthID, an approach developed by Google DeepMind, to apply invisible, cryptographic watermarks to live images. Of course, detecting these watermarks — which Google claims are resilient to edits, including compression, filters and color tone adjustments — requires a Google-provided tool that’s not available to third parties.
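SynthID's internals aren't public, but the general idea of an invisible watermark can be illustrated with the classic least-significant-bit scheme below. To be clear, this toy is far weaker than what Google describes: an LSB mark does not survive compression or filtering, which is precisely the robustness SynthID claims to add.

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Hide a bit string in the least-significant bit of each pixel value.

    Each value changes by at most 1, so the mark is invisible to the eye.
    Toy scheme for illustration only; it is not SynthID.
    """
    flat = pixels.flatten()             # flatten() returns a copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bits):
    """Recover the hidden bits — the role of the detector tool."""
    return [int(v) & 1 for v in pixels.flatten()[:n_bits]]

img = np.array([[200, 201], [202, 203]], dtype=np.uint8)
mark = [1, 0, 1, 1]
stamped = embed_lsb(img, mark)
print(extract_lsb(stamped, 4))                               # recovers the mark
print(np.abs(stamped.astype(int) - img.astype(int)).max())   # per-pixel change <= 1
```

The asymmetry in the article holds even for this toy: anyone can verify a mark if they know the scheme, but without the detector (here, knowledge of which bits to read) the watermark is invisible.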

And no doubt eager to avoid another generative media controversy, Google’s emphasizing that live image generations will be “filtered for safety.” A spokesperson told TechCrunch via email: “The Imagen 2 model in Vertex AI has not experienced the same issues as the Gemini app. We continue to test extensively and engage with our customers.”

Image Credits: Frederic Lardinois/TechCrunch

But generously assuming for a moment that Google’s watermarking tech, bias mitigations and filters are as effective as it claims, are live images even competitive with the video generation tools already out there?

Not really.

Runway can generate 18-second clips in much higher resolutions. Stability AI’s video clip tool, Stable Video Diffusion, offers greater customizability (in terms of frame rate). And OpenAI’s Sora — which, granted, isn’t commercially available yet — appears poised to blow away the competition with the photorealism it can achieve.

So what are the real technical advantages of live images? I’m not really sure. And I don’t think I’m being too harsh.

After all, Google is behind genuinely impressive video generation tech like Imagen Video and Phenaki. Phenaki, one of Google’s more interesting experiments in text-to-video, turns long, detailed prompts into two-minute-plus “movies” — with the caveat that the clips are low resolution, low frame rate and only somewhat coherent.

In light of recent reports suggesting that the generative AI revolution caught Google CEO Sundar Pichai off guard and that the company’s still struggling to maintain pace with rivals, it’s not surprising that a product like live images feels like an also-ran. But it’s disappointing nonetheless. I can’t help the feeling that there is — or was — a more impressive product lurking in Google’s skunkworks.

Models like Imagen are trained on an enormous number of examples usually sourced from public sites and datasets around the web. Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

I asked, as I always do around announcements pertaining to generative AI models, about the data that was used to train the updated Imagen 2, and whether creators whose work might’ve been swept up in the model training process will be able to opt out at some future point.

Google told me only that its models are trained “primarily” on public web data, drawn from “blog posts, media transcripts and public conversation forums.” Which blogs, transcripts and forums? It’s anyone’s guess.

A spokesperson pointed to Google’s web publisher controls that allow webmasters to prevent the company from scraping data, including photos and artwork, from their websites. But Google wouldn’t commit to releasing an opt-out tool or, alternatively, compensating creators for their (unknowing) contributions — a step that many of its competitors, including OpenAI, Stability AI and Adobe, have taken.

Another point worth mentioning: Text-to-live images isn’t covered by Google’s generative AI indemnification policy, which protects Vertex AI customers from copyright claims related to Google’s use of training data and outputs of its generative AI models. That’s because text-to-live images is technically in preview; the policy only covers generative AI products in general availability (GA).

Regurgitation, in which a generative model spits out a near-exact copy of an example (e.g., an image) that it was trained on, is rightly a concern for corporate customers. Studies both informal and academic have shown that the first-gen Imagen wasn’t immune to it, spitting out identifiable photos of people, artists’ copyrighted works and more when prompted in particular ways.

Barring controversies, technical issues or some other major unforeseen setbacks, text-to-live images will enter GA somewhere down the line. But with live images as it exists today, Google’s basically saying: use at your own risk.




Google injects generative AI into its cloud security tools | TechCrunch


At its annual Cloud Next conference in Las Vegas, Google on Tuesday introduced new cloud-based security products and services — in addition to updates to existing products and services — aimed at customers managing large, multi-tenant corporate networks.

Many of the announcements had to do with Gemini, Google’s flagship family of generative AI models.

For example, Google unveiled Gemini in Threat Intelligence, a new Gemini-powered component of the company’s Mandiant cybersecurity platform. Now in public preview, Gemini in Threat Intelligence can analyze large portions of potentially malicious code and let users perform natural language searches for ongoing threats or indicators of compromise, as well as summarize open source intelligence reports from around the web.

“Gemini in Threat Intelligence now offers conversational search across Mandiant’s vast and growing repository of threat intelligence directly from frontline investigations,” Sunil Potti, GM of cloud security at Google, wrote in a blog post shared with TechCrunch. “Gemini will navigate users to the most relevant pages in the integrated platform for deeper investigation … Plus, [Google’s malware detection service] VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform.”

Elsewhere, Gemini can now assist with cybersecurity investigations in Chronicle, Google’s cybersecurity telemetry offering for cloud customers. Set to roll out by the end of the month, the new capability guides security analysts through their typical workflows, recommending actions based on the context of a security investigation, summarizing security event data and creating breach and exploit detection rules from a chatbot-like interface.

And in Security Command Center, Google’s enterprise cybersecurity and risk management suite, a new Gemini-driven feature lets security teams search for threats using natural language while providing summaries of misconfigurations, vulnerabilities and possible attack paths.

Rounding out the security updates were privileged access manager (in preview), a service that offers just-in-time, time-bound and approval-based access options designed to help mitigate risks tied to privileged access misuse. Google’s also rolling out principal access boundary (in preview, as well), which lets admins implement restrictions on network root-level users so that those users can only access authorized resources within a specifically defined boundary.
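The just-in-time model boils down to a simple invariant: a grant is valid only if it was approved and the current time falls inside its window. A minimal sketch of that check (invented names and fields, not Google's privileged access manager API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    """Toy model of a just-in-time, time-bound, approval-based grant.

    Illustrative only; field names are invented, not Google Cloud's API.
    """
    principal: str
    resource: str
    approved: bool
    starts_at: datetime
    duration: timedelta

    def is_active(self, now=None):
        now = now or datetime.now(timezone.utc)
        # Valid only when approved AND inside the [start, start+duration) window.
        return (self.approved
                and self.starts_at <= now < self.starts_at + self.duration)

t0 = datetime(2024, 4, 9, 9, 0, tzinfo=timezone.utc)
grant = AccessGrant("alice@example.com", "projects/prod-db",
                    approved=True, starts_at=t0,
                    duration=timedelta(hours=1))
print(grant.is_active(now=t0 + timedelta(minutes=30)))   # inside the window
print(grant.is_active(now=t0 + timedelta(hours=2)))      # expired
```

Because access expires automatically, a leaked or forgotten grant stops working on its own, which is the risk-mitigation argument for time-bound access in the first place.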

Lastly, Autokey (in preview) aims to simplify creating and managing customer encryption keys for high-security use cases, while Audit Manager (also in preview) provides tools for Google Cloud customers in regulated industries to generate proof of compliance for their workloads and cloud-hosted data.

“Generative AI offers tremendous potential to tip the balance in favor of defenders,” Potti wrote in the blog post. “And we continue to infuse AI-driven capabilities into our products.”

Google isn’t the only company attempting to productize generative AI–powered security tooling. Microsoft last year launched a set of services that leverage generative AI to correlate data on attacks while prioritizing cybersecurity incidents. Startups, including Aim Security, are also jumping into the fray, aiming to corner the nascent space.

But with generative AI’s tendency to make mistakes, it remains to be seen whether these tools have staying power.




Google's Gemini comes to databases | TechCrunch


Google wants Gemini, its family of generative AI models, to power your app’s databases — in a sense.

At its annual Cloud Next conference in Las Vegas, Google announced the public preview of Gemini in Databases, a collection of features underpinned by Gemini to — as the company pitched it — “simplify all aspects of the database journey.” In less jargony language, Gemini in Databases is a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases.

One piece of Gemini in Databases is Database Studio, an editor for structured query language (SQL), the language used to store and process data in relational databases. Built into the Google Cloud console, Database Studio can generate, summarize and fix certain errors with SQL code, Google says, in addition to offering general SQL coding suggestions through a chatbot-like interface.

Joining Database Studio under the Gemini in Databases brand umbrella is AI-assisted migrations via Google’s existing Database Migration Service. Google’s Gemini models can convert database code and deliver explanations of those changes along with recommendations, according to Google.

Elsewhere, in Google’s new Database Center — yet another Gemini in Databases component — users can interact with databases using natural language and can manage a fleet of databases with tools to assess their availability, security and privacy compliance. And should something go wrong, those users can ask a Gemini-powered bot to offer troubleshooting tips.

“Gemini in Databases enables customers to easily generate SQL; additionally, they can now manage, optimize and govern entire fleets of databases from a single pane of glass; and finally, accelerate database migrations with AI-assisted code conversions,” Andi Gutmans, GM of databases at Google Cloud, wrote in a blog post shared with TechCrunch. “Imagine being able to ask questions like ‘Which of my production databases in east Asia had missing backups in the last 24 hours?’ or ‘How many PostgreSQL resources have a version higher than 11?’ and getting instant insights about your entire database fleet.”
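To make the "generate SQL" step concrete, here is the sort of query an assistant might produce for fleet questions like Gutmans', run against a toy inventory table. The schema and data are invented for illustration; a real Database Center fleet view would be far richer.

```python
import sqlite3

# Invented fleet-inventory schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE databases (
    name TEXT, region TEXT, engine TEXT,
    engine_version INTEGER, last_backup_hours_ago INTEGER
);
INSERT INTO databases VALUES
    ('orders-db', 'asia-east1', 'postgres', 15, 30),
    ('users-db',  'asia-east1', 'postgres', 11, 2),
    ('analytics', 'us-east1',   'postgres', 13, 50);
""")

# SQL an assistant might generate for: "Which of my production databases
# in east Asia had missing backups in the last 24 hours?"
missing = conn.execute("""
    SELECT name FROM databases
    WHERE region LIKE 'asia-%' AND last_backup_hours_ago > 24
""").fetchall()
print(missing)   # [('orders-db',)]

# ...and for: "How many PostgreSQL resources have a version higher than 11?"
count = conn.execute("""
    SELECT COUNT(*) FROM databases
    WHERE engine = 'postgres' AND engine_version > 11
""").fetchone()[0]
print(count)     # 2
```

The value proposition is translating the natural-language question into queries like these; the open question, as noted below, is how often that translation is correct.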

That assumes, of course, that the Gemini models don’t make mistakes from time to time — which is no guarantee.

Regardless, Google’s forging ahead, bringing Gemini to Looker, its business intelligence tool, as well.

Launching in private preview, Gemini in Looker lets users “chat with their business data,” as Google describes it in a blog post. Integrated with Workspace, Google’s suite of enterprise productivity tools, Gemini in Looker spans features such as conversational analytics; report, visualization and formula generation; and automated Google Slide presentation generation. 

I’m curious to see if Gemini in Looker’s report and presentation generation work reliably well. Generative AI models don’t exactly have a reputation for accuracy, after all, which could lead to embarrassing, or even mission-critical, mistakes. We’ll find out as Cloud Next continues into the week with any luck.

Gemini in Databases could be perceived as a response of sorts to top rival Microsoft’s recently launched Copilot in Azure SQL Database, which brought generative AI to Microsoft’s existing fully managed cloud database service. Microsoft is looking to stay a step ahead in the budding AI-driven database race and has also worked to build generative AI with Azure Data Studio, the company’s set of enterprise data management and development tools.




Google open sources tools to support AI model development | TechCrunch


In a typical year, Cloud Next — one of Google’s two major annual developer conferences, the other being I/O — almost exclusively features managed and otherwise closed source, gated-behind-locked-down-APIs products and services. But this year, whether to foster developer goodwill or advance its ecosystem ambitions (or both), Google debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure.

The first, MaxDiffusion, which Google actually quietly released in February, is a collection of reference implementations of various diffusion models — models like the image generator Stable Diffusion — that run on XLA devices. “XLA” stands for Accelerated Linear Algebra, an admittedly awkward acronym referring to a technique that optimizes and speeds up specific types of AI workloads, including fine-tuning and serving.

Google’s own tensor processing units (TPUs) are XLA devices, as are recent Nvidia GPUs.

Beyond MaxDiffusion, Google’s launching JetStream, a new engine to run generative AI models — specifically text-generating models (so not Stable Diffusion). Currently limited to supporting TPUs with GPU compatibility supposedly coming in the future, JetStream offers up to 3x higher “performance per dollar” for models like Google’s own Gemma 7B and Meta’s Llama 2, Google claims.

“As customers bring their AI workloads to production, there’s an increasing demand for a cost-efficient inference stack that delivers high performance,” Mark Lohmeyer, Google Cloud’s GM of compute and machine learning infrastructure, wrote in a blog post shared with TechCrunch. “JetStream helps with this need … and includes optimizations for popular open models such as Llama 2 and Gemma.”

Now, “3x” improvement is quite a claim to make, and it’s not exactly clear how Google arrived at that figure. Using which generation of TPU? Compared to which baseline engine? And how’s “performance” being defined here, anyway?
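The baseline question matters because "performance per dollar" is a ratio of two free variables. The sketch below (all numbers invented) shows how the very same engine can look 3x or only 1.5x better depending on which baseline it is measured against:

```python
def perf_per_dollar(tokens_per_second, dollars_per_hour):
    """Throughput per unit cost, the metric behind JetStream's claim.

    All figures below are invented to illustrate baseline sensitivity;
    they are not measurements of JetStream or any real engine.
    """
    return tokens_per_second / dollars_per_hour

baseline_a = perf_per_dollar(tokens_per_second=1000, dollars_per_hour=8.0)
baseline_b = perf_per_dollar(tokens_per_second=2000, dollars_per_hour=8.0)
new_engine = perf_per_dollar(tokens_per_second=3000, dollars_per_hour=8.0)

print(new_engine / baseline_a)   # 3.0x against the weaker baseline
print(new_engine / baseline_b)   # 1.5x against the stronger one
```

Without knowing the baseline engine, the TPU generation and the throughput definition (prefill vs. decode tokens, batch size, sequence length), a headline multiplier is hard to interpret.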

I’ve asked Google all these questions and will update this post if I hear back.

Second-to-last on the list of Google’s open source contributions are new additions to MaxText, Google’s collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. MaxText now includes Gemma 7B, OpenAI’s GPT-3 (the predecessor to GPT-4), Llama 2 and models from AI startup Mistral — all of which Google says can be customized and fine-tuned to developers’ needs.

“We’ve heavily optimized [the models’] performance on TPUs and also partnered closely with Nvidia to optimize performance on large GPU clusters,” Lohmeyer said. “These improvements maximize GPU and TPU utilization, leading to higher energy efficiency and cost optimization.”

Finally, Google’s collaborated with Hugging Face, the AI startup, to create Optimum TPU, which provides tooling to bring certain AI workloads to TPUs. The goal is to reduce the barrier to entry for getting generative AI models onto TPU hardware, according to Google — in particular text-generating models.

But at present, Optimum TPU is a bit bare-bones. The only model it works with is Gemma 7B. And Optimum TPU doesn’t yet support training generative models on TPUs — only running them.

Google’s promising improvements down the line.




Nvidia's next-gen Blackwell platform will come to Google Cloud in early 2025 | TechCrunch


It’s Google Cloud Next in Las Vegas this week, and that means it’s time for a bunch of new instance types and accelerators to hit the Google Cloud Platform. In addition to the new custom Arm-based Axion chips, most of this year’s announcements are about AI accelerators, whether built by Google or from Nvidia.

Only a few weeks ago, Nvidia announced its Blackwell platform. But don’t expect Google to offer those machines anytime soon. Support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training will arrive in early 2025. One interesting nugget from Google’s announcement: The GB200 servers will be liquid-cooled.

Announcing support this far ahead may sound premature, but Nvidia itself has said that its Blackwell chips won’t be publicly available until the last quarter of this year.

Image Credits: Frederic Lardinois/TechCrunch

Before Blackwell

For developers who need more power to train LLMs today, Google also announced the A3 Mega instance. This instance, which the company developed together with Nvidia, features the industry-standard H100 GPUs but combines them with a new networking system that can deliver up to twice the bandwidth per GPU.

Another new A3 instance is A3 confidential, which Google described as enabling customers to “better protect the confidentiality and integrity of sensitive data and AI workloads during training and inferencing.” The company has long offered confidential computing services that encrypt data in use, and here, once enabled, confidential computing will encrypt data transfers between Intel’s CPU and the Nvidia H100 GPU via protected PCIe. No code changes required, Google says. 

As for Google’s own chips, the company on Tuesday launched its Cloud TPU v5p processors — the most powerful of its homegrown AI accelerators yet — into general availability. These chips feature a 2x improvement in floating point operations per second and a 3x improvement in memory bandwidth.

Image Credits: Frederic Lardinois/TechCrunch

All of those fast chips need an underlying architecture that can keep up with them. So in addition to the new chips, Google also announced Tuesday new AI-optimized storage options. Hyperdisk ML, which is now in preview, is the company’s next-gen block storage service that can improve model load times by up to 3.7x, according to Google.

Google Cloud is also launching a number of more traditional instances, powered by Intel’s fourth- and fifth-generation Xeon processors. The new general-purpose C4 and N4 instances, for example, will feature the fifth-generation Emerald Rapids Xeons, with the C4 focused on performance and the N4 on price. The new C4 instances are now in private preview, and the N4 machines are generally available today.

Also new, but still in preview, are the C3 bare-metal machines, powered by older fourth-generation Intel Xeons, the X4 memory-optimized bare metal instances (also in preview) and the Z3, Google Cloud’s first storage-optimized virtual machine that promises to offer “the highest IOPS for storage optimized instances among leading clouds.”




Google bets on partners to run their own sovereign Google Clouds | TechCrunch


Data sovereignty and residency laws have become commonplace in recent years. The major clouds, however, were always set up to enable the free movement of data between their various locations. So over the course of the last few years, all of the hyperscalers started looking into how they could offer sovereign clouds that guarantee that government data, for example, never leaves a given country. AWS announced its European Sovereign Cloud last October. The Microsoft Azure Cloud for Sovereignty became generally available in December.

Google Cloud’s approach has been a bit different. Back in 2021, Google Cloud partnered with T-Systems to offer a sovereign cloud for Germany. A few weeks ago, it also announced a new partnership with World Wide Technology (WWT) to offer sovereign cloud solutions for government customers in the U.S.

Now Google is renewing its focus on data sovereignty. For the time being, though, it looks like its emphasis is on partnerships, not building its own sovereign clouds.

Google Cloud’s hybrid and on-premises story has changed quite a bit over the last few years. From the Cloud Services Platform to Anthos, GKE On-Prem and likely a few others that time has long forgotten, Google Cloud has aimed to offer a solution for companies that want to use its services and tooling but because of regulations, security, cost or paranoia, don’t want their workloads and data to sit in the Google cloud. Google’s latest effort in this space is branded Google Distributed Cloud (GDC), a fully managed software and hardware solution that can either be connected to the Google Cloud or be completely air-gapped from the internet.

Of course, this wouldn’t be 2024 if Google didn’t put an emphasis on AI in all of these efforts, too.

“Today, customers are looking for entirely new ways to process and analyze data, discover hidden insights, increase productivity and build entirely new applications — all with AI at the core,” said Vithal Shirodkar, VP/GM, Google Distributed Cloud and Geo Expansion, Google Cloud, in Tuesday’s announcement. “However, data sovereignty, regulatory compliance, and low-latency requirements can present a dilemma for organizations eager to adopt AI in the cloud. The need to keep sensitive data in certain locations, adhere to strict regulations, and ensure swift responsiveness can make it difficult to capitalize on the cloud’s inherent advantages of innovation, scalability, and cost-efficiency.”

At Cloud Next, Google Cloud’s annual developer conference, GDC is getting a slew of updates, including new security features (in partnership with Palo Alto Networks), support for the Apigee API management service and more. Developers can also now use a GDC Sandbox in Google Cloud to build and test applications without the need to work with the physical hardware. What’s maybe just as important as these new features is that GDC is now ISO27001 and SOC2 compliant.

On the hardware side, Google Cloud is introducing new AI servers for GDC. These are powered by Nvidia’s L4 Tensor Core GPUs and are now available in addition to the existing GDC AI-optimized servers with the high-powered Nvidia H100 GPUs.

Another interesting aspect to the GDC digital sovereignty story is that Google Cloud is emphasizing its partners, T-Systems, WWT and Clarence, which can deliver sovereign GDC-powered clouds on behalf of their clients.


