
Google Cloud Next 2024: Everything announced so far | TechCrunch


Google’s Cloud Next 2024 event takes place in Las Vegas through Thursday, and that means lots of new cloud-focused news on everything from Gemini, Google’s AI-powered chatbot, to devops and security. Last year’s event was the first in-person Cloud Next since 2019, and Google took to the stage to show off its ongoing dedication to AI with Duet AI for Gmail and many other debuts, including the expansion of generative AI to its security product line and other enterprise-focused updates.

Don’t have time to watch the full archive of Google’s keynote event? That’s OK; we’ve summed up the most important parts of the event below, with additional details from the TechCrunch team on the ground at the event. And Tuesday’s updates weren’t the only things Google made available to non-attendees — Wednesday’s developer-focused stream started at 10:30 a.m. PT.

Google Vids

Leveraging AI to help customers develop creative content is something Big Tech is chasing, and on Tuesday Google introduced its take. Google Vids, a new AI-fueled video creation tool, is the latest addition to Google Workspace.

Here’s how it works: Google says users can make videos alongside other Workspace tools like Docs and Sheets, with editing, writing and production all handled in one place. You can also collaborate with colleagues in real time within Google Vids. Read more

Gemini Code Assist

After reading about Google’s new Gemini Code Assist, an enterprise-focused AI code completion and assistance tool, you may be thinking that it sounds familiar. You would be correct. TechCrunch Senior Editor Frederic Lardinois writes that “Google previously offered a similar service under the now-defunct Duet AI branding.” Then Gemini came along. Code Assist is a direct competitor to GitHub’s Copilot Enterprise. Here’s why

And to put Gemini Code Assist into context, Alex Wilhelm breaks down its competition with Copilot, and its potential risks and benefits to developers, in the latest TechCrunch Minute episode.

Google Workspace

Among the new features are voice prompts to kick off the AI-based “Help me write” feature in Gmail while on the go. Another Gmail addition instantly turns rough email drafts into more polished messages. Over in Sheets, you can send out a customizable alert when a certain field changes, and a new set of templates makes starting a new spreadsheet easier. For Docs lovers, there is now support for tabs, which, according to the company, lets you “organize information in a single document instead of linking to multiple documents or searching through Drive.” Of course, subscribers get the goodies first. Read more

Google also plans to monetize two of its new AI features for the Google Workspace productivity suite, each as a $10-per-user-per-month add-on. One is the new AI meetings and messaging add-on, which takes notes for you, provides meeting summaries and translates content into 69 languages. The other is a new AI security add-on, which helps admins keep Google Workspace content more secure. Read more

Imagen 2

In February, Google announced an image generator built into Gemini, Google’s AI-powered chatbot, then pulled it shortly after it was found to be randomly injecting gender and racial diversity into prompts about people, which resulted in some offensive inaccuracies. While we wait for an eventual re-release, Google is pushing ahead with Imagen 2, the enhanced image-generating tool inside its Vertex AI developer platform that has more of an enterprise focus. Imagen 2 is now generally available and comes with some fun new capabilities, including inpainting and outpainting. There’s also what Google is calling “text-to-live images,” which lets you create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. Read more
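
If you want a sense of how developers reach Imagen 2 programmatically, the sketch below uses the Vertex AI Python SDK. It is a minimal illustration, not an official sample: the project ID, region and the “imagegeneration@006” model version string are assumptions.

```python
# Minimal sketch: generating an image with Imagen on Vertex AI.
# The project ID, region and model version string are placeholders/assumptions.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = ImageGenerationModel.from_pretrained("imagegeneration@006")  # assumed Imagen 2 version tag
response = model.generate_images(
    prompt="A watercolor painting of a data center at sunrise",
    number_of_images=1,
)
response.images[0].save(location="imagen_sample.png")  # write the first result to disk
```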

Vertex AI Agent Builder

We can all use a little bit of help, right? Meet Google’s Vertex AI Agent Builder, a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

To do this, the company uses a process called “grounding,” where the model’s answers are tied to something considered a reliable source. In this case, it’s relying on Google Search (which, in reality, may or may not be accurate). Read more
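
Grounding, in practice, means attaching a retrieval source to the model so its answers are tied to search results rather than only the model’s weights. The snippet below is a rough sketch of that idea using the Vertex AI SDK’s Google Search grounding tool; the project details and model name are assumptions for illustration, not something Google confirmed on stage.

```python
# Minimal sketch: grounding a Gemini model's answers in Google Search
# via the Vertex AI SDK. Project, region and model name are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

# Tool that instructs the model to retrieve and cite Google Search results.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.0-pro")  # assumed model name for illustration
response = model.generate_content(
    "What did Google announce at Cloud Next 2024?",
    tools=[search_tool],
)
print(response.text)
```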

Gemini comes to databases

Google calls Gemini in Databases a collection of features that “simplify all aspects of the database journey.” In less jargony language, it’s a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases. Read more

Google renews its focus on data sovereignty

Google has offered cloud sovereignty solutions before, but it is now focusing more on partnerships rather than building them out on its own. Read more

Security tools get some AI love

Google is jumping aboard the trend of productizing generative AI-powered security tools with a number of new products and features aimed at large companies. Those include Threat Intelligence, which can analyze large portions of potentially malicious code and lets users perform natural language searches for ongoing threats or indicators of compromise. Another is Chronicle, Google’s cybersecurity telemetry offering for cloud customers, which now assists with cybersecurity investigations. The third is the enterprise cybersecurity and risk management suite Security Command Center. Read more

Nvidia’s Blackwell platform

One of the most anticipated announcements is Nvidia’s next-generation Blackwell platform coming to Google Cloud in early 2025. Yes, that seems a long way off. Here is what to look forward to: support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training. Oh, and we can reveal that the GB200 servers will be liquid-cooled. Read more

Chrome Enterprise Premium

Meanwhile, Google is expanding its Chrome Enterprise product suite with the launch of Chrome Enterprise Premium. What’s new here mainly pertains to the security capabilities of the existing service, based on the insight that browsers are now the endpoints where most of a company’s high-value work gets done. Read more

Gemini 1.5 Pro

Everyone can use a “half” every now and again, and Google obliges with Gemini 1.5 Pro. This, Kyle Wiggers writes, is “Google’s most capable generative AI model,” and it is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. Here’s what you get for that half: the amount of context the model can process jumps from 128,000 tokens to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”). Read more
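
To put that window in perspective: at a common rule of thumb of roughly four characters per token, 1 million tokens works out to somewhere around 700,000 words of input. Checking how many tokens a prompt consumes can be done with the Vertex AI SDK, as in the minimal sketch below; the project details and preview model identifier are assumptions for illustration.

```python
# Minimal sketch: counting tokens for a prompt against Gemini 1.5 Pro on Vertex AI.
# Project, region and the preview model identifier are placeholders/assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.5-pro-preview-0409")  # assumed preview model ID
prompt = "fantastic " * 10_000  # a deliberately long input

count = model.count_tokens(prompt)
print(count.total_tokens)  # still far below the 1-million-token context window
```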

Open source tools

At Google Cloud Next 2024, the company debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure. One is MaxDiffusion, a collection of reference implementations of various diffusion models that run on XLA, or Accelerated Linear Algebra, devices. Then there is JetStream, a new engine for running generative AI models. The third is MaxText, a collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. Read more

Axion

We don’t know a lot about this one, but here is what we do know: Google Cloud joins AWS and Azure in announcing its first custom-built Arm processor, dubbed Axion. Frederic Lardinois writes that “based on Arm’s Neoverse V2 designs, Google says its Axion instances offer 30% better performance than other Arm-based instances from competitors like AWS and Microsoft and up to 50% better performance and 60% better energy efficiency than comparable X86-based instances.” Read more

The entire Google Cloud Next keynote

If all of that isn’t enough of an AI and cloud update deluge, you can watch the entire event keynote via the embed below.

Google Cloud Next’s developer keynote

On Wednesday, Google held a separate keynote for developers, offering a deeper dive into the ins and outs of a number of tools outlined during Tuesday’s keynote, including Gemini Cloud Assist and using AI for product recommendations and chat agents, before ending with a showcase from Hugging Face. You can check out the full keynote below.



Google Cloud Next 2024: Watch the keynote on Gemini AI, enterprise reveals right here | TechCrunch


It’s time for Google’s annual look up to the cloud, this time with a big dose of AI.

At 9 a.m. PT Tuesday, Google Cloud CEO Thomas Kurian kicked off the opening keynote for this year’s Google Cloud Next event, and you can watch the archive of the reveals above, or right here.

After this week we’ll know more about Google’s attempts to help the enterprise enter the age of AI. From a deeper dive into Gemini, the company’s AI-powered chatbot, to securing AI products and implementing generative AI into cloud applications, Google will continue to cover a wide range of topics.

We’re also keeping tabs on everything Google’s announcing at Cloud Next 2024, from Google Vids to Gemini Code Assist to Google Workspace updates.

And for those more interested in Google’s details and reveals for developers, the Developer Keynote kicked off at 11:30 a.m. PT Wednesday, and you can catch up on the full stream right here or via the embed below.



Google's Gemini comes to databases | TechCrunch


Google wants Gemini, its family of generative AI models, to power your app’s databases — in a sense.

At its annual Cloud Next conference in Las Vegas, Google announced the public preview of Gemini in Databases, a collection of features underpinned by Gemini to — as the company pitched it — “simplify all aspects of the database journey.” In less jargony language, Gemini in Databases is a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases.

One piece of Gemini in Databases is Database Studio, an editor for structured query language (SQL), the language used to store and process data in relational databases. Built into the Google Cloud console, Database Studio can generate, summarize and fix certain errors with SQL code, Google says, in addition to offering general SQL coding suggestions through a chatbot-like interface.
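
Database Studio itself lives in the Google Cloud console, but the underlying flow (describe what you want in natural language and get SQL back to review) can be sketched with the Gemini API on Vertex AI. Everything in the snippet below, from the project to the model name and table schema, is a hypothetical illustration rather than the Database Studio interface.

```python
# Rough sketch of the natural-language-to-SQL flow that Database Studio wraps in
# the Cloud console. Project, region, model name and schema are hypothetical.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.0-pro")  # assumed model name for illustration
schema = "orders(order_id INT, customer_id INT, total NUMERIC, created_at TIMESTAMP)"

response = model.generate_content(
    f"Given the table {schema}, write a SQL query that returns total revenue "
    "per customer for the last 30 days, highest first."
)
print(response.text)  # review the generated SQL before running it against a real database
```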

Joining Database Studio under the Gemini in Databases brand umbrella are AI-assisted migrations via Google’s existing Database Migration Service. Gemini models can convert database code and deliver explanations of those changes, along with recommendations, according to Google.

Elsewhere, in Google’s new Database Center — yet another Gemini in Databases component — users can interact with databases using natural language and can manage a fleet of databases with tools to assess their availability, security and privacy compliance. And should something go wrong, those users can ask a Gemini-powered bot to offer troubleshooting tips.

“Gemini in Databases enables customers to easily generate SQL; additionally, they can now manage, optimize and govern entire fleets of databases from a single pane of glass; and finally, accelerate database migrations with AI-assisted code conversions,” Andi Gutmans, GM of databases at Google Cloud, wrote in a blog post shared with TechCrunch. “Imagine being able to ask questions like ‘Which of my production databases in east Asia had missing backups in the last 24 hours?’ or ‘How many PostgreSQL resources have a version higher than 11?’ and getting instant insights about your entire database fleet.”

That assumes, of course, that the Gemini models don’t make mistakes from time to time — which is no guarantee.

Regardless, Google’s forging ahead, bringing Gemini to Looker, its business intelligence tool, as well.

Launching in private preview, Gemini in Looker lets users “chat with their business data,” as Google describes it in a blog post. Integrated with Workspace, Google’s suite of enterprise productivity tools, Gemini in Looker spans features such as conversational analytics; report, visualization and formula generation; and automated Google Slide presentation generation. 

I’m curious to see if Gemini in Looker’s report and presentation generation work reliably well. Generative AI models don’t exactly have a reputation for accuracy, after all, which could lead to embarrassing, or even mission-critical, mistakes. We’ll find out as Cloud Next continues into the week with any luck.

Gemini in Databases could be perceived as a response of sorts to top rival Microsoft’s recently launched Copilot in Azure SQL Database, which brought generative AI to Microsoft’s existing fully managed cloud database service. Microsoft is looking to stay a step ahead in the budding AI-driven database race and has also worked to build generative AI into Azure Data Studio, the company’s set of enterprise data management and development tools.



Google open sources tools to support AI model development | TechCrunch


In a typical year, Cloud Next — one of Google’s two major annual developer conferences, the other being I/O — almost exclusively features managed and otherwise closed source, gated-behind-locked-down-APIs products and services. But this year, whether to foster developer goodwill or advance its ecosystem ambitions (or both), Google debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure.

The first, MaxDiffusion, which Google actually quietly released in February, is a collection of reference implementations of various diffusion models — models like the image generator Stable Diffusion — that run on XLA devices. “XLA” stands for Accelerated Linear Algebra, an admittedly awkward acronym referring to a technique that optimizes and speeds up specific types of AI workloads, including fine-tuning and serving.

Google’s own tensor processing units (TPUs) are XLA devices, as are recent Nvidia GPUs.
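
Because MaxDiffusion targets whatever XLA hardware is available, a quick sanity check is simply to ask JAX, the XLA-backed library these reference implementations build on, what devices it can see. This is generic JAX usage, not MaxDiffusion’s own API.

```python
# Generic JAX check for which XLA devices are available locally (CPU, GPU or TPU).
# This is standard JAX, not part of MaxDiffusion itself.
import jax

print(f"{jax.device_count()} XLA device(s) found")
for device in jax.devices():
    # e.g. platform "tpu" with a device_kind like "TPU v5e", or "gpu" with an Nvidia model name
    print(device.platform, device.device_kind)
```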

Beyond MaxDiffusion, Google’s launching JetStream, a new engine to run generative AI models — specifically text-generating models (so not Stable Diffusion). Currently limited to supporting TPUs, with GPU compatibility supposedly coming in the future, JetStream offers up to 3x higher “performance per dollar” for models like Google’s own Gemma 7B and Meta’s Llama 2, Google claims.

“As customers bring their AI workloads to production, there’s an increasing demand for a cost-efficient inference stack that delivers high performance,” Mark Lohmeyer, Google Cloud’s GM of compute and machine learning infrastructure, wrote in a blog post shared with TechCrunch. “JetStream helps with this need … and includes optimizations for popular open models such as Llama 2 and Gemma.”

Now, “3x” improvement is quite a claim to make, and it’s not exactly clear how Google arrived at that figure. Using which generation of TPU? Compared to which baseline engine? And how’s “performance” being defined here, anyway?

I’ve asked Google all these questions and will update this post if I hear back.

Second-to-last on the list of Google’s open source contributions are new additions to MaxText, Google’s collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. MaxText now includes Gemma 7B, OpenAI’s GPT-3 (the predecessor to GPT-4), Llama 2 and models from AI startup Mistral — all of which Google says can be customized and fine-tuned to developers’ needs.

“We’ve heavily optimized [the models’] performance on TPUs and also partnered closely with Nvidia to optimize performance on large GPU clusters,” Lohmeyer said. “These improvements maximize GPU and TPU utilization, leading to higher energy efficiency and cost optimization.”

Finally, Google’s collaborated with Hugging Face, the AI startup, to create Optimum TPU, which provides tooling to bring certain AI workloads to TPUs. The goal is to reduce the barrier to entry for getting generative AI models onto TPU hardware, according to Google — in particular text-generating models.

But at present, Optimum TPU is a bit bare-bones. The only model it works with is Gemma 7B. And Optimum TPU doesn’t yet support training generative models on TPUs — only running them.

Google’s promising improvements down the line.

