From Digital Age to Nano Age. WorldWide.

Robotic Automations

Google Cloud Next 2024: Everything announced so far | TechCrunch


Google’s Cloud Next 2024 event takes place in Las Vegas through Thursday, and that means lots of new cloud-focused news on everything from Gemini, Google’s AI-powered chatbot, to devops and security. Last year’s event was the first in-person Cloud Next since 2019, and Google took to the stage to show off its ongoing dedication to AI with Duet AI for Gmail and many other debuts, including the expansion of generative AI to its security product line and other enterprise-focused updates.

Don’t have time to watch the full archive of Google’s keynote event? That’s OK; we’ve summed up the most important parts of the event below, with additional details from the TechCrunch team on the ground at the event. And Tuesday’s updates weren’t the only things Google made available to non-attendees — Wednesday’s developer-focused stream started at 10:30 a.m. PT.

Google Vids

Big Tech is racing to use AI to help customers develop creative content, and on Tuesday Google introduced its version. Google Vids, a new AI-fueled video creation tool, is the latest addition to Google Workspace.

Here’s how it works: Google says users can make videos alongside other Workspace tools like Docs and Sheets, with editing, writing and production assistance built in. You can also collaborate with colleagues in real time within Google Vids. Read more

Gemini Code Assist

After reading about Google’s new Gemini Code Assist, an enterprise-focused AI code completion and assistance tool, you may be asking yourself if that sounds familiar. And you would be correct. TechCrunch Senior Editor Frederic Lardinois writes that “Google previously offered a similar service under the now-defunct Duet AI branding.” Then Gemini came along. Code Assist is a direct competitor to GitHub’s Copilot Enterprise. Here’s why

And to put Gemini Code Assist into context, Alex Wilhelm breaks down its competition with Copilot, and its potential risks and benefits to developers, in the latest TechCrunch Minute episode.

Google Workspace

Image Credits: Google

Among the new features are voice prompts to kick off the AI-based “Help me write” feature in Gmail while on the go. Gmail also gains a way to instantly turn rough drafts into more polished emails. Over in Sheets, you can send out a customizable alert when a certain field changes, and a new set of templates makes starting a spreadsheet easier. Docs, meanwhile, now supports tabs, which, according to the company, let you “organize information in a single document instead of linking to multiple documents or searching through Drive.” Of course, subscribers get the goodies first. Read more

Google also plans to monetize two of its new AI features for the Google Workspace productivity suite as $10/month/user add-on packages. One is the new AI meetings and messaging add-on, which takes notes for you, provides meeting summaries and translates content into 69 languages. The other is a new AI security package, which helps admins keep Google Workspace content more secure. Read more

Imagen 2

In February, Google announced an image generator built into Gemini, Google’s AI-powered chatbot. The company pulled it shortly after it was found to be randomly injecting gender and racial diversity into prompts about people, resulting in some offensive inaccuracies. While we wait for an eventual re-release, Google has come out with an enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform, with more of a focus on enterprise. Imagen 2 is now generally available and comes with some fun new capabilities, including inpainting and outpainting. There’s also what Google’s calling “text-to-live images,” which lets you create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. Read more

Vertex AI Agent Builder

We can all use a little bit of help, right? Meet Google’s Vertex AI Agent Builder, a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

To do this, the company uses a process called “grounding,” where the answers are tied to something considered to be a reliable source. In this case, it’s relying on Google Search (which may or may not be accurate). Read more

Gemini comes to databases

Google calls Gemini in Databases a collection of features that “simplify all aspects of the database journey.” In less jargony language, it’s a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases. Read more

Google renews its focus on data sovereignty

Image Credits: MirageC / Getty Images

Google has offered sovereign cloud solutions before, but now it is focusing more on partnerships rather than building them out on its own. Read more

Security tools get some AI love

Image Credits: Getty Images

Google is jumping aboard the generative AI security train with a number of new products and features aimed at large companies. Those include Threat Intelligence, which can analyze large portions of potentially malicious code and lets users perform natural language searches for ongoing threats or indicators of compromise. Another is Chronicle, Google’s cybersecurity telemetry offering that helps cloud customers with their investigations. The third is the enterprise cybersecurity and risk management suite Security Command Center. Read more

Nvidia’s Blackwell platform

One of the most anticipated announcements is Nvidia’s next-generation Blackwell platform coming to Google Cloud in early 2025. Yes, that seems far away. However, here is what to look forward to: support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training. Oh, and we can reveal that the GB200 servers will be liquid-cooled. Read more

Chrome Enterprise Premium

Meanwhile, Google is expanding its Chrome Enterprise product suite with the launch of Chrome Enterprise Premium. The new tier mostly extends the security capabilities of the existing service, based on the insight that browsers are now the endpoints where most high-value work inside a company is done. Read more

Gemini 1.5 Pro

Image Credits: Google

Everyone can use a “half” every now and again, and Google obliges with Gemini 1.5 Pro. This, Kyle Wiggers writes, is “Google’s most capable generative AI model,” and it is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. Here’s what you get for that half: the amount of context the model can process jumps from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”). Read more

Open source tools

Image Credits: Getty Images

At Google Cloud Next 2024, the company debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure. One is MaxDiffusion, a collection of reference implementations of various diffusion models that run on XLA, or Accelerated Linear Algebra, devices. Then there is JetStream, a new engine for running generative AI models. The third is MaxText, a collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. Read more

Axion

Image Credits: Google

We don’t know a lot about this one, but here is what we do know: Google Cloud joins AWS and Azure in announcing its first custom-built Arm processor, dubbed Axion. Frederic Lardinois writes that “based on Arm’s Neoverse V2 designs, Google says its Axion instances offer 30% better performance than other Arm-based instances from competitors like AWS and Microsoft and up to 50% better performance and 60% better energy efficiency than comparable X86-based instances.” Read more

The entire Google Cloud Next keynote

If all of that isn’t enough of an AI and cloud update deluge, you can watch the entire event keynote via the embed below.

Google Cloud Next’s developer keynote

On Wednesday, Google held a separate keynote for developers, offering a deeper dive into a number of the tools outlined during Tuesday’s keynote, including Gemini Cloud Assist, using AI for product recommendations and chat agents, and ending with a showcase from Hugging Face. You can check out the full keynote below.


Software Development in Sri Lanka

Watch: Google's Gemini Code Assist wants to use AI to help developers


Can AI eat the jobs of the developers who are busy building AI models? The short answer is no, but the longer answer is not yet settled. News this week that Google has a new AI-powered coding tool for developers, straight from the company’s Google Cloud Next 2024 event in Las Vegas, shows that the competitive pressure among major tech companies to build the best service for helping coders write more code, more quickly, is still heating up.

Microsoft’s GitHub Copilot, a service with similar outlines, has been steadily working toward enterprise adoption. Both companies want to eventually build developer-helping tech that understands a company’s codebase, allowing it to offer more tailored suggestions and tips.

Startups are in the fight as well, though they tend to focus on more tailored solutions than the broader offerings from the largest tech companies; Pythagora, Tusk and Ellipsis from the most recent Y Combinator batch are working on app creation from user prompts, AI agents for bug-squashing and turning GitHub comments into code, respectively.

Everywhere you look, developers are building tools and services to help their own professional cohort.

Developers learning to code today won’t know a world in which they don’t have AI-powered coding helpers. Call it the graphing calculator era for software builders. But the risk — or the worry, I suppose — is that in time the AI tools that are ingesting mountains of code to get smarter will eventually be able to do enough that fewer humans are needed to write code for companies. And if a company can spend less money and employ fewer people, it will; no job is safe, but some roles are just more difficult to replace at any given moment.

Thankfully, given the complexities of modern software services, ever-present tech debt and an infinite number of edge cases, what big tech and startups are busy building today seem to be very useful coding helpers and not something ready to replace, or even reduce, the number of humans building software. For now. I wouldn’t take the other end of that bet on a multi-decade time frame.

And for those looking for an even deeper dive into what Google revealed this week, you can head here for our complete rundown, including details on exactly how Gemini Code Assist works, and Google’s in-depth developer walkthrough from Cloud Next 2024.



Google Cloud Next 2024: Watch the keynote on Gemini AI, enterprise reveals right here | TechCrunch


It’s time for Google’s annual look up to the cloud, this time with a big dose of AI.

At 9 a.m. PT Tuesday, Google Cloud CEO Thomas Kurian kicked off the opening keynote for this year’s Google Cloud Next event, and you can watch the archive of the reveals above, or right here.

After this week we’ll know more about Google’s attempts to help the enterprise enter the age of AI. From a deeper dive into Gemini, the company’s AI-powered chatbot, to securing AI products and implementing generative AI into cloud applications, Google will continue to cover a wide range of topics.

We’re also keeping tabs on everything Google’s announcing at Cloud Next 2024, from Google Vids to Gemini Code Assist to Google Workspace updates.

And for those more interested in Google’s details and reveals for developers, the Developer Keynote started at 11:30 a.m. PT Wednesday, and you can catch up on that full stream right here or via the embed below.



New Google Vids product helps create a customized video with an AI assist | TechCrunch


All of the major vendors have been looking at ways to use AI to help customers develop creative content. On Tuesday at the Google Cloud Next customer conference in Las Vegas, Google introduced a new AI-fueled video creation tool called Google Vids. The tool will become part of the Google Workspace productivity suite when it’s released.

“I want to share something really entirely new. At Google Cloud Next, we’re unveiling Google Vids, a brand new, AI-powered video creation app for work,” said Aparna Pappu, VP and GM at Google Workspace, introducing the tool.

Image Credits: Frederic Lardinois/TechCrunch

The idea is to provide a video creation tool alongside other Workspace tools like Docs and Sheets with a similar ability to create and collaborate in the browser, except in this case, on video. “This is your video editing, writing and production assistant, all in one,” Pappu said. “We help transform the assets you already have — whether marketing copy or images or whatever else in your drive — into a compelling video.”

Like other Google Workspace tools, you can collaborate with colleagues in real time in the browser. “No need to email files back and forth. You and your team can work on the story at the same time with all the same access controls and security that we provide for all of Workspace,” she said.

Image Credits: Google Cloud

Examples of the kinds of videos people are creating with Google Vids include product pitches, training content or celebratory team videos. Like most generative AI tooling, Google Vids starts with a prompt. You enter a description of what you want the video to look like. You can then access files in your Google Drive or use stock content provided by Google and the AI goes to work, creating a storyboard of the video based on your ideas.

You can then reorder the different parts of the video, add transitions, select a template and insert an audio track where you record the audio or add a script and a preset voice will read it. Once you’re satisfied, you can generate the video. Along the way colleagues can comment or make changes, just as with any Google Workspace tool.

Google Vids is currently in limited testing. In June it will roll out to additional testers in Google Labs and will eventually be available for customers with Gemini for Workspace subscriptions.

Image Credits: Frederic Lardinois/TechCrunch



With Vertex AI Agent Builder, Google Cloud aims to simplify agent creation | TechCrunch


AI agents are the new hot craze in generative AI. Unlike the previous generation of chatbots, these agents can do more than simply answer questions. They can take actions based on the conversation, and even interact with back-end transactional systems to take actions in an automated manner.

On Tuesday at Google Cloud Next, the company introduced a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

The no-code product builds upon Google’s Vertex AI Search and Conversation product released previously. It’s also built on top of the company’s latest Gemini large language models and relies both on RAG APIs and vector search, two popular methods used industry-wide to reduce hallucinations, where models make up incorrect answers when they can’t find an accurate response.

Part of the way the company is improving the quality of the answers is through a process called “grounding,” where the answers are tied to something considered to be a reliable source. In this case, it’s relying on Google Search (which may or may not be accurate).

“We’re now bringing you grounding in Google Search, bringing the power of the world’s knowledge that Google Search offers through our grounding service to models. In addition, we also support the ability to ground against enterprise data sources,” Kurian said. The latter might be more suitable for enterprise customers.
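The grounding flow Kurian describes — retrieve a trusted source, then constrain the model’s answer to it — can be sketched in miniature. The bag-of-words retriever below is a stand-in for the vector search and RAG APIs the product actually uses; the function names and sample documents are illustrative, not Google’s API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term count (real systems use dense vectors)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground(question: str, docs: list[str]) -> str:
    """Retrieve the most relevant source and pin the prompt to it."""
    q = embed(question)
    best = max(docs, key=lambda d: cosine(q, embed(d)))
    return f"Answer using only this source:\n{best}\n\nQuestion: {question}"

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders ship within 2 business days.",
]
print(ground("What is the refund policy?", docs))
```

The point of the pattern is that the model’s answer is anchored to retrieved text (Google Search results, or enterprise data stores) rather than its parametric memory alone, which is what reduces hallucinations.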

Image Credits: Frederic Lardinois/TechCrunch

In a demo, the company used this capability to create an agent that analyzes previous marketing campaigns to understand a company’s brand style, and then apply that knowledge to help generate new ideas that are consistent with that style. The demo analyzed over 3,000 brand images, descriptions, videos and documents related to this fictional company’s products stored on Google Drive. It then helped generate pictures, captions and other content based on its understanding of the fictional company’s style.

Although you can build any type of agent, this particular example would put Google directly in competition with Adobe, which released its creative generative AI tool Firefly last year and GenStudio last month to help build content that doesn’t stray from the company’s style. The flexibility is there to build anything, but the question is whether you want to buy something off the shelf instead if it exists.

The new capabilities are already available, according to Google. The tool supports multiple languages and offers country-based API endpoints in the U.S. and EU.



Google's Gemini Pro 1.5 enters public preview on Vertex AI | TechCrunch


Gemini 1.5 Pro, Google’s most capable generative AI model, is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week.

Gemini 1.5 Pro launched in February, joining Google’s Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context it can process: from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”).

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. It’s about four times the amount of data that Anthropic’s flagship model, Claude 3, can take as input and about eight times as high as OpenAI’s GPT-4 Turbo max context.
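That arithmetic is easy to sanity-check with the common rule of thumb of roughly four characters per token — an assumption here, not an exact tokenizer, since real tokenizers vary by model and language:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using a chars-per-token heuristic."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(text: str, window: int = 1_000_000) -> bool:
    """Would this text fit in a context window of the given size?"""
    return estimate_tokens(text) <= window

# ~700,000 words at four letters each plus a space:
doc = "word " * 700_000
print(estimate_tokens(doc))            # ~875,000 tokens by this heuristic
print(fits_in_context(doc))            # within a 1M-token window
print(fits_in_context(doc, 128_000))   # far too large for a 128K window
```

In other words, a document on the order of 700,000 words lands comfortably inside a 1 million-token window but would overflow the older 128,000-token limit many times over.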

A model’s context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, email, essay or e-book.

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic. This isn’t necessarily so with models with large contexts. And, as an added upside, large-context models can better grasp the narrative flow of data they take in, generate contextually richer responses and reduce the need for fine-tuning and factual grounding — hypothetically, at least.

So what specifically can one do with a 1 million-token context window? Lots of things, Google promises, like analyzing a code library, “reasoning across” lengthy documents and holding long conversations with a chatbot.

Because Gemini 1.5 Pro is multilingual — and multimodal in the sense that it’s able to understand images and videos and, as of Tuesday, audio streams in addition to text — the model can also analyze and compare content in media like TV shows, movies, radio broadcasts, conference call recordings and more across different languages. One million tokens translates to about an hour of video or around 11 hours of audio.

Thanks to its audio-processing capabilities, Gemini 1.5 Pro can generate transcriptions for video clips, as well, although the jury’s out on the quality of those transcriptions.

In a prerecorded demo earlier this year, Google showed Gemini 1.5 Pro searching the transcript of the Apollo 11 moon landing telecast (which comes to about 400 pages) for quotes containing jokes, and then finding a scene in movie footage that looked similar to a pencil sketch.

Google says that early users of Gemini 1.5 Pro — including United Wholesale Mortgage, TBS and Replit — are leveraging the large context window for tasks spanning mortgage underwriting; automating metadata tagging on media archives; and generating, explaining and transforming code.

Gemini 1.5 Pro doesn’t process a million tokens at the snap of a finger. In the aforementioned demos, each search took between 20 seconds and a minute to complete — far longer than the average ChatGPT query.

Google previously said that latency is an area of focus, though, and that it’s working to “optimize” Gemini 1.5 Pro as time goes on.

Of note, Gemini 1.5 Pro is slowly making its way to other parts of Google’s corporate product ecosystem, with the company announcing Tuesday that the model (in private preview) will power new features in Code Assist, Google’s generative AI coding assistance tool. Developers can now perform “large-scale” changes across codebases, Google says, for example updating cross-file dependencies and reviewing large chunks of code.





Chrome Enterprise goes Premium with new security and management features | TechCrunch


At its Google Cloud Next conference in Las Vegas, Google on Tuesday extended its Chrome Enterprise product suite with the launch of Chrome Enterprise Premium.

Google has long offered an enterprise-centric version of its Chrome browser. With Chrome Enterprise, IT departments get the ability to manage employees’ browser settings, the extensions they install and web apps they use, for example. More importantly, though, they also get a number of new security controls around data loss prevention, malware protection, phishing prevention and the Zero Trust access to SaaS apps.

Chrome Enterprise Premium, which will cost $6/user/month, mostly extends the security capabilities of the existing service, based on the insight that browsers are now the endpoints where most of the high-value work inside a company is done.

“Authentication, access, communication and collaboration, administration, and even coding are all browser-based activities in the modern enterprise,” Parisa Tabriz, Google’s VP for Chrome, wrote in Tuesday’s announcement. “Endpoint security is growing more challenging due to remote work, reliance on an extended workforce, and the proliferation of new devices that aren’t part of an organization’s managed fleet. As these trends continue to accelerate and converge, it’s clear that the browser is a natural enforcement point for endpoint security in the modern enterprise.”

These new features include additional enterprise controls to enforce policies and manage software updates and extensions, as well as new security reporting features and forensic capabilities that can be integrated with third-party security tools. Chrome Enterprise Premium also takes Zero Trust a step further with context-aware access controls that can mitigate the risk of data leaks, covering both approved applications and those not sanctioned by the IT department.
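“Context-aware” means the access decision weighs signals about the request, not just the user’s identity. Here is a minimal sketch of that idea with made-up attributes and policy rules — Chrome Enterprise Premium’s actual signals and policy model are not public at this level of detail:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    user: str
    device_managed: bool      # is the device part of the org's managed fleet?
    location_trusted: bool    # e.g. corporate network or a known location
    app_sanctioned: bool      # approved by IT, vs. shadow IT

def access_decision(ctx: RequestContext) -> str:
    """Return 'allow', 'warn' (allow but flag for review), or 'deny'."""
    if not ctx.device_managed:
        return "deny"         # unmanaged endpoints are shut out entirely
    if not ctx.location_trusted:
        return "warn"         # allow, but log for forensic follow-up
    if not ctx.app_sanctioned:
        return "warn"         # unsanctioned app: apply extra DLP scrutiny
    return "allow"

print(access_decision(RequestContext("ana", True, True, True)))
print(access_decision(RequestContext("ana", False, True, True)))
```

The design point is that the same user gets different outcomes depending on device and network context, which is what distinguishes this from plain identity-based access control.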

“With Chrome Enterprise Premium, we have confidence in Google’s security expertise, including Project Zero’s cutting-edge security research and fast security patches. We set up data loss prevention restrictions and warnings for sharing sensitive information in applications like generative AI platforms and noticed a noteworthy 50% reduction in content transfers,” said Nick Reva, head of corporate security engineering at Snap.

The new service is now generally available.



Google looks to monetize AI with two new $10 Workspace add-ons | TechCrunch


The jury is still out on how much AI increases individual productivity, but companies are now trying to monetize it as they build more advanced AI features into their software. Last year, Microsoft made waves when it announced Copilot would add $30 per user per month to the price of an Office 365 subscription. But these features are costly to create and run, and customers will have to absorb some of that cost.

At Google Cloud Next, Google followed Microsoft’s monetization lead, announcing a pair of $10/month/user add-on packages for the Google Workspace productivity suite.

Image Credits: Frederic Lardinois/TechCrunch

For starters, the new AI meetings and messaging add-on takes notes for you, provides meeting summaries and translates content into 69 languages. “We’re adding 52 new languages — including Filipino, Korean, you name it — to translate for Meet. Now this brings the total number of languages we support to 69,” said Aparna Pappu, VP and GM at Google Workspace.

The company also introduced an AI security package, which helps admins keep Google Workspace content more secure, including the ability to classify and protect files with certain sensitive characteristics. The add-on can also help protect information that is supposed to be kept private and apply data loss prevention controls that can be tuned to the specific requirements of individual organizations.

Image Credits: Frederic Lardinois/TechCrunch

While $10 per user might feel a bit steep, especially given the price of Workspace without these additional features, the add-ons do appear to be in line with the cost of similar features from third-party services. For customers who may not want these add-ons for every user, Google says they can mix and match license types and apply the advanced features where they would be most useful.

It’s also worth noting that Google is planning additional enhancements for the meetings add-on, including generative AI custom backgrounds, which let users describe a background and have the AI create it, as well as studio-quality lighting, video and sound, which use AI to provide professional-quality meetings, among other features in the works. These will be coming in the next couple of months, according to the company.

The two add-ons are now available to Workspace subscribers.



Google releases Imagen 2, a video clip generator | TechCrunch


Google doesn’t have the best track record when it comes to image-generating AI.

In February, the image generator built into Gemini, Google’s AI-powered chatbot, was found to be randomly injecting gender and racial diversity into prompts about people, resulting in images of racially diverse Nazis, among other offensive inaccuracies.

Google pulled the generator, vowing to improve it and eventually re-release it. As we await its return, the company’s launching an enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform — albeit a tool with a decidedly more enterprise bent. Google announced Imagen 2 at its annual Cloud Next conference in Las Vegas.

Image Credits: Frederic Lardinois/TechCrunch

Imagen 2 — which is actually a family of models, launched in December after being previewed at Google’s I/O conference in May 2023 — can create and edit images given a text prompt, like OpenAI’s DALL-E and Midjourney. Of interest to corporate types, Imagen 2 can render text, emblems and logos in multiple languages, optionally overlaying those elements in existing images — for example, onto business cards, apparel and products.

After launching first in preview, image editing with Imagen 2 is now generally available in Vertex AI along with two new capabilities: inpainting and outpainting. Inpainting and outpainting, features other popular image generators such as DALL-E have offered for some time, can be used to remove unwanted parts of an image, add new components and expand the borders of an image to create a wider field of view.
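The difference between the two is easiest to see as mask geometry: inpainting regenerates a region inside the existing canvas, while outpainting grows the canvas so everything outside the original is newly generated. A toy illustration of those masks as boolean grids (this is just the geometry, not Imagen 2’s API):

```python
def make_grid(h: int, w: int, fill: bool = False) -> list[list[bool]]:
    """A h-by-w boolean mask; True marks pixels the model should generate."""
    return [[fill] * w for _ in range(h)]

# Inpainting: a 4x4 image with an interior 2x2 hole to regenerate.
inpaint = make_grid(4, 4)
for r in range(1, 3):
    for c in range(1, 3):
        inpaint[r][c] = True          # only this region is replaced

# Outpainting: grow the canvas to 8x8; the pasted original occupies
# rows/cols 2..5 and is kept, everything around it is generated.
outpaint = make_grid(8, 8, fill=True)
for r in range(2, 6):
    for c in range(2, 6):
        outpaint[r][c] = False        # original pixels stay untouched

def count(mask: list[list[bool]]) -> int:
    """Number of pixels marked for generation."""
    return sum(sum(row) for row in mask)

print(count(inpaint))    # 4 pixels regenerated inside the frame
print(count(outpaint))   # 48 pixels generated around the edges
```

Same mechanism, opposite masks: inpainting keeps the border and fills the middle, outpainting keeps the middle and fills the border.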

But the real meat of the Imagen 2 upgrade is what Google’s calling “text-to-live images.”

Imagen 2 can now create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. True to Imagen 2’s corporate focus, Google’s pitching live images as a tool for marketers and creatives, such as a GIF generator for ads showing nature, food and animals — subject matter that Imagen 2 was fine-tuned on.

Google says that live images can capture “a range of camera angles and motions” while “supporting consistency over the entire sequence.” But they’re in low resolution for now: 360 pixels by 640 pixels. Google’s pledging that this will improve in the future. 

To allay (or at least attempt to allay) concerns around the potential to create deepfakes, Google says that Imagen 2 will employ SynthID, an approach developed by Google DeepMind, to apply invisible, cryptographic watermarks to live images. Of course, detecting these watermarks — which Google claims are resilient to edits, including compression, filters and color tone adjustments — requires a Google-provided tool that’s not available to third parties.

And no doubt eager to avoid another generative media controversy, Google’s emphasizing that live image generations will be “filtered for safety.” A spokesperson told TechCrunch via email: “The Imagen 2 model in Vertex AI has not experienced the same issues as the Gemini app. We continue to test extensively and engage with our customers.”

Image Credits: Frederic Lardinois/TechCrunch

But generously assuming for a moment that Google’s watermarking tech, bias mitigations and filters are as effective as it claims, are live images even competitive with the video generation tools already out there?

Not really.

Runway can generate 18-second clips in much higher resolutions. Stability AI’s video clip tool, Stable Video Diffusion, offers greater customizability (in terms of frame rate). And OpenAI’s Sora — which, granted, isn’t commercially available yet — appears poised to blow away the competition with the photorealism it can achieve.

So what are the real technical advantages of live images? I’m not really sure. And I don’t think I’m being too harsh.

After all, Google is behind genuinely impressive video generation tech like Imagen Video and Phenaki. Phenaki, one of Google’s more interesting experiments in text-to-video, turns long, detailed prompts into two-minute-plus “movies” — with the caveat that the clips are low resolution, low frame rate and only somewhat coherent.

In light of recent reports suggesting that the generative AI revolution caught Google CEO Sundar Pichai off guard and that the company’s still struggling to maintain pace with rivals, it’s not surprising that a product like live images feels like an also-ran. But it’s disappointing nonetheless. I can’t help the feeling that there is — or was — a more impressive product lurking in Google’s skunkworks.

Models like Imagen are trained on an enormous number of examples usually sourced from public sites and datasets around the web. Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

I asked, as I always do around announcements pertaining to generative AI models, about the data that was used to train the updated Imagen 2, and whether creators whose work might’ve been swept up in the model training process will be able to opt out at some future point.

Google told me only that its models are trained “primarily” on public web data, drawn from “blog posts, media transcripts and public conversation forums.” Which blogs, transcripts and forums? It’s anyone’s guess.

A spokesperson pointed to Google’s web publisher controls that allow webmasters to prevent the company from scraping data, including photos and artwork, from their websites. But Google wouldn’t commit to releasing an opt-out tool or, alternatively, compensating creators for their (unknowing) contributions — a step that many of its competitors, including OpenAI, Stability AI and Adobe, have taken.
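The publisher controls the spokesperson referenced are implemented through robots.txt: sites can block the Google-Extended crawler token, which governs whether content is used to train Gemini models and for Vertex AI grounding, without affecting how the site appears in Search. A minimal sketch (the directives below are illustrative of the opt-out, not the full range of controls):

```txt
# robots.txt — opt out of Gemini / Vertex AI training
# while leaving normal Search crawling untouched
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

Note that this is opt-out at the site level, applied going forward; it doesn’t offer individual creators a way to remove work already swept into training data, which is the gap critics point to.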

Another point worth mentioning: Text-to-live images isn’t covered by Google’s generative AI indemnification policy, which protects Vertex AI customers from copyright claims related to Google’s use of training data and outputs of its generative AI models. That’s because text-to-live images is technically in preview; the policy only covers generative AI products in general availability (GA).

Regurgitation, or where a generative model spits out a mirror copy of an example (e.g., an image) that it was trained on, is rightly a concern for corporate customers. Studies both informal and academic have shown that the first-gen Imagen wasn’t immune to this, spitting out identifiable photos of people, artists’ copyrighted works and more when prompted in particular ways.

Barring controversies, technical issues or some other major unforeseen setbacks, text-to-live images will enter GA somewhere down the line. But with live images as it exists today, Google’s basically saying: use at your own risk.



Google injects generative AI into its cloud security tools | TechCrunch


At its annual Cloud Next conference in Las Vegas, Google on Tuesday introduced new cloud-based security products and services — in addition to updates to existing products and services — aimed at customers managing large, multi-tenant corporate networks.

Many of the announcements had to do with Gemini, Google’s flagship family of generative AI models.

For example, Google unveiled Gemini in Threat Intelligence, a new Gemini-powered component of the company’s Mandiant cybersecurity platform. Now in public preview, Gemini in Threat Intelligence can analyze large portions of potentially malicious code and let users perform natural language searches for ongoing threats or indicators of compromise, as well as summarize open source intelligence reports from around the web.

“Gemini in Threat Intelligence now offers conversational search across Mandiant’s vast and growing repository of threat intelligence directly from frontline investigations,” Sunil Potti, GM of cloud security at Google, wrote in a blog post shared with TechCrunch. “Gemini will navigate users to the most relevant pages in the integrated platform for deeper investigation … Plus, [Google’s malware detection service] VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform.”

Elsewhere, Gemini can now assist with cybersecurity investigations in Chronicle, Google’s cybersecurity telemetry offering for cloud customers. Set to roll out by the end of the month, the new capability guides security analysts through their typical workflows, recommending actions based on the context of a security investigation, summarizing security event data and creating breach and exploit detection rules from a chatbot-like interface.

And in Security Command Center, Google’s enterprise cybersecurity and risk management suite, a new Gemini-driven feature lets security teams search for threats using natural language while providing summaries of misconfigurations, vulnerabilities and possible attack paths.

Rounding out the security updates was privileged access manager (in preview), a service that offers just-in-time, time-bound and approval-based access options designed to help mitigate risks tied to privileged access misuse. Google’s also rolling out principal access boundary (also in preview), which lets admins implement restrictions on network root-level users so that those users can only access authorized resources within a specifically defined boundary.

Lastly, Autokey (in preview) aims to simplify creating and managing customer encryption keys for high-security use cases, while Audit Manager (also in preview) provides tools for Google Cloud customers in regulated industries to generate proof of compliance for their workloads and cloud-hosted data.

“Generative AI offers tremendous potential to tip the balance in favor of defenders,” Potti wrote in the blog post. “And we continue to infuse AI-driven capabilities into our products.”

Google isn’t the only company attempting to productize generative AI–powered security tooling. Microsoft last year launched a set of services that leverage generative AI to correlate data on attacks while prioritizing cybersecurity incidents. Startups, including Aim Security, are also jumping into the fray, aiming to corner the nascent space.

But with generative AI’s tendency to make mistakes, it remains to be seen whether these tools have staying power.

