From Digital Age to Nano Age. WorldWide.


Robotic Automations

Google Cloud Next 2024: Everything announced so far | TechCrunch


Google’s Cloud Next 2024 event takes place in Las Vegas through Thursday, and that means lots of new cloud-focused news on everything from Gemini, Google’s AI-powered chatbot, to devops and security. Last year’s event was the first in-person Cloud Next since 2019, and Google took to the stage to show off its ongoing dedication to AI with Duet AI for Gmail and many other debuts, including an expansion of generative AI to its security product line and other enterprise-focused updates.

Don’t have time to watch the full archive of Google’s keynote event? That’s OK; we’ve summed up the most important parts of the event below, with additional details from the TechCrunch team on the ground at the event. And Tuesday’s updates weren’t the only things Google made available to non-attendees — Wednesday’s developer-focused stream started at 10:30 a.m. PT.

Google Vids

Leveraging AI to help customers develop creative content is something every Big Tech company is chasing, and on Tuesday, Google introduced its version. Google Vids, a new AI-fueled video creation tool, is the latest feature added to Google Workspace.

Here’s how it works: Google says users can make videos alongside other Workspace tools like Docs and Sheets, with editing, writing and production all in one place. You can also collaborate with colleagues in real time within Google Vids. Read more

Gemini Code Assist

After reading about Google’s new Gemini Code Assist, an enterprise-focused AI code completion and assistance tool, you may be asking yourself if that sounds familiar. And you would be correct. TechCrunch Senior Editor Frederic Lardinois writes that “Google previously offered a similar service under the now-defunct Duet AI branding.” Then Gemini came along. Code Assist is a direct competitor to GitHub’s Copilot Enterprise. Here’s why

And to put Gemini Code Assist into context, Alex Wilhelm breaks down its competition with Copilot, and its potential risks and benefits to developers, in the latest TechCrunch Minute episode.

Google Workspace

Image Credits: Google

Among the new features are voice prompts to kick off the AI-based “Help me write” feature in Gmail while on the go. Another Gmail addition instantly turns rough email drafts into more polished messages. Over on Sheets, you can send out a customizable alert when a certain field changes, while a new set of templates makes starting a new spreadsheet easier. For Docs lovers, there is now support for tabs, which, according to the company, let you “organize information in a single document instead of linking to multiple documents or searching through Drive.” Of course, subscribers get the goodies first. Read more

Google also seems to have plans to monetize two of its new AI features for the Google Workspace productivity suite as $10-per-user-per-month add-on packages. One is the new AI meetings and messaging add-on that takes notes for you, provides meeting summaries and translates content into 69 languages. The other is a new AI security package, which helps admins keep Google Workspace content more secure. Read more

Imagen 2

In February, Google announced an image generator built into Gemini, Google’s AI-powered chatbot. The company pulled it shortly after it was found to be randomly injecting gender and racial diversity into prompts about people, resulting in some offensive inaccuracies. While we waited for an eventual re-release, Google came out with the enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform, with more of a focus on enterprise. Imagen 2 is now generally available and comes with some fun new capabilities, including inpainting and outpainting. There’s also what Google is calling “text-to-live images,” which lets you create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. Read more

Vertex AI Agent Builder

We can all use a little bit of help, right? Meet Google’s Vertex AI Agent Builder, a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

To do this, the company uses a process called “grounding,” where the answers are tied to something considered to be a reliable source. In this case, it’s relying on Google Search (which in reality may or may not be accurate). Read more

Gemini comes to databases

Google calls Gemini in Databases a collection of features that “simplify all aspects of the database journey.” In less jargony language, it’s a bundle of AI-powered, developer-focused tools for Google Cloud customers who are creating, monitoring and migrating app databases. Read more

Google renews its focus on data sovereignty

Image Credits: MirageC / Getty Images

Google has offered sovereign cloud options before, but it is now focusing more on partnerships rather than building them out on its own. Read more

Security tools get some AI love

Image Credits: Getty Images

Google jumps aboard the trend of productizing generative AI-powered security tools with a number of new products and features aimed at large companies. Those include Threat Intelligence, which can analyze large portions of potentially malicious code and lets users perform natural language searches for ongoing threats or indicators of compromise. Another is Chronicle, Google’s cybersecurity telemetry offering that helps cloud customers with cybersecurity investigations. The third is the enterprise cybersecurity and risk management suite Security Command Center. Read more

Nvidia’s Blackwell platform

One of the most anticipated announcements was that Nvidia’s next-generation Blackwell platform is coming to Google Cloud in early 2025. Yes, that seems far away. Here’s what to look forward to: support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training. Oh, and we can reveal that the GB200 servers will be liquid-cooled. Read more

Chrome Enterprise Premium

Meanwhile, Google is expanding its Chrome Enterprise product suite with the launch of Chrome Enterprise Premium. What’s new here mainly pertains to the security capabilities of the existing service, based on the insight that browsers are now the endpoints where most of a company’s high-value work gets done. Read more

Gemini 1.5 Pro

Image Credits: Google

Everyone can use a “half” every now and again, and Google obliges with Gemini 1.5 Pro. This, Kyle Wiggers writes, is “Google’s most capable generative AI model,” and it is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. Here’s what you get for that half: the amount of context it can process jumps from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”). Read more

Open source tools

Image Credits: Getty Images

At Google Cloud Next 2024, the company debuted a number of open source tools primarily aimed at supporting generative AI projects and infrastructure. One is MaxDiffusion, a collection of reference implementations of various diffusion models that run on XLA, or Accelerated Linear Algebra, devices. Then there is JetStream, a new engine to run generative AI models. The third is MaxText, a collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. Read more

Axion

Image Credits: Google

We don’t know a lot about this one; however, here is what we do know: Google Cloud joins AWS and Azure in announcing its first custom-built Arm processor, dubbed Axion. Frederic Lardinois writes that “based on Arm’s Neoverse V2 designs, Google says its Axion instances offer 30% better performance than other Arm-based instances from competitors like AWS and Microsoft and up to 50% better performance and 60% better energy efficiency than comparable X86-based instances.” Read more

The entire Google Cloud Next keynote

If all of that isn’t enough of an AI and cloud update deluge, you can watch the entire event keynote via the embed below.

Google Cloud Next’s developer keynote

On Wednesday, Google held a separate keynote for developers, offering a deeper dive into the ins and outs of a number of tools outlined during the Tuesday keynote, including Gemini Cloud Assist, using AI for product recommendations and chat agents, and ending with a showcase from Hugging Face. You can check out the full keynote below.


Software Development in Sri Lanka


Google brings AI-powered editing tools, like Magic Editor, to all Google Photos users for free | TechCrunch


Google Photos is getting an AI upgrade. On Wednesday, the tech giant announced that a handful of enhanced editing features previously limited to Pixel devices and paid subscribers — including its AI-powered Magic Editor — will now make their way to all Google Photos users for free. This expansion also includes Google’s Magic Eraser, which removes unwanted items from photos; Photo Unblur, which uses machine learning to sharpen blurry photos; Portrait Light, which lets you change the light source on photos after the fact, and others.

The editing tools have historically been a selling point for Google’s high-end devices, the Pixel phones, as well as a draw for Google’s cloud storage subscription product, Google One. But with the growing number of AI-powered editing tools flooding the market, Google has decided to make its set of AI photo editing features available to more people for free.

Image Credits: Google

There are some caveats to this expansion, however.

For starters, the tools will only start rolling out on May 15, and it will take weeks for them to reach all Google Photos users.

In addition, there are some hardware device requirements to be able to use them. On ChromeOS, for instance, the device must be a Chromebook Plus with ChromeOS version 118+ or have at least 3GB RAM. On mobile, the device must run Android 8.0 or higher or iOS 15 or higher.
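Taken together, those eligibility rules read like a small decision table. Here is a minimal Python sketch encoding only the thresholds stated above; the function and platform names are illustrative, not any real Google API:

```python
# Hypothetical sketch of the rollout's stated device requirements.
# Thresholds come from the article; everything else is illustrative.

def eligible(platform: str, os_version: int, ram_gb: float = 0) -> bool:
    """Return True if a device meets the minimum requirements
    described for the free Google Photos AI editing tools."""
    if platform == "chromeos":
        # Chromebook Plus with ChromeOS 118+, or at least 3 GB RAM
        return os_version >= 118 or ram_gb >= 3
    if platform == "android":
        return os_version >= 8   # Android 8.0 or higher
    if platform == "ios":
        return os_version >= 15  # iOS 15 or higher
    return False
```

So a 2 GB ChromeOS device stuck on version 117 would miss the cutoff, while any iPhone running iOS 15 or later qualifies.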

The company notes that Pixel tablets will now be supported, as well.

Magic Editor is the most notable feature of the group. Introduced last year with the launch of the Pixel 8 and Pixel 8 Pro, this editing tool uses generative AI to do more complicated photo edits — like filling in gaps in a photo, repositioning the subject and other edits to the foreground or background of a photo. With Magic Editor, you can change a gray sky to blue, remove people from the background of a photo, recenter the photo subject while filling in gaps, remove other clutter and more.

Previously, these kinds of edits would require Magic Eraser and other professional editing tools, like Photoshop, to get the same effect. And those edits would be more manual, not automated via AI.

Image Credits: Google

With the expansion, Magic Editor will come to all Pixel devices, while iOS and Android users (whose phones meet the requirements) will get 10 Magic Editor saves per month. To go beyond that, they’ll still need to buy a Premium Google One plan — meaning 2TB of storage and above.

The other tools will be available to all Google Photos users, with no Google One subscription required. The full set of features that will become available includes Magic Eraser, Photo Unblur, Sky suggestions, Color pop, HDR effect for photos and videos, Portrait Blur, Portrait Light (plus the add light/balance light features in the tool), Cinematic Photos, Styles in the Collage Editor and Video Effects.

Other features like the AI-powered Best Take — which merges similar photos to create a single best shot where everyone is smiling — will continue to be available only on the Pixel 8 and 8 Pro.



Watch: Google's Gemini Code Assist wants to use AI to help developers


Can AI eat the jobs of the developers who are busy building AI models? The short answer is no, but the longer answer is not yet settled. News this week that Google has a new AI-powered coding tool for developers, straight from the company’s Google Cloud Next 2024 event in Las Vegas, means that competitive pressure among major tech companies to build the best service for helping coders write more code, more quickly, is still heating up.

Microsoft’s GitHub Copilot, a service with similar outlines, has been steadily working toward enterprise adoption. Both companies want to eventually build developer-helping tech that can understand a company’s codebase, allowing it to offer up more tailored suggestions and tips.

Startups are in the fight as well, though they tend to focus on more tailored solutions than the broader offerings from the largest tech companies; Pythagora, Tusk and Ellipsis from the most recent Y Combinator batch are working on app creation from user prompts, AI agents for bug-squashing and turning GitHub comments into code, respectively.

Everywhere you look, developers are building tools and services to help their own professional cohort.

Developers learning to code today won’t know a world in which they don’t have AI-powered coding help. Call it the graphing calculator era for software builders. But the risk — or the worry, I suppose — is that, in time, the AI tools ingesting mountains of code to get smarter will eventually be able to do enough that fewer humans are needed to write code for companies. And if a company can spend less money and employ fewer people, it will; no job is safe, but some roles are just more difficult to replace at any given moment.

Thankfully, given the complexities of modern software services, ever-present tech debt and an infinite number of edge cases, what big tech and startups are busy building today seem to be very useful coding aids and not something ready to replace or even reduce the number of humans building them. For now. I wouldn’t take the other end of that bet on a multi-decade time frame.

And for those looking for an even deeper dive into what Google revealed this week, you can head here for our complete rundown, including details on exactly how Gemini Code Assist works, and Google’s in-depth developer walkthrough from Cloud Next 2024.



Google fires 28 employees after sit-in protest over controversial Project Nimbus contract with Israel | TechCrunch


Google has terminated the employment of 28 employees following a prolonged sit-in protest at the company’s Sunnyvale and New York offices.

The protests were in response to Project Nimbus, a $1.2 billion cloud computing contract inked by Google and Amazon with the Israeli government and its military three years ago. The controversial project, which also reportedly includes the provision of advanced artificial intelligence and machine learning technology, allegedly has strict contractual stipulations that prevent Google and Amazon from bowing to boycott pressure — this effectively means that they must continue providing services to Israel no matter what.

Conflict

There have been countless protests and much public chastising from within the companies’ ranks since 2021, but with the heightening Israel-Palestine conflict in the wake of last October’s attacks by Hamas, the anger is spilling further into the workforces of corporations deemed not only to be helping Israel, but actively profiteering from the conflict.

While the latest rallies included demonstrations outside Google’s Sunnyvale and New York offices, as well as Amazon’s Seattle HQ, protestors went one step further by going inside the buildings, including the office of Google Cloud CEO Thomas Kurian.

In a statement issued to TechCrunch via the anti-Big Tech advocacy firm Justice Speaks, Hasan Ibraheem, a Google software engineer participating in the New York City sit-in protest, said that by providing cloud and AI infrastructure to the Israeli military, Google is “directly implicated in the genocide of the Palestinian people.”

“It’s my responsibility to do everything I can to end this contract even while Google pretends nothing is wrong,” Ibraheem said. “The idea of working for a company that directly provides infrastructure for genocide makes me sick. We’ve tried sending petitions to leadership but they’ve gone ignored. We will make sure they can’t ignore us anymore. We will make as much noise as possible. So many workers don’t know that Google has this contract with the IOF [Israel Offensive Forces]. So many don’t know that their colleagues have been facing harassment for being Muslim, Palestinian and Arab and speaking out. So many people don’t realize how complicit their own company is. It’s our job to make sure they do.”

Nine Google workers were also arrested and forcibly removed from the company’s offices, four of them in New York and five in Sunnyvale. A separate statement issued by Justice Speaks on behalf of the so-called “Nimbus nine” protestors said that they had demanded to speak with Kurian, a request that went unmet.

The statement reads in full:

Last night, Google made the decision to arrest us, the company’s own workers — instead of engaging with our concerns about Project Nimbus, the company’s $1.2 billion cloud computing contract with Israel. Those of us sitting in Thomas Kurian’s office repeatedly requested to speak with the Google Cloud CEO, but our requests were denied. Throughout the past three years, since the contract’s signing, we have repeatedly attempted to engage with Google executives about Project Nimbus through company channels, including town halls, forums, petitions signed by over a thousand workers, and direct outreach from concerned workers.

Google executives have ignored our concerns about our ethical responsibility for the impact of our technology as well as the damage to our workplace health and safety caused by this contract, and the company’s internal environment of retaliation, harassment, and bullying. At every turn, instead, Google is repressing speech inside the company, and condoning harassment, intimidation, bullying, silencing, and censorship of Palestinian, Arab, and Muslim Googlers.

Workers have the right to know how their labor is being used, and to have a say in ensuring the technology they build is not used for harm. Workers also have the right to go to work without fear, anxiety, and stress due to the potential that their labor is being used to power a genocide. Google is depriving us of these basic rights, which is what led us to sit-in at offices across the country yesterday.

Meanwhile, Google continues to lie to its workers, the media, and the public. Google continues to claim, as of yesterday, that Project Nimbus is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” Yet, reporting from TIME Magazine proves otherwise. Google has built custom tools for Israel’s Ministry of Defense, and has doubled down on contracting with the Israeli Occupational Forces, Israel’s military, since the start of its genocide against Palestinians in Gaza. By continuing its lies, Google is disrespecting and disregarding consumers, the media, as well as, most importantly, us—its workers.

We will not stay silent in light of Google’s bare-faced lies. Hundreds and thousands of Google workers have joined No Tech for Apartheid’s call for the company to Drop Project Nimbus. Despite Google’s attempts to silence us and disregard our concerns, we will persist. We will continue to organize and fight until Google drops Project Nimbus and stops aiding and abetting Israel’s genocide and apartheid state in Palestine.

A Google spokesperson confirmed to TechCrunch that 28 employees were fired, and that it will “continue to investigate and take action” if needed.

“These protests were part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” the spokesperson said. “A small number of employee protesters entered and disrupted a few of our locations. Physically impeding other employees’ work and preventing them from accessing our facilities is a clear violation of our policies, and completely unacceptable behavior. After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety.”



Google Cloud Next 2024: Watch the keynote on Gemini AI, enterprise reveals right here | TechCrunch


It’s time for Google’s annual look up to the cloud, this time with a big dose of AI.

At 9 a.m. PT Tuesday, Google Cloud CEO Thomas Kurian kicked off the opening keynote for this year’s Google Cloud Next event, and you can watch the archive of its reveals above, or right here.

After this week we’ll know more about Google’s attempts to help the enterprise enter the age of AI. From a deeper dive into Gemini, the company’s AI-powered chatbot, to securing AI products and implementing generative AI into cloud applications, Google will continue to cover a wide range of topics.

We’re also keeping tabs on everything Google’s announcing at Cloud Next 2024, from Google Vids to Gemini Code Assist to Google Workspace updates.

And for those more interested in Google’s details and reveals for developers, its Developer Keynote started at 11:30 a.m. PT Wednesday, and you can catch up on the full stream right here or via the embed below.



New Google Vids product helps create a customized video with an AI assist | TechCrunch


All of the major vendors have been looking at ways to use AI to help customers develop creative content. On Tuesday at the Google Cloud Next customer conference in Las Vegas, Google introduced a new AI-fueled video creation tool called Google Vids. The tool will become part of the Google Workspace productivity suite when it’s released.

“I want to share something really entirely new. At Google Cloud Next, we’re unveiling Google Vids, a brand new, AI-powered video creation app for work,” said Aparna Pappu, VP and GM of Google Workspace, introducing the tool.

Image Credits: Frederic Lardinois/TechCrunch

The idea is to provide a video creation tool alongside other Workspace tools like Docs and Sheets with a similar ability to create and collaborate in the browser, except in this case, on video. “This is your video editing, writing and production assistant, all in one,” Pappu said. “We help transform the assets you already have — whether marketing copy or images or whatever else in your drive — into a compelling video.”

Like other Google Workspace tools, you can collaborate with colleagues in real time in the browser. “No need to email files back and forth. You and your team can work on the story at the same time with all the same access controls and security that we provide for all of Workspace,” she said.

Image Credits: Google Cloud

Examples of the kinds of videos people are creating with Google Vids include product pitches, training content or celebratory team videos. Like most generative AI tooling, Google Vids starts with a prompt. You enter a description of what you want the video to look like. You can then access files in your Google Drive or use stock content provided by Google and the AI goes to work, creating a storyboard of the video based on your ideas.

You can then reorder the different parts of the video, add transitions, select a template and insert an audio track where you record the audio or add a script and a preset voice will read it. Once you’re satisfied, you can generate the video. Along the way colleagues can comment or make changes, just as with any Google Workspace tool.

Google Vids is currently in limited testing. In June it will roll out to additional testers in Google Labs and will eventually be available for customers with Gemini for Workspace subscriptions.

Image Credits: Frederic Lardinois/TechCrunch



Alphabet X’s Bellwether harnesses AI to help predict natural disasters | TechCrunch


The world is on fire. Quite literally, much of the time. Predicting such disasters before they get out of hand — or better yet, before they happen — will be key to maintaining a reasonable quality of life for the coming century. It’s a big, global issue. It’s also one Alphabet believes it can help tackle.

The Google parent’s moonshot factory X this week officially unveiled Project Bellwether, its latest bid to apply technology to some of our biggest problems. Here that means using AI tools to identify natural disasters like wildfires and flooding as quickly as possible. If implemented correctly, it could be a game-changer for first responders.

“Bellwether is X’s moonshot to understand and anticipate changes across the planet, so that any organization, community, or business can ask smarter and more timely questions about the natural and built environment,” project head Sarah Russell says in a social media post. “Until now, it’s been epically hard and expensive to apply AI to geospatial questions, but our team has harnessed some of the most recent advances in machine learning (plus some straight-up solid engineering) to re-think the whole endeavor.”

Project Bellwether’s coming-out party coincides with news that the United States National Guard’s Defense Innovation Unit (DIU) will be utilizing the organization’s “prediction engine.” According to the teams, current technology has the potential to delay response times by hours or days, causing untold damage to human life and property.

“Right now, our analysts have to spend time sorting through images to find the ones that cover the areas most affected by natural disasters,” the Guard’s Col. Brian McGarry notes. “They then have to correlate those images to surrounding infrastructure, label all the relevant features, and only then can highlight the significant damage and send it forward to first responder teams.”

The Bellwether team has thus far produced two tools. The first is designed to forecast the risk of wildfire “up to five years into the future.” The second is a response tool that helps first responders “identify critical infrastructure” in the wake of natural disaster or extreme weather.

Google has been exploring the use of machine learning models and AI to predict natural disasters for some time now. Project Bellwether’s partnership with the National Guard could well prove an important validation of that work.



With Vertex AI Agent Builder, Google Cloud aims to simplify agent creation | TechCrunch


AI agents are the new hot craze in generative AI. Unlike the previous generation of chatbots, these agents can do more than simply answer questions. They can take actions based on the conversation, and even interact with back-end transactional systems to take actions in an automated manner.

On Tuesday at Google Cloud Next, the company introduced a new tool to help companies build AI agents.

“Vertex AI Agent Builder allows people to very easily and quickly build conversational agents,” Google Cloud CEO Thomas Kurian said. “You can build and deploy production-ready, generative AI-powered conversational agents and instruct and guide them the same way that you do humans to improve the quality and correctness of answers from models.”

The no-code product builds upon Google’s Vertex AI Search and Conversation product released previously. It’s also built on top of the company’s latest Gemini large language models and relies both on RAG APIs and vector search, two popular methods used industry-wide to reduce hallucinations, where models make up incorrect answers when they can’t find an accurate response.

Part of the way the company is improving the quality of the answers is through a process called “grounding,” where the answers are tied to something considered to be a reliable source. In this case, it’s relying on Google Search (which in reality may or may not be accurate).
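Conceptually, grounding boils down to retrieving a passage from a trusted corpus and binding the model’s answer to it. Here is a deliberately simplified Python sketch, with plain word overlap standing in for the embedding-based vector search a real system like Vertex AI would use; all names here are hypothetical:

```python
# Toy "grounding" sketch: pick the most relevant passage from a
# trusted corpus and attach it to the prompt so the model's answer
# is tied to a citable source. Word overlap is a stand-in for
# real vector similarity search.

def retrieve(query: str, corpus: dict[str, str]) -> tuple[str, str]:
    """Return (source_id, passage) with the highest word overlap."""
    q = set(query.lower().split())
    def score(text: str) -> int:
        return len(q & set(text.lower().split()))
    best = max(corpus, key=lambda k: score(corpus[k]))
    return best, corpus[best]

def grounded_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble a prompt that pins the answer to the retrieved source."""
    source_id, passage = retrieve(query, corpus)
    return (f"Answer using ONLY this source [{source_id}]:\n"
            f"{passage}\n\nQuestion: {query}")
```

Swapping the corpus between a web index and an internal document store is how the same mechanism serves both the Google Search grounding and the enterprise-data grounding Kurian describes.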

“We’re now bringing you grounding in Google Search, bringing the power of the world’s knowledge that Google Search offers through our grounding service to models. In addition, we also support the ability to ground against enterprise data sources,” Kurian said. The latter might be more suitable for enterprise customers.

Image Credits: Frederic Lardinois/TechCrunch

In a demo, the company used this capability to create an agent that analyzes previous marketing campaigns to understand a company’s brand style, and then apply that knowledge to help generate new ideas that are consistent with that style. The demo analyzed over 3,000 brand images, descriptions, videos and documents related to this fictional company’s products stored on Google Drive. It then helped generate pictures, captions and other content based on its understanding of the fictional company’s style.

Although you can build any type of agent, this particular example would put Google directly in competition with Adobe, which released its creative generative AI tool Firefly last year and GenStudio last month to help build content that doesn’t stray from the company’s style. The flexibility is there to build anything, but the question is whether you’d rather buy something off the shelf if it already exists.

The new capabilities are already available, according to Google. It supports multiple languages and offers country-based API endpoints in the U.S. and EU.



Google's Gemini Pro 1.5 enters public preview on Vertex AI | TechCrunch


Gemini 1.5 Pro, Google’s most capable generative AI model, is now available in public preview on Vertex AI, Google’s enterprise-focused AI development platform. The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week.

Gemini 1.5 Pro launched in February, joining Google’s Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context that it can process: from 128,000 tokens up to 1 million tokens, where “tokens” refers to subdivided bits of raw data (like the syllables “fan,” “tas” and “tic” in the word “fantastic”).

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. That’s about four times the amount of data that Anthropic’s flagship model, Claude 3, can take as input and about eight times the maximum context of OpenAI’s GPT-4 Turbo.
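Those equivalences imply rough per-token rates you can sanity-check yourself. A quick Python sketch using only the ratios quoted above (back-of-the-envelope assumptions, not an official tokenizer spec):

```python
# Rough conversions derived from the article's figures:
# 1,000,000 tokens ~ 700,000 words ~ 30,000 lines of code.

WORDS_PER_TOKEN = 700_000 / 1_000_000        # ~0.7 words per token
CODE_LINES_PER_TOKEN = 30_000 / 1_000_000    # ~0.03 lines per token

def tokens_to_words(tokens: int) -> int:
    """Approximate word count a given token budget can hold."""
    return round(tokens * WORDS_PER_TOKEN)

def tokens_to_code_lines(tokens: int) -> int:
    """Approximate lines of code a given token budget can hold."""
    return round(tokens * CODE_LINES_PER_TOKEN)
```

By these rates, the older 128,000-token window held roughly 89,600 words or about 3,840 lines of code, which makes the jump to 1 million tokens easy to appreciate.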

A model’s context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, email, essay or e-book.

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic. This isn’t necessarily so with models with large contexts. And, as an added upside, large-context models can better grasp the narrative flow of data they take in, generate contextually richer responses and reduce the need for fine-tuning and factual grounding — hypothetically, at least.

So what specifically can one do with a 1 million-token context window? Lots of things, Google promises, like analyzing a code library, “reasoning across” lengthy documents and holding long conversations with a chatbot.

Because Gemini 1.5 Pro is multilingual — and multimodal in the sense that it’s able to understand images and videos and, as of Tuesday, audio streams in addition to text — the model can also analyze and compare content in media like TV shows, movies, radio broadcasts, conference call recordings and more across different languages. One million tokens translates to about an hour of video or around 11 hours of audio.
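Those media figures imply a per-second token cost that can be budgeted against before submitting a recording. This back-of-envelope calculator simply restates the article’s numbers and is not an official rate card:

```python
# Derived from the figures above: 1M tokens ~ 1 hour of video or ~11 hours of audio.
TOKENS_PER_SECOND = {
    "video": 1_000_000 / 3_600,         # ~278 tokens per second
    "audio": 1_000_000 / (11 * 3_600),  # ~25 tokens per second
}

def media_tokens(kind: str, seconds: float) -> int:
    """Estimate how many tokens a clip of `kind` lasting `seconds` consumes."""
    return round(TOKENS_PER_SECOND[kind] * seconds)

print(media_tokens("audio", 3_600))  # one hour of audio -> ~90909 tokens
```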

Thanks to its audio-processing capabilities, Gemini 1.5 Pro can generate transcriptions for video clips, as well, although the jury’s out on the quality of those transcriptions.

In a prerecorded demo earlier this year, Google showed Gemini 1.5 Pro searching the transcript of the Apollo 11 moon landing telecast (which comes to about 400 pages) for quotes containing jokes, and then finding a scene in movie footage that looked similar to a pencil sketch.

Google says that early users of Gemini 1.5 Pro — including United Wholesale Mortgage, TBS and Replit — are leveraging the large context window for tasks spanning mortgage underwriting; automating metadata tagging on media archives; and generating, explaining and transforming code.

Gemini 1.5 Pro doesn’t process a million tokens at the snap of a finger. In the aforementioned demos, each search took between 20 seconds and a minute to complete — far longer than the average ChatGPT query.

Google previously said that latency is an area of focus, though, and that it’s working to “optimize” Gemini 1.5 Pro as time goes on.

Of note, Gemini 1.5 Pro is slowly making its way to other parts of Google’s corporate product ecosystem, with the company announcing Tuesday that the model (in private preview) will power new features in Code Assist, Google’s generative AI coding assistance tool. Developers can now perform “large-scale” changes across codebases, Google says, for example updating cross-file dependencies and reviewing large chunks of code.





Chrome Enterprise goes Premium with new security and management features | TechCrunch


At its Google Cloud Next conference in Las Vegas, Google on Tuesday extended its Chrome Enterprise product suite with the launch of Chrome Enterprise Premium.

Google has long offered an enterprise-centric version of its Chrome browser. With Chrome Enterprise, IT departments get the ability to manage employees’ browser settings, the extensions they install and the web apps they use, for example. More importantly, though, they also get a number of security controls around data loss prevention, malware protection, phishing prevention and Zero Trust access to SaaS apps.

Chrome Enterprise Premium, which will cost $6/user/month, mostly extends the security capabilities of the existing service, based on the insight that browsers are now the endpoints where most of the high-value work inside a company is done.

“Authentication, access, communication and collaboration, administration, and even coding are all browser-based activities in the modern enterprise,” Parisa Tabriz, Google’s VP for Chrome, wrote in Tuesday’s announcement. “Endpoint security is growing more challenging due to remote work, reliance on an extended workforce, and the proliferation of new devices that aren’t part of an organization’s managed fleet. As these trends continue to accelerate and converge, it’s clear that the browser is a natural enforcement point for endpoint security in the modern enterprise.”

These new features include additional enterprise controls to enforce policies and manage software updates and extensions, as well as new security reporting features and forensic capabilities that can be integrated with third-party security tools. Chrome Enterprise Premium takes Zero Trust a step further with context-aware access controls that can also mitigate the risk of data leaks. These controls apply both to applications approved by the IT department and to those that were not.

“With Chrome Enterprise Premium, we have confidence in Google’s security expertise, including Project Zero’s cutting-edge security research and fast security patches. We set up data loss prevention restrictions and warnings for sharing sensitive information in applications like generative AI platforms and noticed a noteworthy 50% reduction in content transfers,” said Nick Reva, head of corporate security engineering at Snap.

The new service is now generally available.

