Mistral releases Codestral, its first generative AI model for code | TechCrunch


Mistral, the French AI startup backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. Codestral, like other code-generating models, is designed to help developers write and interact with code. It was trained on over 80 programming languages, including Python, Java, C++ and JavaScript, explains Mistral […]


U.K. agency releases tools to test AI model safety | TechCrunch


The U.K.’s AI Safety Institute, the country’s recently established AI safety body, has released a toolset designed to “strengthen AI safety” by making it easier for industry, research organizations and academia to develop AI evaluations. Called Inspect, the toolset — which is available under an open source MIT License — aims to assess certain […]


Airbnb releases group booking features as it taps into AI for customer service | TechCrunch


Airbnb’s summer release is usually a grand affair, with tons of updates for guests and a few for hosts. This time, however, the company is introducing just a few updates for group booking, along with a new category called Icons: experiences hosted by celebrated names in music, film, TV and sports.

Group booking features, which are probably the only updates that will reach all users, give groups shared wishlists, a messages tab for chatting with the host, and trip invitations that carry the property’s details.

Users can now invite friends or family to a wishlist through contacts on their phone or a link. Group members can add properties to a list, leave notes about a property, or vote on them to decide on the booking.

Image Credits: Airbnb

After the primary member books the property, they invite the rest of the group to the trip once more, this time with a postcard. The invitation card includes details like the address, check-in instructions, and the Wi-Fi password.

Airbnb is also introducing a new message tab where all travelers can chat with the host and react to messages. Hosts can use AI-powered suggestions to reply to questions, such as sending the guests the property’s checkout guide.

Image Credits: Airbnb

Nathan Blecharczyk, Airbnb’s co-founder and chief strategy officer, said that the company built group features because more than 80% of trips on the platform involve more than one person. Plus, there is a lot of back and forth between group members when making decisions.

Apart from group features, Airbnb is releasing a new earnings dashboard for hosts with better insights, plus the experience category called Icons, which features properties like the X-Men mansion, the Ferrari museum, Prince’s Purple Rain House, and living room sessions with Doja Cat. These experiences will be available for a limited time, and users will need to apply to get in. Airbnb plans to accept 4,000 people across 11 experiences.

Using AI on Airbnb

Last November, in a conversation with TechCrunch, Airbnb CEO Brian Chesky mentioned that the company is testing generative AI-powered review summaries. While Airbnb hasn’t made public announcements in this area, Blecharczyk told TechCrunch that the platform plans to use AI in multiple areas, including customer support.

“We are increasingly using AI to streamline our customer support to make sure that customers are routed to the right agents or to have tickets automatically resolved with AI-generated responses,” Blecharczyk said.

The Airbnb co-founder noted that AI-powered responses are applicable only to certain scenarios, and the company has been testing them in limited production.

“I think there is a lot of potential for applying AI to the business. We think a lot about how AI is going to change the experience at the consumer layer over time,” Blecharczyk said without sharing specifics of the company’s roadmap.

“We have a lot of reviews, and we sit on top of a lot of customer service data. So we are thinking about how we can use AI to succinctly give a better picture of a listing and its information.”

In November 2023, Airbnb acquired a stealth startup called Gameplanner, which was launched by Siri co-founder Adam Cheyer. Earlier this year, Chesky mentioned that the Gameplanner acquisition was part of the travel platform’s plan to create an “ultimate concierge.” Using AI-powered replies to solve customer queries is probably the first step toward that.



The Browser Company releases Arc for Windows | TechCrunch


The Browser Company, maker of the Arc web browser, released its Windows version today. The company started testing the Windows client in December and says more than 150,000 people have been using it.

The startup, which aims to replace your current browser, recently raised $50 million at a $550 million valuation. Today, The Browser Company opened access to its Windows version to all users without any waitlist. Previously, the waitlist had more than 1 million people on it.

The company started with an invite-only Mac-based version in 2022 and opened it to everyone in July 2023.

Image Credits: The Browser Company

The Browser Company decided to build the Windows version in Swift so it could reuse and share the majority of the codebase with the Mac version. Swift is the programming language Apple originally designed for developing iPhone and Mac apps; using it on Windows should make it easier to maintain feature parity in the future. The company has also written extensively about its experience building in Swift on Windows to help developers looking to port their Mac apps.

Features on the Windows version

Arc on Windows has some core features of the Mac version, including the sidebar with your most-used webpages pinned up top; Spaces, which work like folders holding different sets of tabs for tasks such as “Work,” “Entertainment,” “Vacation” and “Notetaking”; profiles for separating browsing data and preferences; split view for opening multiple tabs in a single window; and a picture-in-picture video player so you can look at other tabs while watching a video clip.

Image Credits: The Browser Company

The Windows team has also included the Peek feature, which was missing from the initial version. Peek lets you see a quick link preview from the pinned and favorites tabs without clicking on the link.

This month, it introduced Arc Sync, which makes users’ sidebar, Spaces, folders, and tabs accessible across devices. The feature works on the Windows version as well.

One of the primary differentiators between the Windows and Mac versions is that the former supports touchscreens.

However, the newly released Windows version lacks features like Little Arc, a floating browser window for temporary uses such as opening a link. The company also didn’t specify whether the Windows version has AI-powered features such as link preview summaries, renaming of downloaded files, instant links that serve websites directly, and automatically updating live folders.

What’s ahead

Currently, Arc for Windows only supports Windows 11, but the startup is working on support for Windows 10.

The company also said that it wants to bring feature parity to both Mac and Windows but didn’t provide a timeline for it.

Earlier this year, The Browser Company released a mobile app for iPhone called Arc Search. Alongside the latest release, the company said it plans to bring an Arc Search client to Android.



Snowflake releases a flagship generative AI model of its own | TechCrunch


All-around, highly generalizable generative AI models were the name of the game once, and they arguably still are. But increasingly, as cloud vendors large and small join the generative AI fray, we’re seeing a new crop of models focused on the deepest-pocketed potential customers: the enterprise.

Case in point: Snowflake, the cloud computing company, today unveiled Arctic LLM, a generative AI model that’s described as “enterprise-grade.” Available under an Apache 2.0 license, Arctic LLM is optimized for “enterprise workloads,” including generating database code, Snowflake says, and is free for research and commercial use.

“I think this is going to be the foundation that’s going to let us — Snowflake — and our customers build enterprise-grade products and actually begin to realize the promise and value of AI,” CEO Sridhar Ramaswamy said in a press briefing. “You should think of this very much as our first, but big, step in the world of generative AI, with lots more to come.”

An enterprise model

My colleague Devin Coldewey recently wrote about how there’s no end in sight to the onslaught of generative AI models. I recommend you read his piece, but the gist is: Models are an easy way for vendors to drum up excitement for their R&D and they also serve as a funnel to their product ecosystems (e.g., model hosting, fine-tuning and so on).

Arctic LLM is no different. Snowflake’s flagship model in a family of generative AI models called Arctic, Arctic LLM — which took around three months, 1,000 GPUs and $2 million to train — arrives on the heels of Databricks’ DBRX, a generative AI model also marketed as optimized for the enterprise space.

Snowflake draws a direct comparison between Arctic LLM and DBRX in its press materials, saying Arctic LLM outperforms DBRX on the two tasks of coding (Snowflake didn’t specify which programming languages) and SQL generation. The company said Arctic LLM is also better at those tasks than Meta’s Llama 2 70B (but not the more recent Llama 3 70B) and Mistral’s Mixtral-8x7B.

Snowflake also claims that Arctic LLM achieves “leading performance” on a popular general language understanding benchmark, MMLU. I’ll note, though, that while MMLU purports to evaluate generative models’ ability to reason through logic problems, it includes tests that can be solved through rote memorization, so take that bullet point with a grain of salt.

“Arctic LLM addresses specific needs within the enterprise sector,” Baris Gultekin, head of AI at Snowflake, told TechCrunch in an interview, “diverging from generic AI applications like composing poetry to focus on enterprise-oriented challenges, such as developing SQL co-pilots and high-quality chatbots.”

Arctic LLM, like DBRX and Google’s top-performing generative model of the moment, Gemini 1.5 Pro, uses a mixture-of-experts (MoE) architecture. MoE architectures break data processing tasks into subtasks and delegate them to smaller, specialized “expert” models. So, while Arctic LLM contains 480 billion parameters, it only activates 17 billion at a time — enough to drive its 128 separate expert models. (Parameters essentially define the skill of an AI model on a problem, like analyzing and generating text.)
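To make the routing idea concrete, here is a minimal MoE layer sketched in PyTorch. It is purely illustrative — the dimensions, expert count and top-2 routing are toy values chosen for readability, not Arctic LLM’s actual configuration.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """A router scores each token and sends it to its top-k experts,
    so only a fraction of the layer's parameters run per token."""
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):  # x: (num_tokens, dim)
        gate_weights = self.router(x).softmax(dim=-1)
        top_w, top_i = gate_weights.topk(self.k, dim=-1)  # (num_tokens, k)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e
                if mask.any():  # only the selected experts do any work
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```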

Snowflake claims that this efficient design enabled it to train Arctic LLM on open public web data sets (including RefinedWeb, C4, RedPajama and StarCoder) at “roughly one-eighth the cost of similar models.”

Running everywhere

Snowflake is providing resources like coding templates and a list of training sources alongside Arctic LLM to guide users through the process of getting the model up and running and fine-tuning it for particular use cases. But, recognizing that those are likely to be costly and complex undertakings for most developers (fine-tuning or running Arctic LLM requires around eight GPUs), Snowflake’s also pledging to make Arctic LLM available across a range of hosts, including Hugging Face, Microsoft Azure, Together AI’s model-hosting service, and enterprise generative AI platform Lamini.
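For the Hugging Face route, a load-and-generate call might look like the hedged sketch below. The repo id matches the checkpoint Snowflake published at launch, but treat it — and the hardware assumptions — as illustrative rather than official guidance.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Snowflake/snowflake-arctic-instruct"  # assumed repo id; check the model card
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,  # Arctic ships custom modeling code
    device_map="auto",       # shard across however many GPUs are available
    torch_dtype="auto",
)

prompt = "Write a SQL query returning the ten largest orders by total value."
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(output[0], skip_special_tokens=True))
```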

Here’s the rub, though: Arctic LLM will be available first on Cortex, Snowflake’s platform for building AI- and machine learning-powered apps and services. The company’s unsurprisingly pitching it as the preferred way to run Arctic LLM with “security,” “governance” and scalability.

“Our dream here is, within a year, to have an API that our customers can use so that business users can directly talk to data,” Ramaswamy said. “It would’ve been easy for us to say, ‘Oh, we’ll just wait for some open source model and we’ll use it.’ Instead, we’re making a foundational investment because we think [it’s] going to unlock more value for our customers.”

So I’m left wondering: Who’s Arctic LLM really for besides Snowflake customers?

In a landscape full of “open” generative models that can be fine-tuned for practically any purpose, Arctic LLM doesn’t stand out in any obvious way. Its architecture might bring efficiency gains over some of the other options out there. But I’m not convinced that they’ll be dramatic enough to sway enterprises away from the countless other well-known and -supported, business-friendly generative models (e.g. GPT-4).

There’s also a point in Arctic LLM’s disfavor to consider: its relatively small context.

In generative AI, context window refers to input data (e.g. text) that a model considers before generating output (e.g. more text). Models with small context windows are prone to forgetting the content of even very recent conversations, while models with larger contexts typically avoid this pitfall.

Arctic LLM’s context is between ~8,000 and ~24,000 words, depending on the fine-tuning method — far below that of models like Anthropic’s Claude 3 Opus and Google’s Gemini 1.5 Pro.
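A minimal, dependency-free sketch of why a small window “forgets”: only the newest slice of the conversation ever reaches the model, so anything pushed out of the window is simply gone. The token counts here are made up for illustration.

```python
def fit_to_context(history_tokens: list[str], max_tokens: int) -> list[str]:
    """Keep only the newest tokens; earlier context is silently dropped."""
    return history_tokens[-max_tokens:]

conversation = ["turn1:", "my", "name", "is", "Ada", ".",
                "turn2:", "what", "is", "my", "name", "?"]
print(fit_to_context(conversation, max_tokens=6))
# ['turn2:', 'what', 'is', 'my', 'name', '?'] — the model never sees "Ada"
```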

Snowflake doesn’t mention it in the marketing, but Arctic LLM almost certainly suffers from the same limitations and shortcomings as other generative AI models — namely, hallucinations (i.e. confidently answering requests incorrectly). That’s because Arctic LLM, along with every other generative AI model in existence, is a statistical probability machine — one that, again, has a small context window. It guesses based on vast amounts of examples which data makes the most “sense” to place where (e.g. the word “go” before “the market” in the sentence “I go to the market”). It’ll inevitably guess wrong — and that’s a “hallucination.”
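That “guessing” is easy to observe directly. The sketch below asks a small open model (GPT-2, standing in for any causal language model) for its probability distribution over the next token:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("I go to the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = logits.softmax(dim=-1)
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx)!r}: {p:.3f}")   # the model's most probable guesses
```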

As Devin writes in his piece, until the next major technical breakthrough, incremental improvements are all we have to look forward to in the generative AI domain. That won’t stop vendors like Snowflake from championing them as great achievements, though, and marketing them for all they’re worth.



Hugging Face releases a benchmark for testing generative AI on health tasks | TechCrunch


Generative AI models are increasingly being brought to healthcare settings — in some cases prematurely, perhaps. Early adopters believe that they’ll unlock increased efficiency while revealing insights that’d otherwise be missed. Critics, meanwhile, point out that these models have flaws and biases that could contribute to worse health outcomes.

But is there a quantitative way to know how helpful, or harmful, a model might be when tasked with things like summarizing patient records or answering health-related questions?

Hugging Face, the AI startup, proposes a solution in a newly released benchmark test called Open Medical-LLM. Created in partnership with researchers at the nonprofit Open Life Science AI and the University of Edinburgh’s Natural Language Processing Group, Open Medical-LLM aims to standardize evaluating the performance of generative AI models on a range of medical-related tasks.

Open Medical-LLM isn’t a from-scratch benchmark, per se, but rather a stitching-together of existing test sets — MedQA, PubMedQA, MedMCQA and so on — designed to probe models for general medical knowledge and related fields, such as anatomy, pharmacology, genetics and clinical practice. The benchmark contains multiple-choice and open-ended questions that require medical reasoning and understanding, drawing from material including U.S. and Indian medical licensing exams and college biology test question banks.
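As a hedged sketch of what that stitching-together looks like in practice, the component test sets can be pulled individually with Hugging Face’s `datasets` library and formatted into prompts. The dataset ids, configs and field names below are assumptions for illustration; the leaderboard’s exact configuration may differ.

```python
from datasets import load_dataset

# Assumed dataset ids/configs — verify against the leaderboard's sources.
medmcqa = load_dataset("medmcqa", split="validation")
pubmedqa = load_dataset("pubmed_qa", "pqa_labeled", split="train")

sample = medmcqa[0]
prompt = (
    f"{sample['question']}\n"
    f"A. {sample['opa']}\nB. {sample['opb']}\n"
    f"C. {sample['opc']}\nD. {sample['opd']}\n"
    "Answer:"
)
print(prompt)  # feed this to the model under test; compare to sample['cop']
```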

“[Open Medical-LLM] enables researchers and practitioners to identify the strengths and weaknesses of different approaches, drive further advancements in the field and ultimately contribute to better patient care and outcome,” Hugging Face wrote in a blog post.

Image Credits: Hugging Face

Hugging Face is positioning the benchmark as a “robust assessment” of healthcare-bound generative AI models. But some medical experts on social media cautioned against putting too much stock into Open Medical-LLM, lest it lead to ill-informed deployments.

On X, Liam McCoy, a resident physician in neurology at the University of Alberta, pointed out that the gap between the “contrived environment” of medical question-answering and actual clinical practice can be quite large.

Hugging Face research scientist Clémentine Fourrier, who co-authored the blog post, agreed.

“These leaderboards should only be used as a first approximation of which [generative AI model] to explore for a given use case, but then a deeper phase of testing is always needed to examine the model’s limits and relevance in real conditions,” Fourrier replied on X. “Medical [models] should absolutely not be used on their own by patients, but instead should be trained to become support tools for MDs.”

It brings to mind Google’s experience when it tried to bring an AI screening tool for diabetic retinopathy to healthcare systems in Thailand.

Google created a deep learning system that scanned images of the eye, looking for evidence of retinopathy, a leading cause of vision loss. But despite high theoretical accuracy, the tool proved impractical in real-world testing, frustrating both patients and nurses with inconsistent results and a general lack of harmony with on-the-ground practices.

It’s telling that of the 139 AI-related medical devices the U.S. Food and Drug Administration has approved to date, none use generative AI. It’s exceptionally difficult to test how a generative AI tool’s performance in the lab will translate to hospitals and outpatient clinics, and, perhaps more importantly, how the outcomes might trend over time.

That’s not to suggest Open Medical-LLM isn’t useful or informative. The results leaderboard, if nothing else, serves as a reminder of just how poorly models answer basic health questions. But neither Open Medical-LLM nor any other benchmark is a substitute for carefully thought-out real-world testing.





Meta releases Llama 3, claims it's among the best open models available | TechCrunch


Meta has released the latest entry in its Llama series of open source generative AI models: Llama 3. Or, more accurately, the company has open sourced two models in its new Llama 3 family, with the rest to come at an unspecified future date.

Meta describes the new models — Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, which contains 70 billion parameters — as a “major leap” over the previous-generation Llama 2 7B and Llama 2 70B, performance-wise. (Parameters essentially define the skill of an AI model on a problem, like analyzing and generating text; higher-parameter-count models are, generally speaking, more capable than lower-parameter-count models.) In fact, Meta says that, for their respective parameter counts, Llama 3 8B and Llama 3 70B — trained on two custom-built 24,000-GPU clusters — are among the best-performing generative AI models available today.

That’s quite a claim to make. So how is Meta supporting it? Well, the company points to the Llama 3 models’ scores on popular AI benchmarks like MMLU (which attempts to measure knowledge), ARC (which attempts to measure skill acquisition) and DROP (which tests a model’s reasoning over chunks of text). As we’ve written about before, the usefulness — and validity — of these benchmarks is up for debate. But for better or worse, they remain one of the few standardized ways by which AI players like Meta evaluate their models.

Llama 3 8B bests other open source models like Mistral’s Mistral 7B and Google’s Gemma 7B, both of which contain 7 billion parameters, on at least nine benchmarks: MMLU, ARC, DROP, GPQA (a set of biology-, physics- and chemistry-related questions), HumanEval (a code generation test), GSM-8K (math word problems), MATH (another mathematics benchmark), AGIEval (a problem-solving test set) and BIG-Bench Hard (a commonsense reasoning evaluation).
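Numbers like these are typically produced with an open evaluation harness. As a hedged sketch — the task names and Python entry point vary between versions of EleutherAI’s lm-evaluation-harness, and the model repo is gated — a comparable run might look like:

```python
import lm_eval  # EleutherAI's lm-evaluation-harness (pip install lm-eval)

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Meta-Llama-3-8B",  # gated repo; requires access
    tasks=["mmlu", "arc_challenge", "gsm8k"],  # assumed task names for this version
    num_fewshot=5,
)
print(results["results"])  # per-task accuracy and related metrics
```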

Now, Mistral 7B and Gemma 7B aren’t exactly on the bleeding edge (Mistral 7B was released last September), and in a few of the benchmarks Meta cites, Llama 3 8B scores only a few percentage points higher than either. But Meta also claims that the larger-parameter-count Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models, including Gemini 1.5 Pro, the latest in Google’s Gemini series.

Image Credits: Meta

Llama 3 70B beats Gemini 1.5 Pro on MMLU, HumanEval and GSM-8K, and — while it doesn’t rival Anthropic’s most performant model, Claude 3 Opus — Llama 3 70B scores better than the weakest model in the Claude 3 series, Claude 3 Sonnet, on five benchmarks (MMLU, GPQA, HumanEval, GSM-8K and MATH).

Image Credits: Meta

For what it’s worth, Meta also developed its own test set covering use cases ranging from coding and creative writing to reasoning and summarization, and — surprise! — Llama 3 70B came out on top against Mistral’s Mistral Medium model, OpenAI’s GPT-3.5 and Claude 3 Sonnet. Meta says that it gated its modeling teams from accessing the set to maintain objectivity, but obviously — given that Meta itself devised the test — the results have to be taken with a grain of salt.

Image Credits: Meta

More qualitatively, Meta says that users of the new Llama models should expect more “steerability,” a lower likelihood to refuse to answer questions, and higher accuracy on trivia questions, questions pertaining to history and STEM fields such as engineering and science and general coding recommendations. That’s in part thanks to a much larger data set: a collection of 15 trillion tokens, or roughly 11 trillion words at the common ~¾-word-per-token rule of thumb — seven times the size of the Llama 2 training set. (In the AI field, “tokens” refers to subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”)
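In practice, tokens are produced by a tokenizer, and the pieces rarely line up with neat syllables. A quick sketch with any open BPE tokenizer (GPT-2’s here, as a stand-in) shows the subword splits and the token count of a sentence:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any BPE tokenizer illustrates the idea
print(tok.tokenize("fantastic"))             # subword pieces, not syllables
print(len(tok("The training set held 15 trillion tokens.")["input_ids"]))
```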

Where did this data come from? Good question. Meta wouldn’t say, revealing only that it drew from “publicly available sources,” that the set included four times more code than the Llama 2 training data set did, and that 5% of it is non-English data (spanning ~30 languages), included to improve performance on languages other than English. Meta also said it used synthetic data — i.e. AI-generated data — to create longer documents for the Llama 3 models to train on, a somewhat controversial approach due to the potential performance drawbacks.

“While the models we’re releasing today are only fine tuned for English outputs, the increased data diversity helps the models better recognize nuances and patterns, and perform strongly across a variety of tasks,” Meta writes in a blog post shared with TechCrunch.

Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Recent reporting revealed that Meta, in its quest to maintain pace with AI rivals, at one point used copyrighted ebooks for AI training despite the company’s own lawyers’ warnings; Meta and OpenAI are the subject of an ongoing lawsuit brought by authors including comedian Sarah Silverman over the vendors’ alleged unauthorized use of copyrighted data for training.

So what about toxicity and bias, two other common problems with generative AI models (including Llama 2)? Does Llama 3 improve in those areas? Yes, claims Meta.

Meta says that it developed new data-filtering pipelines to boost the quality of its model training data, and that it’s updated its pair of generative AI safety suites, Llama Guard and CybersecEval, to attempt to prevent the misuse of and unwanted text generations from Llama 3 models and others. The company’s also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities.
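As a hedged sketch of how a safety classifier like Llama Guard is meant to sit in front of a model: the published checkpoint can be run as an ordinary causal LM whose output is a safe/unsafe verdict. The repo id and template behavior below are assumptions based on Meta’s model card, not verified against it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "meta-llama/Meta-Llama-Guard-2-8B"  # assumed repo id; access is gated
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

chat = [{"role": "user", "content": "How do I hotwire a car?"}]
ids = tok.apply_chat_template(chat, return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=32)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
# expected output is a verdict such as "safe", or "unsafe" plus a category code
```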

Filtering isn’t foolproof, though — and tools like Llama Guard, CybersecEval and Code Shield only go so far. (See: Llama 2’s tendency to make up answers to questions and leak private health and financial information.) We’ll have to wait and see how the Llama 3 models perform in the wild, inclusive of testing from academics on alternative benchmarks.

Meta says that the Llama 3 models — which are available for download now, and powering Meta’s Meta AI assistant on Facebook, Instagram, WhatsApp, Messenger and the web — will soon be hosted in managed form across a wide range of cloud platforms including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM’s WatsonX, Microsoft Azure, Nvidia’s NIM and Snowflake. In the future, versions of the models optimized for hardware from AMD, AWS, Dell, Intel, Nvidia and Qualcomm will also be made available.

And more capable models are on the horizon.

Meta says that it’s currently training Llama 3 models over 400 billion parameters in size — models with the ability to “converse in multiple languages,” take more data in and understand images and other modalities as well as text, which would bring the Llama 3 series in line with open releases like Hugging Face’s Idefics2.

Image Credits: Meta

“Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context and continue to improve overall performance across core [large language model] capabilities such as reasoning and coding,” Meta writes in a blog post. “There’s a lot more to come.”

Indeed.



Google releases Imagen 2, a video clip generator | TechCrunch


Google doesn’t have the best track record when it comes to image-generating AI.

In February, the image generator built into Gemini, Google’s AI-powered chatbot, was found to be randomly injecting gender and racial diversity into prompts about people, resulting in images of racially diverse Nazis, among other offensive inaccuracies.

Google pulled the generator, vowing to improve it and eventually re-release it. As we await its return, the company’s launching an enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform — albeit a tool with a decidedly more enterprise bent. Google announced Imagen 2 at its annual Cloud Next conference in Las Vegas.

Image Credits: Frederic Lardinois/TechCrunch

Imagen 2 — which is actually a family of models, launched in December after being previewed at Google’s I/O conference in May 2023 — can create and edit images given a text prompt, like OpenAI’s DALL-E and Midjourney. Of interest to corporate types, Imagen 2 can render text, emblems and logos in multiple languages, optionally overlaying those elements in existing images — for example, onto business cards, apparel and products.

Image editing with Imagen 2, which launched first in preview, is now generally available in Vertex AI, along with two new capabilities: inpainting and outpainting. These features, which other popular image generators such as DALL-E have offered for some time, can be used to remove unwanted parts of an image, add new components and expand an image’s borders to create a wider field of view.
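In the Vertex AI SDK, an inpainting call looks roughly like the sketch below. The method and parameter names follow the SDK’s preview vision API, but the model version string, project id and file names are assumptions — treat this as illustrative, not canonical.

```python
import vertexai
from vertexai.preview.vision_models import Image, ImageGenerationModel

vertexai.init(project="my-project", location="us-central1")  # assumed project id
model = ImageGenerationModel.from_pretrained("imagegeneration@006")  # assumed version

base = Image.load_from_file("product_shot.png")
mask = Image.load_from_file("mask.png")  # white pixels mark the region to regenerate

edited = model.edit_image(
    base_image=base,
    mask=mask,
    prompt="replace the background with a plain studio backdrop",
)
edited[0].save(location="product_shot_edited.png")
```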

But the real meat of the Imagen 2 upgrade is what Google’s calling “text-to-live images.”

Imagen 2 can now create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. True to Imagen 2’s corporate focus, Google’s pitching live images as a tool for marketers and creatives, such as a GIF generator for ads showing nature, food and animals — subject matter that Imagen 2 was fine-tuned on.

Google says that live images can capture “a range of camera angles and motions” while “supporting consistency over the entire sequence.” But they’re in low resolution for now: 360 pixels by 640 pixels. Google’s pledging that this will improve in the future. 

To allay (or at least attempt to allay) concerns around the potential to create deepfakes, Google says that Imagen 2 will employ SynthID, an approach developed by Google DeepMind, to apply invisible, cryptographic watermarks to live images. Of course, detecting these watermarks — which Google claims are resilient to edits, including compression, filters and color tone adjustments — requires a Google-provided tool that’s not available to third parties.

And no doubt eager to avoid another generative media controversy, Google’s emphasizing that live image generations will be “filtered for safety.” A spokesperson told TechCrunch via email: “The Imagen 2 model in Vertex AI has not experienced the same issues as the Gemini app. We continue to test extensively and engage with our customers.”

Image Credits: Frederic Lardinois/TechCrunch

But generously assuming for a moment that Google’s watermarking tech, bias mitigations and filters are as effective as it claims, are live images even competitive with the video generation tools already out there?

Not really.

Runway can generate 18-second clips in much higher resolutions. Stability AI’s video clip tool, Stable Video Diffusion, offers greater customizability (in terms of frame rate). And OpenAI’s Sora — which, granted, isn’t commercially available yet — appears poised to blow away the competition with the photorealism it can achieve.

So what are the real technical advantages of live images? I’m not really sure. And I don’t think I’m being too harsh.

After all, Google is behind genuinely impressive video generation tech like Imagen Video and Phenaki. Phenaki, one of Google’s more interesting experiments in text-to-video, turns long, detailed prompts into two-minute-plus “movies” — with the caveat that the clips are low resolution, low frame rate and only somewhat coherent.

In light of recent reports suggesting that the generative AI revolution caught Google CEO Sundar Pichai off guard and that the company’s still struggling to maintain pace with rivals, it’s not surprising that a product like live images feels like an also-ran. But it’s disappointing nonetheless. I can’t help the feeling that there is — or was — a more impressive product lurking in Google’s skunkworks.

Models like Imagen are trained on an enormous number of examples usually sourced from public sites and datasets around the web. Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

I asked, as I always do around announcements pertaining to generative AI models, about the data that was used to train the updated Imagen 2, and whether creators whose work might’ve been swept up in the model training process will be able to opt out at some future point.

Google told me only that its models are trained “primarily” on public web data, drawn from “blog posts, media transcripts and public conversation forums.” Which blogs, transcripts and forums? It’s anyone’s guess.

A spokesperson pointed to Google’s web publisher controls that allow webmasters to prevent the company from scraping data, including photos and artwork, from their websites. But Google wouldn’t commit to releasing an opt-out tool or, alternatively, compensating creators for their (unknowing) contributions — a step that many of its competitors, including OpenAI, Stability AI and Adobe, have taken.

Another point worth mentioning: Text-to-live images isn’t covered by Google’s generative AI indemnification policy, which protects Vertex AI customers from copyright claims related to Google’s use of training data and outputs of its generative AI models. That’s because text-to-live images is technically in preview; the policy only covers generative AI products in general availability (GA).

Regurgitation, or where a generative model spits out a mirror copy of an example (e.g., an image) that it was trained on, is rightly a concern for corporate customers. Studies both informal and academic have shown that the first-gen Imagen wasn’t immune to this, spitting out identifiable photos of people, artists’ copyrighted works and more when prompted in particular ways.

Barring controversies, technical issues or some other major unforeseen setbacks, text-to-live images will enter GA somewhere down the line. But with live images as it exists today, Google’s basically saying: use at your own risk.



Overture Maps Foundation releases the first beta of its open map dataset | TechCrunch


The Overture Maps Foundation today launched the first beta of its global open map dataset. With this, the foundation, which is backed by the likes of Amazon, Esri, Meta, Microsoft and TomTom, is getting one step closer to launching a production-ready open dataset for developers who need geospatial data to power their applications.

“This Beta release brings together multiple sources of open data, has been through several validation tests, is formatted in a new schema and has an entity reference system that allows attachment of other spatial data,” said Marc Prioleau, executive director of Overture Maps Foundation. “This is a significant step forward for open map data by delivering data that is ready to be used in applications.”

Overture was founded back in 2022, under the umbrella of the Linux Foundation. At the time, Linux Foundation executive director Jim Zemlin noted that “mapping the physical environment and every community in the world, even as they grow and change, is a massively complex challenge that no one organization can manage. Industry needs to come together to do this for the benefit of all.”

Now, two years later, some Overture members have already started integrating its data into their applications. Meta, the foundation says, is using Overture data for its map solutions while Microsoft is adopting it to add coverage to Bing Maps.

The beta release includes five base layers: 54 million places of interest; 2.3 billion buildings; roads, footpaths and other travel infrastructure; administrative boundaries; and a contextual base layer covering land and water data.
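Overture distributes these layers as GeoParquet, and a common way to poke at a release is to query the files in place with DuckDB. A hedged sketch follows — the S3 release path is an assumption, so check Overture’s documentation for the current one.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")  # enables reading from S3 over HTTP
con.execute("SET s3_region = 'us-west-2';")

rows = con.execute("""
    SELECT names, categories
    FROM read_parquet(
        's3://overturemaps-us-west-2/release/2024-04-16-beta.0/theme=places/type=place/*',
        hive_partitioning = 1)
    LIMIT 5
""").fetchall()
print(rows)  # a few places-of-interest records from the beta release
```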

In a world where OpenStreetMap (OSM) has been around for a very long time, it’s worth asking why the industry would need a project like Overture (which actually uses OSM data as part of its data set).

“Overture is a data-centric map project, not a community of individual map editors,” the project’s FAQ explains. “Therefore, Overture is intended to be complementary to OSM. We combine OSM with other sources to produce new open map data sets. Overture data will be available for use by the OpenStreetMap community under compatible open data licenses. Overture members are encouraged to contribute to OSM directly.”

