From Digital Age to Nano Age. WorldWide.

Tag: AI's


Perplexity AI's new feature will turn your searches into shareable pages | TechCrunch


AI-powered summaries of webpages are a feature that you will find in many AI-centric tools these days. The next step for some of these tools is to prepare detailed and well-formatted web pages for your search queries. The Arc browser is doing this to some extent through its Arc Search app. Apple is rumored to […]





OpenAI offers a peek behind the curtain of its AI's secret instructions | TechCrunch


Ever wonder why conversational AI like ChatGPT says “Sorry, I can’t do that” or some other polite refusal? OpenAI is offering a limited look at the reasoning behind its own models’ rules of engagement, whether it’s sticking to brand guidelines or declining to make NSFW content. Large language models (LLMs) don’t have any naturally occurring […]





Why RAG won't solve generative AI's hallucination problem | TechCrunch


Hallucinations — the lies generative AI models tell, basically — are a big problem for businesses looking to integrate the technology into their operations.

Because models have no real intelligence and are simply predicting words, images, speech, music and other data according to a private schema, they sometimes get it wrong. Very wrong. In a recent piece in The Wall Street Journal, a source recounts an instance where Microsoft’s generative AI invented meeting attendees and implied that conference calls were about subjects that weren’t actually discussed on the call.

As I wrote a while ago, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. But a number of generative AI vendors suggest that they can be done away with, more or less, through a technical approach called retrieval augmented generation, or RAG.

Here’s how one vendor, Squirro, pitches it:

At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution … [our generative AI] is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring credibility.

Here’s a similar pitch from SiftHub:

Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and reduced risk and inspires absolute trust to use AI for all their needs.

RAG was pioneered by data scientist Patrick Lewis, researcher at Meta and University College London, and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents possibly relevant to a question — for example, a Wikipedia page about the Super Bowl — using what’s essentially a keyword search and then asks the model to generate answers given this additional context.

“When you’re interacting with a generative AI model like ChatGPT or Llama and you ask a question, the default is for the model to answer from its ‘parametric memory’ — i.e., from the knowledge that’s stored in its parameters as a result of training on massive data from the web,” David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute, explained. “But, just like you’re likely to give more accurate answers if you have a reference [like a book or a file] in front of you, the same is true in some cases for models.”
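
To make the mechanics concrete, here is a minimal sketch of the retrieve-then-generate loop in Python. It is illustrative only: the crude keyword scorer stands in for a real search index, and the generate() callable stands in for a call to an LLM; none of the names correspond to any particular vendor’s API.

    # Minimal retrieve-then-generate (RAG) sketch; all names are illustrative.
    def keyword_score(query: str, doc: str) -> int:
        """Count query words that also appear in the document (crude keyword search)."""
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
        """Return the k documents with the highest keyword overlap with the query."""
        return sorted(corpus, key=lambda d: keyword_score(query, d), reverse=True)[:k]

    def answer_with_rag(query: str, corpus: list[str], generate) -> str:
        """Prepend retrieved documents to the prompt so the model can answer from
        them instead of relying only on its parametric memory."""
        context = "\n\n".join(retrieve(query, corpus))
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        )
        return generate(prompt)  # placeholder for the actual LLM call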

RAG is undeniably useful — it allows one to attribute things a model generates to retrieved documents to verify their factuality (and, as an added benefit, avoid potentially copyright-infringing regurgitation). RAG also lets enterprises that don’t want their documents used to train a model — say, companies in highly regulated industries like healthcare and law — allow models to draw on those documents in a more secure and temporary way.

But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors gloss over.

Wadden says that RAG is most effective in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need” — for example, to find out who won the Super Bowl last year. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find via keyword search.

Things get trickier with “reasoning-intensive” tasks such as coding and math, where it’s harder to specify in a keyword-based search query the concepts needed to answer a request — much less identify which documents might be relevant.
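
A toy comparison makes that gap visible. The two documents below are invented purely for illustration; the point is only that the factual question shares most of its words with the passage that answers it, while the math question shares almost nothing with the passage describing the relevant proof technique.

    # Illustrative only: invented documents, crude word-overlap scoring.
    def overlap(query: str, doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))

    factual_q   = "who won the super bowl last year"
    factual_doc = "The Kansas City Chiefs won the Super Bowl last year against the 49ers"

    math_q   = "show that the sum of two even numbers is even"
    math_doc = "Proof sketch: write each number as 2k and 2m, then factor out the 2"

    print(overlap(factual_q, factual_doc))  # high overlap: easy to find by keyword search
    print(overlap(math_q, math_doc))        # almost no overlap: keyword search struggles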

Even with basic questions, models can get “distracted” by irrelevant content in documents, particularly in long documents where the answer isn’t obvious. Or they can — for reasons as yet unknown — simply ignore the contents of retrieved documents, opting instead to rely on their parametric memory.

RAG is also expensive in terms of the hardware needed to apply it at scale.

That’s because retrieved documents, whether from the web, an internal database or somewhere else, have to be stored in memory — at least temporarily — so that the model can refer back to them. Another expenditure is compute for the increased context a model has to process before generating its response. For a technology already notorious for the amount of compute and electricity it requires even for basic operations, this amounts to a serious consideration.
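
A rough back-of-envelope calculation shows where the cost comes from. Every number below is an assumption chosen only to make the arithmetic visible, not a measurement of any real deployment.

    # Illustrative assumptions, not measurements.
    question_tokens     = 50         # the user's question on its own
    tokens_per_document = 1_000      # assumed length of each retrieved passage
    documents_per_query = 5          # assumed number of passages prepended
    queries_per_day     = 1_000_000  # assumed traffic

    extra_per_query = tokens_per_document * documents_per_query
    print(f"Prompt grows from {question_tokens} to {question_tokens + extra_per_query} tokens per query")
    print(f"About {extra_per_query * queries_per_day:,} extra tokens to process per day")

Under those assumed numbers, each prompt grows roughly a hundredfold before the model produces a single word of its answer.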

That’s not to suggest RAG can’t be improved. Wadden noted many ongoing efforts to train models to make better use of RAG-retrieved documents.

Some of these efforts involve models that can “decide” when to make use of the documents, or models that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to more efficiently index massive datasets of documents, and on improving search through better representations of documents — representations that go beyond keywords.
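
As a sketch of what “representations that go beyond keywords” can look like, the snippet below ranks documents by embedding similarity rather than shared words. The embed() function is a random-vector placeholder assumed in place of a real text-embedding model, so only the structure of the approach is meaningful here.

    # Dense (embedding-based) retrieval sketch; embed() is a placeholder.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder: a real system would call a text-embedding model here."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(384)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def dense_retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
        """Rank documents by vector similarity instead of keyword overlap, so
        conceptually related documents can surface even with no words in common."""
        q = embed(query)
        return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]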

“We’re pretty good at retrieving documents based on keywords, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”

So RAG can help reduce a model’s hallucinations — but it’s not the answer to all of AI’s hallucinatory problems. Beware of any vendor that tries to claim otherwise.




European car manufacturer will pilot Sanctuary AI's humanoid robot | TechCrunch


Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe’s top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI’s robots will be deployed.

The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.

For its part, Magna invested in Sanctuary AI back in 2021 — right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. Tesla would later dub that system “Optimus.” Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5’7” (a pretty standard height for these machines) and weighs 155 pounds.

Phoenix isn’t Sanctuary’s first humanoid (an early model had been deployed at a Canadian retailer), but it is the first to walk on legs — this is in spite of the fact that most available videos only highlight the system’s torso. The company has also focused some of its efforts on creating dexterous hands — an important addition if the system is expected to expand functionality beyond moving around totes.

Sanctuary calls the pilot “a multi-disciplinary assessment of improving cost and scalability of robots using Magna’s automotive product portfolio, engineering and manufacturing capabilities; and a strategic equity investment by Magna.”

As ever, these agreements should be taken as what they are: pilots. They’re not exactly validation of the form factor and systems — that comes later, if Magna gets what it’s looking for with the deal. That comes down to three big letters: ROI.

The company isn’t disclosing specifics with regard to the number of robots, the length of the pilot or even the specific factory where they will be deployed.




TechCrunch Early Stage 2024 Women's Breakfast: Exploring AI's impact on founders | TechCrunch


In the world of tech, innovation knows no bounds. And at the forefront of this ever-evolving landscape, AI stands tall, casting its transformative spell on everything it touches. But amid the buzz, one crucial question emerges: How is AI shaping the journey of founders?

TechCrunch’s Early Stage conference is set to delve deep into this inquiry, and we’re thrilled to announce a special Women’s Breakfast event on April 25 in Boston. This exclusive gathering will focus on exploring the intricate ways in which AI is reshaping the entrepreneurial path for women in tech.

Women in Tech Sunrise Breakfast: How AI is impacting founders

AI is not just a tool; it’s a paradigm shift, redefining the rules of engagement in the startup realm. From revolutionizing product development to influencing investor sentiment, AI’s impact is profound and far-reaching. Our distinguished panelists will navigate these waters, offering insights, strategies, and personal anecdotes from their journeys as trailblazing founders.

Meet our esteemed panelists:

  • Lily Lyman: Partner, Underscore VC
  • Rudina Seseri: Co-founder and managing partner, Glasswing Ventures
  • Milo Werner: General partner, Engine Ventures

Together they’ll unravel the mysteries of AI adoption, the challenges it poses, and the opportunities it unlocks for visionary entrepreneurs. This is not just a discussion; it’s a roadmap for navigating the AI-driven future of entrepreneurship.

TechCrunch Early Stage 2024 promises to be a landmark event, and the Women’s Breakfast is your gateway to unlocking the full potential of AI in your entrepreneurial journey. All women can join us on April 25 for a morning of inspiration, empowerment, and actionable insights; purchase your ticket today. See you there!

Is your company interested in sponsoring or exhibiting at TechCrunch Early Stage 2024? Reach out to our sponsorship sales team by completing this form.


