
Why RAG won't solve generative AI's hallucination problem | TechCrunch


Hallucinations — the lies generative AI models tell, basically — are a big problem for businesses looking to integrate the technology into their operations.

Because models have no real intelligence and are simply predicting words, images, speech, music and other data according to a private schema, they sometimes get it wrong. Very wrong. In a recent piece in The Wall Street Journal, a source recounts an instance where Microsoft’s generative AI invented meeting attendees and implied that conference calls were about subjects that weren’t actually discussed on the call.

As I wrote a while ago, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. But a number of generative AI vendors suggest that they can be done away with, more or less, through a technical approach called retrieval augmented generation, or RAG.

Here’s how one vendor, Squirro, pitches it:

At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution … [our generative AI] is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring credibility.

Here’s a similar pitch from SiftHub:

Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and reduced risk and inspires absolute trust to use AI for all their needs.

RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents possibly relevant to a question (for example, a Wikipedia page about the Super Bowl) using what’s essentially a keyword search, and then asks the model to generate an answer given this additional context.
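The loop itself is simple enough to sketch in a few lines of Python. Everything below is illustrative: the toy corpus, the overlap scoring and the prompt format are assumptions, not the Lewis et al. implementation, and a real system would feed the assembled prompt to an LLM.

```python
# Minimal sketch of the RAG pattern: score documents by keyword overlap
# with the question, then prepend the best matches to the prompt.

def keyword_score(question: str, document: str) -> int:
    """Count how many of the question's words appear in the document."""
    doc_words = set(document.lower().split())
    return sum(word in doc_words for word in question.lower().split())

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most keywords with the question."""
    return sorted(corpus, key=lambda d: keyword_score(question, d), reverse=True)[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt the model sees instead of the bare question."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "The Kansas City Chiefs won Super Bowl LVIII in February 2024.",
    "Retrieval augmented generation grounds model answers in documents.",
]
print(build_prompt("Who won the Super Bowl last year?", corpus))
```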

“When you’re interacting with a generative AI model like ChatGPT or Llama and you ask a question, the default is for the model to answer from its ‘parametric memory’ — i.e., from the knowledge that’s stored in its parameters as a result of training on massive data from the web,” David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute, explained. “But, just like you’re likely to give more accurate answers if you have a reference [like a book or a file] in front of you, the same is true in some cases for models.”

RAG is undeniably useful: it lets the things a model generates be attributed to retrieved documents so their factuality can be verified (and, as an added benefit, it helps avoid potentially copyright-infringing regurgitation). RAG also lets enterprises that don’t want their documents used to train a model (say, companies in highly regulated industries like healthcare and law) allow models to draw on those documents in a more secure and temporary way.

But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors gloss over.

Wadden says that RAG is most effective in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need” — for example, to find out who won the Super Bowl last year. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find via keyword search.

Things get trickier with “reasoning-intensive” tasks such as coding and math, where it’s harder to specify in a keyword-based search query the concepts needed to answer a request — much less identify which documents might be relevant.

Even with basic questions, models can get “distracted” by irrelevant content in documents, particularly in long documents where the answer isn’t obvious. Or they can — for reasons as yet unknown — simply ignore the contents of retrieved documents, opting instead to rely on their parametric memory.

RAG is also expensive in terms of the hardware needed to apply it at scale.

That’s because retrieved documents, whether from the web, an internal database or somewhere else, have to be stored in memory — at least temporarily — so that the model can refer back to them. Another expenditure is compute for the increased context a model has to process before generating its response. For a technology already notorious for the amount of compute and electricity it requires even for basic operations, this amounts to a serious consideration.
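Some rough arithmetic illustrates the overhead. All the numbers below are made-up assumptions for illustration; real figures depend on the model, the retriever and the provider’s pricing:

```python
# Back-of-the-envelope cost of the extra context RAG adds to every query.
# These constants are hypothetical placeholders, not real prices.
DOCS_PER_QUERY = 5          # retrieved documents prepended to each prompt
TOKENS_PER_DOC = 800        # assumed average document length, in tokens
COST_PER_1K_TOKENS = 0.01   # hypothetical input price, in dollars

extra_tokens = DOCS_PER_QUERY * TOKENS_PER_DOC
extra_cost = extra_tokens / 1000 * COST_PER_1K_TOKENS
print(f"Each query carries ~{extra_tokens} extra input tokens (~${extra_cost:.2f})")
```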

That’s not to suggest RAG can’t be improved. Wadden noted many ongoing efforts to train models to make better use of RAG-retrieved documents.

Some of these efforts involve models that can “decide” when to make use of the documents, or models that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to more efficiently index massive datasets of documents, and on improving search through better representations of documents — representations that go beyond keywords.
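The first of those ideas, deciding when to retrieve, can be sketched as a gating step: ask the model whether it needs documents before paying for retrieval. In this sketch, ask_model is a hypothetical stand-in for any LLM call, and the retriever is whatever search function you plug in:

```python
from typing import Callable

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for any LLM call."""
    raise NotImplementedError("wire this to your model of choice")

def answer(question: str, retriever: Callable[[str], list[str]]) -> str:
    # Gate: ask the model whether it needs external documents at all.
    decision = ask_model(f"Do you need to look up documents to answer this? Yes or no: {question}")
    if decision.strip().lower().startswith("yes"):
        context = "\n".join(retriever(question))
        return ask_model(f"Context:\n{context}\n\nQuestion: {question}")
    # Otherwise skip retrieval and rely on parametric memory alone.
    return ask_model(question)
```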

“We’re pretty good at retrieving documents based on keywords, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”
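One building block in that direction is ranking documents by dense vector similarity rather than shared words, so a query about a proof technique can match a document that never repeats the query’s keywords. A minimal sketch, assuming a hypothetical embed() stand-in for any sentence-embedding model:

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical stand-in for a sentence-embedding model returning a dense vector."""
    raise NotImplementedError("wire this to an embedding model")

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by embedding similarity rather than shared keywords."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```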

So RAG can help reduce a model’s hallucinations — but it’s not the answer to all of AI’s hallucinatory problems. Beware of any vendor that tries to claim otherwise.



Aerospike raises $109M for its real-time database platform to capitalize on the AI boom | TechCrunch


NoSQL database Aerospike today announced that it has raised a $109 million Series E round led by Sumeru Equity Partners. Existing investor Alsop Louie Partners also participated in this round.

The company started in 2009 as a key-value store focused on the adtech industry; Aerospike has since diversified its offerings quite a bit. Today, its core offering is a NoSQL database that’s optimized for real-time use cases at scale.

In 2022, Aerospike added document support and then followed that up with graph and vector capabilities — two database features that are crucial for building real-time AI and ML applications.

“We were founded primarily as a real-time data platform that can work with data at really high scale, or, as we call it, unlimited scale,” Aerospike CEO Subbu Iyer said. “We’ve been fortunate enough that a lot of our customers have either started their journey at scale with us, or started the journey earlier and grown into the platform. So our premise has held good that real-time data and real-time access to data is going to be important pretty much across every industry. Our founding principles were really to deliver real-time performance with data at any scale, and the lowest [total cost of ownership] on the market.”

In part, Aerospike, which offers its service as a hosted platform and on-premises, is able to deliver on this promise through its hybrid memory architecture, which lets it combine RAM for fast data access with cheaper, still-fast flash storage in whatever mix a workload needs. Aerospike competitor Redis recently acquired Speedb to offer similar capabilities, also with an eye on helping its customers reduce costs.
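Concretely, that mix is set per namespace in the server’s configuration. The stanza below is an illustrative sketch of the classic (pre-7.0) aerospike.conf syntax with placeholder values, not a recommended production setup; consult Aerospike’s docs for current options:

```
namespace demo {
    replication-factor 2
    memory-size 8G                  # RAM for the primary index
    storage-engine device {
        device /dev/nvme0n1         # records live on flash
        data-in-memory false        # don't also keep a full copy in RAM
    }
}
```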


Today, the company’s customers include the likes of Airtel, TransUnion, Snap and TechCrunch parent company Yahoo.

Right now, though, it’s definitely the AI boom that is driving a lot of interest in Aerospike, and the company wants to be in a position to capitalize on that through this new funding round.

Unsurprisingly, that means the company plans to use the new funding to accelerate its innovations around AI, which are mostly focused on its graph and vector capabilities. Iyer told me that Aerospike is specifically looking at combining those two capabilities.

“Going forward, there are some synergistic ways in which graph and vectors can come together,” he said. “A simple use case I use for this, for example, is if you’re looking for a specific document and you have embeddings and stored them in a vector database, you want to use a vector search to get to that specific document. But if you’re looking for a set of similar documents, a vector search can get you to the neighborhood and then a graph can get you a similar corpus of documents because of relationships and stuff.”
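A rough sketch of that two-step idea (illustrative Python, not Aerospike’s actual API): a vector search supplies the seed documents, then a walk over relationship edges expands the neighborhood:

```python
# Expand vector-search hits by following graph edges (citations, shared
# authors, etc.). The seed IDs would come from a vector search; the edge
# map is a toy stand-in for a real graph store.

def expand_with_graph(seed_ids: list[str], edges: dict[str, list[str]], hops: int = 1) -> set[str]:
    found = set(seed_ids)
    frontier = list(seed_ids)
    for _ in range(hops):
        frontier = [n for doc in frontier for n in edges.get(doc, []) if n not in found]
        found.update(frontier)
    return found

edges = {"doc1": ["doc2", "doc3"], "doc2": ["doc4"]}
print(expand_with_graph(["doc1"], edges))  # -> {'doc1', 'doc2', 'doc3'}
```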

That, of course, is also what got investors interested in the company. Aerospike raised its last round in 2019 and, according to the company’s CEO, it didn’t need to raise now, but there is a large opportunity for Aerospike to capitalize on, something Sumeru co-founder and managing director George Kadifa also stressed.

“AI is transforming the economy and presents new opportunities for growth and innovation,” Kadifa said. “Aerospike, with its impressive customer base and performance advantage at scale, is uniquely positioned to become a foundational element for the next generation of real-time AI applications.”

