Backblaze Inc.

06/27/2024 | Press release

AI 101: Why RAG Is All the RAGe

At the risk of being called the stick in the mud of the tech world, we here at Backblaze have often bemoaned our industry's love of making up new acronyms. The most recent culprit, hailing from the fast-moving artificial intelligence/machine learning (AI/ML) space, is truly memorable: RAG, aka retrieval-augmented generation. For the record, its creator has apologized for inflicting it upon the world.

Given how useful it is, we're willing to forgive. (I'm sure he was holding his breath for that news.) Today, our AI 101 series is back to talk about what RAG is and the big problem it solves.

Read more AI 101

This article is part of a series that attempts to understand the evolving world of AI/ML. Check out our previous articles for more context.

Let's start with large language models (LLMs)

LLMs are the most recognizable expression of AI in our current zeitgeist. (Arguably, you could append that with "that we're all paying attention to," given that ML algorithms have been behind many tools for decades now.) LLMs underpin tools like ChatGPT, Google Gemini, and Claude, as well as things like service-oriented chatbots, natural language processing tasks, and so on. They're trained on vast amounts of data, with algorithmic guardrails known as parameters and hyperparameters shaping that training. Once trained, we query them through a process known as inference.

Fabulous! The possibilities are endless. However, one of the biggest challenges we've experienced (and laughed about on the internet) is that LLMs can return inaccurate results while sounding very, very reasonable. Additionally, LLMs don't know what they don't know. Their answers can only be as good as the data they draw from, so if their training dataset is outdated or contains a systematic bias, it will impact your results. As AI tools have become more widely adopted, we've seen LLM inaccuracies range from "funny and widely mocked" to "oh, that's actually serious."

Shoutout to Reddit users for finally getting this one into trusted sources.

Enter retrieval-augmented generation (Fine! RAG)

RAG is a solution to these problems. Instead of relying only on an LLM's training dataset, RAG queries external sources before returning a response. It's more complicated than "let me google that for you": the process takes that external data, turns it into a vector database, and then balances the retrieved information against the LLM's general knowledge and its skill at responding to conversational queries.
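
To make that concrete, here's a minimal sketch of the flow in plain Python. The embed() function below is a toy stand-in for a real embedding model, and the final LLM call is left as a comment, since those pieces depend on whichever model and provider you use; the retrieval and prompt-augmentation steps are the part RAG adds.

    import math

    def embed(text: str) -> list[float]:
        # Toy stand-in: a real system would call an embedding model here.
        vec = [0.0] * 8
        for i, ch in enumerate(text.lower()):
            vec[i % 8] += ord(ch)
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def cosine(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))

    # 1. Index external documents as vectors (the "vector database").
    documents = [
        "Backblaze B2 supports an S3-compatible API.",
        "Application keys can be scoped to a single bucket.",
    ]
    index = [(doc, embed(doc)) for doc in documents]

    # 2. At query time, retrieve the most relevant chunk(s).
    question = "Does B2 work with S3-compatible tools?"
    q_vec = embed(question)
    top = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:1]

    # 3. Augment the prompt so the LLM answers from the retrieved context.
    context = "\n".join(doc for doc, _ in top)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # answer = generate(prompt)  # hypothetical call to your LLM of choice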

This has several advantages. Users now have sources they can cite, and recent information is taken into account. From a development perspective, it means that you don't have to re-train a model as frequently. And, it can be implemented in as few as five lines of code.
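
The "as few as five lines" claim usually refers to frameworks that package the whole pipeline for you. One common illustration is LlamaIndex's starter example, which looks roughly like this, assuming an LLM API key is configured in your environment and your source files live in a local data/ folder:

    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("data").load_data()  # read local source files
    index = VectorStoreIndex.from_documents(documents)     # embed and index them
    query_engine = index.as_query_engine()                 # wire retrieval to the LLM
    print(query_engine.query("What changed in the latest release?"))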

One important nuance is that when you're building RAG into your product, you can set its sources. For industries like medicine and law, that means you can point the model toward industry journals and trusted references, outweighing the often misquoted or mis-cited examples it might otherwise pick up from a general dataset.
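
As a rough sketch of what "setting its sources" can look like in practice (the field names and the trusted-source list here are hypothetical), the retriever simply refuses to consider documents that don't come from a vetted source:

    # Hypothetical corpus entries tagged with their provenance.
    corpus = [
        {"text": "Dosage guidance from a peer-reviewed study ...", "source": "medical_journal"},
        {"text": "Unverified forum anecdote ...", "source": "web_scrape"},
    ]

    TRUSTED_SOURCES = {"medical_journal", "regulatory_guidance"}

    def retrieval_candidates(corpus):
        # Only documents from trusted sources are eligible for retrieval;
        # everything else never reaches the LLM's context window.
        return [doc for doc in corpus if doc["source"] in TRUSTED_SOURCES]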

Another example: For a technical documentation portal, you can take an LLM, trained on general information and the nuts and bolts of conversational querying, and direct it to rely on your organization's help articles as its most important sources. Your organization controls the authoritative data and how often and when changes are made. Users can trust that they're getting the most recent security patches and correct code. And, you can do so quickly, easily, and, most importantly, cost-effectively.
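
Here's a sketch of what that documentation-portal setup might look like at the prompt level. The retrieve() and generate() arguments are hypothetical stand-ins for your retriever and LLM client, and the instruction wording is just an example:

    def answer_from_help_docs(question, retrieve, generate):
        # Pull the most relevant help articles from your own, versioned docs.
        articles = retrieve(question, top_k=3)
        context = "\n\n".join(
            f"[{a['title']} (updated {a['updated']})]\n{a['body']}" for a in articles
        )
        prompt = (
            "You are a support assistant. Answer ONLY from the help articles below, "
            "cite the article title you used, and say so if the answer isn't there.\n\n"
            f"{context}\n\nQuestion: {question}"
        )
        return generate(prompt)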

RAG doesn't mean foolproof AI

RAG is a great, straightforward method for keeping LLM tools updated with current, high-quality information and giving users more transparency around where their answers are coming from. However, as we mentioned above, AI is only ever as good as the data it uses. Keep in mind, that's a deceptively simple thing to say. It's an entire, specialized job to validate datasets, and that expertise is built into the research and monitoring that happens while training an LLM.

RAG gives a new source of data a privileged position: you're saying "this data is more authoritative than that data," and, since the LLM may have nothing in its training data to weigh against it, it may not have a counterargument. If you're not paying attention to your RAG data source standards, and doing so on an ongoing basis, it's possible, and even likely, that bias, low-quality data, and the like could creep into your model.

Think of it this way: If you're pointing to a new feature in your tech docs and there's an error, that impact is magnified because an LLM will give more weight to the RAG data. At least in that case, you're the one who controls the source data. In our other examples of legal or medical AI tools pointing to journal updates, things can get, well, more complicated. If (when) you're setting up an AI that uses RAG, it's imperative to make sure you're also setting yourself up with reliable sources that are regularly updated.

But, given its impact and how low a lift it is to integrate into existing products, we can see why RAG is all the RAGe, and, as always, we look forward to what comes next in the AI landscape. For now, we can already see the impact it's having on the market, with SaaS companies and startups alike exploring the possibilities.
