Is Your AI an “Innie” or an “Outie”?
Lessons from Severance on Controlling Context for Better Business Insights
Often, using an LLM is like asking a particularly knowledgeable friend for advice: they tap into vast bodies of knowledge but may occasionally state responses that are incorrect, misleading, or irrelevant to the specific context. Retrieval-augmented generation solves these issues by imposing guardrails on the LLM’s retrieval process.
Introduction
First off, SPOILER ALERT if you aren’t yet caught up with Apple TV’s Severance: there will be spoilers, and you probably won’t understand the analogies here without watching.
Coincidentally, as I was finishing Season 2, I was also deep in the development of version 1 of ID8’s Nexus innovation research platform. In the course of experiencing both simultaneously, it dawned on me that there’s a striking parallel between Severance’s innies (employees with intentionally limited knowledge, confined entirely within a carefully controlled context) and the Retrieval-Augmented Generation (RAG) approach we use in Nexus. Both environments offer intentionally curated contexts to achieve very specific objectives, with strict guardrails that can, perhaps paradoxically, liberate rather than constrain.
However, to determine whether RAG is right for you, you must first dig into what RAG is, what its advantages and disadvantages are, and in which situations it’s most appropriate. Sometimes you’d be better off with an integrated approach that leverages multiple methods, and sometimes better off not bothering with the restrictive shackles of the innie world altogether.
What Is RAG?
To understand RAG deeply, we must first tackle the basics of how traditional large language models (LLMs) function. Typically, a user sends a prompt to an LLM, which then responds using an enormous, generalized knowledge base accumulated during training. Imagine asking a particularly knowledgeable friend for advice. They’ll provide an answer influenced by everything they’ve ever learned, read, or overheard. While expansive, this approach can lead to inaccuracies or “hallucinations,” where they confidently state responses that are entirely incorrect, misleading, or irrelevant to the specific context.
Even when researchers intentionally upload focused datasets, such as a spreadsheet or a PDF, the vast pretrained knowledge within an LLM might unintentionally seep in, causing confusion or diluting the specificity required for accurate analysis of the curated data you have spent precious time and money to obtain. This contamination by external knowledge makes precise and dependable business insights difficult to achieve.

RAG solves these issues by imposing guardrails on the LLM’s retrieval process. When a prompt is sent to a RAG-enabled system, the LLM does not directly access its full breadth of pretrained knowledge. Instead, the system first consults a vector database, a structured, highly organized repository of pre-selected information relevant to your query. The database retrieves precise contextual snippets explicitly related to your request, and the LLM then generates its response exclusively from this retrieved context. This contrasts with the “pure generative” use of an LLM, where you offer no constraints. Under a RAG system, you are disintermediating that pure generative experience, making sure that you feed the LLM only the content it should be using. This provides some level of deterministic orchestration over the AI-generated output by putting strict controls on the inputs. To visualize this clearly, see the RAG approach below, which adds extra steps to the pure generative method and prevents the LLM from going rogue and mixing the “innie” world (that of your data) with the “outie” world (the vast pretrained data of the base LLM).
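The retrieve-then-generate flow can be sketched in a few lines. This is a deliberately toy illustration: the bag-of-words “embedding,” the sample chunks, and the function names are all invented for this sketch, and a production system would use a learned embedding model and a real vector database rather than an in-memory list.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-count vector. Real RAG systems
    # use learned dense embeddings from a trained model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # The "vector database" step: rank pre-selected chunks by similarity
    # to the query and keep only the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    # The guardrail: the LLM is instructed to answer ONLY from the
    # retrieved context, not from its pretrained ("outie") knowledge.
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return (
        "Answer using ONLY the context below. If the answer is not in "
        f"the context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical curated chunks standing in for your "innie" dataset.
chunks = [
    "Nexus v1 supports interview transcripts as a data source.",
    "Quarterly revenue grew 12% year over year.",
    "The onboarding flow requires a verified email address.",
]
prompt = build_prompt("What data sources does Nexus support?", chunks)
print(prompt)
```

The prompt that reaches the LLM contains only the retrieved snippets plus an instruction to stay inside them, which is the whole “innie” constraint in miniature.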

Innies vs. Outies: Advantages and Disadvantages
The “innie” method, synonymous with RAG in this post, involves strict control, precise targeting, and deliberate limitation. By contrast, the “outie” approach leverages broader, unrestricted interactions with traditional LLMs, drawing from extensive general knowledge. Both have distinct strengths and weaknesses, depending on the objectives.
When to Use the Innie (RAG) Approach:
- Specialized business research needing a high degree of accuracy: Industries such as pharmaceuticals, finance, or aerospace require pinpoint precision where inaccuracies are costly.
- Innovation scenarios testing entirely new hypotheses: When exploring entirely new products or markets, controlled environments prevent biases from existing data.
- Regulatory or compliance-driven contexts: Strictly regulated industries benefit from constrained systems to ensure absolute compliance.
- Customer insights research using specifically crafted personas: To yield actionable, precise insights based on detailed profiles without external noise.
- Internal data discovery: When employees tap into internal datasets to generate new documents, you want the LLM’s responses to be trusted to draw from internal data only.
- Customer service chatbots: When engaging with customers, you want LLM responses to reflect your policies accurately, and resist responding to questions on irrelevant or even controversial political topics that the base LLM would entertain under other circumstances.
The Harsh Lesson of Helly R.: When Guardrails Come Down
In Severance, Helly R., the innie persona of Helena Eagan, experiences a brutal awakening when her meticulously curated Lumon workplace environment collides violently with her external reality. The removal of guardrails—exposing Helly to the full spectrum of external information—leads to emotional turmoil and identity conflict.
Translating this cautionary tale into business, integration of vast, disparate datasets without careful consideration can similarly result in confusion and conflicting signals rather than clarity. Too much context, especially irrelevant or contradictory information, can lead to indecision, poor strategies, or would-be innovations destined for failure. Businesses must evaluate carefully when to maintain strict segmentation and when broader integration truly adds value.
When to Use the Outie (Non-RAG) Approach:
Since RAG systems work by restricting the LLM’s output to only the documents retrieved from a vector database, this constraint also creates limitations. If nothing relevant is retrieved, the LLM has nothing to draw from and returns little or no output; if the retrieved material is of low quality or off-topic, the result may be generic or incoherent; and if the underlying body of knowledge is incomplete, the system effectively narrows the LLM’s knowledge, limiting its ability to generate meaningful responses. Accordingly, if you want to explore topics broadly and bring a wide range of worldly information into your research, you will want to operate as an outie. While you may still use prompting and specify certain source files to reduce hallucinations, a RAG system may prove too limiting for your purpose.
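One common defense against the empty-retrieval failure mode is a similarity threshold: if no chunk scores above it, the system declines or falls back rather than handing the LLM irrelevant context. The sketch below assumes a toy bag-of-words similarity and an invented threshold value purely for illustration; real systems tune this against their own embedding model.

```python
import math
from collections import Counter

def similarity(a, b):
    # Toy bag-of-words cosine similarity; real systems use learned embeddings.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_or_decline(query, chunks, threshold=0.2):
    # If even the best match falls below the threshold, return None so the
    # caller can decline or fall back to an "outie" (non-RAG) path, instead
    # of generating from irrelevant context.
    best = max(chunks, key=lambda c: similarity(query, c))
    return best if similarity(query, best) >= threshold else None

chunks = ["Refund requests are processed within 5 business days."]
print(retrieve_or_decline("refund timeline?", chunks))   # on-topic: a chunk
print(retrieve_or_decline("Who won the World Cup?", chunks))  # off-topic: None
```

The threshold is a knob, not a cure: set it too high and on-topic questions get declined, too low and off-topic context leaks through.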
Consider the following examples of research that is best done with an outie approach:
- Exploratory research and ideation: Early stages benefit from broader exposure, triggering innovative connections and ideas, especially if exploring areas of innovation that are closer to your core business and the LLM’s pretrained knowledge is more relevant to the task at hand.
- Broad market and competitive analysis: Understanding comprehensive market dynamics often requires an expansive context to provide valuable strategic insights. Unless you have access to large, comprehensive datasets on market and competitor data (and some companies do), you are going to be limited in your analysis.
- Creative problem-solving and brainstorming sessions: Unrestricted insights can foster imaginative solutions and inspire unconventional thinking. For the sake of creativity, you may want to leverage the cross-pollination of concepts outside the narrow focus of your research to inject fresh ideas that often spring from the ingenious recombinations of disparate concepts.
The Case of Mark S.: When Integration Illuminates the Path
Severance’s main character, Mark S., demonstrates the value of integrating innie and outie worlds. Initially confined by Lumon’s constraints, Mark gains invaluable clarity and perspective once he reintegrates external knowledge, and in doing so understands deeper motives, characters, and broader realities that shape Lumon’s operations.
Similarly, in business, certain situations demand broad integration for comprehensive insights. For example, consumer behavior analysis or competitive market positioning often requires integrating diverse data sources to reveal critical patterns or opportunities not evident from a narrow view. Or, deep user insights could lead you to invest in a certain feature set, but without integrating information about industry shifts or upcoming regulations, you may not know whether those features will even be viable, or compliant, when they’re released.
Why RAG Is Particularly Powerful for Innovation Research
Innovation research involves exploring uncharted territories, inventing new concepts, and proposing novel markets. These endeavors deal with “unknown unknowns,” areas where existing data can be misleading or insufficient.
The RAG system is especially beneficial here, as it prevents irrelevant or biased data from contaminating the research process. It ensures clarity and specificity by curating an intentionally limited but highly relevant set of insights. This disciplined approach makes RAG ideally suited for generating precise, actionable insights in innovation research, where clarity often comes only from the most targeted of knowledge.
At ID8 Innovation, our Nexus platform incorporates a RAG architecture to guide research precisely because of its ability to balance rigor and creativity, using both controlled (innie) and comprehensive (outie) knowledge environments effectively. This strategic integration helps us generate accurate, relevant insights tailored exactly to the innovative problems we aim to solve. As noted above, this method doesn’t come without challenges, and we have employed several methods for incorporating data sources, including some limited pure generative LLM use, to supplement the custom primary research we conduct, typically in the form of customer and expert interviews. RAG can be a powerful device in the innovation researcher’s toolkit, but it must be used properly, employed in the right context, and sometimes combined with other methods to get the insights you need, not merely the ones handed to you by the LLM.