AI Context Farming

Building Self-Recursive Knowledge Pipelines for the Synthetic Mind, for 10x Output

17 min read

We treat AI like a brilliant consultant we fire and re-hire every ten minutes. We paste a snippet of code, a paragraph of text, or a meeting summary into a chat window, ask for insight, and then close the tab. Each time, the AI performs with the intelligence of someone who just walked into the room. Because that is exactly what happened. We treat AI like a search engine with better prose. You are, in effect, giving the ‘consultant’ amnesia between every conversation. This isn’t collaboration; it is a series of disjointed transactions with the most powerful reasoning engine ever built.

And yet we wonder why the outputs feel generic.

Every day, context flows through meetings, Teams/Slack threads, design reviews, field visits, stakeholder calls, and ad hoc hallway conversations. It arrives, informs a decision or two, and then evaporates.

I have come to think of this problem through a metaphor I find useful: context farming. Just as a farmer does not wait for rain but builds irrigation systems, channels, and reservoirs to capture and direct water where it is needed, working with AI requires building deliberate pipelines to capture, structure, and route the context that flows through your workstreams. The goal is to ensure that every meaningful piece of knowledge you encounter, whether it arrives in a Monday standup or a 47-page technical spec, gets converted into a knowledge artifact that an AI system can ingest, reason over, build upon, and connect to everything else it already knows about your work. Context farming is not documentation for compliance. It is the unremitting construction of a living knowledge substrate: a second brain, not for you, but for the AI systems that work alongside you.

The Dark Matter of Professional Work

Consider a typical project lifecycle. There are kickoff meetings where scope and vision are debated. Design reviews where trade-offs are weighed. Field visits where ground truth is observed. Stakeholder conversations where priorities quietly shift. Code reviews, budget discussions, retrospectives, and dozens of informal exchanges that shape how the project actually unfolds.

In most organisations, the overwhelming majority of this context lives in exactly one place: the heads of the people who were present. Meeting notes, when they exist at all, capture perhaps 5% of what was actually discussed. The reasoning behind a critical architectural choice (the options considered, the constraints weighed, the trade-offs accepted) vanishes entirely. Meetings end and the nuance dies.

This is the dark matter of the project—invisible to the outside observer and, crucially, invisible to your AI tools. It is the why behind every what. And without it, every AI interaction that lacks accumulated context is a cold start. And cold starts produce cold outputs.

We have built entire professional cultures around the systematic destruction of this context. Not out of carelessness, but because the cost of capturing, structuring, and retrieving it has historically been too high. AI changes this calculus entirely. The cost of transforming raw context into structured, retrievable knowledge has collapsed. What matters now is not the technology. It is the discipline of capture.

Yes, most LLM providers now offer memory across chats. But non-scoped memory is not useful context, in my opinion. I have had to turn it off on all my Claude and Gemini clients. What these systems remember are scattered fragments harvested from dozens of unrelated conversations—a cocktail of half-formed preferences, one-off requests, and stray facts stirred together without structure. That is not a knowledge base. That is a junk drawer. The context problem does not disappear because a model can recall that you once asked it to write a birthday message for your aunt. It persists because no amount of passive memory can substitute for the deliberate architecture of knowledge.

How Humans Actually Build Expertise

I keep returning to the human analogy because I think it is the most illuminating frame for understanding what we are trying to build. The parallels to AI context farming are underexplored.

Consider how a new employee becomes effective. On day one, they know almost nothing. They are handed an onboarding packet — the equivalent of a system prompt. It gives them the broad strokes, but it does not make them competent. Competence comes through a fundamentally different process: attending meetings, absorbing unwritten rules, getting CC'd on threads that reveal the texture of organizational politics, making mistakes, receiving feedback, updating mental models. Over time, the accumulated weight of these micro-interactions transforms them from an outsider into someone who "gets it."

We rarely think of this as data collection, but that is exactly what it is — osmotic context accumulation that builds a mental model rich enough to generate original insight no onboarding manual could teach. This mimics the human biological process of learning. We do not learn by accessing a static library; we learn by immersion. Our senses are always on, converting the raw noise of existence into the signal of experience. We need to build this sensory cortex for our digital counterparts: a living system that absorbs, structures, and compounds understanding the way human expertise actually forms, only without the lossy compression that memory imposes.

Now consider how we onboard AI: we paste some text into a prompt, maybe upload a document or two, and expect senior-level performance from a five-minute corridor introduction. The context farming pipeline closes this gap — replicating, in structured form, the full breadth of how human expertise actually accumulates: through meetings, documents, discussions, observation, and reflection.

What Is a Knowledge Artifact?

A knowledge artifact, as I use the term, is any discrete unit of structured context that can be consumed by an AI system and connected to other units. It could be a meeting transcript distilled into decisions, action items, and open questions. It could be a design rationale document explaining not just what a system does, but why it was built that way and what alternatives were rejected. It could be a field observation report, a stakeholder interview summary, a test plan with annotations about edge cases, or a product requirements document enriched with the verbal context that never made it into the original draft.

The key properties of a good knowledge artifact are: it is self-contained enough to be useful in isolation, connected enough to reference related artifacts, timestamped so its recency can be assessed, and annotated with enough meta-context that an AI system can understand not just what it says, but what role it plays in the larger body of knowledge. A meeting summary that says “we decided to use PostgreSQL” is mildly useful. A meeting summary that says “We decided to use PostgreSQL over DynamoDB because our query patterns are relational, our team has deeper expertise in SQL, and the cost projections at our current scale favor a managed RDS instance: this reverses the preliminary recommendation from the architecture review on Jan 15” is a knowledge artifact. The difference is the embedded reasoning, the connections, and the context that makes the information actionable beyond the moment it was captured.
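The properties above can be made concrete as a structured record. The sketch below is one illustrative shape, not a standard; the field names and the `KnowledgeArtifact` class are assumptions, and the example data restates the PostgreSQL decision from the text.

```python
# A minimal sketch of a knowledge artifact as a structured record.
# Field names are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeArtifact:
    source: str            # e.g. "meeting", "code-review", "field-visit"
    created: date          # timestamp, so recency can be assessed
    summary: str           # the distilled content, reasoning included
    decisions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)
    links: list[str] = field(default_factory=list)  # IDs of related artifacts

    def is_connected(self) -> bool:
        # An isolated artifact is useful; a linked one compounds.
        return len(self.links) > 0

# Example: the PostgreSQL decision, captured with its embedded reasoning.
artifact = KnowledgeArtifact(
    source="meeting",
    created=date(2025, 2, 3),
    summary=("Chose PostgreSQL over DynamoDB: relational query patterns, "
             "team SQL expertise, favorable RDS cost projections."),
    decisions=["Use PostgreSQL on a managed RDS instance"],
    open_questions=["Revisit if scale exceeds current cost projections"],
    links=["2025-01-15-architecture-review"],
)
```

The point of the structure is the last field: an artifact that carries explicit links to prior artifacts is what lets an AI trace why a recommendation was reversed.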

The Pipeline

Building a context farming pipeline is less about any single tool and more about establishing a discipline: a set of habits and systems that ensure context capture becomes as automatic and non-negotiable as locking your door when you leave the house.

Practically, this means establishing trigger points: moments in your workflow where context capture happens automatically. A meeting ends—that is a trigger to run the transcript through your artifact template. A code review is completed—that is a trigger to document the design decisions that emerged. A field visit concludes—that is a trigger to file structured observations. Each of these moments deposits a thin layer of context that becomes part of the persistent substrate. The pipeline is always growing, always deepening. Nothing of substance is allowed to evaporate.

Context capture should take minutes, not hours, especially when AI assists with the conversion. The routine should be simple enough that it requires no willpower to maintain.

Here is how I think about the pipeline, broken into its essential stages.

Stage 1: Capture Everything That Moves

The first principle is radical: nothing that generates context should go unrecorded. Every meeting gets transcribed. Every design discussion gets summarized. Every field visit produces a structured report. Every code review conversation gets distilled into a rationale artifact. This sounds exhausting, and it would be if you were doing it manually. But this is precisely where AI earns its first dividend. The transcription, the initial summarization, the structuring—these are tasks AI can handle today with minimal human oversight. Your job is not to write every artifact by hand. Your job is to ensure the raw material reaches the pipeline.

The practical shift here is attitudinal. You must begin to see yourself not just as a participant in meetings, discussions, and reviews, but as a curator of context. When you attend a meeting, you are not just listening for your action items. You are feeding the pipeline. When a colleague shares a document for your review, you are not just providing feedback; you are ingesting context that must be routed into the knowledge base.

Stage 2: Transform Raw Context into Structured Artifacts

After a meeting, don't just save the transcript. Run it through a template that forces out: what was decided, why it was decided, what was rejected, what's still open, and what it connects to from last week. Do the same for field visits, code reviews, design debates — each with its own template tuned to your domain. You're not prompting AI once and moving on. You're building a repeatable machine that turns every conversation into something AI can actually use next time. Over time, these templates become sophisticated, incorporating the specific vocabulary, priorities, and reasoning patterns of your domain.
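One way to make the template repeatable is to hard-code the extraction structure in a reusable prompt. The sketch below is a minimal example under assumptions: the section names and `build_extraction_prompt` helper are illustrative, and the prompt text would be tuned to your domain.

```python
# A sketch of a reusable extraction template applied after every meeting.
# The section names and prompt wording are assumptions; tune to your domain.
ARTIFACT_TEMPLATE = """\
You are converting a raw meeting transcript into a knowledge artifact.
From the transcript below, extract:
1. DECISIONS: what was decided, stated as complete sentences.
2. RATIONALE: why each decision was made, including rejected alternatives.
3. OPEN QUESTIONS: anything deferred or unresolved.
4. CONNECTIONS: references to prior meetings, documents, or decisions.

Transcript:
{transcript}
"""

def build_extraction_prompt(transcript: str) -> str:
    """Fill the template so the same structure is forced out every time."""
    return ARTIFACT_TEMPLATE.format(transcript=transcript)

prompt = build_extraction_prompt("Alice: let's go with Postgres over Dynamo...")
```

Because the structure lives in the template rather than in each ad hoc prompt, every transcript comes out in the same shape, which is what makes the resulting artifacts comparable and linkable later.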

Stage 3: Connect, Index, and Cross-Reference

Isolated artifacts are better than nothing, but the compounding magic happens when artifacts reference each other. When a new artifact is created, the AI should be prompted to identify connections to existing artifacts. “This decision in today’s meeting relates to the architecture review from January 15. It contradicts the preliminary recommendation but aligns with the updated cost analysis from February 3.” These cross-references transform a flat collection of documents into a knowledge graph—a web of interconnected understanding that mirrors the way expert knowledge actually works in a human mind.

This is where the pipeline begins to exhibit emergent intelligence. An AI with access to a well-connected knowledge base does not just retrieve information; it reasons across it. It can identify contradictions between decisions made at different times. It can surface assumptions that were valid when a decision was made but may no longer hold. It can trace the lineage of a design choice back through six months of discussions and flag when the original rationale has been undermined by subsequent developments.
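To show the shape of connection discovery, here is a deliberately simple, stdlib-only sketch: it suggests links between a new artifact and existing ones by keyword overlap. A real system would likely use embeddings; the threshold, helper names, and sample store below are all assumptions for illustration.

```python
# A stdlib-only sketch of connection discovery: suggest links between a new
# artifact and existing ones by Jaccard similarity over keyword sets.
import re

def keywords(text: str) -> set[str]:
    # Keep words of 4+ letters as a crude signal filter.
    return {w.lower() for w in re.findall(r"[A-Za-z]{4,}", text)}

def suggest_links(new_text: str, existing: dict[str, str],
                  threshold: float = 0.15) -> list[str]:
    """Return IDs of existing artifacts whose keyword overlap with the
    new artifact meets the threshold."""
    new_kw = keywords(new_text)
    hits = []
    for artifact_id, text in existing.items():
        kw = keywords(text)
        union = new_kw | kw
        if union and len(new_kw & kw) / len(union) >= threshold:
            hits.append(artifact_id)
    return hits

store = {
    "2025-01-15-arch-review": "Preliminary recommendation: DynamoDB for scale.",
    "2025-01-20-budget": "Quarterly budget review for marketing spend.",
}
links = suggest_links(
    "Decided on PostgreSQL over DynamoDB; reverses preliminary recommendation.",
    store,
)
# links → ["2025-01-15-arch-review"]: the budget artifact shares no keywords.
```

Even this crude matcher surfaces the January architecture review when the February decision is filed, which is exactly the cross-reference the paragraph above describes.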

Stage 4: The Recursive Loop—AI Outputs as Inputs

Here is where the concept becomes truly powerful: the AI’s own outputs must feed back into the knowledge base as new artifacts. When AI synthesizes a report from your knowledge base, that synthesis is itself a knowledge artifact. When AI identifies a contradiction between two decisions, that analysis becomes an artifact. When AI generates a draft proposal based on accumulated context, the draft and the reasoning behind it become artifacts.

This is the recursive loop that transforms a static knowledge base into a living, self-expanding intelligence system. Each cycle of the loop (human generates context, AI transforms it into artifacts, AI reasons over artifacts to produce new insights, those insights become new artifacts) compounds the total intelligence available to you. It is the same dynamic that makes compound interest so powerful in finance: the returns themselves generate further returns.

Think about what happens over three months of disciplined context farming on a software product. In month one, the AI knows your basic architecture and recent decisions. By month two, it understands the design philosophy, the political dynamics between stakeholders, the recurring patterns in your bugs, and the unstated assumptions behind your test strategy. By month three, it can draft a technical proposal that accounts for constraints you forgot you had, references decisions you made weeks ago, and anticipates objections from stakeholders based on their documented preferences. That is not magic. That is compound context.
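The recursive loop reduces to a small piece of plumbing: every synthesis the AI produces is filed back into the store before anything else happens. The sketch below assumes a plain dict as the store and a stand-in `ai_summarize` callable; both are placeholders for whatever persistence and model client you actually use.

```python
# A sketch of the recursive loop: any AI-generated synthesis is filed back
# into the store as a first-class artifact. `store` is a plain dict here;
# swap in whatever persistence layer you actually use.
from datetime import date

def file_artifact(store: dict[str, str], artifact_id: str, content: str) -> None:
    store[artifact_id] = content

def synthesize_and_refeed(store, ai_summarize, topic: str) -> str:
    """Run an AI synthesis over the store, then deposit the result back
    into the store so the next cycle starts richer than this one."""
    synthesis = ai_summarize(topic, store)
    new_id = f"{date.today().isoformat()}-synthesis-{topic}"
    file_artifact(store, new_id, synthesis)
    return new_id

# Stand-in for a real model call.
def fake_summarize(topic, store):
    return f"Synthesis of {len(store)} artifacts on {topic}."

store = {"a1": "kickoff notes", "a2": "design review rationale"}
new_id = synthesize_and_refeed(store, fake_summarize, "architecture")
# The store now holds three artifacts: two human-captured, one AI-generated.
```

The design choice worth noting is that the synthesis gets an ID and a date like any other artifact, so the next cycle can link to it, contradict it, or deprecate it.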

The Domain-Agnostic Nature of Context Farming

Although I have drawn many examples from software development and project management—because those are my native domains—the principles of context farming are remarkably domain-agnostic.

In healthcare, I have lived this principle before the language for it existed. Years ago, I architected Uganda's HIV Drug Resistance Database: a platform that captured each patient's full arc: psychosocial history, viral load trajectory, ART regimen history, and drug resistance test results. This was the structured substrate clinicians used to decide whether to switch a patient to a different treatment line or maintain their current regimen. My plan was to layer reinforcement learning on top — an AI that would learn alongside clinicians until its weights were strong enough to work the backlog independently, especially where experienced clinicians were scarce. The bottleneck was the unstructured social history: before LLMs, extracting clinical signal from free-text psychosocial notes would have demanded years of hand-coded rule-based systems and clinical NLP engines. Today, that bottleneck has collapsed. An LLM can ingest unstructured patient narratives, cross-reference them against validated past decisions, and feed structured context directly into a clinical decision support system — the same architecture that powers oncology treatment recommendation engines.

And the pipeline does not stop at the point of decision. Treatment guidelines evolve. Patients respond — or they don't. A patient who was switched to a second-line regimen six months ago now has new viral load data, new adherence patterns, new psychosocial notes. Each outcome feeds back into the substrate, refining what the system knows about which decisions worked, under what conditions, and for whom. This is the recursive loop in clinical form: every decision generates new context, every outcome sharpens the next recommendation, and the substrate grows richer with each cycle. That is context farming — not as metaphor, but as medicine.

In legal practice, every case discussion, precedent analysis, client conversation, and courtroom observation generates context that, once structured, allows an AI to develop an increasingly sophisticated understanding of a firm’s strategy, a client’s risk profile, or a judge’s tendencies.

In product management, every user interview, feature request, sprint retrospective, and competitive analysis is raw material for a knowledge base that can eventually tell you not just what your users want, but why they want it, how that has shifted over time, and where the gaps are between what you are building and what the market is moving toward.

In research and analysis, every literature review, every data set, every analytical framework, every reviewer comment—all of it feeds the pipeline. When AI assists with the next analysis, it is not starting from scratch. It carries forward the methodological decisions, the theoretical commitments, and the empirical patterns that have already been established. The recursion here is especially powerful: each analytical output refines the knowledge base that informs subsequent analyses.

The pattern is always the same: context arrives, gets captured, gets transformed into structured artifacts, gets connected to existing artifacts, and generates compounding intelligence through the recursive loop. The specifics vary by domain, but the architecture does not.

Practical Considerations and Honest Limitations

I would be dishonest if I presented context farming as frictionless. There are real challenges.

  • Signal-to-noise ratio is a genuine concern. Not everything captured is worth converting into a knowledge artifact. Part of the discipline is building filters—both automated and human—that distinguish between context that will compound and context that is merely noise. An offhand comment about lunch preferences does not need to be an artifact. A seemingly offhand comment about why the client rejected the previous vendor’s approach absolutely does.
  • Maintenance cost is non-trivial. Knowledge artifacts can become stale, contradictory, or misleading as circumstances change. The pipeline must include mechanisms for deprecation, versioning, and conflict resolution. An artifact that says “our deployment target is AWS” becomes actively harmful if the team has since migrated to Azure and no one updated the knowledge base.
  • Privacy and sensitivity require careful handling. Not all context should be captured, and not all captured context should be accessible to AI systems. Meeting conversations often contain confidential, personal, or politically sensitive information that requires thoughtful governance before it enters a knowledge pipeline.

These are solvable problems, but they require intentional design. The pipeline is not just a technical system; it is a sociotechnical system that requires cultural buy-in, governance structures, and continuous refinement.
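The maintenance mechanism can start very small. The sketch below is one hedged example of a deprecation check: flag any artifact whose capture date exceeds a maximum age so a human reviews it before the AI treats it as current. The 180-day window and the sample catalog entries are assumptions.

```python
# A sketch of a deprecation mechanism: flag artifacts older than a
# maximum age so they get reviewed before being treated as current.
from datetime import date, timedelta

def stale_artifacts(artifacts: dict[str, date],
                    today: date, max_age_days: int = 180) -> list[str]:
    """Return IDs of artifacts whose capture date exceeds max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [aid for aid, captured in artifacts.items() if captured < cutoff]

catalog = {
    "deploy-target-aws": date(2024, 3, 1),   # possibly obsolete claim
    "feb-cost-analysis": date(2025, 2, 3),   # recent, still trusted
}
flagged = stale_artifacts(catalog, today=date(2025, 3, 1))
# flagged → ["deploy-target-aws"]
```

Age is only a proxy for staleness, of course; a fuller version would also flag artifacts contradicted by newer ones, which the cross-referencing stage already makes detectable.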

The 10x Multiplier

The multiplier does not come from any single interaction with AI being ten times faster. It comes from the elimination of context reconstruction cost. Today, every time you engage AI on a complex task, you spend most of your effort rebuilding context: explaining the background, describing constraints, providing relevant history, correcting misunderstandings that arise from insufficient context. This reconstruction cost often exceeds the cost of the actual task.

With a well-farmed knowledge base, context reconstruction drops to near zero. You do not explain the background because the AI already has it. You do not describe constraints because they are documented in interconnected artifacts. You do not provide history because the full decision lineage is available. You simply say what you need, and the AI operates with the fluency of a senior team member who has been present for every meeting, read every document, and remembers everything perfectly.

That is not a marginal improvement. That is a qualitative shift in what becomes possible. Tasks that previously required extensive briefing and iteration become near-instantaneous. Analyses that would have taken days of context gathering happen in minutes. Proposals that would have required weeks of research and consultation emerge fully formed, because the knowledge base already contains the accumulated wisdom of months of captured context.

The question is not whether AI will transform professional work. It already has. The question is whether you are farming the context that makes the transformation real—or whether you are starting from zero every time you open a new chat window.

Getting Started: The Minimum Viable Pipeline

You do not need a sophisticated system to begin. The minimum viable context farming pipeline requires three things.

First, a capture habit. Start transcribing every meeting and saving every substantive discussion. Most of your tools already support this. Just turn it on.

Second, an artifact template. Design a simple, consistent structure for your knowledge artifacts. It does not need to be complex. A basic template might include: the source (meeting, discussion, field visit), the date, the participants, the key decisions or observations, the open questions, the connections to other artifacts, and a brief AI-generated summary. Use this template every time.

Third, a storage convention. Pick a place to keep your artifacts and a naming convention that makes them retrievable. This can be as simple as a well-organised folder of markdown files or as sophisticated as a vector database with semantic search capabilities or retrieval-augmented generation (RAG). Most AI clients now provide ‘Projects’; that is a good place too. What matters is that artifacts are stored with sufficient metadata to enable intelligent retrieval: dates, participants, topics, project associations, and explicit links to related artifacts. A folder per project, with artifacts named by date and type, is sufficient to start. As your practice matures, you can invest in more sophisticated indexing and retrieval systems.
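A markdown-folder convention like the one just described might look like this in practice. The date-type-slug filename and YAML-style frontmatter layout below are one illustrative choice among many, not a prescribed format.

```python
# A sketch of the storage convention: one markdown file per artifact,
# named date-type-slug, with metadata in a frontmatter header.
# The layout is an assumption; any consistent convention works.
from datetime import date
import re

def artifact_filename(captured: date, kind: str, title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{captured.isoformat()}-{kind}-{slug}.md"

def render_artifact(captured: date, kind: str, title: str,
                    participants: list[str], body: str) -> str:
    header = "\n".join([
        "---",
        f"date: {captured.isoformat()}",
        f"type: {kind}",
        f"participants: {', '.join(participants)}",
        "---",
    ])
    return f"{header}\n\n# {title}\n\n{body}\n"

name = artifact_filename(date(2025, 2, 3), "meeting", "DB choice: Postgres")
# name → "2025-02-03-meeting-db-choice-postgres.md"
doc = render_artifact(date(2025, 2, 3), "meeting", "DB choice: Postgres",
                      ["Alice", "Bob"], "Chose PostgreSQL over DynamoDB.")
```

The ISO date prefix means a plain directory listing already sorts chronologically, and the frontmatter gives any later indexing or RAG layer the metadata it needs without re-parsing the prose.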

What is missing in most organizations is the intentionality. The decision to treat context as a first-class resource. The commitment to building the pipeline before you desperately need it. The discipline to feed it consistently, even when the benefits are not yet visible. The magic is not in the tooling. It is in the discipline of never letting context evaporate. Every meeting you walk out of without an artifact is lost compounding. Every design discussion that is not captured is a withdrawal from your future AI’s effectiveness.

I believe we are at the very beginning of understanding what becomes possible when we stop treating AI as an on-demand oracle and start treating it as a system that requires the same kind of contextual investment we give to human colleagues. The organizations and individuals who internalize this shift, who build the pipelines, establish the routines, and commit to the discipline of context farming, will not merely keep pace with the AI revolution. They will be the ones setting the pace.

The context is the product. Farm it accordingly.
