Anapoly Notebook | Digital Garden

Status: 🔸 Seed → ✅ Growing → 🔸 Well-formed → 🔸 Fruitful → 🔸 Retired

Transparency label: Human-authored, with AI input limited to suggestions, edits, or fact-checking.


An AI-embedded Business

Introduction

For this essay, an AI-embedded business is defined as one in which AI systems are deeply integrated into its operations. Because of the way they interact with members of the business team, these systems can seem to behave as if they, too, were team members. But they remain instruments of human intention, and the aim is to understand how AI can be made to operate dependably within human-defined structures of control, trust, and responsibility. We will explore the methods that make such operation durable: contextual scaffolding to define the AI’s working environment, context engineering to manage that environment as part of the business infrastructure, and the disciplined management of knowledge.

This is very much a working paper, not a finished document. The intention is to move step by step:

  1. to clarify what embedding AI actually involves;

  2. to establish the disciplines that keep it reliable and auditable; and

  3. to model how an AI-embedded organisation creates and delivers value.

How do we control an AI?

To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, reference material, and prompts. We put this information into the AI’s context to shape how it behaves.

Contextual control, the method of controlling the AI by putting information into its context, is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer is deterministic and executes the instructions in its program precisely, whereas an AI interprets the information we give it. We should therefore aim to give the AI all the information it needs to stay focused. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.
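
To make this concrete, the sketch below assembles standing instructions, reference material, and a task prompt into the ordered context handed to a chat model. The file names, instruction text, and the build_context helper are illustrative assumptions, not part of any prescribed setup; the role/content message shape mirrors common chat APIs without being specific to one provider.

```python
# A minimal sketch of contextual control: we shape the AI's behaviour by
# deciding what goes into its context, not by changing the model itself.
# File names and instruction text are illustrative assumptions.

from pathlib import Path


def build_context(instructions: str, reference_files: list[Path], prompt: str) -> list[dict]:
    """Assemble instructions, reference material, and the task prompt
    into a single ordered context for a chat model."""
    reference = "\n\n".join(p.read_text() for p in reference_files)
    return [
        {"role": "system", "content": instructions},                       # standing instructions
        {"role": "user", "content": f"Reference material:\n{reference}"},  # background the AI should use
        {"role": "user", "content": prompt},                               # the specific task
    ]


messages = build_context(
    instructions="You draft client correspondence in the agreed house style. Be concise and factual.",
    reference_files=[Path("house_style.md"), Path("client_brief.md")],
    prompt="Draft a follow-up email summarising yesterday's scoping call.",
)
# `messages` is then passed to whichever chat-completion API the business uses.
```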

There are parallels with the way we guide a human team member. A person draws on stored knowledge, follows instructions, and interprets information in light of prior experience. An AI does something broadly similar: it draws on the knowledge acquired during training and interprets the information in its context to generate a response. The difference lies not in the act of interpretation itself, but in its foundation: humans rely on lived experience, while AI relies on patterns in data. In practice, however, both can participate in dialogue where meaning and intent emerge through ongoing exchange.

The distinction between instructional control and contextual control is important. Traditional computing is about explicit commands; AI control is about constraint and alignment through shared context. “Control” here really means steering, not command and control. We must acknowledge that an AI’s interpretation of its instructions introduces uncertainty and a risk of misalignment. Feedback is essential: testing and review provide the same corrective loop that supervision gives human staff.

How do we embed AI?

When we adopt AI in a business, its characteristics make it more than a passive software tool. Embedding means integrating AI systems so closely into day-to-day operations that they take on defined operational roles within human workflows. These roles are designed, supervised, and continually adjusted by people.

Embedding is therefore a process of mutual adaptation. The business reshapes its routines to make best use of AI capabilities, while AI configurations are refined to fit the business’s evolving needs. The result is a working environment where human and machine contributions are interdependent: people provide purpose, judgment, and oversight; AI systems provide speed, consistency, and analytical reach.

To be “embedded” in this sense does not mean giving AI autonomy. It means building the capacity to use it systematically, predictably, and accountably, so that its outputs become part of the organisation’s trusted workflow rather than isolated experiments or ad-hoc tasks.

Contextual Scaffolding

For an AI system to perform usefully, humans must define and maintain the informational environment in which it operates. We might call this operational workspace the AI's contextual scaffolding. It shapes the AI's behaviour, and its key elements are the instructions, reference material, and data placed in the AI's context.

Different business functions require different scaffolding configurations. In practice, this means preparing and maintaining context packages: structured sets of instructions, reference material, and data that define how the AI should operate within a given process or workflow. These packages are created and maintained through the discipline of context engineering described in the next section.

In an AI-embedded business, contextual scaffolding is not a one-off setup but a managed artefact. It changes with the nature of the work and must be maintained alongside business processes, policies, and data assets. By treating contextual scaffolding as part of the operational infrastructure, a business can ensure that AI contributions remain aligned with objectives and that every output is traceable to a defined configuration.
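
As an illustration of what “a defined configuration” might look like in practice, the sketch below models a context package as a small, versioned record. The field names and the client-support example are assumptions made for illustration; the essay specifies only that a package bundles instructions, reference material, and data for a given process or workflow.

```python
# A sketch of a context package treated as a managed, versioned artefact.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ContextPackage:
    name: str                                                  # the business process this package supports
    version: str                                               # bumped whenever the package is revised
    instructions: str                                          # how the AI should behave in this role
    reference_paths: list[str] = field(default_factory=list)   # policies, style guides, data extracts
    owner: str = ""                                            # the person accountable for keeping it current


client_support = ContextPackage(
    name="client-support",
    version="2025-03",
    instructions="Answer support queries using only the referenced policy documents.",
    reference_paths=["policies/refunds.md", "policies/escalation.md"],
    owner="operations",
)
```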

Temporary and Persistent Scaffolding

Contextual scaffolding can be either temporary or persistent, depending on the nature of the work it supports.

Temporary scaffolding is created for a defined goal, typically a project or experiment. It evolves with the work, is adjusted as tasks progress, and is retired when the project ends. Its purpose is to manage change.

Persistent scaffolding supports repeatable business functions such as client support, billing, or recruitment. It provides a stable, reusable environment that keeps AI systems aligned with established processes over time. Its purpose is to maintain stability.

Together, these two forms of scaffolding give the business both adaptability and consistency. Temporary scaffolding enables exploration and innovation; persistent scaffolding ensures dependable day-to-day performance. Both are designed, version-controlled, and maintained through context engineering.
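
One way to make the distinction operational is simply to record, for each piece of scaffolding, which kind it is and what happens to it when a project closes. The sketch below is a minimal illustration; the lifecycle labels and the close_project rule are assumptions rather than a prescribed design.

```python
# A minimal sketch of the temporary/persistent distinction recorded against
# each piece of scaffolding. Labels and the retirement rule are assumptions.

from dataclasses import dataclass
from enum import Enum


class Lifecycle(Enum):
    TEMPORARY = "temporary"    # tied to a project or experiment
    PERSISTENT = "persistent"  # supports a repeatable business function


@dataclass
class Scaffolding:
    name: str
    lifecycle: Lifecycle
    retired: bool = False


def close_project(items: list[Scaffolding]) -> None:
    """Retire temporary scaffolding when its project ends; persistent
    scaffolding stays in service and continues to be maintained."""
    for item in items:
        if item.lifecycle is Lifecycle.TEMPORARY:
            item.retired = True
```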

Context Engineering

Context engineering is the discipline that builds, maintains, and governs the contextual scaffolding that enables AI to perform dependably within the business’s operational systems. Where contextual scaffolding defines what the AI needs to know to play its part, context engineering defines how that knowledge is built, stored, and refreshed, making it part of the organisation’s operational fabric so that the AI stays aligned as the business and its work evolve.

The discipline gives structure and traceability to AI use. It ensures that the information shaping AI behaviour is versioned, reviewable, and reusable. It also provides a way to scale: once a sound configuration is in place, it can be adapted for other roles or projects without starting again.

In this sense, context engineering is the operational technology of embedding. It keeps human and AI in the same operational framework, and it allows that framework to be maintained as both people and AIs change over time.

How do we engineer context?

Why context matters

AI systems don’t understand a business simply because they have access to its data. They perform well only when the right information is presented in the right form. Poorly prepared information leads to context rot, the gradual loss of relevance and accuracy as the AI’s “understanding” drifts away from current reality. Context engineering prevents that drift. It is the discipline of preparing and maintaining the information environment that allows AI to think and act like a well-briefed team member. In practice, it combines knowledge management, process design, and lightweight governance to keep the AI’s world coherent.

What context is made of

Every task depends on two kinds of context: persistent context, the standing information that carries over from one task to the next, and current context, the information specific to the task in hand.

Persistent context provides continuity and alignment; current context gives focus and relevance. The art of context engineering lies in balancing the two, keeping the AI grounded in long-term purpose while tuned to the specifics of the moment.
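
A rough sketch of that balancing act follows: persistent context is always included, and current context is trimmed to whatever room remains. The characters-per-token estimate and the budget figure are placeholder assumptions, not recommendations.

```python
# A sketch of balancing persistent and current context under a fixed budget.
# The token estimate and budget figure are illustrative assumptions.

def assemble(persistent: list[str], current: list[str], budget_tokens: int = 8000) -> str:
    """Keep the persistent baseline intact and add current items (most
    important first) until the budget is reached."""
    def tokens(text: str) -> int:
        return max(1, len(text) // 4)  # crude proxy for a real tokenizer

    used = sum(tokens(t) for t in persistent)
    kept: list[str] = []
    for item in current:
        if used + tokens(item) > budget_tokens:
            break
        kept.append(item)
        used += tokens(item)
    return "\n\n".join(persistent + kept)
```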

How context is built and maintained

Context engineering turns business knowledge into structured, reusable form. It draws on several key technical practices.

Structured formats use Markdown, JSON, or similar machine-readable layouts to keep information clear and accessible.

A chunking strategy divides material into small, self-contained units (typically 100–500 tokens) that each express a single idea or process.

A knowledge architecture helps maintain a consistent hierarchy of files and references so that context can scale from overview to detail.

Context packages, predefined sets of persistent and current information, can be loaded or refreshed for a recurring task.

Versioning and traceability keep track of which context packages, prompts, or reference sets produced particular outputs.

These techniques make context engineering repeatable. They allow us to update one element without corrupting the whole system and to reuse proven configurations across different projects or roles. Context engineering maintains both persistent and temporary scaffolding, ensuring that project-based work and routine operations share the same disciplined framework.
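
To make two of these practices concrete, the sketch below pairs a simple chunking pass with a traceability fingerprint. The paragraph-splitting heuristic, the rough characters-per-token estimate, and the hashing scheme are illustrative assumptions rather than a prescribed implementation.

```python
# A sketch of a chunking pass and a traceability record. The heuristics here
# are illustrative assumptions, not a prescribed implementation.

import hashlib


def chunk(text: str, max_tokens: int = 400) -> list[str]:
    """Split material at paragraph boundaries into self-contained chunks of
    roughly 100-500 tokens, so each chunk carries a single idea or process."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        if current and (len(current) + len(para)) // 4 > max_tokens:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks


def fingerprint(chunks: list[str]) -> str:
    """Hash the exact context that was supplied, so any output can later be
    traced back to the configuration that produced it."""
    digest = hashlib.sha256("\n".join(chunks).encode("utf-8"))
    return digest.hexdigest()[:12]
```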

Simply adding more context does not guarantee better performance, however. In fact, it can introduce significant challenges.

The deeper issue is not just the volume of context, but how it is managed. Large Language Models lack inherent mechanisms for selective attention or true contextual reasoning. Without careful structuring (such as prioritising, filtering, or summarising information) they treat all input as equally important, diluting the value of the context provided.

For businesses, this underscores the need for a disciplined approach to curating, maintaining, and delivering information in ways that align with the AI’s strengths and the organisation’s goals.
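
As one hedged illustration of that curation step: rather than handing the model everything, each chunk can be scored against the task and only the most relevant kept. The keyword-overlap score below is a deliberately crude stand-in; in practice a business might use embeddings or a retrieval service instead.

```python
# A sketch of relevance filtering before context is delivered to the AI.
# The keyword-overlap score is a crude illustrative stand-in.

def select_relevant(chunks: list[str], task: str, limit: int = 5) -> list[str]:
    """Rank chunks by how many task words they share, keeping the top few
    so the context stays focused rather than diluted."""
    task_words = set(task.lower().split())

    def score(chunk: str) -> int:
        return len(task_words & set(chunk.lower().split()))

    return sorted(chunks, key=score, reverse=True)[:limit]
```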

The business value

When done well, context engineering turns scattered information into an operational asset. It ensures that every AI instance works from the same baseline as its human counterparts, and that this understanding can be checked, updated, and improved. It keeps AI performance explainable, traceable, and accountable to human intention: the practical test of an AI-embedded organisation.