Anapoly Notebook

A digital garden for ideas about shaping Human-AI interaction

digital garden: a place to share and develop ideas in a flexible, interconnected, and constantly updated space.
ideas develop from seed → growing → well-formed → fruitful → retired

Transparency label: human only


Maggie Appleton describes digital gardening as the nurturing of ideas, connecting them through contextual association rather than publication date. Beginning as seeds, the ideas grow and evolve through thinking. Some prove fruitful, others less so.

The first seed in this garden was planted during a philosophical exchange with Giles Freathy. We were talking about the danger of outsourcing our thinking to AI, and he commented that there needs to be an ethos of caring about how the Human-AI relationship evolves. That led to the question of how values, ethics, culture, and ethos fit together.

A second seed arises from asking "What is the problem with Human-AI interaction?". A collaboration to study that question is worth exploring.

But what is AI, and how do we interact with it? By AI, we usually mean large language models. Though not yet intelligent, they can seem so. It is estimated that close to a billion people now use AI in one way or another, most commonly through a chat interface such as that offered by ChatGPT, which reportedly has nearly 500 million users.

We interact with the "chatbot" through its context. This consists of much more than a chat window, and it makes it possible for us not just to use the AI passively, but to actively manage its behaviour. Goal-Directed Context Management explains how we can do that when the AI is being used as a collaborator to produce a report or other knowledge-based product. But the approach can be adapted to suit almost any scenario. It enables us - if we so choose - to configure the AI so that it enhances our capacity to think, rather than letting it do our thinking for us. That is a seed worth nurturing.

As well as managing the AI’s behaviour, we need to manage our own. Knowledge workers, in particular, each need a system of some kind to help them capture, learn from, enrich, and create ideas: a personal knowledge management system, or "second brain". It need not be digital, but in a digital world it makes sense for it to be so. I have chosen to use Obsidian for my second brain.

The purpose of a second brain is to help us think creatively. It is an idea factory. It should be structured and organised accordingly, and tailored individually to suit the way we process ideas. AI must not be allowed to intrude on the thinking process. There is scope, however, for AI technology to enhance the process without intruding. For example, the ideas in my second brain can be embedded in a vector database, similar to the way an AI’s knowledge is stored. That enables the similarities amongst widely dispersed ideas - the semantic connections - to be highlighted and to stimulate creative thinking. This can be done entirely on my local computer with no online involvement, thus preserving privacy.
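A minimal sketch of that idea, using a toy bag-of-words vector in place of a real embedding model (the note texts and the similarity function here are my own illustration, not a description of any particular tool):

```python
import itertools
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real second-brain
    # setup would use a local embedding model; this stand-in runs entirely
    # offline and just illustrates the principle.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# A handful of notes standing in for a second brain.
notes = {
    "ethos": "an ethos of caring about how the human ai relationship evolves",
    "context": "managing the ai and its behaviour through its context",
    "garden": "a digital garden where ideas grow from seeds",
}

# Rank note pairs by similarity to surface semantic connections
# amongst widely dispersed ideas.
pairs = sorted(
    (
        (cosine(embed(ta), embed(tb)), na, nb)
        for (na, ta), (nb, tb) in itertools.combinations(notes.items(), 2)
    ),
    reverse=True,
)
for score, a, b in pairs:
    print(f"{a} <-> {b}: similarity {score:.2f}")
```

The same ranking, computed over real embeddings, is what lets semantically related notes surface next to each other without any manual linking.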

I can take this a step further by installing an AI locally. A large language model whose capability could only be dreamed of three years ago can now run on a high-end laptop. If I connect this AI to my second brain, I will be able to use it in all sorts of interesting ways to explore and develop my ideas. This has the potential to boost my creativity greatly. Again, no internet connection is needed, so privacy is preserved.
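What connecting a local model to the second brain might look like, assuming an Ollama-style HTTP API served on the same machine; the model name and request fields are illustrative assumptions, and no request is actually sent here:

```python
import json

def build_request(question: str, retrieved_notes: list[str]) -> str:
    # Stitch notes retrieved from the second brain into the model's context,
    # so the local AI responds with reference to my own ideas rather than
    # only its training data.
    context = "\n".join(f"- {note}" for note in retrieved_notes)
    prompt = (
        "Here are relevant notes from my second brain:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )
    # "local-model" and the field names follow an Ollama-style API and are
    # placeholders, not a reference to any specific installation.
    return json.dumps({"model": "local-model", "prompt": prompt, "stream": False})

request_body = build_request(
    "How do ethos and culture relate?",
    [
        "Ethos of caring about how the Human-AI relationship evolves",
        "Culture is the shared expression of values",
    ],
)
print(request_body)
```

Because the endpoint is on localhost, the notes in the prompt never leave the machine.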

A danger from letting AI into my second brain is that I will begin to let it do my thinking for me. This is where context management comes into play. I will have to configure the AI's behaviour to reduce that danger. I can give it freedom to find things in my second brain, to point me in useful directions, to critique my ideas, to highlight interesting relationships amongst them, and to question the logic of my arguments. Equally, I can instruct it not to offer its own ideas or solutions to the problems I am thinking about. I must configure it to guide me up the knowledge mountain, not carry me to the top. It will be interesting to see how well I can succeed with this. It will require an ethos of caring about how the Human-AI relationship evolves.
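The behavioural constraints above might be compiled into a system prompt along these lines; the rule lists are my own paraphrase of the freedoms and restrictions just described, not a tested configuration:

```python
# What the assistant may do with my second brain.
ALLOWED = [
    "find and surface relevant notes",
    "point me in useful directions",
    "critique my ideas",
    "highlight interesting relationships amongst my notes",
    "question the logic of my arguments",
]

# What it must not do: the thinking stays mine.
FORBIDDEN = [
    "offer your own ideas",
    "offer solutions to the problems I am thinking about",
]

def system_prompt() -> str:
    # Compile the constraints into a single system prompt: guide, don't carry.
    allowed = "\n".join(f"- You may {rule}." for rule in ALLOWED)
    forbidden = "\n".join(f"- You must not {rule}." for rule in FORBIDDEN)
    return (
        "You are a thinking companion for my second brain.\n"
        f"{allowed}\n{forbidden}\n"
        "Guide me up the knowledge mountain; do not carry me to the top."
    )

print(system_prompt())
```

How well such a configuration actually restrains the model is exactly the open question this seed sets out to explore.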


Seeds

Ethos of caring about how the Human-AI relationship evolves
What is the problem with Human-AI interaction?
Managing AI behaviour
Managing our own behaviour