Anapoly Notebook

A digital garden for ideas about shaping Human-AI interaction

digital garden: a place to share and develop ideas in a flexible, interconnected, and constantly updated space.
ideas develop from seed → growing → well-formed → fruitful → retired

Transparency label: human only


Maggie Appleton describes digital gardening as the nurturing of ideas, connecting them through contextual association rather than publication date. Beginning as seeds, the ideas grow and evolve through thinking. Some prove fruitful, others less so.

The first seed in this garden was planted during a philosophical exchange with Giles Freathy. We were talking about the temptation to outsource our thinking to AI, which risks eroding our critical thinking skills and, with them, the judgement and decision-making abilities that depend on them. We need a better understanding of the problem with Human-AI interaction. Perhaps we could set up a collaboration to study it?

But what is AI, and how do we interact with it? By AI, we usually mean large language models. Though not yet intelligent, they can seem so. This in itself is a concern. Cassie Kozyrkov explains why we should worry about AI psychosis in this Substack post.

“Some people reportedly believe their AI is God, or a fictional character, or fall in love with it to the point of absolute distraction. Meanwhile those actually working on the science of consciousness tell me they are inundated with queries from people asking ‘is my AI conscious?’ What does it mean if it is? Is it ok that I love it? The trickle of emails is turning into a flood. A group of scholars have even created a supportive guide for those falling into the trap.”

“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.”

The second seed in this garden arises from the need for an ethos of caring about how the Human-AI relationship evolves. We need to think carefully about how values, ethics, culture, and ethos fit together.

It's estimated that close to a billion people now use AI in one way or another. Most commonly, this is through a chat interface such as that offered by ChatGPT, which reportedly has nearly 500 million users. We interact with the "chatbot" through its context, which is much more than just a chat window. If we choose carefully what we put into the AI's context, we can turn ourselves from passive users of what it has to offer into active managers of its behaviour. Alejandro Piad Morffis suggests one way to do this when writing. Another approach is Goal-Directed Context Management, which applies instructions and prompts progressively, each set tightly focused on a specific stage of work. Both approaches can be adapted to suit almost any scenario. They enable us - if we so choose - to configure the AI so that it enhances our capacity to think, rather than letting it do our thinking for us. That is a seed worth nurturing.
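To make the staged idea concrete, here is a minimal sketch in Python. Everything in it - the stage names, the instructions, and the commented-out API call - is illustrative rather than a description of either method above; the point is only that each stage of work carries its own small, focused set of instructions.

    # A minimal sketch of goal-directed context management: each stage of
    # work gets its own tightly focused instructions, rather than one
    # sprawling prompt. Stage names and wording are illustrative.
    STAGES = {
        "capture": "Ask clarifying questions about the idea. Do not propose solutions.",
        "critique": "Point out gaps and weak logic in the draft. Do not rewrite it.",
        "polish": "Suggest edits for clarity and concision only.",
    }

    def build_context(stage: str, work: str) -> list[dict]:
        """Assemble only the messages this stage of work needs."""
        return [
            {"role": "system", "content": STAGES[stage]},
            {"role": "user", "content": work},
        ]

    # Hypothetical call to a chat-completion client; substitute your own.
    # reply = client.chat.completions.create(
    #     model="...", messages=build_context("critique", draft))

Nothing from an earlier stage leaks into a later one unless we choose to carry it forward; that is what keeps each exchange focused.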

But as well as managing the AI's behaviour, we need to manage our own habits of thought. Knowledge workers, in particular, need to capture, learn from, enrich, and create ideas. We are encouraged to use a personal knowledge management system, sometimes called a "second brain", to help us with this. I had been experimenting with Obsidian for that purpose, but I've come round to the way Joan Westenberg thought about it when she decided to delete her second brain:

"I don’t want to manage knowledge. I want to live it. I still love Obsidian. And I’m planning on using it again. From scratch. And with a deeper level of curation and care - not as a second brain, but as a workspace for the one I already have."

I, too, think that we need a workspace for the brain, and that is the third seed in this digital garden. The workspace is an idea factory. But rather than an assembly line, we can think of it as a network of interconnected ideas at varying levels of maturity and complexity. It must let us bring in other people's thinking, helping us capture ideas and add them to our existing network of knowledge. The ideas must be discoverable, yet retain enough context for each to be understandable. We must be able to draw on and remix them in order to create new ideas and original thoughts; and it must be easy for us to share that thinking. Above all, our workspace for the brain must suit the way we think; it should be structured and organised - tailored - for each of us individually.

There is scope for AI technology to enhance our thinking in that workspace without intruding on it. For example, ideas held in a knowledge network within Obsidian can be embedded into a vector database, roughly similar to how an AI organises knowledge. That enables the similarities amongst widely dispersed ideas - the semantic connections - to be highlighted for us to think about. It also lets us search the network not just for the presence of words, but for ideas whose meaning is close to that of our search term.
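As a rough sketch of how that search might work, assuming the sentence-transformers library (the model choice and the notes themselves are illustrative):

    # A minimal sketch of semantic search over notes, assuming the
    # sentence-transformers library; model and notes are illustrative.
    from sentence_transformers import SentenceTransformer, util

    notes = [
        "Outsourcing thinking to AI erodes critical judgement.",
        "A digital garden connects ideas by association, not date.",
        "Context management turns users into managers of AI behaviour.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    note_vectors = model.encode(notes, convert_to_tensor=True)

    # The query shares no keywords with the first note, yet should rank near it.
    query = model.encode("losing the ability to reason for ourselves",
                         convert_to_tensor=True)
    scores = util.cos_sim(query, note_vectors)[0]

    for note, score in sorted(zip(notes, scores.tolist()), key=lambda p: -p[1]):
        print(f"{score:.2f}  {note}")

The query shares no vocabulary with the first note, yet the embeddings place them close together; that is the kind of semantic connection being surfaced.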

We can take this a step further by inviting an AI into our thinking space. If we connect the AI to the vector database mentioned above, we will be able to use it in all sorts of interesting ways to explore and develop our thinking. The danger here, of course, is the temptation to let the AI do our thinking for us. This is where context management can help. We will have to configure the AI's behaviour to reduce that danger. We can give it freedom to find things in the network, to point us in useful directions, to critique our ideas, to highlight interesting relationships amongst them, and to question the logic of our arguments. Equally, we can instruct it not to offer its own ideas or solutions to the problems we are thinking about. We must configure it to guide us up the knowledge mountain, not carry us to the top.
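What might such a configuration look like? A sketch, with wording that is illustrative rather than a tested prompt:

    # Behavioural guardrails for an AI invited into the thinking workspace.
    # The wording is illustrative, not a tested prompt; in practice it would
    # be refined and passed as the system message at the start of each session.
    THINKING_PARTNER = """
    You are a thinking partner inside a personal knowledge network.
    You may: retrieve related notes, highlight connections between ideas,
    critique arguments, and question the logic of claims.
    You may not: offer your own ideas or solutions, or draw conclusions
    on the user's behalf.
    When asked for an answer, reply with the questions the user should
    consider in order to reach their own.
    """.strip()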

It remains to be seen if that vision can be put into practice. It will require an ethos of caring about how the Human-AI relationship evolves.


Seeds

What is the problem with Human-AI interaction?
Ethos of caring about the Human-AI relationship
Managing AI behaviour
Managing our own habits of thought