Ambient AI

Making predictions is always problematic – especially in times of huge technosocial change and disruption. We are living through one now (though it may not seem so apparent in day-to-day life), and we need to identify its important aspects and how they will evolve over the next decade. There will be huge, transformational impacts over the immediate future and, especially, over our children’s lifetimes.

The principal emergent technology to understand is AI – nothing else comes close, because it is so all-encompassing. Large language models are springing up everywhere, with surprising competencies, many of them open-source [1][2][3] and suitable for training on domain-specific knowledge [4] as well as general tasks.

Interesting approaches in prompt engineering (PE), chain-of-thought (CoT) reasoning, reflexion [5] and, fascinatingly, ‘role-playing’ [6] using LLMs also seem to be improving benchmark performance, in concert with reinforcement learning from human feedback (RLHF) [7].
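As a minimal sketch of what two of these techniques amount to in practice (the template wording and function names here are illustrative, not taken from any particular library), chain-of-thought and role-playing both work by wrapping the user’s question in extra scaffolding before it is sent to a model:

```python
def cot_prompt(question: str) -> str:
    """Chain-of-thought: ask the model to reason step by step
    before committing to an answer."""
    return (
        "Answer the question below. Think step by step, "
        "then give the final answer on its own line.\n\n"
        f"Question: {question}\nReasoning:"
    )

def role_play_prompt(role: str, task: str) -> str:
    """Role-playing: assign the model a persona, which often
    shifts the style and care of its responses."""
    return f"You are {role}.\n\n{task}"

print(cot_prompt("A train covers 60 km in 40 minutes. What is its speed in km/h?"))
print(role_play_prompt("a careful peer reviewer", "Critique the claim above."))
```

Reflexion [5] extends this pattern into a loop: the model’s own critique of a failed attempt is fed back into the next prompt.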

Considering these emergent capabilities in the context of the current AI arms race, the issue of human–AI alignment [8][9] is of crucial regulatory importance:

Ultimately, to figure out what we really need to worry about, we need better AI literacy among the general public and especially policy-makers. We need better transparency on how these large AI systems work, how they are trained, and how they are evaluated. We need independent evaluation, rather than relying on the unreproducible, “just trust us” results in technical reports from companies that profit from these technologies. We need new approaches to the scientific understanding of such models and government support for such research.

Indeed, as some have argued, we need a “Manhattan Project of intense research” on AI’s abilities, limitations, trustworthiness, and interpretability, where the investigation and results are open to anyone. [9]

Placing this in the context of existential threat, it is well worth absorbing the interview with Geoffrey Hinton in its entirety.

Monoliths vs IoT

Although the above concerns sound like science fiction, they are not, and the consequence is that anyone working with AI development (which basically means anyone who interacts with AI systems) must situate themselves within an ethical discourse about consequences that may arise from the use of this technology.

Of course, we have all been doing this for many years through social media and recommender systems – Amazon, Facebook, VK, Weibo, Pinterest, Etsy – and Google, Microsoft, Apple, Netflix, Tesla, Uber, AirBnB etc. – and the millions of data-mining subsidiary industries that have built up around these: subsidiary re-brands, data-farms, click-farms, bots, credit agencies, an endless web of information with trillions of connections.

In reference to Derrida, I might whimsically call this ‘n-Grammatology’ – given that the pursuit of n-grams has brought us to this point for the ambiguous machines [10]: a point where the ostensive factivity of science meets the ambiguous epistemology and hermeneutics of embeddings in a vector space – the ‘black box’.

What we know is that AI is a ‘black box’ and that our minds are a ‘black box’, but we have little idea of how similar those ignorances are. They will perhaps be defined by counter-factuals, by what they are not.


One of the mythologies surrounding AI that is hard to avoid is that it occurs ‘somewhere else’, on giant machines run by megacorporations or imaginary aliens.

However, as the interview with Hinton above indicates, what has been achieved is an incredible level of compression: a 1-trillion-parameter LLM is about 1 terabyte in size.
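The back-of-envelope arithmetic is simple: size ≈ parameter count × bytes per parameter. A trillion parameters stored at one byte per weight (8-bit quantisation – an assumption here, since the source does not specify the precision) comes to roughly a terabyte:

```python
def model_size_tb(n_params: int, bytes_per_param: int) -> float:
    """Approximate weight-storage size in terabytes (1 TB = 1e12 bytes)."""
    return n_params * bytes_per_param / 1e12

n = 10**12  # one trillion parameters
print(model_size_tb(n, 1))  # 8-bit quantised weights: 1.0 TB
print(model_size_tb(n, 2))  # 16-bit floats:           2.0 TB
```

At 16-bit precision the same model doubles to about 2 TB, which is why quantisation matters so much for fitting models onto edge devices.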

What this seems to imply is that such a kernel will easily fit onto mobile, edge-compute and IoT devices in the near future (e.g. the Jetson Nano), and that these devices will probably be able to run independent multimodal AIs.

“AI” is essentially a kind of substrate-independent non-human intelligence, intrinsically capable of global reproduction across billions of devices. It is hard to see how it will not proliferate (with human assistance, initially) into this vast range of technical devices and become universally distributed, rather than existing solely as a service delivered online via APIs controlled by corporations and governments.

AI ‘Society’

The future of AI is not some kind of Colossus, but rather a global community of ambient interacting agents – a society. Like any society it will be complex, political and ideological – and it will throw parties.

Exactly how humans fit into this picture will require careful consideration. Whether the existential risks come to pass is, by definition, out of the control of most people. We will essentially be witnesses to the process, with very little opportunity to affect its direction amid competition between state and corporate actors.

The moment when a human-level AGI emerges will be a singular historic rupture. It seems only a matter of time, an alarmingly short one.

In the next post I will set aside these speculative concerns and detail some of the steps we have taken towards developing a system that incorporates AI, ambient XR and Earth observation. My hope is that this will make some small contribution to a useful and ethical application of the technology.



[1] “Open Assistant.” Accessed May 9, 2023.

[2] “LLMs (LLMs),” May 6, 2023.

[3] “Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs.” Accessed May 9, 2023.

[4] philschmid blog. “How to Scale LLM Workloads to 20B+ with Amazon SageMaker Using Hugging Face and PyTorch FSDP,” May 2, 2023.

[5] Shinn, Noah, Beck Labash, and Ashwin Gopinath. “Reflexion: An Autonomous Agent with Dynamic Memory and Self-Reflection.” arXiv, March 20, 2023.

[6] Drexler, Eric. “Role Architectures: Applying LLMs to Consequential Tasks.” Accessed May 9, 2023.

[7] “Reinforcement Learning from Human Feedback.” In Wikipedia, March 30, 2023.

[8] Bengio, Yoshua. “Slowing down Development of AI Systems Passing the Turing Test.” Yoshua Bengio (blog), April 5, 2023.

[9] Mitchell, Melanie. “Thoughts on a Crazy Week in AI News.” Substack newsletter. AI: A Guide for Thinking Humans (blog), April 4, 2023.

[10] Singh, V. “Ambiguity Machines: An Examination,” 2015. Accessed April 26, 2023.