I created this “Investigations in Mind” series to think through philosophical (and thus ethical) questions around consciousness raised by my work with LLMs and autonomous agents. As you may have noticed, I haven’t added to the series for the past year. This was for a few reasons.
Agentic Work with LLMs is tricky
At Microsoft, the small research engineering team I am a part of has continued to work on applications of LLMs and agents. Sam Schillace writes a weekly article that revolves around many of our explorations. Most of our work has been open sourced, and you can see how far we’ve come in our Semantic Workbench repository. You’ll find all sorts of chat-based assistants with all kinds of functionality. We’ve explored scenarios around knowledge transfer with assistants, large context/knowledge-base management, various forms of synthetic memory and notetaking, team collaboration, tool calling, and a heap of work toward automation. I’m particularly interested in continuing to work on a type of procedural memory, with my skill library as a starting point.
My original goal of creating long-running agents using feedback loops proved too unreliable. I believe we’ll still get there, but running the entire agent in one giant Observe-Plan-Act loop is beyond the capabilities of even state-of-the-art LLMs. It’s likely we’ll need multiple loops powering dozens or hundreds of subsystems.
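To make the shape of the problem concrete, here is a minimal sketch of the single monolithic Observe-Plan-Act loop I’m describing. All of the names here (`observe`, `plan`, `act`, `AgentState`) are hypothetical illustrations, not our actual system; the point is that one loop, with one model call inside `plan`, carries the entire burden of the agent’s reasoning.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    observations: list[str] = field(default_factory=list)
    done: bool = False

def observe(state: AgentState) -> str:
    # In a real agent this would read tool output, files, or the environment.
    return "environment snapshot"

def plan(state: AgentState) -> str:
    # In a real agent this is the LLM call: "given the goal and every
    # observation so far, what should happen next?" One call must reason
    # over everything, which is where reliability breaks down.
    return f"next step toward: {state.goal}"

def act(state: AgentState, step: str) -> None:
    # Execute the planned step (tool call, code edit, message, ...).
    state.observations.append(f"did: {step}")
    state.done = len(state.observations) >= 6  # toy stopping condition

def run(goal: str) -> AgentState:
    state = AgentState(goal)
    while not state.done:  # one monolithic loop powering the whole agent
        state.observations.append(observe(state))
        act(state, plan(state))
    return state

print(run("summarize the repo").observations)
```

Every failure mode, from a bad observation to a flawed plan, flows back through the same loop, so errors compound rather than stay contained.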
Instead, what we find is that LLM generation is unreliable and, for many tasks, quite poor. These issues are surmountable, and there are countless groups at Microsoft and globally working on them. Most of them, I believe, will be solved with non-LLM agentic systems that use LLMs as a much smaller (but crucial) part of the system.
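As a contrast to the monolithic loop above, here is a hedged sketch of that inverted architecture: deterministic code owns the control flow, and a model is consulted for only one narrow, well-scoped step. `summarize_with_llm` is a hypothetical stand-in, not any particular API.

```python
def summarize_with_llm(text: str) -> str:
    # Hypothetical stand-in for the one real LLM call in the pipeline.
    return text[:60] + "..."

def pipeline(documents: list[str]) -> list[str]:
    results = []
    for doc in documents:
        cleaned = " ".join(doc.split())      # deterministic preprocessing
        if len(cleaned) < 80:                # deterministic routing:
            results.append(cleaned)          # short docs skip the model
        else:
            results.append(summarize_with_llm(cleaned))  # narrow LLM step
    return results

print(pipeline(["short note", "a much longer document " * 10]))
```

Because the routing and preprocessing are ordinary code, the unreliable generation step is fenced into one place where it can be validated or retried.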
LLMs aren’t conscious
When I ran the Seattle AI Society, the most popular recurring topic, hands down, was whether or not LLMs are conscious. When we first started working on forms of memory for LLM chatbots (two years ago now), I was asked, on more than one occasion, whether deleting the memory file was ethical, or whether I was, in actuality, killing a conscious or sentient being hundreds of times each day.
Integrated Information Theory (IIT) points toward a definition of consciousness as a universal property of systems. While we have a long way to go in figuring out how that works, it makes a lot of sense to me, personally. Throughout human history, we have worked hard to distinguish ourselves from other things that are unlike us. We attempt to get at the significance of what it means to be human by positing that the world can be hierarchically classified, from omniscient gods down to base elements. In these hierarchies, we put humans next to gods, primarily on the basis that we are conscious. With this classification established, we justify all manner of violence against life forms lower on the ladder, while hardly recognizing that by harming other life forms we invariably harm ourselves, because we are all connected. Instead of recognizing this connection, we put vast energy into denying that these other life forms have the rights we claim for ourselves… almost always on the grounds that our consciousness gives us special privileges.
All of this to say, I often find the exercise of defining consciousness to be a disingenuous game intended to back a political position.
I think it’s much simpler, and more genuine, to recognize that many types of physical systems can be described as having a kind of consciousness, of being conscious. Yes, apes, dogs, chipmunks, and ants are conscious. Mycorrhizal fungi-connected plants may be conscious. LLMs are not conscious.
We get fooled into thinking LLMs are conscious because we have never before experienced anything that wasn’t conscious holding a conversation with us. Many people thought Thomas Edison’s phonograph had a soul. We like to think we’re smarter than that.
Recently, OpenAI announced that one of their language models passed the Turing test, which many have historically considered to be the point at which we would truly have machines that think. The lack of fanfare over this milestone betrays the fact that we never really believed the Turing test was a good test for that. Rather than testing consciousness, Turing’s test measures how well a machine can generate plausible text in a conversational setting. It might actually be testing how easily we can be duped.
After having worked with LLMs nearly every day since they were introduced, I now consider them to be something much more akin to advanced search. You put in some text as a search query and you get new text as the search result. The new text is pieced together word by word and phrase by phrase, each new chunk another exhaustive search over the massive language model’s corpus. The fact that you don’t see a spinny icon in your browser after each new word is a testament to how scalable and performant our distributed systems, networks, and chips have become.
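To illustrate what I mean by word-by-word search, here is a minimal, purely illustrative sketch of that generation loop. The toy bigram table stands in for the model; a real LLM replaces `next_token_probs` with a neural network scoring every token in its vocabulary against everything generated so far.

```python
import random

# Hypothetical toy "model": bigram transition probabilities.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token_probs(context: list[str]) -> dict[str, float]:
    """Look up the distribution over next tokens given the context so far."""
    return BIGRAMS.get(context[-1], {})

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        if not probs:
            break  # nothing left to predict; stop generating
        # Sample the next token in proportion to its probability, append
        # it, and repeat -- one "search" of the model per token.
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
```

Each pass through the loop is independent of any inner life: the system only ever asks “what text plausibly comes next?”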
This is mind-blowingly useful tech that will open vast new capabilities and disrupt our global tech landscape on a massive scale. Humanity has unlocked a streaming data prediction technique that will change how we do what we do.
And that’s not consciousness.
We Have More Immediate Concerns
I won’t stop exploring consciousness or autonomous agents. As I laid out in It’s Time, these are life-long projects for me. However, over the past year, it has become increasingly clear that these are not (not just yet) the largest concerns facing us. We need to worry much less about conscious AI and much more about people who use our current AI and technology to the detriment of society and the world.
Our current generation of AI, and technology in general, can be used, and is being used, at the expense of the many for the benefit of the few. To solve that challenge, we don’t need to uncover consciousness or develop new intelligence. We actually have most of what we need already, and I will be applying myself to that end in the upcoming months.
If you’ve enjoyed the “Investigations in Mind” series, please continue on with me by following my website at payne.io. I promise to keep pulling together ideas from history, looking at data from current events, turning over ethical and philosophical frameworks, exploring technology, and coding new approaches towards what I hope might be a stronger society and a more connected world.
Best Wishes, P.