
My Programming Career is a Historical Artifact

Paul Payne · Technologist. Seattle, WA.

Ok. I’m calling it. My entire 30-year programming career is a historical artifact. My current work with AI has convinced me that, in a few short years, humanity will look back and think how funny it was that people once actually programmed machines.

Programming needs humans

I started programming when I was 8 years old by poring over a Commodore 64 user manual and typing in code from the backs of magazines. I haven’t stopped learning since. Programming isn’t one of those things you study for a few years and then you’ve got it down. For the past 40 years I have been in a continuous stream of industry-wide technological evolution. When we hire programmers, some of the primary qualities we look for are adaptability, independent learning, and critical thinking. It takes continuous effort to stay at the edge.

Programming requires a mechanical sympathy at its base. You need to understand how a machine works: storage, memory, processing, interfaces. On top of that base you learn the semantics of languages and toolsets: what magical incantation of characters can actually be executed by the system. On top of this come abstractions for larger programs, then systems of programs, then systems of computers. A large part of a programmer’s work is managing complexity, so learning and developing techniques for this is crucial. More experienced developers embrace abstractions like data structures and algorithms, configuration and state, encapsulation, functional vs. object composition, entities vs. data, events vs. snapshots, sequences vs. recursion; it really is endless. All of this amid a continuous stream of new product releases that reconfigures your toolbox weekly.

And then the trade-offs! All of programming (like all of engineering) is about trade-offs. You can trade processing for memory, time for completeness, reliability for cost, maintainability for speed, tweaking a hundred variables to arrive at the program that meets your needs.
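To make one of those trade-offs concrete, here’s a minimal sketch in Python (purely illustrative, not from any real project): memoizing a recursive function spends memory to buy processing time.

```python
from functools import lru_cache

# Trading memory for processing: the cache stores every result already
# computed (O(n) memory), so the exponential tree of repeated subcalls
# collapses into linear work.

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))  # instant with the cache; effectively never finishes without it
```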

Programs don’t exist in a vacuum. Programs are meant to solve real problems for real people in ways that make it into real products and services. Programmers must know how to define problems, identify user needs and satisfy them in understandable and desirable ways, ensure the systems they build are feasible, and support viable businesses. Programmers program programs that interact with users, so we are continuously at the edge of psychology and society and their interaction with the technology we build.

What a programmer learns over a few decades is how thinking itself works. If you are trying to decompose reasoning over a problem space into a set of steps, split it up so different parts can work together, or figure out how things will evolve over time, you should really have a programmer in the room. This is why programming, as a discipline, has made its way into just about every scientific field over the last half century. Programmers are the designers, architects, and builders who supercharge problem solving in all domains.

Wait… does programming need humans?

Three years ago, I finished interviewing several candidates for a senior engineering role. It’s not uncommon to find people in interview loops who, for whatever reason, just haven’t developed the basic programming skills required to be successful in a programming job. Depending on the job and the candidate, we might overlook this a bit if they have strong critical-thinking or problem-solving skills, but it’s still a bit shocking to find. After a series of failed candidates, I slumped into my office chair and exclaimed, “Why are these candidates struggling to program a solution to this basic problem!?” My colleague, amused, asked “What’s the problem?” and I described it to him. “Ok,” he said, “… and what form would a satisfactory solution be in?” I turned around and realized he wasn’t actually trying to help me with my interviewing skills; he was typing the question into OpenAI’s GPT-3 Davinci playground, which we had just received early access to. A few keystrokes later, he submitted the question and out popped the answer in perfect Python. I looked at it. It was correct and efficient. I was amazed. So I asked him to have it solve the problem in a different programming language. It did. Then in a different programming paradigm. It did. THEN it stopped waiting for us to ask for more variations and just started printing out solutions in a series of other languages.

Our small team at Microsoft has been working with LLM-driven systems since that day. At first I believed, as the rest of the industry realized over the next two years, that this technology alone was enough to increase productivity in nearly every field computer technology touches. But I also believed it would take a long time to make these systems capable of solving more than just toy problems. The skills a programmer uses daily were just too much to expect of a text-completing neural-network function.

We tackled issues one by one. First, the models had such a short memory window, kind of like an attention span, so we developed techniques for dealing with this. Then we connected the systems to other tooling to give them capabilities beyond just writing text. Then we tackled getting these systems to handle longer chains of actions in a row. Next, we worked on how to decompose larger tasks into smaller ones while still making forward progress. Then we worked on techniques for making their results more measurable and reliable. Eventually, we worked on making them more introspective and able to adapt in response to their own performance in real time. Throughout our investigations, the models became more capable and the ecosystem grew. Step by step, our systems took more and more off our plates as programmers.
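To give a flavor of the first of those techniques, here is a hypothetical sketch (not our actual code) of one common way to handle a short memory window: keep a rolling tail of recent messages and fold the overflow into a summary before each model call.

```python
# Hypothetical sketch of an "attention span" workaround: when the
# conversation outgrows the model's window, summarize the oldest
# messages and keep only that summary plus the recent tail.
# `llm` stands in for any text-completion call; it is not a real API.

def compact_history(history: list[str], llm, max_messages: int = 20) -> list[str]:
    if len(history) <= max_messages:
        return history
    overflow = history[:-max_messages]
    recent = history[-max_messages:]
    summary = llm("Summarize this conversation so far:\n" + "\n".join(overflow))
    return [f"[summary of earlier conversation] {summary}"] + recent
```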

This spring, Anthropic released Claude Code, which contained a few crucial elements that really kicked our systems into overdrive. First, it could interact with your computer directly through the command line rather than being trapped in a web app like ChatGPT or behind an API. Second, they did a great job of incorporating some well-known patterns in an elegant way, namely task decomposition, task delegation, and tool calling (like web fetch and file search), which allowed the system to orchestrate a series of tasks reliably. We have been using tools like Claude Code all year and have been increasing the capability of our AI systems to take on larger tasks, with more reliability, over longer periods of time.
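The orchestration pattern itself is simple to state. Here is a hand-wavy sketch of decompose-then-delegate, with invented names (`llm`, `run_subagent`) that are stand-ins, not Claude Code’s real interfaces:

```python
# Rough sketch of decompose-then-delegate orchestration. The sub-agent is
# assumed to have its own tools (file search, web fetch, shell) for
# carrying out each step; earlier results are passed along as context.

def orchestrate(goal: str, llm, run_subagent) -> str:
    plan = llm(f"Break this goal into small, ordered steps:\n{goal}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]
    results: list[str] = []
    for step in steps:
        results.append(run_subagent(step, context="\n".join(results)))
    return llm("Combine these step results into a final report:\n" + "\n".join(results))
```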

And none of this made me feel like programming itself was going to become a historical artifact.

Nope. Programming doesn’t need humans

Until last week. We’ve been packaging up our various techniques and experiments on top of Claude Code, kind of using it as a quick experimentation platform. We were able to take many of our hard-earned discoveries and integrate them quickly. We’ve released an early prototype at https://github.com/microsoft/amplifier and will be releasing a newer, model-agnostic version (not running on Claude Code) in the next week or two. This won’t be the thing that replaces me as a programmer, but it convinces me that our timeline is months, not years. It already does the hard part.

With Amplifier, you can describe what you want and it will build it for you, from design to backend to frontend, including testing, according to your own programming philosophy. But this isn’t the main point. The main point is that the more you use it, the better it gets. You create reusable tools over time. It captures useful techniques. It learns from your usage, suggesting ways to improve itself. This is a type of exponential productivity: the things you build help you make things faster.

In the past week, I’ve used Amplifier to examine two separate code bases, extract a feature set I was working on in one, propose three ways it might integrate the same ideas into the other, implement each (in parallel), and then provide me with a summary of what worked and what didn’t. I also used it to take a (large) set of bash scripts I had been working on and convert them into a web service API, a Go CLI, and a full web app frontend. Sure, the prompts take time and programmer knowledge to set up, but each of these tasks was about three prompts to Amplifier. In each case, Amplifier went off for 30-120 minutes and came back with working software, including documentation.

At the same time, others on our extended team used it to build a dozen other things, including newer versions of Amplifier, an extensive evaluation framework, and two different graphical experimentation desktops. I was most impressed by a designer with a limited programming background who envisioned an entirely new way to go from a desired design aesthetic to a full design framework, borrowing ideas and workflows from tools like Figma and Storybook to create an entirely new assistant-driven design app that I imagine might 10x his productivity. Last week, we pointed Amplifier at an interesting project for managing large interconnected task lists, and it integrated the project into itself, allowing my colleague to run it continuously all night long, completing dozens of tasks autonomously, and even preparing him a “morning report” that included multiple implementations of various components to get his feedback on what it should do next.

This is not the final product, and we (and the industry) have a lot of work to do. Hardware and software makers (drivers, core libraries, integrations) will need to publish product context for AI to use. We’ll need to continue to evolve and publish new standards for system integration. But for the first time, I had the sensation that the hard part had been done. From here, it’s a straight shot to machines doing what you ask without programmers in the middle.

What the (near) future looks like

The project I mentioned above, taking bash scripts and turning them into a web application? The entire development actually ran on a Raspberry Pi micro-PC. I just installed Amplifier on it, mounted the old code base, and told it to make all the other projects. I never touched the code. The result was already better than what I would have built myself over many weeks. I’m planning on releasing this project for others to use, but you can imagine using the same process to build custom, personalized software without ever distributing it. Instead of downloading and installing software, we’ll be able to just tell the machine what we want and it will create it. Do you want to change some things? Add new features? Make an entirely new application? Just ask.

All of the programming skills I described above, the skills I developed over a lifetime, can now be applied by tools like Amplifier with the right prompting. The final stretch is to wrap up all this expertise and hand it over to the AI; then anyone, with no programming experience, will be able to have AI build whatever software they need, as they need it.

So what now?

I don’t mind describing this as a personal existential crisis. There’s certainly a good part of my identity that is wrapped up in being a programmer. I’ve thought of it as a kind of magic to be able to find the right incantation to get a machine to do amazing things. It was a career super-power. So what does it mean that everyone can do magic? That my superpower is now just… everyday life? Will I still have a seat at the table?

And what does it mean for my sense of legacy? We want to imagine the things we do in life lasting beyond us. But it’s clear now that my career legacy will be a curious one… like a blacksmith, or scribe, or court jester.

But perhaps this is even stranger than previous waves of tech-driven obsolescence. In all previous cases, the person becoming redundant was able to take their domain knowledge and problem-solving skills and up-level them to the new thing. Blacksmiths built foundries. Scribes became printers. Jesters became comedy actors. They carried on solving the same problems, often at a bigger scale, using their new tools. But what to do when problem solving itself is obsolete?

Just now, I paused to go read my colleague Brian Krabach’s post on his existential crisis with this. He arrived at this point six months ago, but I didn’t believe him yet. Sure, machines were going to replace some amount of programming, but we’d still need programmers to direct them for the foreseeable future. Amplifier is demonstrably, daily, proving me wrong on the timeline. In his article, he describes how he found solace in realizing that replacing the programmer unblocked him to do what he, as a programmer, always loved: solving bigger problems faster. But I’m not so sure. Identifying the problems and proposing solutions always seemed like the easy part to me. My time in various innovation labs showed me that these things, too, were just processes that can be automated. “But machines don’t have access to the data about what problems need to be solved! They don’t know what problems we face!” Sure, until we tell them to write the software to do that.

So, yes. There is quite a bit of solace in thinking that real problems are on the verge of being solved, societal-level problems being the most urgent right now. And I will undoubtedly get involved in many of them. I just need to grapple with the fact that my contribution will no longer primarily be my hard-earned programming skills. Those will soon be at the fingertips of anyone who can purchase some compute.

P.S. In case you’re wondering, yes, that is a high school senior photo of me soldering a component onto a circuit board. 😆