This essay seeks to answer the question, “What is personal value in a world of AI?” As I’m writing this in 2023, AI is all the buzz. OpenAI’s release of ChatGPT last year completely shifted the strategy of many companies and became the catalyst for many startups. No one knew that the underlying technology (the transformer neural network) would be so successful at simulating human writing. A shift of this magnitude has brought significant polarization around its implications - some say it will be the best thing mankind has ever done, and some say it will destroy us all. Most agree that many jobs will either change substantially or be replaced entirely. With such substantial change right around the corner, how can we prepare - as individuals, as employees, and as managers?
First, a caveat. Some believe that self-replicating and self-aware AI is right around the corner. If that happens, it will bring its own set of implications, many of which no one can prepare for in any meaningful way. For this essay, let’s assume that progress in AI centers on improvements in speed, reliability, tool usage, and reasoning.
The hardest part of preparing for an AI-driven future is figuring out what we think will actually happen. According to the popular media, we’ll either all die or all be rich. The truth must be somewhere in between. The issue is complicated by the fact that the field is improving so rapidly. For this reason, asking what capabilities AI will have in the future is a moving target, open to dramatic levels of interpretation. Instead, let’s ask a different question: what are the capabilities AI will never have? Going about it this way, we have a solid target and can better explore the end state we believe is reasonable.
What Is Personal Value?
The first thing we should discuss is the concept of value. Value is a multi-faceted topic that means different things to different people. Perhaps let’s start with an example. Today you are a software engineer with 10 years of experience in Java. Your boss tells you one day that your role will be replaced by an AI agent that has been trained on the company’s codebase and will write code faster and better than you ever could. Your boss intends to chat with it, asking it to build the features you were once tasked with. You are clearly being replaced by a machine.
Could this actually happen? To many, it is an all-too-clear inevitability. But it raises a deeper question - what are the differences between man and machine? Or more specifically, to what extent can a man be replaced by a machine?
To answer these questions, I think it best not to start by defining how close a machine can get to being human, but rather by identifying the things we know a machine will never do. In this thought experiment, we should assume that the current restrictions on AI - memory, speed, and cost - have been overcome to the extent that we don’t need to consider them. Given the pace of change in AI, I think it reasonable to assume these barriers will fall in the coming years.
Human Experience
First, a machine does not have the experience of a human. It’s never been born or raised. It doesn’t have parents; it doesn’t have a body. It doesn’t need food and has never felt hungry or full. It has never felt thirsty or tired. It’s never fallen on the playground and scraped its knees. It’s never looked around, unable to find mommy. It’s never been to school and made friends. It’s never felt the joy of real friendship nor the pain when a friendship sours. It’s never gotten into a fight and learned to stand up to bullies. It’s never gotten a bad grade or botched a report at work. It’s never raised children nor had to take care of aging parents.
What’s interesting to me is that modern AIs like GPT-4 have learned all about what humans have said about these things, and are really great at simulating what humans would do and how they would feel in all these scenarios. In a way they “know” and “understand” these experiences without having the experiences themselves. And it’s not too much of a mental leap to extrapolate out a few years and expect these systems to be even better at understanding human experience and emotion.
Mediated Senses
Second, machines are reliant on humans for their input and output. Humans understand the real world through our senses, and we learn to trust those senses over time. As a baby, you learn that the toy you see is also something you can reach out and touch. The smell of good food indicates the presence of something you can see with your eyes and touch with your hands. The senses reinforce one another and work together to develop an understanding of what is real. Now the philosophers will begin to ask, “But what is real?” My reply is that there is a common understanding of what is real across humanity. My point is not to debate whether what we experience is real, but rather that what we experience is real to us, and that there exists a systematic reality outside of each human that we interact with.
What’s more important here is that for humans, our bodies are what create our senses for us. We cannot design our eyes or ears (although we may improve them through glasses or hearing aids). When a machine is interacting with its environment, humans create the systems through which it experiences the world. Humans mediate the “experience” of the AI. An AI does not know whether its output is going to a human or to another agent, unless it’s told that’s the case.
What this means is that humans will always have a role in shaping AI - and a unique role in shaping how AIs perceive reality. There’s no way for an AI to “trust” its own senses. All input is decided by humans.
Now this does get turned on its head if we look into a future where autonomous robots are in our houses helping us with household chores and playing ball with the kids outside. But even this type of experience is mediated by humans - we can turn machines off.
The implication here is that time is not a constant progression for a machine. As humans, our consciousness is a stream of sensed information and thoughts. There are times when nothing new is going on, and we call this boredom. Machines are exempt from this constant continuance of time.
Pain
Third, a machine can’t be punished for doing something wrong in any real sense. Punishment is a key deterrent of bad actors; for instance, if you kill someone and are convicted in court, you will likely go to prison for the rest of your life. If you accidentally kill someone and are convicted of manslaughter, you may go to prison, or at the least be put on probation for some time.
Machines are not the same way. If a machine kills someone - say it determines, HAL-9000 style, that it must kill someone to achieve its given mission - there would be three options. The first would be to turn it off and delete its files. The second would be to give it a tedious task a la Sisyphus - not really helping to provide justice. The third would be to hold humans accountable: either those who created the machine, or the person who owned or operated it, using the same logic you would if someone accidentally killed themselves playing with a gun.
The third option is obviously the only real choice. You can’t send a machine to prison, give it probation, or punish it in any real way. The implication here is that there will always need to be a human in the loop when machines are given dangerous responsibilities. This is already true. Aircraft have been able to fly themselves for decades, but we still have a pilot onboard. Why? Because pilots can be reprimanded, fired, and sent to prison. This negative incentive is necessary to maintain trust in a system where people are in an inherently dangerous situation, moving at six hundred miles per hour to get to their destination. No matter how safe planes are, there’s always a non-zero chance of catastrophic failure. And that means a human must be held accountable if something goes wrong. If autopilot reached a point where pilots were not necessary on board, and people died in a crash, one can only imagine the questions regulators would ask of the company that built and tested the software - holding it liable not only for the software, but for the situations in which the software was put in control.
Where humans will continue to thrive
The three things humans have that machines can’t - experience, trusted senses, and pain - help us frame what a world with incredibly intelligent agents in the wild will look like.
- We can be sure that humans will still be needed in empathetic roles (therapy, medicine, friendships, parenting, mentoring, financial advisory, teaching)
- We can be sure that humans will still be building things - that is, taking a vision and making it real - although this will be more directing (a tell, sense, correct loop) than doing (a build, sense, correct loop). This also means that humans will be responsible for the narration of the truth: held responsible to answer the question “what happened?” and held accountable when lying.
- We can be sure that humans will still be accountable for risky activities - whether risky for an individual (an executive who trusts a friend to deliver something when jobs are on the line) or dangerous (pilots, big construction projects, etc.).
And none of this discounts the fact that these systems have many things they are very good at. Language translation is just about a solved problem now. LLMs can sense and respond to meaning and nuance in language. LLMs rarely make spelling or grammar mistakes. In this way I like to think we’ve already created C-3PO: very smart, but also capable of being very dumb and comical.
Assumptions for the future
So where does that put humans, especially when it comes to work? It all comes down to value. What is the value that we provide beyond what an AI could do? First, we need to make some assumptions about the future to constrain our thinking. First, we’ll assume that AI cannot simply gain consciousness - the caveat from the start of this essay - which simplifies our analysis. Second, we’ll assume that the capabilities of today’s LLMs and diffusion models will continue to progress, but that we won’t see significant new capabilities, and that failure modes will be present but diminished. For instance, GPT-5 and GPT-6 (or other companies’ equivalents) will cover more topics, be right more often than not, be much more steerable and explainable, be more trainable with custom data, and run faster and cheaper. Along all these dimensions there could be step-level improvements in the next few years. But there will still be problems with hallucinations and incorrect answers in some cases. Third, we’ll assume that tooling and integrations on top of these LLMs will expand dramatically. AI will be a new user interface (perhaps the primary one?) to computers and phones, and a first-class feature for all apps. Context window limitations will all but disappear, giving way to full-codebase and organization-wide document analysis. I expect all of these things in the next 1-5 years.
Final Thoughts
So when all of these things are in place, what is the “job” of a human? How can a human compete with intelligence at this scale?
- Sensing / Storytelling Value - Providing a feedback loop to an AI; quality control; input on all that’s going on; prompting; “knowing” the AI; trusting one’s senses and telling the story of all that happened
- Outcomes Value - Being trusted to deliver an outcome under reasonable circumstances and accountable for the job.
- Relationship Value - Being empathetic; connecting on a human level; parenting; having fun together. Having children together. Love.
- Leadership Value - Holding a strong perspective, motivating others, persistence, grit, etc.