Superintelligence and Market Fluctuations
Artificial General Intelligence (AGI), or superintelligence, has long been the dream and work of many reputable scientists, from Alan Turing asking whether machines can think in 1950 to recent thinkers calling it our final invention.
I think that the current AI research community, and scientists in general, are a bit more conservative in their wording and claims than the average professional, as witnessed for example in the recent Microsoft Research paper “Sparks of Artificial General Intelligence: Early experiments with GPT-4”.
But one thing is for sure: the horizon for superintelligence has dramatically shortened in the eyes of these scientists themselves. I used to hear many reputable AI scientists on the Lex Fridman podcast answer the question “When will we have AGI?” in multiples of decades.
I would bet that if you asked the same people[1] whether we will have AGI by the year 2028, it would be hard to hear anyone answer “no” with absolute confidence and explain why that would be the case.[2]
Now that the importance and horizon of AGI are established, I want to take a small detour and talk about an aspect of AGI outside the technical debate: the economic fluctuations we will experience on the path toward it.
We’re already witnessing small market fluctuations in businesses like Chegg, Stack Overflow, and others, and I believe more are yet to come. I can already imagine use cases where companies like Duolingo become worthless unless they adopt this technology. The reason this is happening, and will keep growing in my opinion, comes down to an irrefutable statement.
The mission of OpenAI[3] is to create AGI that outperforms humans at the most economically valuable work and distribute the benefits to everyone.
If AGI can outperform humans at economically valuable work in markets that already exist, then we will see competitors’ revenues drop. But AGI isn’t just another technology; it is unique in that it encompasses the whole of human conscious experience and economic skill. It’s a kind of all-in-one solution to problems that many different companies exist to provide services for.
Now, that’s not to say that one product is enough to distribute the benefits of superintelligence. For example, Khan Academy recently demoed how they’re fine-tuning GPT-4 to make it friendlier for educating the younger generation.
It still remains unclear to me at what inflection point society will have to come together and implement mechanisms like Universal Basic Income (UBI) to compensate for these expected economic fluctuations.
I know that AI won’t be a zero-sum game, and I am confident that we will create more interesting[4] jobs for humans. But in the meantime, we should keep track of AGI’s contribution to GDP, and perhaps define a threshold at which UBI becomes part of our economies.
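To make that last idea concrete, here is a minimal toy sketch of such a trigger. Everything in it, including the 15% threshold, the GDP-share metric, and the function name, is a hypothetical assumption of mine, not a real policy or measurement:

```python
# Toy sketch: activate UBI once AGI-attributable output crosses a GDP-share
# threshold. All names and numbers here are hypothetical illustrations.

AGI_GDP_SHARE_THRESHOLD = 0.15  # assumed threshold: AGI produces 15% of GDP

def should_activate_ubi(agi_output: float, total_gdp: float,
                        threshold: float = AGI_GDP_SHARE_THRESHOLD) -> bool:
    """Return True once AGI's estimated share of GDP reaches the threshold."""
    if total_gdp <= 0:
        raise ValueError("total_gdp must be positive")
    return agi_output / total_gdp >= threshold

# Example: AGI-driven work produces $3T of a $20T economy (a 15% share).
print(should_activate_ubi(agi_output=3.0e12, total_gdp=2.0e13))  # True
```

The hard part, of course, is not the comparison but reliably estimating what counts as AGI’s contribution in the first place.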
Maybe AGI will help us figure out how to solve these problems!
Notes
1. There are a few scientists who still fight for symbolic AI to this day, even though neural networks are the only methods that have yielded actual progress, so I personally wouldn’t ask for their opinion on this question.
2. With GPT-4 as the current baseline and plenty of clear paths to progress, such as multi-modality, power-law scaling, available data, and Nvidia’s recent H100s, I think no one can make a reasonable claim for why deep learning won’t lead to AGI.
3. I highly recommend reading the OpenAI Charter.
4. I’m excited to see whether space exploration will accelerate after AGI and UBI.