AI continues to be a central economic growth engine in 2025. Microsoft plans to invest $80 billion in its AI infrastructure this year alone. The carrot dangled in front of us all during this AI emergence period is the future state known as AGI, Artificial General Intelligence. I wanted to spend some time defining AGI because it’s not clear what AGI is or that we will know when we’ve reached it. AGI tends to be used as a future promise as well as a warning.
A good place to start in defining AGI is to look at what leaders in the field have said previously:
- Alan Turing, creator of the Turing test, said the best way to evaluate an intelligent machine is to assess whether it can exhibit behavior indistinguishable from a human's. In some narrow settings we've surpassed the Turing test, but in many we have not: everyone knows the feeling of being stuck talking to an AI when they would rather be talking to a human. I'd argue that the Turing test still stands unbroken, which means we have not reached AGI, a point I'll return to later. Turing himself never commented on AGI; his work dates from the 1950s, before the term existed.
- Sam Altman, co-founder and CEO of OpenAI, defines AGI as highly autonomous systems that outperform humans at most economically valuable work.
- Microsoft, which provides the bulk of OpenAI's compute, reportedly defines AGI in its agreement with OpenAI as a system capable of generating $100B in profits.
- Elon Musk defines AGI as “smarter than the smartest human.”
- Jensen Huang, NVIDIA CEO, defines AGI as “fairly competitive to human intelligence.”
- Yann LeCun, Meta's chief AI scientist, says, “There is no such thing as AGI. Human intelligence is highly specialized.”
- Fei-Fei Li, “godmother of AI,” says, “I frankly don’t even know what AGI means. Like people say you know it when you see it, I guess I haven’t seen it.”
- Peter Thiel says AGI is “hopelessly vague.” He has also said the Turing Test is more important than AGI, a point I agree with and will return to in a moment.
In general, AGI has been used throughout 2024 as a marketing term to capture our imagination and attention. Sam Altman’s definition is based on economic utility, which has nothing to do with the Turing Test. Based on Sam Altman’s economic definition, I’d argue that we are maybe 10-25% of the way to AGI. ChatGPT is an excellent research assistant, but it can’t perform at a level higher than a top-level intern, and it’s nowhere close to high-level economically valuable work.
According to Elon Musk’s definition, AGI has already arrived. ChatGPT Pro has more breadth of knowledge and is a clearer communicator than any person I’ve ever met or will meet. It is encyclopedic, wonderfully kind and endlessly patient. It tends to know more than a generalist yet less than a specialist, but in sheer breadth it knows more than any single person ever could.
Back to the Turing Test. Humans want to collaborate with other humans. They do not want to talk to AI under any circumstances. An AI cannot replace a human in a collaborative high-level exchange. The real test for AI will be when an AI tool can hold a verbal conversation for over an hour, and the other person does not know they are talking to an AI or does not care. So, I agree with Thiel that AGI is less important than the Turing Test. For an AI to become fully autonomous, it has to be able to seamlessly navigate social interactions.
The final AGI and Turing Test would be what people have asked AI to do all along, which is to do anything and everything for them. A true AGI that passes the Turing test could start a business, file legal paperwork, test product market fit, receive feedback from customers, act ethically and within the bounds of compliance, source product and produce code, manage finances, source vendors, negotiate contracts, hire and manage employees and so on. Consider this when reading articles about AGI over the course of 2025.
OpenRouter
To cap off this discussion, if you are deeply interested in AI, you might find this tool interesting. OpenRouter aggregates the different generally available LLM models into a single platform. As of this writing, there are 293 models. Each model has unique strengths, weaknesses and biases. I recommend spreading the same question across different models to see how the answers differ.
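If you want to script that comparison rather than click through the interface, the sketch below fans a single question out to several models. It is a minimal sketch, assuming OpenRouter's OpenAI-compatible chat-completions endpoint and an `OPENROUTER_API_KEY` environment variable you supply; the model IDs shown are illustrative examples, not recommendations.

```python
import json
import os
import urllib.request

# Assumption: OpenRouter exposes an OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, question: str) -> dict:
    """Build one chat-completion payload for a given model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

def ask_models(models: list[str], question: str, api_key: str) -> dict:
    """Send the same question to each model and collect the text replies."""
    answers = {}
    for model in models:
        req = urllib.request.Request(
            OPENROUTER_URL,
            data=json.dumps(build_request(model, question)).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        answers[model] = body["choices"][0]["message"]["content"]
    return answers

if __name__ == "__main__":
    # Illustrative model IDs; check the OpenRouter catalog for current names.
    models = ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]
    key = os.environ.get("OPENROUTER_API_KEY", "")
    if key:
        for model, answer in ask_models(models, "What is AGI?", key).items():
            print(f"--- {model} ---\n{answer}\n")
```

Reading the answers side by side makes each model's strengths, weaknesses and biases far more visible than any single response can.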
Weekly Articles by Osbon Capital Management: