Intelligence and AGI

To define AGI as machines that demonstrate all facets of human intelligence, we must consider every task humans can perform and assess how well machines can replicate these abilities. This evaluation should go beyond standard IQ-like tests, which LLMs can already handle competently, to include creativity, divergent thinking, analogical reasoning, metaphorical abstraction, emotional intelligence, and bodily-kinesthetic intelligence, the last of which would necessitate some form of physical embodiment.

First, this approach seems impractical given the sheer variety of skills and traits that humans possess. Furthermore, how can we (1) discretize and (2) quantify the continuous spectrum of human capabilities in a way that is meaningful to a definition of "genuine intelligence"? Is being able to empathize with others a marker of intelligence? How about being able to approximate solutions to PDEs in a few seconds? Computers have no problem with the latter but struggle with the former; does that make a machine less intelligent? Conversely, a human finds it intuitive to empathize with others but nearly impossible to solve hard computational problems in seconds; should we therefore consider a human less intelligent than a machine? Clearly, the term "intelligence" becomes fuzzy here, and our current definition of and approach to AGI seems flawed.
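To make that contrast concrete, here is a minimal sketch of the kind of task a machine finds trivial: an explicit finite-difference approximation of the one-dimensional heat equation. This example is purely illustrative (the equation, diffusivity, and grid sizes are my own arbitrary choices, not something from the discussion above), but a laptop completes it in a fraction of a second.

```python
import numpy as np
import time

# Illustrative example: approximate the 1D heat equation u_t = alpha * u_xx
# on [0, 1] with fixed boundaries u(0) = u(1) = 0, via an explicit
# finite-difference scheme. All parameters here are arbitrary assumptions.
alpha = 0.01               # thermal diffusivity
nx, nt = 200, 5000         # grid points in space, number of time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha   # satisfies the stability bound dt <= dx^2 / (2 * alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)      # initial temperature profile

start = time.perf_counter()
for _ in range(nt):
    # Second-order central difference for u_xx; endpoints stay fixed at 0.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
elapsed = time.perf_counter() - start

print(f"{nt} time steps on a {nx}-point grid in {elapsed:.4f} s")
```

A machine races through thousands of these update steps in milliseconds, yet no analogous program exists for empathy, which is precisely the asymmetry the question above is pointing at.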

These challenges bring me to my next point: this definition of intelligence is anthropocentric and faulty. The notion that intelligence is only "real," "true," or "authentic" when it meets human standards is problematic. We cannot even agree on a definition of human intelligence, so assessing whether a machine is intelligent is clearly a hard problem. Building AI systems with the sole purpose of achieving "human-level intelligence" may lead us to focus on false criteria and overlook other milestones that could advance the field in unique and valuable ways. Not to mention the ever-moving goalposts of Turing-style tests; a faulty pattern of AI research goes as follows:
1. We define a problem that we think requires "intelligence" (e.g., chess).
2. We work toward that problem and achieve a solution (e.g., Deep Blue).
3. We then change the definition of "intelligence".
4. Rinse and repeat.

Even if we reached AGI according to the current definition, people would probably come up with a variety of reasons why it still doesn't count.

In short, as Larry Tesler put it: "AI is anything that has not been done yet."