I think I need to point out some obvious issues with the paper.
Definition of artificial:
>Made by humans, especially in imitation of something natural.
>Not arising from natural or necessary causes; contrived or arbitrary.
Thus artificial intelligence must be the same as natural intelligence; only the process of producing it doesn't have to be natural.

What this means: we need to consider the substrate that gives rise to natural intelligence. The two cannot be separated willy-nilly without actual scientific proof. As in, we cannot claim a roll of cheese manifests intelligence just because it recognizes how many fingers are in an image.
The problem arises from a potential conflict of interest between hardware manufacturers and the definition of AGI. The way I understand it, human-like intelligence cannot come from algorithms running on GPUs; it will come from some kind of neuromorphic hardware.

And the whole point of neuromorphic hardware is that it operates (closely) on human-brain principles.
Thus, the definition of AGI MUST include some hardware constraints. Just because I can build a contraption that "fools" the tests doesn't mean it has human-like cognition/awareness. That must arise from the form, from the way the atoms are arranged in the human brain. Any separation of intelligence from its substrate must be scientifically proven: if anyone claims GPUs can generate human-like self-awareness, that has to be proven somehow.
Lacking a logical way to prove it, the best course of action is to follow closely the way the human brain operates (at least spiking neural network (SNN) hardware).
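For context on what "SNN hardware" actually computes: below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit that most spiking/neuromorphic chips (e.g., Loihi, SpiNNaker) implement in silicon. This is an illustration only; the parameter values and the `simulate_lif` helper are mine, not from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values are illustrative, not from the paper.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-70e-3,
                 v_reset=-70e-3, v_thresh=-50e-3, r_m=1e7):
    """Simulate one LIF neuron; return membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input current.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
        trace.append(v)
    return np.array(trace), spikes

# 200 ms of constant 2.5 nA input produces a regular spike train.
current = np.full(200, 2.5e-9)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 200 ms")
```

The point of the sketch: computation happens through membrane dynamics and discrete spikes over time, which is qualitatively different from the dense matrix multiplications GPUs are built for.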
>The resulting AGI scores (e.g., GPT-4 at 27%, GPT-5 at 57%) concretely quantify both rapid progress and the substantial gap remaining before AGI.
This is nonsense. GPT scores cannot determine AGI level; they are the wrong algorithm running on the wrong hardware.
I have also seen no conflict-of-interest disclosure in the paper.