
The problem, I guess, with these methods is that they consider human intelligence as something detached from human biology. I think this is incorrect. Everything that goes on in the human mind is firmly rooted in the biological state of that human, and in the biological cycles that evolved over millennia.

Things like the chess-playing skill of a machine can be benchmarked against that of a human, but the abstract feelings that drive reasoning and correlations inside a human mind are more biological than logical.



Yup, I feel like the biggest limitation of current AI is that they don't have desire (nor actual agency to act upon it). They don't have to worry about hunger, death, or feelings, so they don't really have desires to further explore space or make life more efficient the way humans, who are on limited time, do. Their improvement doesn't come from the inside out like ours; it's externally driven (someone kicking off a training epoch). This is why I don't think LLMs will reach AGI, if AGI somehow ties back to "human-ness." And maybe that's a good thing for Skynet reasons, but anyways.


They do have desire. Their desire is to help answer human requests.

We can easily program them to have human desires instead.


Desire isn’t really the right word. A riverbank doesn’t desire to route water. It’s just what it does when you introduce water.


There is no reason to believe that consciousness, sentience, or emotions require a biological base.


There's equally no reason to believe that a machine can be conscious. The fact is, we can't say anything about what is required for consciousness because we don't understand what it is or how to measure or define it.


This is the only correct answer. People are trying to hit an imaginary target that they don't even know for sure exists.


I disagree. I think the leap of faith is to believe that something in our brains, made of physical building blocks, can't be replicated on a computer, which so far we've seen is very capable of simulating those building blocks.


I don't actually believe that. I think it's entirely possible. What I'm saying is: "I don't know what consciousness is, it makes no sense."

Even if a machine really is conscious, we don't have enough information to ever really know if it is.


I’m certainly not informed enough to have an intelligent conversation about this, but surely the emotions bit can’t be right?

My emotions are definitely a function of the chemical soup my brain is sitting in (or the opposite).


Your emotions are surely caused by the chemical soup, but chemical soup need not be the only way to arrive at emotions. It is possible for different mechanisms to achieve the same outcomes.


Perhaps we could say we don't know whether the human biological substrate is required for mental processes or not, but either way we do not know enough about either that biological substrate or our mental processes.


How do we know we've achieved that? A machine that can feel emotions rather than merely emulating emotional behaviour.


> How do we know we've achieved that? A machine that can feel emotions rather than merely emulating emotional behaviour.

Let me pose back to you a related question as my answer: How do you know that I feel emotions rather than merely emulating emotional behavior?

This gets into the philosophy of knowing anything at all. Descartes would say that you can't. So we acknowledge the limitation and do our best to build functional models that help us do things other than wallow in existential loneliness.


And Popper would say you cannot ever prove another mind or inner state, just as you cannot prove any theory.

But you can propose explanations and try to falsify them. I haven’t thought about it but maybe there is a way to construct an experiment to falsify the claim that you don’t feel emotions.


I suppose there may be a way for me to conduct an experiment on myself, though like you I don't have one readily at hand, but I don't think there's a way for you to conduct such an experiment on me.


I wonder what Popper did say specifically about qualia and such. There's a 1977 book called "The Self and Its Brain: An Argument for Interactionism". Haven't read it.

Preface:

The problem of the relation between our bodies and our minds, and especially of the link between brain structures and processes on the one hand and mental dispositions and events on the other is an exceedingly difficult one. Without pretending to be able to foresee future developments, both authors of this book think it improbable that the problem will ever be solved, in the sense that we shall really understand this relation. We think that no more can be expected than to make a little progress here or there.

... well. Thanks a bunch, Karl.


Because I can watch you dream and can measure the fact you’re asleep.


Philosophers have been worrying about the question of how you can know anything for thousands of years. I promise that your pithy answer here is not it.


A promise won't do it. You’ll have to substantiate it without resorting to argument from authority.


I don’t think that’s an argument from authority. “Experts have been discussing X without reaching a conclusion for a long time” is a premise from which a reasonable argument can be made for the unlikelihood that an off-hand comment on HN has solved X. Argument from authority doesn't take that form, though the two do have invoking authorities in common.


It's dangerous to go alone! Take this! https://en.wikipedia.org/wiki/Epistemology


If you’re not interested in engaging in a discussion, why bother replying?


Because you and I are the same species speaking a common language.


Ok, but ChatGPT speaks this language just as well as I do, and we also know that emotion isn't a core requirement of being a member of this species because psychopaths exist.

Also, you don't know what species I am. Maybe I'm a dog. :-)

(https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...)


Human-to-human communication is different from human-to-computer communication. The Google search engine speaks the same language as you; heck, even Hacker News speaks the same language as you, as you are able to understand what each button on this page means, and it will respond correctly when you communicate back by pressing e.g. the “submit” button.

Also, assuming psychopaths don‘t experience emotions is going with a very fringe theory of psychology. Very likely psychopaths experience emotions; they are maybe just very different emotions from the ones you and I experience. I think a better example would be a comatose person.

That said, I think talking about machine emotions is useless. I see emotions as a specific behavior state (that is, you will behave in a more specific manner) given a specific pattern of stimuli. We can code our computers to do exactly that, but I think calling it emotions would just be confusing. I would much rather simply call it a specific kind of state.
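
A minimal sketch of what I mean, mapping a pattern of stimuli to a behavior state that then biases subsequent actions (the state names and trigger thresholds are purely illustrative assumptions, not a real model):

    # Map a pattern of stimuli to a behavior state that biases later actions.
    # "Emotion" only in the functional sense above; names/thresholds are made up.
    def behavior_state(stimuli):
        if stimuli.get("threat", 0.0) > 0.7:
            return "avoid"      # biases toward withdrawal behaviors
        if stimuli.get("reward", 0.0) > 0.5:
            return "approach"   # biases toward seeking behaviors
        return "neutral"

    print(behavior_state({"threat": 0.9}))   # -> avoid
    print(behavior_state({"reward": 0.8}))   # -> approach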


That sounds awful.


This sounds like a bot comment.


How would you know? Bots speak just as well as you do.


I don't know, but I have substantial evidence:

1) I know that I have emotions because I experience them.

2) I know that you and I are very similar because we are both human.

3) I know that we can observe changes in the brain as a result of our changing emotions and that changes to our brains can affect our emotions.

I thus have good reason to believe that since I experience emotions and that we are both human, you experience emotions too.

The alternative explanation, that you are otherwise human and display all the hallmarks of having emotions but do not in fact experience anything (the P-zombie hypothesis), is an extraordinary claim that has no evidence to support it and not even a plausible, hypothetical mechanism of action.

With an emotional machine, I see no immediately obvious, even hypothetical, evidence to lend support to its veracity. In light of all this, it seems extraordinary to claim that non-biological means of achieving real emotions (not emulated emotions) are possible.

After all, emulated emotions have already been demonstrated in video games. To call those sufficient would be setting an extremely low bar.


Ah, I understand the statement now.


There is exactly one good reason, at least for consciousness and sentience, and that reason is anthropism: those are so vaguely defined (or rather defined by prototypes, à la Wittgenstein [or JavaScript before classes]).

We only have one good example of consciousness and sentience, and that is our own. We have good reason to suspect other entities (particularly other human individuals, but also other animals) have it as well, but we cannot access it, nor even confirm its existence. As a result, using these terms for non-human beings becomes confusing at best, and it will never be actually helpful.

Emotions are another thing; we can define those outside of our experience, using behavior states and their connection with patterns of stimuli. By that definition we can certainly observe and describe the behavior of a non-biological entity as emotional. But given that emotion is something which regulates behavior and which has evolved over millions of years, whether such a description would be useful is a whole other matter. I would be inclined to use a more general description of behavior patterns which includes emotion but also other means of regulating behavior.


They do not, but the same argument can hold given that true human nature is not really known, and thus trying to define what a human-like intelligence would consist of can only be incomplete.

There are many parts of human cognition, psychology, etc., especially related to consciousness, that are known unknowns and/or completely unknown.

A mitigation for this issue would be to call it generally applicable intelligence or something, rather than human-like intelligence, implying it's not specialized AI but also not human-like. (I don't see why it would need to be human-like, because even with all the right logic and intelligence a human can still do something counter to all of that. Humans do this every day: intuitive action, or irrational action, etc.)

What we want is generally applicable intelligence, not human-like intelligence.


What if our definition of those concepts is biological to begin with?

How does a computer with full AGI experience the feeling of butterflies in your stomach when your first love is requited?

How does a computer experience the tightening of your chest when you have a panic attack?

How does a computer experience the effects of chemicals like adrenaline or dopamine?

The A in AGI stands for “artificial” for good reason, IMO. A computer system can understand these concepts by description or recognize some of them by computer vision, audio, or other sensors, but it seems as though it will always lack sufficient biological context to experience true consciousness.

Perhaps humans are just biological computers, but the “biological” part could be the most important part of that equation.


Is there more reason to believe otherwise? I'm not being contrarian, I'm genuinely curious what people think.


That asks you to consider the statements:

There is reason to believe that consciousness, sentience, or emotions require a biological base.

Or

There is no reason to believe that consciousness, sentience, or emotions do not require a biological base.

The first is simple: if there is a reason, you can ask for it and evaluate its merits. Quantum stuff is often pointed to here, but the reasoning is unconvincing.

The second takes the form: there is no reason to believe P does not require Q.

There are no proven reasons, but there are suspected reasons. For instance, if the operation that neurons perform is what makes consciousness work, and that operation can be reproduced non-biologically, it would follow that non-biological consciousness would be possible.
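
To make that concrete, here is a minimal sketch of a non-biological reproduction of "the operation a neuron performs" in the standard weighted-sum-plus-nonlinearity abstraction (the weights and inputs are arbitrary assumptions, not a claim about real neurons):

    import numpy as np

    # Textbook abstraction of a neuron: weight the inputs, sum them,
    # and pass the result through a nonlinearity (here a sigmoid).
    def artificial_neuron(inputs, weights, bias):
        return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

    # Arbitrary example values.
    print(artificial_neuron(np.array([0.5, 0.1, 0.9]),
                            np.array([0.8, -0.3, 0.4]),
                            bias=-0.2))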

For any observable phenomenon in the brain the same thing can be asked. So far it seems reasonable to expect most of the observable processes could be replicated.

None of it acts as proof, but they probably rise to the bar of reasons.


What is the "irreplaceable" part of human biology that leads to consciousness? Microtubules? Whatever it is, we could presumably build something artificial that has it.


We “could presumably build” it; maybe we can do that once we figure out how to get a language prediction model to comprehend what the current date is or how to spell strawberry.


Don’t fool yourself into believing artificial intelligence is not one breakthrough away


All right, same question: Is there more reason to believe that it is one breakthrough away, or to believe that it is not? What evidence do you see to lean one way or the other?


It’s clearly possible, because we exist. Just a matter of time. And as we’ve seen in the past, breakthroughs can produce incredible leaps in capabilities (outside of AI as well). We might not get that breakthrough(s) for a thousand years, but I’m definitely leaning towards it being inevitable.

Interestingly, the people doing the actual envelope-pushing in this domain, such as Ilya Sutskever, think that it’s a scaling problem and that neural nets do result in AGIs eventually, but I haven’t heard them substantiate it.


> It’s clearly possible, because we exist.

This is not much different than saying that it’s possible to fly a spacecraft to another galaxy because spacecrafts exist and other galaxies exist.

Possible and practically attainable are two far different things.


> This is not much different than saying that it’s possible to fly a spacecraft to another galaxy because spacecrafts exist and other galaxies exist.

It is very different. We have never seen a spacecraft reach another galaxy so we don't know it is possible.

We have an example of what we call intelligence arising in matter. We don't know what hurdles there are between current AI and an AGI, but we know that AGI is possible.


You didn't answer the question. Zero breakthroughs away, one, or more than one? How strongly do you think whichever you think, and why?

(I'm asking because of your statement, "Don’t fool yourself into believing artificial intelligence is not one breakthrough away", which I'm not sure I understand, but if I am parsing it correctly, I question your basis for saying it.)


There are definitely breakthroughs in the way.

“one breakthrough away” as in some breakthrough away


I can think of a whole host of nearly impossible things that are one breakthrough away.

Let me know when I’ll be able to buy my $30,000 car with level 5 self driving.


Douglas Hofstadter wrote Gödel, Escher, Bach in the late 1970s. He used the short-hand “strange loops”, but dedicates a good bit of time considering this very thing. It’s like the Ship of Theseus, or the famous debate over Star Trek transporters—at what point do we stop being an inanimate clump of chemical compounds, and become “alive”. Further, at what point do our sensory organs transition from the basics of “life”, and form “consciousness”.

I find anyone with confident answers to questions like these immediately suspect.


What non-biological systems do we know of that have consciousness, sentience or emotions?


We have no known basis for even deciding that other than the (maybe right, maybe wrong) guess that consciousness requires a lot of organized moving complexity. Even with that guess, we don't know how much is needed or what kind.


It’s frequently pretty funny, anyway.


This sounds like a bot comment.


Well, you do tend to repeat yourself; maybe ChatGPT really is your peer with language?


Is there a reason to believe that consciousness, sentience and emotions exist?


None of that comes from outside of your biology and chemistry.


That sounds correct, though more fundamentally we don’t know what intelligence or consciousness are. It’s almost a religious question, in that our current understanding of the universe does not explain them, but we know they exist. So regardless of embodied intelligence, we don’t even understand the basic building blocks of intelligence; we just have some descriptive study of it, which IMO LLMs can get arbitrarily close to without ever being intelligent, because if you can describe it, you can fit to it.


The current AI buildup is based on an almost metaphysical bet that intelligence can be simulated in software and straightforwardly scaled by increasing complexity and energy usage.

Personally, I remain skeptical that is the case.

What does seem likely is that “intelligence” will eventually be redefined to mean whatever we got out of the AI buildup.


What about aliens? When little green critters finally arrive on this planet, having travelled across space and time, will you reject their intelligence because they lack human biology? What if their biology is silicon based, rather than carbon?

There's really no reason to believe intelligence is tied to being human. Most of us accept the possibility (even the likelihood) of intelligent life in the universe that isn't.


I think you missed or ignored the human part:

>human intelligence as something detached from human biology.

I don't completely agree with the previous comment, but there is something to be considered in their statement.


Sure, there's little doubt that our biology shapes our experience. But in the context of this conversation, we're talking about how AI falls short of true AGI. My answer was offered in that regard. It doesn't really matter what you think about human intelligence, if you believe that non-human intelligence is every bit as valid, and there is no inherent need for any "humanness" to be intelligent.

Given that, the constant drumbeat of pointing out how AI fails to be human misses the mark. A lot of the same people who are making such assertions haven't really thought about how they would quickly accept alien intelligence as legitimate and full-fledged... even though it too lacks any humanity backing it.

And why are they so eager to discount the possibility of synthetic life, and its intelligence, as mere imitation? As a poor substitute for the "real thing"? When faced with their easy acceptance of alien intelligence, it suggests that there is in fact a psychological reason at the base of this position, rather than pure rational dismissal. A desire to leave the purely logical and mechanical, and imbue our humanity with an essential spirit or soul, that maybe an alien could have, but never a machine. Ultimately, it is a religious objection, not a scientific one.


Alien or synthetic life will have to go through challenges similar to those that shaped human life, human intelligence, and our consciousness. No text prediction machine, no matter how complex or "large", has to change its evolving environment and itself, for example.


What you are talking about is experience/knowledge, not raw intelligence.

It has been proven that a Turing Machine and the Lambda Calculus have exactly equivalent expressiveness, encompassing the _entire set_ of computable functions. Why are you so sure that "text prediction" is not equally expressive?
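
To illustrate that equivalence concretely, here is a minimal sketch (in Python, purely for illustration) of lambda-calculus-style Church numerals computing the same addition an ordinary machine-style program computes directly:

    # Church numerals: the number n is "apply f n times".
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):                 # decode a Church numeral to a Python int
        return n(lambda k: k + 1)(0)

    two = SUCC(SUCC(ZERO))
    three = SUCC(two)
    print(to_int(ADD(two)(three)), 2 + 3)  # 5 5 -- same function, two encodings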


Why are you so sure that reality is reducible to your notion of computation, whatever that is?


I'm all ears if you want to explain how you have a magic soul that is too important and beautiful to ever be equalled by a machine. But if intelligence is not equivalent to computation, then what is it? Don't take the easy way out of asking me to define it; you define it as something other than the ability to successfully apply computation to the environment.

Was Helen Keller not intelligent because she lacked the ability to see or hear? Is intelligence defined by a particular set of sense organs? A particular way of interacting with the environment? What about paraplegics, are they disqualified from being considered intelligent because they lack the same embodied experience as others?

Whenever you give someone kudos for being brilliant, it is always for their ability to successfully compute something. If that isn't what we're discussing when we're talking about intelligence, then what are we discussing?


Yes, put words in my mouth and then ask me to defend them. Clearly I expressed support for the view that humans have a "magic soul that is too important and beautiful to ever be equalled by a machine"...

On the other hand, you are clearly stating that intelligence is computation. But you're right, it would be too easy to ask you to define what any of those words mean AND to back that claim.


I've done my best to express a logical argument for my assertions. I've defined what I mean by intelligence, and given examples of humans who lack major senses or physical capabilities, and yet are still considered intelligent; attempting to argue that intelligence is not tied to any physical characteristic, but is rather a dexterity and facility with computation. I haven't yet grokked what you're actually arguing, though. You just seem to dislike the idea of intelligence being compared to computation, but I don't know what you're offering as an alternative.


Yes, I like to think about addiction, as an example of a complex human behavior emerging from brain structure and mechanics.

Feels good so we want more so you arrange your whole life and outlook to make more feel good happen. Intelligence!


I think I need to point out some obvious issues with the paper.

Definition of artificial:

>Made by humans, especially in imitation of something natural.

>Not arising from natural or necessary causes; contrived or arbitrary.

Thus artificial intelligence must be the same as natural intelligence; only the process of coming up with it doesn't have to be natural. What this means: we need to consider the substrate that makes natural intelligence. The two cannot be separated willy-nilly without actual scientific proof. As in, we cannot imply a roll of cheese can manifest intelligence based on the fact that it recognizes how many fingers are in an image.

The problem arises from a potential conflict of interest between hardware manufacturers and the definition of AGI. The way I understand it, human-like intelligence cannot come from algorithms running on GPUs. It will come from some kind of neuromorphic hardware. And the whole point of neuromorphic hardware is that it operates (closely) on human brain principles. Thus, the definition of AGI MUST include some hardware limitations. Just because I can make a contraption "fool" the tests doesn't mean it has human-like cognition/awareness. That must arise from the form, from the way the atoms are arranged in the human brain. Any separation must be scientifically proven. If anyone implies GPUs can generate human-like self-awareness, that has to be somehow proven. Lacking a logical way to prove it, the best course of action is to closely follow the way the human brain operates (at least SNN hardware).
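
For context, a minimal sketch of the kind of spiking-neuron model (a leaky integrate-and-fire neuron) that SNN/neuromorphic hardware typically implements; all constants and the input here are illustrative assumptions, not a real hardware model:

    import numpy as np

    # Leaky integrate-and-fire: the membrane potential integrates input current,
    # leaks back toward rest, and emits a spike (then resets) past a threshold.
    dt, tau = 1.0, 20.0                        # timestep (ms), membrane time constant
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0
    v, spikes = v_rest, []
    input_current = np.random.uniform(0.0, 2.0, size=200)  # arbitrary stimulus

    for t, i_in in enumerate(input_current):
        v += (dt / tau) * ((v_rest - v) + i_in * 20.0)  # leak toward rest + scaled drive
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    print(f"{len(spikes)} spikes over {len(input_current)} ms")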

>The resulting AGI scores (e.g., GPT-4 at 27%, GPT-5 at 57%) concretely quantify both rapid progress and the substantial gap remaining before AGI.

This is nonsense. GPT scores cannot decide AGI level. They are the wrong algorithm running on the wrong hardware.

I have also seen no disclosure on conflict of interests in the paper.


And yet we're supposed to believe biological sex isn't real?

Which is it??



