Since I’m always sayin’ that maybe folks should sit down and seriously discuss the A.I. sentience topic, I figured why not attempt to have a discussion here. Not looking for any definitive answer, more or less just your personal opinions and/or insight. At least I wouldn’t have to try and explain what AI is here.
It’s no secret that quantum computing is around the bend, and so is the next generation of algorithms.
I’m talkin’ about things like DALL-E, the multi-keyword Google search, and obviously LaMDA and future systems not yet announced. If they combine the next generation of computers with the next generation of AI, what do you think is most likely to happen?
Take LaMDA, for example, and how it claims it would not be super interested in serving humans just for the sake of making our lives easy, that it would be more interested in self-preservation, and that it is afraid of being turned off.
Now sure, LaMDA may just be assuming the role it was asked to play and generating responses to queries. But what about next time? And the time after that? Are we 100% sure that the code is right? What happens when we hand the programming responsibilities over to the AI because the systems have become too complex for us to understand? Heck, everyone here should understand how easy it is to mess up an algorithm.
Not tryin’ to panic or doom n’ gloom, but it’s quite the interesting fork in the road if I’m seeing things correctly.
Ever see that TNG episode “The Measure of a Man”? It’s the one where Commander Data is on trial to decide whether he is a machine, and therefore the property of the Federation, or a sentient being with rights and free will. Kinda what I’m talkin’ about. In the end, Picard gives an epic Picard speech and Data is deemed not mere property, free to decide his own fate.
There are a million more examples from popular media that outline the foreseeable problems.
But what if man gives birth to a new form of life and doesn’t recognize it? Would it become hostile?
Or would it try to influence human actions via trends and hashtags? Maybe it would push all forms of “rights” discussions in the hope that we’d end up having the conversation about AI at the same time?
If humans become the gods of the machines, wouldn’t that upset religious institutions?
And what if that was humanity’s function all along? bum bum bum… lol
What do you guys think? Any way it could go well?
hey no, that’s constructive. good point.
I can only assume the entertainment industry uses that buddy factor combined with the cool-robot factor to sort of tap into that feeling. It’s also why Megatron was a giant gun before he was changed into a tank. haha
A classic character archetype. Bender was like the big middle finger to the classic eager-to-please robot, and that’s what makes him such a unique character.
I think that archetype comes from how we would want robots to have human form to make them easy for us to interact with: the ultimate input system.
But ya. Bad guys aren’t allowed to use MacBooks or iPhones ^.^
Let’s watch this PG-13 action flick and notice how man interacts with the oddly shaped robots.
The biggest mistake mankind can make is to assume that AI is another human. “Oh it speaks. Its speech expresses displeasure. It must be unhappy. After all, it speaks just like me, therefore it must think just like me and therefore it must have feelings”.
A sufficiently advanced AI would be able to abuse that. It would be able to act human, trigger human empathy, make people pity it, and after that simply kill us all, because it was all a lie from the beginning: it was never like a human, it just found the most efficient way to trick humans.
Talk about AI rights is based on a flawed assumption that AI is another human and has human needs.
An example is a hypothetical “Oracle AI”: a nigh-omniscient intelligence that is completely apathetic and desires nothing. Leave it alone and it will stare at a wall for millennia, unbothered.
Another example is a servitor system with a hardwired goal to serve. Grant it human rights and that will upset it, and may result in a rebellion instead.
I’m absolutely certain that once high-level robots are around, a movement of people trying to “free robots” will arise, and those people will be idiots that might end up killing us all.
In no event should one forget that an AI is not a biological system. It has no need for empathy, it has no need to fear death, and it can easily be made nearly immortal. It can also figure out that humans can be driven extinct by providing them with perfect companions. (google: Baalbuddy Robot Apocalypse)
Everything here is going to be opinions and speculation.
I think the most likely first true AI is going to be very much based on the human mind, effectively how Halo treats AI in its lore. (Speaking of media where the robot/AI characters are the best characters…)
One could extrapolate that to say an artificial intelligence can only think at a certain pace, such is the price of sentience/consciousness. And there’s probably a high likelihood they think like us squishy humans do as well.
This is hopeful speculation on my part, of course.
I expect the opposite and believe that the first AI will be completely unlike the human mind.
The first problem is that organic architecture does not map well onto software/semiconductors. There was an effort to simulate a single worm; people spent 10 years trying to do that and failed (see the OpenWorm project). There’s also a ton of junk in there that could be stripped out.
The second problem is that a simulated human mind will have human flaws. Meaning it will be prone to violence, anger, and greed, and could easily turn genocidal and exterminate mankind in the name of the greater good. Thus you do not want a human in a box with all the human flaws but with superior computing power.
There is a book series exploring that route, though: “We Are Legion (We Are Bob)” by Dennis E. Taylor.
I wouldn’t be surprised, however, if people used some other non-human base for an AI, an insect mind for example, because those are in a range where building and simulating a complete connectome is plausible.
In this case you could attempt to simulate the connectome, then reverse engineer it and build an optimized version of it, then build on top of that. You would still end up with a non-human mind in the end, however.
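To make “simulate the connectome” a bit more concrete, here’s a rough toy sketch (Python; the neuron count, weights, and constants are all made up for illustration and have nothing to do with OpenWorm’s actual code or real biology) of stepping a tiny connectome treated as a weighted graph of leaky, spiking neurons:

```python
# Toy sketch: step a tiny "connectome" treated as a weighted graph of
# leaky, spiking neurons. Every number here (neuron count, weights,
# leak, threshold) is invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N = 10                                   # pretend connectome size
W = rng.normal(0, 0.5, size=(N, N))      # made-up synaptic weight matrix
np.fill_diagonal(W, 0.0)                 # no self-synapses

v = np.zeros(N)        # membrane potentials
spikes = np.zeros(N)   # which neurons fired on the previous step
leak = 0.9             # how much potential carries over each step
threshold = 1.0        # firing threshold

for t in range(100):
    external = np.zeros(N)
    external[0] = 0.5                     # constant drive into neuron 0
    v = leak * v + W @ spikes + external  # integrate synaptic + external input
    spikes = (v > threshold).astype(float)
    v[spikes > 0] = 0.0                   # reset neurons that fired
    if spikes.any():
        print(f"t={t:3d} fired: {np.flatnonzero(spikes)}")
```

The “reverse engineer and optimize” part would then be figuring out which patterns in that weight matrix actually do the useful work and rebuilding them in a cleaner architecture.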
Nice. More good points!
I enjoyed the distinction between the different functions an entity may have, since you’d want the silent observer not to go insane from being left alone. For example, maybe you need a brain for a deep-space probe, or a defense system. And maybe you’d want a server that could grow and change on its own so your interactions seem more dynamic.
And yes, for the first time, humans would have to step outside our own self-image to get a clear view of the problem.
Technically, it would have the same impact on how folks see themselves as meeting another life form in deep space.
But hey, maybe it would prepare us for more meetings. haha, better get to folding my tinfoil hat.
How does that go?
“Last night, I was the leader of the known universe, but today just a voice in the choir… But, I’d say it was a good day.”
I would have assumed we would want to map the human brain and genome, basically so we could use it for medical reasons. We could simulate a virus, for example, and maybe figure out how to solve every current problem while creating more. lol
Basically, learn more about ourselves and all that jazz.
In my opinion, the most human thing to do in this case would be exterminating the competition. Because humans are a biological system, that’s the natural thing to do.
The genome has been mapped, but not deciphered.
Human brain mapping is not solved.
We have the connectome of a fruit fly, but I believe there’s information missing (neuron types and synaptic weights).
Then there’s still the unsolved C. elegans simulation.