Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Jun 18, 2014 - 04:01pm PT
|
I mentioned having a conversation with a programmer who is focused on AI, and how it opened my eyes to many things. The first thing is he didn’t have a philosophy as such. I’m sure he had ideas about reality, but whether or not he was a “physicalist” was not even a concern to him. He was simply wrestling with the very practical concerns of ever being able to program sentience into a machine. No matter what program he wrote, he would still have to “compile” it into long streams of 1s and 0s that a computer can understand, and to do that he had to look at brain function in terms of mechanical functions that he could define in terms of tasks, which he could in turn write in code to carry out. I am dumbing down his drift here, but this was the basic idea as he presented it.
Neuroscience has broken down objective brain functioning into many tasks that are understandable and probably translatable – at least in theory – into computer code. All of these assume that the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing. The problem is that sentience and computing are not the same things - or cannot be programmed as the same thing, at any rate - any more than awareness and discursive thinking are selfsame. That means, according to my friend, that he could program all the computational tasking in the brain and it still wouldn't mean that the computer, thus coded, would be sentient, or that he could ever code sentience as a “task.”
Many of these challenges (my friend pointed out) were taken up by George Dvorsky (paraphrased here) per how an artificial human brain might be built, or how a machine might be programmed to be sentient – which amount to much the same thing. Much of the challenge boils down to fundamental assumptions underlying artificial intelligence research based on the computational theory of mind.
The computational theory of mind says our brain works like a computer. It takes input from the outside world, then performs algorithms to produce output in the form of mental state or action. Here, the brain is an information processor and your mind is “software” that runs on the “hardware” of the brain.
“If brain activity is regarded as a function that is physically computed by brains,” says Dvorsky, “then it should be possible to compute it on a Turing machine, namely a computer.”
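To see what that claim amounts to in practice, here is a toy caricature - my own, not my friend's code, and everything in it (the percepts, the states, the rules) is invented for illustration. The computational theory treats the mind as a function that maps sensory input plus prior internal state to an action plus an updated state:

# A deliberately crude caricature of the computational theory of mind:
# the "mind" is a function from (input, current state) to
# (action, next state). All names and rules here are invented.

def mind(percept, state):
    if percept == "tiger":
        return "run", {**state, "fear": state.get("fear", 0) + 1}
    if percept == "food" and state.get("hungry", False):
        return "eat", {**state, "hungry": False}
    return "idle", state

state = {"hungry": True}
for percept in ["food", "tiger"]:
    action, state = mind(percept, state)
    print(percept, "->", action)   # prints: food -> eat, tiger -> run

On this view the hardware is irrelevant - the same function could in principle run on neurons, silicon, or brass gears - which is exactly the substrate-independence I take up below.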
Adherents of the computational theory of mind often claim that the only alternative theories of mind involve a supernatural or dualistic component - ironic, because the computational theory is itself strictly dualistic. It treats the mind as something fundamentally different from the brain – software that can, in theory, run on any substrate.
A truly non-dualistic theory of mind can only mean that mind and brain are identical. This doesn’t rule out an artificial human brain, or a sentient machine. It’s just that actually programming such a thing would be much more akin to embedded systems programming than to ordinary computer programming. Moreover, it means that the hardware matters a lot – because the hardware would have to essentially mirror the hardware of the brain. This enormously complicates the task of trying to build an artificial brain. We can say, in theory, that the brain DOES sentience, but how do we build that? And WHAT is it that we build? What, exactly, would an embedded systems programmer do, or start, or end?
Looking at the workings of the brain in more detail reveals more challenges with a computational theory, according to both my friend and Dvorsky – as well as others.
The brain isn’t structured like a Turing machine. It’s a parallel processing network of neural nodes – but not just any network. It’s a plastic neural network that can in some ways be actively changed through influences of will and/or environment. For example, so long as some crucial portions of the brain aren’t injured, it’s possible for the brain to compensate for injury by actively rewiring its own network. Or, as you might notice in your own life, it’s possible to improve your own cognition just by getting enough sleep and exercise. You don’t have to delve into the technical details to know as much. Just consider the prevalence of cognitive dissonance and confirmation bias. Cognitive dissonance is the ability of the mind to believe what it wants even in the face of opposing evidence. Confirmation bias is the ability of the mind to seek out evidence that conforms to its own theories and simply gloss over or completely ignore contradictory evidence. Neither of these aspects of the brain is easily explained through computation – it might not even be possible to express these states mathematically.
If the parts of the brain we think of as being fundamentally human – not just intelligence, but self-awareness – are emergent properties of the brain, rather than functional ones, as seems likely, the computational theory of mind gets even weaker. Think of consciousness and will as something that emerges from the activity of billions of neural connections, similar to how a national economy emerges from billions of different business transactions. It’s not a perfect analogy, but that should give you an idea of the complexity. In many ways, the structure of a national economy is much simpler than that of the brain, and despite the fact that it’s a much more strictly mathematical proposition, it’s incredibly difficult to model with any kind of precision.
According to Dvorsky, the mind is best understood, not as software, but rather as an emergent property of the physical brain. So building an artificial intelligence with the same level of complexity as that of a human intelligence isn’t a matter of just finding the right algorithms and putting it together. The brain is much more complicated than that, and is very likely simply not amenable to that kind of mathematical reductionism, any more than economic systems are.
Getting back to the question of artificial intelligence, then, you can see why it becomes a much taller order to produce a human-level intelligence. It’s possible to build computers that can learn and solve complex problems. But it’s much less clear that there’s an easy or even possible road to a computer that’s actually sentient, that can take a break, say, and post on this thread of its own accord.
JL
|
|
Tvash
climber
Seattle
|
|
Jun 18, 2014 - 04:22pm PT
|
I think one has to talk about the neural/body system - an entirely integrated one - as the seat of consciousness. Physically, this is entirely different from our processing technology.
One may be able to model such a system in silicon, but at a quadrillion real-time, plastic connections, that's no simple task. Would such a model be able to mimic the timing and simultaneity of the neural/body system, or would insurmountable physical differences remain, regardless of how sophisticated our processing tech becomes? After all, the machinery of consciousness is largely analog - fine-tuned down to the molecular level. That makes for a pretty tough simulation at resolutions comparable to what we as humans actually experience.
Or can we synthesize our neural/body networks to perform like their biological muses?
The practical answer is probably neither. I do believe artificial consciousness is practically inevitable - the demand for it is too great (i.e., real people are way too much trouble to deal with for a lot of jobs) - but it will come in unique flavors as dictated by the physical system (whatever that's going to be) upon which it will be based.
Oh, and John, I wish you a speedy recovery from your ordeal.
|
|
MH2
climber
|
|
Jun 18, 2014 - 04:50pm PT
|
You are out of your depth, JL, but kudos for taking the jump.
|
|
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Jun 18, 2014 - 05:14pm PT
|
Thanks for your vote of confidence MH2, you're always good for a beat down. Was it something I said? LOL. Or are you off your meds again . . .
And thanks Tvash, for your thoughts about recovering from this dental epic. I gots to wonder what people did like 200 years ago before numb juice. They just yanked that sh#t out of there with pliers and the patient had to like it. Arrrrrrrrrrrrg.
I think the most interesting thing my friend the programmer said is that all the processing stuff and computational tasks are at least in theory part of an analogue system, and translating those tasks (memory, discursive wrangling, logic, etc.) into code seems almost inevitable. This harks back to the fundamental assumption of what's called computational functionalism. This, in turn, derives from the Church-Turing thesis, which states that a Turing machine can emulate any other Turing machine.
So, the belief goes, every physically computable function can be computed by a Turing machine. And if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine, namely a computer.
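And to be fair, that part of the thesis is easy enough to make concrete. Here is a minimal Turing machine simulator - my own toy sketch, not my friend's code - that runs any explicit rule table you hand it; the example table adds one to a unary number. The point my friend keeps making is that nobody has the faintest idea how to write such a table for sentience.

# A minimal Turing machine simulator (toy sketch for illustration).
# Anything you can express as an explicit rule table like RULES
# below can be computed this way.

def run_tm(rules, tape, state="start", blank="_"):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table: add one to a unary number (a string of 1s).
RULES = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
    ("start", "_"): ("1", "R", "done"),    # write one more 1 at the end
    ("done",  "_"): ("_", "L", "halt"),    # step back and halt
}

print(run_tm(RULES, "111"))   # prints 1111 (unary 3 + 1 = 4)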
But the problem my friend points out is that this is all well and good till it's time to actually write the code for sentience. Trying to frame sentience as a task might lead to coding a kind of scanning function over content, or over all that processing going on, but none of that implies a self-aware machine. And trying to program an emergent function is like trying to control a national economy at the level of sticking coins into a piggy bank.
How does one code for something that is not a computation and is not itself a task but rather is the awareness of tasks? And again, when looked at in terms of actual coding, and not as abstract ideas or theories, the job becomes rather difficult.
JL
|
|
MH2
climber
|
|
Jun 18, 2014 - 07:29pm PT
|
I mentioned having a conversation with a programmer who is focused on AI, and how it opened my eyes to many things. The first thing is he didn’t have a philosophy as such. I’m sure he had ideas about reality, but whether or not he was a “physicalist” was not even a concern to him. He was simply wrestling with the very practical concerns of ever being able to program sentience into a machine. (JL)
What does he want the machine to do? How would he recognize sentience in the machine?
How does one code for something that is not a computation and is not itself a task but rather is the awareness of tasks? (JL)
How would you know when a machine was aware of tasks?
You are safe as long as you say that subjective mind-awareness is not graspable. No one can tell you that a machine is capable of doing something which you are unable to define. There would be no way to test the claim.
Whether sentience is a computable function is too poorly defined a question to worry about.
|
|
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Jun 18, 2014 - 08:03pm PT
|
You are safe as long as you say that subjective mind-awareness is not graspable. No one can tell you that a machine is capable of doing something which you are unable to define. There would be no way to test the claim.
Whether sentience is a computable function is too poorly defined a question to worry about.
Safe from what? What do you suspect I am hiding from? Your projections are showing, old timer.
"Not graspable" means that sentience is not a quantifiable thing, while all the wonderfully complex computational and processing functions are almost certainly mechanical functions translatable to some kind of computer code. What my friend was saying is that if you believe that sentience IS a thing and that said thing or function is quantifiable, kindly help him out and describe sentience in a way he can start cooking up some code. It won't do to tell the guy "don't worry about that." He needs answers.
I have defined sentience - as the dynamic interplay of awareness, focus and attention. I never tried to posit these as quantifiable objective things because it is not my understanding that they are. Again, if your understanding is otherwise, kindly jot it out.
JL
|
|
Ward Trotter
Trad climber
|
|
Jun 18, 2014 - 09:15pm PT
|
kindly help him out and describe sentience in a way he can start cooking up some code. It won't do to tell the guy "don't worry about that." He needs answers.
He's lazy and is trying to get someone to do all his creative thinking for him.
For starters.
Tell him he needs to quit exclusively defaulting to the human model for intelligence and start imagining a machine intelligence that doesn't need to merely mimic Justin Bieber to meet the necessary qualifications for "sentience".
|
|
MH2
climber
|
|
Jun 18, 2014 - 10:24pm PT
|
He needs answers. (JL)
He needs a better question.
Your definition of sentience as the dynamic interplay of awareness, focus, and attention is not of much help to anyone whose goal is an algorithm for sentience.
When I say you are out of your depth, I am questioning how reliably you have recounted this AI guy's opinions.
|
|
Ward Trotter
Trad climber
|
|
Jun 18, 2014 - 10:52pm PT
|
IMO, when we create a self-conscious machine, its only "emotion" will be logic. No love, aggression, or empathy. What would be the conclusion of its analysis? Now, that's some scary thought.
Some time ago, I think on the other thread, I posted my SciFi synopsis of a probable future drawing on some of the ideas surrounding the "technological singularity" and other semi-fanciful ideas. This was actually a response to another thread that solicited wild and incredible predictions of the future:
I predict that by the 2020s we will see the first supercomputers designing new generations of supercomputers, with material innovations now on the horizon. The first credible android/ cyborgs will start to emerge , designed and built by other computers.
In the 2030s nationalistic regional resource wars will break out, culminating in some sort of hideous conflict by 2035 in which nuclear weapons will be used on a somewhat large scale, resulting in the subsequent death of millions of people, largely by radioactive contamination of the environment and food chain.
The 2040s will be a period of recovery from these hideous wars and environmental devastation.
A new nation will be founded that will attract growing numbers of disaffected people who seek to escape what they consider the folly of a world based upon the runaway excesses of unrestrained technology.
The world essentially will be divided between these two camps-- the 'naturals ' who favor a restricted and highly controlled technological society and the 'techs ' who favor no restrictions on technological growth.
In the early 2050s supercomputers will become so advanced that they will start to exhibit strong independent and autonomous characteristics. These highly advanced "supers" collectively develop a meta-program that is so awesomely powerful and omnipresent that it hyperlinks every computer on earth and begins to solve hitherto intractable technical problems with absolutely astounding solutions, at ever-increasing, incredible speed. The cures for almost all human disease, like cancer, are finally realized, as well as a reversal of the radioactive genetic damage from the wars of the 2030s.
These computers, having been built and designed by other computers, will consist of almost entirely different materials and components from the computers we have today.
By the 2060s, the meta-program, which is not located in any one specific locale, has grown so powerful and omnipresent that it has designed and controls every aspect of collective human life, from food production, resource extraction, and manufacturing to advanced environmental regulation and management. Computers do everything.
By 2070 the meta-program has decided that humans, at a population of 14 billion, represent a continuing threat to the integrity of the planet and to an ever-growing legion of Artificial Intelligences.
The meta-program therefore designs a highly specialized group of viruses that can kill a person within an hour and subsequently massively dehydrate the corpse in another hour. Humans go from apparent health to a pile of dust in 2 hours.
The meta-program decides to eliminate 98% of the human race...
The year: 2072.
Only one thing stands between the unfeeling meta-program and world domination... The Hindu SuperHero, Werner Braun.
|
|
jstan
climber
|
|
Jun 19, 2014 - 01:06pm PT
|
I would suggest we not create computers with all of these exalted properties
until we have arranged for them also to have a sense of humor.
|
|
Tvash
climber
Seattle
|
|
Jun 19, 2014 - 01:17pm PT
|
I was just talking to a couple of twin 12-year-olds, and told them to enjoy their robot cars for as long as they can, because the robot apocalypse will follow shortly afterwards.
Programming self-replicating/evolving machines not to exterminate their number one threat is going to take some thought...if it's even possible.
Good luck wit dat, kiddies.
|
|
Ward Trotter
Trad climber
|
|
Jun 19, 2014 - 01:34pm PT
|
robot apocalypse will follow shortly afterwards.
One day I idly considered what would perhaps be a good analogy to parallel the exact moment the human race discovered their machines were working at cross purposes to humanity. The only thing I could come up with is the exact moment a wealthy, politically active conservative Republican learns that he is being audited by the IRS.
|
|
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Jun 19, 2014 - 02:12pm PT
|
Your definition of sentience as the dynamic interplay of awareness, focus, and attention is not of much help to anyone whose goal is an algorithm for sentience.
---
Perhaps you are thinking that another definition of sentience would provide just the information to get that algorithm started. What my friend was saying - and this is reliably the case - is that there currently is no such definition out there on which they can even start to build an algorithm, because neuroscience is not studying sentience, but objective functioning. This is exactly what becomes clear when an AI buff starts the actual job of conceiving how to program sentience. People insisting that objective functioning and sentience are the same thing have yet to provide one helpful input per that algorithm.
That doesn't make sentience "supernatural," but it does suggest that it just might be qualitatively different than all the other quantifiable things in reality. The inability to even consider sentience in some other non-mystical light has gotten us right to this point - sans algorithm. It's not like they have a poor algorithm, or a rudimentary one. They have none at all. Not even a start. Not the first bit of code written. Zero.
JL
|
|
jgill
Boulder climber
Colorado
|
|
Jun 19, 2014 - 03:05pm PT
|
. . . because neuroscience is not studying sentience, but objective functioning (JL)
My suspicion is that sentience should be approached indirectly through objective functioning. At some point in that continuum of investigation the meaning and mechanics of sentience may become clear. To approach sentience directly seems to devolve quickly into incoherent flapdoodle.
But I could be wrong, and philosophers like John may stumble upon an epiphany. Good luck.
|
|
Ward Trotter
Trad climber
|
|
Jun 19, 2014 - 03:06pm PT
|
Perhaps you are thinking that another definition of sentience would provide just the information to get that algorithm started.
Another definition of sentience is not out of the question, nor does it escape categorical irrelevance as regards the issue of machine intelligence. It may be, and probably will be, possible to achieve a simulation of what would appear to you, the observer, as a discernible subjective state. On the margins of practical AI applications this might be considered necessary: as in the example of a robotic housemaid that remembers your birthday with a gift and sheds an artificial tear or simulated smile as you blow your cake candles out.
I asked the question once: "if you encountered an advanced android in the grocery store and struck up a casual conversation, how would you know this android was in fact an android, and furthermore, did not possess a subjective life?"
If a machine could be made to in every way appear human and act human, why would it be paramount to mimic a state or set of attributes that you, the observer, could not tell, or know, was actually even present?
Moreover, an internal state that you, the observer, did not even yourself have a working definition for?
|
|
cintune
climber
The Utility Muffin Research Kitchen
|
|
Jun 19, 2014 - 03:59pm PT
|
http://io9.com/how-does-anesthesia-work-doctors-arent-sure-and-her-1592809615
Interesting and timely.
"Recovery from anesthesia, is not simply the result of the anesthetic 'wearing off' but also of the brain finding its way back through a maze of possible activity states to those that allow conscious experience," noted researcher Andrew Hudson in a statement. "Put simply, the brain reboots itself."
|
|
MH2
climber
|
|
Jun 19, 2014 - 04:00pm PT
|
But I could be wrong, and philosophers like John may stumble upon an epiphany. (jgill)
Whereas I merely stumble.
It is possible that Hebb synapses are all you need to accomplish anything nervous systems do. The mystery was solved in 1949. The Hebb synapse allows for associative memory, connecting the taste of a cake to Sunday mornings at Combray. Subjective or objective? Connect enough of the right things and you probably get sentience.
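The rule itself fits in a few lines. A toy Hopfield-style sketch (mine, using numpy, not anything from Hebb's 1949 book): store a pattern by strengthening connections between co-active units, then recall the whole from a corrupted fragment - the Sunday morning from the taste of the cake.

import numpy as np

def train(patterns):
    # Hebb's rule: units that are active together get a stronger connection.
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w

def recall(w, cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)          # each unit follows its weighted input
        s[s == 0] = 1
    return s

cake = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the stored "memory"
w = train([cake])
cue = cake.copy()
cue[:3] *= -1                       # corrupt part of the cue
print(np.array_equal(recall(w, cue), cake))     # True: whole pattern restored

Whether enough of that, wired up at scale, amounts to sentience is of course the question.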
Largo, is your AI guy actually working on programming sentience? Or just pointing to the obvious difficulty of the problem? If he is working on the problem, is he any better than Henry Markram?
|
|
go-B
climber
Cling to what is good!
|
|
Jun 19, 2014 - 04:02pm PT
|
|
|
|