Discussion Topic |
|
This thread has been locked |
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Apr 26, 2017 - 03:41pm PT
|
Funny thing is that you can answer most of these questions by simply watching your creative process when you do any kind of complicated task that requires your conscious attention, attention simply being your focused awareness. Trying to figure it out from the outside will only give you the mechanical processing aspect, and that is going on as well. In spades.
And Dingus, I have a motion detector in the back of my house that flips the light on once it gets an input from the sensor. That is, once the machine registers movement, it flips a switch owing to its programming. However, I doubt even you would claim that the sensor is consciously aware of and has an experience of not only BEING a sensor, but also flipping on a switch.
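The sensor example reduces to a toy sketch: a rule-bound trigger mapping input to output, with nothing in the loop that could be aware of anything. The threshold value and function name below are invented purely for illustration.

```python
# A toy motion-sensor controller: pure rule-bound input-output.
# Nothing here "experiences" being a sensor; it only maps readings to actions.

def sensor_controller(reading: float, threshold: float = 0.5) -> str:
    """Return an action for a raw sensor reading. The threshold is arbitrary."""
    if reading > threshold:
        return "LIGHT_ON"
    return "LIGHT_OFF"

# The rule fires the same way regardless of what the reading "means".
actions = [sensor_controller(r) for r in [0.1, 0.9, 0.4, 0.7]]
print(actions)  # ['LIGHT_OFF', 'LIGHT_ON', 'LIGHT_OFF', 'LIGHT_ON']
```

However complicated the rule gets, it remains the same kind of thing: registration and response.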
Computers work on much more complicated algorithms, but added complexity (the so-called complexity argument) has never, in the history of computer science, indicated that a computer was sentient, or that the computer was showing even the slightest deviation from syntactic processing, bound by rules and code, even code it writes "itself."
So we may fairly ask: What is the difference between Vulcan, world's most complex computer, up at Livermore, and MH2 while writing on this thread?
And when Hard AI geeks make claims for a sentient computer in five or ten years, what will be the difference between it and Vulcan? And you?
|
|
WBraun
climber
|
|
Apr 26, 2017 - 04:17pm PT
|
So then I approached Siri with my blow torch.
That stoopid gross material machine didn't make any noise nor say anything nor even scream.
Stoopid non-sentient machine .....:-)
|
|
Nuglet
Trad climber
Orange Murica!
|
|
Apr 26, 2017 - 04:40pm PT
|
Mind is a terrible thing to taste
|
|
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Apr 26, 2017 - 06:45pm PT
|
Was walking through a casino in Vegas when the celebrated Turkish Philosopher of Mind, Zoltar X., called me over and we were instantly in a beef re consciousness. Can't escape this stuff...
|
|
MH2
Boulder climber
Andy Cairns
|
|
Apr 26, 2017 - 07:01pm PT
|
So we may fairly ask: What is the difference between Vulcan, world's most complex computer, up at Livermore, and MH2 while writing on this thread?
Do you have an answer? I am curious. I did not know that Vulcan was writing on this thread.
[Click to View YouTube Video]
|
|
MikeL
Social climber
Southern Arizona
|
|
Apr 26, 2017 - 07:06pm PT
|
Dingus: To which I'd respond, 'no seriously, ask Siri. She'll set you straight.'
I think you’re delusional and a bit too optimistic about technology.
First of all, there’s the anthropomorphism of “She’ll.” You need to fix that.
I grant you technology does remarkable things, in this case machine learning. But understanding means perceiving a being and personality (let's put the TT {Turing Test} aside for the moment). In your work, I assume, being face to face brings far more recognition of what's going on. That might be understanding, . . . and it might come up short. There are case studies written on how people have misinterpreted counterparts from other organizations and societies. It's legion. Understanding, among human beings I think, tends to mean that one knows another. Is that what you think a machine (Siri) does? It knows me? Can Siri lie? Can it perceive a lie? Does it understand me enough to manipulate me? Can it show compassion from its heart, can it show sympathy authentically (feelings needed, here), can it be sensitive to who I am and what I am, can it exhibit mastery (know what that is?), can it be patient, forbearing, lenient, merciful, forgiving, and humane? Or is it a sophisticated input-output process mimicking what a TT would expose?
In my past field, I think there are all sorts of armchair theorists who have some kind of intuition on how things work and what's possible. It seems easy to be a futurist until you've actually had to make specific predictions. More importantly, at least academically, the proof of the pudding is coming up with theories and testing them with concrete data. You think that Siri is sentient, apparently. Do you think I could talk to it about my existential thoughts about the dementia that my father-in-law is going through and my own feelings about it?
|
|
MikeL
Social climber
Southern Arizona
|
|
Apr 26, 2017 - 07:10pm PT
|
Why, look you now, how unworthy a thing you make of me! You would play upon me; you would seem to know my stops; you would pluck out the heart of my mystery; you would sound me from my lowest note to the top of my compass: and there is much music, excellent voice, in this little organ; yet cannot you make it speak. 'Sblood, do you think I am easier to be played on than a pipe? Call me what instrument you will, though you can fret me, yet you cannot play upon me. (Hamlet, Act 3, Scene 2)
|
|
High Fructose Corn Spirit
Gym climber
|
|
Apr 26, 2017 - 07:45pm PT
|
The concept of being blown away by the future speaks to the magic of our collective intelligence — but it also speaks to the naivety of our intuition. -Tim Urban
http://waitbutwhy.com/2017/04/neuralink.html#part6
"A lot of people find it scary to think about, but I think it’s exciting. Because of when we happened to be born, instead of just living in a normal world like normal people, we’re living inside of a thriller movie."
....
Looking pretty tough there, Largo. Are you still benching three plates?
|
|
jgill
Boulder climber
The high prairie of southern Colorado
|
|
Apr 26, 2017 - 09:31pm PT
|
Beware an angry Zoltar . . .
|
|
MikeL
Social climber
Southern Arizona
|
|
Apr 26, 2017 - 10:12pm PT
|
Dingus:
It’s not my definition. It resonates with Mr. Webster’s. But what does he know?
I can see that you’re not a great fan of the humanities.
One of the problems of technologists thinking about sentience, in my view, is that their view tends to be rather narrow—that of a technologist: focusing on decisions, data collection, and analyses. I submit that what sentience is goes light years beyond those menial and vulgar tasks.
But, hey, I don’t live as a technologist. I appear to be a being that has unplumbed depths, a wide symphony of feelings, and thoughts that seem to have lives of all their own—all unbeknownst to me consciously. Moreover, I sense a soulful way of being in the universe that seems to be completely artistic.
I suggest to you: be careful whose authority and wisdom you rely upon. Siri’s? I’d demur. Facts and data, . . . ok, I guess. Those can be acquired in myriad places. In university, we think facts are not the most important things.
Sycorax, I bow to your expertise and choices in the matter. Now that I think of it, Hamlet was a waffler. :-)
Be well, all.
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Apr 27, 2017 - 06:57am PT
|
Jim Brennan,
you will have to ask Adam, as he likely knew the difference between his [digital] image of the vulnerable and the real thing.
|
|
WBraun
climber
|
|
Apr 27, 2017 - 07:00am PT
|
largo declared machines cannot understand human symbols
100% true.
The stoopid machine can only process the symbols that have been programmed into the machine to accept by a living entity.
It can't for the pure nuts and bolts of it understand the real meanings of those symbols.
Just as you can't even understand most of em either because you thinking you're a gross materialist in your mind.
That's why you mental speculate so much.
When you transcend the material plane then their actual full meanings become revealed .......
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Apr 27, 2017 - 07:00am PT
|
MikeL,
...where Searle partially relies upon “consciousness” to solve);
and his reliance on consciousness for the answer is what makes his line of defence circular -- or begging the question. IYW
|
|
MikeL
Social climber
Southern Arizona
|
|
Apr 27, 2017 - 07:36am PT
|
Dingus,
Understand = sentience in my book (and apparently in a few books on the consensus of the meaning of the word denotatively).
If you don’t mean sentience, then why don’t we make the word “understand” equivalent to the word “process,” ok? I’d be fine with that. Like the anthropomorphism with “she’ll,” you too could be criticized for sneaking in some hidden assumptions.
As a teacher, I find that “to understand” has huge implications for many of us. Grok. To be aware. To see into the essence of things. (Symbols or otherwise.) These are notions that were brought forward by the Greeks, but they have a pedigree and heritage that goes way back earlier than that. To understand is what separates advanced human beings from less developed human beings, more so from animals, and far from any machine, IMO.
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Apr 27, 2017 - 07:38am PT
|
JohnL,
how about something a little more constructive in all this chatter than the situation of a buzzer at your back door? I have one too, and it tells me when to sh#t.
Why not ask for or create a flow chart as to how you/one might model consciousness [for a robot, not simply an abstract computer unattached to a body]? And you are permitted to reverb all thoughts entering the consciousness back through the robot body.
From flow charts we can build, select or make algorithms, but I suspect Mother Nature's choices are quite simple, as electrochemical signaling is quite slow compared to electrical signaling in metal. Keep it simple.
Now let's understand both the ideas of kind and degree. We want for our first conscious prototype something quite simple, but it must have one aspect of consciousness to meet the criterion of kind. Of course it is likely to be crude, but this model difference then is simply a matter of degree of what we call consciousness.
I will contribute one idea to this endeavor for its beginning: the consciousness structure likely must have a signal suppressor to prevent data overload.
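The suppressor idea can be sketched as a simple gate that discards sub-threshold signals before they reach any downstream processing. The cutoff value and the function name below are hypothetical, chosen only to fix the idea.

```python
# A minimal "signal suppressor": a gate that drops weak inputs so the
# downstream process is not flooded. The cutoff value is arbitrary.

def suppress(signals, cutoff=0.3):
    """Pass only signals whose magnitude clears the cutoff; drop the rest."""
    return [s for s in signals if abs(s) >= cutoff]

raw = [0.05, 0.8, -0.02, 0.45, 0.1, -0.6]
print(suppress(raw))  # [0.8, 0.45, -0.6]
```

A real nervous system gates far more adaptively than a fixed threshold, but even this crude filter meets the "kind, not degree" criterion above: some inputs never reach the rest of the structure.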
The making [or having] of feelings is another interesting modeling problem -- would you rather start here?
|
|
MikeL
Social climber
Southern Arizona
|
|
Apr 27, 2017 - 07:48am PT
|
Dingus: . . . his reliance on consciousness for the answer is what makes his line of defence circular -- or begging the question.
I wouldn’t say so. Moreover, it’s a common practice to look to what Ed has exhibited in a cartoon many times: “here a miracle happens.”
It’s my academic opinion that all models rely upon something like that, even the field of mathematics.
“Let’s make the following assumptions to begin with . . . .”
“These axioms we hold to be true: . . . .”
“Let’s operationalize the variables in the following way.”
“Statistically, the testing of our data confirms . . . .”
“At a .05 level of confidence, . . . .”
“We’ve chosen the following site for our research study: . . . .”
On and on. Little stick men.
If you’re in the finding-truth, research game, you *must* suspend your judgment a bit to be able to contribute to the conversations.
Finding a petitio principii (“begging the question” fallacy) should not require complicated thinking. It should be rather obvious to a person not directly involved in a presentation. If you want to get technical about it, every so-called fact in the world “begs the question” if the universe / reality / the world is really just one thing.
|
|
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Apr 27, 2017 - 09:16am PT
|
I'll refresh your memory; largo declared machines cannot understand human symbols. I demonstrated with real world, off the shelf hand-held technology that is not the case.
Baloney, Dingus. What you showed is what they call "machine registration." That is, the machine, if programmed accordingly, can register an input and produce an output. But it's all syntactic. There's no semantic understanding. There is no "I got this" going on inside the machine. Only electrical signals.
What often gets conflated or confused here is the difference between recognition and understanding. I can recognize the symbols in fancy math equations (to some extent) but I understand little to nothing. Without an observing consciousness, there is no understanding. Only registration - and that's more than enough for the motion sensor outside my door to flip on a light or for a driverless car to negotiate traffic. Or whatever. It doesn't need to understand, only respond to inputs.
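This registration-versus-understanding distinction is essentially Searle's Chinese Room: a lookup table can return the "right" response by pure symbol matching, with comprehension nowhere in the system. A minimal sketch, with an invented rule book:

```python
# Searle's Chinese Room in miniature: the "room" answers correctly by pure
# symbol lookup, yet there is no semantic understanding anywhere inside it.

RULE_BOOK = {
    "ni hao": "ni hao",          # greeting -> greeting
    "ni hao ma": "wo hen hao",   # "how are you" -> "I am well"
}

def room(symbols: str) -> str:
    """Match input symbols against the rule book; no meaning is involved."""
    return RULE_BOOK.get(symbols, "wo bu dong")  # default: "I don't understand"

print(room("ni hao ma"))  # wo hen hao
```

From the outside, the exchange looks like comprehension; inside there is only registration and response, which is Searle's point and Largo's.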
Also, MH2 wrote: So we may fairly ask: What is the difference between Vulcan, world's most complex computer, up at Livermore, and MH2 while writing on this thread?
Do you have an answer? I am curious. I did not know that Vulcan was writing on this thread.
Hard to say if MH2 is being honest here (asking an honest question, as opposed to something in which he already has the "right" answer in his head), but amigo, you have confounded yourself over the years by conflating inside and outside, internal and external, subjective and objective, mechanical with conscious. If you are in fact clear on why you have done so, it should be an easy task to differentiate between Vulcan's process working on an input and your own process while reading these words. Of course you can slip and slide and dodge a simple, honest answer with double talk or deflection, but my sense (and I might be totally wrong) is that you don't actually know the difference well enough to clearly state it, and have to tap dance around an answer with various dodges and cant.
I have found it invaluable in making clear, at least to myself, what those differences are.
And Dingus, the fatal error (IMO) that people make in trying to model consciousness is to try and follow a Turing test model, or to get hung up on WHAT we are conscious of (qualia, etc.) as opposed to the fact that we are conscious of anything at all, be it Dennett's illusory subjectivity, or a quark. So if you want to start working up a model, start with awareness. It all starts there.
|
|
MH2
Boulder climber
Andy Cairns
|
|
Apr 27, 2017 - 09:23am PT
|
The making [or having] of feelings is another interesting modeling problem -- would you rather start here?
It would be lovely to see Dingus McGee and Largo breadboard an aspect of consciousness.
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Apr 27, 2017 - 11:07am PT
|
Largo,
your pat answer/question, "how could 1's and 0's ever make consciousness?", is just as clueless as someone asking how atoms and molecules could ever make a person.
|
|
eeyonkee
Trad climber
Golden, CO
|
|
Apr 27, 2017 - 11:36am PT
|
Dingus McGee, I have one question for you. How could atoms and molecules ever make a person?
|
|