Discussion Topic |
|
This thread has been locked |
jgill
Boulder climber
The high prairie of southern Colorado
|
|
Sep 21, 2017 - 04:43pm PT
|
So sorry to hear of Lute's passing, Dingus. The little guys are great and trustworthy companions.
|
|
MH2
Boulder climber
Andy Cairns
|
|
Sep 21, 2017 - 08:35pm PT
|
yanqui,
We have confused ourselves. An essentially human trait, perhaps. Brains too big.
Here is what I quoted from paul:
Seems to me that what separates machine knowing from human knowing is the independent entity that realizes it is knowing, that observes the process of knowing, that finds satisfaction in the experience of knowing
Then paul followed up:
The question isn't what you know, it's what is knowing? How is your knowing different than the knowing of your heater's thermostat or your toaster?
So we need paul to tell us what knowing is. If he doesn’t know, why did he make the claims in the first quote?
People use words like knowing, awareness, self, and consciousness in colloquial speech. Looseness in the meaning of these words is easily tolerated.
Paul often makes grand statements that are short on specifics. Less often he makes statements about history that have wonderful detail. He can do better than, “the independent entity that realizes it is knowing, that observes the process of knowing, that finds satisfaction in the experience of knowing.”
Depending on what is meant by “satisfaction,” a fly could find satisfaction in knowing that its stomach is full. Tell me why a fly is not an independent entity that realizes it is knowing, that observes the process of knowing, that finds satisfaction in the experience of knowing.
As to machines, they are too different from us, at present, to function as independently as a fly does. I can see no reason to attempt to compare machine awareness to human awareness, unless you get past the loose notion of “awareness” to some better basis for comparison.
Could a car have self-awareness? Could a car have a self? We can answer in the same vein as the question. We already have the phrase, “self-driving car.” So we agree that a car can have a self. If that car monitors the air pressure in its tires, then it is self-aware, loosely speaking.
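To make "loosely speaking" concrete, here is a toy sketch (the class, attribute names, and the 30 psi threshold are invented for illustration, not taken from any real self-driving system): a car object that checks its own tire pressure is "self-aware" only in the thin sense that its inputs happen to describe itself.

# Purely illustrative toy: a car object that monitors part of its own state.
# All names and the threshold are made up for this example.

class Car:
    def __init__(self, tire_pressure_psi):
        self.tire_pressure_psi = tire_pressure_psi  # part of the car's own state

    def self_check(self):
        """Report on the car's own tires -- 'self-aware' only in the loose sense."""
        if self.tire_pressure_psi < 30:
            return "Low tire pressure: I need air."
        return "Tires OK."

car = Car(tire_pressure_psi=27)
print(car.self_check())  # prints: Low tire pressure: I need air.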
edit:
I should add that yanqui distinguishes different possible kinds of awareness in a car: being aware of the road versus being aware that it is a machine that knows how to drive a car. Paul can surely acknowledge that there are different kinds of knowing, too, and that the distinctions could be important when comparing people to machines or other animals.
|
|
WBraun
climber
|
|
Sep 21, 2017 - 09:49pm PT
|
What your problem is, is that you spin everything in circles inside your uncontrolled minds.
You have no control over your minds because you don't even know what your own mind is and how it works.
Just accepting and rejecting according to its whims is all you have.
All because you are ultimately clueless.
Clueless people have no real foundation.
They build it on ether and leave out intelligence.
Everything you have is built on theory and you have nothing solid.
Those are your defects .....
|
|
yanqui
climber
Balcarce, Argentina
|
|
Sep 21, 2017 - 10:36pm PT
|
I can see no reason to attempt to compare machine awareness to human awareness, unless you get past the loose notion of “awareness” to some better basis for comparison.
Ok. I think part of the problem is that as we grow up, we develop a complex conceptual framework (concepts, theories, stories, memories, biases, even outright falsehoods) that mediates and defines our awareness of ourselves. First off, there are gazillions of simple facts that might fall in the realm of self-awareness. Stuff like "this is my hand", "my nose itches", "I need to put shoes on". Perhaps the equivalent of a self-driving car recognizing it needs a fill-up. But then there are gazillions of deeper aspects. Some have to do with the roles we take on. Roles are a big part of how we define who we are. Am I a good teacher? A good father? Brother? Husband? Do I do enough research? Should I take more care of the environment? What does all that stuff mean, and do I even care?
It seems to me, when it comes down to it, our self-awareness also very much depends on understanding other people's points of view. We all have this great ability to delude ourselves. I don't know how many times I was totally convinced I was the rational one, but after listening more carefully to someone else's point of view, I realized I was being the dick. I suppose realizing you're being a dick is a part of self-awareness.
Anyways, I'm starting to ramble a bit, but it seems that defining human self-awareness (even as a capacity) is beyond my capabilities tonight. It's like each one of us is a complex theory, more difficult and mind-boggling than Quantum Mechanics.
On the empirical level, one example of something (mildly) interesting that's been done is the so-called "mirror test". It turns out some animals "naturally" (i.e. without specific training) recognize themselves in mirrors (like humans do) and others don't. Recognizing yourself in a mirror seems to be an indication of "self-awareness". But as a definition I think this is way too weak and it doesn't even seem to be a reasonable necessary condition because of its over-dependence on the visual aspect.
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Sep 22, 2017 - 04:43am PT
|
Yanqui,
I think you are on the right track. Our awareness as perceived has been the great talking topic. Channeling Damasio and Metzinger (& MikeL), it is just the flow of information by transduction -- in MikeL's words, from consciousness I and consciousness II (C1 and C2) to consciousness III (C3), or self-awareness. We are told, over and over, by information passed up from the simple awarenesses (C1 homeostasis and C2, like Largo's backyard alarm) to C3. It is the transduction and interpretation that make awareness at C3 seem so real. This is us, infinitely supported by the transparent action of C1 and C2 in the background.
The story of transduction: we have been hiking with no water for some time. C1 homeostasis knows our fluids are out of balance and water is needed. Think of Largo's backyard sensor going off. C1 sends that info to C2, and unconsciously we start glancing at the valley below and the looming snowfields on a distant mountaintop [some feeling modules have kicked in]. At some point we declare we are thirsty, which is action from C3 [Largo would say, "I have the awareness that I am thirsty"]. The deception begins. Now we get that Largo's awareness has determined he is thirsty. The joke is on Largo and how he has been fooled. Two hours later Largo is seen at a bar with a beer in his hand and no glass of water in sight.
I think all our conscious awareness at C3 arises from C2 [even the grandest of thinking], and we [the self module] are told over and over [info from C2 by transduction] that we are aware of this or that happening now [the eternal present]. This action is us -- just the flow of information. Perhaps this is a hard problem to swallow? And of course there are gaps, but we do not even notice them. Soon another transduced message is played and we now say, "I am aware of such." We certainly do fool ourselves about what awareness is.
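A toy way to picture that flow (purely illustrative; the C1/C2/C3 labels follow the description above, while the thresholds, function names, and messages are made up): each level just passes transduced information upward, and the C3 report "I am thirsty" is the last link in the chain, not the origin of it.

# Toy model of the C1 -> C2 -> C3 information flow described above.
# Thresholds and messages are invented for illustration only.

def c1_homeostasis(hydration_level):
    """C1: detect that fluids are out of balance (like a backyard alarm going off)."""
    return {"fluid_imbalance": hydration_level < 0.4}

def c2_orienting(c1_signal):
    """C2: unconsciously bias attention toward possible water sources."""
    if c1_signal["fluid_imbalance"]:
        return {"attention": ["valley below", "snowfields on a distant mountaintop"]}
    return {"attention": []}

def c3_report(c2_state):
    """C3: the declared, first-person report -- just the transduced message played back."""
    if c2_state["attention"]:
        return "I am aware that I am thirsty."
    return "Nothing to report."

print(c3_report(c2_orienting(c1_homeostasis(hydration_level=0.3))))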
Thanks for all the sympathy notes on Lute's passing. I now have a young Aussie-BC mix.
|
|
yanqui
climber
Balcarce, Argentina
|
|
Sep 22, 2017 - 05:45am PT
|
Wow, what a coincidence! Here I was just writing about the mirror test and self-awareness and right now I see on this morning's front page of the New York Times a short film about how dogs recognize themselves through smell. Let's see what that's about.
Edit to add:
Here's the link:
https://www.nytimes.com/2017/09/22/science/dogs-smell-recognition.html
It seems the animal behavior scientists have the same problem MH2 brought up about the concept of "self-awareness". According to the short article, Gordon Gallup defines self-awareness as "the ability to become the object of your own attention." With that definition, it's not clear to me why a chimpanzee locating a dot on its forehead in a mirror is self-awareness but one of my dogs stopping during a hike to check out a sore paw is not. I imagine it's supposed to have something to do with recognizing (identifying? having some "whole concept" or knowledge of?) yourself as a specific individual, separate from other things. Sounds OK, but it doesn't really seem to clear things up much. And there are many levels at which that "knowledge" can exist. Recognizing your own image or your own smell are only two. My dogs can recognize their own names. Actually, it seems to me they recognize each other's names and the names of my cats as well. Doesn't that indicate some kind of "self-awareness"? There are many other levels of this I could consider here, just with my dogs.
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Sep 22, 2017 - 05:52am PT
|
Yanqui,
It appears that dogs can backtrack following the odors [ & visual clues they remember?] they left in the initial passing. They know what they smell like.
|
|
Largo
Sport climber
The Big Wide Open Face
|
|
Topic Author's Reply - Sep 22, 2017 - 06:56am PT
|
Dingus McGee. You're still not getting it, IMO, because you're still vectoring everything off WHAT you are aware of, believing that content (lower levels of an integrated system) sources awareness by need and necessity. This idea has been forwarded by many, perhaps its wonkiest expression being that the brain "assigns" awareness to itself, though obviously awareness is postulated by the brain that did the postulating in the first instance.
Problem is you are trying to unpack all of this by observing behavior etc. from a 3rd-person perspective. I'll try and jot out some ideas later that might make this clearer, or at least more so.
Per my backyard sensor: I believe you are confusing what they call machine registration (mechanically processing an input) with conscious awareness. Maybe try and perceive the difference; then you're off to the races.
|
|
MikeL
Social climber
Southern Arizona
|
|
Sep 22, 2017 - 07:15am PT
|
There is nothing to compare to awareness, so saying it’s like this or that is useless.
It’s pretty funny, really. Here awareness always is, and people are looking high and low for it, to say what it is, to define it, to distinguish it from any thing else.
If everything were blue, what would one achieve in attempting to say what blue was?
Duck: . . . you spin everything in circles inside your uncontrolled minds.
+1
|
|
yanqui
climber
Balcarce, Argentina
|
|
Sep 22, 2017 - 07:45am PT
|
If everything were blue, what would one achieve in attempting to say what blue was?
It would certainly be a drag for the person in charge of disarming bombs. I mean, just imagine: "Cut the blue wire!"
|
|
MH2
Boulder climber
Andy Cairns
|
|
Sep 22, 2017 - 07:49am PT
|
I think part of the problem is that as we grow-up, we develop a complex conceptual framework (concepts, theories, stories, memories, biases, even outright falsehoods) that mediate and define our awareness of ourself.
Yes.
Hence my example earlier about how it might make sense to say a self-driving car is "aware" of where the road goes, but I doubt it's aware that it's a machine that knows how to drive a car.
During our growing up we may be taught things that help us to regard ourselves as beings that are self-aware. I have trouble picturing how a chimp, for example, would come to consider itself self-aware. I doubt that the need would arise. Same thing for a self-driving car.
Watching what animals do, and what people do, is our best guide to what is going on inside their heads. We must be careful of what we conclude. In the mirror test, how do we know the animal recognizes itself? If it removes a piece of tape from its forehead, maybe it only sees the tape and is curious about it.
I don't know much about the mirror test and probably am not doing it justice.
|
|
yanqui
climber
Balcarce, Argentina
|
|
Sep 22, 2017 - 08:06am PT
|
The mirror test is really pretty simple. I guess it's not that big of a deal (except perhaps to the guy who thought of it). Although I think it does show something kind of interesting about human beings and animals and it's worth mentioning that human children need a certain level of development to pass it. As we were saying, whether it indicates "self-awareness" or not depends on what you mean by that. It seems reasonable to me that a concept of "self-awareness" should allow it to exist in a spectrum, both in human development and animals as a whole. Anyways, here's a little video about the test:
[Embedded YouTube video about the mirror test]
|
|
WBraun
climber
|
|
Sep 22, 2017 - 08:11am PT
|
Whether it indicates "self-awareness" or not
You can't even start there. The test is useless.
You have not even understood or defined the self itself yet.
You don't even have the root yet.
All you are doing, again and again, is just plain guessing, spinning in circles.
You people are lost and start guessing because you don't even know where you are and what you are ......
|
|
Marlow
Sport climber
OSLO
|
|
Sep 22, 2017 - 09:18am PT
|
|
|
Ed Hartouni
Trad climber
Livermore, CA
|
|
Sep 22, 2017 - 09:50am PT
|
"There is nothing to compare to awareness, ..."
I like this quote from MikeL; it can be interpreted in a way that he didn't intend. My interpretation is that what he, and others, mean by "awareness" isn't a thing at all (which they will agree with to first order), but taken literally there is no "awareness," which would seem to be counter to our experience.
Yet our experiences of "awareness" are probably all over the map: some of that experience is shared, some is unique.
But the variance in our experience makes "awareness" a very poor discriminator regarding the existence of "mind" or "consciousness," let alone a counterexample to the possible scientific explanation of those phenomena (or perhaps it is a single phenomenon).
And while Largo has long resisted the idea of the Turing Test, that is, the challenge to discriminate between the sources of intelligence (originally a test between a human and a machine), the practical implications of that test being met by the non-human competitor have recently become ubiquitous, and with serious consequences.
But before I provide evidence for this, one might contemplate the human response to deception. You can read an interesting summary in the Wikipedia article,
http://en.wikipedia.org/wiki/Deception
and the Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/entries/lying-definition/
I raise this issue because of the circular argument made to dismiss the success of machines in the Turing Test, that is, that they somehow "fool" the humans into believing they are human. But isn't that the point? How is a human "fooled" by a machine?
By the way, a huge fictional literature exploring deception exists, but I'm sure I'm not the one to suggest the very best of this fiction, others could do it better than I.
The Turing Test is not just an academic issue, in the September 15th issue of Science there is the interesting article "Bot-hunters eye mischief in German election" by Kai Kupferschmidt
http://science.sciencemag.org/content/357/6356/1081.full
'On 3 September, as German Chancellor Angela Merkel and her main opponent Martin Schulz faced off in an election debate that many viewers panned as more of a duet than a duel, a far livelier effort was underway on social media. People on Twitter started using the hashtag #verräterduell, which translates as “duel of traitors” and mirrors the claim by the right-wing Alternative für Deutschland party that both Merkel's mainstream Christian Democrats and Schulz's Social Democrats have “betrayed” the country.
Yet much of the venom may not have been fueled by angry voters, researchers say. Instead it looks like the work of bots, or fake social media profiles that appear to be connected to human users, but are really driven by algorithms.'
Now Twitter allows 1% of tweets to be from automated accounts, and if you pay the fee, this can be as much as 10%. This is for the purpose of advertising, which is the major economic engine of the internet.
These automated accounts are referred to as "bots." Some estimates put the number of Twitter profiles that are bots as high as 15%, which translates to 50 million profiles. And these bots aren't concentrating on commerce, but on politics.
The article asserts that "[e]arlier versions of social bots were easy to identify because many posted continuously day and night, but in the arms race between botmakers and bot-detectors they have become harder to identify."
It is a bitter irony that "[m]any researchers are turning to machine-learning techniques to distinguish real and fake users. For instance, Ferrara arrived at his estimate of bots using an algorithm that he trained on millions of tweets from verified human users and bots. It tracks hundreds of features, including an account's age and use of emoticons."
That is AI applied to detect AI, where humans cannot.
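As a rough illustration of that general approach (this is not Ferrara's actual algorithm; the two features and the toy data below are invented, and a real system would track hundreds of features over millions of accounts), one can train a simple classifier on features computed from known human and bot accounts and then score unseen accounts:

# Minimal sketch of feature-based bot detection, in the spirit described above.
# Not the real method: the features, numbers, and labels here are fabricated
# purely for illustration. Requires scikit-learn.

from sklearn.linear_model import LogisticRegression

# Each row: [account_age_days, emoticons_per_tweet]; label 1 = bot, 0 = human.
X = [
    [2000, 0.05], [1500, 0.10], [900, 0.02],   # verified human accounts
    [12, 0.90],   [30, 0.75],   [5, 0.60],     # known bot accounts
]
y = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score an unseen account: young account with heavy emoticon use.
prob_bot = clf.predict_proba([[20, 0.8]])[0][1]
print(f"Estimated probability this account is a bot: {prob_bot:.2f}")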
So while Largo may claim that this is impossible, that machines cannot have "minds" or "consciousness," that objection seems rather moot: we cannot know whether or not those machines possess "mind" or "consciousness," yet the machines can convince us that they do.
From my point of view, it raises the issue of whether or not we have convinced ourselves that we possess them, and MikeL's statement comes into stark focus, "There is nothing to compare to awareness, ..." possibly because awareness, so described, doesn't exist.
I wonder what the machines have to say about this.
|
|
MH2
Boulder climber
Andy Cairns
|
|
Sep 22, 2017 - 10:17am PT
|
Orangutans are far more aware than previously thought.
I was not aware of that.
But then I saw no difference at all between the reaction of the two orangs to the mirror test. They both reached up to touch the spot of white on their foreheads. How was the color applied? Did they react then? If so, how? If not, why not?
edit:
And there are questions about how apt it is to compare human 2-year olds to adult apes. There may be differences in motor skills.
|
|
MikeL
Social climber
Southern Arizona
|
|
Sep 22, 2017 - 10:54am PT
|
Ed: "There is nothing to compare to awareness, ..." possibly because awareness, so described, doesn't exist.
Wait. Where was awareness described, please? By AI programs or AI’s programmers? By criteria that define “lying” or “deception?”
(Sorry, did I miss something obvious?)
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Sep 22, 2017 - 11:52am PT
|
Ed,
you are onto the same idea as I am. I will study your last post more.
My gut feeling was "wow, how can this be?" Consciousness II tells us [by transduction to C3] all we know. Somehow we get the grand idea that we are aware and that the thing of awareness exists as an unusual entity [the awareness maker]. What awareness could be is very counterintuitive to almost any idea we have of it. It is the movement of information in a very special way [the feeling modules and all the parallel processing going on]. These are not digital computers.
Hence the big fuss from Largo, who is going to keep screaming about his 1st-person experience [deception] of awareness when other knowns lead us elsewhere.
|
|
Dingus McGee
Social climber
Where Safety trumps Leaving No Trace
|
|
Sep 22, 2017 - 01:17pm PT
|
Largo,
...what they call machine registration (mechanically processing an input)...
Yes, part of the process may be thought of as machine registration, but you are leaving out the part that makes us sentient -- the feeling modules, which we have little to no control over as to when they are turned on and off. One cannot help but feel loss while attending his dog's euthanasia.
|
|