What is "Mind?"

Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 28, 2016 - 11:19am PT
It doesn't surprise me that people are trying to police their thoughts or direct their meditation toward some codified realm where good things are likely to happen. This is a pretty classic example of what PPSP was warning about per "designing your own practice." You're left with a conditioned ego trying to transcend itself (at least in theory) through policing itself. As was said many centuries ago, "how can this not lead to a great confusion?"

The very words, "doing nothing," seem paradoxical. Radical allowing might be easier to understand as a concept. It usually takes a long time to realize that the practice only starts to catch fire at the threshold of self-acceptance, whereas self-policing is actually a form of self-denial insofar as you have decided going in what is "good" and otherwise. You are, in essence, trying to control the outcome, presuming to know where it should go. Or assuming that the outfit selling the course knows where it should go. These kinds of mistakes are totally rookie errors, like stepping on the rope with crampons on. It amazes me that this stuff (codified paths) passes for expert commentary or sage advice.



Much of the confusion or muddling we find here arises from trying to answer ontological questions (what IS this) with functionalist answers.

The beginning assumption with functionalism is that "real" is the exclusive domain of third-person external objects or phenomena.

A kind of pathologically stubborn example of trying to apply this belief system was played out some decades ago by behaviorism, which sought to fully explain and understand human behavior by way of observable phenomena. This also leads people like Daniel Dennett and others to nonsense statements like, "You only think you have first person experience. But it's an illusion." What's real, on this view, is machine registration, which we can observe and measure.

Ironically, though not surprisingly, this whole "mind as illusion" side show immediately dissolves once you try to apply it, of all things, to what's called "hard A.I.," in particular to the persistent belief that it is theoretically possible to (among other things) build a sentient machine.
jgill

Boulder climber
The high prairie of southern Colorado
Nov 28, 2016 - 12:06pm PT
Mindfulness in Denver public schools



. . . the only thing “experience” points to is an ascetic, always-upward, absolute, god-nearness, peak experience, self-validating, self-justifying, godlike, intense, intrinsically valued, personifications of cloudlike wisps, with visions, away from physicality, away from what is materialistic, flight-like, transcendent, altered states of consciousness

That's a mouthful.

Ironically, though not surprisingly, this whole "mind as illusion" side show immediately dissolves once you try to apply it, of all things, to what's called "hard A.I.," in particular to the persistent belief that it is theoretically possible to (among other things) build a sentient machine

I hope you're watching Westworld (HBO series - not the old movie).
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 28, 2016 - 01:20pm PT
John, I love Westworld and a lot of Sci Fi stuff. I liked Blade Runner, too, but I never thought it told the future. Both are fictional narratives that are made to seem real because the writers knew how to pull that off. Building a replicant (à la Blade Runner) is a different thing, but I don't blame people for believing it will be done. That's the power of narratives. But one only has to dig into Dennett's statement, "we only think we have subjective experience, but it's actually an illusion," to immediately find the problems.


And this wonderful quote: ". . . the only thing “experience” points to is an ascetic, always-upward, absolute, god-nearness, peak experience, self-validating, self-justifying, godlike, intense, intrinsically valued, personifications of cloudlike wisps, with visions, away from physicality, away from what is materialistic, flight-like, transcendent, altered states of consciousness."

Funny, we know perfectly well that this mountebank who spewed this dross never bothered to try mindfulness for one second, yet professes an expert evaluation based on second-hand reports. Not good science.

Isn't it interesting that people so often conflate the content with the sentience of same - the stage with the play, so to speak. Poor rube doesn't understand that experience also points to the physicality and directly toward the materialistic stuff he furtively defends.

One of the problems of Dennett's Folly (subjectivity is an illusion) is that he uses the agency of experience to announce the primacy and verity of so-called objective things while dismissing the agency itself as illusory, or conflating it with machine registration. The ability to believe this kind of thing must be why the tooth fairy et al (narratives) found such great traction.
jgill

Boulder climber
The high prairie of southern Colorado
Nov 28, 2016 - 02:11pm PT
Funny, we know perfectly well that this mountebank who spewed this dross . . .


Oh my!


;>(
MikeL

Social climber
Southern Arizona
Nov 28, 2016 - 02:59pm PT
^^^^^^^

Well, that would be me, and I said that it's *not* that, but that many folks think that it is something like that.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 28, 2016 - 03:15pm PT
Have to have some fun on this thread or it gets deadly and sinks like a ship's anchor. Poking fun at what we perceive are silly beliefs is part of the bargain. But the greatest howler so far has to be, "You only think you have subjective experience. It's really just an illusion."

Like I said, if hard AI kooks ever tried to unpack this whopper - using nothing more than the Socratic method - they'd be in for an adventure.

eeyonkee

Trad climber
Golden, CO
Nov 28, 2016 - 03:17pm PT
You've got to be kidding with the last post, Largo.
High Fructose Corn Spirit

Gym climber
Nov 28, 2016 - 03:25pm PT
What they mean to say, Silly Rabbit, is that the brain is an amazing perception generator (wiggle that finger in the corner of your eye!) and its outputs (sentience, qualia, consciousness) have illusory qualities re them.

Not much different from when Einstein famously called time an illusion... when it would have been less confusing and way more clear to say time has an "illusory quality" or "illusory component" when perceived by the brain.

...

"What is mind?"

So much talk about meditation, mindfulness and dreams on this thread; and so little talk about evolutionary psychology when the latter provides so much (more) on the "mental life" and its workings (eg, instincts, thoughts and feelings).

This is by and large a thread that plainly illustrates America's science illiteracy manifested in the ST climbers' camp.


Here's a thought: Instead of posting why not take one or two formal courses in Evolutionary Psychology - even online thru Stanford or Harvard or Chicago - just to see if they don't provide some fresh insight?

PS

Go-b and others... FYI... Facebook Science is not science. Breitbart Science is not science.
eeyonkee

Trad climber
Golden, CO
Nov 28, 2016 - 03:27pm PT
I think Ed should have more babies, some others fewer.

I don't do the Facebook. This is my only social media experience. While it's been fun in a lot of ways, it has made me less optimistic for the future. Not so much this thread per se, but the sum total of the political/religion/science threads. I used to believe that it would be more like Star Trek.
High Fructose Corn Spirit

Gym climber
Nov 28, 2016 - 03:37pm PT
Yeah, me too.

You should check out (1) the latest Harris podcast featuring Stuart Russell, re AI and consciousness, and also (2) Humans (the UK show, avail on your favorite peer-to-peer utility).

I think Humans is even better than Westworld for illustrating some practical everyday relations between human and cyborg (synths), insofar as AI and cybernetics were ever to take off. Regarding economics (eg, unemployment), politics, morality issues, meaning, purpose and value... those thorny things.

I'd give a million dollars to know how all this civilization unfolds over the next 100 and 1000 years. Sheesh!!

I'm going to hate to leave the Party when my time comes.

...



Technology continuously decreases the need for capital & labor which will concentrate wealth & power. -Nicholas Berggruen
MH2

Boulder climber
Andy Cairns
Nov 28, 2016 - 07:30pm PT
Have to have some fun on this thread or it gets deadly and sinks like a ship's anchor. Poking fun at what we perceive are silly beliefs is part of the bargain.


Part of the fun:

Largo's physics

MikeL's math

Paul's anthropo-cosmology
jgill

Boulder climber
The high prairie of southern Colorado
Nov 28, 2016 - 07:58pm PT
I've been re-reading Owen Glynne Jones's book Rock Climbing in the English Lake District (1900), and Jones, a Physics Master in the London School District by profession, has this to say of a moment prior to a friend suggesting an ascent of Kern Knotts Crack: "I was lying on the billiard table just then thinking of the different kinds of nothing."

Jones (1867-1899) would have fitted right in on this thread.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 28, 2016 - 08:17pm PT
Fruity sez: What they mean to say, Silly Rabbit, is that the brain is an amazing perception generator (wiggle that finger in the corner of your eye!) and its outputs (sentience, qualia, consciousness) have illusory qualities re them.
---


No cigar on this one, Fruity. You simply haven't reasoned this through, nor studied enough functionalism (Dennett et al) to even know where those people stand on the subject.

The majority of functionalists deny the existence of qualia. Dennett does not, though he labels qualia, and 1st person experience, an illusion - something many more nuanced thinkers have recognized and clearly illustrated is a nonsense statement.

Dennett fouled his own argument in this regard by admitting that there is nothing more obvious than the apparent reality of our direct experience, the "inner light show" going on inside our heads. But as a functionalist, he is hidebound to the old behaviorist model (now junked) of trying to understand and define humans simply by dint of their observable behavior, viewed as a third-person phenomenon. Ergo, to this camp, only 3rd person phenomena meet the criterion of being real.

This leaves Dennett in the impossible position of accepting the experiential verity of 1st person phenomenon, but denying the "reality" of same because it is not a 3rd person phenomenon.

If you were to ask a physicalist what criteria would have to be met for 1st person phenomena to be "real," their only answer would be that subjective experience would have to be a 3rd person observable phenomenon.

At the very least this smacks of a tautology as described by the ancient Greeks - a statement that is true merely by saying the same thing twice. Or a statement that is true solely because of the terms and criteria involved (only the 3rd person is real).

But this is just the surface layer of the insuperable problems with staunch functionalism. The impossibilities arise when you look at the output belief in real-world terms, as in Hard AI. That's when the bottom falls out.

It would be interesting to see if someone like eeyonkee can reason out why this is so. I doubt it, but who knows.

And to answer your silly question about why poking your eye produces the illusory signal of light, sans photons - this is not a question about sentience at all. It is a question about content (an illusory light flash), not the active subjective experiencing of that content.

You quite naturally conflate the two because when you start with a machine-registration model of consciousness, content and awareness of content are self same, at least according to your belief (again, in mind studies this is typically called functionalism).
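
For concreteness, here is a toy sketch in Python (purely illustrative, my own made-up RegistrationRig class, not anything from Dennett or the functionalist literature) of what a bare machine-registration model amounts to: a stimulus gets logged, and "awareness" of the stimulus is just one more log entry of the same kind, so on that model content and awareness of content collapse into the same sort of event.

# Toy sketch (hypothetical, illustrative only): a pure "registration" rig in
# which every event, including "awareness," is just another third-person record.
class RegistrationRig:
    def __init__(self):
        self.log = []  # every registration is an observable, measurable record

    def register(self, content):
        self.log.append(content)
        return content

    def register_awareness_of_last(self):
        # "awareness" here is nothing over and above a further registration
        return self.register(("noted", self.log[-1]))

rig = RegistrationRig()
rig.register("illusory light flash")   # the content (e.g., a pressed-eye phosphene)
rig.register_awareness_of_last()       # the model's stand-in for "experiencing" it
print(rig.log)                         # ['illusory light flash', ('noted', 'illusory light flash')]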

The difference between brain-generated content per external phenomena, and what that content is when objectively measured or evaluated, is an especially interesting study.

Surprisingly, many so called science-types are prone to conflate internal perceptions of so-called external objects with things that are "out there," believing the two are fundamentally the same. That is, the moon that we perceive and whose qualities we measure actually exists "out there," in basically the same form as we perceive it.

But again, the clincher here is in seeing why Hard AI and the machine output model - at least the one that goes along with Dennett's beliefs - are incompatible.
MH2

Boulder climber
Andy Cairns
Nov 28, 2016 - 08:35pm PT
But this is just the surface layer of the insuperable problems with staunch functionalism. The impossibilities arise when you look at the output belief in real-world terms, as in Hard AI. That's when the bottom falls out.


You are confused. That is the only conclusion.


You are also afraid of bogeymen. There is no AI, Hard or other. There are a variety of approaches to machine learning. It is too soon to say where and how far they may go.

MikeL

Social climber
Southern Arizona
Nov 29, 2016 - 07:52am PT
I can’t say enough about seeing other things than what we’re used to (for many reasons). Seeing perspectively, from a trained point of view, is limiting. This means there is more to reality than one sees or even can see. It seems the way that view can become more inclusive is to simply relax. Don’t grasp.

The less you look for, the more that shows up.

MH2: It is too soon to say where and how far they may go.

That, about everything, eh? It seems it’s too soon to say about anything. The understanding that comes out of that would suggest that one try to avoid taking any notion too seriously or concretely.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 29, 2016 - 10:21am PT
But this is just the surface layer of the insuperable problems with staunch functionalism. The impossibilities arise when you look at the output belief in real-world terms, as in Hard AI. That's when the bottom falls out.


You are confused. That is the only conclusion.


You are also afraid of bogeymen. There is no AI, Hard or other. There are a variety of approaches to machine learning. It is too soon to say where and how far they may go.

--------


MH2, you have to go into the corner again for reverting to your compulsion to do what psychologists call "inverting." That is, dodging the questions (how does the bottom fall out of Dennett's Folly when you start asking hard and specific questions) by making nonsense assertions: You are confused. You are afraid of bogeymen. There is no AI, hard or otherwise.

Of course there are millions of pages of commentary and thousands of books on AI, though the terms and parameters vary school to school. Let's take a quick look at what MH2 says doesn't exist. He's pulling a kind of Dennett Folly himself, but instead of saying "we only think we have experience," MH2 says we only think there is a subject called "AI." Or that what AI REALLY is, is the study of machine learning. Of course there are many from the Artificial Brain camp who would take issue with MH2 on this account. But first let's look at a brief overview, starting with what is generally called AI-complete.

"In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI."

In this regard, AI machines are posited as nothing more than processing agents, data crunching machines, or super duper Turing rigs. Sentience is not a prominent factor, or even a factor at all, in this work.
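
For concreteness, a minimal sketch in Python (purely illustrative, my own made-up example, not from any of the AI literature quoted here) of what such a "data crunching machine" or Turing rig amounts to: a fixed rule table mechanically rewriting symbols on a tape. Nothing in the mechanism refers to, or requires, sentience.

# Minimal Turing-style rig (hypothetical, illustrative only): symbols in, symbols out.
def run_turing_rig(tape, rules, state="start", head=0, max_steps=100):
    # rules maps (state, symbol) -> (new_symbol, move, new_state), move in {-1, 0, +1}
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol           # write
        head += move                       # move the head
    return [cells[i] for i in sorted(cells)]

# Example rule table: flip every 0/1 on the tape, then halt at the first blank.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_rig(list("0110"), flip_rules))   # ['1', '0', '0', '1', '_']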

However, when you traverse sideways into things like the Blue Brain Project, conceived by one of the greatest hustlers and con men in the world - Henry Markram - you get quite another set of goals, parameters and promises. Specifically, that as early as 2030 the Blue Brain Project, with over a billion dollars of funding (much of it from the Swiss government), will have produced a thinking, feeling, talking, loving and fully sentient machine.

This work and these claims have become diversified by many camps and schools, most of them versions of the artificial brain movement.

"Artificial brains are man-made machines that are just as intelligent, creative, and self-aware as humans. No such machine has yet been built, but it is only a matter of time. Given current trends in neuroscience, computing, and nanotechnology, we estimate that artificial general intelligence will emerge sometime in the 21st century, maybe even by the year 2050.

"We consider human consciousness to be the most pressing mystery, and yet most within our reach. By reverse engineering the human brain we will come to understand it. By reconstructing and enhancing the brain we will be empowered to push forward our understanding of the universe and to evolve life to the next level."

AI is very much divided as to the possibility of sentient machines, many insisting that sentience is not and should not be a goal of AI, which should remain focused on machine learning. But the assumption that intelligent machines will also be sentient is a given to the majority of people who are raising alarmist warnings about the pending "singularity," or that time in the very near future (they insist) when we build machines that are more intelligent than we are.

"Some of today's top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives. Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us—or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper. In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

My point in this regard is not concerned with the ongoing debate within AI per sentience - either as a worthy goal or a non-starter - but rather with the deep-seated belief within the entire community that sentient machines are at least theoretically possible; in their heart of hearts, most would insist they are, believing as they do in the functionalist or machine output/machine registration model, which at bottom is Dennett's position. As was recently stated in a fine article in Psychology Today,
"Strong A.I., by definition, should possess the full range of human cognitive abilities. This includes self-awareness, sentience, and consciousness, as these are all features of human cognition."

My point in all of this is not to refute the various camps of AI, but rather to investigate what their basic assumptions are, starting with Dennett's basic thesis that sentience is not a first person phenomenon at all, but rather a third person function, and that "we only think we have experience," which in reality is just machine registration that has reached a critical level of complexity.

Using this as a starting point allows us to dive into the subject and pretty quickly, by way of simply asking questions à la the Socratic method, see the bottom fall out of these basic assumptions per sentience.
paul roehl

Boulder climber
california
Nov 29, 2016 - 10:49am PT
My point in all of this is not to refute the various camps of AI, but rather to investigate what their basic assumptions are, starting with Dennett's basic thesis that sentience is not a first person phenomenon at all, but rather a third person function, and that "we only think we have experience," which in reality is just machine registration that has reached a critical level of complexity.

The idea that we only think we experience, that it's an illusion, takes us back to a descent into solipsism and raises the question: who or what is partaking in the illusion? The hard problem remains when we simply ask: what is it like to be anything? What is the taste of chocolate? What is any "experience?"
Ed Hartouni

Trad climber
Livermore, CA
Nov 29, 2016 - 01:13pm PT
The idea that we only think we experience, that it's an illusion, takes us back to a descent into solipsism and raises the question: who or what is partaking in the illusion? The hard problem remains when we simply ask: what is it like to be anything? What is the taste of chocolate? What is any "experience?"

why do you have to ask?
paul roehl

Boulder climber
california
Nov 29, 2016 - 01:25pm PT
why do you have to ask?

...because it's there.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 29, 2016 - 01:44pm PT
The idea that we only think we experience, that it's an illusion, takes us back to a descent into solipsism and raises the question: who or what is partaking in the illusion? The hard problem remains when we simply ask: what is it like to be anything? What is the taste of chocolate? What is any "experience?"

why do you have to ask?


Doesn't this arise from the same impulse to wonder about that boson over there?

I consider it fundamental to all inquiring minds to ask ontological questions: What the hell IS that? Followed by functional questions: How does that work? How does this apparent object or phenomenon interact with the rest of reality?