Tuesday, February 23, 2010

Embodying Cognition

Last week the podcast All in the Mind featured a good talk with neuroscientist John Donoghue from Brown University about interfacing brains with machines.  Donoghue founded an enterprising upstart, Cyberkinetics, which makes a clever product called the BrainGate: a gizmo that plugs a brain into a computer.  Think it's crazy?  So do the people at Gizmag and Wired.
While the product currently has patients moving a cursor around a large screen, the concept carries some intriguing baggage.  Specifically, I'm thinking of those pesky philosophical zombies.  Here's a thought experiment:

Imagine a completely paralysed person of good mental health.  We connect her to one of Donoghue's BrainGates and now she's moving a cursor around.  Now imagine we afford her the ability of "clicking".  At this point she's able to use a standard computer, with an on-screen keyboard, provided it's positioned in her gaze.  Now imagine we connect a sophisticated robot to this computer, giving our permanently supine patient visual input, by way of an LCD, and the ability to control the robot.  Predictably a bit slower than most, our patient now has the ability to move about the world and receive visual input from this interaction.  Is it too naive to say we have successfully embodied (some of) her cognitive processes?  I think it is...at least at this point.  But imagine we give her a suit that can stimulate her skin as the robot's skin is stimulated.  Likewise, we reproduce all the other senses with technology.  Now imagine we've perfected the BrainGate and deprecated the silly cursor interface in favour of a faster, fully integrated neural interactive mode.  Lastly, imagine this apparatus is so absolutely integral that were the robot to experience what it is programmed to know as death, the interface would heartlessly recreate the experience for our hapless patient.
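
For the technically inclined, here is a toy sketch of the loop I'm imagining.  None of the names below (NeuralInterface, decode_intent, and so on) correspond to anything the real BrainGate exposes; they're made up purely to make the brain-to-robot-to-brain flow explicit:

# A self-contained toy sketch (Python) of the closed loop in the thought experiment.
# Every class and value here is hypothetical.

import random

class NeuralInterface:
    """Stands in for the decoded output of a brain-computer interface."""
    def decode_intent(self):
        # Today: a cursor position and a click.  In the thought experiment:
        # a full motor command for the remote body.
        return {"move": random.choice(["forward", "left", "right", "stop"])}

    def present(self, sensation):
        # Whatever the robot senses is played back onto the patient's own senses
        # (the LCD, the stimulating suit, and eventually the rest).
        print("patient experiences:", sensation)

class Robot:
    """Stands in for the remote body acting on the patient's behalf."""
    def __init__(self, battery=3):
        self.battery = battery

    def alive(self):
        return self.battery > 0

    def act(self, intent):
        self.battery -= 1
        # Acting in the world produces touch and vision to feed back.
        return {"touch": "pressure on the left hand", "vision": "moved " + intent["move"]}

def embodiment_loop(brain, robot):
    while robot.alive():
        intent = brain.decode_intent()   # brain -> machine
        sensation = robot.act(intent)    # machine -> world
        brain.present(sensation)         # world -> brain
    # The grim final clause: the robot's "death" is recreated for the patient
    # rather than the loop simply ending.
    brain.present("what the robot is programmed to know as death")

embodiment_loop(NeuralInterface(), Robot())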

Sad really.

Could we have said the robot was conscious?  No way.  Unless!  Conscious is as conscious does, a credo the robot lawfully enforces, however arbitrarily.  Might this be "our" relationship to our "mind"?  And might this relationship be of some evolutionary worth?

Monday, February 15, 2010

Kurzweil's folly

What follows is a cleaned-up, link-infused conversation I recently had with a biologist friend and practicing Luddite:

Adam: I saw Molly the other day, not quite Chomsky, but close.
me: Yeah?
Adam: Yeah, now I'm here in Parkland eating some oatmeal squares.
me: I bought your book the other day.  Not quite Chomsky, but close.
Adam: oh really?  So you're the one.
me: Yeah, it's in the mail...so don't ask any questions, but I'm sure it's a regular tour-de-force.
Adam: Yeah dude.
me: Hey, I found the anti-Adam.
Adam: You found what?
me: Ray Kurzweil.
Adam: Too late!!!  Already knew about the Great D******bag and his coming singularity.
me: So you're in, yeah?  We take off next week.
Adam: Oh yeah, transhumanism and the whole shebang!
me: A friend over here gave me one of his books.
Adam: What a gomer, huh?
me: My friend's Canadian...but Ray, yeah: pipe-dreams in print.
Adam: It's a good idea to study the rate of technological change but he gets some things so wrong.
me: I agree with his accelerating returns from technology...but I'm pretty sure he loses all ground with his blinding optimism.
Adam: Yeah, but not just that.  When he plotted human achievement in the line of cosmic and biological evolution my head exploded.  He made a really basic mistake: evolution is not exponential and never will be.
me: Well, he slips in memetic evolution as the exponential part.  But I think he had it a bit backwards.
Adam: You know the Fermi paradox?  Given the number of planets and galaxies and the time available, if a technological singularity were inevitable it should have already happened somewhere.  But of course…it hasn't...
me: Is that the one in reaction to Carl Sagan's argument for ET life?
Adam: I think so.  It's not a direct refutation of Kurzweil but it's suggestive.  [A back-of-the-envelope version of Adam's point is sketched at the end of this post.]
me: Kurzy also misses the growing and absolutely irrefutable tension between our biology and our technology.  I thought this was his grossest error.  I mean, people in "developing nations" are unbelievably less likely to have mental illness, stress-related problems, or even drug addiction.  Sure, they might be starving, and we can't seem to find an alternative to join-us-or-die capitalism, but there's an undeniable link between pace and complexity (which Kurzy exalts) and Huge Societal Problems.
Adam: Yep.  We're still a bunch of apes.
me: I want to be a bonobo.
Adam: I've been reading a lot of junk on language and how it turned up in the first place.  At Kurzweil's pace of evolution, we're due up for another ground-breaking advancement like language.
me: If you're interested in language stuff, check out Steven Pinker.  Brilliant stuff.  He's one of the few who take a psychology tack without sacrificing the linguistics.
Adam: Oh yes, I've watched Pinker.
me: He's cool-looking too, right?
Adam: I've got this penchant for stalking cool people on the internet and pacing my room listening to their talks.
me: Oh! We're studying all about qualia in a philosophy class.  Crock of shit...basically...
Adam: Yeah, qualia is bullshit.
Adam: I listen to Pinker, Dennett, Chomsky, Jackendoff and the rest.
me: Yeah, Dennett has a pretty home-run argument against qualia.
Adam: Good, it deserves it.  You know, you might like my little book.
me: I'm excited to get it.
Adam: It doesn't directly apply to consciousness but it toys with at least one implication.
me: If I cite it...I'll let you know.
Adam: Here's a line from it: "I am increasingly convinced that the real division in the intellectual arena is not between selfish ideologies but between two opposite poles separated by their stance on information flow. One pole claims the human mind as an autonomous source of information and the other sees it as a recipient."
me: Interesting.  The "current state" of cognitive science is increasingly "postcognitivist", which basically breaks cognition from the traditional view of purpose-built sub-systems -- the view that we're computers that process input and provide output -- and forms a new one where we're walking accidents of thought.  Hah!
Adam: I could see that.  Is there any space for computation in this view?  Is computation just another subsystem?
me: Well, that might be a good traditionalist defense.  The new guard kind of holds that when we, say, see a baseball coming towards us, we aren't computing anything.  Instead, we have an experience, informed by our history of sensorimotor experience, that tells us, rather simply, to deal with the situation.  (Catch it or get out of the way.)  The can of worms is really in embedded cognition, which is the view that we use our world and our bodies as our cognitive system...not just our brains.  So then we have this recurrent problem of interaction-as-cognition.  It gets a bit hairy, but really interesting.  And it really draws a useful line in the sand between cognitive science and psychology.  (Which, in my opinion, should admit that it's a clinical discipline and hand the theoretical torch off to cognitive science and neuroscience.)
Adam: So wait, if the "conscious" aspect of consciousness is epiphenomenal, how could evolution have shaped it, or is the point that it never has?
me: I suppose that's a really good question. But it's kinda like a lot of evolution: consciousness wasn't useful until it was accidentally made available by an organized brain.  Hofstadter has wonderfully human and respectfully emotive ways of putting this stuff.
Adam: Read this.  It's cool, though you certainly might already know about it, and Doug's even got something to say about Kurzweil.
me: Cool, I will, but I've got to run to the store before it closes.
Adam: Good talking.
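
An aside on Adam's Fermi point above: the back-of-the-envelope version runs roughly like this.  Every number below is a placeholder I've made up purely to show the shape of the argument, not an estimate from the literature:

# Rough shape of the Fermi-style argument against an "inevitable" singularity.
# All parameter values are placeholders chosen only to illustrate the logic.

stars_in_galaxy = 1e11            # order-of-magnitude count for the Milky Way
fraction_with_technology = 1e-6   # wildly uncertain guess

technological_civilizations = stars_in_galaxy * fraction_with_technology
print("expected technological civilizations so far:", int(technological_civilizations))

# If a singularity really were inevitable once technology appears, then with this
# many chances spread over billions of years, at least one runaway, galaxy-scale
# intelligence should long since have become conspicuous.  It hasn't -- which, as
# Adam says, isn't a refutation of Kurzweil, just suggestive.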