RPI's Selmer Bringsjord - "Is Westworld Our (Near) Future?" Wednesday at Noon

Sage 4101

February 15, 2017 12:00 PM - 1:30 PM

"Is Westworld Our (Near) Future?" asks Selmer Bringsjord, Professor of Cognitive Science and Computer Science at Rensselaer, in this week's Cognitive Science lecture, Wednesday Feb. 15 at 12 noon in Sage Labs 4101 ... and his answer is Yes.

Westworld is an HBO series that deals with the “big questions” of AI in "an undeniably vivid and timely way," says Bringsjord, and "There are indeed many reasons why an affirmative is the correct answer to the title of this talk." These include:

1) The Deadly Principle:  Westworld shows the following principle in striking action: "Any agent that is at once autonomous, powerful, and highly intelligent is dangerous; and the higher the degree to which these three attributes are possessed, the more dangerous is the agent in question."

This principle might well eventually lead to a future in which AI poses an “existential risk” to the human race — if humanity allows shoddy engineering of the sort seen in Westworld.  I explain what sort of exquisite engineering must be pursued to ensure that Westworldian danger is restricted to the realm of fiction, thereby allowing us to survive.  (Devotees:  Hosts are blessed by their creators with some narratological autonomy, of course; and what happens when “bulk apperception” is slid upward in a certain host?)

2) The Paradox of Engaging-but-Improvisational Entertainment/Art:  The real world will ineluctably move toward giving humans experiences in environments that are at once immersive and populated with sophisticated AIs/robots with which humans interact.  This is happening before our eyes; it’s just that while the trend is inexorable, it’s also gradual.  The challenge, though, is to keep story-based entertainment (and, for that matter, art) engaging, while at the same time new and improvisational.  Macbeth is great, yes; but the witches give us the same ghoulish deal in every run, and Lady M has her way with her man in every run as well.  Hence, a human participant in an immersive Macbeth will probably soon tire of the looping narrative.  Westworld is based on the dream of allowing humans to enter stories in immersive environments — in which new narrative is created on the fly by AIs themselves, drawing humans in.  (Is such entertainment/art possible?  As I explain, no.)

3) Red-Pill Robots Only, Please:  Westworld flies in the face of the policy of building only “red-pill” robots.  A blue-pill robot is one that — to make humans feel better at some level — deceives these humans into thinking that it’s actually conscious; that it really has feelings and self-awareness.  Humanity is building blue-pill robots.  That is very, very unwise.  It sets us up for massive self-deception, for a robot isn’t really conscious, and we deceive ourselves if we think otherwise.  We should be issuing firm disclaimers even with robots like today’s Pepper, but we’re not doing this.  Instead, we’re going the way of Westworld:  We’re building AIs out of simple computation, yet allowing ourselves to be hoodwinked into thinking that these AIs are truly conscious, so that we feel better.

Bringsjord notes that "Some of the R&D presented in this talk has been enabled by grants from ONR, and I am deeply grateful for this support."

Note to attendees:  If you have not seen Westworld (Season 1), you may want to do so before attending this talk.  Be forewarned, however:  the series is not only intellectually deep but also violent and racy, and therefore may not be your cup of tea.

The Cognitive Science weekly lecture is free and open to the RPI Community.
