I recently finished reading Michio Kaku's The Future of the Mind and found it very thought-provoking. Its combination of cutting-edge advances in psychology, artificial intelligence, and physics, mixed with numerous pop-culture references, made for a very informative and inspiring read. Many of the ideas presented seemed reminiscent of the narratives in The Mind's I, but with a greater emphasis on the practicality of technological advances. While I would no doubt recommend it to an interested reader, I don't exactly intend for this post to turn into a book review. This is more of a personal reflection on some of my thoughts while reading it.
Defining Consciousness: Kaku vs Jaynes
My first point of intrigue begins with Kaku's definition of consciousness, which he calls the "space-time theory of consciousness":
Consciousness is the process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space time, and relation to others), in order to accomplish a goal (e.g. find mates, food, shelter).
Consciousness is a notoriously difficult phenomenon to define, and this is as good a definition as any in the context of the discussion. What's interesting about this definition is that it begins with a very broad base and scales upward. Under Kaku's definition, even a thermostat has consciousness -- although to the lowest possible degree. In fact, he defines several levels of consciousness and units of measurement within those levels. Our thermostat is at the lowest end of the scale, Level 0, as it has only a single feedback loop (temperature). Level 0 also includes other systems with limited mobility but more feedback variables, such as plants. Level 1 consciousness adds spatial reasoning, while Level 2 adds social behavior. Level 3, finally, includes human consciousness:
Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future. This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.
This definition is much closer to the conventional usage of the word "consciousness". However, for me this definition seemed exceptionally similar to a very specific definition I'd seen before. It contains all the essential components of Julian Jaynes' definition in The Origin of Consciousness!
Jaynes argued that the four defining characteristics of consciousness are (1) an analog “I”, (2) a metaphor “me”, (3) inner narrative, and (4) introspective mind-space. The "analog 'I'" is similar to what Kaku describes as the brain's "CEO" -- the centralized sense of self that makes decisions about possible courses of action. Jaynes' "introspective mind-space" is analogous to the "model of the world" in Kaku's definition -- our comprehensive understanding of the environment around us. The "metaphor 'me'" is the representation of oneself within that world model that provides the "feedback loop" about the costs and benefits of hypothetical actions. Finally, what Jaynes describes as "inner narrative" serves as the simulation in Kaku's model.
This final point is the primary difference between the two models. One of the possible shortcomings of Jaynes' definition is that the notion of an "inner narrative" is too dependent on language; Kaku avoids this confusion by using the term "simulation". Jaynes' hypothesis was that language provided humanity with the mental constructs needed to simulate the future in a mental model. I think the differences in terminology are understandable given the respective contexts. Jaynes was attempting to establish a theory of how consciousness developed, while Kaku was attempting to summarize the model of consciousness that has emerged through brain imaging technology.
While I must admit some disappointment that Jaynes was not mentioned by name, it's partly understandable. Jaynes' theory is still highly controversial and not yet widely accepted in the scientific community. With Kaku's emphasis on scientific advances, it might have been inappropriate for this book. Nevertheless, I'd be interested to hear Kaku's thoughts on Jaynes' theory after having written this book. Jaynes didn't have the luxuries of modern neuroscience at his disposal, but that only makes the predictions of the theory more fascinating.
Artificial Intelligence (or the illusion thereof)
While I continued to read, I happened to come across a news story proclaiming that the Turing Test had been passed. Now, there are a couple of caveats to this claim. For one, this is not the first time a computer has successfully duped people into thinking it was human. Programs like ELIZA and ALICE have paved the way for more sophisticated chatterbots over the years. What makes this new bot, Eugene, so interesting is the way in which it confused the judges.
There's plenty of room for debate about the technical merits of Eugene's programming. However, I do think Eugene's success is a marvel of social engineering. By introducing itself as a "13-year-old Ukrainian boy", the bot effectively lowers the standard for acceptable conversation. The bot is (1) pretending to be a child and (2) pretending to be a non-native speaker. Childhood naivety excuses a lack of knowledge about the world, while speaking a second language excuses lapses in grammar. Together, these two conditions provide cover for the most common shortcomings of chatterbots.
With Kaku's new definition of consciousness in mind, I started to think more about the Turing Test and what it was really measuring. Here we have a "Level 0" consciousness pretending to be a "Level 3" consciousness by exploiting the social behaviors typical of a "Level 2" consciousness. I think it would be quite a stretch to label Eugene a "Level 3" consciousness, but does his capacity for social manipulation sufficiently demonstrate "Level 2" consciousness? I'm not really sure.
Before we can even answer that, Kaku's model of consciousness poses an even more fundamental question. Is it possible to obtain "Level (n)" consciousness without obtaining "Level (n-1)"?
If yes, then maybe these levels aren't really levels at all. Maybe one's "consciousness" isn't a scalar value, but a vector rating each type of consciousness separately. A human would score reasonably high in all four categories, while Eugene scores high on Level 0, moderate on Level 2, and poorly on Levels 1 and 3.
If no, then maybe the flaw in A.I. development is that we're attempting to develop social skills before spatial skills. This is partly due to the structure of the Turing Test itself. Perhaps, like the Jaynesian definition of consciousness, we're focused a bit too much on language. Maybe it's time to replace the Turing Test with something a little more robust that takes multiple levels of consciousness into consideration.
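As a thought experiment, the two readings above can be sketched in code. Everything here is my own illustration, not anything from Kaku's book: the class, the scores, and the threshold are all invented for the sake of the example. An agent gets a separate score for each level; the "vector" reading reports the whole profile, while the strict "ladder" reading reports the highest level n such that every level up to n meets some threshold.

```python
# Hypothetical sketch: consciousness as a vector of per-level scores
# rather than a single scalar. All numbers are illustrative guesses.
from dataclasses import dataclass

@dataclass
class ConsciousnessProfile:
    level0: float  # basic feedback loops (e.g., temperature)
    level1: float  # spatial awareness
    level2: float  # social behavior
    level3: float  # simulating the future

# The "vector" reading: each agent is just its full profile.
human = ConsciousnessProfile(level0=0.9, level1=0.9, level2=0.9, level3=0.9)
eugene = ConsciousnessProfile(level0=0.9, level1=0.1, level2=0.5, level3=0.1)

def strict_level(p: ConsciousnessProfile, threshold: float = 0.5) -> int:
    """The strict 'ladder' reading: an agent's level is the highest n
    for which it scores adequately on every level up to and including n."""
    level = -1
    for score in (p.level0, p.level1, p.level2, p.level3):
        if score < threshold:
            break
        level += 1
    return level

print(strict_level(human))   # 3
print(strict_level(eugene))  # 0: fails Level 1 despite moderate Level 2
```

Under the strict ladder, Eugene's moderate Level 2 score never counts because he fails Level 1 first, which is exactly the tension the "if no" branch raises.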
The MMORPG-Turing Test
Lately I've been playing a bit of Wildstar. Like many popular MMORPGs, one of the significant problems facing the newly launched title is rampant botting. As games of this genre have grown in popularity, virtual in-game currency has become a commodity with real-world value. The time-consuming process behind the collection of in-game currency, or gold farming, provides ample motivation for sellers to automate the process using a computer program. Developers like Carbine are in a constant arms race to keep these bots out of the game to preserve the game experience for human players.
Most of these bots are incredibly simple. Many of them simply play back a pre-recorded set of keystrokes to the application. More sophisticated bots might read, and potentially alter, the game program's memory to perform more complicated actions. Oftentimes these bots double as an advertising platform for the gold seller, spamming the in-game communication channels with the seller's website. It's also quite common for those websites to contain key-loggers, as hijacking an existing player's account is far more profitable than botting itself.
While I'm annoyed by bots as much as the next player, I must admit some level of intrigue with the phenomenon. The MMORPG environment is essentially a Turing Test on an epic scale. Not only is the game's player base on the constant lookout for bot-like behavior, but the developers also implement algorithms for detecting bots. A successful AI would need to deceive not only humans, but other programs as well. It makes me wonder how sophisticated a program would need to be to remain indistinguishable from a human player. The odds are probably stacked against such a program.
Having played games of this type for quite some time, I've played with gamers who are non-native speakers or children, and I've also seen my share of bots. While the "13-year-old Ukrainian boy" ploy might work in a text-based chat, I think it would be much more difficult to pull off in an online game. It's difficult to articulate, but human players just move differently. They react to changes in the environment in a way that is fluid and dynamic. On the surface, they display a certain degree of unpredictability while also revealing high-level patterns. Human players also show goal-oriented behavior, though the player's goal may not necessarily align with the goal of the game. These are the kinds of qualities I would expect to see from a "Level 1" consciousness.
Furthermore, online games have a complex social structure. Players have friends, guilds, and random acquaintances. Humans tend to interact differently depending on the nature of the relationship. In contrast, a typical chatterbot treats everyone it interacts with the same. While some groups of players have very lax standards for who they play with, other groups hold very high standards for player ability and/or sociability. Eugene would have a very difficult time getting into an end-game raiding guild. If a bot could successfully infiltrate such a group without their knowledge, it might qualify as a "Level 2" consciousness.
When we get to "Level 3" consciousness, that's where things get tricky. The bot would not only need to understand the game world well enough to simulate the future, but it would also need to be able to communicate those predictions to the social group. It is, after all, a cooperative game and that communication is necessary to predict the behavior of other players. The bot also needs to be careful not to predict the future too well. It's entirely possible for a bot to exhibit super-human behavior and consequently give itself away.
Beyond those conditions for the various levels of consciousness, MMORPGs also enforce a form of natural selection on bots. Behave too predictably? Banned by bot detection algorithms. Fail to fool human players? Blacklisted by the community. Wildstar also potentially adds a survival pressure in the form of C.R.E.D.D., which could require a bot to earn sufficient in-game funds to continue its subscription (and consequently, its survival).
Now, I'm not encouraging programmers to start making Wildstar bots. It's against the Terms of Service, and I really don't want to have to deal with any more of them than are already there. However, I do think that an MMORPG-like environment offers a far more robust test of artificial intelligence than a simple text-based chat if we're looking at consciousness using Kaku's definition. Perhaps in the future, a game where players and bots can play side-by-side will exist for this purpose.
When I first started reading Kaku's Future of the Mind, I felt like his definition of consciousness was merely a summary of the existing literature. As I continued reading, the depth of that definition continued to grow on me. In the end, I think it might actually offer some testable hypotheses for furthering AI development. I still think Kaku needs to read Jaynes' work if he hasn't already, but I also think he's demonstrated that there's room for improvement in Jaynes' definition. Kaku certainly managed to stimulate my curiosity, and I call that a successful book.