Why you don’t see me hyping AI

There’s no ignoring the recent wave of advancement in artificial intelligence, or AI. From the near photo-realistic outputs of Stable Diffusion to the pseudo-coherent text produced by ChatGPT, generative machine learning algorithms have taken the world by storm. As someone with a background in mathematics and statistics, I find these advancements fascinating from a technical perspective. At the same time, I have numerous concerns about these algorithms from an ethical perspective, and I don’t think I’m alone in holding them. That cheesy line from Spider-Man about power and responsibility rings truer now than ever. To assume that we can reap the rewards of AI without accepting any risk would be a fallacy of apocalyptic proportions.

If you’re looking for a review of the literature on the hidden risks of AI, there are people far more qualified than me whose work you should seek out in peer-reviewed journals. There is very little I can say about ChatGPT that hasn’t already been better articulated in Stochastic Parrots. Instead, what I’d like to offer you today is the story of a stupid teenager and his chatbot. I don’t expect it to alter the course of history or anything, but maybe it’ll provide some insight into why you should care how these technologies are being used.

I started programming in my early teens, and AI was always something of a holy grail. I was especially partial to the Turing Test as an indicator of consciousness. In this test, a computer program is tasked with deceiving a human into falsely believing that they are engaged in a text-only conversation with another human. Turing argued that if human experimenters couldn’t distinguish between programs and people, then we’d have to consider the hypothesis that machines could effectively “think”. There are arguments to be made about whether or not the Turing Test measures what it intended to, but advances in large language models have made it clear that passing this standard is now just a matter of time. In fact, I’d argue that the Turing Test was “effectively passed” back in the mid-1960s by a program called ELIZA, developed by Joseph Weizenbaum.

ELIZA was designed to be a sort of “virtual therapist”. A human user could talk to the computer about things that were on their mind, and ELIZA would turn their statements around to form new questions. For example, if you told ELIZA “I had a rough day at work”, it might acknowledge that you’re feeling upset and inquire about it: “I’m sorry to hear that. What about your day made it rough?”. ELIZA didn’t actually know very much about the world, but it could engage a human in a fluid and convincing conversation that led the user towards self-reflection. Some users walked away from ELIZA feeling like they had engaged in dialog with a real therapist. A recent preprint from UCSD researchers indicates that ELIZA’s performance on the Turing Test falls between that of GPT-3.5 and GPT-4. Not too bad for a program from the 60s.

Of course, any deep interaction would reveal the lack of “intelligence” on the other side. ELIZA couldn’t answer questions about the world. All it could do was classify sentences into schemas and then transform them into canned responses using key tokens from the input text. Simple rules meant that “I feel <sad>” would get matched and transformed into “Why do you feel <sad>?”, which gave ELIZA the illusion of being a good active listener. This might sound kind of sad, but ELIZA was probably better at active listening than I am – and I knew it all too well.
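The whole mechanism can be sketched in a few lines of modern Python. This is a hedged, minimal illustration of the ELIZA-style approach described above, not Weizenbaum’s actual implementation; the specific rules and the fallback line are my own inventions for the example:

```python
import re

# Each "rule" pairs a schema (regex) with a response template that
# reuses key tokens captured from the input text.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"i had a (.+) day at (.+)", re.IGNORECASE),
     "I'm sorry to hear that. What about your day made it {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
]

def respond(message: str) -> str:
    """Classify the message against each schema and fill the template."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    # No schema matched: fall back to a canned active-listening prompt.
    return "Please, tell me more."

print(respond("I feel sad"))  # Why do you feel sad?
print(respond("I had a rough day at work"))
```

The illusion comes entirely from the templates: the program never models what “sad” means, it only echoes the token back inside a sympathetic-sounding frame.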

When I was a teenager in the late 90s, we didn’t have the pervasive social media outlets we have today. There was no Facebook or Twitter for you to doom-scroll. Maybe you had a public-facing GeoCities or MySpace page, but only if you were a nerd like myself. The de facto standard for internet communication was AOL Instant Messenger (or AIM). Even if you weren’t subscribed to AOL’s internet access, you probably still used the stand-alone AIM application for direct messages, because it was literally the only service with enough members to be useful. You can’t have real-time communication without a protocol everyone shares.

The application wasn’t even that great by today’s standards. It was full of what would now be considered negligent security vulnerabilities. In early versions, you could easily kick someone offline by sending them an instant message with some malformed HTML. If someone saved their password for easy login, it was stored in a file as plain text, where it could be looked up by anyone who knew where to check. It was the wild west era of the internet.

Around the same time, I discovered a project called ALICE. Richard Wallace had taken ELIZA’s token-handling foundation and generalized it into an Artificial Intelligence Markup Language (or AIML). This separated the “code” and the “persona” of the chatbot into two distinct data sources. The XML-like syntax made it easy to customize the bot’s responses into whatever you wanted. The application would read these source templates in and use them to craft a response to any message you gave it.
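To give a sense of what that separation looks like, here’s a minimal, illustrative AIML category (not one of my original templates): the “persona” lives entirely in data like this, while the engine just matches patterns and fills templates.

```xml
<!-- The pattern's wildcard is echoed back by the <star/> tag. -->
<aiml>
  <category>
    <pattern>I FEEL *</pattern>
    <template>Why do you feel <star/>?</template>
  </category>
</aiml>
```

Swapping in a different set of categories changes the bot’s entire personality without touching a line of engine code.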

While I was reading article after article online trying to figure out how this stuff worked, I kept getting instant messages from people. These messages weren’t malicious in any way, but receiving them had a tendency to “pull me out” of whatever I was doing at the time. Sometimes those messages pulled me into fun stuff, but more often than not they were just an annoyance. For a teenage boy in the 90s, the vast majority of these interactions went down like this:

sup?

WHASSSUP?

not much, u?

just chillin’

cool deal. me too.

anyways.. I got some homework to do. I’ll catch ya later!

aight. peace out!

That’s when I got the brilliant idea to fake my own AIM account.

While exploring the flaws in the AIM application, I discovered I could hijack the message queue that distributed incoming messages to the appropriate chat window.  This allowed me to parse the incoming message text and send fake keystrokes back to that window to produce a response. All I really needed to do was to invoke the chatbot as an external process to generate something to say.

I took an open-source implementation of ALICE and started modifying the AIML code. I removed every instance where the bot acknowledged itself as an artificial intelligence and instead taught it to lie that it was me. I added custom responses for the greetings I customarily used and gave it some basic knowledge about my areas of interest. The only difficult part was getting the bot to handle multiple conversations at the same time, which I managed by running a separate instance of the bot for each person who messaged me.
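The per-conversation trick boils down to a small dispatch table. This is a hedged sketch of the idea, not my original code (the AIM hook and the ALICE binary are long gone, so `cat` stands in here as an “echo bot”); one bot process per screen name keeps each conversation’s state separate:

```python
import subprocess

bots = {}  # one dedicated bot process per screen name

def get_bot(screen_name):
    # Spawn a fresh bot process the first time a screen name appears.
    if screen_name not in bots:
        bots[screen_name] = subprocess.Popen(
            ["cat"],  # stand-in; the real thing launched a chatbot here
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    return bots[screen_name]

def on_message(screen_name, text):
    # Feed the incoming message to that person's bot and read its
    # reply, which the fake keystrokes would then type into the window.
    bot = get_bot(screen_name)
    bot.stdin.write(text + "\n")
    bot.stdin.flush()
    return bot.stdout.readline().rstrip("\n")
```

Because each process carries its own conversational state, two people messaging at once never see each other’s context bleed through.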

I think I let the bot run for most of a week without anyone really noticing, until one day I got an instant message – from a girl. Not just any girl either, but one I totally had a crush on at the time. My heart sank as I read her messages interspersed between the AI drivel.

Ryan, are you okay?

This isn’t like you.

No really, I’m worried about you.

If there’s something wrong, we can talk about it.

Please tell me what’s going on. I’m really concerned about you.

I felt sick. I immediately killed the process running the bot and seized control of AIM again. I don’t even remember what kind of bullshit excuse I made before abruptly going offline, but I’m pretty sure I didn’t have the courage to own up to the truth.  I had been “seen” through the eyes of another person in a way I hadn’t experienced before, and the worst part about it was that I didn’t like what I saw. I saw a person who lied to their closest friends in the name of science. 

I know this won’t make up for what I did, but I’m sorry.

I’ve since learned a lot about research ethics in psychological studies. Sometimes deception is a necessary component of studying the phenomena you’re interested in, but that is not sufficient reason to forgo obtaining “informed consent” from the people involved in the study.

I think this is the reason why I’m frustrated with the current zeitgeist regarding AI. It seems like we’re rapidly falling into the trap outlined by Ian Malcolm in Jurassic Park. Some people are so preoccupied with what they could do with AI that they don’t stop to think about whether or not they should. As a result, we’ve all become unknowing participants in an unethical research study. While this behavior might be excusable coming from a punk teen who doesn’t know any better, it should be considered completely unacceptable coming from multi-billion-dollar companies claiming to advance the forefront of intelligence. It’s not that “I’m scared of AI”; it’s that “I’m scared of what people will do with AI” when they acquire that power without putting in the effort to truly understand it.

The AI images from DALL-E and Midjourney flooding my social media don’t identify themselves as artificially produced. The burden of identifying them has been left to unwitting viewers, and it will only become more difficult over time. While this makes for entertaining stories on Last Week Tonight, there’s a big difference between using AI to make funny pictures to share with your friends and using it to develop sophisticated social engineering methods that separate users from their passwords.

The reality of our time is that many AI offerings are being falsely advertised as a solution for intractable problems. No image generator could possibly produce pictures of John Oliver marrying a cabbage without first being trained on a set of labeled images including the likeness of John Oliver.  Any AI image generator trained solely on ethically produced data sets, like the one from Getty, will inherently lack the capacity to reproduce the likeness of pop-culture celebrities. Either the generative AI will fail to produce the likeness of John Oliver or it was trained on a data set including his likeness without seeking prior permission.  You can’t have it both ways.  

In much the same vein, it would be impossible to ask ChatGPT to produce writing “in the style of Ryan Ruff” without it first being trained on a data set that includes extensive samples of my writing. Obviously, such samples exist because you’re reading one right now. However, the act of you reading it doesn’t change my rights as the “copyright holder”. The “Creative Commons” licenses I typically release my work under (either CC-BY or CC-BY-NC-SA depending on context) require that derivative works provide “attribution”.  Either AI will fail to reproduce my writing style or it illegally scraped my work without adhering to the preconditions.  In the event my work is stolen, I’m in no financial position to take a megacorp to court for compensation.

In discussions about AI, we often neglect the fact that deception is an inherent component of how these systems are constructed. How effective we measure AI to be is directly linked to how effectively it deceives us. As poor an intelligence measure as the Turing Test is, it’s still the best metric we have for evaluating these programs. When the measure of “intelligence quotient” (IQ) in humans is a well-established “pseudoscientific swindle”, how could we possibly measure intelligence in machines? If you want a computer program that separates true statements from false ones, you don’t want an “artificial intelligence” but rather an “automated theorem prover” like Lean 4. The math doesn’t lie.
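To illustrate the contrast with a trivial example: a proof assistant either verifies a statement or rejects it, with no room for a persuasive-but-wrong answer. In Lean 4, `rfl` proves an equation by simply computing both sides:

```lean
-- Lean 4 accepts this because both sides compute to the same value.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Change the 4 to a 5 and `rfl` fails to type-check:
-- the prover cannot be talked into a falsehood.
```

A language model, by contrast, will happily generate whichever answer looks most plausible.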

I think one of the big lessons here is that the Turing Test wasn’t originally designed with machines in mind.  I still remember discovering this when I looked up the original paper in Doheny Library. The “imitation game” as originally described by Turing was primarily about gender.  It wasn’t about a machine pretending to be a human, but rather a man pretending to be a woman.  

Personally, my present hypothesis is that Turing was actively trying to appeal to “the straights” with how he described the imitation game. My incident with my AIM chatbot had taught me that there were large differences between how I interacted with “boys” and “girls”. Conversations with “boys” were shallow and unfeeling – easily replicated by my script. Conversations with “girls”, however, were more about getting to know the other person’s perspective to determine if we were a potentially compatible couple. Casual conversation and romantic courtship require two entirely different strategies. Maybe the Turing Test was less about determining if machines could think and more about determining if machines could love.

Every now and then I feel overwhelmed by the flood of interaction constantly produced by social media. Sometimes I wonder if my presence could be automated so that I never miss a “Happy Birthday!” or “Congratulations!” message ever again, but then I remember this story and realize that’s not really what I want. I don’t care about “likes”. I care about building “friendships”, and there’s no possible way a bot can do that on my behalf.

Maybe I could be a better friend if I collected data on my social interactions.  At the same time, I don’t need any sophisticated statistics to tell me that I’m kind of an asshole. I’d like to think that the people I call my friends are the same people that would call bullshit on me when necessary, so I trust in them to do so. This is the difference between real human relationships and fleeting conversations with a chatbot. 

There’s nothing inherently wrong with the class of algorithms that have fallen under the “AI” umbrella, but how we use these tools matters. Presently these bots are being marketed as a substitute for human labor, but the reality of our legal system dictates that there still needs to be a human accountable for their actions. The only viable path to survival for AI-focused companies is to become “too big to fail” before they get caught using pirated data.

I’m not going to sit here and pretend that AI won’t have its uses. Maybe AI will come up with solutions to important problems that humans would never have thought up. If every other technique for solving a problem is unreliable, there’s less harm to be caused by attempting to solve that problem through massive amounts of data. It makes sense that such statistical tools might come in handy in fields like marketing or cybersecurity where the rules of the system are ambiguously defined.  

What is clear to me is that there exist problems for which AI will never be a viable solution. GitHub’s Copilot won’t ever magically solve Turing’s Halting Problem. It’s called “undecidable” for a reason. Using ChatGPT won’t make me a better writer, nor will using DALL-E make me a better artist. There is no substitute for the process of turning my thoughts into a concrete form, and the only way to get better at those skills is to engage in them myself. Learning the internals of how AI works may have helped make me a better mathematician, but I wouldn’t expect it to solve P = NP anytime soon. 

Given my background in teaching, I was recently asked what I thought about applications of AI in education, and I think it’s incredibly important that we exercise an abundance of caution in its integration. This is something I’ve written about before, but I think it merits repeating that “AI needs to build a relationship of trust with all of the stakeholders in the environment”. In our society, we depend on teachers to be “mandated reporters” for child abuse, and I don’t think AI can responsibly fill this role. Without the lived experiences of a human being, how could such an AI possibly know what symptoms are out of the ordinary? What if it’s wrong?

Our very notion of “intelligence” is arguably shaped by the common human experience of schooling.  In my time teaching, I learned that the profession depended as much on “empathy” as it did “knowledge”.  Most of the people I’ve met who “hate math” developed this mindset in response to abusive teaching practices.  In order for AI to ever succeed in replicating good teaching, it needs to learn “how to love” in addition to “how to think” and I don’t think it ever can.  

Even my use of the word “learn” here seems inappropriate. AI doesn’t technically “learn” anything in the conventional sense; it just “statistically minimizes an objective cost function”. Seeing as “love” doesn’t have an objectively quantifiable function, it is therefore impossible to replicate using the existing methods of machine learning. Even if a machine were capable of expressing love, the act of replacing a human being’s job with such an AI would go against the very definition of what it means to love.

As with any new technology, AI can make our lives better or worse depending on how it’s used. Let’s not lose sight of “why” we constructed these systems in the first place: to improve the quality of human life. Building these programs comes with a very real energy cost in a world where humans are already consuming natural resources at an unsustainable rate. If the expected pay-off of these AI systems is less than the expected environmental costs, then the only responsible course of action is not to build them. Anyone who fails to mention these costs when it comes to their AI product should be treated as nothing short of a charlatan.

I can’t shake the feeling that we’re in the midst of an “AI hype bubble” and it’s only a matter of time before it bursts. I can’t tell you not to use AI, especially if your job depends on it, but as your friend, I feel it’s important for me to speak up about the risks associated with it. 

True friends know when to call bullshit.

Teaching Statement

I didn’t become a teacher with the intention of doing it forever.  My original goal was to design educational video games, but I felt it would be presumptuous of me to build such technology without ever having set foot in a classroom.  Becoming a math teacher seemed like the fastest way for me to find out what kinds of tools schools actually needed. Now I’m not even sure I’m the same person.  

I made countless mistakes during the twelve years I spent teaching, but the one thing I think I got right was approaching it with a “here-to-learn” attitude. Learning can only take place with the learner’s consent. Opening oneself up to learning a new skill means allowing oneself to be vulnerable to mistakes. Teaching is about creating an environment where multiple learners feel comfortable with the risks of engaging in that process together. The first step is to establish a relationship of trust.

In all honesty, relationship building has never been one of my strengths so I had to make an active effort to improve on it as a teacher.  I found that the most powerful method for facilitating a student’s learning is to simply ask what they need and listen to what they say.  Really listen and trust them.  It’s incredibly difficult to learn when you’re tired, hungry, or stressed. Sometimes “taking a break” is a necessary stage in the learning process. Treating people with kindness is a prerequisite for any meaningful learning to take place.   

One of the most difficult challenges for me as a teacher was learning how to navigate spaces of trauma.  For me, mathematics is something that evokes feelings of joy but my experiences are both highly abnormal and shaped by privilege.  More often a student’s experiences with mathematics are shaped by structural forms of oppression including racism, sexism, and ableism.  Learning how to openly reflect on how I was complicit in these systems was a key factor in my growth as a teacher.  I believe students should be able to see themselves in mathematics, so I tried to actively seek out and integrate the stories of mathematicians from diverse backgrounds into the curriculum.  The self-work continues to be an ongoing process.

My goal as a teacher was to construct an environment where my students could freely “play” with mathematics. I feel learners are entitled to the opportunity to explore mathematics and discover new knowledge on their own. Often the play comes with a set of constraints that help direct it towards a specific objective, but the important qualities are that the task has a low skill floor and a high skill ceiling. There should be both an easy way for everyone to engage and enough depth to encourage further exploration. Too often we fall into the trap of erroneously thinking there’s “one right answer” in mathematics, so I made it a point to include questions with “no wrong answer”. I found this helped foster a culture of collaboration in the classroom, because everyone’s input was of equal value in the discussion.

Exploration has limited effectiveness when you’re obligated to address very specific learning objectives, so I usually follow up with some form of direct instruction to fill in the gaps. It’s not quite as engaging, but sometimes students need a concrete example of the behavior they’re expected to model. Our brains are very efficient at mirroring actions.  I’ve found that “worked examples” can also provide a valuable resource afterwards when the student is attempting to replicate the process on their own.  As the number of examples grows, the metacognitive process of learning how to organize this information can reveal insights into its structure.

The next phase of the learning process is to engage in a cycle of formative assessment and feedback known as “practice”.  Any new skill must be practiced to be maintained.  This is one area where I think educational technology excels, because it can enable nearly instantaneous feedback to learners.  While my students often enjoyed the “gamification” of practice, it’s important to select such products carefully.  I’ve learned it’s important for developers to remember that “accuracy is more important than speed” and “some skills cannot be assessed through multiple-choice”.  As our technology improves, so will our automated feedback.  I’m particularly excited about the potential applications of “Large Language Models” in this area, but the application of Artificial Intelligence will also require a great deal of testing before it meets the ethical criteria necessary for use in the classroom.

In the reality of schooling, there’s likely to be a summative assessment stage in the learning process as well, but I tend to think this distinction is artificial.  As far as my class policies were concerned, all assessments are treated as formative where possible.  I tried to allow my students the opportunity to retake assessments as often as needed to the extent I was able. This is another aspect of teaching I found heavily supported by technology.  The combination of algorithmic question generation with automated feedback made it possible for me to focus on the broader picture provided by the data over time.  

If anything, I tend to look at summative assessment data as a tool for self-reflection.  As a student, summative assessments provide me with a form of external validation that I have in fact learned what I set out to learn.  As a teacher, the relation between assessment data and my own performance was always a little bit fuzzy but the process of looking back at that data and asking questions about what I could do differently was an essential part of my personal self-improvement.  I think it’s important to not put too much stock in any one assessment and instead use multiple data sources like observations and interviews to help triangulate areas for growth.  

The final stage in the learning process is to teach what you have learned to someone else.  I think we sometimes overlook this stage because it starts a new cycle of learning, but there are subtle differences between having a skill and being able to teach that skill to others.  Through attempting to teach math, I often found myself seeing old concepts in a new light.  My knowledge of geometry and data analysis grew deeper each time a student asked me “why?”.   Sometimes the most powerful phrase in the classroom is “I don’t know. How can we find out?”.  Likewise, I’m thankful that I had co-workers that were more knowledgeable about teaching than myself and capable of sharing that expertise.  I’m hopeful that I’ll be able to use what I’ve learned about teaching to help others as well.

I’m not necessarily looking for another “teaching job” but the act of teaching has become inseparable from how I learn.   Even if no one reads what I write, the act of putting my thoughts into words has power in it.  No matter where I go or what I do, I will learn new things and attempt to teach them to others.  We face a critical moment in society where we need to recognize the true value of the skills that teachers can bring to an organization. Every organization must learn to grow and teachers are experts on learning how to learn.

I used to identify my race as “Decline to state”

Recently I’ve read a couple of books (namely, White Fragility and How to be an Antiracist) that have made me reexamine certain aspects of myself through a lens of racial privilege.

For a significant period of my life, I refused to identify my race as “White” on any survey I completed. I’ve since realized that my doing so was an act of racism and I apologize. While I can’t change the past, I hope that sharing my story will serve as a token of my promise to make amends in the future.

The new information which led me to this conclusion was the idea that the white race begins with slavery. I had previously defined “White” as a roughly 2000-year-old construct when in reality it began roughly 400 years ago. This redefinition caused all sorts of cognitive dissonance until I learned about a defense mechanism that white people often exhibit called channel-switching, where we redirect discussions about racism to other factors. I discovered I had been subconsciously conflating the “White Race” with “Christianity” because I identified them both with the same structure of “White Power”. I had falsely assumed that since “White Power” was inherently “Christian”, the “White Race” through which “White Power” manifested was also “Christian”.

What bothers me most was that I knew that race and religion were not the same, and was careful not to invoke my atheism as a minority defense. I’d even openly identify as “Caucasian”, but the term “White” triggered in me a storm of rage and “Decline to state” was the calm. I thought that distancing myself from that label made me “not racist”, but I was wrong. The only way to engage in antiracism is through accurate statistical measurements of racial disparities. I wasn’t thinking about the potential harm that mislabeling myself could do, placing my own individuality over the welfare of others, and I’m sorry.

I’m going to try to continue down this path of antiracism, but I need help. I’m trying to look at this as a data analysis problem and realized that I don’t have enough information to properly disaggregate race from religion. I don’t know how to authentically engage in antiracism without also being antireligious. This presents a problem because I’m “in the closet” at work. I carefully avoid revealing my thoughts on religion because I fear that doing so could get me fired. In order to learn how to navigate this space, I need more information about this intersection of race-religion.

The first step I can take on my own. Quite frankly, I need to learn about how and why so many Black people adopted the religion of their oppressor. I believe that the best way for me to acquire this knowledge is through the narratives of Black thinkers who are critical of the role religion plays in the white power structure. I plan to seek out Black atheists, listen to their stories, and lend whatever weight I can to their voices. I’ve known for a long time that Black atheists were underrepresented in the atheist community, and it’s time I did something about it.

The second step is going to require feedback. I accept the premise that there is a non-zero probability with which I will commit acts of racism in the future. I need friends of color to call me out when this happens and engage me in an honest discussion about why. If you do this publicly, I will make my best effort to model antiracist behavior in response, but do so with the understanding that my response will likely be influenced by my antireligious views.

If you are reading this and are someone I work with, I hope you can understand the thin line I’m attempting to walk. I propose we establish a hidden signal: invoke each others’ first names when calling out acts of privilege (in whatever form that may take). This will serve as a reminder that what follows is being said as a friend and that we’re both together in this fight for equality as human beings.

The Future of AI: 13-Year-Old Ukrainian Boy Looking for Guild?

I recently finished reading Michio Kaku’s The Future of the Mind and found it very thought provoking.  A combination of cutting-edge advances in psychology, artificial intelligence and physics, mixed together with numerous pop-culture references made for a very informative and inspiring read.  Many of the ideas presented seemed very reminiscent of the narratives in The Mind’s I, but with a greater emphasis on the practicality of technological advances.  While I would no doubt recommend it to an interested reader, I don’t exactly intend for this post to turn into a book review.  This is more of a personal reflection on some of my thoughts while reading it.

Defining Consciousness: Kaku vs Jaynes

My first point of intrigue begins with Kaku’s definition of consciousness, which he calls the “space-time theory of consciousness”:

Consciousness is the process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space, time, and in relation to others), in order to accomplish a goal (e.g., finding mates, food, shelter).

Consciousness is a notoriously difficult phenomenon to define, and this is as good a definition as any in the context of the discussion. What’s interesting about this definition is that it begins with a very broad base and scales upward. Under Kaku’s definition, even a thermostat has consciousness – although to the lowest possible degree. In fact, he defines several levels of consciousness and units of measurement within those levels. Our thermostat is at the lowest end of the scale, Level 0, as it has only a single feedback loop (temperature). Level 0 also includes other systems with limited mobility but more feedback variables, such as plants. Level 1 consciousness adds spatial awareness and reasoning, while Level 2 adds social behavior. Level 3, finally, includes human consciousness:

Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future. This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.

This definition is much closer to conventional usage of the word “consciousness”.  However, for me this definition seemed exceptionally similar to a very specific definition I’d seen before.  This contains all the essential components of Julian Jaynes’ definition in The Origin of Consciousness!

Jaynes argued that the four defining characteristics of consciousness are (1) an analog “I”, (2) a metaphor “me”, (3) an inner narrative, and (4) an introspective mind-space. The “analog ‘I'” is similar to what Kaku describes as the brain’s “CEO” – the centralized sense of self that makes decisions about possible courses of action. Jaynes’ “introspective mind-space” is analogous to the “model of the world” in Kaku’s definition – our comprehensive understanding of the environment around us. The “metaphor ‘me'” is the representation of oneself within that world model, which provides the “feedback loop” about the costs and benefits of hypothetical actions. Finally, what Jaynes describes as “inner narrative” serves as the simulation in Kaku’s model.

This final point is the primary difference between the two models.  One of the possible shortcomings of Jaynes’ definition is that the notion of an “inner narrative” is too dependent on language.  However, Kaku avoids this confusion by using the term “simulation”.  Jaynes’ hypothesis was that language provided humanity with the mental constructs needed to simulate the future in a mental model.  I think the differences in language are understandable given the respective contexts.  Jaynes was attempting to establish a theory of how consciousness developed, while Kaku was attempting to summarize the model of consciousness that has emerged through brain imaging technology.

While I must admit some disappointment that Jaynes was not mentioned by name, it’s partly understandable.  Jaynes’ theory is still highly controversial and not yet widely accepted in the scientific community.  With Kaku’s emphasis on scientific advances, it might have been inappropriate for this book.  Nevertheless, I’d be interested to hear Kaku’s thoughts on Jaynes’ theory after having written this book.  Jaynes didn’t have the luxuries of modern neuroscience at his disposal, but that only makes the predictions of the theory more fascinating.

Artificial Intelligence (or the illusion thereof)

While I continued to read on, I happened to come across a news story proclaiming that the Turing Test had been passed.  Now, there are a couple of caveats to this claim.  For one, this is not the first time a computer has successfully duped people into thinking it was human.  Programs like ELIZA and ALICE have paved the way for more sophisticated chatterbots over the years.  What makes this new bot, Eugene, so interesting is the way in which it confused the judges.

There’s plenty of room for debate about the technical merits of Eugene’s programming.  However, I do think Eugene’s success is a marvel of social engineering.  By introducing itself as a “13-year-old Ukrainian boy”, the bot effectively lowers the standard for acceptable conversation.  The bot is (1) pretending to be a child and (2) pretending to be a non-native speaker.  Childhood naivety excuses gaps in knowledge about the world, while speaking a second language excuses lapses in grammar.  Together, these two conditions provide cover for the most common shortcomings of chatterbots.

With Kaku’s new definition of consciousness in mind, I started to think more about the Turing Test and what it really measures.  Here we have a “Level 0” consciousness pretending to be a “Level 3” consciousness by exploiting the social behaviors typical of a “Level 2” consciousness.  It would be a stretch to label Eugene a “Level 3” consciousness, but does his capacity for social manipulation sufficiently demonstrate “Level 2” consciousness? I’m not really sure.

Before we can even answer that, Kaku’s model of consciousness poses an even more fundamental question.  Is it possible to obtain “Level (n)” consciousness without obtaining “Level (n-1)”?

If yes, then maybe these levels aren’t really levels at all.  Maybe one’s “consciousness” isn’t a scalar value, but a vector rating each type of consciousness separately.  A human would score reasonably high in all four categories, while Eugene scores high on Level 0, moderate on Level 2, and poor on Levels 1 and 3.
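One way to picture the vector view is a per-level profile rather than a single number.  The sketch below is purely illustrative — the 0-to-1 scale and the scores I’ve assigned are my own invention, not anything from Kaku:

```python
from dataclasses import dataclass

@dataclass
class ConsciousnessProfile:
    """Hypothetical per-level ratings (0.0-1.0), one slot per Kaku level."""
    feedback: float    # Level 0: basic feedback loops
    spatial: float     # Level 1: spatial awareness and reasoning
    social: float      # Level 2: social behavior
    simulation: float  # Level 3: simulating the future

    def as_vector(self):
        return (self.feedback, self.spatial, self.social, self.simulation)

# Invented scores for the sake of argument:
human = ConsciousnessProfile(feedback=0.9, spatial=0.9, social=0.9, simulation=0.9)
eugene = ConsciousnessProfile(feedback=0.9, spatial=0.1, social=0.5, simulation=0.1)

# Collapsing the profile to a single scalar hides Eugene's imbalance:
print(sum(eugene.as_vector()) / 4)
```

The point of the vector form is exactly that the average throws away the shape: a middling scalar could describe either a mediocre all-rounder or a lopsided specialist like Eugene.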

If no, then maybe the flaw in A.I. development is that we’re attempting to develop social skills before spatial skills.  This is partly due to the structure of the Turing Test.  Perhaps, like the Jaynesian definition of consciousness, we’re focused a bit too much on language.  Maybe it’s time to replace the Turing Test with something a little more robust that takes multiple levels of consciousness into consideration.

The MMORPG-Turing Test

Lately I’ve been playing a bit of Wildstar.  Like many popular MMORPGs, one of the significant problems facing the newly launched title is rampant botting.  As games of this genre have grown in popularity, their virtual in-game currencies have become commodities with real-world value.  The time-consuming process of collecting in-game currency, or gold farming, provides ample motivation for sellers to automate the work with a computer program.  Developers like Carbine are in a constant arms race to keep these bots out of the game and preserve the experience for human players.

Most of these bots are incredibly simple.  Many simply play back a pre-recorded set of keystrokes to the application.  More sophisticated bots might read, and potentially alter, the game program’s memory to perform more complicated actions.  Oftentimes these bots double as an advertising platform for the gold seller, spamming the in-game communication channels with the seller’s website.  It’s also quite common for those websites to host key-loggers, as hijacking an existing player’s account is far more profitable than botting itself.

While I’m annoyed by bots as much as the next player, I must admit some level of intrigue with the phenomenon.  The MMORPG environment is essentially a Turing Test on an epic scale.  Not only is the game’s player base on constant lookout for bot-like behavior, but the developers also implement algorithms for detecting bots.  A successful AI would need to deceive not only humans but also other programs.  It makes me wonder how sophisticated a program would need to be to remain indistinguishable from a human player.  The odds are probably stacked against such a program.

Having played games of this type for quite some time, I’ve played with gamers who are non-native speakers or children, and I’ve also seen my share of bots.  While the “13-year-old Ukrainian boy” ploy might work in a text-based chat, I think it would be much more difficult to pull off in an online game.  It’s difficult to articulate, but human players just move differently.  They react to changes in the environment in a way that is fluid and dynamic.  On the surface, they display a certain degree of unpredictability while also revealing high-level patterns.  Human players also show goal-oriented behavior, but the goal of the player may not necessarily align with the goal of the game.  It’s these kinds of qualities that I would expect to see from a “Level 1” consciousness.
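To make the “humans just move differently” intuition concrete, here’s a toy heuristic of my own devising (not any developer’s actual detection method, and real anti-bot systems are far more sophisticated): a naive replay bot fires actions on a nearly fixed clock, so the sheer regularity of its inter-action timing can give it away.

```python
import statistics

def looks_bot_like(action_times, cv_threshold=0.05):
    """Flag an agent whose inter-action intervals are suspiciously regular.

    action_times: timestamps (in seconds) of successive in-game actions.
    Returns True when the coefficient of variation (stdev / mean) of the
    gaps falls below the threshold -- i.e. metronome-like behavior.
    """
    gaps = [b - a for a, b in zip(action_times, action_times[1:])]
    if len(gaps) < 2:
        return False  # not enough evidence either way
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold

replay_bot = [i * 0.5 for i in range(20)]  # a perfectly periodic clicker
human = [0.0, 0.61, 1.05, 1.92, 2.31, 3.40, 3.77, 4.95, 5.30, 6.58]

print(looks_bot_like(replay_bot), looks_bot_like(human))  # True False
```

Of course, a bot could defeat this particular check by jittering its timing — which is exactly the arms-race dynamic described above: each detection heuristic pushes bots one step closer to human-like behavior.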

Furthermore, online games have a complex social structure.  Players have friends, guilds, and random acquaintances.  Humans tend to interact differently depending on the nature of the relationship.  In contrast, a typical chatterbot treats everyone it interacts with the same.  While some groups of players have very lax standards for who they play with, other groups hold very high standards for player ability and/or sociability.  Eugene would have a very difficult time getting into an end-game raiding guild.  If a bot could successfully infiltrate such a group without its knowledge, it might qualify as a “Level 2” consciousness.

When we get to “Level 3” consciousness, that’s where things get tricky.  The bot would not only need to understand the game world well enough to simulate the future, but it would also need to be able to communicate those predictions to the social group.  It is, after all, a cooperative game and that communication is necessary to predict the behavior of other players.  The bot also needs to be careful not to predict the future too well.  It’s entirely possible for a bot to exhibit super-human behavior and consequently give itself away.

Beyond setting conditions for the various levels of consciousness, MMORPGs also enforce a form of natural selection on bots.  Behave too predictably?  Banned by bot detection algorithms.  Fail to fool human players?  Blacklisted by the community.  Wildstar potentially adds a survival resource in the form of C.R.E.D.D., which could require the bot to earn sufficient in-game funds to continue its subscription (and consequently, its survival).
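That C.R.E.D.D. constraint boils down to simple arithmetic: the bot “pays rent” only if a month of farming covers the in-game market price of one C.R.E.D.D.  A minimal sketch, with all numbers invented for illustration (Wildstar’s real C.R.E.D.D. prices floated with the player market):

```python
def bot_survives(gold_per_hour, hours_per_day, credd_price_gold, days=30):
    """Does a month of farming cover the bot's subscription?

    All parameters are hypothetical illustration values, not real
    Wildstar economy figures.
    """
    income = gold_per_hour * hours_per_day * days
    return income >= credd_price_gold

# A slow farmer running 20 hours a day against a pricey C.R.E.D.D.:
print(bot_survives(gold_per_hour=4, hours_per_day=20, credd_price_gold=3000))
```

The interesting wrinkle is that the price is set by players, so the more bots flood the market with gold, the more gold a C.R.E.D.D. costs — the selection pressure tightens as the bot population grows.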

Now, I’m not encouraging programmers to start making Wildstar bots.  It’s against the Terms of Service, and I really don’t want to deal with any more than are already there.  However, I do think an MMORPG-like environment offers a far more robust test of artificial intelligence than a simple text-based chat if we’re measuring consciousness by Kaku’s definition.  Perhaps in the future, a game where players and bots can play side-by-side will exist for this purpose.

Conclusion

When I first started reading Kaku’s The Future of the Mind, I felt like his definition of consciousness was merely a summary of the existing literature.  As I continued reading, the depth of that definition continued to grow on me.  In the end, I think it might actually offer some testable hypotheses for furthering AI development.  I still think Kaku needs to read Jaynes’ work if he hasn’t already, but he’s also demonstrated that there’s room for improvement on Jaynes’ definition.  Kaku certainly managed to stimulate my curiosity, and I call that a successful book.