What should Cognition-as-a-Service mean?

I first came across the term Cognition-as-a-Service (CaaS) here[1] and boy, was I excited! For a moment, I was like, “That will take a cognitive load off my mind.” Much to my disappointment, the article talked about:

The cognitive operating system will reach out and connect our bodies and even REACH into them via augmented reality devices like Google Glass, and the quantified self movement.

Everything is going to get smarter. Your phone, your calendar, your watch, your radio, your TV, your car, your refrigerator, your house, your glasses, your briefcase and clothing.

While there is no problem with this vision in itself, what bugged me was the use of the word cognition. Cognition is a group of mental processes that includes attention, working memory, producing and comprehending language, learning, reasoning, problem solving, and decision making.[2]

Watson, Siri, etc. don’t involve all these processes. For example, IBM Watson[3] is amazing at answering questions in natural language – a very complex problem. At the risk of angering computer scientists: in a nutshell, IBM Watson retrieves information from a knowledge base after having understood what the person is asking. Siri does roughly the same thing, although it has to do so in a more location-aware manner. That leaves out a lot of cognition.

Soon, your TV, car, radio, etc. will all be smart – learning your patterns of daily use, predicting things for you and, in some cases, even carrying out tasks for you, like driving.

But what about something that makes YOU smarter? Or at least more rational and logical – something that helps you make decisions free from certain biases and outsmart others by augmenting your thinking process with logic, statistics, game theory, etc. That is what I want CaaS to do.

Let me expand a little more on that claim.

Humans are not good intuitive statisticians or logicians. Our hunches about what might be logically correct or statistically relevant are not right as often as we expect them to be. But we are good intuitive grammarians: even four-year-old children conform to the rules of grammar while talking, although they are not very good at identifying those rules.[4]

A glance at the List of cognitive biases or the List of fallacies on Wikipedia should give an idea of the number of ways we can stray from making “perfect” decisions. For example, at some time or other we have all done something because a voice in our heads went, “Everyone you know is doing it. It must be right.” Right there, we committed argumentum ad populum, or appeal to the people. Politicians and public speakers use such fallacies to their advantage all the time.

Our cognitive system is far from perfect and it is affected by factors that would surprise most people.

For example, consider this experiment demonstrating the anchoring effect: people were divided into two groups, one asked whether Mahatma Gandhi died before or after age 9, the other whether he died before or after age 140, and then both were asked to estimate the age at which he died. Even though both anchors are absurd, the two groups guessed significantly differently (an average age of 50 vs. an average age of 67).[5][6] This is because we tend to fixate on the number at hand, and our approximations get biased by numbers that might have nothing to do with the question. But does this happen in our daily lives? Remember the last time you bargained. You got shoes for Rs. 800 while the price on the tag said Rs. 1000, and you were feeling pretty smug about yourself. But consider for a moment that the shopkeeper cleverly “anchored” the upper bound at a ridiculously high number, and you did all your haggling keeping that high number in mind (like the 140-year anchor for Gandhi’s death).

We suck at intuitive probability too. Suppose we flipped a fair coin 7 times. The probability of the sequence HHHHHHH is the same as that of HTTHTHH – every specific sequence of 7 flips has probability (1/2)^7 = 1/128. But people bet more on the latter sequence because it “looks more like” a random sequence. People don’t expect 7 consecutive heads and would start wondering whether the coin is biased if that happened.
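If you don’t trust the arithmetic, a quick simulation makes the point. Here is a minimal sketch in Python (my own illustration, not from any of the articles cited here):

```python
import random

# Every specific sequence of 7 fair flips has the same probability.
p = (1 / 2) ** 7
print(f"P(any specific 7-flip sequence) = {p}")  # 0.0078125 = 1/128

# Simulate many runs of 7 flips and count both sequences.
random.seed(0)
trials = 200_000
counts = {"HHHHHHH": 0, "HTTHTHH": 0}
for _ in range(trials):
    seq = "".join(random.choice("HT") for _ in range(7))
    if seq in counts:
        counts[seq] += 1

for seq, n in counts.items():
    print(f"{seq}: {n / trials:.5f}")  # both hover around 1/128 ≈ 0.00781
```

Both sequences come up about equally often; only our intuition says otherwise.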

We have developed some great concepts in logic, probability, game theory and decision theory, and it would be awesome if we could include these in our intuitive thinking process. Imagine pointing out the logical inconsistencies in a politician’s speech, or not falling for the tricks of advertisers and marketers. But in life we don’t get opportunities to sit down with a notebook and calculate the probabilities or deduce the logic. This is where CaaS comes in.
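To make that concrete, here is the kind of back-of-the-envelope Bayes’ theorem check a CaaS could run for us on the fly. All the numbers below are hypothetical – say an ad touts a test as “90% accurate” for a condition that affects only 1% of people:

```python
# Hypothetical numbers for an advertised "90% accurate" test.
prevalence = 0.01           # P(condition) -- the base rate
sensitivity = 0.90          # P(test positive | condition)
false_positive_rate = 0.10  # P(test positive | no condition)

# Bayes' theorem: P(condition | positive)
#   = P(positive | condition) * P(condition) / P(positive)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.2%}")
# => 8.33% -- nowhere near the intuitive "90%"
```

Once the base rate is taken into account, the intuitive reading of “90% accurate” is off by an order of magnitude – exactly the kind of correction we rarely stop to make.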

What I would love is a system that raises red flags when there is a chance of us committing a logical fallacy or falling prey to a cognitive bias; a system that helps us interpret data correctly and intuitively; a system that helps us base some of our key decisions on statistics and not mere conjecture; a system that gives us game-theoretic strategies based on our real-life scenarios. A perfect CaaS would require not only our speech and vision as inputs but also an idea of what is and isn’t important to us, and that is never really quantifiable. People would obviously be concerned about privacy if such a system existed, but I expect such services to be used only in situations where we know going in that we need to be careful about the logic other people are going to use (lawyers, judges) or the specious stats people might lure us with (advertisements).
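To give a flavour of the red-flag idea, here is a deliberately naive sketch. The names and cue phrases below are mine, and a real CaaS would need language understanding far beyond keyword matching – this only illustrates the interface I have in mind:

```python
# A toy "red flag" scanner: match text against phrases that often
# accompany common fallacies. Purely illustrative; a real system
# would need genuine natural-language understanding.
FALLACY_CUES = {
    "argumentum ad populum": ["everyone is doing it", "everybody knows",
                              "most people agree"],
    "appeal to authority":   ["experts say", "scientists agree"],
    "false dilemma":         ["either we", "the only choice"],
}

def red_flags(text: str) -> list[str]:
    """Return the names of fallacies whose cue phrases appear in `text`."""
    lowered = text.lower()
    return [name for name, cues in FALLACY_CUES.items()
            if any(cue in lowered for cue in cues)]

print(red_flags("Everyone is doing it, so it must be right."))
# => ['argumentum ad populum']
```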

Technologically, such a CaaS is a distant dream. Till then, it is a good idea to keep in mind the biases we might harbor and the fallacies we might commit.

X-as-a-Service (XaaS) refers to the growing number of services delivered over the internet, like Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), etc. Google Docs is an example of SaaS – software that is hosted in the cloud, not on your computer.

[1] Why Cognition-as-a-Service is the next operating system battlefield 
[2] Cognition
[3] Watson (computer)
[4] ‘Thinking, Fast and Slow’ by Daniel Kahneman
[5]
[6] Anchoring


One thought on “What should Cognition-as-a-Service mean?”

  1. Great blog post, although it seems you are missing some updates on IBM Watson. You mention that Watson does not involve working memory, producing and comprehending language, learning, reasoning, problem solving, and decision making. You write that IBM Watson retrieves information from a knowledge base after having understood what the person is asking, whereas even at an early stage Watson fielded Jeopardy!’s clever, wordy, information-packed questions, which were written with only humans in mind, without regard for the possibility that a machine might be answering. IBM Watson is not only amazing at answering complex questions in natural language. In the case of the IBM Oncology advisor, Watson does not understand questions; it ingests the medical record of a patient and suggests the best therapies described in the medical literature, case studies, journals and guidelines. Siri, by contrast, only supports tailored requests from users who know they are speaking to a computer. As for learning, IBM Watson supports dynamic learning: through repeated use, Watson literally gets smarter by tracking feedback from its users and learning from both successes and failures. And with IBM Watson debating technologies, IBM delivers technology that can assist humans to debate and reason; more information and a demo of this effort can be found in this video (starting around 45 minutes). Siri does none of that.
