It might seem strange to suggest that a science dedicated to learning from how we humans operate could actually return the favour and teach us about ourselves. Yet that is precisely what I am suggesting.
Having spent a good deal of my early career in the “first wave of AI”, I had developed a healthy scepticism about many of the capability claims made for AI. From the decade or more I spent as an AI researcher and developer, I had concluded that AI worked best when the domains of endeavour were contained within discrete, well-bounded ‘solution spaces’. In other words, despite the sophistication of the mathematical techniques developed for dealing with uncertainty, AI was simply not that good in the “grey” areas.
AI’s Second Wave
The “second wave of AI” received a big boost when Google’s DeepMind managed to up the ante on IBM’s chess-playing Deep Blue by defeating the world Go champion Lee Sedol. According to DeepMind founder and CEO Demis Hassabis, the success of their program AlphaGo could be attributed to the deep learning capabilities built into the program, as opposed to Deep Blue’s largely brute-force searching approach. Hassabis emphasizes the ‘greyness’ of the game of Go compared to chess. For those familiar with this ancient Chinese game, unlike chess, it has an almost spiritual dimension. I can vividly recall a research colleague of mine, who happened to be a Go master, teaching a novice colleague the game in a lunchtime session and chastising him for what he called a “disrespectful move”. So AlphaGo’s success is indeed a leap forward for AI in conquering the “grey”.
So what is this “deep learning” all about? You can certainly get tied up in a lot of academic rhetoric if you Google it, but for me it is simply about learning from examples. The two critical requirements are the availability of lots of examples to learn from, and the development of what we call an “evaluation function”, i.e. something that can assess and rate an action we are considering taking. The ‘secret sauce’ in AlphaGo is definitely its evaluation function. It has to be sophisticated enough to look many moves ahead and assess many competitive scenarios before evaluating its own next move. But this evaluation function, which takes the form of a neural network, has the benefit of being trained on thousands of examples drawn from online Go gaming sites, where the final result is known.
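To make the idea of an evaluation function concrete, here is a deliberately tiny sketch in Python. It is not AlphaGo’s neural network; it simply scores a “position” as a weighted sum of features and nudges the weights toward examples whose final result is known. The features, data, and function names are all invented for illustration.

```python
def evaluate(weights, position):
    # Score a position: higher means better for the player to move.
    return sum(w * f for w, f in zip(weights, position))

def train(examples, epochs=200, lr=0.01):
    # examples: list of (position, outcome) pairs, where the outcome
    # is +1 for a known win and -1 for a known loss.
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for position, outcome in examples:
            error = outcome - evaluate(weights, position)
            # Nudge each weight so the score moves toward the known result.
            for i, f in enumerate(position):
                weights[i] += lr * error * f
    return weights

# Two fabricated "positions": the first tends to win, the second to lose.
examples = [([1.0, 0.0], +1), ([0.0, 1.0], -1)]
weights = train(examples)
print(evaluate(weights, [1.0, 0.0]) > evaluate(weights, [0.0, 1.0]))  # True
```

The point is the shape of the idea, not the machinery: given enough examples with known outcomes, the program adjusts itself until its scores agree with history, and can then rate moves it has never seen.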
Deep Learning in Business
We can see many similarities to this context in business. For example, the legal profession is founded on precedents: there are libraries of cases available for which the final result is known. Our business schools regularly educate their students by working through case studies and connecting them to the underlying theories. Business improvement programs are founded on prior experience, or business cases, from which to learn. AI researchers have taken a lead from this and built machine learning techniques into their algorithms. An early technique that we had some success with is called “Case-Based Reasoning”. Using this approach, it wasn’t necessary to articulate all the possible solution paths, which in most business scenarios is infeasible. All we needed was a sufficient store of prior example cases to search through, to surface the cases that best matched the current context, leaving the human user to fill any gaps.
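The core of Case-Based Reasoning is retrieval: score every stored case against the current situation and hand the closest matches to a human. A minimal sketch, with an invented case library and similarity measure, might look like this:

```python
def similarity(case, query):
    # Fraction of the query's attributes that this stored case matches.
    matches = sum(1 for key, value in query.items() if case.get(key) == value)
    return matches / len(query)

def retrieve(case_library, query, top_k=2):
    # Return the top_k cases most similar to the current context.
    ranked = sorted(case_library, key=lambda c: similarity(c, query), reverse=True)
    return ranked[:top_k]

# A fabricated library of prior business cases with known outcomes.
case_library = [
    {"sector": "retail", "problem": "churn", "outcome": "loyalty scheme"},
    {"sector": "retail", "problem": "stockouts", "outcome": "demand forecasting"},
    {"sector": "banking", "problem": "churn", "outcome": "fee review"},
]

query = {"sector": "retail", "problem": "churn"}
for case in retrieve(case_library, query):
    print(case["outcome"])
```

Real CBR systems use far richer case representations and similarity measures, but the division of labour is the same: the machine searches the precedents, and the human adapts the nearest ones to the situation at hand.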
The Student Becomes the Teacher
Now back to my question: what can AI teach us about ourselves? Perhaps the most vivid lessons are contained in the reflections of the Go champions that AlphaGo defeated. The common theme was that AlphaGo made many unconventional moves that only appeared sensible in hindsight. Lee Sedol described his personal learning from his 4-1 defeat by AlphaGo in these comments: “My thoughts have become more flexible after the game with AlphaGo, I have a lot of ideas, so I expect good results” and “I decided to more accurately predict the next move instead of depending on my intuition”. So the teacher has now become the student!
It is common for us as human beings to fall subject to unconscious bias. We see what is being promoted as a “best practice”, perhaps reinforced by a select few of our own personal experiences, and are then willing to swear by it as the “right” thing to do. We forget that there may be hundreds or even thousands of contrary cases that could prove us wrong, yet we stubbornly stick to our original theses. Computers don’t suffer from these very human traits. What’s more, they have the patience to trawl through thousands of cases to fine-tune their learnings. So, in summary, what can we learn from AI?
- Remember that a handful of cases is not a justification for developing hard-and-fast rules;
- Before you discount a ‘left field’ suggestion, try to understand the experience base it is coming from. Does the person offering it have experiences and insights beyond those of your own close network?
- Don’t be afraid to “push the envelope” on your own decision making, but be sure to treat each result, good or bad, as contributing to your own growing expertise; and
- Push yourself to work in increasingly grey areas. Despite the success of AlphaGo, it is still a game, with artificial rules and boundaries. Humans are still better at doing the grey stuff!