The Triple Helix @ UChicago

Spring 2018

"Should Science Involve Deep Thought?" by Chadbourne Smith

 

The modern age promises bright horizons for the sister fields of artificial intelligence and machine learning. The old systems, encumbered by their hard-coded heuristics, slow processors, and meager memory, were limited to small, well-defined rulesets.[1] By modern standards, many might rightly refuse to call these systems intelligent. They were narrow in application, required constant human oversight, and seldom surprised their operators.

Contemporary systems, on the other hand, use two vastly more powerful and generalizable methods: evolutionary models and reinforcement learning. In essence, systems are given an input and evaluated on their output. In the evolutionary model, superior systems are carried over to the next round and used as templates for more systems; in the reinforcement model, the system's own inner workings are tweaked in response to its performance. All of this takes place as fast as processors can handle.
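To make the evolutionary model concrete, here is a minimal Python sketch of the loop just described. The task (matching a hidden target vector) and every name in it are illustrative stand-ins chosen for brevity; real systems evolve far richer structures.

    import random

    TARGET = [0.2, -0.7, 0.5, 0.9]  # the hidden "correct" behavior to be discovered

    def fitness(candidate):
        # Evaluate output: higher is better (negative squared error vs. the target).
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(template, scale=0.1):
        # Produce a new system from a template, with small random tweaks.
        return [x + random.gauss(0, scale) for x in template]

    # Evaluate every system, carry the superior ones to the next round,
    # and use them as templates for more systems.
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    print(max(map(fitness, population)))  # approaches 0 as the population converges

The reinforcement model replaces the carry-over step with direct tweaks to a single system's parameters, guided by the same kind of evaluation.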

Science has always developed alongside technology. The relationship between biology and the microscope alone exemplifies this. Light microscopy gave science visual confirmation and examination of cells, and once electron microscopy came along, science adopted the technology without missing a beat and advanced accordingly. One readily assumes the same will hold for machine intelligence systems. However, this is not likely to be the case at all.[2] Upon collision with these developing technologies, science may very well be forced into a new age, in which case thought ought to be given to the issue sooner rather than later.

Science, presently, is a rigorous process. It is self-correcting and places a heavy burden of proof upon its constituents. It has developed a long way from the philosophies of Aristotle, advanced beyond the overbearing empiricism of Bacon, and steadily grown into the system we have in place today. Yet, despite all its advances, and for completely human reasons, science cannot now nor ever consider itself exempt from error.

In so many walks of modern life – stocks, data analysis, advertising, even mid-level management – machine learning engineers treat the question of improvement as the question of removing human involvement. This ideology will naturally and inevitably be turned one day toward science.

Consider Deep Thought, humanity’s greatest supercomputer. This intelligence, handed the Ultimate Question of Life, the Universe, and Everything, took seven and a half million years of processing and returned the answer:

42 

Would this satisfy you, as a scientist?  Should it? Mind, Deep Thought is a fiction, invented by Douglas Adams, but – as with all great science fiction – highly relevant in the questions it poses or, rather, the answer it gives.  Discount the comedic factor from this circumstance. Imagine you have a perfect intelligence, you ask a question beyond the present scope of human understanding, and you receive (within a reasonable timeframe) some simple answer, equation, or model.  Though you cannot say where it comes from, in all cases it works.  Imagining a world where science, from a human perspective, becomes little more than the mass accumulation of data is not a far extrapolation from either this scenario or even the present state of machine learning systems. 

This conception, and the phrase “because it works,” do not sit well with scientists. That which works but is beyond our understanding sounds like witchcraft, not science. Understanding is fundamental and precious to science. But what of the knowledge that we simply cannot grasp? The argument for self-driving cars has never been the promise of a perfect driver (whether or not that could come to fruition), but rather that they drive better than a human does. So, if we build a machine that can out-think us, what would it mean for science to take ourselves out from behind the wheel?

For a glimpse into this world, consider a lower-stakes instance of inventing ourselves into obsolescence: Google DeepMind’s AlphaGo. AlphaGo, for those who don’t know, is the neural-network system that beat the human Go champion Lee Sedol four games to one. Two things need to be understood to realize why this is such a landmark. First, Go cannot be brute-forced. The game states of Tic-Tac-Toe can all be enumerated and, by examining every remaining outcome, a computer can play a perfect game every time. Doing the same for Go would take longer than the heat death of the universe.
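To make the first point concrete, the short Python sketch below enumerates every position Tic-Tac-Toe can legally reach; it finishes in well under a second. (The board encoding here is one convenient choice among many.)

    def winner(board):
        # The eight winning lines: three rows, three columns, two diagonals.
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]
        for i, j, k in lines:
            if board[i] != " " and board[i] == board[j] == board[k]:
                return board[i]
        return None

    def explore(board, player, seen):
        # Visit every position reachable through legal play, exactly once.
        if board in seen:
            return
        seen.add(board)
        if winner(board) or " " not in board:
            return  # the game has ended in a win or a draw
        for i, cell in enumerate(board):
            if cell == " ":
                child = board[:i] + (player,) + board[i + 1:]
                explore(child, "O" if player == "X" else "X", seen)

    seen = set()
    explore((" ",) * 9, "X", seen)
    print(len(seen))  # a few thousand positions, all of them

No comparable enumeration exists for Go, whose count of legal positions runs to roughly 10^170.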

Second, there exist no meaningful, apparent heuristics. Chess cannot be exhaustively enumerated like Tic-Tac-Toe, but it has long since been cracked – chances are your computer comes with a chess “A.I.” pre-installed. This is done with one of the aforementioned tools of machine learning’s youth: heuristics. As a simple example, a chess A.I. can look n moves ahead, tally up the point values of each side’s remaining pieces on the board, and then pursue the path which maximizes this value. Go has no such heuristics; the factors in the game are too dynamic for that to be the case.
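What that paragraph describes is, at heart, depth-limited minimax search over a material heuristic. The Python sketch below shows the skeleton; the board encoding and the legal_moves and apply_move callbacks are hypothetical stand-ins, not a real engine.

    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def material_score(board):
        # Tally piece points: uppercase pieces count for White, lowercase for Black.
        score = 0
        for piece in board:
            value = PIECE_VALUES.get(piece.upper(), 0)
            score += value if piece.isupper() else -value
        return score

    def minimax(board, depth, white_to_move, legal_moves, apply_move):
        # Search `depth` moves ahead; at the frontier, fall back on the heuristic.
        moves = legal_moves(board, white_to_move)
        if depth == 0 or not moves:
            return material_score(board)
        results = [minimax(apply_move(board, move), depth - 1, not white_to_move,
                           legal_moves, apply_move)
                   for move in moves]
        return max(results) if white_to_move else min(results)

Real chess programs sharpen this skeleton with pruning and far subtler evaluations, but the point stands: no analogous material_score exists for Go.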

Now, to momentarily appear contradictory, this landmark was achieved through the use of heuristics. An explanation: AlphaGo was trained using reinforcement learning, wherein it tweaked its inner workings (a neural network) to maximize victories against another machine opponent, which in turn tweaked itself, over and over, countless times. The key difference lies in how apparent the heuristics are. The inner workings of AlphaGo are not a complete black box – we have access to them – but the sheer volume of connections in the system means we cannot state the exact process by which an answer was returned. This is how the famous “37th move” came to be: a move that led AlphaGo’s experienced human opponent, Lee Sedol, to leave the room to collect himself, and one that Go champion Fan Hui called “not a human move.” AlphaGo made some calculation based on an opaque factor, and it worked. Economics and medicine have already begun integrating 37th moves into their fields; science will soon decide whether it must as well.

Science has, to me, always been fundamentally an endeavor of understanding.  But time and our own humanity pose very real limits upon the knowledge we can even touch, let alone fathom.  Each and every answer does, of course, boil down to 42, but since when has an answer been enough to satisfy science?

 

References

[1] Bostrom, Nick. Superintelligence. Oxford University Press, 2016.

[2] Dirican, Cüneyt. "The Impacts of Robotics, Artificial Intelligence On Business and Economics." Procedia - Social and Behavioral Sciences 195 (2015): 564-73. doi:10.1016/j.sbspro.2015.06.134.

 