Over at Wired, a headline blares "Artificial Intelligence Is Now Telling Doctors How To Treat You." Sounds unique, novel, and even scary. Those AIs are replacing humans, to the point where even the detailed knowledge of the physician is superseded by a Dr. HAL! The article talks about the Modernizing Medicine database and IBM's Watson supercomputer. One big problem with that headline (and the general thrust of the story): AIs have been telling doctors how to treat you for quite some time. A long time ago, in a galaxy far away, Edward Shortliffe created the expert system MYCIN. Curious? You can actually play around with MYCIN. A simplified version of it is freely available online, courtesy of a leading Google employee.
Here's some source code for a MYCIN-like expert system, straight from an early 1990s book penned by Peter Norvig, Google's current director of research. MYCIN uses expert rules to make medical inferences, and is one of the most famous artificial intelligence programs in the history of the discipline. It grew out of an earlier system, DENDRAL, which specialized in chemical analysis. I excerpt the first few functions:
```lisp
;;;; File mycin.lisp: Chapter 16's implementation of MYCIN.
;;;; A sample rulebase is provided in "mycin-rules.lisp".

(defconstant true   +1.0)
(defconstant false  -1.0)
(defconstant unknown 0.0)

(defun cf-or (a b)
  "Combine the certainty factors for the formula (A or B).
  This is used when two rules support the same conclusion."
  (cond ((and (> a 0) (> b 0))
         (+ a b (* -1 a b)))
        ((and (< a 0) (< b 0))
         (+ a b (* a b)))
        (t (/ (+ a b)
              (- 1 (min (abs a) (abs b)))))))

(defun cf-and (a b)
  "Combine the certainty factors for the formula (A and B)."
  (min a b))

(defconstant cf-cut-off 0.2
  "Below this certainty we cut off search.")
```
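If you don't have a Lisp handy, the certainty-factor arithmetic above translates directly into a few lines of Python. This is just my own illustrative sketch of Norvig's `cf-or` and `cf-and`, not part of his code; the example values are invented:

```python
# Certainty factors range from -1.0 (definitely false) to +1.0 (definitely true).
TRUE, FALSE, UNKNOWN = 1.0, -1.0, 0.0

def cf_or(a, b):
    """Combine certainty factors when two rules support the same conclusion."""
    if a > 0 and b > 0:
        return a + b - a * b          # mutual reinforcement
    if a < 0 and b < 0:
        return a + b + a * b          # mutual disconfirmation
    return (a + b) / (1 - min(abs(a), abs(b)))  # conflicting evidence

def cf_and(a, b):
    """Combine certainty factors for a conjunction of conditions."""
    return min(a, b)

# Two independent rules each support a diagnosis with moderate confidence;
# combined, they yield a stronger belief than either rule alone.
print(cf_or(0.6, 0.4))   # stronger than 0.6 alone
print(cf_and(0.6, 0.4))  # a chain is only as strong as its weakest link
```

Note the key property: two weakly supporting rules reinforce each other under `cf_or`, while `cf_and` caps a conclusion at its least certain premise.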
Yes, you read that correctly. It's ANSI Common Lisp. You can read the 1975 dissertation that produced MYCIN as well, as long as you've got the 'fro, lava lamp, and bell-bottoms handy to fully take in the nostalgia. I'll throw in some vintage Parliament too. Right around the time that Shortliffe published his dissertation book, Parliament dropped Mothership Connection, the best funk record of all time. Coincidence? Correlation isn't causation. That said, while information may or may not want to be free, information certainly makes its funk the P-Funk and wants to get funked up. It probably wants its funk uncut, too, I might add.
Now, it might seem ridiculous to compare an old expert system based on a database of expert rules to a modern cognitive computer like IBM's Watson and the Modernizing Medicine database app that the article covers. Indeed, there's a world of difference between having to elicit rules from experts and what modern AI does, as hinted at in the article:
Artificial intelligence–essentially the complex algorithms that analyze this data–can be a tool to take full advantage of electronic medical records, transforming them from mere e-filing cabinets into full-fledged doctors’ aides that can deliver clinically relevant, high-quality data in real time. “Electronic health records [are] like large quarries where there’s lots of gold, and we’re just beginning to mine them,” said Dr. Eric Horvitz, who is the managing director of Microsoft Research and specializes in applying artificial intelligence in health care settings.
However, rule-based systems never really went away, as their inclusion in modern engineering and automation textbooks suggests. Another shocker: research into similar systems for medical diagnosis and analysis is still ongoing -- and in (gasp) Prolog, too. Moreover, computational data mining and knowledge discovery in science and medicine isn't exactly new either. Nor is incorporating learning. My point is not to suggest that there is nothing new under the sun. Technology is not just about revolutionary new innovations -- sometimes it is about novel combinations of older paradigms. Better algorithms, methodologies, AI applications and systems, and tools that are more efficient and easier to use all matter, even if they aren't completely original. Given that quantum computing is not mainstream yet (and is still the subject of fierce debate), I type this on a classical computer. It's certainly not entirely novel, but it's much more powerful (and very different) than the one I first used as a very young child!
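To make concrete just how little magic the expert-system paradigm involves, the core of a rule-based system fits in a few lines. Here's a toy forward-chaining engine in the MYCIN spirit; the rules and findings are entirely invented for illustration and bear no resemblance to a real clinical rule base:

```python
# A toy forward-chaining rule engine: each rule maps a set of known
# findings (premises) to a new conclusion. Rules and symptom names
# are made up for illustration -- this is not medical knowledge.
RULES = [
    ({"fever", "stiff-neck"}, "suspect-meningitis"),
    ({"suspect-meningitis", "gram-positive"}, "suspect-streptococcus"),
]

def forward_chain(findings):
    """Repeatedly fire rules until no new conclusions can be derived."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            # A rule fires when all its premises are already known.
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# The second rule can only fire after the first has added its conclusion.
print(forward_chain({"fever", "stiff-neck", "gram-positive"}))
```

Everything an expert system "knows" lives in that rule list, which is exactly why eliciting and maintaining the rules, not running the engine, was the hard part -- and why learning from data was such an appealing alternative.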
Watson in particular is based on a combination of artificial intelligence technologies that dwarfs the old expert system paradigm in power and flexibility. Watson also promises, in theory, systems that are much easier for humans to interact with than ever before. Modernizing Medicine's user interface and accessibility are also worthy of note. By incorporating learning, it also promises greater customizability. But all of this represents a new response to a familiar problem: the need to capture human knowledge and make it accessible in the form of intelligent agents that cooperate with humans and increase productivity.
What raised my hackles about the headline -- and the lack of historical context -- is that the article regurgitates a traditional set of tropes. It opens with an account of how computers are taking over more and more of the expert's craft, and closes on an ominous note about computers creeping from making recommendations into making decisions. To boot, the title of the article flat-out states that soon, an AI will be telling your doctor what to do. The technology may be novel, and perhaps presents novel problems. But the story does not tell us anything that someone with even a tiny grasp of AI history didn't already know. Doctors and computer scientists have recognized the need for intelligent assistants since the dawn of AI. In fact, given the popularity and history of intelligent medical systems and algorithms, AIs have been telling doctors what to do for a long time.
Hence, in theory, none of the benefits and problems that the new systems pose are novel or unanticipated. To see what genuinely new benefits and problems might present themselves, we'd have to compare the technology being profiled to what came before -- which, again, the article makes little to no effort to do. Ultimately, the Wired article is symptomatic of a larger problem in discussions of AI: a lack of historical context. Not only does the article fail to inform its audience about a possible connection between P-Funk and computing (OK, I'm getting carried away, but it's George Clinton -- you have to forgive me), it also adds to a longstanding problem: it's difficult to make, well, intelligent public policy choices about intelligent systems when almost every conversation about them takes place in a historical vacuum.
Nature abhors a vacuum, so that vacuum gets filled with a fair degree of fear mongering. Distinguishing between reasonable concerns and irrelevant ones is hard when every story is pitched as speculative and cutting-edge. The journalists who penned the story could have thrown in a couple of sentences about what older systems tell us about the prospects for Watson and Modernizing Medicine. Instead, the article presents it as "wow, machines are telling doctors what to do! How revolutionary! How potentially troublesome!" History certainly is not destiny, especially where information technology is concerned. But taking the risk that history may not prepare us for disruptive innovations is better than simply ignoring history altogether.
Furthermore, continuously saturating the public with a combination of hype ("AI is the future!!") and fear mongering ("AI is telling your doctor what to do!!!") also has costs. Those who do not know the history of the episodic "AI Winter" bubble-bust cycle may invest in technologies that seem full of promise but in fact could very well be nearing a dead end. Saturating America with ahistorical fear mongering could result in a public and/or legislative backlash against potentially innovative and/or socially beneficial technologies. The history of cybersecurity regulation does not inspire confidence, especially as exploit researchers today find themselves being denounced as "merchants of death."
You don't have to download Norvig's MYCIN code and run it yourself. But knowing it exists is at least better than the analytical status quo in 2014.
Latest posts by Adam Elkus
- MYCIN, Watson, and AI History - August 28, 2014