Can You Teach a Computer Common Sense? Part Two




I love software design. Society has been forever changed by the people who created great software products housed in machines that fit easily in the palms of our hands and have seamlessly become an important part of our lives. The smartphone and the human are now almost inseparable. But are these machines truly "intelligent"? A reader recently sent me a great article from the New Yorker by Gary Marcus about Hector Levesque, a University of Toronto computer scientist who studies AI and the questions used to determine the intelligence of our machines. Marcus writes,

"Hector Levesque thinks his computer is stupid—and that yours is, too. Siri and Google’s voice searches may be able to understand canned sentences like “What movies are showing near me at seven o’clock?” but what about questions—“Can an alligator run the hundred-metre hurdles?”—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better."

As I mentioned in Can You Teach A Computer Common Sense, Part One, software is only as good as the programmers who write it. So if the goal of the next generation of AI is to work seamlessly within society, engineers will need to go beyond the limited view of how things should work and focus on how things will work once they're in the customer's hands.

In his article, Marcus writes that Levesque thinks the Turing Test is useless because it's too easy to game. "Every year, a number of machines compete in the challenge for real...But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine “How tall are you?” and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence…The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software that is specifically tuned toward fixing some sort of arbitrary test."

Perhaps creating intelligent software isn't about meeting the requirements set out in some document. Perhaps it requires a desire on the creator's part to understand human behavior in order to design software that can begin to think like a human, that is, to link unrelated possibilities and come up with an answer. In other words, perhaps we need to teach AI how to solve rational problems with an irrational mind.

When I was a newly minted Lead Engineer, my first assignment was managing Maintenance On the Line, aka MOL. Every engineer's nightmare. Except for me, it wasn't. I loved working with the customers and understanding how my product interacted with the rest of the system. More than that, I found the process of defect management and repair enlightening. A few months on the job and I'd learned more about software design than in all four years of college combined.

Time after time, the software would glitch in the field because the designers had failed to consider how it might behave in a real-world environment. Over the course of managing the MOL team for my product, I met a lot of software engineers. The ones who still stand out in my mind were the ones who understood the entire system: how every "box" worked within it, what happened if part of the system went down, and how a human would use the system on a daily basis.

Like Mace Windu in "Star Wars," they understood the system's "shatterpoints": the points where key events came together in ways that just might break how the entire system functioned. There's no way to predict every possible failure and use case, but the engineers and programmers who understood how humans interacted with their products wrote the most resilient software. Even more important, when an issue came up in any part of the code, these same engineers could fix it immediately. Watching them debug was like attending a Vegas show: flashy, exciting, and entertaining. I'm not kidding, it was mastery.
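To make that habit of mind concrete, here is a minimal sketch in Python. It assumes any "box" you depend on can go down, and degrades gracefully instead of letting one failure take the whole system with it. Every name in it (fetch_status, CachedStatus, the retry counts) is hypothetical and invented for illustration, not code from any real product or library.

    # Hypothetical sketch: survive a downed dependency instead of crashing with it.
    import time


    class CachedStatus:
        """Keep the last known-good answer so a downed dependency degrades, not destroys."""

        def __init__(self):
            self.last_value = None
            self.last_updated = None

        def update(self, value):
            self.last_value = value
            self.last_updated = time.time()


    def fetch_status():
        """Stand-in for a call to another part of the system; may fail at any time."""
        raise ConnectionError("upstream box is down")  # simulate a shatterpoint


    def get_status(cache, retries=2):
        for attempt in range(retries + 1):
            try:
                value = fetch_status()
                cache.update(value)
                return value
            except ConnectionError:
                time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
        # Fall back to the last known-good value rather than crashing the caller.
        return cache.last_value if cache.last_value is not None else "status unavailable"


    if __name__ == "__main__":
        cache = CachedStatus()
        print(get_status(cache))  # prints "status unavailable" instead of raising

The design choice worth noticing is the last line of get_status: the caller always gets an answer, even a stale or apologetic one, rather than an unhandled exception rippling through the rest of the system.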

To be truly of service, the next generation of AI needs to be designed and coded by people who see the "shatterpoints" of human interaction with technology. For AI to enhance our lives, it can't be designed by engineers who simply follow instructions and code to the requirements without considering the various ways humans might actually use it. Imagine an implant monitoring nanobots in your bloodstream getting stuck in some glitch because you had Indian food for the first time and the software didn't recognize the effect of curry on your glucose levels, simply because handling it wasn't in the requirements!
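Here is an equally hypothetical sketch of what "beyond the requirements" might look like for that implant. The numbers and names are invented for illustration; the point is only the shape of the code, which flags readings the spec never anticipated instead of wedging the device.

    # Hypothetical sketch: handle readings the requirements never anticipated.
    EXPECTED_RANGE = (60, 250)  # mg/dL range the spec writers anticipated (illustrative)


    def classify_reading(glucose_mg_dl):
        low, high = EXPECTED_RANGE
        if low <= glucose_mg_dl <= high:
            return "normal-path"           # the case the requirements covered
        if glucose_mg_dl > 0:
            return "unexpected-but-valid"  # curry night: log it, alert, keep running
        return "sensor-error"              # physically impossible value: fail safely


    if __name__ == "__main__":
        for reading in (110, 320, -5):
            print(reading, "->", classify_reading(reading))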

This isn't work for the faint of heart, for it requires an interest in humanity as well as software. When people and technology meet, the magic begins.

Computers can have common sense if their creators have common sense. AI can serve humanity if software designers and architects also serve humanity. It's the duty of the mothers and fathers of AI to embrace the world wide webs of both human nature and technology, in all of their nuances and beauty. Common sense isn't simple logic; it's understanding that when technology and life meet, there are complex consequences.
