Can You Teach a Computer Common Sense? (Part One)





"In your case, most likely, a small app would have been downloaded to your 
CPU to erase your previous few days of memory. You see, eHuman
software is really very easy to manipulate.”
~Alrisha, Lead Hacktivist for the Resistance


Artificial Intelligence. It's everywhere these days, especially in the theaters. Last month saw the release of the movie "Her," in which the lead character falls in love with an operating system. In addition, a trailer for Johnny Depp's new movie, "Transcendence," is now available on YouTube. Depp's portrayal of a genius espousing the future of a mind greater than all the minds that have ever existed on Earth is chilling.

Can this be? Can we create Artificial Intelligence that is smarter than we are? I don't mean faster, or with a better memory. Can AI have common sense? Can it foresee complex patterns? Can AI be programmed to have our ability to sense what's wrong and make choices based on instinct and past experience? What is intelligence without consciousness? Is it possible for us to create an AI that is truly evolving, truly learning and truly greater than all of us combined?

In my novel, eHuman Dawn, the AI is Neuro, a complex operating system that organizes and guides eHuman life. The eHumans are themselves a blend of AI and their own consciousness. The ideal Transhuman solution. But is such a solution possible? Or would this technology be the end of humanity?

Two readers sent me articles this week, both discussing this theme of AI and the scientific mind. Two things stood out for me. First, our computers are only as intelligent as the engineers who program them. Second, solving the complex issues of life requires more thought than passing the Turing Test or executing a mere Google search.

Let's begin with the first idea--just how much better can our AI be than ourselves? In "The Closing of the Scientific Mind," David Gelernter voices his thoughts about the work of leading singularity scientist Ray Kurzweil (for more on Kurzweil's mission, see my blog post, "Can Google Stop Death?"):

"Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be."

True confession: While writing the eHuman trilogy this exact thought has crossed my mind over and over again. Are Adam Winter and The Dawn of eHumanity still human? I want them to be, but what exactly are they? The eHuman is a product of the best of science, blended with human needs and desires. Yet those needs and desires are completely under the control of the scientists who created them. Is that a real human existence?

Remember: software is only as excellent as the people who make it. Honestly. So if we want to know what AI will look like in the future, let's consider the way science has been used to meet our human needs for the past forty years.

Do we trust the scientific mind that will cut down rain forests at unsustainable rates in order to raise cattle for our fast food chains? How about the scientists in Big Agriculture, whose minds can only create products that kill insects and weeds--products that may be inadvertently destroying the very insects needed for crop pollination by contributing to the global collapse of honey bee colonies? Do we trust a scientific mind that can't see and understand the web of life around us to create the web of artificial intelligence that will guide us into the future?

Many of our stories are prophetic. They tell of a future time when our AI takes over and destroys us. Why would that be? Why would we create AI that hates us and competes against us rather than cares for us and meets our needs? Why wouldn't our AI serve us and improve our lives and make things cleaner, more efficient and better for everyone? Why must our machines eventually kill us?

Alas, the answer might just lie within our own hearts, within our own intelligence, and the way we educate our children. Remember, software is only as good as the mind that creates it. If the current scientific mindset that has controlled technology for the past century is any indication, we're in trouble. Gelernter puts it this way: "Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo." AI may kill us, but only because it will mimic what its creators believe.

Artificial Intelligence can't save us from ourselves; it can only become what we are.

Therefore, if we want AI with common sense, we must hire software engineers with the most common sense. If we want AI that will lead us into a prosperous future, we need scientists who care about humanity, and the planet we live on, as a whole. If we want AI that will inspire us and grow with us, we need the business community to stop caring only about profits and to care instead about life in all its complexity.

The best of humanity must take part in this quest. Not just the smartest.

Imagine an Earth where machines and people live in harmony, rather than in competition. Is that possible? Perhaps we first need to learn to live in harmony with ourselves, each other, and the world we live in. Then we can create the machines.

To be continued...