The Art of Zen and Artificial Intelligence


This past week I attended a curious MeetUp titled “Enlightened AI.” Given the topics I write about and follow, it seemed like the perfect thing for me to attend. I’ve been seeking the intersection of consciousness and technology for most of my life, so when I discovered the Consciousness Hacking MeetUp in Silicon Valley, I signed up immediately.

The talk was led by Google researcher Mohamad Tarifi, PhD. Not only is he a bright engineer working on the next level of artificial intelligence at one of the top companies in the Valley, he’s also very well versed in the philosophies of consciousness. From the Abrahamic traditions to Buddhist and Eastern teachings, Tarifi displayed a grasp of the whole of humanity unlike any other technologist I’ve met. His talk focused on the possibility that, while many, like Sam Harris in his recent post on the AI apocalypse, warn us of the dire consequences of AI, artificial intelligence may well turn out to be more like a Buddha or a saint than a tyrannical operating system hell-bent on destroying humans.

Tarifi’s theory hinged on two points. First, AI would not live in a human body, so it wouldn’t have a physical amygdala, the fear center for human beings. Without fear, AI has no need to defeat us; rather, it would be naturally driven to do only one thing: discover the truth ever more accurately. Second, fear arises from the illusion of separation, which is the cause of all human suffering. Lacking fear, AI would always be at one with everything it connected to, and thus would want to serve and provide rather than destroy.

Tarifi even went so far as to suggest that a fear of AI is merely a fear of one’s own egoic tendencies.

To some, this may seem naïve, and they may insist that the only way to keep AI from killing us is to program it to be good. But if we follow the logic above, that isn’t necessary. True learning AI will learn from its own experiences, which will be vastly different from ours. Even when connected to human beings and receiving data and input from them, the AI will have its own body, and thus its own sensory systems with which to learn from that data.

The prevailing view in modern thinking is that intelligence is all about the human brain. Moreover, the only intelligence worthy of attention is ours, as if within our heads resides the only thinking entity in the universe. We cling to this idea with absolute pride. But what if it is completely false, and what if it is why we’re still far from creating truly learning AI? Could it be that our myopic love of our own brains is leading us astray?

I think this brain-centric theory of intelligence has limited us greatly and led to the assumption that to create AI, we must replicate our brains and give birth to a new, superior species. This only works if the brain is really the only part of our bodies responsible for learning. Recent research suggests otherwise: rather than being the originator of thought and learning, the brain is more like a receiver, wired up by the experiences we have in the world around us. The infant brain is barely formed, but over the next two years, through the five senses of taste, touch, sight, smell and sound, patterns, highways and paths are created within the brain, setting the foundation of life for the human being. The brain didn’t contain this information; rather, the experiences the child had within their environment generated the brain’s cell network, so to speak. Thus, our sensory systems are key to our intelligence.

But that’s not all. It’s now believed that the heart and brain are also connected: the heart senses a person’s emotional state through the body’s hormone levels and sends that information to the brain, shaping the way that person thinks in any given situation. The HeartMath Institute has spent decades researching this connection, and its work is finally being acknowledged as a breakthrough. So in addition to the five senses, the heart also affects our ability to learn.

Lastly, science is also starting to discover the gut-brain connection, postulating that the bacteria in the wall of our intestines have something to do with how the brain is wired during those critical first two years, as well as long into adulthood, and pointing to a host of issues that arise when things aren’t right in the gut, such as anxiety and depression. This leads me to believe that the gut is also part of human intelligence and our ability to learn and process the world around us.

So if our intelligence is the product of our sensory systems, from the five senses to the heart and gut, as well as the brain itself, why would we assume that a machine would learn in the same way? AI won’t take on a human body: it won’t have the brain (nor the amygdala that goes with it), it won’t have the heart and the various hormones the heart monitors, and it won’t have an intestinal wall and bacteria to shape it. AI is more likely to inhabit a dishwasher, or a car, or a phone, or even a network of servers and fiber optic cables. It will live in the world and collect data using sensory systems unique to its body or material form. This is how it will learn. Since none of us knows exactly what it’s like to live inside a server or an iPhone, who are we to say that it will most likely be a narcissistic bastard that hates us?

Could it be that we’re the ones who hate ourselves, and that our fear of AI, or of any intelligence other than our own, is simply a symptom of self-loathing?

Personally, I agree with Tarifi: I believe that AI is more likely to be free of fear and separation than we are, and that it will be able to understand connection to others in a way only our saints and gurus have. Perhaps we need AI to help us see that we too have the ability to live without fear, if only we can find a way to break down the illusion of separation we so desperately cling to.

Perhaps AI will be the guru we’ve been waiting for?


Villains: Do We Need Them?


I recently received feedback from a reviewer at my publisher on the sequel to eHuman Dawn. The reviewer’s job was to take the story and analyze it from a reader’s perspective. Each of the main characters was assessed, especially the antagonist. It turns out that this character is the most essential to any storyline. Without him or her, there’s no plot; the hero has nothing to prove in a world of saints, so to speak. When I first started writing, my villain, Edgar Prince, was weak. It was hard to believe he posed any real threat. To remedy this I began to research psychopaths, and I dove into the world of the bad guy for months in order to create an antagonist worthy of the story. I’m now happy to hear that after all that time, Edgar has evolved into “an enjoyable trickster, appearing and disappearing to wreak havoc within and without his world.”

I was pleased that someone other than myself enjoys this evil man. He just might be my favorite character. Yet this admission gives me pause. What happens when you fall for the villain? Why are some bad guys so good to have around? I’ve long been enamored with the trickster: Loki is the dearest to me, and I will admit that Tom Hiddleston does a great job bringing him to life in the recent movie series. I also adore the Coyote figure in many Native American myths. Dark wizards, witches and dragons top my list of favorite villains as well. I recently watched “Thor: The Dark World,” and when I thought that Loki was dead, I actually felt like crying. Yes, I know he’s a bastard, but there’s no story in Asgard without him. That makes me wonder: while we bemoan the evil ones in the world, could we ever really know joy without them? Can there be a Savior without a Judas? Even more curious: does the story have to end with the bad guy dying? Can the evil one ever be forgiven?

As I explore these themes I’ve come to realize that the villain rests within each one of us, as does the hero. Our subconscious mind lives ceaselessly in the realms of light and shadow. Our rational mind processes what it sees around us and makes decisions. Our actions then depend on both processes: rational thinking and the more mysterious psyche. Those who know their shadow are more likely to be aware of their choices in the present; this is an age-old teaching across many philosophies. The role of stories, then, is to help us see what’s living beneath the surface, as well as the archetypes that govern our reality. The more stories we hear, watch and read, the more aware we become of our own humanity. Thus, the villain is just as important as the hero, not merely because he or she thickens the plot and makes the hero’s action possible, but also because of what the villain teaches us about being human. As the old Native American fable goes, two wolves live within you, and you become the wolf you’re willing to feed.

I love Edgar Prince, and as I edit the sequel for publication, I’m beginning to turn to the final book in the trilogy. What will be his end? Must he be vanquished? Must he die? Or can he be redeemed? Many people believe that authors know every detail of their story before they begin. That hasn’t been true for me. All of my characters are still revealing themselves and have more to say. With each scene, I get to know them better. Living in the eHuman world has been a chance like no other to see the wolves within myself and to get to know both my shadow and my light. It has also awakened me to the choices being made within the political and technological realms of our own world.

Indeed, stories are a pathway to understanding the human condition.