In January of 2000, Scientific American published an article titled, "Once We Were Not Alone." The headline read, "Today we take for granted that Homo sapiens is the only hominid on Earth. Yet for at least four million years many hominid species shared the planet. What makes us different?"
It seems that archeologists have determined that about 100,000 to 70,000 years ago there were several varieties of sentient hominids living on Earth, making tools and forming small tribes. Then, over the next 20,000 years, all of them died off except us: Homo sapiens. And not only did the rest perish, but we, the victors, went on to advance so rapidly that in basically the blink of an evolutionary eye we had created the beginnings of civilization.
Scientists don't agree on what caused the other versions of humanity to die out, or on how Homo sapiens made such advances in so short a time. It's as if we simply woke up aware, ready to make war on everyone else and dominate the world. There are many theories about what is often called "The Great Leap Forward," but none can explain exactly how our genetics changed overnight. Could it be that something other than natural selection took place? Could humanity have been "gifted" this sudden awareness? And if so, could the giver of such power have been a race even more powerful than our own?
A.G. Riddle's The Origin Mystery series tackles these questions quite cleverly. Set in the present, the first two books, The Atlantis Gene and The Atlantis Plague, take the "Great Leap Forward as an extraterrestrial gift" concept and run with it, literally, through an amazing story of plot twists and turns. David Vale, a former CIA agent turned private mercenary, finds himself in the position of protecting scientist Dr. Kate Warner from her own family and employers while the world suffers from a plague worse than the Spanish Flu and the Black Death combined. These books are intense page-turners filled with action. I often found myself virtually winded, wondering when things would slow down and the poor couple might have a moment of peace!
The science in these novels revolves around DNA and genetic manipulation, as well as brain wiring and consciousness. Riddle has done a very good job of staying as accurate as possible while using his imagination to fill in the holes in his own unique way. Often the scientists in the novels are working with the part of our DNA that genetic researchers have labelled "junk"--the roughly 97% of our DNA that doesn't seem to code for anything we recognize. Is it possible that this "junk" DNA holds the key to why some genes get expressed, or turned on, while others stay inactive, lying dormant for a lifetime? Riddle uses these questions to create a very plausible, if sometimes complicated, story about genetics and the role it plays in human evolution.
That another race, the Atlanteans in this case, could manipulate our gene pool to create a species that meets their ideals is indeed eerie, and yet completely believable. We've been manipulating the genes of plants and animals for decades, and the human genome is about to become a more common part of our conversation as designer babies and nano-tech solutions to cancer and other illnesses move to the forefront of medical technology. Implanting circuits inside humans to download vaccines and genetic cures as they're created isn't too far off.
One of the other things I loved about The Atlantis Gene and The Atlantis Plague was the Author's Notes at the end, which are basically thank-you letters to the readers for reading and reviewing his work. A.G. Riddle self-published The Origin Mystery series on Amazon, and it's the reviewers in the eBook world who've brought his work to light. His gratitude towards his readers is wonderful, honest and refreshing. He's right: the best way to support the authors you enjoy is to write reviews on Amazon and other outlets, so that they can improve their craft and stay connected to their audience.
If you love the sort of story that's high tech, fast paced and includes advanced alien technology with a ton of history thrown in, you'll love both The Atlantis Gene and The Atlantis Plague. I know I did. Now to catch up on all that sleep I missed while reading them before he releases The Atlantis World, his last book in the trilogy.
Visit www.agriddle.com for more information on the novels and the author.
NOTE: I've begun to add reviews on this blog of movies and books related to science, computing, artificial intelligence, singularity, transhumanism, big data, internet privacy and human consciousness. I'm especially interested in helping to get the word out for indie films and books. If you have a book (nonfiction is fine as well) or movie/documentary you'd like me to review in my blog, let me know! I want to know about it! Contact me on Twitter @NSallakAnderson or via email email@example.com. Thanks!
Because the characters in my novels are transhuman, networked beings, in the course of my research for the eHuman Trilogy, I've come across all sorts of fantastic developments in science and technology. Following technology has become a part of my daily routine.
It has become very clear that the world is poised right now for a major technological breakthrough. There's no denying that we're going to cross over, one way or another, into a fully networked society. To prepare for this eventuality, the BBC reports, England has updated its curriculum: come September, the study of computing--and specifically coding--will be mandatory across all state primary and secondary schools.
Just how this networked world will look is still uncertain. As I've written in my previous blogs, I believe the future will be bright if and only if our best and most innovative humanists take part in the architecture of that world. That's why I encourage all humans who have an interest in determining the future to consider a career in software development.
Yes, that's what I said: ALL humans. Not just the young. Coding is like writing a novel or taking up the guitar--you can begin at any time in your life. And not just the men--you too, ladies. Look a little to the right and check out my profile picture. I'm a girl, through and through. Computers are for everyone, not just a select few.
True power today lies in information and data. Therefore those who know how to write the code to parse that data and make it useful are the ones who will shape the future. I'd say that knowing how to code is a must for everyone, ages 12 and up. Seriously, it's going to be more useful to know how to debug your singularity device than to have memorized the battle dates of the Civil War. Unfortunately, in a recent article in Mother Jones, I found this startling statistic about the AP exams taken by US students last year:
"More than three times more students tested in human geography (114,000) than computer science (31,000). No knock against human geography, but the Bureau of Labor Statistics says 1.4 million new jobs in software engineering will be created between 2012 and 2022. At current rates of enrollment, just 30 percent of those jobs could be filled by US college grads majoring in computer science."
Current college enrollment will only fill 30% of those jobs? Can this be true?
When people talk about unemployment in America, this fact is often left out: Americans aren't even studying to fill our best jobs.
Why not? Why wouldn't we want to create the best software industry in the world? Who wouldn't want to have a career that shapes the future of our very existence? Okay America, consider this my call to arms:
Here are five reasons why you should consider visiting Codecademy and beginning to program right now!
1. You'll always have a job. I met three recent college grads on New Year's Eve, and not one of them had landed a job yet. Zero. When I asked what they majored in, all three said, "Business." The unemployment rate for all US workers in 2010 was 9.6%. In the fields of software and engineering, it was only 3.8%.
2. You'll always be well paid. Half of all engineering employees in 2010 earned $73,290 or more--roughly twice the median income for the entire US workforce. According to the Bureau of Labor Statistics, the median income for software developers in 2012 was $90,060. That's reason alone to at least investigate software design a bit further.
3. You'll always have plenty to do. Remember, 1.4 million tech jobs are about to be created in the next 10 years. And at current college enrollment, only 30% will be filled. That means there's a job out there for you.
4. You'll be welcome in almost any business sector. Imagine working to mine big data for the health industry and discovering the genetic disruptors for cancer, or designing a workforce of human service robots. Banking is looking for software engineers as well. Economic innovations like cryptocurrencies will reshape our future economic policies. Entertainment needs programmers for jobs such as CGI in movies, signal processing in the music industry, and story and technology design in video games. The "internet of things" demands that we be able to access this entertainment from any point in our homes, offices and cars. There isn't an industry that won't need programmers.
5. You can do it from anywhere! Have computer, will code. From the beaches of Tahiti to the coffee shops of Seattle, software can be written from the ease of your home office, wherever that may be. Not only is the pay great, but there's total flexibility as to where you work.
So, consider it. If you want a job that makes good money, is in high demand, totally creative and can be done in your pajamas, then look into a career in software.
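If you've never written a line of code, here's a tiny taste of what "parsing data and making it useful" looks like--a minimal Python sketch, with salary numbers invented purely for illustration:

```python
# A first taste of programming: turn raw data into a useful statistic.
# The salary figures below are made up for illustration only.
salaries = [48_000, 52_000, 73_290, 90_060, 61_500]

def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:  # odd count: take the middle value
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even: average the two middle

print(f"Median salary: ${median(salaries):,}")  # prints "Median salary: $61,500"
```

A dozen lines, and you've computed the same kind of statistic the Bureau of Labor Statistics publishes. That's how approachable this field is.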
Our future depends on it.
I love software design. Society has been forever changed by those who have created great software products housed in machines that fit easily in the palms of our hands and have thus seamlessly become an important part of our lives. The smartphone and the human, for example, are now almost inseparable. But are these machines truly "intelligent"? A reader recently sent me a great article from the New Yorker by Gary Marcus about Hector Levesque, a University of Toronto computer scientist who studies AI and the questions used to determine the intelligence of our machines. Marcus writes,
"Hector Levesque thinks his computer is stupid—and that yours is, too. Siri and Google’s voice searches may be able to understand canned sentences like “What movies are showing near me at seven o’clock?” but what about questions—“Can an alligator run the hundred-metre hurdles?”—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better."
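That Florida Gators result is easy to demonstrate. Here's a toy sketch--with a completely made-up two-document index, not how any real search engine works--of ranking pages purely by word overlap with the query:

```python
import re

def tokens(text):
    """Lowercase a string and split it into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_match(query, documents):
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(documents, key=lambda doc: len(q & tokens(doc)))

# Invented mini-index for illustration.
documents = [
    "Hundred-metre hurdles preview: can the Gators run faster this season?",
    "Alligator facts: habitat, diet, and speed",
]

print(best_match("Can an alligator run the hundred-metre hurdles?", documents))
```

The sports page wins on shared words like "hurdles" and "run"--but no amount of word overlap produces the common-sense answer that no, alligators can't hurdle. Matching words is not the same as reasoning.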
As I mentioned in Can You Teach A Computer Common Sense, Part One, software is only as good as its programmers. If the goal of the next generation of AI is to work seamlessly within society, engineers will need to go beyond the limited view of how things should work and focus on how things will actually work once in the customer's hands.
In his article, Marcus writes that Levesque thinks the Turing Test is useless because it's too easy to game. "Every year, a number of machines compete in the challenge for real...But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine “How tall are you?” and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence…The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software that is specifically tuned toward fixing some sort of arbitrary test."
Perhaps creating intelligent software isn't about meeting the requirements set out in some documentation. Perhaps it requires a desire on the creator's part to understand human behavior in order to design software that can begin to think like a human--that is, to link seemingly unrelated possibilities and come up with an answer. In other words, perhaps we need to teach AI how to solve rational problems with an irrational mind.
When I was a newly minted Lead Engineer, I was given the job of managing Maintenance On the Line, aka MOL, for my first assignment. Every engineer's nightmare. Except for me, it wasn't. I loved working with the customers and understanding how my product interacted within a system. More than that, I found the process of defect management and repair enlightening. A few months on the job and I'd learned more about software design than all four years in college combined.
Time after time, the software would glitch in the field due to a failure on the designer's part to consider how it might behave in a real-world environment. Over the course of managing the MOL team for my product, I met a lot of software engineers. The ones who still stand out in my mind are those who understood the entire system--how every "box" worked within it, what happened if part of the system went down, and how a human would use the system on a daily basis.
Like Mace Windu in "Star Wars," they understood the system's "shatterpoints"--points where key events came together that just might destroy the way the entire system functioned. There's no way to predict every possible failure and use case, but the engineers and programmers who understood how humans interacted with their products wrote the most resilient software. Even more important, when an issue came up in any byte of code, these same engineers could fix it immediately--watching them debug was like attending a Vegas show: flashy, exciting and entertaining. I'm not kidding, it was mastery.
To be truly of service, the next generation of AI needs to be designed and coded by those who see the "shatterpoints" of human interaction with technology. For AI to enhance our lives, it can't be designed by engineers who simply follow instructions and code to the requirements without considering the various ways humans might use it. Imagine an implant monitoring nanobots in your bloodstream stuck in some glitch because you had Indian food for the first time and it didn't recognize the effects of curry on your glucose levels--because handling that wasn't in the requirements!
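To make that curry scenario concrete, here's a hypothetical sketch--every name and threshold is invented, not taken from any real medical device--of the difference between code that acts blindly on unanticipated input and code that degrades gracefully:

```python
# Hypothetical glucose-reading monitor that flags unanticipated input
# for human review instead of getting stuck or acting on garbage.
# All thresholds and names here are invented for illustration.

SAFE_RANGE = (70, 180)  # assumed "normal" glucose band (mg/dL) for this sketch

def classify_reading(mg_dl):
    """Classify a glucose reading, flagging anything unanticipated."""
    if not isinstance(mg_dl, (int, float)) or not (20 <= mg_dl <= 600):
        # A value outside any physiological range suggests a sensor fault,
        # not a medical event: flag it for review rather than acting on it.
        return "flag_for_review"
    low, high = SAFE_RANGE
    if mg_dl < low:
        return "alert_low"
    if mg_dl > high:
        return "alert_high"
    return "ok"

print(classify_reading(95))   # prints "ok"
print(classify_reading(240))  # prints "alert_high" -- curry night, perhaps
print(classify_reading(-3))   # prints "flag_for_review": impossible reading
```

The last branch is the one requirements documents tend to forget: the engineer who has imagined a real human eating real food builds in a safe path for the input nobody specified.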
This isn't work for the faint of heart, for it requires an interest in humanity as well as software. When people and technology meet, the magic begins.
Computers can have common sense, if their creators have common sense. AI can serve humanity, if software designers and architects also serve humanity. It's the duty of the mothers and fathers of AI to embrace the world wide webs of both human nature and technology, in all of their nuances and beauty. Common sense isn't simple logic; it's understanding that when technology and life meet, there are complex consequences.
“In your case, most likely, a small app would have been downloaded to your CPU to erase your previous few days of memory. You see, eHuman software is really very easy to manipulate.”
~Alrisha, Lead Hacktivist for the Resistance
Artificial Intelligence. It's everywhere these days, especially in the theaters. Last month saw the release of "Her," in which the lead character falls in love with an operating system. And a trailer for Johnny Depp's new movie, "Transcendence," is now available on YouTube. Depp's portrayal of a genius espousing the future of a mind greater than all the minds that have ever been on Earth is chilling.
Can this be? Can we create Artificial Intelligence that is smarter than we are? I don't mean faster, or with a better memory. Can AI have common sense? Can it foresee complex patterns? Can AI be programmed to have our ability to sense what's wrong and make choices based on instinct and past experience? What is intelligence without consciousness? Is it possible for us to create an AI that is truly evolving, truly learning and truly greater than all of us combined?
In my novel, eHuman Dawn, the AI is Neuro, a complex operating system that organizes and guides eHuman life. The eHumans are themselves a blend of AI and their own consciousness. The ideal Transhuman solution. But is such a solution possible? Or would this technology be the end of humanity?
Two readers sent me articles this week, both discussing this theme of AI and the scientific mind. Two things stood out for me: First, our computers are only as intelligent as the engineers who program them. And second, solving the complex issues of life requires more thought than passing the Turing Test or executing a mere Google Search.
Let's begin with the first idea--just how much better can our AI be than ourselves? In "The Closing of the Scientific Mind," David Gelernter voices his thoughts about the work of leading singularity scientist Ray Kurzweil (for more on Kurzweil's mission, see my blog post "Can Google Stop Death?"):
"Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be."
True confession: While writing the eHuman trilogy this exact thought has crossed my mind over and over again. Are Adam Winter and The Dawn of eHumanity still human? I want them to be, but what exactly are they? The eHuman is a product of the best of science, blended with human needs and desires. Yet those needs and desires are completely under the control of the scientists who created them. Is that a real human existence?
Remember: software is only as excellent as the people who make it. Honestly. So if we want to know what AI will look like in the future, let's consider the way science has been used to meet our human needs for the past forty years.
Do we trust the scientific mind that will cut down rain forests at unsustainable rates in order to raise cattle for our fast food chains? How about the scientists in Big Agriculture, whose minds can only create products that kill insects and weeds--forgetting about, and thus possibly destroying, the very pollinators our crops depend on by inadvertently driving the honey bee toward global colony collapse? Do we trust a scientific mind that can't see and understand the web of life around us to create the web of artificial intelligence that will guide us into the future?
Many of our stories are prophetic. They tell of a future time when our AI takes over and destroys us. Why would that be? Why would we create AI that hates us and competes against us rather than cares for us and meets our needs? Why wouldn't our AI serve us and improve our lives and make things cleaner, more efficient and better for everyone? Why must our machines eventually kill us?
Alas, the answer might just lie within our own hearts, within our own intelligence, and the way we educate our children. Remember, software is only as good as the mind that creates it. If the current scientific mindset that has controlled technology for the past century is any indication, we're in trouble. Gelernter puts it this way, "Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo." AI may kill us, but only because it will mimic what its creators believe.
Artificial Intelligence can't save us from ourselves; it can only become what we are.
Therefore if we want AI with common sense, we must hire software engineers with the most common sense. If we want AI that will lead us into a prosperous future, we need scientists to care about humanity, and the planet we live on, as a whole. If we want AI that will inspire us and grow with us, we need the business community to stop caring only about profits and instead care about life in all of its complexity.
The best of humanity must take part in this quest. Not just the smartest.
Imagine an Earth where machines and people live in harmony, rather than in competition. Is that possible? Perhaps we first need to learn to live in harmony with ourselves, each other, and the world we live in. Then we can create the machines.
To be continued...