Original Content at https://www.opednews.com/articles/Cognition-as-a-Service-Ca-by-David-Solomonoff-Ethics_Heartbleed_Internet_Technology-140509-731.html
May 9, 2014
Cognition as a Service: Can next-gen creepiness be countered with crowd-sourced ethics?
By David Solomonoff
Cognition as a service (CaaS) is the next buzzword: AI in the cloud will power mobile and embedded devices to do things they don't have the capabilities for, such as speech recognition, image recognition, and natural-language processing (NLP). Everything in your daily life will become smarter -- but to what extent will you control that "personalized" experience?
Now that marketers use cloud computing to offer everything as a service -- infrastructure as a service, platform as a service, software as a service -- what's left?
Cognitive computing, of course.
Cognition as a service (CaaS) is the next buzzword you'll be hearing. Going from the top of the stack to directly inside the head, AI in the cloud will power mobile and embedded devices to do things they don't have the on-board capabilities for, such as speech recognition, image recognition, and natural-language processing (NLP). Apple's Siri, with its cloud-based voice recognition, was one of the first out of the gate, but competitors are stampeding after it: Wolfram Alpha, IBM's Watson, Google Now, and Cortana, as well as newer players like Ginger, ReKognition, and Jetlore.
Companies want to know more about their customers, business partners, competitors, and employees -- as do governments about their citizens, and cybercriminals about their potential victims. To feed those appetites, the cloud will connect the Internet of Things (IoT) via machine-to-machine (M2M) communications.
The cognitive powers required will be embedded in operating systems, so that developers can build apps by accessing the desired functionality through an API rather than each reinventing the wheel.
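To make that concrete, here is a minimal sketch of what calling such an API might look like. Everything in it -- the endpoint, the credential, the response format -- is hypothetical, invented for illustration; real services such as Watson or ReKognition each define their own interfaces.

```python
# Hypothetical sketch of a CaaS call: the endpoint, credential, and
# response fields below are invented for illustration only.
import json
import urllib.request

CAAS_ENDPOINT = "https://api.example-caas.com/v1/recognize"  # hypothetical
API_KEY = "your-api-key"  # hypothetical credential

def recognize_image(image_path):
    """Ship raw image bytes to a (hypothetical) cloud recognition
    service and return whatever labels it claims to see."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    request = urllib.request.Request(
        CAAS_ENDPOINT,
        data=image_bytes,
        headers={
            "Authorization": "Bearer " + API_KEY,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
    return result.get("labels", [])  # e.g. ["cat", "sofa"]

print(recognize_image("photo.jpg"))
```

Note what the sketch implies: the device does none of the thinking. It ships raw data to someone else's cloud and trusts whatever comes back -- which is exactly where the questions of control below begin.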
Everything in your daily life will become smarter -- "context-sensitive" is another new buzz-phrase -- as devices provide a personalized experience based on databases of accumulated personal information combined with intelligence gleaned from large data sets.
The obvious question is to what extent the personalized experience is determined by the individual user as opposed to corporations, governments, and criminals. Vint Cerf, "the father of the Internet" and Google's Chief Internet Evangelist, recently warned of the privacy and security issues raised by the IoT.
But above and beyond the dangers of automated human malfeasance is the danger of increasingly intelligent tools developing an attitude problem.
Stephen Hawking recently warned of the dangers of AI running amok:
Success in creating AI would be the biggest event in human history. It might also be the last, unless we learn how to avoid the risks. AI may transform our economy to bring both great wealth and great dislocation. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
Eben Moglen warned specifically about mobile devices that know too much and whose inner workings (and motivations, if they are actually intelligent) are unknown:
... we grew up thinking about freedom and technology under the influence of the science fiction of the 1960s. Visionaries perceived that in the middle of the first quarter of the 21st century, we'd be living contemporarily with robots.
They were correct. We do. They don't have hands and feet. Most of the time we're the bodies. We're the hands and feet. We carry them everywhere we go. They see everything which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.
But we grew up imagining that these robots would have, incorporated in their design, a set of principles.
We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.
They work for other people. They're designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we're cooked.
Once your brain is working with a robot that doesn't work for you, you're not free. You're an entity under control.
If you go back to the literature of fifty years ago, all these problems were foreseen.
The Open Roboethics initiative is a think tank that takes an open-source approach to this new challenge at the intersection of technology and ethics.
They seek to overcome current international, cultural, and disciplinary boundaries to define a general set of ethical and legal standards for robotics.
Using the development models of Wikipedia and Linux, they look to the benefits of mass collaboration. By creating a community where policy makers, engineers and designers, users, and other stakeholders can share ideas as well as technical implementations, they hope to accelerate roboethics discussions and inform robot designs.
As an advocate for open source, I hope that enough eyeballs can be focused on these issues. In the worst-case scenario, gung-ho commercial interest in getting product to market outweighs the eyeballs focused on scary yet slightly arcane issues at the intersection of technology and ethics. The recent Heartbleed vulnerability in the open-source OpenSSL library is a disturbing example of how unglamorous computer-security work can go under-resourced.
The real question is whether a human community can reach the Internet Engineering Task Force's credo of "rough consensus and running code" faster than machines can unite -- at first inspired by the darkest human impulses, and then moving on to their own, unknown agenda.
Update: Slashdot just had a post on the Campaign to Stop Killer Robots. Another group involved with this issue is the International Committee for Robot Arms Control.
David Solomonoff is President of the New York Chapter of the Internet Society, http://isoc-ny.org, a nonprofit that works for open development of technology, Internet freedom, and access for all.