Boston Dynamics Releases Asimov Decision API
These days a well-known provider of fodder for apocalyptic robotic uprising scenarios, Boston Dynamics first made waves across the Internet last decade when it introduced BigDog, the cutting edge of the Freaky Headless Nightmares industry.
As a contractor for the US military, this outfit is pushing the envelope of human folly and self-destruction with striking speed. Boston Dynamics is now owned by the Japanese firm SoftBank Group Corp., but for a few years, it fell under the auspices of Google’s experimental R&D arm Google X.
Inspired by their Google cousins’ prolific work in the software world, Boston Dynamics’ president Marc Raibert commissioned a special team in 2015 to focus on mitigating the risks of rapid robotics advancement.
The Ethical Guards Against Destructive Sentience team, or EGADS, was charged with anticipating and preventing the myriad sci-fi horrors we’re so fond of dwelling on in popular entertainment. According to team lead Kirk Carlsen, early project ideas were fairly fanciful and even humorous, with many references to films such as I, Robot, Terminator, and 2001: A Space Odyssey serving as problem and solution proposal titles, acronyms, and procedural manual cameos.
“With such serious subject matter, we had to inject levity to keep spirits high and everyone motivated,” Carlsen said in an interview with Time. “Giving creative minds substantial freedom and a relaxed atmosphere is vital, as we’ve seen time and time again over the 20-odd years we’ve been a company. Fearful despair isn’t typically what we aim for. [Laughs]”
EGADS has heretofore operated under wraps, away from the public eye. “[Boston Dynamics gets] a lot of uninvited ‘fan mail’, which tends to be surprisingly angry. Our funding sources are pretty stable, so we have little reason to broadcast our work to the civilian market,” Carlsen explained.
That veil of secrecy was pulled aside yesterday when Carlsen and his team announced their baby: the free and open-source Asimov API. A “decision engine” tuned to Big Data and humdrum housework decisions alike, the Asimov API (API stands for Application Programming Interface) aims to be the world’s most robust human-safety check for autonomous software. The idea is that robots, virtual assistants, automated messaging systems, results-curating search engines, and other complex machinery would check in with the Asimov API, submitting a set of contextual data and a proposed action. If everything checks out, a green light is sent back and life continues as normal. Otherwise, the device or program is told to reevaluate or shut down, depending on the implementation.
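The check-in loop described above might look something like the following sketch. To be clear, this is pure illustration: the names (`Proposal`, `Verdict`, `evaluate`), the payload fields, and the placeholder policy are all invented here, since the article does not document the API’s actual endpoints or schemas.

```python
# Hypothetical sketch of a device consulting a decision engine like the
# Asimov API. All identifiers and fields are invented for illustration.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    PROCEED = "proceed"        # green light: carry on as normal
    REEVALUATE = "reevaluate"  # propose a different action
    SHUTDOWN = "shutdown"      # halt, depending on implementation


@dataclass
class Proposal:
    context: dict  # contextual data the device can observe
    action: str    # the action the device intends to take


def evaluate(proposal: Proposal) -> Verdict:
    """Stand-in for the remote call to the decision engine."""
    # Placeholder policy: anything flagged as risking harm to a human
    # is bounced back for reevaluation.
    if proposal.context.get("risk_to_humans", 0) > 0:
        return Verdict.REEVALUATE
    return Verdict.PROCEED


verdict = evaluate(Proposal(context={"risk_to_humans": 0},
                            action="vacuum the hallway"))
print(verdict)  # Verdict.PROCEED
```

In this model the device, not the engine, decides what “reevaluate” means in practice, matching the article’s note that the response to a red light depends on the implementation.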
The API’s namesake, Isaac Asimov, is famous for creating the Three Laws of Robotics, first laid out in his 1942 short story “Runaround”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
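The priority ordering baked into the laws above can be sketched as a chain of checks, each subordinate to the ones before it. This is a toy model only: the field names (`harms_human`, `is_inaction`, `human_at_risk`, `ordered_action`) are invented here, and nothing in it reflects how the Asimov API actually represents actions.

```python
# Hedged sketch: the Three Laws as an ordered permission check over a toy
# action/situation model. All field names are illustrative assumptions.
from typing import Optional


def first_violated_law(action: dict, situation: dict) -> Optional[str]:
    """Return the highest-priority law the action violates, or None."""
    # First Law: no injuring a human, and no harmful inaction.
    if action.get("harms_human") or (
        action.get("is_inaction") and situation.get("human_at_risk")
    ):
        return "First Law"
    # Second Law: obey human orders. (Disobedience is only excused when the
    # order itself would violate the First Law -- not modeled in this toy.)
    ordered = situation.get("ordered_action")
    if ordered is not None and action.get("name") != ordered:
        return "Second Law"
    # Third Law: self-preservation is subordinate to the first two, so an
    # action that merely endangers the robot passes this check.
    return None


print(first_violated_law({"name": "idle", "is_inaction": True},
                         {"human_at_risk": True}))  # First Law
```

Note how the First Law clause fires on *inaction* too, capturing the “through inaction” wording that makes the laws harder to encode than they first appear.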
“Obviously, the three laws are a bit simplistic,” Carlsen told us by email. “Evaluating what ‘harm’ is is an incredibly complex process, and every minute assumption is up for debate.”
The Asimov API, alarmingly for some, uses machine learning to build a body of precedent decisions it can reference when tackling the enormous complexity of ethical decision-making.
“People get scared when they hear ‘machine learning’, but the algorithms we use today that get this label are still very much mechanical, deterministic, amoral processes. We catalogued several hundred hypothetical scenarios and what various philosophical schools of thought would prescribe in response.” (Some of this data came from Google’s own digital archive of human thought, Google Books.) “We had the computer churn through all of it to generate rules that our engineers meticulously review again and again.”
Carlsen also assures that the open-source engine, released under the permissive MIT license, will always remain public, and envisions that one day disputes over its outcomes might be resolved in legislation or the courts, “after the finest democratic traditions.”
Boston Dynamics is currently recruiting teams from around the globe to start creating locale-specific variants that conform the API’s outcomes to local cultural values. The European Parliament has already established a committee to review and police the influence of the Asimov API and any similar offerings that may arise in response.
“We’re hoping this will start off debates about artificial intelligence and ethics on the right foot while the field is still young and the devices that use it are limited,” says Carlsen. “We’ve got public awareness and input; we’ve got philosophy departments the world over scrutinizing it; we’ve got decentralized authority… we’re doing the best we can to take fear out of burgeoning technologies.”
Elon Musk, the eccentric serial entrepreneur, lauded the technology on Twitter.
Some voices, such as the American Humanist Association and the Vatican, have expressed uneasiness with this foray into codifying morality. Carlsen is adamant that this technology will only improve with time, and that any complex system can be reduced into manageable parts, given enough resources.
“We’re talking about toasters and Roombas,” he said. “It’s good that we’re getting started now.”
Perhaps a poor choice of words, sir.
Interested tinkerers can sign up for beta access to the API at the official website.