Fostering a culture of shared values: designing with ethics in mind


From time to time, people ask me what I think is the best way to ensure that all designers consider roboethics issues in designing their next awesome robotic product.

For example, does Google have a systematic process in place to consider the implications of its self-driving vehicle and related design decisions before the designers start implementing different features in it? Is it even possible to realize a future where all manufactured robotic products meet a kind of ethical standard on top of obvious and existing safety standards?

With open source software and hardware accelerating the landscape of engineering and design, and with ever-younger generations of smartphone app developers making a boom of robot app developers an obvious next trend, it may seem impossible to ensure that everyone designs robots with ethics in mind.

But I’m an optimist about this, and believe that there are good ways to address this problem. One way is to discuss the issues of concern as a community, so that the community can agree upon a set of values that it chooses to share and foster.

For example, there has been much discussion in the press lately about the dangers of AI. Some prominent figures, including Elon Musk and Stephen Hawking, have openly voiced their concerns, while others, such as Rodney Brooks and Alan Winfield, have presented counter-arguments explaining why unnecessarily worrying about the dangers isn’t helpful. There’s a 10-minute BBC debate on this topic if you want a quick overview. Although there are points of disagreement on this issue, a point of view shared by both sides seems to be that AI can and should be developed and used to make a positive impact on our society.


A few days ago, we saw that some of the above-mentioned and other individuals have written and signed an open letter hosted by the Future of Life Institute, which reads:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

With a research priorities document (which lists law and ethics research as a priority by the way) included as part of the letter, the authors end the letter by saying:

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Similarly, there have been efforts by members of the roboethics community to suggest a model for fostering a culture of shared values. For example, Laurel D. Riek and Don Howard presented their paper “A Code of Ethics for the Human-Robot Interaction Profession” at the We Robot 2014 conference last year, which included a draft code of ethics for human-robot interaction (HRI) practitioners. I had the pleasure of talking with Laurel D. Riek about it in more depth for the Robots Podcast (take a listen below if you’ve missed it).

Another example is the International Committee for Robot Arms Control (ICRAC) and its Scientists’ Call To Ban Autonomous Lethal Robots, which states:

As Computer Scientists, Engineers, Artificial Intelligence experts, Roboticists and professionals from related disciplines, we call for a ban on the development and deployment of weapon systems in which the decision to apply violent force is made autonomously.

Given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.

This movement by ICRAC, along with the work of member organizations of the Campaign to Stop Killer Robots, has in part led the robotics company Clearpath Robotics to release an open letter in support of the campaign.

Generating a code of ethics and explicitly stating shared values is something many professional organizations have embraced under the familiar umbrella term of professional ethics. However, as Laurel D. Riek outlines in the podcast interview, there are many interesting ethical challenges at hand in the field of HRI that are not covered under the existing codes of ethics of professional organizations. With a variety of robotic products on the brink of becoming available outside research labs and manufacturing facilities, it is important to have discussions about ethical issues unique to HRI and to build consensus on the shared values of the HRI community.

Hence, it’s perhaps timely for more HRI practitioners to engage in such discussions. One such venue is an upcoming workshop that Laurel D. Riek, Woodrow Hartzog, Don Howard, Ryan Calo, and I are organizing as part of the upcoming HRI’15 conference (March 2nd) in Portland, Oregon, called The Emerging Policy and Ethics of Human Robot Interaction.

Will such initiatives fully address the question we started with (i.e., “Can we ensure that all designers consider roboethics issues in designing their next awesome robotic product?”)? Probably not. But such initiatives will provide a foundation, and perhaps momentum, for other initiatives to build upon (e.g., for regulatory bodies to form), so that we can tackle the problem from many different angles.
