Two new organizations on responsible AI and robotics. Do we need them?

Last week was a unique one for the field of roboethics: within days, two different roboethics-related organizations announced their existence to the world.

Thursday, December 10th, saw the launch of the Foundation for Responsible Robotics (FRR) in London. It was followed by the launch of OpenAI on Friday, December 11th.

This back-to-back announcement of roboethics-related organizations underlines both the need for more work in this area and the commitment of many individuals to supporting it. Yet, judging from the way they were announced, the two organizations are quite different in nature. Hence, the work they produce is likely to differ as well.

FRR and OpenAI

To start, FRR has strong academic roots, with goals to be “proactive and assistive to the robotics industry in a way that allows for the ethical, legal, and societal issues to be incorporated into design, development, and policy.” Its initial support comes from the 3TU Center for Ethics and Technology, an academic consortium in the Netherlands, and its founders and most of its executive board consist of people in academia (an impressive list of notable academics in roboethics, mind you).

OpenAI is quite a bit different. As the name suggests, its focus is more on AI than on robotics per se, but we all know AI and robotics have quite a synergistic relationship.

A more important distinction is that while FRR will probably be building up a much-needed pool of discussion and coming up with interesting questions relevant to the technologies we are aware of today, OpenAI is likely to be the one building the technologies that raise even more questions (and perhaps also answer them technologically). OpenAI’s vision really seems to be about building the technology itself, given that all of the listed founding members are computer scientists who build stuff.

If you check out the websites and bios of the executive board members of FRR and those of OpenAI, you’ll see the contrast I’m drawing here.

Yet, the lingo that the two new organizations use speaks to the same idea: let’s work on getting technologies to do good, and let’s be responsible about the technologies we create.
It’s just that they are approaching it from two different directions. FRR’s approach is more “let’s get together to actively discuss roboethics issues so that we can help support those who develop policies or technologies”, while OpenAI’s is more “let’s build that AI with the mindset of prioritizing public good”.

Of course, there are other organizations that have been working in the roboethics domain. The Open Roboethics initiative, for example, has been around since 2012, and part of its work is on informing roboethics discussions from the ground up: stakeholder input is used directly to build robot decision-making algorithms, is voiced in policy-making processes, and is disseminated openly and freely back to the public.

It’s yet another approach, or another piece on a relatively blank puzzle board.

But why is there so much interest in this domain all of a sudden? What are the challenges in roboethics that perhaps nudged these groups into existence?

With the news about these organizations buzzing around, I got into an interesting discussion with contributors at Robohub (thanks Travis and John!) that prompted me to write up a blurb related to the above-mentioned questions.

The Challenge in Roboethics

In thinking through the idea of “responsible robotics”, it’s important to acknowledge that engineering is a profession that takes ethics seriously in general.

Almost all engineering professional organizations have codes of conduct that commit their members (the majority of engineers in the world, including roboticists) to making decisions for the ‘better’, at least in wording vague enough to be applicable to all engineers.

For example, the first item in IEEE’s code of ethics is:

“to accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.”

The same goes for many other organizations, such as the ACM (whose #1 principle calls for responsibility toward serving the public interest).

So what we, engineers, design, and the decisions we make, are covered by codes of ethics that oblige us to keep the welfare of the public in mind. In that sense, roboethics is neither unique nor excluded from what’s already covered.

However, adhering to the codes becomes more meaningful and useful if we have a better understanding of what the important adjectives used in them actually mean: ‘good’, ‘public interest’, ‘better’, and so on.

The challenge, not only for the roboethics community but for ethicists for as long as they’ve existed, has been agreeing on whether a new something is good, bad, beneficial, and so on. Clear, universally agreed-upon definitions are very hard to come by.

And in practice, the moral compass we use to measure what is ‘good’ or ‘right’ varies from one individual, group, community, or culture to another. So it would be hard to hand an engineer a generic principle (however sensible at first sight) of “support/build the good and reject the bad” and expect positive results.

Let me give you an example.

Let’s say there’s an agricultural robot that’s supposed to be in many ways better than older agricultural practices. But despite its superiority, the robot may actually be abandoned, and cause more problems, if it were given to an elderly user who doesn’t know what to do with it, nor wants to accept it, due perhaps to sentimental or cultural factors that are very important to him or her.

Who is to say what’s right / better / good?

A big challenge in robotics (though not one unique to it) is that, due to the fast pace at which the technologies are being developed, there’s a big question mark over the intended and unintended impacts those technologies will have. On top of that, it’s really hard to evaluate something as ultimately and definitively good, because there’s always that unforeseen something that creeps up on us as we adopt new technologies into our lives.

So, do we need these organizations?

It is, and will continue to be, up to individual engineers and designers to determine what ‘good’, ‘public interest’, and the like mean for the particular decisions they are about to make, to the best of their knowledge and abilities.

But given that robotics is accelerating at such a fast pace, and the compass that is supposed to point us toward good/right is getting harder to read, it is important that these individuals and organizations shed more light on the process: by pointing out the knowledge gaps that exist, and by helping fill the gaps essential for informed decision-making. Hopefully, with this added effort, we can see our compass needle(s) better.

All in all, most people will agree with me that we make better judgements when we are better informed with relevant facts. And relevant information in roboethics still remains scarce.

Unsurprisingly, much work remains to be done from many different angles, and by many different individuals, and it’s an exciting time to be working in this field.
