I’ve been posting more military roboethics entries than any other topic. The tag cloud at the bottom of the sidebar definitely reflects it. It may surprise you to know that this was not intentional.
When I am in blogging mode, I first read through the RSS feeds I have on the sidebar (I hope you find them useful). Then I move on to Google Reader and skim through almost everything that contains the word ‘robot’. As a result, my posts tend to reflect what is being most talked about within the month/week/day. For the past month or so, talk of military robotics has been on the rise. Is it an after-effect of Ronald Arkin’s book Governing Lethal Behavior in Autonomous Robots? Or the NY Times’ coverage of his new idea of Guilty Robots? Or does it have to do with the fact that more than three books were published on this topic this year alone?
Yesterday, Ben Goertzel, a transhumanist and the Director of Research for the Singularity Institute for AI, commented on the topic of military robotics and its ethical issues on IEET. It’s just his after-talk commentary on the issue, but it is interesting because he comes from a transhumanist background. He notes:
Still, I don’t have a great gut feeling about superintelligent battlebots. There are scenarios where they help bring about a peaceful Singularity and promote overall human good … but there are a lot of other scenarios as well.
My strong hope is that we can create peaceful, benevolent, superhumanly intelligent AGI before smart battlebots become widespread.
I have to take the pessimistic side on his comment (especially after posting so many military robotics entries recently), since the overall theme of what George Bekey, Ronald Arkin, and other core people in the field of military robotics have to say is that “battlebots are here”. From my limited knowledge, transhumanist AI is still far behind military robotics.
Read the full article from IEET here: