Monday, June 24, 2013

Robot drones: growing more complex, deadly

It is, as they say, only a matter of time.

Drones already play a significant role in the forward operations of the U.S. military.  They spare our own troops from engaging in dangerous situations.  There are, however, times when a human unit on the ground is more desirable, such as in close-quarters combat.  Human for now, anyway.

This CNN report shows human-like robots that walk, climb stairs, do push-ups, and engage in other movements that people do.  Now, imagine combining this with other projects at Google and DARPA where neural nets are being designed to give robots the ability to think and make decisions for themselves.  The report makes the obvious comparisons to Terminators and says that the debut of these robots on the battlefield is inevitable.

Combat robots that resemble "killer cyborgs" are so...gaudy, don't you think?  Is there not a more elegant solution?  Turns out there is.  How about a swarm of mini drones attacking at once?  That's what the Air Force describes in this article from The Atlantic.

These are called "Micro Aerial Vehicles" or MAVs.  I prefer the term "bugbots" as it sounds more like something from Transformers.  The idea behind these bugbots is that they would be dropped in swarms from aerial drones and could then navigate undetected through confined urban spaces such as alleyways.  They could perch atop electrical wires or on window ledges and observe a target for days...especially if the bugbot were solar powered.  You could even equip a bugbot to attack a single individual with a shot to the head before the target could know what hit them, or it could deliver explosive ordnance.

A more chilling option would be, as I mentioned, having numerous bugbots swarm a target at once and overwhelm him/her/it.

With all of the controversy over unintended deaths due to drone strikes, these forward-thinking developments have not escaped the notice of the United Nations.  Last May, a meeting of the UN Human Rights Council convened to discuss the ethics of "lethal autonomous robots."  Most of the concern seems to be over whether or not a human is ultimately at the controls when the order to kill is given.

It's more than that to me, however.  If we have robots that think, it is reasonable to assume that their thinking will evolve over time and grow more complex as more information is acquired.  I'm not predicting a robot uprising or any such thing, but I can't scoff at a time when robots might argue for their own rights.

If they command a swarm of bugbots, are you going to say no to them?  Are you sure those birds outside your window are all really birds?

Follow me on Twitter: @Jntweets
