I'm going to address the article's author and its readers separately in this post.
To the readers:
As a student of AI and a frequent receiver of jokes akin to "Here goes the future developer of Skynet," allow me to say this to all those worrying (or cracking jokes) about a robot uprising...
You have no idea how far away we are from anything remotely close to an artificial agent[footnote]An agent, as defined in robotics, is anything capable of observing and manipulating its surrounding environment, no matter how limited. Under this definition, anything from living beings (humans down to most single-celled organisms) to electronic/virtual devices (a robot, a video game NPC, or even a self-adjusting furnace with a built-in thermometer) counts as an agent.[/footnote] capable of self-aware thought. Like, you wouldn't believe how far away we are.
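Just to make that footnote's definition concrete, here's a minimal sketch of why even a furnace qualifies. The class and method names are my own invention for illustration, not from any robotics library:

```python
# An "agent" in the robotics sense: anything that observes its
# environment and manipulates it, however trivially. Even a
# self-adjusting furnace with a thermometer fits the definition.

class Thermostat:
    def __init__(self, target_temp):
        self.target_temp = target_temp

    def observe(self, room_temp):
        # Observation: read the built-in thermometer.
        return room_temp

    def act(self, room_temp):
        # Manipulation: switch the furnace on or off.
        if self.observe(room_temp) < self.target_temp:
            return "heat on"
        return "heat off"

agent = Thermostat(target_temp=21.0)
print(agent.act(18.5))  # heat on
print(agent.act(23.0))  # heat off
```

That's the whole bar for "agent" — no intelligence required, self-aware or otherwise.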
Currently, all forms of AI are nothing more than sophisticated search algorithms designed to solve specific tasks based on given input. Sometimes the resulting agent can display lifelike qualities, but in those cases it's just that... lifelike. No independent thought process took place. It all comes down to how well the designer implemented the agent's ability to find the optimal solution, as defined by the designer.
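Here's a toy illustration of what I mean (all names hypothetical): an agent chasing food looks purposeful, but under the hood it's just a greedy search over candidate moves, scored by a cost function the designer wrote.

```python
# A "lifelike" agent that is really just greedy search: at each step it
# picks whichever move minimizes a designer-defined cost (distance to
# the food). No thought happens; it's argmin over a handwritten score.

def seek(position, food, steps=10):
    path = [position]
    for _ in range(steps):
        # Candidate actions: step left, stay put, step right.
        candidates = [position - 1, position, position + 1]
        # The entire "decision": minimize the designer's cost function.
        position = min(candidates, key=lambda p: abs(p - food))
        path.append(position)
        if position == food:
            break
    return path

print(seek(position=0, food=4))  # [0, 1, 2, 3, 4]
```

Watched from the outside, it "wants" the food. Change the cost function and it "wants" something else. The wanting was never in the agent; it was in the designer's objective.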
This is not me claiming that artificial life can never exist, nor that we should never ponder the philosophical implications of dealing with our own homebrewed version of the Geth. But given our current understanding of how a real brain makes independent decisions, compared with our current methods of implementing artificial "intelligence," there is zero chance of us having to deal with such worries in the foreseeable future.
In reality, the biggest crime here is that the term "artificial intelligence" is used for this at all, since we really have no idea how natural intelligence works, let alone how to make an artificial version of it. Honestly, the best way to calm humanity's fear of robotics would be a mandatory introductory course in AI development. I can think of no better way to make people stop with the fear-mongering and go "...oh... Is that it?"
Now, a little note to the author:
Look, I understand that when it comes to writing articles like these, it's important to catch readers' attention, but the constant allusions to how screwed we are, along with the multiple tie-ins to robot apocalypse movies, are a bit much.
I also take issue with your decision to frame this around Asimov's laws. The laws work fine in a science fiction setting (to a point; even Asimov himself didn't shy away from writing stories where the laws came into conflict), but they are next to useless and impossible to implement in modern-day robotics.
The purpose of this assembly is to define accountability for human rights violations when designing autonomous weapons platforms. If you design a robot that's meant to shoot enemy combatants and it drives into the nearest village and massacres all of its inhabitants, you can't just shrug your shoulders and claim faulty programming. For as long as you can't guarantee that your agents are incapable of committing human rights violations, it is the wish of the Human Rights Council that no LARs be created.
On a final note to everyone: stop worrying so senselessly about your living room door being kicked down by an angry killbot, since it obviously distracts you from the real danger of robotics.
Mainly sex robots.