The way I see it, the theory is practical, and it's more a matter of when someone does it than if. Like most technological innovations, we want to be the first to get it and stay ahead of everyone else, especially when it comes to technology with far-reaching security, military, and intelligence applications. I'd imagine most major governments have teams working on this kind of thing just like we do, although the US is far more into public disclosure than most governments, despite claims to the contrary (and domestic criticism).
At the end of the day, the NSA doesn't seem to be hiding this; I've been aware of this kind of thing for a long time now (and someone else mentioned reading articles on it as well). In general, certain parts of the government tend to be a generation or so ahead in terms of technology, and honestly I'd imagine the NSA is ahead of the civilian sector. If they weren't, you'd start seeing a lot of that research seized, or the people involved brought into the government, given the stakes in something like this.
I'll also say I'm not totally paranoid about a mechanical takeover if we develop AIs, robotics, etc. I believe people and self-aware machines can coexist just fine. You know, like Asimov's actual writings, as opposed to that crappy "I, Robot" movie that somehow managed to get made despite carrying exactly the opposite message. I'm a fan of more optimistic science fiction, with humans and artificial life forms working together on space exploration and the like. What's more, even if mechanical life DID become arguably superior, that's no reason for it to immediately turn genocidal.
The odd thing about most "machine war" fiction is that a lot of it starts when a machine becomes self-aware and people immediately try to kill it as a result, ultimately creating the very problem they feared in the first place.