Will AI in killer military robots lead to real Terminators?
There is rising unease about the possible unintended consequences of high-tech gear being built in the name of military might.
Professor Noel Sharkey, a British scholar of artificial intelligence and robotics who has warned many times of future robot-related problems, told a meeting of scientists in London this week that the technology to create Terminator-like machines already exists and that an international dialogue on such military applications is needed.
"The problem is that this is all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction," Sharkey said. "People talk about programming the 'laws of war' into a computer to give robots a conscience, so that if the target is a civilian you don't shoot. But for a robot to recognize a civilian you need an exact specification, and one of the problems is there's no specific definition of a civilian. Soldiers have to rely on common sense."
Earlier this year, Sharkey said of these advanced robots: "Killing has never been so easy."
The US isn't alone in its quest for ever more intelligent robots, either. Israel, China and Turkey are also among the leading developers of such AI-based robots. A BBC report noted the current deployment of Israel's Harpy, a fully autonomous unmanned aircraft that dive-bombs radar systems with no human intervention.
Sharkey believes that the advantages for government are obvious: lower costs, fewer personnel, fewer soldiers killed in battle. "But we have to be careful and need strict rules," he said. "Otherwise, robots will one day be deciding who should be killed where and when."
And that's exactly how the Air Force describes some future unmanned aircraft.
By 2047, the Air Force says, unmanned aircraft with blazing artificial intelligence systems could fly over a target and determine whether or not to unleash lethal weapons - without human intervention. Such intelligent unmanned aircraft were described in the Air Force's wide-ranging "Unmanned Aircraft Systems Flight Plan 2009-2047" report, released in July, which outlines the service's future use of drones.
By 2047, technology onboard an unmanned aircraft will be able to observe, evaluate and act on a situation in microseconds or nanoseconds. According to the Air Force: "Increasingly humans will no longer be 'in the loop' but rather 'on the loop' - monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input."
The Air Force was careful to point out that there would always be safeguards: assuming the decision is reached to allow some degree of aircraft autonomy, commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today.
Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions: whether it is appropriate for machines to have this ability at all, under what circumstances it should be employed, where responsibility for mistakes lies, and what limitations should be placed upon the autonomy of such systems, the Air Force stated.
Sharkey, who is known for his expertise in artificial intelligence as well as his roles as chief judge on the TV series Robot Wars and as onscreen expert for the BBC's TechnoGames, said: "The trouble is that we can't really put the genie back in the bottle. Once the new weapons are out there, they will be fairly easy to copy. How long is it going to be before the terrorists get in on the act?"