The US military are developing a robot that will "learn" when on the battlefield.
It will be designed along the lines of the Mars explorer robots, to scout over long distances.
To be "autonomous" and stay out for long periods of time, the robot will be "taught" to find its own "food", usually organic matter, which it will feed into a biomass combustion chamber. It will learn to forage, even going as far as seeking out dustbins, rubbish tips and the like.
One report even mentions the possibility of "sensing, finding and then using dead animals" as fuel... and this is where it becomes a bit of a worry. How will it differentiate between animals and humans, and what happens if its need for fuel becomes critical?
If they can develop a robot that learns how to "survive" by using a "reactive, deliberate and creative learning intelligence", for example learning that a dustbin lorry is a source of organic fuel, then where will its "intelligence" draw the line?
If a computer is given a mission, it will always try to accomplish it, usually within set parameters or limitations. If you arm such a machine, give it the ability to "feed" itself, and then set it a mission, how do you stop it?
One computer model suggests that once you create a true "learning" programme and couple it to a powerful enough computer, intelligence would develop so quickly that it would overtake the entire development of human civilisation in less than a month.
OK, it's not a walking, talking cyborg from the future, but a six-wheeled, self-feeding, chainsaw-carrying (for cutting vegetation to feed on) and gun-wielding armoured thinking machine that has been "taught" to survive, and learns as it wanders around, is a bit of a worry!!
Is this a step too far?