http://mashable.com/2013/08/03/artifici ...

I, for one, do not welcome our new robot overlords.
Let me elaborate.
Writing about Artificial Intelligence is a challenge. By and large, there are two directions to take when discussing the subject: focus on the truly remarkable achievements of the technology, or dwell on the dangers of what could happen if machines reach the level of Sentient AI, in which self-aware machines attain human-level intelligence.
This dichotomy irritates me. I don’t want to have to choose sides. As a technologist, I embrace the positive aspects of AI, when it helps advance medical or other technologies. As an individual, I reserve the right to be scared poop-less that by 2023 we might achieve AGI (Artificial General Intelligence) or Strong AI — machines that can successfully perform any intellectual task a person can.
I think the 2023 timeline is a bit optimistic (or pessimistic for the fearful). We'll (they'll) be further along and far more capable by then, certainly. It won't really start taking off until such time as the machines are smart enough to help with the process. It will be a singularity only from the perspective of hindsight.
In a report published by Human Rights Watch and Harvard Law School’s International Human Rights Clinic, "Losing Humanity: The Case Against Killer Robots", the authors write: “In its Unmanned Systems Integrated Roadmap FY2011-2036, the U.S. Department of Defense wrote that it ‘envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.’”
That is the real threat.
Their POV is: As we take humans out of the warfare equation, humanity is lost.
Personally, I would much rather have my robot take point. But to their point, if they have one:
http://www.hrw.org/reports/2012/11/19/losing-humanity-0

This 50-page report outlines concerns about these fully autonomous weapons, which would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians. In addition, the obstacles to holding anyone accountable for harm caused by the weapons would weaken the law’s power to deter future violations.
Well, not if those qualities are programmed in. Hell, they'd be far less likely to lack such qualities than some humans I know.
Related: the OpenCog roadmap. http://opencog.org/roadmap/