Quoth Bruce Sterling in a piece in Wired:
In his 1950 classic, I, Robot, Isaac Asimov first conceived of machines as moral actors. His robots enjoy nothing better than to sit and analyze the ethical implications of their actions. Qrio, on the other hand, knows nothing, cares nothing, and reasons not one whit. Improperly programmed, it could shoot handguns, set fire to buildings, and even slit your throat as you sleep before capering into a crowded mall to detonate itself while screaming political slogans. The upshot is that you're unlikely to be able to buy one anytime soon.
I'm not saying he's strictly wrong -- sure, all these things are possible. (Except for the "in your sleep" part; I don't think QRIO is that stealthy, unless it's slipping mickeys into your drinks too, and if it is, you've got different problems.) But none of these things is unique to robots. Animals can be trained and have no more ethical sense. Heck, studies hint that teens might not be so good at empathizing with others or understanding the consequences of their actions (here's one link, though not the one I was looking for). Sterling is just being pointlessly inflammatory here, and it annoys me. There are better targets for the "let's think about the consequences of our actions!" cannon than this.