
Should the IET ask its members to pledge not to help build killer robots?

I read an interesting article in the online E&T (above) that reports on a pledge not to assist in the development of so-called killer robots. Should the IET take a stance, become the first PEI to endorse the pledge, and furthermore expect/encourage its members to sign up too?
  • Agree Alasdair, as that is where my opinion sits: someone, somewhere signs off and determines the strategy and policies for the device. A human makes the decision to arm, codes the target identification and acquisition, codes the action or inaction and their dependencies, decides on the impact level, and ultimately releases the device to execute its mission. It is therefore limited autonomy for the device. What surely must be prevented is any of those decisions, or even completely new ones, being made solely by the device's AI through self-learning and self-determination.
  • Mark,

    Again I have to differ from you a bit. Consider a hypothetical situation: an autonomous drone is set to target a specific car known to contain terrorists (the sort of thing currently done with remote operation). The drone sets off and locates its target, which has now parked next to a hospital...

    To my mind, a new decision made solely by the device's AI not to finish the attack would be one I would endorse, not prevent.

    However I do share your unease.

    Alasdair
  • Yes, but I would expect that decision, to abort or take another action, to fall within the policy, rules and dependencies the device operates according to, all of which were set in advance by the author, not the device (a rough sketch of such a pre-authored policy follows this post). Completely autonomous decision-making, with no human origination or intervention, is where my unease rests.
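
To make the distinction concrete, here is a minimal sketch, in Python, of what "decisions within a pre-authored policy" might look like. Everything in it (the names NO_STRIKE_ZONES, Target and should_abort, the confidence threshold) is a hypothetical illustration, not anything from the article; the point is only that every rule the device applies, including the abort in Alasdair's hospital scenario, was written and signed off by a human beforehand.

```python
# Minimal sketch of a human-authored engagement policy: every rule,
# threshold and abort condition is written and signed off by a person
# before deployment; the device only evaluates them. All names and
# numbers here are hypothetical illustrations.

from dataclasses import dataclass

# Authored offline by a human and signed off before the mission.
NO_STRIKE_ZONES = {"hospital", "school", "place_of_worship"}

@dataclass
class Target:
    identified: bool        # did onboard recognition match the briefed target?
    confidence: float       # recognition confidence, 0.0 to 1.0
    nearby_structures: set  # structure types observed around the target

def should_abort(target: Target) -> bool:
    """Return True if the pre-authored policy forbids engagement.

    The device decides nothing new here: it only applies rules a human
    wrote in advance, e.g. the 'parked next to a hospital' case.
    """
    if not target.identified or target.confidence < 0.95:
        return True
    if target.nearby_structures & NO_STRIKE_ZONES:
        return True  # Alasdair's scenario: abort rather than strike
    return False

# The drone that finds its target parked beside a hospital aborts, but
# because the author anticipated the case, not because the device
# originated a new decision.
print(should_abort(Target(True, 0.99, {"hospital", "car_park"})))  # True
```

The design choice that matters is that should_abort contains no learning and no self-determination: the device can only evaluate conditions its author anticipated, which is what Mark calls limited autonomy.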
  • Mark,

    In that case I think we are on the same page. AI and machine learning sometimes throw up strange answers. An example I can think of (which I think was reported by the IET in E&T) was bronchitis diagnosis, where the risk to the patient was being assessed by AI on a number of factors based on historical outcomes, and it concluded that asthma sufferers were at low risk of complications from bronchitis. This was apparently borne out by the data but skewed, because doctors automatically sent asthma sufferers with bronchitis straight to hospital regardless of severity, so they rarely developed any complications...

    Machine learning always needs to be assessed by someone with the expertise to pick up these anomalies and set things straight (a small simulation of the effect follows at the end of the thread), so completely autonomous decision-making is fraught with danger. Yes, nine times out of ten it will probably be completely right, but who takes responsibility for the tenth time?

    Alasdair
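
For anyone curious how data can "bear out" a conclusion that is nonetheless dangerously wrong, here is a minimal simulation of the selection effect Alasdair describes. All the numbers are invented for illustration and nothing here comes from the E&T article: in this toy model asthma carries a higher underlying risk, but because every asthma sufferer is sent straight to hospital, the recorded outcomes make asthma look protective.

```python
# Minimal sketch of the confounding Alasdair describes: the training
# data already reflects clinicians' interventions, so a naive model
# would learn 'asthma -> low risk'. All probabilities are invented
# for illustration.

import random
random.seed(0)

def simulate_patient(has_asthma: bool) -> bool:
    """Return True if the patient develops complications."""
    # Clinicians send every asthma sufferer straight to hospital;
    # only some of the others are admitted.
    hospitalised = has_asthma or random.random() < 0.2
    # Underlying risk is HIGHER with asthma, but hospital care
    # prevents most complications.
    base_risk = 0.4 if has_asthma else 0.2
    risk = base_risk * (0.1 if hospitalised else 1.0)
    return random.random() < risk

def observed_rate(has_asthma: bool, n: int = 100_000) -> float:
    """Complication rate as it appears in the historical records."""
    return sum(simulate_patient(has_asthma) for _ in range(n)) / n

print(f"asthma:    {observed_rate(True):.3f}")   # ~0.04, looks 'low risk'
print(f"no asthma: {observed_rate(False):.3f}")  # ~0.16, looks 'high risk'
```

A model trained on the recorded outcomes alone would happily learn the inverted relationship, which is exactly the kind of anomaly that needs a human expert to spot and set straight.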