
Should the IET ask its members to pledge not to help build killer robots?

I read an interesting article in the online E&T (above) that reports on a pledge not to assist in the development of so-called Killer Robots. Should the IET take a stance, be the first PEI to endorse the pledge, and furthermore expect/encourage its members to sign up too?
  • Mark,

    I would say yes and no, or rather in the order you put the questions, 'no' and 'yes'.

    I think encouraging members to sign up is a good thing, but how can the IET take a stance and endorse the pledge while it is encouraging members of the armed services to become members?

    Alasdair
  • I take a pragmatic view - isn't it more about the autonomous angle (of the weaponised drones) than about seeking a pledge not to develop any weapons per se?


    I am reminded of that 1970s film "Colossus: The Forbin Project" (https://www.imdb.com/title/tt0064177/), which sees an apparently invincible computer system, housed in an impenetrable mountain silo, managing the US nuclear arsenal (for peaceful defensive purposes), only to find it has a competitor, Guardian, in Russia. A playoff then commences to see which one should rule, ultimately threatening the entire world, with no-one able (or allowed by Colossus or Guardian) to intervene. Classic stuff.
  • Mark,

    Yes, but how can a member of the armed services using such weapons be a member of a PEI that has taken a public stance against them? You can't take a stance and say members are not to help develop them, but say you are quite happy for members to use them once others have developed them. However, encouraging members to sign and leaving it to their own consciences would be a good approach. After all, we have to start somewhere.

    This is a bit of an ethical minefield. I will leave it at the above as I can't see a simple solution.

    Alasdair
  • Interesting, I would actually have to go further than Alasdair and say no and no, for example as described later in the article:

    At the Farnborough airshow this week, defence secretary Gavin Williamson announced a multibillion pound project to develop a new RAF fighter – the Tempest – which will be capable of flying unmanned and autonomously hitting targets, as well as using concentrated energy beams to inflict damage; the government has stated that human operators will always have oversight over all weapons systems.




    I am sure many figures in the IET are involved with the Tempest, as an excellent example, and would argue that it is not for them to make the moral judgement as to how it is used or operated. So where do you draw the line?

    HOWEVER, I personally feel rather differently about this (a brief look at my LinkedIn page will reveal that I have never worked anywhere in the field of armaments, and I don't intend to). And I do feel that "the IET" (whatever that is) does have a duty to explore and explain the implications of the technology the engineering community is developing and is capable of developing. So I do think it has a duty to expose the facts, but it's difficult to see where it can draw the line on "withdrawing labour". There are plenty of conventional weapons, used under human control, designed and built in (for example) the UK, that are used by repressive regimes against civilians. I can even imagine an argument that, if these autonomous systems are going to be developed, it is better that they are developed by professional engineers rather than "unprofessional" ones.


    We really should be discussing ethics in engineering far more in the IET. It is the huge elephant in the room for engineering.


    Here's a thought: I think it would be credible for the IET to propose that engineers should have the right to refuse to work on technology which they consider "morally unacceptable" without risk of reprisals (i.e. dismissal). But again it's phenomenally difficult to put into a solid legal code - we are in a slightly odd position in UK employment law in that cases where work is deemed unacceptable to an individual for religious reasons are reasonably well covered (although complex!) in law, but as far as I know there is no employment law to cover moral/ethical decisions of an individual for personal (non-faith) reasons. So, as far as I know, if someone refused to work on a project because they thought it was "morally wrong" (although legal) they could simply be dismissed. So imagine a scenario where a weapons engineer is working on Tempest thinking it is a human-controlled system, and it then becomes clear that it is actually going to become an autonomous system which they are unhappy with; there's not much they can actually do other than hope they can find another job. So maybe there's something the IET can do there.


    I must admit I had a look here in a break in case Lisa had posted one of her fun Friday postings; this was a bit the opposite! But thanks for posting this Mark and Alasdair, it is a really important subject which needs airing, and actually this is just the tip of the iceberg. With very little thought I'm sure we can think of many other things engineers have done - often with the best of intentions - that cause greater loss of life or health than autonomous weapons will. How much can we really blame the purchaser and the user every time? Should we be looking at what we do much more carefully? How much can we trust our own moral judgements anyway - do we really understand the big picture? Personally I've been wrestling directly with these problems since 1982 - when I went for a job interview which turned out to be for work on atomic weapons (which thankfully I didn't get offered, given it was the one time in my life when I was pretty desperate for a job) - and I have never found an answer yet.


    Fortunately trains don't kill many people 🙂 Even better, my job is to make sure they kill even fewer people 🙂🙂 I know what you're thinking, "smug ***" 🙂🙂🙂


    Happy weekend!


    Andy


  • Israel (allegedly) leads the world when it comes to military robots and has many in use of a type that no other country has.


    Has the IET ever been caught up in matters relating to the Israel-Palestine conflict?
  • Thomas, in a way yes, it is the classic conundrum. Someone has to start the process and take a stance of peaceful objection and non-participation. Otherwise escalation such as we saw between North Korea and the USA (thankfully reaching only verbal insults and threats) simply continues to its inevitable outcome.

    I recall reading something from one of the guys who used to sit deep underground with his finger on the button awaiting the presidential order to fire, saying (on hearing that Trump was going to be president and have access to the codes) that "Nukes are meant to be the ultimate deterrent, they're not meant to ever be used..."

    Not sure this applies to weaponised drones though... having them as a "just in case" deterrent is an argument that could be applied universally.
  • Mark,

    I have to disagree with your last sentence. When you are looking at misuse causing unsafe operation or harm you are (or should be) considering not only harm to the operator but also harm to potential innocent bystanders. This is applicable to anything from electrical distribution substation design (people walking past just as the operator messes up the switching operation) to military weapons systems and autonomous drones. The military also want the autonomous drone to hit the intended target and not a bystander (though I admit that with budget cuts this may be because of the cost of these things rather than due to higher-minded ethical principles), but the designer should be doing his/her bit to prevent 'foreseeable misuse', as you were taught.

    Alasdair
  • Absolutely Alasdair, it was an inadvertent omission (to also protect bystanders) and was (is, in my head at least) assumed. My last sentence was about removal of the user and human conscience - is that what you disagree with? (Just asking for clarification).
  • Former Community Member
    I once worked in defence building what could be described either as torpedoes with a large explosive warhead or as underwater vehicles. These 'vehicles' were autonomous and designed to, let us say, search on their own initiative and 'find and identify' a target. The fact that they were in the sea and did not appear Hollywood Terminator-like serves to blur the image of what a killer robot is and how long they have been around. Realistically it is up to the United Nations to decree what is globally acceptable in warfare and up to individuals to determine their own ethical stance. A long time ago a chap by the name of Joseph Rotblat gave a most interesting talk at the University of Liverpool about what could be described as ethics in nuclear warfare. He gave up his research on the atomic bomb as part of the Manhattan Project when it became obvious that Germany could not achieve the atom bomb themselves. I suggest he saw the technology as pointless. Perhaps a quote from him will be an acceptable conclusion to this thread: "I saw science as being in harmony with humanity". He remains a hero of scientific ethics and a mentor for us all.
  • Mark,

    On the removal of the user and human conscience, I can't say I agree with you, but I don't completely disagree. There always has to be a human operator who makes the decision to activate the 'killer robot', whether it be an autonomous drone or one of the target-seeking torpedoes that Joseph has just described. This person then becomes the 'user' and it is his conscience that is in play. Where I don't completely disagree is that the 'user' is removed from the location of operation and it is more difficult to make ethical decisions when isolated from the situation.

    Alasdair