Images of killer robots are the stuff of science fiction - but also, increasingly, of scientific fact on the battlefield. Should we be worried, or is this a normal development in the technology of war?
In this accessible volume ethicist Deane Baker cuts through the confusion over whether lethal autonomous weapons - so-called killer robots - should be banned. Setting aside unhelpful analogies taken from science fiction, Baker looks instead to our understanding of mercenaries (the metaphorical 'dogs of war') and weaponized animals (the literal dogs of war) to better understand the ethical challenges raised by the employment of lethal autonomous weapons (the robot dogs of war). These ethical challenges include questions of trust and reliability, control and accountability, motivation and dignity. Baker argues that, while each of these challenges is significant, they do not - even when considered together - justify a ban on this emerging class of weapon systems.
This book offers a clear point of entry into the debate over lethal autonomous weapons - for students, researchers, policy makers and interested general readers.
If you haven't yet watched the short film Slaughterbots on YouTube, you really should do so now. I mean it - stop reading immediately and watch the video before going any further. You won't regret it; Slaughterbots is short and impressively well executed. Besides, what I say below contains spoilers.
Slaughterbots was created by the Future of Life Institute in conjunction with Stuart Russell from the University of California at Berkeley. The film garnered over 350,000 views on YouTube in the first four days after its release, and was reported on by a large range of news outlets, from CNN to the Telegraph. The fictional near-future scenario depicted in this film in vivid Hollywood thriller style is both entertaining and scary, but is scripted with serious intent. As Russell explains at the end of the video, Slaughterbots is intended to help us see that, while AI's 'potential to benefit humanity is enormous, even in defence', we must nonetheless draw a line. Ominously, he warns us that 'the window to act is closing fast'. The key issue is that '[a]llowing machines to choose to kill humans will be devastating to our security and our freedom' (Sugg 2017).
The film opens with a Steve Jobs-like figure speaking on stage at the release of a new product. Only, instead of the next generation of iPhone, the product is a weapon - a tiny autonomous quadcopter loaded with three grams of shaped explosives, and which combines artificial intelligence (AI) and facial recognition technology to lethal effect. After proudly explaining that 'its processor can react 100 times faster than a human', the Steve Jobs of Death demonstrates his creation. We watch as he throws it into the air, and it then buzzes autonomously, like an angry hornet, over to its designated target - in this case a humanoid dummy. After latching parasitically onto the forehead of this simulated enemy soldier, the drone fires its charge, neatly and precisely destroying the simulated brain within, to the applause of the adoring crowd. If that were not demonstration enough, a video then plays on the giant screen, showing a group of men in black fatigues in an underground car park. The mosquito-like buzzing of the quadcopter causes the men to scatter in fear, only to be killed one by one as the tiny drones identify, track and engage them, detonating their charges with firecracker-like pops. 'Now that is an airstrike of surgical precision', says Mr Death-Jobs. As if sensing the concern that is building as we watch, he is quick to reassure his audience: 'Now trust me, these were all bad guys.' (Of course, we don't trust him one tiny bit.) Our concern only increases as he tells us that 'they can evade … pretty much any countermeasure. They cannot be stopped.' Another video rolls on the big screen, this one depicting a huge cargo aircraft that excretes thousands of these tiny drones, while we are informed that '[a] 25 million dollar budget now buys this - enough to kill half a city. The bad half.' (Just the bad half - yeah, riiiight.) 'Nuclear is obsolete', we are told. This new weapon offers the potential to 'take out your entire enemy, virtually risk-free'.
What could possibly go wrong?
At that point the film cuts across to a fictional news feed that's designed to help us see the dirty reality behind the advocacy and smooth assurances presented by the Steve Jobs of Death. The weapon has fallen into the wrong hands. An attack on the US Capitol Building has killed eleven senators - all from 'just one side of the aisle'. TV news reports that 'the intelligence community has no idea who perpetrated the attack, nor whether it was a state, group, or even a single individual'. We witness the horror of a mother's Voice over Internet Protocol (VoIP) call to her student-activist son that ends with his clinical killing by one of the micro drones, as swarms of them hunt down and murder thousands of university students at twelve universities across the world. The TV talking heads inform us that investigators are suggesting that the students may have been targeted because they shared a video on social media ostensibly 'exposing corruption at the highest level'. Then, suddenly, we're back on stage with Mr Death-Jobs, who tells us: 'Dumb weapons drop where you point. Smart weapons consume data. When you can find your enemy using data, even by a hashtag, you can target an evil ideology right where it starts.' He points to his temple as he speaks, so that we are left in no doubt as to just where that starting point is.
It's all very chilling, and it taps into some of our deepest fears and emotions. Weapons like tiny bugs that attach to your face just before exploding - creepy. Shadowy killers (states? terrorists? hyper-empowered individuals?) striking at will against helpless civilians for reasons we don't fully understand - frightening. People targeted on the basis of data gathered from social media - terrifying.
Slaughterbots was released to coincide with, and influence, the first of the 2017 Geneva meetings of the delegates working under the auspices of the United Nations' Convention on Conventional Weapons (CCW) to decide, on behalf of the international community, what (if anything) should be done about the emergence of lethal autonomous weapons systems (LAWS). The year 2017 was the first year of formal meetings of the Group of Governmental Experts (GGE) on LAWS, though it followed on the heels of three years of informal meetings of experts tied to this process. At the time of writing, this international process continues. In addition to the state delegates to these meetings, a range of civil society groups are also represented, most notably the coalition of non-governmental organizations (NGOs) known as the Campaign to Stop Killer Robots. Originally launched in April 2013 on the steps of Britain's Parliament as the Campaign to Ban Killer Robots, it was 'the Campaign' (as it is commonly known) that hosted the viewing of Slaughterbots at the 2017 GGE meeting in Geneva.
Slaughterbots certainly provided a significant boost to the Campaign's efforts to secure a ban on lethal autonomous weapons (or, failing a ban, to otherwise 'stop' these weapons). Unfortunately, the emotive reaction generated by the film is in large part the result of factors that are entirely irrelevant to the issue at hand: the question of autonomous weapons.
Remember what Russell identified as the key issue? 'Allowing machines to choose to kill humans'. If you have time, watch the film again, and ask yourself this question throughout: what difference would it make to the scary scenarios in the film if, instead of the drones selecting and engaging their targets autonomously, a human being seated in front of a computer somewhere was watching through the drone's cameras and making the final call on who should or should not be killed? I don't mean just pressing the 'kill' button every time a red indicator flashes up on his or her screen - let's assume he or she takes the time to (say) check a photo and make sure that the person being killed is definitely on the kill list. To use a key term at the centre of the debate (which I will examine in depth in chapter 2), in this mental 'edit' of the film, a person is maintaining 'meaningful human control'.
In this alternative, imagined version, AI would still be vitally important in that it would allow the tiny quadcopters to fly, enable them to navigate through the corridors of Congress or Edinburgh University, and so on. But there are no serious suggestions that we should try to ban the use of AI in military autopilot and navigational systems, or even that we should ban military platforms that employ AI in order to carry out no-human-in-the-loop evasive measures to protect themselves. So that's not relevant to the key question at hand.
What about the nefarious uses to which these tiny drones are put in the film? It is, without question, deeply morally problematic, abhorrent even, that students should be killed because they shared or 'liked' a video online; but the fact that the targeting data were sourced from social media is an issue entirely independent of whether the final decision to kill this student or that was made by an algorithm or by a human being. Also irrelevant is the fact that autonomous weapons could in principle be used to carry out unattributed attacks: the same is true of a slew of both sophisticated and crude military capabilities, from cyberweapons to improvised explosive devices (IEDs), and even to antiquated bolt-action rifles. In short, a ban on autonomous weapons - even if adhered to - would make essentially no material difference to the frightening scenarios depicted in Slaughterbots.
There are real and important questions that need to be asked and answered about LAWS. But in order to make genuine progress we will need to disentangle those questions from the red herrings thrown up by Slaughterbots and, indeed, by many contributors to the debate. This book seeks to take steps in that direction by trying to give a clear answer to the question raised by the Campaign at its formation: should we ban these 'killer robots'?