Subscribe now on: iTunes | Google Play | Stitcher | Soundcloud | RSS | or search "Ashes Ashes" on your favorite podcast app.
(We know this machine transcription sucks. We'll fix it as soon as we have the time and resources to do so!)
I'm David Torcivia.
[0:02] I'm Daniel Forkner.
[0:04] And this is Ashes Ashes, a show about systemic issues, cracks in civilization, collapse of the environment, and if we're unlucky, the end of the world.
[0:13] But if we learn from all of this maybe we can stop that. The world might be broken, but it doesn't have to be. In 2009 the public expressed alarm after learning that DARPA, the US Defense Advanced Research Projects Agency, had agreed to fund a military robot that could turn organic material into fuel. The robot, called the Energetically Autonomous Tactical Robot, or EATR, combines a steam engine with a biomass furnace to propel an autonomous robot that can search out its own fuel. It has a chainsaw arm and cargo space for smaller robots, and news about the project sparked sensational headlines like, "Military building a flesh-eating robot to feed on the dead". Some people freaked out and then the company released a statement:
[1:04] "We completely understand the Public's concern about futuristic robots feeding on the human population but let us assure you that is not our mission. Eater runs on fuel no scarier than twigs, grass clippings, and wood chips."
[1:19] News sites released corrections saying, "No, the robot will not feast on dead people, it is strictly vegetarian," and it was pretty much forgotten. I like that quote you read, David. The company that makes the engine, Cyclone Power Technologies, released that statement saying that the robot doesn't eat anything scarier than twigs. But a project overview by the company that makes the actual robot, Robotic Technology Incorporated, goes into a little bit more detail. What does it say?
[1:47] Well, it lists a number of fuels that it can run on. It explains that the combustion is external, so the engine can run on any fuel, whether it's solid, liquid, or gaseous. And this means anything from biomass, agricultural waste, coal, municipal trash, kerosene, ethanol, diesel, gasoline, heavy fuel, chicken fat - that's not very vegetarian - palm oil, cottonseed, algae oil, hydrogen, basically whatever you can think of, and it can run all that either individually or in some sort of combination.
[2:16] Okay, so maybe the robot isn't strictly vegetarian, but I guess what they were saying is they would never tell the robot to eat anything other than plant material. And like we said, people largely forgot about it. Nothing new has come out in the news about this EATR robot, although if you go to Robotic Technology's website they include the EATR as one of their three major projects currently sponsored by DARPA, so who knows?
[2:42] Daniel, have you ever played that PS4 game Horizon Zero Dawn?
[2:46] No, David, I don't have a PS4.
[2:49] Well, so this game basically takes place in a post-apocalyptic future where mankind has been reduced to almost a neolithic level of technology, but the animals that live in this world are for the most part not the animals that we know, these organic beings, but giant robots, mechs, that consume organic matter and live around the world. Things like giant robot saber-tooth tigers, giant dinosaurs. It sounds crazy and like a weird setting, but it's a fun game, and if you haven't played it yet and you don't want spoilers skip ahead here to 0:04:04. But basically the game revolves around the idea that in the past the militaries of the world built robots that consume organic matter to fuel themselves in order to wage war endlessly. And then we lose control of these autonomous robots and they end up fighting humanity, consuming almost all the organic life on Earth until some sort of fix is introduced and the world is launched into this sort of balance between the mechs and the humans. I don't want to totally spoil everything, but when I played this game and then read about this EATR robot I was like, oh my God, we're quite literally creating exactly the robots that this science fiction game used as an example of a way to destroy the world, and here we are doing it for real.
[4:04] Sounds like a perfect recipe for disaster. Let's design a robot that can operate autonomously, search out its own fuel, and then let's give it military weapons capabilities. And that fuel which it can use to power itself, let's just make it anything organic, basically the entire world.
[4:23] Yeah, well, it's a recipe for disaster if anything were to go out of control, and that's really the topic of what we're talking about today. These are autonomous weapons, AI technology used for combat, and the many, many ways that we might find ourselves facing an apocalypse that so far we thought could only be the realm of science fiction. But, as we've noted on the show repeatedly as we examine these topics, science fiction is increasingly becoming our reality, and unfortunately these science fictions are often dystopic in nature.
[4:52] And this EATR robot, as fun as it is to think about, we have much larger things to worry about than just one little EATR robot.
[5:00] It's not a little robot. It's a giant, almost car-sized robot with chainsaws mounted on the front of it. But with that scary image in mind, there are much worse things that we'll be facing very shortly, so let's jump in. Arms Race For LAWS
[5:10] Right now there's an arms race that is taking place among the nations of the world to develop and acquire military weapons of the so-called third revolution in warfare. The first revolution of course being gunpowder, the second being nuclear weapons, and the third being autonomous weapons. [5:30] And before researching this topic, David, if you had asked me to picture an autonomous weapon I would have had to use a little bit of creativity to try and imagine what a futuristic robot might look like.
[5:42] You would probably have thought of something like you've seen in all of our science fiction, so a Terminator-like character, or these RoboCop humanoid robots patrolling around, or giant mech suits running across the Earth, and you know, you wouldn't be that wrong. That is an autonomous weapon, but the fact of the matter is that the autonomous reality of science fiction is something that is a little bit farther off. We've had autonomous weapons, or at least semi-autonomous weapons, at this point for decades.
[6:09] That's right, and the more you look into it you realize that autonomous weapon systems are everywhere, all around us, and pretty integrated into the militaries of the world. We've had missile defense systems on Navy destroyers for a long time now. Heat-seeking missiles are pretty common. And something that's becoming increasingly common in terms of military equipment are cybersecurity systems. So, things you wouldn't traditionally think of as being a weapon, but existing within a computer server somewhere that may monitor and adjust certain infrastructures.
[6:39] Even things like cruise missiles can autonomously recognize a piece of terrain and guide themselves to it. Israel has that very famous Iron Dome system that automatically detects incoming rockets or missiles, automatically targets and fires without any human intervention. And these things have been in practice, they have been working, at this point for many years, sometimes more successfully than others, but the age of autonomous weapons exists now. What is changing is the level of these AI technologies and how powerful the autonomous capabilities of these technologies are becoming.
[7:14] Yeah, and everyone by now is familiar with drones and, no surprise, militaries are developing drones for warfare, and along with that comes autonomous capabilities. The US Department of Defense has a project for designing drone swarms that can hunt in packs like wolves, according to the department. And the Air Force has successfully deployed micro drone swarms from fighter jets while in flight. So autonomous systems have existed within the military for a long time, and one of the arguments for increasing automation comes from the potential for them to decrease risk or increase safety and eliminate the possibility of civilian casualties or collateral damage. But I was surprised to learn that perhaps the very first accidental death that came about from an autonomous system actually occurred in 1988. A US Navy jet fired an anti-ship guided Harpoon missile as part of a test in Pacific waters. The missile was supposed to target a dummy boat but decided to lock onto a nearby Indian merchant vessel instead. One crewman was killed when this unarmed, thousand-pound projectile slammed into the ship. But we have come a long way since 1988, and a long way since simple heat-seeking missiles. Autonomous Weapons: This Time It's Different
[8:34] We are on the precipice of a true paradigm shift in modern warfare. The changes that are taking place in weapons development do not represent incremental changes but rather an upheaval of war itself. In the same way that two weeks ago we discussed how technological advances in commercial automation presented an unprecedented future for labor around the world, the integration of deep learning and artificial intelligence with military equipment means that we are facing a future of war that is radically different from anything humanity has ever seen before.
[9:05] In April of 2017 the US Department of Defense established the Algorithmic Warfare Cross-Functional Team, also known as Project Maven, to "integrate artificial intelligence and machine learning across operations". Specifically, the goal of this project is to use the enormous amount of data that is available to the Department of Defense as training sets for machine learning, to replace human analysts, and to develop AI capabilities within every weapon system possible, with initial priority focusing on unmanned aerial drones. And although the military has funded artificial intelligence technologies in the past, this project represents a major shift in that it is the first time the focus has been on integrating machine learning with artificial intelligence products for combat operations.
[9:59] Yeah, in fact, the lieutenant general that headed this project, well, he called it the spark that would be the catalyst for "the flame front of artificial intelligence". And so although this project started with just a few team members, I think it was like six, the real innovation is found in the way that this project enables rapid and flexible commercial partnerships. And this is a big sort of change in the way that the military operates with the commercial world. There's been an increasing commercialization of war over the past few decades. The military industrial complex has grown increasingly associated with the economy at large, and this is just another part of that. But it plays into the development of the economy as a whole, the growth of Silicon Valley, of technology companies as a major part of our economy. Normally the military acquires its technology very slowly. Although it's advanced through organizations like DARPA, it's a slow process. There's not a ton of funding that happens constantly, but through Project Maven the military team was able to partner with technology companies, companies like Google, and help build training data for their machine learning algorithms from all the available drone videos the military has. And then just six months later they were using these artificial intelligence algorithms in drone operations against real enemies. The rapidity with which this project got off the ground will undoubtedly be used as a model for countless new military projects and teams seeking to integrate AI into combat operations today. And that of course requires partners from the tech community.
[11:26] According to the Wall Street Journal, the Department of Defense spent $7.4 billion on artificial intelligence related technologies in 2017 alone. $7 billion is a big number, and many Silicon Valley tech companies are eager to get a piece of this growing demand for military tech integrated with machine learning, like you mentioned Google. Other companies like Amazon and Microsoft Azure are trying to get into this business. In fact, Google was one of the companies that competed for a contract with Project Maven. The initial $9 million contract was expected to grow to $250 million per year. And executives within Google expressed their concerns that the public would find out about their involvement. The head scientist at Google Cloud said in an email, "I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry". But the contract was discovered and a huge backlash ensued. Academics in artificial intelligence fields signed an open objection letter to the company. 13 Google employees resigned. 700 employees joined an internal group called the Maven Conscientious Objectors. And 4,600 employees signed an internal petition demanding that the company cancel its contract, stating that it harms the public trust in the company and that we should not allow Google to join the ranks of companies like Palantir, Raytheon, and General Dynamics.
[13:03] But it doesn't end there. In May of this year close to 1,000 scholars, academics, and other information technology experts signed an open letter to Google asking the company to halt its work on Project Maven, and to avoid any work that involves artificial intelligence for military use or the sharing of personal data for military operations. And so in June of this year Google decided not to renew this Maven contract. But is this enough to stop the momentum driving forward and developing these weapons? Can we stop tech companies from accepting lucrative contracts from the military to develop autonomous war machines?
[13:39] Well, David, there's a lot of money at stake so I'm a little skeptical.
[13:43] I think part of it, and we'll explore this more as we go through this episode, but it's not just developing technology specifically for military uses that people are up in arms against here. A lot of these technologies that Google is developing, a lot of technologies in this whole deep learning / machine learning field, well, because of the flexibility of the technology, as they push this forward, as they advance an understanding of it, it carries over very well to these military uses. So even though you might be trying to design something for organizing just regular data, search data, or helping people book [14:18] restaurant reservations like we explored a couple weeks ago, many of these technologies can be applied to war. So even though you're not explicitly designing things for the military, for these operations, these defensive operations as they're most often called, though let's be honest, especially in the United States it's often offensive, it ends up being used for these purposes anyway. We're getting into the ethics of this debate, and that's something that we'll discuss in more detail later on in the show, but it's something to start thinking about right now.
[14:47] Well, it's interesting you bring up how commercial development of artificial intelligence technologies can be applied to military operations, because that's one of the responses you get from military experts and executives within the business of selling military technology. When it comes to banning the development of autonomous weapons they say: look, the technology is neutral. It's going to be developed no matter what. And trying to prohibit the development of military weapons that have artificial intelligence is really only going to harm technological progress in general. And that's not something we want. It would harm all of society to try and prevent general progress in technology. So that's something we can talk about soon, but why don't we look at some of the recent efforts that have taken place to prevent the development of what are being called Lethal Autonomous Weapon Systems? And this goes beyond just Google and its specific involvement with Project Maven.
[15:48] On July 28th, 2015, the Future of Life Institute presented an open letter at the International Joint Conference on Artificial Intelligence which called for a blanket ban on offensive autonomous weapons beyond meaningful human control as a way to prevent this military artificial intelligence arms race. To date this letter is signed by over 4,000 AI and robotics researchers and over 22,000 other endorsers including Stephen Hawking before he passed, Steve Wozniak, Elon Musk, Noam Chomsky and many others. Their concerns about the development of autonomous weaponry include cybersecurity, since anything integrated with information technology can be hacked; the escalation of an arms race between countries and a lowered threshold for going to war in general; the ability, or lack thereof, of autonomous weapons to distinguish innocent people from combatants; the rise of destruction and death from unpredictable automated conflicts, sort of like you would see in stock market flash crashes; and the removal of accountability and moral judgment from decisions that result in the loss of human life.
[16:53] So, David, that's kind of complicated, but what generally people seem to be concerned about is the development of killer robots that can make the decision to end human life without a human being involved in that decision. These are weapons systems, these are robots, these are computers that are informed by algorithms and some kind of direction but then act on their own. And we're okay with this type of autonomous decision-making when it comes to things like trading our stocks or flipping our burgers if we buy one of those burger flipping machines, or other autonomous decision-making that goes on in manufacturing. But when these actions that are being decided by computers have to do with ending human life, that's where things get a little bit tricky, and this is where the resistance is. It's not on autonomous software in general, it's specifically on weapons that kill human beings without meaningful human control.
[17:50] In terms of practical examples of what these might be, again, the very sci-fi, media-based picture of all this is the Terminator robot going around executing people, breaking into homes and murdering people on street blocks. But in reality, the way they're deployed right now, these are drones, unmanned drones that are flying miles high above the ground, targeting people who look like tiny little infrared symbols from miles away and firing a missile and killing them without any sort of human intervention. Or, the missile itself that's fired also has the same detection technologies on it, making sure it hits what it thinks is its initial target, whether it thinks it's the correct house or vehicle or whatever it is. These are pieces of software that launch automatic defensive responses. So, they think they're being attacked and they respond with similar cyber attacks, or with missiles, or nuclear weapons even, if we want to take it that far.
[18:45] And we even have technology today that gets us closer to that image of a Terminator killer-robot. Russia has a fully autonomous tank, the Uran-9, which can support a wide range of weapons from anti-tank guided missiles and machine guns to flamethrowers. And it can drive around, navigate, and pick its targets autonomously.
[19:04] There are flying drones of a large variety of sizes, some of those large airplane-sized drones firing missiles that I discussed just a moment before, but also small drones like the ones we're more familiar with, the kind that have a camera or that you buy at the toy store. But these are being equipped with tear gas grenades and with these detection systems. Some countries are testing AK-47s and rocket launchers mounted on these otherwise small drones, and using the camera vision technologies on all of these drones to target specific individuals or groups of people at the same time.
[19:35] And right now we haven't really seen a broad deployment of these types of technologies that are actively choosing and attacking targets without humans behind the wheel at some point. And that's usually the argument that is made by militaries like the US, who say: okay, even though our drones are capable of selecting objects and tracking objects and potentially firing upon targets by themselves, we always have a human behind the joystick that is making that final decision to fire the weapon.
[20:05] At this point it's sort of funny. Combat for the drone operator is no longer somebody sitting there, in the moment, feeling at risk like their life is on the line in combat, in place. Instead they're sitting behind a computer screen in an air-conditioned trailer somewhere, oftentimes in the United States if that's the country operating the drone, even though the weapon they might be flying is on the other side of the world. And when it comes to actually firing that missile or firing the gun, it's hitting a prompt that says basically, Would you like to kill this person? Yes or no? And we now think about taking human life the same way that we accept terms and conditions when registering for a new website. That's the human component of this equation at this point.
[20:50] And that's the technology that Google was developing with this Project Maven. They were taking hundreds of thousands of images from drone videos, and then large pools of human labor were getting together to classify objects within those videos. That data could then be used as training data to help machines learn how to spot these objects on their own, and then that technology can aid the drone operator in determining targets and such. So why don't we real quick just look at some of the arguments against the idea that we should ban these types of weapon systems.
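To make that labeling-to-training pipeline concrete, here is a minimal sketch, purely illustrative and not drawn from Project Maven itself, of how human-labeled drone frames become training data for a toy object classifier. Every frame ID, label, and feature below is a made-up stand-in; a real system would extract image features with a convolutional network rather than the fake feature function used here.

```python
# Minimal sketch: turning human-labeled drone frames into training data
# for a toy object classifier. Labels, frame IDs, and features are all
# hypothetical stand-ins; a real pipeline would extract image features
# with a convolutional network rather than random vectors.
import random
from collections import defaultdict

# Step 1: human analysts label frames (the labor-intensive part).
human_annotations = [
    ("frame_0001", "car"),
    ("frame_0002", "person"),
    ("frame_0003", "truck"),
    ("frame_0004", "car"),
    ("frame_0005", "person"),
]

def extract_features(frame_id):
    # Stand-in for a real feature extractor (e.g. a CNN embedding).
    random.seed(frame_id)  # deterministic fake features per frame
    return [random.random() for _ in range(8)]

# Step 2: pair features with human labels to build a training set.
training_set = [(extract_features(f), label) for f, label in human_annotations]

# Step 3: "train" a nearest-centroid classifier over the labeled features.
sums = defaultdict(lambda: [0.0] * 8)
counts = defaultdict(int)
for features, label in training_set:
    counts[label] += 1
    sums[label] = [s + x for s, x in zip(sums[label], features)]
centroids = {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def classify(frame_id):
    feats = extract_features(frame_id)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Step 4: the model now labels new frames with no human in the loop.
print(classify("frame_9999"))
```

The point is simply that the expensive, intentional part is the human labeling in step 1; once that exists, step 4 runs without an analyst involved.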
[21:22] I love a lot of these anti-ban arguments because they're things that have been trotted out time and time again, and sometimes even used the opposite way by the same people making these arguments about technology. So they might say that, oh, we shouldn't ban autonomous technology because of these reasons, but these very same reasons are why we should ban other states from obtaining nuclear weapons, because oh no, we can't let them fall into their hands. So one of the first things that always comes up is that these bans are useless because anyone will be able to develop them anyway. The knowledge of how to make these weapons, these tools, is out there. It exists. All these little bits and pieces are online. You can find them and you can assemble it yourself. You just need a little bit of time, investment, and testing, and then there you go. You have autonomous weapons. So trying to ban them is pointless. The knowledge already exists in the world. Of course this is identical with nuclear weapons and how to build them. You can go find the plans for the original nukes in the Library of Congress. But when it comes to autonomous weaponry, maybe because it's less capital investment and the fact that it's not just nation-states that would be able to build these things but companies, or maybe in the future even individual people, for whatever reason the people pushing these autonomous weapons say, well, it doesn't matter, the cat's out of the bag. Why ban it? You can't at this point.
[22:37] Right, David. The point that is being made is that while anyone can learn, if you're smart enough I guess, how to build a nuclear weapon, obtaining the resources and actually manufacturing it is so incredibly difficult that the knowledge doesn't really matter. It's the resources. But when it comes to autonomous weapons, well, anyone can go out and buy a $300 drone, tape a machine gun to it, and then the software behind it that really powers these systems, the neural networks for facial recognition, a lot of these neural networks are open source software and can be obtained relatively easily.
[23:13] Okay, so everyone is agreeing then that we shouldn't ban these weapons because, well, they're easy for anybody to obtain. But then the second argument is that we shouldn't ban these weapons because it's going to be hard for terrorists to get their hands on these military grade weapons, though non-state actors in war zones might be able to. So, like, if I'm ISIS or something in Syria I might be able to get a drone to use for reconnaissance or something, but if you're in a city in the West this is not going to happen. Terrorists won't be able to get this technology. Which seems like it very much disagrees with what they just said about anybody being able to get this technology, so why ban it? That aside, this also ignores the fact that state actors might be, or already are, using this technology to actively oppress people, including their own citizens. And the victims of these "non-state actors" in war zones, do we not care about the people that ISIS or other extremist groups are using this technology to hurt?
[24:06] But you know, David, I think the real argument around autonomous weapons has to do with the ethics of killing. And do we want to put those decisions in the hands of a cold-blooded machine? And the response to that is that the rules of engagement make it very easy for robots to identify targets. If a human is shooting at you, the rules of engagement make it very clear that you can shoot back. A robot can sit there though and take the punishment of gunfire, and then it can use its multiple sensors to determine who is shooting at it and then eliminate them without any risk of harming the wrong person. And it seems pretty logical, right? If you shoot at me, David, I can shoot you back. And a robot soldier in my place would be able to do the same thing, except now I'm not at risk of dying. So this technology has the potential to save lives on the side that is employing them. It's a win-win situation, or so the story goes. We kill the bad guy and eliminate the risk of our soldiers dying. [25:09] But this raises a very important moral question, and one that you actually raised briefly, David, in episode #22, "Fashion Victims", which is that the destruction of human life as punishment for damaging inanimate objects and property is troubling in a way. Setting aside for a moment the moral contradictions and injustices within certain warfare itself, a human is justified in defending themselves with lethal force specifically because human life is at risk. But once you replace one of these humans with a robot, there is no risk to human life until the robot engages lethal force. The rules of the game have completely changed, but this is something we can come back to and expand on later in this episode.
[25:54] Those are great questions, Daniel, and I can't wait to explore them in more depth later on, but it also makes me think of some of the tricky language that these companies, these people involved in developing these technologies, employ. The same questions that we bring up about, well, you know, is war justified when it's a robot killing a human? are the same sort of ethical questions, the philosophical play, that is being put to use by these tech companies to justify their work on what they call defensive projects. So the response that companies like Google and the DoD make to concerns about artificial intelligence being integrated with weapon systems is: we're not developing any offensive capabilities. The Google contract for Project Maven, for example, was mostly for developing object recognition capabilities, so a drone can recognize a car from a person and track where these things go. And then the way it's presented by the military and these tech companies partnering with the military is that these technologies are just aids for human pilots, so they don't have to spend as much time trying to identify targets and can spend more time thinking about whether it's right to ultimately pull that trigger.
[27:01] But that argument, David, of course ignores the fact that you can't draw a clear line between technology for firing a missile and its accessory capabilities. So what I mean is you can't draw a line between the act of autonomously firing a missile at a target and the capability to recognize and track objects. You can't have one without the other, and that's what the Google contract for Project Maven was for. It was for developing object recognition capabilities so that a drone can recognize a car from a person and track where these things go. But once an unmanned aerial vehicle has the capability to fly itself, map the world in real time, and track different classes of objects, the fact that a human being pulls the trigger is not a matter of necessity but mere protocol. And that protocol can be changed in a heartbeat.
[27:51] And beyond all that, we talked in the past about predictive policing and how the technology is presented as a mere objective tool used by human officers to predict where crime may occur. But in reality this technology only exacerbates discriminatory policing while allowing humans to offload their ethics and ultimate responsibility in the name of objective algorithms. And there's no reason to think military applications of AI would be any different. An AI driven visual overlay that is seen by a drone pilot is marketed as a way to reduce the risks associated with human judgment. But more realistically it could be used as a way to direct human judgment towards certain outcomes, outcomes that might go against what's best for the drone pilot and certainly those on the ground.
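As a rough illustration of how an overlay like that can dress thin evidence up in a confident-sounding number, here is a toy sketch that stacks a handful of weak cues into a single "probability of target match" using a naive log-odds combination. The prior, the cues, and their weights are all invented for this example; no real targeting system is being described.

```python
# Toy sketch of a "target match" score built by naively stacking weak cues.
# Every prior, likelihood ratio, and cue below is invented for illustration.
import math

def combine(prior, likelihood_ratios):
    """Combine a prior probability with independent cues via log-odds."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Prior: maybe 1 in 50 white vans on this road is the actual target.
prior = 0.02

cues = {
    "white van": 1.5,                  # barely informative
    "left 'suspicious' region": 4.0,   # a subjective label baked into a number
    "40% over the speed limit": 2.5,   # being late for a wedding also explains this
    "loaded unknown containers": 10.0, # so would luggage or catering trays
}

p = combine(prior, cues.values())
print(f"displayed to the soldier: {p:.0%} probability of target match")
# Each cue is also consistent with innocent behavior, but the overlay
# only shows the final number, not the assumptions behind it.
```

Change any one of those invented weights and the displayed percentage swings wildly, which is exactly why a single confident number can end up directing human judgment rather than informing it.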
[28:40] David it's time! Do you know what it's time for?
[28:43] You to ask me some inane questions, I'm sure.
[28:46] No, it's time for Robot Dystopian Future #1.
[28:56] So, David, let's say you're my right-hand military man and I tell you that we have intelligence on a white van, likely to be transporting explosives from one town to another, and it's going to be using such-and-such road. I give you a rocket launcher and I need you to locate the target and stop it at all costs. Oh, and of course you better not kill any civilians. Well, with those orders you might go out with your team and post checkpoints along the road at different places. You might tell your team to stop every white van along the road for a search, and in general you're going to be aggressive in locating the target, but you also might err on the side of caution when it comes to using lethal force, for fear of injuring or killing the wrong people and potentially losing your job in the process. [29:44] So, in other words, you're going to be using a lot of human judgment. But now let's say, David, instead I give you the same instructions but I augment your equipment with a pair of holographic overlay glasses. I'm sure you're excited at this point, David, and we connect these glasses to AI supported visual satellites or drones or something in the air that's tracking this geographic region. Well, fast forward to the mission. You're standing on the street corner scanning for likely targets. All of a sudden your glasses come alive with this visual overlay that says, "White van approaching from the northeast. Seen leaving suspicious region known for terrorist activity. Traveling 40% over the speed limit. Occupants were seen loading van with containers of unknown contents. 68% probability of target match". Well, David, now you have to make a decision. Normally you'd stop the van for a search because it's white and could be a likely target. But you recognize that every time you do that there is a risk that one of your team members might get killed by a preemptive attack. The intelligence in your glasses seems solid, and although you usually err on the side of caution when it comes to eliminating a target, well, if this turns out to be the right target and you messed up, somehow got one of your team members killed or let the target slip through, well, you don't have a good excuse, because your computer told you everything you needed to know. How could you mess this up when there's a 68% probability of a target match? So what do you do?
[31:13] I mean if it was a black van we would have just shot it right away, but because it's a white van I want to be more cautious and try and take it peacefully if I can. But 68%. Time to blow it up I think.
[31:25] All right David so you blow it up. Fine. Well, unfortunately for the occupants of that van, a harmless family that was rushing to get to a wedding they were late for, you hit the wrong target. But fortunately for you no one is going to blame you because you relied on data that seemed positive from an objective artificial intelligence. Who could blame you for following up on that? [31:49] Okay, maybe that was a cheesy example but it kind of highlights one of the main concerns with augmenting human soldiers with artificial intelligence and autonomous weapons. It removes accountability and the moral caution that is unique to human judgment. Is Technology Just A Tool?
[32:04] Well, yeah, in this case technology was just a tool, right? It was a tool used by the soldier to make a decision. And this "technology is just a tool" argument is something that is so popular in this discussion, and it's really no different than "guns don't kill people, people kill people," right? [32:20] Well, this is the autonomous weapon version of that same classic argument. And ultimately autonomous weapons get their power from this software. And as we saw with Google and its partnership with Project Maven, it takes a ton of resources and energy and time and people to train machines to recognize objects and faces, track targets, and carry out the autonomous missions we want them to. As revealed in the Project Maven case, just to get the project off the ground, in order to train the machine behind this drone technology to recognize a car or a truck or an individual, human beings had to sit down and analyze hundreds of thousands of video images, manually identifying objects which machines could then use as training data. The kind of work needed to enable autonomous technology is labor-intensive and does not come about without intent and purpose. When we think about technology, advancement, and progress we are led to believe that advancement follows a logical, incremental path. [33:20] This is reinforced in the way that history itself is told. It's reinforced through justifications for the development of certain technology, and it's reinforced through media and entertainment. If anyone has ever played one of the Sid Meier's Civilization games, in which the goal is to build an advanced civilization, you know how the technology tree is presented as a logical framework. Of course you go from the Stone Age to the Iron Age. Of course you develop roads and then the steam engine. And of course you ultimately develop nuclear power and satellites.
[33:49] Yeah, David, that's really the idea when we think about technological progress, that there's only one direction it can go. That's progress. I mean, progress only goes in one direction. But this framework is a man-made construct. It's not a natural constant of the universe. Technology is not merely the invention of computer chips and circuit boards but includes systems of organization like the British Postal Service, which at the time was considered a world wonder, and without which perhaps the modern global economy would not have come into being. [34:21] There are a million different directions that humans can go in the development of technology, and those directions have less to do with natural laws than they do with human intent and purpose. So, consider facial recognition technology. You could be the most technologically advanced civilization in the universe and still lack any method for tracking faces, because the ability to do so does not emerge naturally from some general level of advancement. It emerges from an intentional effort to create it. There is no true benefit to society related to the development of facial recognition beyond surveillance and control. So if surveilling and controlling people were not an active human objective the technology would never emerge on its own.
[35:07] Yeah, I think that point about facial surveillance is really important, because a lot of the time we talk about this technology like, "It's so neutral". Like, "Oh yeah, you know, anybody can build a missile, but it's whether you use the missile or not that makes it evil". Like that's some sort of justification for being able to build missiles. But facial recognition is something that we see a lot more time and energy invested in, and we see it everywhere. Facebook has it. You can buy home cameras from Google, from Nest, that have facial recognition built in. There are even doorbells now with facial recognition built in that tell you who's ringing your door.
[35:38] How convenient!
[35:39] This technology is everywhere and it's sold to us as convenience. You upload a photo to Facebook, it automatically tags your friends I guess.
[35:47] You know what, David? I'm going to have to take back my comment that there's no benefit to society related to the development of facial recognition because you just don't understand how much work went into tagging my friends on Facebook this is a huge time saver for me, something that...
[36:01] Yeah from all those like huge group photo shoots you do with your friends constantly.
[36:05] Someone's a little jealous I think but...
[36:07] I know. But really, the justifications I've heard in conversations with people about facial recognition, where they're reaching, searching for some sort of social good that comes from a technology that's used almost universally to surveil and control people. To control us at borders, to control us in our cities. As we're seeing right now in China, in Shandong Province, all around the world this technology is being deployed to arrest people, to make people's lives worse under the guise of safety. Under the guise of societal control, which is, you know, not even trying to hide what they're trying to do with it. But what is the good? [36:44] Maybe you could take a picture of somebody who's injured and you don't know who they are, and then you can maybe identify them. Like, the cases of that actually doing anything are so infinitesimally small it's ridiculous. I mean, well, as technology becomes more ubiquitous it finds its way into everyone's hands. There was an example in Russia recently where there's a service, an app, where you take a picture of a woman, on a train or a bus or on the street, that you thought was hot, and it would link to their VK account, which is like the Russian version of Facebook, so that you could get the information on that person and then stalk them, assault them, whatever it is you wanted to do, harass them online, just because you saw them on the street. Normally you would go up and ask them for their name, their phone number, and if they didn't want to give it to you then you had that privacy. They could walk away and there was nothing you were able to do. Now thanks to facial recognition you can find out who this person is and bother them wherever they are, and they can't escape it. This is what this technology is doing. This is not good technology. There is nothing to redeem facial recognition or the work that's gone into developing this technology. It's a huge drag on society. And a lot of these technologies find their way into these autonomous weapons, and to the engineers, the companies that are doing the actual development of these autonomous weapons: you're actively hurting all of humanity with your work. And you can try and justify it by saving soldiers' lives or something, but you're only fooling yourselves.
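For a sense of how little machinery that kind of app actually needs, here is a minimal sketch of the matching step: a face is reduced to a numeric embedding and compared against a database of profile-photo embeddings by similarity. The vectors and profile names below are fabricated placeholders; real systems compute the embeddings from photos with a trained neural network, but the lookup itself is about this simple.

```python
# Minimal sketch of face matching by embedding similarity. The embedding
# vectors and profile names below are fabricated placeholders; a real
# system derives the vectors from photos with a trained neural network.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend database: profile-photo embeddings scraped from a social network.
profiles = {
    "profile_A": [0.45, 0.52, 0.60, 0.20],
    "profile_B": [0.90, 0.10, 0.44, 0.27],
    "profile_C": [0.15, 0.85, 0.30, 0.55],
}

# Embedding computed from a photo snapped of a stranger on the street.
street_photo = [0.14, 0.86, 0.31, 0.53]

best = max(profiles, key=lambda name: cosine_similarity(street_photo, profiles[name]))
print(f"closest match: {best}")
# One candid photo is now enough to link a stranger to an online identity.
```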
[38:07] Well, you're right, David, that this facial recognition technology, the software behind object recognition, these technologies ultimately get integrated with autonomous weapons. And you brought up Russia, and this is a country that is a good example of someone who's justifying the development of lethal autonomous weapon systems with the argument that artificial intelligence research is good, it's inevitable, and a restriction on weapons might harm research in general. Russia is leading the charge on the development of autonomous unmanned ground vehicles, or tanks, and has been designing drone swarms of up to 100 individual drones that can operate as an AI controlled unit. [38:47] And in September of last year Russian president Putin stated that whoever leads the world in AI development will rule the world. And two months later the Russian Federation released a letter stating that it would not honor any ban or restrictions recommended by the upcoming United Nations convention on lethal autonomous weapons systems. The Russian position was basically: look, it's too difficult to define what a lethal autonomous weapon system is, and any ban or restriction might harm research in general. Meanwhile Russian companies are producing and marketing autonomous weapons like machine guns that can select and fire upon targets without human guidance. [39:30] Alright, David, let's do another robot dystopian scenario. Number two!
[39:35] Boop boop boop boop boop boop boop boop boop boop boop.
[39:42] In 2010 a trillion dollars vanished from the US stock market before mostly recovering, all in the span of 36 minutes. Analysts believe that a single man set off a chain reaction when he placed a large, bogus, spoofing order intended to confuse stock prices. Algorithms that were trained to sell under certain conditions went into a flurry of activity faster than humans could intervene. A similar event happened in February of this year, when the Dow Jones Industrial Average collapsed 1,600 points in about 15 minutes. A former vice chairman of NASDAQ said, "We created a stock market that moves too darn fast for human beings". Now, when high frequency trading computers set off chain reactions in financial markets, people lose money that they might not ever be able to get back. But life goes on. But in a situation where high-frequency autonomous weapons set off chain reactions, life itself might not go on. That's where this gets really scary.
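Here is a toy sketch of the chain reaction being described: a handful of automated sellers, each programmed to dump shares once the price falls past its own trigger, turn one bogus order into a collapse before any human can react. All prices, triggers, and impacts are invented for illustration.

```python
# Toy sketch of a flash-crash chain reaction: automated sellers each dump
# shares once the price falls past their own trigger, and each sale pushes
# the price lower still. All prices, triggers, and impacts are invented.
price = 100.0

# (trigger price, price impact of that algorithm's sell-off)
algos = [(99.0, 2.0), (98.0, 3.0), (96.0, 4.0), (93.0, 6.0), (90.0, 8.0)]
fired = [False] * len(algos)

price -= 1.5  # a single bogus "spoofing" order nudges the price down
tick = 0
while True:
    triggered = [i for i, (trigger, _) in enumerate(algos)
                 if not fired[i] and price <= trigger]
    if not triggered:
        break
    tick += 1
    for i in triggered:
        fired[i] = True
        price -= algos[i][1]  # each sell-off moves the market further down
    print(f"tick {tick}: price {price:.2f}")
# A 1.5-point nudge cascades into a nearly 25-point collapse with no human
# in the loop; swap the sell orders for automated retaliatory strikes and
# you have the flash-warfare worry discussed next.
```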
[40:37] As we saw in episode #13, "Lights Out," the military considers cyber attacks on our infrastructure systems as potential acts of war. Which means that in a world where we rely increasingly on autonomous weapon systems, robots won't just be monitoring incoming missiles. They'll also be monitoring information-related attacks to justify physical retaliation. And so in the same way that we experience flash stock market crashes we may be setting ourselves up for flash warfare. [41:12] A computer somewhere erroneously detects hackers trying to disable a small national power grid and retaliates by launching mortar fire into a neighboring country. Some of the fire lands in proximity of a refugee camp, causing people to flee in panic. One of Meteor Aerospace's autonomous rovers, Rambo, mistakes the behavior of the refugees for a military operation to cross the border, and while firing into the crowd with its mounted machine gun it directs a nearby military base to activate a radar installation. But that prompts a loitering Harpy missile, programmed to destroy enemy radar when it comes online, to descend and blow it up. A nearby autonomous submarine joins the fray...
[41:56] And in the background of all this you have cybersecurity systems trying to attack and shut down enemy logistics, infrastructure, and sparking more retaliation. Within minutes you've got 10 countries practically at full-blown war with each other and the humans are still asleep.
[42:11] Luckily though, as we saw with automation, David, now that we have autonomous journalists it will all be written about in the news when we wake up. War Business And Following Orders
[42:22] But no conversation about war can ever be complete without the constant reminder that war is a business. And these technologies, these autonomous lethal weapons, well, they themselves are a business. The Israeli Defense Ministry received a complaint against Israeli weapons company Aeronautics Defense Systems that alleges the company was asked by a potential client to give a live demonstration of its Orbiter 1K suicide drone on an Armenian Army position. Before I even continue I just want to stop and point out the fact that people are in fact building and selling suicide drones. [43:00] OK, according to the complaint, employee drone operators refused to launch the weapon, and company executives armed and launched it themselves. There's no disputing that the attack actually happened, but the company claims their client, the Azerbaijani government, executed the attack after purchasing the drone. The Israeli Defense Ministry has taken the complaint seriously enough that it has suspended all exports of this company's products to Azerbaijan and paused the contract worth $20 million to the company.
[43:28] This raises so many questions, David, about the business of war itself. I mean, this company in particular markets its products to customers in about 50 different countries. But the question that is most relevant to this show is: in this complaint it is alleged that employees of the company refused to launch the drone, but in an autonomous world, would the risk that a soldier, or in this case a company employee, disobeys an order go away?
[43:56] Well, I think, you know, you bring this up as some sort of future concept. Well, in the future, when robots can do this themselves, will the fact that an employee refuses to do it, or a soldier refuses to do an action, disappear and no longer be a thing? But in this very specific scenario it actually happened, here with our current level of autonomy. The soldier, in this case the drone operator who's an employee of the company, but we're going to call him a soldier because they were launching drones designed to kill people via suicide attack, they refused to fire. This is the same as a commander coming up to somebody saying, "Shoot this woman. Shoot this combatant that's surrendering," whatever it is, and the soldier saying, "No. That's wrong. I'm not going to do that," and then the commander doing the shooting himself. Well, in this case the commander is the CEO of a tech company who's trying to show off his product. Maybe he's not a qualified drone operator or doesn't know how to arm bombs or whatever it is this drone does, but it doesn't matter, because all he has to do is press a button, and the autonomy of this weapon, the ability to go through and complete these orders without question just because it was told to, is what enabled this CEO to use a suicide bomb as an advertising pitch. This isn't the future. This is the current state of lethal autonomous weapons. I don't know if the Cannes Advertising Festival is going to have an award for lethal drone demonstration advertising, but maybe they should consider adding it to the docket this year.
[45:23] It'll win first place, David! But you know what? This discussion about war and business, it reminds me of what we talked about last week, with debt and currency. We discussed how currency has become so vitally linked to war. Before currency it was difficult economically to support a large standing army. Armies were smaller. They were more spread out. Their justifications for going to war in the first place were more likely to stem from local conditions, human needs, human judgment. And in general they were more decentralized. But currency allowed rulers to expand their central power by imposing a currency tax on the people, a currency that could only be obtained by supplying the market demands of military personnel. [46:10] And this unified the armies and the people under the same roof, not by appealing to their sense of duty or patriotism but through economic incentives. And in a way this paradigm shift removed some of the moral or purely human elements in armies and replaced those elements with a cold and calculating machine. Well, in the same way, autonomous military technology threatens to centralize military power even further. Right now, a tyrant dictator may still experience challenges when it comes to controlling a military force. I mean, after all, many violent coups d'état occur when military executives disagree with the political leadership. And because militaries are comprised of citizens, asking those individuals to turn on their own people is not an easy thing to accomplish. But once that same dictator has access to a dispersed robotic force that can be controlled from a central, well-protected hub and executed, like you pointed out, David, with the touch of a button, those challenges all but disappear.
[47:13] Daniel?
[47:14] Yes, David?
[47:14] It's time for me to deploy Robot Dystopic Future #3! [47:24] Daniel, I'm wealthy, fabulously wealthy.
[47:28] So how come you wouldn't give me that loan at the beginning of last week's episode?
[47:31] Because I don't like you, Daniel. But I am fabulously wealthy. I have billions of dollars.
[47:36] Oh I see this is a scenario.
[47:37] I have tens of billions of dollars. In fact, you know what? I'm the richest man on earth. I'm Jeff Bezos. It's tough being Lord of Amazon. There's a lot of work. People are angry underneath me, but all the money, I guess it makes it okay. It's worth it. I've been developing autonomous technology. I've got drones that I'm working on that deliver things. You know, just selling people stuff from online... I think I'm getting tired of that business and I think I want to get into the warlord industry. You think you can help me out?
[48:07] You think I could help you out?
[48:09] Yeah, you know you are Daniel of Daniel's Autonomous Lethal Weapon Industries.
[48:13] Oh, yes. Yes, sir! All your finest robots right here. But according to certain international treaties I'm actually not allowed to sell some of these weapons to just anybody, Mr. Jeff sir. It has to be a government.
[48:27] Well yeah, we could set up the state of Seattle-Amazonatonia, and I have $20 billion here that says you should change your mind, and I would love to purchase an autonomous drone army.
[48:38] Oh, well $20 billion you say...
[48:40] Yeah, is there something you can hook me up with?
[48:42] If you started with that I would have told you right from the very beginning we are in business.
[48:47] And just like that Jeff Bezos becomes a warlord with an army at his command. And this is the first time in history, we are entering a time when anybody with enough money could purchase an army, an army that is never going to say no, is always going to be there, isn't going to need to be fed. It will need to be repaired and refueled, but with EATR technology, with robots that repair themselves, that will all be taken care of too. And for the first time in history military might is disconnected from manpower.
[49:17] From manpower?
[49:18] Yes, from manpower. Throughout history the ability to raise a military, an army, was dependent upon a population, a large population, one that could both field the number of people you need as troops, as soldiers, as well as people supporting the soldiers, and then people growing food or making supplies to keep those soldiers fed and fighting fit. That's a huge effort in order to have any sort of military that has any sort of actual power. But as you eliminate the human component of this, you're replacing the actual physical human cost of war with just an economic one. And of course you can't entirely disconnect economics from human labor hours, at least not yet, though as the automation episode we've addressed showed, that is increasingly happening. But now money buys militaries. Large militaries, and not just mercenary forces. And so it's not too dissimilar from the Star Wars prequels, where groups with enough money, like the Trade Federation, could buy large autonomous militaries, or groups with enough money and a strong enough ideology, like the Jedi, could buy a clone army. Well, now Jeff Bezos or anyone else who has enough could, in theory, field a military based purely on their bank account.
[50:32] David, at first I was a little skeptical when you were talking about a tech billionaire owning an army, because it seems like all the governments of the world would not stand for that. But you're right in the fact that the ability to command a violent force is shifting away from what we traditionally think of as the owners of armies, which is states, and this power is transferring to non-state actors. So, maybe at first we're going to see this in the hands of small-scale, more regionally based tyrant dictators, but as money becomes more and more integrated with the ability to control militaries I think you're right that we're going to see a dramatic shift in who is ultimately deploying these weapons in the first place. And perhaps it would never be so overt, such as Jeff Bezos just purchasing an army outright, but perhaps someone like him...
[51:25] Or a company like the United Fruit Company, which you've discussed in the past, that used American military might in order to further their economic prospects. Well, now with these lethal autonomous weapons they could do the exact same thing with an army paid for by stockholders and employed by the company itself.
[51:41] And someone like that, in trying to protect their resources and other business interests, could somehow redirect funds to other paramilitary groups that could deploy these technologies as proxies for the man himself. Right? And that's a scary future, David.
[51:57] It certainly is. It's scary until I get control of my military, my lethal autonomous military. Then you'll all know fear! Wait, what?
[52:08] Well, speaking, David, of autonomous weapons that are becoming increasingly within reach of non-state actors or quasi-state actors, we can look at police departments themselves around the United States to see how this technology is developing in the hands of groups all over the country.
[52:26] Yeah, absolutely. Just like you can't separate conversations of war from the increasing corporatization of war, you also can't separate the increasing militarization of the police from the same conversations about weapons. The technology and tools that militaries around the world use, as well as the techniques and training, find their way into police forces all around the world, most notably here in the United States and in places like Israel.
[52:50] In 2016 Dallas police cornered a man in a parking deck who had shot and killed five officers. The man refused to come out, so officers strapped C4 to a bomb defusal robot, drove it in, and blew him up with it. It was the first time police have ever used a robot to intentionally kill someone. It wasn't autonomous in this example, and putting aside any ethical or moral questions about whether this was appropriate or not given the situation, this event was significant for the fact that it was improvised. It had never been done before. And it raised a lot of questions that the current legal framework did not have answers for. And, ultimately, I think what we can take away from that example is that police officers represent one area of society that has access to force and that will find innovative ways to use whatever tools they have at their disposal. And to prevent the abuse of those tools, one way is to write legislation that attempts to limit what a police department can do with certain equipment, but another angle is to limit access to certain equipment in the first place. Equipment perhaps like weaponized drones.
[54:00] In 2015 North Dakota became the first state in the United States to allow police to fly weaponized drones equipped with pepper spray, Tasers, and even bean bag guns, which are little bags filled with lead shot fired from a shotgun that are allegedly not intended to kill. And ever since then Connecticut has been trying to raise the bar by becoming the first state to allow police to fly drones equipped with deadly weapons. Last year, in 2017, a bill to that effect passed the state's judiciary committee but ultimately failed to pass the house. This was the fourth attempt at passing similar legislation, so it will likely be back around for round five at some point. And in March of 2018, just a few months ago, Israeli forces used drones equipped with tear gas launchers to fly over protesters in Gaza and drop tear gas canisters from well above their heads down on the populations beneath them. These tear gas drones are being tested right now against actual live human protesters as a beta test for developing this technology to sell to police departments around the world.
[55:07] Of course tear gas is not a completely non-lethal weapon. In fact, in situations like this, and in this event specifically, David, people die as a result of mass tear gas; especially vulnerable are children and the elderly. And a couple weeks ago, in June of 2018, Axon, the company that makes police equipment like Tasers and body cameras, partnered with drone maker DJI to sell commercial drones directly to police departments around the United States. And the footage from these drones will be uploaded to Axon's servers for artificial intelligence to analyze and enable autonomous surveillance capabilities in real time. And researchers published a paper this month demonstrating how real-time drone surveillance can be used to automatically detect violent individuals in public areas.
Robot Overlords
[55:57] In this paper they introduce a system that identifies each individual in a public space, tracks their movements, and flags the people it believes are violent based on their pose and the orientation of their limbs. So if a man walks up to another man and punches him, for example, the AI will notice that the first man has his arm extended toward the other's face and is in an aggressive pose, while the second man, I suppose, is falling backwards, and the AI will conclude from this information that the first man is violent. Of course, like many of these autonomous systems, there are serious limitations. The software has difficulty telling the difference between somebody punching someone and pushing them over, or people dancing, or me reaching over to brush something off your shoulder, Daniel, or many of the common ways that we touch each other in day-to-day life. It starts assuming that all contact and reaction between us is something violent, because the way this was trained was, well, in these situations, if people are punching each other or touching each other then there's probably a violent reason behind it. But a lot of human contact isn't violent, or only looks violent to those outside. And do we really need a device flying around calling the cops every time it thinks it sees conflict? Many fights are settled without any sort of police confrontation. They're friendly, or at least settled in a way where no police need to get involved, and if nobody wants to press charges there's no reason to involve law enforcement at all. And then, like our predictive policing, what are we going to see? Hundreds of thousands or tens of thousands of these drones constantly flying around a city, constantly refueling themselves, just to watch us all the time on the off chance that somebody punches somebody, so they can dispatch a police officer who rushes over and gets there 10 to 15 minutes later, well after whatever sort of conflict it thought it saw is long done? What's the point? But maybe I'm getting ahead of myself.
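To make that limitation concrete, here's a minimal sketch of the kind of rule a pose-based system leans on. This is not the paper's actual code; it assumes some off-the-shelf pose estimator has already handed us (x, y) keypoints for each person, and every function name and threshold here is invented for illustration.

```python
# Illustrative sketch only: flag an "aggressive pose" from 2D keypoints.
# Assumes an upstream pose estimator already produced (x, y) joint
# coordinates; the rule and thresholds below are made up for this example.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def looks_violent(person_a, person_b, reach_threshold=0.4):
    """Crude rule: A's wrist is extended well away from A's own shoulder
    and ends up close to B's head -> call it an aggressive pose."""
    torso_scale = distance(person_a["shoulder"], person_a["hip"])   # normalize by body size
    arm_extension = distance(person_a["wrist"], person_a["shoulder"])
    wrist_to_head = distance(person_a["wrist"], person_b["head"])
    extended = arm_extension > 0.9 * torso_scale
    near_head = wrist_to_head < reach_threshold * torso_scale
    return extended and near_head

# A punch and a friendly reach toward someone's face or collar can produce
# nearly identical keypoints, so both get flagged the same way.
puncher = {"wrist": (1.0, 1.5), "shoulder": (0.2, 1.5), "hip": (0.2, 0.8)}
other = {"head": (1.1, 1.6)}
print(looks_violent(puncher, other))  # True -- whether it was a punch or a pat
```

The geometry alone can't tell intent apart, which is exactly the problem.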
[57:43] Well, David, of course the first problem with technology like this, that tries to model human behavior and then autonomously step in to act on that behavior, is that, like you mentioned, it's not that great at distinguishing, say, a violent person from a person who's just interacting in a natural way but whose limbs happen to be extended towards another person. Maybe they're brushing something off their shoulder and all of a sudden the computer thinks, "Oh no, violent person! We need to intervene." But there's a much broader concern, and that's that machines will never be able to reasonably determine intent, and whether an act is justified or not, because they will always lack the understanding and context of human behavior, which is something that we ourselves have trouble with sometimes. Let's see an example.
[58:32] The Igbo are a native people of West Africa in what is now Nigeria. Before colonization by the British their society was unique. Although men and women had different economic roles, power was held in an egalitarian manner among men and women alike. Political power itself was diffuse, with no one institution or person having authority to issue commands. And force was a legitimate method that any individual or group could employ to protect their interests or decisions. Igbo women held tremendous power in the society through their solidarity and their various social roles, and they had a method for punishing men who committed crimes against society called sitting on a man. So if a man abused his wife, for instance, a group of women would convene at his house, they would yell at him, and they would shame him. And if he did not repent, they would drag him out and they would beat him up. [59:31] This is the process of sitting on a man. And this practice was an important function in Igbo society that kept the abuse of power in check and provided victims with protection and solidarity. Well, when the British colonizers and the missionaries arrived on the scene they brought with them western systems of patriarchy and they were blind to the political power and functions of women in Igbo society. Women were often ignored and left out of the political institutions that were imposed, and these women experienced a loss of power across the board within their own society. And when they naturally resisted in solidarity, by sitting on a man and directing their dissatisfaction at their colonizers, the British interpreted it as an uprising or a women's revolution, and they responded by slaughtering the Igbo women.
[1:00:24] And this of course is an analogy for what will happen to all of us if we allow computers to decide when we are stepping out of line. In the same way that the British came into a culture they did not understand, observed behavior that was outside their predetermined views of acceptable human behavior, and responded to deviations from that behavior with lethal force, AI-controlled drones with predetermined views of acceptable human behavior will do the same to us. And in the Igbo example these women lost their place within their own society. Their behavior was forced to conform to outsider expectations and their power was stripped from them. Will we allow these outsiders, these machines, these AI-controlled weapons, to do the same to all of us?
[1:01:14] I want to qualify something about how well we are training these AI with our culture.
[1:01:19] And I mean it is people training these AI. We have humans sitting down there, clicking on things, teaching it stuff. It's learning from us, so at the same time it's learning our culture. But the nature of machine learning is that once we've taught it, once we've created that initial data set that goes in and trains the neural nets, it's hard to update with new data. So our culture develops faster than these machines can be retrained. And maybe that's a weird thing to think about, because we perceive tech as advancing so fast, moving faster than any other part of our society and our culture and pushing society and culture along, but in reality culture advances blindingly fast. Fads come and go, trends come and disappear, and we are always thinking about new things. Especially with the Internet, ideas can come, explode into the public consciousness, and die within a matter of weeks. Trying to take these ideas and load them into some sort of training data that can then be fed to these machine learning algorithms, which need to be constantly updated and kept current with what is and what is not acceptable, is quite literally impossible. There's no way to do this at any sort of scale that actually ensures the technology, the ideas of these AI, these machine learning systems, can keep pace with what we human beings collectively agree on as right and wrong, as our culture. And so we might find ourselves very quickly running into the limits of our tech overlords and find ourselves with the fate of the Igbo women.
[1:02:47] And of course the other side of this is that the data we feed into these neural nets ends up averaging out. So as we feed in lots of different ideas and things, what comes out is this very vanilla, gray version of our culture, of our ideas, of our beliefs; it becomes the average of all of us. But averages can sometimes be extremely misleading. And what you think will fit everyone ends up fitting no one.
[1:03:08] That's right, David. In order for machine learning to work it has to take a large set of training data and figure out what is acceptable within the bounds of the data that it's given. And it leads to this kind of average understanding of what is acceptable in terms of behavior in this example.
[1:03:27] But like you said, averages, when they're applied to human beings and the way we function, don't really make much sense. And this is something that the US military discovered when it was trying to design airplane cockpits for the first time.
[1:03:42] In the early 20th century, when airplanes for military use were new, engineers were trying to figure out what was the perfect size to design a cockpit. And in order to figure that out they took measurements of every proportion imaginable that could be found on the male body. They compiled all this data and they took averages of every single proportion. The length of the thumb to the palm, the length of our arms, our general height, the length of our legs, the size of our head, and they took averages of all of these. And then, based on these averages they designed a cockpit. The goal of which was to conform to the average human being so that it would fit the largest number of candidates.
[1:04:22] And after they built these cockpits, what happened was pilots crashed so frequently it was alarming. Over 10 pilots could die in a day just trying to fly these airplanes, because they were so difficult to fly. Well, finally a statistician from Harvard came in to help analyze this data, and he realized that trying to design for the average human being doesn't make sense, because no one individual person is average. We all have deviations. We all have unique proportions. And the only way to fit a person to a cockpit is to do the reverse: fit the cockpit to the individual. And once they figured that out in the 1950s they started redesigning cockpits. In fact, I was surprised to learn, David, that before this time they didn't even have adjustable seats. It wasn't yet common sense that adjustable seats, adjustable joysticks, and adjustable pedals made any sense. And, in fact, this idea made its way to the automobile. That's why we have adjustable seats today, thankfully.
[1:05:25] But once they implemented this idea and stopped conforming the cockpit to the average human, and instead fit it to the individual within that cockpit, well, it suddenly became a lot easier to fly the airplane.
[1:05:37] Deaths went down, pilot performance dramatically improved, and the key takeaway from this is that averages can be bad. So trying to apply one basic idea of culture to a large group of people, trained off an averaged set of data, could end up making us all far worse off. And the technology doesn't exist right now for these neural nets, for the training data we feed in, to discriminate finely enough to tell everything apart, to capture culture in all its wide, constantly changing variety. We need to be careful with this technology, with autonomous technologies, especially when we give that technology the ability to take lives.
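If you want to see why designing for the average fails so badly, here's a toy simulation, not the original Air Force data: give a few thousand imaginary pilots ten independent, roughly bell-curved body measurements and count how many land close to the average on every single one of them at once. The numbers and tolerances are invented; only the shape of the result matters.

```python
# Toy simulation (invented numbers, not the original Air Force study):
# most people are near average on any one measurement, but almost nobody
# is near average on all of them at the same time.
import random

random.seed(0)
NUM_PILOTS = 4000
NUM_DIMENSIONS = 10      # height, arm length, leg length, and so on
TOLERANCE = 0.10         # "near average" = within 10% of the mean

def average_on_all(pilot):
    return all(abs(measurement - 1.0) <= TOLERANCE for measurement in pilot)

# Each measurement is drawn around a mean of 1.0 with some natural spread.
pilots = [[random.gauss(1.0, 0.12) for _ in range(NUM_DIMENSIONS)]
          for _ in range(NUM_PILOTS)]

fits = sum(average_on_all(p) for p in pilots)
print(f"{fits} of {NUM_PILOTS} simulated pilots are 'average' on all {NUM_DIMENSIONS} measurements")
# Roughly 60% of these pilots are 'average' on any single measurement,
# but only a fraction of a percent are 'average' on all ten -- so a
# cockpit built for the average pilot fits almost nobody.
```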
[1:06:17] In the British example, David, they needed women to conform to western roles for economic reasons. But access to autonomous weapons opens up a more financially expedient way to conform whole cities, places, and societies to economic standards and systems. Step out of line and the camera, the drone, the autonomously mounted machine gun on the corner of the street, well, they're going to let you know.
[1:06:47] What's that sound, Daniel?
[1:06:48] Sounds like another Robot Dystopian Future, number four.
[1:06:57] OK, I'm ready for this one. This is the final Robot Dystopian Future of this episode and everything from here on in this episode is just really highlighting the absurd finality of these autonomous weapon systems.
[1:07:12] So, I'm some sort of non-state actor. I'm an organization. I'm an eco-group. I'm an activist group. I'm a terrorist group. Or maybe I'm another tech billionaire. This time, you know what? I'm Larry Page, CEO of Alphabet, Google's parent company. And you know what? I've had it with Apple users. I see their iPhones everywhere. It's a constant reminder that not everyone is using Android. I'm sick of their privacy stance. All this stuff that Apple is doing is making it hard for Google to spy on their tech. And you know what? I'm done with this. This is outrageous. And I'm going to do something about this. The ultimate market move with my billions and billions of Google dollars. You know what I'm going to do?
[1:07:51] What are you going to do, Mr. Page?
[1:07:53] You know, Google's been working a lot on all sorts of autonomous technology, so I've got a lot of tech developed. We've got facial recognition, we've got conversations between AIs. So I'm going to take all this and I'm going to build a bunch of teeny tiny drones. Hundreds of thousands of them. Millions of them. And these teeny tiny little drones, well, they're all going to talk to each other. And they're all equipped with cameras. And these cameras, we train them with groups of humans identifying photos of people, looking for people who have iPhones. I've got little sensors on these that detect people who have iPhones, that recognize the unique signatures that iPhones give off. And these little drones can now tell what phone you have, whether it's in your pocket, whether it's out, whatever.
[1:08:35] So, now I've got a swarm of drones, small drones, these are like hand-sized things. And these massive armies of drones, they recognize people. And now I'm going to attach a little explosive charge to them. Nothing big, not much more than a bullet. But it's attached directly to the drone, and the drone has a detonator it triggers, and boom, you blow a little hole in something. Whether that's a wall, a window, or somebody's head. And it's time for the great Google rebellion! I'm sending out my army of drones to wipe out the iPhone and Apple users of the world. They're out. Tens of millions of drones are out there attacking people all over the world. And in a matter of hours I've wiped out a huge portion of people. The Google rebellion is over and only Android users stand... and I guess the random one or two guys that bought a Windows phone. They're OK, too.
[1:09:24] Bleewrerrlwaa.
[1:09:26] Sounds crazy, right?
[1:09:27] Yes, that sounds crazy.
[1:09:29] It does, but you know what? All this technology exists right now. Some AI researchers and autonomous systems developers have put together a video called "Slaughterbots" that we've linked on our website. If you haven't seen it yet you need to watch it. It creates a fictional world where terrorist non-state actors, and eventually even states themselves, have developed tiny drones, very much like the ones I just described theoretically developed by Google, that can fly out into the world, target people based on all sorts of data using facial recognition, and then use the explosive charge on the drone, well, to execute that person. It sounds crazy, but all this technology exists. Nobody has cobbled together all the pieces just yet, but there's nothing stopping them. And there's nothing stopping states from doing this. And there's nothing stopping small organizations, terrorists, activist groups, or larger organized military groups like ISIS from putting together an attack very much like this. And because of the low cost of these drones and the low tech required to build them, relatively speaking, you can basically create a weapon of mass destruction, a scalable weapon of mass destruction, for pennies on the dollar compared to any other sort of biological or nuclear agent.
[1:10:36] What's special about these weapons, especially deployed by state actors, is that I could theoretically take a plane, fly it over a city, and deploy all these micro-drones, something the US Air Force has already tested successfully. Dump hundreds of them, thousands of them, millions of them if the city's large enough, and they will fly down and target every single person living in the city. If you try and hide inside, some of these swarm drones will sacrifice themselves to punch through a wall, come in, and get you anyway. In a matter of hours you could completely clear out the population of a city with none of the usual side effects. There's no biological contamination. There's no disease left behind. There's no radiation. Even the mystical neutron bomb, which is supposed to be a nuclear weapon that gives off specific types of radiation that kill people more than they destroy infrastructure, doesn't have the kind of unique, targeted effect that a distributed swarm-drone slaughterbot attack could have. This is war where infrastructure is kept perfect and all the populations that might rebel against you are gone in a matter of hours. This is what we really mean when we talk about the third age of war, about a revolution in how we fight wars and how we think about them. Because if I'm fighting an aggressive war this is the perfect solution. I'm not risking lives, I'm not risking infrastructure. I drop this relatively cheap batch of drones over somewhere, they do their job, a large portion of the population, not all of them, is wiped out, and my forces can move in with little to zero resistance and immediately occupy the city. I can start moving civilians in right away. I can move troops in, and everything works perfectly. If that doesn't terrify you about the future of war, and again this technology already exists, well, I don't know what would.
[1:12:17] We hear a lot, in general, David, that our soldiers risk their lives to save the lives of us back home. And I want to flesh out a point we made earlier in the show and highlight how autonomous warfare reveals the absurdity of war and the paradox of that assumption. Like we said, a human is justified in defending themselves with deadly force precisely because their life is in jeopardy. But replacing a human with a robot changes the game completely, because now there is no risk of losing life at all. That is, until that robot fires its weapons. Which means that if autonomous weapons have a place in our wars, then our wars are not about saving lives at all, and perhaps never were. And if war is not about saving lives, it must be about taking lives for material gain. Because that's exactly what a robot killing a human is: destruction of life to protect inanimate material wealth. The paradox, then, is that if you replaced all front line soldiers with robots, the robots no longer need weapons at all, unless of course your objective is related to material wealth. The most obvious objection to that would be that war is about protecting people, even if one country is in a foreign place; maybe it's because they are trying to protect the native citizens from a foreign aggressor. But that's really a larger discussion about the nature of our wars in the first place, and I think it doesn't take a lot of examination to challenge. All this becomes even clearer when we look at the logical conclusion of all these autonomous weapons, which is robot versus robot warfare. At that point, which is something that military experts and think tanks assume will be an eventual reality, you have to ask what the point of war really is in the first place. If you're not even killing people then it's very obviously about taking and protecting resources, which perhaps challenges our traditional justifications for war in the first place. And honestly, David, if we're just facing a world where robots are fighting robots, we could save a lot of heartache, we could save a lot of pain, if two countries would just show up in a field somewhere and see who could burn the biggest pile of money, because that's what we're doing at that point.
[1:14:32] And speaking of the absurdity that these robots point out, I just want to quickly insert, at the tail end of the show, the story of these robotic police officers. Maybe you've seen them. They look like giant, dumb trash cans on wheels wandering around malls, mostly out in California. And people have reacted really poorly to these automated security officers. They try to make them look friendly. Maybe that's their mistake. People are knocking them over, pushing them into fountains, and, like, kicking them when the robot's not looking. And these robots, they store data, they track faces, they scan license plates, they do all sorts of things, and the pitch is that they solve crime, that they reduce crime around what are typically very high-end malls and office buildings anyway. I don't know what crime they're talking about, unless it's harassing homeless people, which is the vast majority of what these robots, and let's be honest, security officers, actually do.
[1:15:26] But people don't respect these robots. They say, why do we need this robot guarding this stuff? Get this out of here. And we have a long tradition in media of saying, fuck these police robots, in things like RoboCop, where if you watch the movie wrong you're like, yeah, RoboCop's awesome! But let's be honest, it's a story about how these automated police systems are terrible. Minority Report, The Matrix, I mean that whole movie is basically about running away from robot police. So when police are robots, when you take the human element out of it, everyone's suddenly like, yeah, fuck the police. These things suck. Why are they here? You put a human in that stupid looking uniform and suddenly people are like, respect the police. Do what they say. These are the authorities, you have to listen to what they're doing. I wonder what the disconnect there is. And as we replace our troops with robots, are we going to see the same disrespect for soldiers? Are we going to lose the hero worship of war and the idolatry of people "bravely" risking their lives out there defending our economic prospects abroad? Maybe. I don't know. Time will tell. It's food for thought.
[1:16:34] Time will tell.
[1:16:35] But, Daniel, it's that time of the show again. What can we do about these lethal autonomous weapon systems and the future of autonomous warfare? Is there anything we, as citizens, can do?
[1:16:46] Well, David, for all the discussion, the back and forth about whether bans on autonomous weapons are useful, necessary, or good, I think, again, one of the major pushbacks to these bans is that the technology is going to be developed regardless. But like we see with nuclear weapons, just because a technology exists does not mean that everyone has access to it. And as affordable as drones may be, the bottom line is that it's still not going to be easy to acquire autonomous drones or other military equipment on a scale that reaches weapons of mass destruction unless there are factories mass producing these products. And so I do think there is a lot of value in supporting a ban on the development of lethal autonomous weapon systems, because imagine if all the major countries of the world came together and agreed, OK, we're not going to allow any companies to mass produce lethal micro-drones. That would have a dramatic effect on the ability of people, terrorists, non-state actors, and state actors themselves to stockpile this technology that can be cheaply deployed to wipe out large segments of populations. I think that's a better direction than the wild west approach we have right now.
[1:18:03] I agree. That's a very hopeful, positive way of looking at it. But, you know what? The fact of the matter is I can go out and buy micro-drones at the hobby store, at my local electronics store, right now. And sure, they're not networked with cameras and there's no C4 on them. But all those little bits and pieces for building those drones have been commercialized, and you can buy them for, in some cases, $20, $40. It's as cheap as that. You're asking, with these bans, to take out this commercial market, and that makes them that much more unlikely to happen. So maybe we, as consumers, can choose not to purchase these things and kill that market ourselves. Of course, that's unlikely to happen when they're seen as just toys. But these toys can become tools, and they are all ultimately weapons.
Beyond that, though, I want to shout out the Google employees who revolted, at least initially, against the Project Maven program, the drone assistance that Google was giving to the Department of Defense, and were ultimately able to overturn the business prospects of these large investments from the military and say, you know what? We don't want Google to be like this. And if we continue down this path then we will resign. Or, like the several people who did resign, more respect to them. Pushing back on Google from the inside, with the very labor that would ultimately be used to create these weapons, really shows us that the development of these technologies is in the hands of individuals and of workers. We can talk all day long about banning things on an international level, about banning tech companies from developing them, but ultimately all these things are designed by individual engineers, by software devs, by people, by labor. And the people working on these programs can just stop and say, you know what? This is wrong. And in the example of Google, they did. And it worked. The development of this large program, at a very large and economically important company, was cut off. Stopped. And of course these contracts will be awarded to someone else. But if all these other companies refuse to work on these devices, saying, we don't want to live in a world where robots can be allowed to pick who dies, well, then we won't have those technologies. They won't exist. And so the hands of our future, the ability to resist autonomous weaponry, lie with the people who would be called on to make that weaponry possible.
[1:20:17] And more conceptually, going back to that point about the absurdity of war in the first place: if war is ultimately about securing resources and economic interests, then perhaps it would be best for us to support, in any way we can, an economy that moves away from extraction and exclusion. Because if we here in our respective countries had economies that were sustainable, based on local resources, the idea of sending military equipment around the world to secure economic interests just wouldn't even factor into the equation. The necessity for an expanded and aggressive military force comes about as a result of economic insecurity. And the more we can do to increase our economic security by placing a value on local, sustainable resources, the better off we'll be from a national security standpoint, from a local security standpoint, and from a global security standpoint.
[1:21:15] A lot to think about and a lot to be scared of but that's how we roll here on Ashes Ashes. If you want to learn more about any of these topics, if you want to see that Slaughterbot video, or read papers on all these subjects, you can find that and a full transcript of this episode on our website at ashesashes.org
[1:21:34] A lot of time and research goes into making these shows possible and we will never use ads to support this show. Nor will we ever purchase ads, as effective as that might be, to crowd your newsfeeds. So if you like this show and you would like us to keep going, you, our listener, can support us by giving us a review and recommending us to a friend. Also, we have an email address, it's contact AT ashesashes DOT o-r-g, and we encourage you to send us your thoughts, positive or negative. We'll read them and we appreciate them.
[1:22:07] You can also find us on your favorite social network at AshesAshesCast.
[1:22:12] Next week we're going to be taking a wet and wild ride.
[1:22:16] But until then, this is Ashes Ashes.
[1:22:18] Bye.
[1:22:19] Buh bye.