The Evolution of War – Time Machine 2019

I want to thank you all for being here for our panel on the evolution of war. For the next 30 minutes we're going to be answering questions about the role of autonomous weapons and drones in the future of warfare: how they're going to interact with human beings on the battlefield, what challenges the Pentagon and the military are going to face, what the ethical dilemmas of using AI in the future of war are, and how war is evolving.

As you've mentioned, we have here Secretary Work, who served as the Deputy Defense Secretary until January of this year. He was appointed by President Obama, he spent 27 years in the US Marines, and he is also a senior fellow at the Center for a New American Security. He serves on the advisory board for SparkCognition, and he now has his own company, TeamWork, which is looking at the issue of the future of war; he believes you can transform government through data science.

And Admiral Richardson, thank you for being with us. Admiral Richardson was the Chief of Naval Operations, the 31st CNO. He made history; he retired in August after 37 years of service. Prior to serving as CNO, he was the director of the Navy's nuclear propulsion program, and he's the first head of the Navy's nuclear propulsion program to become CNO. So thank you both for being with us here today.

I want to read to you a few headlines that I saw in the news yesterday, and they have to do with AI. One says, "China Selling Terrifying Blowfish Killer Drones in the Middle East." Another: "Pentagon's AI Problem is Dirty Data." And: "Army Official: Build AI Weapons First, Then Design Safety." Before we start, I'd just like to ask a general question. How is war going to change as a result of AI? What do you see as the arc of that change, and what are the challenges that the Pentagon faces?
Well, in 2014–2015 the Department of Defense decided that it needed to try to address the eroding technological superiority that the US Joint Force had enjoyed since the end of the Cold War. One of the ways it did so was to go to a variety of different technologists and say, "How would you suggest we get out of this problem?" And the Defense Science Board said, look, the single most important technology that the department has to master is artificial intelligence. But what people forget is that the DSB went on to say it's all about AI-enabled autonomy: autonomous weapons and autonomous operations.

The Chinese call this a shift from "informatized warfare" to "intelligentized warfare," and what you're going to see in the future is ubiquitous digitization, machine-to-machine communications, autonomy at rest—where we're using decision support systems, predictive maintenance systems, all sorts of software that helps humans make better decisions—and then autonomy in motion: different unmanned platforms, different unmanned weapon systems, et cetera. The expectation is that this is going to result in what the department refers to as algorithmic warfare, and what Amir Husain and John Allen have referred to as hyperwar. It's going to result in operations that happen at such a fast pace, at machine speeds, that it's going to be extremely violent and extremely difficult to keep up with events unless you are relying upon autonomous, AI-enabled systems. So we are in the middle of what I would consider to be a potential revolution in war, and the department is trying to understand what will happen when all of these systems really hit.

I would say just the same thing. One way to think about autonomy in motion and autonomy at rest is to put it in the context of the OODA loop, right—John Boyd's: observe, orient, decide, act. To me it all comes down to making better decisions quicker than the adversary. With autonomy in motion—computing at the edge—what we're really doing is we're moving
those decisions out and making them automatic in sensors and payloads. But there's also going to be a core role for AI at rest, to help people manage the tsunami of information that's going to be coming into command centers all the way down, and how to make sense of all of that.

So back to Blowfish killer drones: does the US have these kinds of drones? What are the coolest AI-enabled weapon systems that are either currently being used, that maybe people don't know about, or that are being developed—in the Navy, for instance, or elsewhere in the Pentagon?

This is an important part of the debate. There's a large movement to ban what are called lethal autonomous weapon systems, but to me there are two general types of weapons that the department, and all militaries, use. There are unguided weapons, which are inherently inaccurate—most of them miss their targets; they're indiscriminate by nature. And then there are guided weapons, and these weapons actively home in on their target by changing their trajectory, and they're far, far more accurate. Lethal autonomous weapon systems, which are what people are arguing about, are an entirely new class of weapons that do not exist. Generally they aren't described this way, but if you listen to what people talk about, they're independent from human control, unsupervised in their battlefield operations, and independently self-targetable. Well, I can guarantee you the Department of Defense does not want weapons like this. No commander is going to just say, hey, let me turn on this weapon and let the weapon decide what it's going to do, when it's going to do it, who it's going to kill, how it's going to kill. So it's important to realize there are no weapons like this. They're an entirely new class of weapons; generally they would require what the term of art calls artificial general intelligence—intelligence that's much closer to human cognition—and it's not even certain that we're going to be able to get there, and even if we do, it's probably decades away. But to answer
your question, Jennifer: we have guided munitions with quite advanced autonomous functionalities. We have weapons, for example, where a human commander would say, there's a group of tanks at 150 nautical miles; I know those tanks are enemy tanks; I want to kill those tanks; and I'm going to fire a guided weapon to a point above the group and delegate to the weapon which specific target it picks. The commander has already said those are the tanks I want to kill; he'll shoot the weapon, and we have weapons that decide, that is the tank I'm going to hit. That's as far as we have gone, and there are some people who think even that's wrong—but we've been using them for over four decades, so I think it's a pretty weak argument.

So there are no killer robots, in other words? That's a myth, you would suggest—it's not true?

The only killer robot I know about is in Terminator: Dark Fate. That's generally what people are worried about: a machine that's just an implacable killer, completely beyond the control of a human commander.

But you're saying that is not being developed right now?

Not in the West. Now, I haven't seen what the Blowfish drone does. I have read the same articles that you have described—evidently they're autonomous and can decide what targets to attack and how to attack them—but I don't know what their modalities are, and I don't know exactly what a commander would do, so I'm a little reluctant to say, oh my gosh, the Chinese have gone into this area. I'm just not certain.

And you can appreciate right away that something like that is very susceptible to deception, very susceptible to being taken off track, and so we're talking about some unbelievably sophisticated AI to be able to see through all of that.

So tell me more about some of the Navy's AI-enabled weapon systems that you're seeing. What are some of the coolest things?

We're moving towards autonomy, and so there's a whole family of unmanned systems
both in the air, on the surface of the water, and then under the water, and we're using AI to integrate them. In fact, we worked pretty closely together on this: an unmanned aircraft that tanks other aircraft—refuels them—and integrating it into the manned air wing, and how to do that. On the surface, we're working in areas where we can extend the reach of manned ships and also do kinds of adaptive deception, those sorts of things. And then there's a family of undersea vehicles.

And they're actually operating now?

They're operating now, yeah.

But Jennifer, if I could—this break between guided munitions with autonomous functionalities and truly autonomous weapon systems is important. The entire debate is fraught with ethical, moral, and legal issues, and the department is doing everything it can to understand the objections from those who think these are unethical, or immoral, or illegal under the laws of armed conflict or international humanitarian law. The Defense Innovation Board just came out with a set of AI ethical principles. The one thing I just want everyone to understand is that the department wants to have more autonomy, and it will have more autonomy, but as it goes along this path it's trying to do everything it can to be as ethical—and obviously as legal—as possible.

So you wouldn't be an advocate for building a weapon and then worrying about safety later?

No. I don't know who that is—whoever that Army official is probably isn't going to have a job tomorrow.

Maybe you could help us understand how AI is already being used by the warfighter to minimize civilian casualties, because I think there is this misunderstanding that if you have autonomous weapons, it's increasing civilian casualties. How is it actually being used, or how can it be used, to minimize those casualties?

I'll tell you one, and I think General Thomas might have touched on this yesterday: you can use AI to sift through a very complicated battlespace, if you will, and find your
target, down to an individual if you need to, or a specific thing, and then with a precision munition get right to that thing. And so, consistent with the discussion so far, a tremendous amount of work goes into doing exactly that analysis, so that the collateral damage—the civilian casualties—is driven to the absolute minimum before we launch any of these attacks.

To me this is a natural evolution from unguided weapons, which are inherently indiscriminate and cause a lot of collateral damage. All you've got to do is look at the pictures of the cities around a factory in World War II: the city is as bombed out as the factory, because most of the unguided weapons missed their targets. The United States has shifted and uses guided weapons predominantly in its operations, from long-range attack to short-range attack, and that has resulted in much less collateral damage and far fewer non-combatants harmed. In fact, in 2018 the United Nations looked at the way the United States targets guided munitions—they have to be precisely targeted to utilize their accuracy. So there's the targeting process, the accuracy of the weapons; if we have a mistake which results in collateral damage or, God forbid, the death of a noncombatant, there's an immediate investigation, and there are lawyers at every single step who say yes, the shot you're going to take is consistent with the laws of war. The United Nations looked at the way the United States does this and said, look, this is pretty much best practice. As John said, we believe the move to weapons with more autonomous functionalities, with very high-end AI systems in them, is going to be even more discrete, cause even less collateral damage, and result in even fewer non-combatant deaths. So to anyone who thinks that this is an inhumane way to go, I would just say, ask the United Nations: they've said the shift to guided munitions has resulted in a much more humane outcome on the battlefield, and the Department of Defense is trying to make the
case that we would make another step.

So do you think that those Google workers who signed the petition and said they wouldn't work with DoD on the kinds of software programs that would help these kinds of autonomous weapons be smarter—are they misguided? Has the Pentagon been able to convince tech companies—and we're here in Austin—have they been able to convince tech companies to work with them?

I would say one thing: if you go back and adopt the stance that we're in a competition—the great power competition—we as a nation have never prevailed in a great power competition without a close relationship between the department and academia: the universities, the research institutes, and those sorts of things. I think in the last 20 years—Secretary Work, you can check me if I've got this wrong—there's been a little bit of a drifting apart, right? And so we just don't understand each other, perhaps, as well as we used to and as well as we should if we're really going to be competitive in this competition going forward. And then of course all of those students graduate and join these tech companies, right—that's where the real talent goes—so that familiarity can extend into industry and the tech companies as well. I've been advocating for a more habitual, closer relationship, so that we get to know each other better and we can stop the sort of stereotyping on either side.

And in terms of great power competition—Russia, China—what are Russia and China doing in terms of autonomous weapon systems, in terms of looking at future battlefields, that the US is not doing? Is the US
still a world leader? Is it possibly going to slip back and not be a world leader in these areas if there isn't more collaboration with the tech community?

Yes—I'm on the National Security Commission on Artificial Intelligence, which was empowered by Congress to take a look at this very issue. China has probably the most ambitious national AI strategy, because they believe it is a means by which they will leapfrog the United States, both in economic output and in military power. They want to surpass the United States in AI technologies by 2025, and they want to be the world leader by 2030. The Russians have a goal to have many, many more robotic units. To me it's a pretty simple problem for Russia: their demographics are terrible; they're running out of people to enlist in their armed forces, so they're much more inclined to use robotics, and they have actually been testing robotics in Syria. So both China and Russia have said, this is a competition; we want to go directly against the United States; this is a way we can level the playing field if we ever did, God forbid, come to blows. And this is going to require a national response.

For the Chinese, their Sputnik moment was in 2016, when Lee Sedol was defeated by AlphaGo in the game of Go. It shook up the entire country in China.

Explain that a little bit.

Go is very, very unlike chess. I can't remember how many pieces—I think 180 pieces, or there's like 200, I'm making these numbers up—but you've got all these pieces on a board, you're trying to make it so that your opponent can't move, and there are millions and millions and millions of different ways you could actually play. AlphaGo was an AI that played against a grandmaster in Go, and it was widely accepted that, look, this is just too complicated a game for an AI to beat a human. And it beat the human four games to one—the human won one. For the Chinese, Go is central to their strategic culture and thinking in warfare, and they said, whoa, we're going to have to do something about this. It
took them much further in that direction than the United States is right now in command and control systems. So their autonomy at rest really is focused on command and control systems, and in autonomy in motion, as you said, they're developing an awful lot of unmanned systems and unmanned weapons.

And I like the way you also alluded to the fact that AI applies certainly to the military dimension of national power, but also across the board—the economic dimension of national power. I mean, the competition is going to manifest itself along all of those things: information, diplomacy, economics, and also military. It's going to be decisive across every dimension of national power in this competition. I think there's both risk and opportunity there: we can use AI to manage our way through this competition without getting into conflict, which, as Secretary Work says, is going to be violent on a scale we haven't seen before.

What were some of the surprising moments that you saw as CNO, or when you were based out in the Pacific, in terms of advances that China or Russia were making in AI?

Just the scale of it all. It's difficult to get specific insights into their progress in AI, but—I've obviously focused most on the Navy—they've completely rebuilt their navy in a matter of 10 years, right? And it's sort of natural: we had this moment as a nation at the end of the 1800s where, if we were going to continue to prosper, we had to go offshore—overseas markets, access, sea lanes, all of that—and we built a navy to step up to that strategic challenge. I think China in many ways is getting into the maritime domain to continue to prosper as a nation, and so they're seeing their military adapt to that as well.

Given your nuclear background, let's talk about AI for nukes. Could we see a situation where humans would be taken out of nuclear decision-making? Would we ever have a moment where you've gotten rid of the nuclear football, for instance?

Right—I was
nuclear propulsion.

Okay, fair enough.

But I don't think we would want that. This is the one area where we just can't, and there have been a number of books on close calls and those sorts of things, where a person stepped in at the last minute and made the right call.

Yeah. I think the whole debate on lethal autonomous weapon systems generally focuses in on the weapon, not weapon systems. I tend to think of these as lethal intelligent machines, and the autonomy-at-rest part—command and control systems that can make a decision for a pre-emptive or retaliatory attack without human intervention—that would lead to an extraordinary level of crisis instability. It would increase the chances of an inadvertent war, and possibly an inadvertent nuclear exchange, and I believe the United States should do everything it can to prevent nations, especially those with nuclear weapons, from going after those types of machines. Now, there's a much, much different debate to have over the autonomous weapons. But to answer your question: I know of no talk inside the Department of Defense that says we would automate a retaliatory or a pre-emptive weapon system without a human either in or on the loop.

Jennifer, we're also learning not only a tremendous amount about artificial intelligence and how it can aid decision-making, but also, on the human side, we're learning a lot about how our brains work and the biases that are inherent in them. So the partnering of artificial intelligence with this new knowledge of the strengths and vulnerabilities of human intelligence and decision-making—I think that is really fertile ground for coming up with a system that partners the two and makes it one plus one equals three.

This is a really important point, because the department is not trying to go for AI that is always perfect. The department recognizes that AI will make mistakes, like humans do. The best
picture I've ever seen is that one picture from the game between the Saints and the Rams: the referee is actually looking at the collision of the defensive back with the receiver, and he made what was clearly the wrong call. And we don't say, my God, how could that have—

Well, we do say that.

Yes—people know which side you're on. But the way the Department thinks of autonomy is: it's the programmed ability of a machine to make decisions faster than, as well as, or better than a human.

How do you take the bias out?

It's going to take a lot of testing. You're going to have to go through a verification and validation regime. Can you take the bias out? We're learning. I think that's one of the things DARPA spends a lot of money on, trying to get explainable AI, et cetera. But on this point of "as well as humans"—this is what happened in the intelligence community in 2015: for the first time, computers could identify an object in a scene as well as a human analyst. Generally these numbers are close but not precise: a human was 95 percent accurate, and the computer vision was 95.4 percent accurate. So it wasn't like it was a slam dunk; it was as good as an analyst. And that's when the intelligence community started to move towards computer vision.

Have they shifted over completely?

Not completely, but they're well, well along. So like I said, you're never going to get perfect decision-making in war, whether it's a man or a machine. What you try to do is have a command and control system that tries to ameliorate bad decisions.

And to that point—obviously we humans have biases and vulnerabilities as well. There have been two Nobel prizes in behavioral economics for trying to explain why we make decisions that are not in our best interests. And so that's where the teaming can come into effect, where perhaps an AI first step can serve to check some of those biases. In some of these tournaments—you mentioned chess and Go—obviously
computers beat the chess grandmasters some time ago, well before AlphaGo. So what do you do—is it over? Did we stop playing chess tournaments? The answer was no, but some of the tournaments have been played with teams of grandmasters and computers, and the winners are not necessarily the team with the best grandmasters and the best computers—it's the teaming effect that's most effective.

The integration—yeah, exactly. And so this, I think, is a really rich area.

We only have a minute left, but I just wondered if each of you could leave the audience with something to think about on this topic, or an action to take.

I wasn't here yesterday, but I understand Tony Thomas said, wow, he thought the department was behind and we didn't go fast enough. I totally agree with him. I consider us to have lost two years. We had a strategy—the National Defense Strategy listed advanced autonomous systems, artificial intelligence, and machine learning as one of the top key priorities—but it took so long to stand up the Joint AI Center, and the Joint AI Center was put under the CIO rather than the Under Secretary of Defense for Research and Engineering. You can debate why that was, but the bottom line is there was no clear signal from the top that we were going to go after an AI-enabled future, that the programs had better reflect it, and that we would shift money that way. That is changing now: Secretary Esper announced in his confirmation hearing, as well as at the National Security Commission on AI, that AI was his number one R&D priority. Money is now being shifted into the JAIC, so I anticipate that you'll see an acceleration of departmental efforts, which I think is going to be really, really good.

I agree 100 percent with what Secretary Work said—there is tremendous opportunity for private industry to help the department get moving in this area at the speed that we need to move. You know, the government is a big flywheel: it takes a while to get spinning, but once it spins
it's pretty effective. I would just encourage everybody to be bold: come to the discussion with solutions, come ready to go, and I think the climate is such that they'll invest in those solutions.

Great. Thank you very much for being with us here today, thank you to our audience, and we'll be back in a moment. Thank you.
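The scale the panelists gesture at when they hedge on Go's numbers can be pinned down with simple arithmetic: a standard Go board is a 19×19 grid of 361 points, each of which is empty, black, or white, so 3^361 (roughly 10^172) is an upper bound on board configurations. The count of strictly legal positions is smaller but still on the order of 10^170, dwarfing chess. A quick sketch of that arithmetic:

```python
# Upper bound on Go board configurations: a 19x19 board has 361
# points, and each point can be empty, black, or white.
points = 19 * 19
upper_bound = 3 ** points

print(points)                 # 361
print(len(str(upper_bound)))  # 173 decimal digits, i.e. ~1.7e172
```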
