More Intelligent Tomorrow: a DataRobot Podcast

How the United States Will Shape the Future of AI - Robert Work

February 01, 2022 · More Intelligent Tomorrow - Robert "Bob" Work · Season 2, Episode 2

In 2018, the US government established the National Security Commission on Artificial Intelligence. Robert (Bob) Work, former US Deputy Secretary of Defense, is one of 15 commissioners behind a 750-page report outlining the four priority areas in need of attention for the "advance … of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."


In this episode of More Intelligent Tomorrow, Chief AI Evangelist Ben Taylor sits down with Bob Work to discuss the findings of the commission, as well as the broader considerations for AI adoption and innovation globally.

Bob Work (00:00):
This competition is much more about values. And I think, therefore, it should be much more of interest to our citizens, in that AI is at the center of a whole bunch of emerging technologies which are going to change life as we know it. I just look at the enormous number of things that AI is going to be able to help people with, and I can't help but be optimistic.

AIVOv2 (00:29):
Welcome to More Intelligent Tomorrow, a podcast about our emerging AI-driven world: critical conversations about tomorrow's technology today. On today's episode, Ben Taylor sits down with former US Deputy Secretary of Defense, Robert Work.

Ben Taylor (00:53):
Bob Work, I'm so excited to have you here. Today, we're going to be talking about the 750-page document that you helped work on for the National Security Commission on AI. Very excited for you to be here today.

Bob Work (01:05):
Thanks for having me, Ben. I was just one of 15 commissioners. So, if I mischaracterize the report in any way, that's on me, but I will try to faithfully explain what the Commission concluded. I also come from the Department of Defense, and I will often say "we" because I sometimes think of myself as still being in the Department of Defense. But I'm not speaking for the Department of Defense; I'm speaking for the Commission.

Ben Taylor (01:35):
That makes sense. So, Bob, one of the things that's very clear in this report is this goal, because the ideas aren't useful if there aren't goals and timelines, right? So, there's this goal for 2025 for us to be ready with the Department of Defense and our intelligence agencies. I know it's a lot of pressure to answer this in a short amount of time, because we really need to dive into the report. What are the major things that we have to have ready to fulfill the vision of the Commission with this report by 2025?

Bob Work (02:03):
The Commission reached two overarching judgments. The first is that the United States is not organized or resourced to win a technology competition against a committed competitor like China, nor is it prepared to defend against the AI-enabled threats that we see. So, the second judgment is, we have to get ready by 2025. That's right around the corner. So, we say we need to be AI-ready. And there are four priority areas for us to do that. The first is to establish a national plan. We have to have top-down leadership, like we saw in the Manhattan Project to develop an atom bomb, or the Space Race where we were going to the moon. And so, we recommend having a Technology Competitiveness Council at the White House, which would take a whole-of-government approach, set up the strategy for the government, and then implement the strategy.

Bob Work (03:05):
Second, as we will talk about, I'm sure, talent is the determining factor in this competition, and we have a huge talent deficit, especially in the US government. So, we have to build these new digital pipelines, expand existing programs, cultivate AI talent nationwide, and ensure that the world's best technologists come and stay in the US. The third thing is we have an advantage in hardware now. But we're too dependent on semiconductor manufacturing in East Asia, and Taiwan in particular. And so, we think we have to revitalize US cutting-edge semiconductor fabrication capabilities as a national goal. This will be expensive. A fabrication facility might cost $40 billion. We won't be able to rely upon the commercial sector to do it. It's going to require government support.

Bob Work (04:01):
And then, the fourth thing is innovation. AI research is going to be very expensive. We think we should be spending $40 billion a year over the next five years to cover all of the AI R&D for defense and non-defense research. So, leadership, talent, hardware, and innovation. Those are the four priority areas for us to get AI ready by 2025.

Ben Taylor (04:29):
I like the hardware innovation because I'm a big high-performance computing nerd and I love companies like NVIDIA, where they just continue to build these screaming systems to enable our algorithms.

Bob Work (04:39):
Yep.

Ben Taylor (04:40):
How is this different than the Space Race and these other iconic moments in history where the US has had to rally? This is a very different situation. I'd love to pull on that thread. Why is this different than the Space Race?

Bob Work (04:54):
Yeah, the Space Race, when Russia or the Soviet Union surprised us with Sputnik, it really was a shock to the system in America. We were saying, "Oh my goodness, we're behind in a technological race with the Soviet Union with all sorts of implications for national security." But in the end, the Space Race, in my view was really about national prestige. It literally became a race to who was going to get to the moon first. And the side or the competitor that could get to the moon first could claim technological superiority. This competition is much more about values.

Bob Work (05:36):
And I think, therefore, it should be much more of interest to our citizens, in that AI is at the center of a whole bunch of emerging technologies which are going to change life as we know it. I mean, it's going to change our social life, it's going to change our economy, it's going to change the way wars are fought; it's going to have these enormous implications. It includes quantum science, it includes 5G, for example, it includes biotechnology, and synthetic biology, and advanced manufacturing. All of these things. AI is kind of at the center because, we believe, whenever you're working on a problem, you'll start with AI. AI will help you kind of bound the problem that you're trying to solve, and it will be central to everything.

Bob Work (06:31):
All of these technologies will be deployed around the world on platforms, just like Huawei was deploying its 5G platform. And these platforms reflect the values of the governments that deploy them. So, for example, we know the way China approaches AI. They see it as a means by which they can monitor their population, suppress dissent, surveil minorities, and make sure that they stay in line. They don't care about personal privacy; they don't care about civil liberties. And when you take a look at the Huawei platform, it was designed so that any of the data it was collecting in other countries would be sent back to China, whether or not the individuals knew about it or wanted this information to be sent to China, for whatever reason China was going to use it.

Bob Work (07:30):
The United States believes very strongly in responsible AI: AI that preserves democratic values, protects privacy, protects civil liberties. And our platforms are hopefully going to reflect those values. So, whoever wins this technological competition matters: no American should want to live in a world 30 years from now in which all of these technological platforms were created and are controlled by authoritarian regimes, who will use them for their own aims. So, this is a much different competition, in my view. This is really a competition over values in the future.

Ben Taylor (08:18):
I definitely agree. And one of those items that caught my attention was the Chinese social credit score. So, if you and I are close friends, and if I say something online that sounds antagonistic towards the government, you have an incentive now to distance yourself from me, because I've impacted your social score. There have been these Black Mirror examples that we think about on Netflix, and then we see them in the wild in China.

Bob Work (08:41):
The way they're going after traffic with AI is really quite striking. I watched a show on VICE, and they sent a team to China. And they showed this one picture, for example, of a Chinese person walking across the street outside of the crosswalk. And within 15 seconds, the speaker says, "Citizen Smith, you are walking outside of the crosswalk. You are not a very good citizen." And you're just sitting there going, the facial recognition that they're using is just unbelievable: gait recognition, all sorts of stuff.

Bob Work (09:20):
Everything now is facial recognition. For you to get into a public bathroom, you're first scanned, and it says, "Okay, you're okay." I don't know what would happen if you have a bad social credit score, if they would say, "No, you can't use the bathroom. You're on your own." But it's just kind of scary when you see the way they're using it. They have so many cameras around the country. And they are without a doubt the best in the world in facial recognition. They're using it to surveil their population in a way that I don't think anybody in the West would tolerate.

Ben Taylor (09:52):
And this reminds me, we've seen this happen before: when you export scale to China, innovation will come, right? So, I'm thinking of semiconductors or some of these biopharmaceuticals. The scale that they push is so large compared to some of the manufacturing facilities that we have in the US. Is that kind of what we're seeing on the AI front? The scale that they've taken with their AI facial recognition systems, the US is orders of magnitude behind. And so, China obviously leads on innovation because of that, for that particular vertical.

Bob Work (10:23):
Yeah, Ben, you're getting to a point where we were constantly being asked by members of Congress and by other people: who is ahead in this competition? And so, we spent some time thinking about it. It's very difficult. In the Cold War, we'd send a satellite over the Soviet Union and we'd count the number of missile silos in the ground. And we would say, "Wow, they have 1,500 missile silos." And so, we had a good idea of how many missiles they had. This competition is all in software. And it's very difficult to say, "Okay, is that software better than our software? How capable is it?"

Bob Work (11:00):
So, AI is not a singular technology. It's what we refer to as a stack, and the stack includes six things. It starts with data; that's very important. Then the hardware, these are the chips that you mentioned, upon which the algorithms run. And then the algorithms are turned into applications, such as facial recognition or natural language processing, or something like that. Then there's the talent that is required to do all of that. And then, integration of the data, the hardware, the algorithms, the applications, and the talent. That's how you really make forward progress.

Bob Work (11:46):
So, we said, "Look, China is clearly ahead in data." Why? Because they don't care where it comes from. They don't care if it violates somebody's privacy. The example we would use is, if China said, "Tomorrow, we are going to make an algorithm to do a health assessment of all of our citizens," they would just gather the data. They wouldn't have HIPAA rules that say, "Look, you have to go to every single individual and get permission to use that data." So, they're definitely ahead in data. We think they're definitely ahead in applications at scale. They're very good at it. So, once they get their hands on facial recognition, they deploy it at scale, and they're constantly improving it.

Bob Work (12:33):
So, in terms of applications at scale, we think China is in the lead. We also think China is in the lead in integration. Not because they're inherently better at integration than the United States, but because they have a national strategy. You mentioned AlphaGo. AlphaGo defeated Lee Sedol in the game of Go, an ancient Chinese game that is very, very important to their culture. And, quite frankly, no one thought that artificial intelligence was going to be able to win in the game of Go against a human for a decade or so.

Bob Work (13:11):
But in 2016, AlphaGo defeated Lee Sedol four games to one. And it really shook up the Chinese, and they said, "Look, we have got to win in this competition." It was their Sputnik moment. And so, they created a national strategy. They said, "We would like to catch up to the United States in AI technologies by 2020. We'd like to surpass the United States by 2025. And we would like to be the number one world leader in AI by 2030." They set down goals, they set down intermediate objectives, and they poured a lot of money into making this happen. So, their integration strategy dwarfs what we're doing right now in the United States.

Bob Work (14:02):
Now, on the flip side of the coin, the West, the United States and the West more generally, has an advantage in hardware. These are these very advanced chips that the algorithms run on. And the United States, the Netherlands, and Japan have essentially cornered the market in the fabrication machines used to make the chips. These are extremely sophisticated, extremely expensive. A fabrication facility might cost $40 billion, a single fabrication facility. So, the United States enjoys that advantage in hardware and chips. There is a huge plant in Taiwan that is very, very good at fabricating these cutting-edge chips. And the smaller the separation of features on the chip, the faster it goes. So, seven, five, and three nanometer chips are kind of at the front end of technology. Taiwan can make those chips, and Taiwan is 110 nautical miles away from China.

Bob Work (15:13):
So, we would like to keep a two-generation lead ahead of China in state-of-the-art electronics. But if they invaded Taiwan, for example, and got their hands on all of the equipment in Taiwan, they'd be able to close that gap pretty quickly. But right now, we're in the lead. We also believe that we lead in algorithms. We think we still have the most cutting-edge algorithms in the world, although it's really hard to determine, because the Chinese are catching up very, very fast. And we think we have an advantage in talent, because the United States still attracts global talent, if we can just keep it. So, this is a close-run race. They have an advantage in three, we have an advantage in three. But because we think we have the advantage in hardware, algorithms, and talent, we believe we have a slight overall lead. But it's not a lead that we can be comfortable with, because the Chinese are coming after us very, very hard. And they're very, very good.

Ben Taylor (16:19):
In one of those leads, you called out the algorithms, and it's a little muddy, because AI is different: we open source everything, right? So, if I'm an AI researcher, and if I do something brilliant this weekend, some experimental network that we see as a big breakthrough, it might be freely available next month, because I'm trying to build my personal brand. When it came to nuclear technology in the Space Race, that was top secret. So, how do we deal with this reality where a lot of these AI innovations are freely shared? Or are there talks about trying to lock down some key innovations?

Bob Work (16:52):
Well, we talked about this. And we quickly concluded, for the reasons you just described, that having export controls on algorithms is a big loser; you couldn't do it. They're open source and they proliferate around the world so rapidly. For example, the most recent is called GPT-3. It's a natural language processing capability, and it creates content. So, if you and I were talking with this algorithm, it would be difficult for us to tell the difference between a human and a machine. It is very natural in the way it reacts to you. And the Chinese already have their own similar system. It just happens very, very fast.
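To make that concrete, here is a minimal sketch of a language model "creating content," assuming the open-source Hugging Face transformers library and the freely downloadable GPT-2 model as a stand-in for GPT-3, which is only available through OpenAI's hosted API.

```python
# A minimal sketch of machine-generated text, using GPT-2 as an
# open-source stand-in for GPT-3 (an assumption for illustration).
from transformers import pipeline

# Download a small pretrained language model and wrap it for generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "The future of artificial intelligence in national security is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with fluent, human-sounding text.
print(result[0]["generated_text"])
```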

Bob Work (17:38):
So, your point is, this is a dynamic temporal contest. Sometimes we'll be in the lead, sometimes we'll be a fast follower. Sometimes China will lead, sometimes they'll be a fast follower. But that's why you got to keep your eye on this. And you got to put the money into cutting edge research, you got to make sure your talent is being used effectively. I mean, this is the competition that I think is going to determine how the 21st century unfolds, both in terms of economic competitiveness, national security, values. This is it. This is the one.

Ben Taylor (18:18):
You've mentioned this talent a few times already. In the report, I believe you elevate the sacredness of the talent by calling it the holy grail. So, you have data and compute, and I see the US as being a leader in entrepreneurship. But you did call out that there is a concern around the H-1B visa not being entrepreneur-friendly. So, I'd love to talk about that for a minute. What would be an ideal scenario for external talent coming into the US when they want to lean into our entrepreneurship economy?

Bob Work (18:46):
Yeah, we suggest that, as a matter of some urgency, the United States develop a new visa for really highly talented STEM people, have them come to the United States, and allow them to stay. We think we should increase the number of green cards that are provided to PhD graduates in all of the STEM curricula. The key thing is a lot of these folks want to come to the United States. We still think we have the best universities in the world. And most of them, after they get their PhD, want to stay in the United States. So, as a matter of competitive advantage, it makes absolutely no sense to the Commission for immigrants to come to the United States, get a PhD in a cutting-edge computer science or software engineering field, and then have them go to China and compete against us after we've taught them. So, we suggested a whole lot of different things.

Bob Work (19:47):
Our big discussion point with Congress is, "Look, you've got to somehow attract these folks and allow them to stay if we want to maintain competitiveness over time." Because the Chinese, they're getting much better at attracting talent. They buy talent. They literally just go to a software engineer at a Google and say, "Hey, we'll pay you three times what you're making now if you move to China. And we'll give you your own lab. And we'll give you a million dollars a year to pursue your interests." And that's really hard for these really smart people, who want to change the world with their technology, to say no to.

Ben Taylor (20:30):
Yeah. You'll think this is funny. I was giving a talk in San Francisco, I think it was 2017. And after the talk, this Chinese national came up to me and said, "If you come to China, you'll make millions." At the time, I just thought, "That's strange." After reading your report, I had no idea that these forces were going on and the interest was there; it just seemed like a really bizarre thing to bring up. I did want to call out and lean into this entrepreneurship competitive edge that we could have, because China doesn't have the most friendly entrepreneurship environment.

Ben Taylor (21:01):
I'm thinking of, like, a Jack Ma. We have our heroes in the US economy. You have your Elon Musk and these different founders that kind of chase the American dream. I feel like in China, they're kind of stepping on their heroes. So, do you think if we fix that, that would be a pretty clear competitive edge? If you're getting your PhD in AI, do you go back to start a company in China, knowing that you might get stepped on in the future? Or do you get VC capital now in the US?

Bob Work (21:26):
Yeah, I mean, from the Commission's point of view, like you said, talent is the holy grail. The way you win this competition is with the best people and the best ideas. And so, you do everything you can to grow your entrepreneurship at home, to attract talent from around the world, to keep the talent in the United States, and to give them hard problems to solve in a friendly research atmosphere, so that they can pursue their work aggressively and happily. I mean, you want to keep these people happy, because they literally want to change the world for the better.

Bob Work (22:07):
They want to do some type of health care application that will improve the lives of our citizens, they want to do something in transportation that will improve safety, they want to do something in agriculture which will improve agricultural output and reduce hunger. And, I mean, you talk with some of these young folks, and some of them are young; the PhD candidates are probably in their late 20s or 30s. Talent is key. And so, keeping our entrepreneurs and keeping our talent is absolutely critical.

Ben Taylor (22:45):
And I did read in the report that you talked about this talent funnel. Because it's not just the international talent; we also have issues with our high schools, where we have students graduating without the correct exposure to computer science and no exposure to AI. So, is figuring that out also a priority? I'd love for AI to be kind of a core discipline where, regardless of your career goals, it's something you have exposure to.

Bob Work (23:10):
Yeah, the way we looked at this is you want to keep attracting talent from around the world, but you also want to exploit your homegrown talent. Now, one of the commissioners was Andrew Moore, formerly of Carnegie Mellon. He has thought a lot about this. And he told the commissioners, "Look, you don't have to have a PhD or a master's in computer science to be a good AI specialist. You have to be good at what I refer to as computational thinking: understanding how the machine would think its way through a problem. If you had a class on computational thinking, say, in the seventh or eighth grade, and then you had a more advanced class in 11th or 12th grade, you would really start to increase the population of folks that could really, really help in the competition."

Bob Work (24:04):
Then we said, okay, one of the key problems is getting people into the government, getting smart AI talent into the government. We haven't been as good at doing that as we would want. And so, people said, "Well, we'll never be able to compete with the Googles and the Oracles and the Amazons and the Facebooks. We'll just never be able to compete with them in terms of salaries." But when we started talking to most of the young folks that we reached out to, they said, "Look, our first salary is important, but not as important as graduating without a crushing load of debt." So, the way we went about it was we designed programs where people could get a four-year ride and get their degree in exchange for working in the government for some period of time.

Bob Work (24:56):
So, we described a digital service academy, just like the Naval Academy or the Air Force Academy. People would go to this academy for four years, and they would get a degree in STEM, something like computer science or software engineering or biotechnology. And in exchange, they would work in the government for, say, six years; we didn't want to be prescriptive, maybe it's five, maybe it's four, whatever. And so, these would be people who would say, "Hey, I would like to work for the government on these things." Then we designed what we call the National Reserve Digital Corps, which was modeled after the ROTC, the Reserve Officer Training Corps.

Bob Work (25:39):
And these folks could go to any college they wanted in the United States that was in the program, on a free ride, and get their degree in a STEM field. And then they would come out, and they could go to Google or they could go to Facebook or they could go to any number of these high technology startups. But for one weekend out of every month, they would go to either an agency in the government, like the Department of Energy or the Department of Commerce, or they would go to some type of military unit, and they would say, "I'm here to figure out how technology can make your life better and allow you to do your job more effectively and more efficiently."

Bob Work (26:22):
"And for two weeks out of the year, they would do the same thing but for a longer period of time. Maybe they would go observe a military exercise or they would go to a national lab, and see what they're doing there. And so, for 38 days, and they would do that for say, six years, the young folks that we talked to said, "Yeah, if I could graduate without a lot of debt, that would definitely attract me."

Bob Work (26:51):
The one good thing about the government is there are a lot of really tough problems that, once young folks get onto them, they say, "Wow, this is something I really want to work on and improve." So, there is a way to use homegrown talent and global talent to make sure that our government and our nation have enough. We also said there ought to be a National Defense Education Act, a second one, which really focuses on how to improve STEM in K through 12. So, it's really kind of a holistic thing. Again, going back to talent is the holy grail: How do you grow it? How do you maintain it? And how do you motivate the talent?

Ben Taylor (27:33):
And one thought I had listening to you is, I think there's also a perception for people in tech: if you're really talented, you go to the private sector; if you can't get a job there, you can go to the public sector. And I was talking to someone in Singapore, and they said it's the exact opposite there: if you're extremely talented when it comes to tech, you go work in the government. Which I thought was really interesting. How nice would it be to flip that on its head, where you have some of the best talent fighting to solve these national security issues?

Bob Work (28:02):
Imagine if the United States said, "This is a technological competition that really is going to determine where the US stands in the world, and there are all these knotty problems that we are focused on." And if young folks said, "Holy moly, that is a problem that I want to work on." Nuclear weapons safety, better agricultural processes, better securing our electrical grid, securing our transportation grid. I mean, I think there would be a lot of young men and women who would say, "Man, I want to contribute. I want to help our country." And then, after they do that, they can go make their trillions in the innovation economy.

Ben Taylor (28:47):
That reminds me, I feel when people get further along in their career, they start thinking about legacy, and everything you just hit on is legacy. So, if you can convince these young, talented folks graduating school: "Look, you're going to die someday. You can get a legacy win immediately. And then you can go do your cat-face AI app or something less impactful to society." Because everything you hit on, that's massive legacy impact that matters.

Bob Work (29:14):
Yeah, generally the way it's working now is a lot of people are what the Department of Defense refers to as post-economic: they've already gone out into the innovation sector, they've made a couple of million. Perhaps they had an exit on the company, or one of the products really blew up. And then they start saying exactly what you're saying, Ben: "What is my legacy? Is my legacy that I earned $2 million? Or do I want to help solve this particular issue in the Department of Defense?" Or, again, someplace else in the government. So, we want to flip that. We want some young people saying, "I want to establish my legacy before I make my millions." But hell, we'll take everybody.

Ben Taylor (30:00):
Yeah. How does the private sector partner better with the DOD? Because I think there's motivation there, where they've got advanced talent and algorithms and methods. How do we bridge that gap?

Bob Work (30:12):
Google was working on Project Maven, which was a computer vision project that would look for objects inside full-motion video feeds. That's what it was about. It was computer vision. And some of the employees at Google said, "Okay, drones use full-motion video, and therefore we're going to be helping weaponize this. That's not consistent with our company values or with our personal values, so we shouldn't support it." And so, they pulled out of Project Maven. They are working again with Project Maven, but on different aspects of the issue, which are more consistent with the principles that Google published after this incident.

Bob Work (30:58):
And so, a lot of people started saying, "Well, the West Coast won't work with the Department of Defense." I was in the department at that time, and then I left, but soon thereafter I was still talking with the people. And most of the companies were going, "Look, we don't mind working with you. We hate the bureaucracy. We are small companies. We can't afford to do 15 briefs to see whether or not we can get a prototype, which may or may not become a program. We just don't have the time. We've got to figure out a better way for us to work with you." And the department's been working on this for about three years.

Bob Work (31:38):
So, the government said, "How do we fix this?" And Congress has been very helpful, too. They've been giving more authorities to the Department of Defense to be able to move quicker. So, there's a thing called other transaction authorities. And using OTAs, you're often able to get something on contract within, say, 60 days instead of six months. And that has helped a lot. There have been all sorts of different mechanisms like this, which are designed to attract these young, kind of feisty companies who have a great idea, but are so small they just can't do all of the requests for proposals and requests for information and all the proposal writing. So, the department knows this is a problem and is working on it hard. I think you would get different views on whether or not they're doing a good job at it, but it certainly has improved over the last three or four years. But it's not where I think anybody wants it to be. We want to get better.

Ben Taylor (32:51):
How is the perspective on AI different? How do citizens think about AI differently in the US versus China? I know this is probably a big question, but what would be your quick response to that?

Bob Work (33:03):
I have never traveled to China and spoken at an AI symposium or anything like that. But I've talked to a lot of people who have, and they say it's consistent. If you go to an AI symposium, first of all, it's packed. It's always packed. And after the presentation, the speakers are just mobbed. They're like rock stars. And people are coming up and saying, "What about this? And what do you think about that?" And the impression is that the Chinese public views AI in a positive way: AI is going to help their country re-establish itself as one of the leading, if not the leading, global powers.

Bob Work (33:49):
In the United States, if you watch most of our science fiction movies, if you read our books, AI is generally bad. It's either trying to kill you or it's trying to make you its pet. And so, on the US side, I think there is a much more guarded view of an AI future. Am I going to lose my personal data? What about my civil liberties? Am I going to be spied on? Is this going to be like 1984? Will machines be making decisions, like assigning me a social credit score that will determine whether I can go to a top-tier college or not? So, I think in the United States, it's a little bit more guarded and a little bit more fearful.

Bob Work (34:36):
And that's why, I think, what's different is the United States spends a lot of time talking about ethical AI, to include explainability. How did the black box come to the conclusion it did? Are there any inherent biases in the way it's making these choices? Our citizens will become more and more comfortable as they determine on their own that, "Hey, look, generally if we have the right testing, evaluation, validation, and verification, the algorithms can be made unbiased, and they can be made to act safely, and therefore they will help us."

Bob Work (35:18):
The way this worked in the intelligence community was interesting, in that, up until 2015, a human analyst was better at picking out an object in a picture. So, say you had a satellite picture and you were looking for a tank. Until 2015, the human analyst was better at picking out the tank in the picture. But in 2015, computer vision got to the point where it met or exceeded human performance. And at that time, the intelligence community said, "Okay, let's go after computer vision at scale. Let's take the analyst out of looking at 40 hours of full-motion video feed and let the machine do that. And let the machine tell the human, 'Hey, you need to look at this.'" And then the human can really concentrate on what's important.
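To picture what that triage looks like in code, here is a minimal sketch, assuming OpenCV for reading video and a pretrained torchvision detector; the file name and confidence threshold are illustrative placeholders, not from any real system.

```python
# A minimal sketch of computer vision triaging full-motion video so a
# human only reviews flagged frames. Video path and threshold are
# hypothetical.
import cv2
import torch
from torchvision.models import detection

# Load a pretrained object detector and put it in inference mode.
model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture("full_motion_feed.mp4")  # hypothetical feed
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 30:  # sample roughly one frame per second of video
        continue
    # Convert the BGR uint8 frame to the RGB float tensor the model expects.
    rgb = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1) / 255.0
    with torch.no_grad():
        found = model([rgb])[0]
    if (found["scores"] > 0.8).any():
        # The machine watched the feed; only now does a human look.
        print(f"Frame {frame_idx}: possible object of interest, flag for analyst")
cap.release()
```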

Bob Work (36:12):
So, once you establish that the machine can perform at a level equal to or better than humans, then at that point you start to say, "Well, why wouldn't we go to a machine, put humans on projects that exploit their insight and their creativity and their initiative, and just let the machine do what is referred to as the dull, dirty, and dangerous?" Staring at a computer screen looking at full-motion video: let the machine do that. It never gets tired, never gets pissed off, doesn't worry about getting a raise. The machine just does its thing.

Ben Taylor (36:49):
Yeah, the sad thing about that example is how overqualified everyone is to be doing that. The human brain is this miracle of innovation. And if you're just sitting there, trying to tag tanks in a feed, that seems soul-crushing. It seems like you're not living up to your full potential. If anything, you should be teaching the AI systems and dealing with the cracks. You should be focusing on the uncertainty.

Bob Work (37:16):
Yeah, I mean, this is exactly the thinking behind what the department refers to as human-machine collaboration: using the machine to do things that the machine does well, assigning humans tasks that humans do well, and finding that virtuous pairing. Now, Garry Kasparov was the world champion in chess. And in 1997, he played the IBM computer Deep Blue, and he lost. He did a lot of soul searching. And after that was over, he started thinking about how you do the best pairing. And so, he went through a period of what he called Centaur Chess, where you had a computer algorithm that kind of crunched the numbers on the chess side and made recommendations to the human player. And the human player would say, "Within my strategy of this type of defense or this type of offense, this is the move I'll make."

Bob Work (38:22):
And for a period of time, the Centaur teams, humans with algorithms, would defeat algorithms alone and defeat humans alone. But then, after a while, he started seeing that the machines would learn from the way they played and would start making different choices. So, both the humans and the machines were learning how to be better chess players as they went along. And he had a hypothesis: a weak human with a strong machine would be able to beat a strong machine alone, or a strong human with a weak machine.

Bob Work (39:07):
And DARPA said, "Let's test this." And the way they did it is, there's a model called Storm. It's a stochastic model that looks at a campaign. It says, okay, with our airplanes versus their airplanes, this is how many would get shot down; it does the same with ships and with ground forces. And it does a whole campaign. So, what DARPA did is they had a thing called Brainstorm. And they said, "We are going to make an AI that works with Storm, and the AI will nominate courses of action to a human commander." And then they said, "We are going to take some humans who have nine years of experience in the military, and we're going to give them Brainstorm, and they are going to fight against a team that has 29 years of experience in the military, but without a machine." And over 30 games, it turned out to be 14 to 16: the nine-year-experience people won 14 games, whereas the 29-year-experience guys won 16.

Bob Work (40:16):
So, in other words, the rookies played the pros to a draw. Again, the AI wasn't making the decisions. It was nominating different courses of action to the humans, who would say, "This is the one I think will work the best." So that's what this is all about: human-machine collaboration. Letting the machine look at all of the different data that's coming in, from your imagery, from information you're getting from your electronic warfare, and what people would call ELINT, electronic intelligence; SIGINT, signals intelligence; IMINT, imagery intelligence; and human intelligence. All this data is coming in. It's just an enormous amount of data. No single human can comprehend what the hell is happening.
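The pattern being described, where the machine scores candidate courses of action and nominates a shortlist while the human keeps the decision, is simple to sketch; the options and scores below are toy values, not anything from Brainstorm or Storm.

```python
# A minimal sketch of machine-nominated courses of action with a human
# making the final call. Options and scores are illustrative toys.
def nominate(options, score, k=3):
    """Return the k highest-scoring courses of action."""
    return sorted(options, key=score, reverse=True)[:k]

toy_scores = {
    "flank_left": 0.62,
    "flank_right": 0.55,
    "hold_position": 0.31,
    "withdraw": 0.12,
    "feint_center": 0.47,
}

shortlist = nominate(list(toy_scores), toy_scores.get)
print("Machine nominates:", shortlist)

# In the experiment described above, the human commander, not the
# machine, picks from the shortlist.
chosen = shortlist[0]
print("Human selects:", chosen)
```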

Bob Work (41:07):
But like you said, the machine just crunches the data. And it says, "Hey, I see a pattern. From this pattern, I'm going to infer that when I see this type of activity, this is what's going on." And then, it makes a prediction. AI, these machine learning systems, are prediction machines. And depending on how good your data is and how good your algorithms are, these predictions are damn good. So, it's the predictions that go to the humans, and the humans use their creativity and their insight and their intuition.

Bob Work (41:41):
And then they say, "This is the way we're going to go." If I sound excited, it's because I am. I mean, you're going to be talking to your jet. And the jet is going to be sucking in all the radar data and all of this stuff. And the jet is going to say, "You've got bad guys coming: four from this direction and two from this direction. This is what I think you ought to do, pilot." The pilot could decide, "No, I'm not going to do that." But it's going to be really cool.
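That pattern-inference-prediction loop is what a supervised learner does; here is a minimal sketch, assuming scikit-learn and a toy synthetic dataset, with the prediction and its confidence handed to a human rather than acted on automatically.

```python
# A minimal sketch of "machine learning as a prediction machine":
# the model finds patterns in past data and emits a prediction plus a
# confidence; the decision stays with the human. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# For each new case, report the predicted label and the model's confidence.
for i, probs in enumerate(model.predict_proba(X_test[:5])):
    label = probs.argmax()
    print(f"case {i}: predicted class {label}, confidence {probs[label]:.0%}")
    # A human reviews the prediction before anyone acts on it.
```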

Ben Taylor (42:13):
Well, it'll be even cooler when the jet identifies the UFO and freaks out in the future because all the sensory data confirms that this is not of human origin.

Bob Work (42:24):
What a lot of people don't realize is why self-driving cars make mistakes. The cars are based on observing how humans drive cars for millions and millions of miles. They're just watching, the data is coming in, and they say, "Okay, in this case, when the light is red, the human can turn right on red." And 99.9999% of the time, the human turns right on red. But if you've got a bonehead for a driver, and the driver turns left on red, you'll never get that prediction from a machine. And therefore, they sometimes make mistakes. So, it's really humans performing poorly that often screws up these machine learning algorithms.
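To see why the rare "bonehead" maneuver is effectively invisible to a model trained on observed behavior, consider this toy frequency sketch; the counts are hypothetical.

```python
# A minimal sketch of why behavior seen almost never in training data
# gets a vanishingly small predicted probability. Counts are made up.
from collections import Counter

# Hypothetical log of a million observed driver actions at red lights.
observations = Counter(turn_right_on_red=999_999, turn_left_on_red=1)
total = sum(observations.values())

for action, n in observations.items():
    print(f"P({action}) = {n / total:.6f}")
# P(turn_right_on_red) ~ 0.999999, P(turn_left_on_red) ~ 0.000001:
# the model will essentially never predict the left turn on red.
```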

Ben Taylor (43:12):
I want to end on an optimistic note. But before I do that, I definitely want to make sure that we have a moment to pull the unthinkable thread. Because you and I, we don't want war ever in the future. But assume that there was some type of kinetic response, a World War Three with AI. I've said that Hollywood has not prepared us, because they're not smart enough. I'm curious what your response is to that, after doing this report and being with people like Eric Schmidt. Has Hollywood really nailed what a World War Three would look like when it comes to the future innovations? Or what are some of the gaps that the public doesn't really understand?

Bob Work (43:47):
Yeah, I think most of the public thinks of AI in the future in terms of Terminators, these robots that are out hunting for humans and killing them. But the type of thing that I worry about the most, and I think is the most fraught, is machines that would have the ability to autonomously order a preemptive strike or a retaliatory strike without human intervention. That scares the hell out of me. That would lead to many, many, many more lives lost.

Ben Taylor (44:23):
It's almost like the flash crash with stocks, but a flash crash with war. You call that out in the report. You say there should be no autonomous response when it comes to nuclear war; there should always be a human in the loop, whether it's the US, China, or Russia. We should just all agree that that is a really bad thing. Yeah, that's fascinating. I hadn't thought about a flash crash where there's immediate escalation because AI is responding to another AI.

Bob Work (44:48):
Yeah, because that's the way machine learning would learn. They have all this data, they're looking for patterns. From the patterns, they make inferences, and from the inferences they make predictions. And so, the machine learning says, "I predict that adversary X is going to attack us at 0600 tomorrow. And my level of confidence, based on all of the data I have, is 95%." So, if that was going to a human, the human would say, "Okay, what are the indications and warning? What do we see that would verify what the machine is telling us?

Bob Work (45:25):
And do we want to start the war? Do we want to preempt?" If we let them hit us first, that might be very bad. But if we hit them first, do we really want to do that? And that's where, as you said, we should be talking with our major competitors, China and Russia, and saying, "Are there areas where none of us want to go?" And I agree with you, and the Commission agrees: when it comes to nuclear weapons, we don't want any autonomy in nuclear weapon employment. We want humans making those decisions all the time.

Ben Taylor (45:58):
I did want to back up for a second and say that we've already had examples of automated response in history, where automated systems have killed people. I know there are people out there who think you should never have an automated kinetic response, but we've already had examples where this has happened.

Bob Work (46:14):
There was the US cruiser in the Gulf, the USS Vincennes. It mistakenly identified an Iranian passenger airliner as a threat, engaged it, shot it down, and killed all of the civilians on board. Generally, the only time that the major competitors have said, "We are willing to delegate authority to attack without a human actually pushing the button," is in cyberattacks. Humans aren't going to be able to move fast enough in a cyberattack. So, your machines are looking for the cyberattack. And the machines, as soon as they detect the attack, are going to start trying to block the attack, and in some cases, fire back at the bad guy and try to keep them from doing it. And cyber just happens in milliseconds; it's too fast for a human to keep up. So, I think you're going to see automated defenses on the cyber side.

Bob Work (47:16):
Then the other side is where you're faced with a raid of many, many, many missiles, or many, many, many UAVs. And normally, what would happen is, I'm just making stuff up now, say you're in a ship's combat information center, and you've got 60 missiles screaming in on your ship. Humans aren't fast enough to say, "Holy moly, of the 60 missiles, we need to take this one down first, and this one, this one, this one, this one, this one."

Bob Work (47:47):
We generally will press the automatic button and let the machine make the decision. But after we've done that, and this is called a human-supervised autonomous weapon, the human is watching what the machine is doing. And if it becomes clear the machine is shooting at friendlies, or if it's not performing the way it was expected to, the human hits the off switch. So, there will be some cases like that, where we'll have to debate how far we want to go. There are just certain instances where war moves so fast that humans can't keep up.
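The control pattern here, machine-speed engagement with a human able to halt it, can be sketched generically; nothing below corresponds to any real defense system, and all names and timings are illustrative.

```python
# A minimal sketch of human-supervised autonomy: the machine engages at
# machine speed while a human watches and can hit the off switch.
# All names and timings are illustrative, not any real system.
import threading
import time

halt = threading.Event()  # the human supervisor's off switch

def human_supervisor():
    input("Press Enter at any time to halt the system...\n")
    halt.set()

def automated_defense(incoming_threats):
    for threat in incoming_threats:
        if halt.is_set():
            print("Human override: engagement halted.")
            return
        print(f"Engaging {threat} automatically")
        time.sleep(0.1)  # stand-in for one engagement cycle

threading.Thread(target=human_supervisor, daemon=True).start()
automated_defense([f"incoming_missile_{i}" for i in range(60)])
```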

Ben Taylor (48:26):
One of the things that really surprised me, thinking about this, was when you really lean into how a hive mind might operate. When you have 10 drones or 100 drones that are all communicating their sensory information and all controlling each other, that's not something a human brain is well equipped to comprehend. So, there's a huge advantage when it comes to sensory response. But also simulated war: before you invade a North Korea or a China, or vice versa, they've already simulated the optimal attack a million times. Where do you strike? Where do you take out a building that could be a future stronghold? So, what is this more intelligent tomorrow that you are excited about? If we play our cards right, if we make sure that we're not caught off guard, what is this more exciting future that you look forward to, with AI being a big part of it?

Bob Work (49:13):
Just think of a doctor who is the number one brain surgeon in the world. On any given day, maybe there are a hundred papers written by researchers in the field that are saying, "This is a new way to approach a problem in brain surgery." Well, the doctor can't read all of that stuff. He just can't do it. So, imagine a future in which every doctor would have an AI savant, and the AI savant is literally looking across the globe, reading every single paper that has to do with brain surgery, and able to tell the doctor, "Of all these papers, read this one. And here is a different way you might go about thinking about the problem." The doctor would be interacting with the AI, and it would be helping the doctor become a better doctor. Why wouldn't you want a future like that?

Bob Work (50:14):
Why wouldn't you want AI controlling stoplights, stop signals, or transportation signals in a city, so that traffic generally flows better, you can get to work faster, and you're much more efficient in your day? Why wouldn't you want that? Why wouldn't you want to have a world in which AI works with air traffic controllers to make the skies safer? There are so many ways in which AI can improve our lives.

Bob Work (50:48):
So, to me, generally, when you talk to AI optimists, they'll say, "Look, AI will not be perfect. It will make mistakes. But the general thrust of the future is it will make all of our lives better, more productive, and more satisfying." Those people who are AI pessimists say, "That may be true, but you can't trust AI. It's going to do something bad." So, to me, I'm an optimist. Just seeing some of the things every day that AI is doing to improve the effectiveness and the efficiency of the Department of Defense and the Department of Energy, I'm very optimistic.

Ben Taylor (51:35):
One thing I wanted to ask you about was our allies. The US, we're not the only ones with top talent, obviously. You've got Tel Aviv, you've got London, you've got Germany. We could go down a long list of some very impressive countries that we see as friendly allies. What was involved in the report in figuring out how we can partner better with a country like Israel, which is known for being definitely in the top ranks when it comes to AI?

Bob Work (52:00):
We don't see this as just a US competition. We want all the democratic nations of the world helping us figure out the boundaries, the ethical and moral and legal boundaries, of this powerful technology. And it's important for us to do so, so that we can collaborate together to pursue this more optimistic future. And there are big differences, like on autonomous weapons. Some of our allies will view autonomous weapons differently than the United States does. We have to try to work those out as best we can, so that we can be interoperable and we don't cause a fissure, say, in an alliance.

Bob Work (52:43):
So, we recommended that the Department of State start setting up a technological dialogue. Maybe we start with our Five Eyes allies, our closest allies, then expand that to our NATO allies, and then expand that to include our Asian allies, with an eye towards, as I said, identifying the legal, ethical, and moral boundaries of these technologies. In my own view, I've concluded that synthetic biology is probably going to have the biggest impact on us as humans, whereas AI is going to have the biggest impact on the conduct of warfare. But I was in a discussion where one of my colleagues said, "These are two sides of the same coin." Synthetic biology is the digitization of life. It gets to your point, Ben, that we can get a lot of information from our bodies, because we can digitize what's happening, and we can have a much more microscopic feel for how our bodies actually operate.

Bob Work (53:54):
And on the AI side, this is machines that mimic human intelligence. So, you have one side where we're digitizing human life, and on the other side, you have machines that mimic human intelligence. And the moral, ethical, and legal issues of both of them are very, very similar. How far are you going to go? And so, for example, I think in the West, we think in terms of improving human performance. In other words, maybe improving eyesight so that, I'm just making this up now, major league hitters can see the ball and hit the ball better. There are some people who say we ought to go after human enhancement and make super soldiers. And I don't believe the West would approach it that way; our authoritarian competitors might. That's why it's so important that we start these dialogues early, both with our competitors and with our own citizens, so that we get this as right as we possibly can.

Ben Taylor (55:07):
What are some of the top things that you're excited about? What are the AI innovations that you think could help society for good?

Bob Work (55:14):
Well, it's going to be almost impossible to stop this technology. This is like electricity. It's a general-purpose technology that everyone's going to use in one way, shape, or form. And a lot of it is in the commercial sector and it proliferates very, very fast. So, it's going to be very difficult to keep this technology from getting better and better. But at the same time, it's going to be very difficult not to see this technology make us better and better and better. So, for example, say you're a brain surgeon and you do three or four operations per day, and you're at the very cutting edge. Well, every day, there's probably 100 papers being written by researchers who are studying brain functions and brain surgery, and they're writing these cutting-edge papers.

Bob Work (56:10):
Well, even if you're at the top of the game, you don't have time to read all of these papers. And therefore, even if you're at the top of the game, you might be falling behind without even knowing it. Well, AI would be able to read every single paper published every single day, across the world. And it would be able to be a savant for the brain surgeon and say, "Look, of the 36 papers that I reviewed today, you need to review this one. And here are different ways to think about how you might go after this problem that comes up in a surgery." We're going to have these types of AI savants for everybody: for airline pilots, for brain surgeons, for policemen.

Bob Work (56:57):
These AI savants are going to help all of us be better at what we do, more productive and more efficient. And then they're just going to be valuable to society as a whole. Being able to monitor the flow of traffic in a city will help make that traffic go smoother, so that we all get to work on time, and we all get home earlier than we otherwise might. Same thing for air traffic controllers. We are going to have AI assistants helping them make air travel safer. And farmers will be better able to make decisions on how they're going to go about producing and cultivating and getting their crops to market. So, I just look at the enormous number of things that AI is going to be able to help people with, and I can't help but be optimistic, Ben.

Ben Taylor (57:50):
Yeah, Bob, listening to you, there's a very profound thing woven into what you're saying. Right now, the human researcher needs to have a lightbulb idea: they have a hypothesis, and they go test it. Whereas AI goes beyond the research papers, where that brain surgeon is consuming all of that data, if we can figure out the data privacy. And so, AI is potentially proposing its own ideas, "Hey, have you considered this? Have you looked at this?" with human researchers in the middle. It becomes pretty clear why we can't stop the train, because many, many lives will be saved. It will extend human life, give us more time to spend with family and friends and the things that bring us joy, and less time doing the dull, dirty, and dangerous work that many of us do today.

Bob Work (58:34):
Amen. Very well said.

AIVOv2 (58:37):
Thank you for joining us on this More Intelligent Tomorrow journey. Discover more and join the conversation at moreintelligent.ai. The future is closer than we think.