Transforming Cybersecurity Challenges into Solutions with Steve Orrin of Intel

May 21, 2024


Steve Orrin is Intel’s Federal CTO and a Senior Principal Engineer. He leads Public Sector Solution Architecture, Strategy, and Technology Engagements. He has held technology leadership positions at Intel where he has led cybersecurity programs, custom hardware and software architectures and solutions, products, and strategy. Steve is a cybersecurity expert and sought-after advisor to public and private sector leaders on enterprise security, risk mitigation, and securing complex systems. He is also a leading authority on Public Sector/Federal mission and enterprise systems and solutions, regularly engaging with senior United States government technical and mission leadership. He is the Intel representative on security standards and guidance and has contributed to several NIST standards and guidance publications. Steve is the supply chain threat and risk management advisor to DOD and Intelligence Community senior leadership and was chosen by the US government to serve as a Special Government Employee on the congressionally mandated task force to perform an assessment of Microelectronics Quantifiable Assurance (MQA)/Microelectronics Security.

John Shegerian: Get the latest Impact Podcast right into your inbox each week. Subscribe by entering your email address at ImpactPodcast.com to make sure you never miss an interview. This edition of the Impact Podcast is brought to you by ERI. ERI has a mission to protect people, the planet, and your privacy, and is the largest fully integrated IT and electronics asset disposition provider and cybersecurity-focused hardware destruction company in the United States, and maybe even the world. For more information on how ERI can help your business properly dispose of outdated electronic hardware devices, please visit ERIdirect.com. This episode of the Impact Podcast is brought to you by Closed Loop Partners. Closed Loop Partners is a leading circular economy investor in the United States with an extensive network of Fortune 500 corporate investors, family offices, institutional investors, industry experts, and impact partners. Closed Loop’s platform spans the arc of capital from venture capital to private equity, bridging gaps and fostering synergies to scale the circular economy. To find Closed Loop Partners, please go to www.closedlooppartners.com.

John: Welcome to the Impact Podcast. I’m John Shegerian, and I’m so excited to have with us today, Steve Orrin. He’s the federal CTO of Intel. Welcome to the Impact Podcast, Steve.

Steve Orrin: Thanks for having me today, John.

John: Federal CTO. So CTO can mean a lot of things to a lot of people. Can you share what this means in terms of your role?

Steve: Absolutely. So as the Federal Chief Technology Officer, it’s my role to represent Intel’s technologies, its architectures and capabilities, as well as our ecosystem to the Federal Government and the broader public sector, and to help them adopt technology, and plan for what’s coming. I also help translate government requirements back into Intel so that we can better build products to meet the needs of our US and global government customers.

John: That’s so interesting. So even though you’re a great ambassador on technology for Intel in terms of all their capabilities, you’re also on the front-end trying to understand better what clients like the Federal Government need now and in the future so you could design better products for them.

Steve: Absolutely, and to make sure their requirements get met by state-of-the-art technology, while also understanding that the government in some respects is a microcosm of every other industry.

John: Good point.

Steve: They have just about all the same enterprise and mission problems that you’ll find in financial services and health care, just sometimes at a bigger scale.

John: Before we get talking about all these fascinating and very important topics we’re going to cover today, such as chips, AI, cybersecurity, and everything else, I’d love you to share a little bit about the Steve Orrin story. Where did you grow up and how did you get on this very futuristic and impactful journey that you’re on?

Steve: Well, it’s very interesting. I grew up in a couple of places and was born in Texas, but I really count myself as a New Jersey native having grown up and had my formative years in New Jersey. The plan was actually I was going to go into research biology. I was going to be an MD, PhD and do biomedical or biochemical research. Around the time that I was getting ready to start med school, I had an opportunity to go help a company back in early ’95 to get started in the security space. As a kid, I was always a hacker. I liked playing with technology. I liked seeing how things worked, or how things fell apart when they didn’t work, but at the time in the 80s, there really wasn’t a career per se in cybersecurity or in computers, for that matter, other than maybe doing COBOL programming.

So I took the other path of my other love, which was biology. Like I said, in ’95, I had an opportunity in between some graduate work in bio research and med school to help a startup get going in the cryptography and desktop security space. After 3 months of doing that, I just fell in love. It was a heady time. It was the beginning of this internet revolution, and I got in on the ground floor and just have never looked back. It’s been a bunch of fun opportunities throughout my career, both in the startup world back in the ’90s and 2000s and then later after getting acquired by Intel, driving capabilities for Intel into the market.

John: How long have you been in this role at Intel, Steve?

Steve: So I’ve been the federal CTO now for just over 10 years, having moved out of the business unit where I was driving security pathfinding. When I came to Intel, that was the first role I took on: looking at what we could do on the sort of two- to five-year horizon. Intel’s research and R&D is sort of 5 to 10 years out, asking what’s that next chip architecture. Our product teams are building chips on a two-year cadence, what they called the Tick-Tock model. What was missing at the time was a focus on that in-between stage. What can we innovate in software and firmware? What can we do with current or about-to-come-out hardware that could be innovative and help inform where the hardware should go, sort of bridging that gap?

And when that team was stood up to do that, it’s what we call pathfinding, I was tapped to lead the cybersecurity team and to go look at cybersecurity innovations, leveraging software that talks directly to hardware. I did that for about 9 years. It was a lot of fun and exciting stuff and helped to develop our first cloud security architectures, anti-malware technology that would leverage the hardware, and then got the opportunity in 2013 when they were standing up a federal practice to move back east and really look at how can we better service the Federal Government.

John: So speaking of the Federal Government, what are some of the biggest challenges, technology challenges more specifically, facing public sector organizations like the Federal Government? What are they facing that you’re helping them solve, and how daunting is the risk now when it comes to cybersecurity?

Steve: So I think you’ve hit on one of the key areas of challenge that the government faces on a daily basis, which is cybersecurity, and it’s pretty vast. Obviously, the US government and most public sector organizations are active targets, face high security risks, and maintain very sensitive information. So the risk appetite is very low and the risks are very high. The other thing is that, like any big organization, there’s a lot of technology debt or legacy that they’re trying to manage as well. So how do they modernize their infrastructure to meet the rising threats? How do they evolve their capabilities to meet the mission needs, whether that’s better services for the citizenry, the warfighter, or intelligence? All of that needs to be modernized and continually innovated in order to meet current and pressing future needs.

Right now, the Federal Government is on a multi-year journey across the board around zero trust architectures, the buzzword that we all know, around how we change the way we do security. They’ve taken this task by the horns, putting out memoranda and executive orders to really get the entire US Government and the Department of Defense doing better around cybersecurity and making this fundamental shift. So I think security is obviously top of mind, but at the same time there are the needs of the government, whether that be better services across all of the civilian agencies for US citizens for the common good, as well as enabling the warfighter, enabling the intelligence community, enabling the US Government writ large to operate at the speeds that we see in the cloud and on the Internet. So there are technology challenges around how do we modernize the infrastructure. How do we take better advantage of the cloud? Right now, everyone is looking at how AI can or will transform how we’re doing things today.

John: I want to go into the AI issue in a second. Go back to cybersecurity. When it comes to cybersecurity, it’s fascinating what you said at the top of the show. You’re in a very interesting role in terms of the public sector-facing, Federal Government relationships that you have. As you said, the Federal Government is really a microcosm of what big corporations and corporations at large really need as well. When it comes to cybersecurity, Steve, help me understand this and help our listeners and viewers understand this. Is that a hardware issue, a software issue, or a people issue when it comes to the risks around ransomware and other cybersecurity risks and trying to create more resilient organizations?

Steve: So the answer is yes, and then some. It’s a people, process, and technology issue.

John: Okay, fair enough.

Steve: Across the technologies that we’re using in our everyday lives, from your phone, desktop, and laptop to the cloud, and to everything in the network in between, there are fundamental technologies that can either be abused by the adversary or be enhanced to help better protect the organization. There are the right processes and procedures to make sure that you’re accurately expressing what the risks are, you’re deploying systems in a secure way, and you’re building the right processes and procedures for allowing data to flow, authorizing access, and authorizing transactions. And ultimately, it’s people, both with training, as well as having the right cybersecurity technologists, architects, and personnel on your teams, as well as how we educate the employees, the partners, and the customers to better protect themselves, because this is an all-in game. We are all part of how we provide better security together, and that cooperation between the data owner and the data consumer is about making sure that we’re creating the right contract, if you will, for how best to protect data throughout its lifecycle.

So to answer your question, it’s all of the above, but again, it’s about who’s responsible and where those innovations can happen, and that can happen across the board. Sometimes better processes can help reduce risk by identifying problems earlier in the lifecycle or having the right controls in place to mitigate against some future risk. Many times it’s training your employees to not click the link every time. So it’s a combination of all of those that have to work in tandem, bringing the right technology to bear, bringing the right training to bear to operate, and providing the right risk management for an organization or a particular transaction.

John: So as you’re saying here if I’m understanding, the soup really is not going to taste that great unless all the elements are included there appropriately?

Steve: Absolutely.

John: You mentioned AI. Let’s go into that. What’s AI’s impact on the Federal Government? And really, more broadly, on all of us, in terms of enterprise organizations and the public sector. If AI is a baseball game, are we just on the way to the ballpark, or is this the top of the first inning and we don’t really know where this thing’s going to fall out? Where do you foresee this evolving in the years ahead?

Steve: Well, it’s a really interesting question because it’s hard to put your finger on where we are in that game. In some respects, we’re already into the final innings, in the sense that there are things you’re using today, every day, that have AI involved, and oftentimes you’re not aware of it. When you go online to purchase something and you get a recommendation because of your buying habits or because people like you have been buying something, that’s AI. The gifts I buy my kids for their birthday, that’s driven by AI behind the scenes. The car that you’re driving, when it avoids a collision, is using object recognition. It’s using AI.

So it’s already very much pervasive in our everyday lives even if we’re not aware. So on that side, yes, it’s already well adopted and deployed out there. On the other hand, what we’re starting to see is AI getting into places it hasn’t classically been, and I will pick on two areas just to show the extremes. So there’s the sexy part everyone’s talking about. I want AI to solve these problems. I want the chatbots, the generative AIs, to go solve hard problems, from tracking ships in the ocean to dynamic supply chains, to being able to do really quick facial or object recognition. There’s a lot of what we call the sexy part of AI.

It’s important and those will affect everyday missions, and we’ve seen great examples. I’ll pick on one from the public sector, where the forest service did a proof of concept with an AI to actually have drones fly through the national forest looking for blight, a disease in the trees. One of the challenges is that you have a lone forest ranger walking the trails to check if there are any diseases prevalent in certain areas of the forest. Obviously, you’re only getting a small sample size and you have to wait for them to actually walk those trails. By deploying a set of drones that had object recognition, or blight recognition, software on board that could take images and fly around in a coordinated pattern, number one, they were able to get a much larger sample of the overall forest.

Two, they had a much quicker time to detect blight and contain it, because they were able to recognize it deep within the forest before it became pervasive. So we’ve seen examples of the mission being affected by AI. On the other side, I think some of the biggest benefits, financially and to organizations, are going to be on the non-sexy stuff: the manual business processes or optimizations, contract management, ERP, things that are very document-heavy like compliance and supply chain management. Things that have a lot of human steps in the loop that don’t need to be, where we can automate a lot of that using AI tools.

So, things like large language models and this new approach called RAG, retrieval-augmented generation, are really about how we automate those legacy processes that are foundational to the core business. Every large organization, whether you’re the US Government, a construction company, a manufacturing organization, or a bank, has to deal with a lot of paper, or nowadays PDFs or other kinds of document types. They have to be processed, and you have to make sure, number one, do they fit the formats, or are they the right format? Do they have the right codes listed?

And that’s before anyone even looks at the actual content. Is this the right information? A lot of that is time-consuming, it’s laborious, and it ultimately is not a very efficient process. That’s where we’re seeing some really amazing applications of generative AI and large language models: automating those legacy processes to speed things up, whether it be manufacturing, reducing the time to meet your conformance and compliance requirements, or complex supply chain and logistics management.

So the back-end process is where I think the next major revolution in the benefits of AI will be seen. Ultimately, I believe that many organizations will see the biggest ROI and total-cost-of-ownership reduction in the application of AI to those processes, and then better customer service by giving them a chatbot. Although those chatbots are sexy and fun, the bigger ROI is going to be in automating and using AI on those back-end processes today.
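To make the retrieval-augmented generation (RAG) idea Steve describes a little more concrete, here is a minimal sketch in Python. It is illustrative only: the toy bag-of-words retrieval stands in for a real embedding model and vector store, and `call_llm` is a hypothetical placeholder for whatever generative model an organization has approved, not any specific product mentioned in the interview.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for document checks.
# Toy retrieval uses bag-of-words cosine similarity; in practice you would use
# a real embedding model and a vector store. `call_llm` is a hypothetical
# stand-in for an approved generative model, not a real API.

import math
from collections import Counter


def bow_vector(text: str) -> Counter:
    """Very small bag-of-words 'embedding' for illustration only."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k document chunks most similar to the question."""
    q = bow_vector(question)
    return sorted(chunks, key=lambda c: cosine(q, bow_vector(c)), reverse=True)[:k]


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your organization's approved model."""
    return f"[model response to a {len(prompt)}-character prompt]"


def check_document(question: str, chunks: list[str]) -> str:
    """Ground the model's answer in the retrieved chunks only."""
    context = "\n---\n".join(retrieve(question, chunks))
    prompt = (
        "Answer strictly from the context below. If the answer is not present, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    contract_chunks = [
        "Section 4: All invoices must list the supplier code and NAICS code.",
        "Section 7: Deliverables are due within 30 days of contract award.",
        "Appendix B: Points of contact and escalation procedures.",
    ]
    print(check_document("Does the document require a supplier code?", contract_chunks))
```

The design point is the one Steve makes: the model is asked to answer only from the organization's own documents, which is what makes this pattern useful for format and compliance checks on document-heavy back-office work.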

John: About the boogeyman worry, the people that put out the negative possible outcomes of AI, will there be enough ways to protect ourselves against some of those doomsday-type prognosticators that say, “Oh, AI is going to enable robots to one day outthink humans and take over all of us”? I’m not a doomsday type of guy. I assume that, just like the internet, which enabled and created all these wonderful things and still does, and then brilliant people like you helped create safeguards and guardrails to protect us from cybersecurity ransomware and all sorts of other types of attacks, a similar type of analogy is going to happen with AI. It’s going to evolve, and there’ll be guardrails and safety precautions put in to protect us from the doomsday type of outcomes. Is that how you see it, or is there more to it?

Steve: So there are a couple of ways to answer that, John. Let’s look at it from one group of naysayers, which is, “Oh, the AI is going to take my job,” and there are absolutely going to be functions of people’s jobs that will get automated by AI. Just like automated elevators took out a whole group of people who used to flip the button in the elevators 100 years ago. At the same time, AI affords new opportunities. A lot of [inaudible], like think about a maintenance worker now working on those AIs.

So, same idea. There’s going to be new opportunities. What we’re finding in the vast majority of cases is that AI is making the workforce more efficient. It’s not that it’s replacing the workforce. What it’s doing is allowing them to spend less time on the busy work and more time on what needs human thought or the checks that you want to have at the end of the story so you can get faster to a decision. That’s really at the core of what AI can do. It can give you the right information or a better collection of the information to give you better, more accurate information to make a decision upon or to move a process forward.

So I think what we’ll find is that, especially in organizations where we’re trying to get things to market faster in order to be able to increase revenues, those sort of basic tenets of how products are delivered to market, if I can speed it up using AI and these other machine learning technologies to make my workforce more efficient, then on the one hand, yes, I’m making my workforce more efficient.

I may not have to hire as much workforce, but at the same time, I’m going to need a different kind of workforce to manage those AIs. So when you think about where we should be spending time educating, using AI, using machine learning algorithms, learning the new algorithmic programming frameworks is going to be critical for that next generation of the workforce. So that’s one area of naysayers. I don’t think it’s going to replace everyone’s job, but it’s definitely going to make people more efficient, or people will have to focus a little differently on how they apply their job.

John: Understood.

Steve: The other side is how do we make sure that like the doomsday scenario, again, will there be a Terminator? No, but how do we make sure that the AIs don’t go off-script? So there are a couple of things that are going on. There’s a lot of talk now about the ethical use of AI, and really, it’s a lot of different ways of saying, how do we have proper governance? So it’s how we use the AI and how we trust the AI are going to be absolutely critical. What people often forget is that everyone’s talking about the last piece of the puzzle, the AI.

I’m using the AI to drive my car. What we don’t often have visibility into is the years’ worth of work and the billions of data points that fed that AI engine. That’s where we need that transparency, and we need to apply that trust so that when it makes a decision, turn left, it’s built on a trusted model that I can look back on. Somebody has the oversight to say, “We know the data sets that went in there.” And we’re seeing examples; again, we’re all learning as we go here. The joke that some people tell around a lot of these AI deployments is that we’re building the ship while it’s out to sea. But there have been examples where AIs have either gotten it wrong or started to skew, or, the term they use now, hallucinate. A lot of that has to do with the data that’s driving that AI.

So this is where those controls, that governance, which we’re calling ethical use, and how we have governance models for AI and data management, are helping us get a better understanding of what data went in, what weightings are being put on the results, and how we correct those if they start to go astray. There’s a lot of research going on into how to do that better and faster, but we already have a good understanding of what’s needed from an operational perspective. The last piece is, at the end of the day, just like any other piece of automation or tool we use, you put safeguards in place. Some of them are within the tool and some of them are around the tool.

We have autonomous vehicles and we have corrective steering and we have these controls, but you still have the person in the car driving. You still have the police that will come when you speed. There are still compensating controls that surround it to make sure that even when it does go astray, there are consequences, or at least controls in place to help minimize the impact when these technologies go wrong. AI is like any tool. It’s a hammer. You can use it to build a house. You can also use it to break a house, and so it’s how we apply those tools.

John: So, when we’re in an airplane and they put it on autopilot, is that AI-driven as well? We just never called it that, but that’s all, again, AI-driven?

Steve: Yeah. It’s a broader definition of AI machine learning. A lot of that went into the algorithms that tell the plane how to fly a specific course and when to notify if something has gone wrong.

John: For our listeners and viewers who’ve just joined us, we’ve got Steve Orrin. He’s the federal CTO for Intel. To find Steve and the important work they’re doing at Intel, you can go to www.intel.com. Steve, you sit in an unbelievably fascinating seat in that, if you turn on Bloomberg on any given day or read the Wall Street Journal or the New York Times, the hottest topics of recent years are chips and chip manufacturing, chip shortages, et cetera, how chips are evolving, that whole chip race, and Intel’s part of that for sure. It’s an important horse in that race. Then there’s cybersecurity, and nation-states that are bad actors in that whole area. Then you also have the issue of AI and the rise of AI and Sam Altman and Elon Musk. On any given day, how are you juggling these topics, and how do they interrelate with each other so you can make it through the day and not be weighed down by one or the other? This is a fascinating trilogy of worlds that you live in.

Steve: It is. What makes it exciting is that every day is an interesting challenge. That’s what keeps me excited about what I do. We do get presented with some really hard challenges. I’d say it goes back to some advice I was given very early in my career. I had the opportunity to meet Jim Collins early in my career. My CEO at the time was friends with him and brought him in to talk to her executives. I’d read ‘Good to Great’, and one of the things that has really stuck with me from his books and from the conversations we had is: surround yourself with people smarter than you are and listen to them.

I’ve tried to make that part of my modus operandi throughout my career, at all my companies and in my roles at Intel: make sure you have really smart people on your staff. Make sure you’re listening to them. You don’t always have to agree with them, but you want to listen to them. For me, I have experts on my team, our AI experts, and not just on the current state, but also looking out to the future. I think one great example of that, just picking on the AI one: I had two different people on my team. One is an AI researcher, and the other is more of an applied AI and performance guru. Both of them came to me within a week of each other, 6 months before anyone saw the term ChatGPT or generative AI, and they said, “Hey, Steve, I want to tell you about this cool little thing that’s coming.

It’s called Transformers, and here’s how it works, and here’s what you can do with it. It’s this really cool tech, a new way of approaching the AI problem.” Now, for those who are not aware, the T in ChatGPT stands for transformer; they were talking about the precursor to ChatGPT. So they were my bell cows. They were out there seeing what’s going on, researching the next generation of algorithms, what people were trying to do, what became OpenAI and these others. They saw it, and they were able to give me that insight long before, so I had at least a chance to come up to speed before the whole wave crashed over everyone else. And the same thing with cybersecurity.

I have cybersecurity experts on my team who have a deep understanding of the technical things that an attacker or malware can do to a system, as well as folks that are focused on what the adversaries are doing. What are the botnet crews doing right now? What are the ransomware gangs doing? What’s the next technique they’re taking advantage of, and how are we seeing collaboration between the different adversary groups to create better malware? Having these experts on my team, listening to them, and having them regularly keep me up to speed is how I can stay abreast of the volume and complexity of the different problem sets that our customers face. So I really would echo that it’s about making sure you surround yourself with those smart people and have the time to spend with them to let them tell you what they’re seeing, because they’re seeing the future.

John: That’s brilliant, and that’s great advice for any leader of any organization. Surround yourself with smarter people. Speaking of which, when you are advising or working with the Federal Government, since you interface with them, one of the many hats you wear is also as an advisor or researcher with NIST and other government agencies. How much of an advisory role are you playing with the Federal Government besides interrelating with them as an Intel employee? Are they listening to you when you say, “Hey, you might have a void in your security. Here is how I would help solve it if I were you”? How much of that back and forth goes on between experts like you and the Federal Government on a regular basis, that you’re allowed to talk about?

Steve: John, I think the interesting thing there, and this is one of the reasons I’ve really loved working at Intel for the 20 years or so I’ve been here, is that unlike other product vendors, whose job is to sell their product or their service, whether it be to the government or a bank, Intel for the most part is an ingredient play. We’re inside the things you buy, but you don’t often go to Intel to buy them. So when I advise the Federal Government, or even the commercial customers I deal with, it’s more of a tech advisory, subject matter expert kind of role.

I’m not trying to tell them to go buy this hardware platform or that one. They’ll buy it from Dell or HP. They’ll get their cloud services from Amazon, Azure, and Google. What I’m trying to help them understand is what is the right technology, what are the right innovations that they should be leveraging from that ecosystem. So in some respects, as a somewhat neutral party in that conversation, my advisory role to the government is really to help them understand the technology. What they end up buying will have Intel inside, and so the value will come later.

Really, my goal is that if I get them to modernize to the cloud, then they’ll consume the services from the cloud providers, and ultimately that will lead to Intel revenue from the servers that the cloud is hosting. My role is really to help them adopt the technology, help them identify, like I said, gaps in their security where they could use new innovations, or features that are present in the hardware that they may not be aware of, flipping those on to help better protect their systems. That’s really what my role is.

John: And then it becomes more complicated because you’re not only advising on state-of-the-art and best technologies, but you are also now being forced, like other leaders are, to take into account geopolitical risks as well.

Steve: I leave a lot of the geopolitics to our global affairs team, and the government knows how to deal with its geopolitical issues. A lot of what I’m focused on is how to help them with what those risks end up manifesting as.

John: Understood. That’s so interesting. Talk a little bit about Intel. Intel, like you said, we all got used to that wonderful little jingle that indicated that Intel was inside our products, and of course, being powered by Intel. Where are we now in terms of the evolution, the chip race? Intel has gone through a very public reboot with the new CEO, who’s not new anymore, and is an important horse in the chip race. How are things going now? And how does that interrelate with the work that you do in AI and cybersecurity and things of that sort?

Steve: I think it’s interesting. There are a lot of things going on at Intel, and a lot of them are behind the scenes, though obviously some things have come out. One thing that people know is our chips; like you said, we’re inside. One of the cool things is we are literally inside almost everything. Your laptops and desktops, people know; the servers and the cloud, people know; but also the network architecture. So when you’re talking across the internet, those switches and routers are often running on Intel. The 5G evolution that everyone is talking about, those base stations, that core network, is also running on Intel. So what you find is that Intel is one of the few places that sits holistically end to end. So we get to see a good view of all the different parts of the transaction or workflow.

The other thing that Intel is not as well known for is our software. We have over 19,000 software developers producing software that runs on Intel platforms, whether they’re building open source tools and committing to open source projects, helping commercial vendors better run their software on top of Intel, developing compilers, developer tools, and open frameworks to help people better do AI design across heterogeneous computing architectures, or building security services that organizations can take advantage of or that can enhance what they’re already doing. That software layer is absolutely critical, because if you think about it, hardware is the foundation and the software is what turns it on, and Intel has been a major player in that software.

That’s one of the key things when Pat came on and brought in Greg Lavender as CTO, who had also been his CTO at VMware: they really started to shine a light on our software capabilities. It was sort of there in the background. It’s coming to the fore because a lot of what we do today is really leveraging software. When you look at how do I do these things faster, how do I get the most out of my AI to really get that value? Well, if the software can optimize for the hardware and make it run faster, make the data flow quicker, make the decisions faster, then we all get the benefit. So I think one big change has been this renewed focus on software.

Then the other, and this has been announced, is the foundry: opening the doors to Intel’s process to allow third parties to come in and build their chips on our foundry system, on our capabilities, and giving access to the state of the art to the broader technology community. That, again, is a shift in how Intel has been coming to market, really opening the door. And, going back to the public sector, it’s about the domestication, or the Westernization, to rebalance the supply chain. Regardless of the geopolitical state at any given time, ultimately you don’t want to have too much of your supply chain in any one area, and so if we can rebalance it with the West, with both the EU and the US, it really balances the supply chain for better availability, so that we can meet the demands of both current innovations, whether it be IoT or AI, as well as what we see coming, as more and more compute is going to need to be done as it becomes more pervasive throughout all of our lives.

John: That makes total sense. Going back to AI, is AI a net positive for chief information officers and chief information security officers? Is it going to better enhance what they do in terms of protecting the organization and enterprises that they’re involved with?

Steve: I think for those who embrace it, and that’s going to be one of the key things. Some are going to be looking for ways to use AI to make their teams more efficient and to help identify threats quicker. I think one of the biggest wins of AI is, if I can automate the 80% of the stupid stuff, the random attacks that are always happening, the yet-another-ransomware, can I block that using AI and then let my already underfunded small team focus on the 20% hard problems, that nation-state adversary or that brand new malware no one’s ever seen before? That’s where you’re going to get some of the biggest efficiencies and biggest benefits for CISOs and CIOs.

The CIO organizations that are looking at AI as something that’s going to cause them trouble, thinking “I’ve got to figure out what to do with it,” and taking a negative view, are going to be the ones that are impacted most by not adopting, and they’ll find themselves falling behind. What we’re seeing is that those that embrace it, both for how they use it and for how best to secure it, are the ones getting ahead, because whether they like it or not, the business units are adopting it. It’s happening.

Every major sector is adopting AI in some form, across every industry, and so the CIO organizations that embrace it and figure out how to get ahead of that curve, how to help the business units transform to AI-based applications and do it securely, are going to be the ones in the better position, because the business units are doing it anyway. It’s almost like the cloud: those that said, “Oh, no, we’re not going to do cloud,” well, the line of business went out and bought their own cloud, and suddenly now they’re managing shadow IT. The same thing is going to be true with AI. If you get ahead of the curve now, you’re buying yourself a lot more in the future.
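As a rough illustration of the 80/20 triage Steve describes, automating the routine alerts so a small team can focus on the novel ones, here is a minimal Python sketch. The rules, field names, hashes, and thresholds are all hypothetical examples, not any vendor's product or Intel's approach, and the anomaly score is assumed to come from a separate ML model.

```python
# Illustrative sketch of 80/20 alert triage: automatically handle alerts that
# match well-known patterns so analysts can focus on novel ones. All values,
# field names, and thresholds below are hypothetical.

from dataclasses import dataclass

KNOWN_BAD_HASHES = {"e99a18c428cb38d5f260853678922e03"}      # hypothetical commodity-malware hashes
KNOWN_BENIGN_PROCESSES = {"backup_agent.exe", "av_update.exe"}  # hypothetical routine-noise sources


@dataclass
class Alert:
    process: str
    file_hash: str
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly unusual), assumed to come from an ML model


def triage(alert: Alert) -> str:
    """Return an action: auto-block, auto-close, escalate, or queue for review."""
    if alert.file_hash in KNOWN_BAD_HASHES:
        return "auto-block"        # commodity malware: handle automatically, no analyst time spent
    if alert.process in KNOWN_BENIGN_PROCESSES and alert.anomaly_score < 0.3:
        return "auto-close"        # routine noise: close without human review
    if alert.anomaly_score >= 0.8:
        return "escalate"          # novel behavior: send to the small expert team
    return "queue-for-review"      # everything else: low-priority human review


if __name__ == "__main__":
    alerts = [
        Alert("backup_agent.exe", "aaa", 0.1),
        Alert("unknown.exe", "e99a18c428cb38d5f260853678922e03", 0.9),
        Alert("powershell.exe", "bbb", 0.95),
    ]
    for a in alerts:
        print(a.process, "->", triage(a))
```

The point of the sketch is the division of labor: the automated path absorbs the high-volume, well-understood attacks, and only the genuinely unusual activity reaches the underfunded human team.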

John: Is that a general principle that you would basically profess to anyone you’re coaching, in terms of businesses, federal government, public sector people: embrace it because it’s here to stay, so you might as well get with it early instead of being behind the eight ball as time goes on?

Steve: Absolutely. Embrace it. Learn how it works. Bring in pilots. Start getting your people familiar with the technology. One of the things that I’ve often recommended for CIOs as they work with whether it be public sector or private sector, is creating that team of teams, that diverse group of not just the technologists that are working on the problem, but bringing in the business people, bringing in legal and compliance, bringing in the finance people. Have everyone along for the ride for the requirements and design, because what you’ll end up on the other side will be much better aligned to your business operations.

It will meet your security requirements and your compliance requirements. They all know their domains; the compliance people will know what data needs to be protected. If you have that information from the get-go, you’ll know what data should be fed into the model or what protections you have to put on the model once it’s done with its training. So having that information and having that team working together from the beginning leads to a lot bigger successes. So those that embrace it, but also pull in from across the organization, we’re finding are going to be much more successful downstream as they start to go to deployment and scale of these new solutions and innovations.

John: You have children, Steve, and you seem like a glass is half full person rather than a glass-is-half-empty person. When you’re speaking with your children about the future, their future, the future of the United States, obviously the future of the planet, are you net positive, given that you’re one of these brilliant type of people that almost knows too much, but that’s okay. Are you able to share what you can share with them in terms of hope in the future? Are you net positive and hopeful that technology is going to take us to a whole new realm and that their future is brighter than ever? Or where do you fall out on all the different topics that you’re an expert in that you have to constantly reprioritize and juggle and come out with opinions on where we’re going and directions on where we’re going?

Steve: So, it’s a really interesting question. My children are still young. [crosstalk] They’re young, so we’re not having deep conversations about ransomware yet, although I’d love for that day to come. I think that I am very positive on what we’re seeing as the opportunity for kids today as not just cloud natives or technology natives, but also the way they approach technology is so much different than when we were kids. For us, it was a new thing. If I get a hold of one for a little bit, I can play with it. I want to understand it. Now it’s a part of their everyday lives. My son knows how to get into the car, turn on the podcast, and listen to his favorite stories without me even getting involved. He knows how to do that. He’s five, and he knows how to go right to the right channel and listen to the podcast of children’s stories he wants to listen to.

John: He’s a technology native. He’s a native.

Steve: Exactly. In one respect, they’ve got the benefit of it’s all there available to them. I think one of the things I’m hoping to do as they grow up is to help them understand, peel back the cover a little bit, understand how that actually works. Because if you understand how things work, then whether you’re going to go into security or any other field, if you have a foundation on how the technology operates, it’s going to serve you as you think about how to apply it to new and novel situations. While they’re definitely technology natives, when they’re a little older, I’m hoping to get them a little bit more into figuring out what makes it work. That’s what always made me excited is, I love this cool thing, but what’s underneath the covers? How does the remote control the TV? Can I put it back together if I took it apart? Those are the kind of questions I want them to ask.

John: That’s going to be fun for them, I’ll tell you that. To have you as a dad, I can see that’s going to be a ton of fun. Predictions for the future, Steve, with regards to cybersecurity, so many people still hear about the boogeyman cybersecurity, and they get frozen. They get frozen on what they should be doing, how they should be protecting themselves or the organizations they’re with. Can you share a couple of your predictions? What’s going to be changing in the cybersecurity landscape in the months and years ahead?

Steve: John, let me look at it from two perspectives. I think on the positive side, organizations, especially as they start looking at both zero trust architecture and this idea of dynamic risk management, are going to find that they’re getting faster at reducing the overall risk, being able to meet the ever-changing challenge, and getting out of the firefight. Part of that is, instead of thinking of it as “I’ve got to protect this system or this network,” thinking more about the transaction or the data, independent of the infrastructure. That allows them to be much more nimble in how and where they apply their security controls, and to have security follow the bit, as opposed to, “Well, it was secure on this system; when it got off there, I don’t know.” That’s not going to fly anymore.

That evolution toward dynamic risk management and a zero trust approach is definitely going to help organizations, and things like machine learning, AI, and other technologies are going to help them get ahead of that curve. I think we’ll see a much more nimble and dynamic security architecture that will be able to respond to threats, and in many cases be proactive, versus the firefighting we see today. On the other side of the table, let’s also face the reality that the adversaries are using those same tools. They’re using AI against us. They’re collaborating with each other. They’re sharing information. One thing that I’ve been guiding CIOs on for a number of years is that we have to be better at sharing amongst ourselves, not just “Did you see this compromise?”, which is where we are today: I saw this vulnerability.

I can share that with everyone so that we can all find it ourselves, but also sharing best practices for how we mitigated it, and sharing our policies. Let’s face it: we all have ERP, and we all have to secure ERP. Wouldn’t it be novel if we all started sharing our best practices for doing that, the way the adversaries are sharing best practices for how they got into systems? I think that collaboration is going to be necessary, because the adversaries are absolutely sharing best practices, sharing code, and collaborating on these malware campaigns. That’s one of the challenges for the future: as defenders of our infrastructure and our critical systems, our banking, financial, and private data, how do we all collaborate better? How do we leverage these technologies to meet, and hopefully in the future get ahead of, the adversary threat?
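For readers unfamiliar with the zero trust idea Steve references, here is a minimal, hypothetical Python sketch of a per-request check in which identity and device posture, not network location, decide access. It is a simplified illustration under assumed policy fields and rules, not a description of any specific government or Intel architecture.

```python
# Minimal sketch of a zero-trust-style authorization check: every request is
# evaluated on identity and device posture, and network location is never
# treated as a trust signal. Policy fields and rules are hypothetical.

from dataclasses import dataclass


@dataclass
class Request:
    user: str
    user_verified_mfa: bool        # identity strongly verified (e.g., MFA) for this request
    device_compliant: bool         # device posture: patched, encrypted, managed
    data_sensitivity: str          # "public", "internal", or "restricted"
    from_corporate_network: bool   # intentionally never consulted below


def authorize(req: Request) -> bool:
    """Deny by default; grant only when identity and device posture satisfy the data's policy."""
    if not req.user_verified_mfa:
        return False               # every request must re-verify identity, no implicit trust
    if req.data_sensitivity == "restricted" and not req.device_compliant:
        return False               # sensitive data additionally requires a healthy device
    return True                    # note: being on the corporate network grants nothing extra


if __name__ == "__main__":
    print(authorize(Request("analyst", True, True, "restricted", False)))   # True: verified user, healthy device
    print(authorize(Request("analyst", True, False, "restricted", True)))   # False: on-net but non-compliant device
```

The sketch captures the "security follows the data" point: the decision is made per request against the data's policy, so it holds regardless of which system or network the request comes from.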

John: I love it. Thank you for sharing your thoughts with us today. Steve, you’re a fascinating guest. You are always welcome back on the Impact podcast to share future updates on technologies, AI, cybersecurity, and everything that you’re working on at Intel, another great American corporation. Man, we’re just lucky to have you. For our listeners and viewers who want to find Steve and want to find his colleagues and all the great important work they’re doing, please go to www.intel.com. Steve Orrin, you are a gem. Thank gosh you are advising our government and working with our government on making the world a better and more safe, sustainable place. Thank you again for being on the Impact podcast today.

Steve: Thank you, John. It was a pleasure to be here.

John: This edition of The Impact Podcast is brought to you by Engage. Engage is a digital booking platform revolutionizing the talent booking industry. With thousands of athletes, celebrities, entrepreneurs, and business leaders, Engage is the go-to spot for booking talent, for speeches, custom experiences, live streams, and much more. For more information on Engage or to book talent today, visit letsengage.com. This edition of the Impact podcast is brought to you by ERI. ERI has a mission to protect people, the planet, and your privacy, and is the largest fully integrated IT and electronics asset disposition provider and cybersecurity-focused hardware destruction company in the United States and maybe even the world. For more information on how ERI can help your business properly dispose of outdated electronic hardware devices, please visit eridirect.com.