Katrina Mulligan '07 on artificial intelligence and the future of national security

March 3, 2026

With nearly 20 years of experience in senior roles across the Department of Defense (DoD), Department of Justice, and National Security Council, Katrina Mulligan ’07 has spent her career in rooms where consequential decisions are made.

Now, as the first head of national security policy and partnerships at OpenAI, Mulligan is helping shape how artificial intelligence will be deployed for defense and intelligence purposes.

Mulligan began her legal studies at UCLA School of Law on the heels of working on Barack Obama's U.S. Senate campaign, an experience that “fundamentally shaped” her time and perspective in law school. Throughout law school, she was mentored by Jack Beard, a UCLA Law lecturer who was a student favorite and a renowned expert in national security and international law. It was Beard who encouraged her to pursue a first-of-its-kind, semester-long externship with the American Bar Association's Rule of Law program in Amman, Jordan, where she supported human rights programs in Iraq — an experience that cemented her passion for public service.

In this conversation, Mulligan shares her perspective on the critical challenge of building trust in AI to ensure that democratic nations can responsibly harness its transformational potential.

Now that you’re at OpenAI, what does your day-to-day look like?

On any given day, I might be designing guardrails and governance structures for how our technology is used, structuring agreements with DoD and national security customers, or negotiating terms that balance innovation and risk. Underlying all of it is long-term thinking about geopolitical implications — what does it mean for democratic nations if we get this right, or wrong?

What was it like making the leap from traditional government service to an AI company, and how did you recognize that this was the right moment to help shape how AI would be used in national security?

This is the first time in American history that a technology of this consequence is being developed exclusively by the private sector, with no direct government involvement. If you think back to every other significant technological leap — electricity, nuclear fission, the internet, the Human Genome Project, GPS — the government was a direct stakeholder in each of those advances. But this time is different.

Watching AI's astonishing trajectory and being deeply familiar with how hard it is to create momentum in government in the absence of an emergency, I became convinced that the most important national security decisions of this decade are going to be made not in government but at AI companies. And I knew I had something to contribute to that discussion.

What excites you most about AI's potential in national security and public service?

It’s the promise of this technology to improve how government delivers results for the American people. I think that is critically important to the continued legitimacy of many of our institutions, which are not getting more resources these days. If they are going to improve the services they provide and the outcomes for people, they must find new ways to do that. AI is the only resource I know of that can deliver those kinds of outcomes.

What keeps you up at night when it comes to this technology?

That despite the mounting evidence that AI is going to be transformational, there are still a lot of skeptics out there who do not think the way they work needs to change. That said, I'm optimistic that we're close to the tipping point. After all, there were skeptics about electricity, the internet, and personal computers, and the government eventually realized they were essential to public service, too!

From your vantage point bridging Silicon Valley and the national security community, what's the most critical challenge right now?

The biggest issue, by far, is trust in AI. In the United States, trust in AI is consistently around 30%. And in much of the rest of the world, including China, trust in AI is much higher — above 70%. That's a massive gap. And what it tells me is that democracies could win the race to artificial general intelligence on the technology side and lose the race on adoption.

It’s important to think of AI as a foundational, general-purpose technology, like electricity. Its impact will come from integration into existing systems — health care, logistics, defense, science — in ways that are often invisible but nonetheless transformational.

If you believe that AI has the potential to drive transformational impact, and I do, you should want the U.S. government and our national security community to move quickly to harness it for their mission. But that's not what I'm seeing today. Our government is moving slower than industry and far slower than our adversaries. We need to change that, and doing so begins with building trust in the technology and its ability to be safely and effectively used for government purposes.

How has your legal training shaped your approach to this work?

Law school trains you to operate in ambiguity. In national security and AI, you rarely have perfect information — you have to reason from first principles and make defensible decisions under uncertainty. I use that training every day. UCLA Law also gave me a deep appreciation for how institutions are built — and how they fail. Courses in national security and international law taught me that rules don’t just constrain power; they legitimize it. That insight is critical when you’re deploying powerful technologies inside democratic systems.
