Shield AI’s founder on death, drones in Ukraine, and the AI weapon ‘no one wants’



About two months ago, Shield AI co-founder Brandon Tseng and one of his employees were in an Uber weaving through Kyiv, Ukraine. They were headed to a meeting with military officials to pitch the company’s AI pilot systems and drones when suddenly the employee showed Tseng a warning on his phone: Russian bombs were incoming. Tseng met his potential demise with a shrug. “If it’s your time to go,” he said, “then it’s your time to go.”

If anything, Tseng, a former Navy SEAL, was itching for more action. Shield AI employees had previously been to much more dangerous areas in Ukraine, training troops on its software and drones. “I’m quite jealous of where they got to go,” Tseng said. “Just from an adventure standpoint.”

Tseng embodies the quiet machismo that pervades most defense tech founders. When I met him last month at the company’s Arlington office, he showed off a knife displayed there, engraved with the SEAL slogan “Suffer in silence.” The white walls, whose tops glowed with fluorescent lights (to look like a spaceship, Tseng said), were covered with slogans like “Do what honor dictates” and “Earn your shield every day.” I pointed out that they were pretty intense. “Are they?” Tseng replied.

In 2015, Tseng founded Shield AI alongside his brother, Ryan Tseng, an electrical engineer with multiple patents to his name, with a clear mission: “We built the world’s best AI pilot,” he said. “I want to put a million AI pilots in customers’ hands.”

To that end, he and his brother have raised over $1 billion from investors like Riot Ventures and the U.S. Innovative Technology Fund. The company develops AI software to make air vehicles autonomous, although Tseng said they want Shield AI’s software in underwater and surface systems as well. It also sells hardware, like its V-BAT drone.

Shield AI is also part of a rare class of defense tech startups: one that’s actually landed decently sized government contracts, like its $198 million contract from the Coast Guard this year. As if trying to position themselves for an even bigger future, the founders chose a new office surrounded by three floors of Raytheon, one of the major defense contractors. 

Ukraine: The lab for U.S. defense tech startups

September 16 was a sign of the changing times: Instead of making defense tech founders fly to the Capitol, put on their suits, and grovel to politicians, Washington, D.C., came to them. 

Members of the U.S. House Armed Services Committee gathered with Palantir CTO Shyam Sankar, Brandon Tseng, and executives from Skydio, Applied Intuition, and Saildrone at UC Santa Cruz’s Silicon Valley campus. They discussed U.S. Department of Defense (DoD) acquisition reform and, inevitably, the role of U.S. technology in Ukraine. It was the first public hearing the committee has held outside of Washington, D.C., since 2006.

Ukraine has “been a great laboratory,” Tseng told the policymakers. “What I think the Ukrainians have discovered is that they’re not going to use anything that doesn’t work on the battlefield, period.”

Defense tech founders, like Anduril co-founder Palmer Luckey and Skydio co-founder Adam Bry, have flocked to the embattled country to sell relatively new technology for a rapidly deteriorating battlefield. Unfortunately, not all U.S. tech is working. According to a Wall Street Journal report, drones from U.S. startups have almost universally failed to operate through electronic warfare in Ukraine, meaning the drones cease to work under Russia’s GPS-jamming technology.

“Ukraine is at war and people are being killed. But … you want to take those lessons learned,” Tseng told me a week later, reflecting on the hearing. “You don’t want to have to relearn any of those lessons. The United States should not want to relearn any of those lessons.”

Naturally, he’s confident that Shield AI’s drones have fared better in Ukraine than others because, he says, they can operate without relying on GPS. “We are working to get more drones over there based on the successes that we’ve had,” he said, although he declined to specify how many drones Shield AI has sent.

Terminator-like AI killers? Or “Ender’s Game”?

Tseng’s corner office is bare except for a framed copy of the Constitution, hanging crooked on the wall. He cited it as one of his biggest inspirations. “It’s not because we’re perfect, but because we aspire to these values that I would claim are perfect values,” he said. “That’s what matters most. We’re always marching in that direction.”

He straightened out the frame before brushing through an abbreviated history of warfare. Deterrence, he said, tends to happen when a radical new technology emerges, like the atom bomb, or stealth technology and GPS. AI, he said, will usher in the new era of deterrence — assuming the DoD funds it properly. “Private companies are putting more money towards AI and autonomy than any aggregate amount in the defense budget,” he said. 

The potential value of AI-related federal contracts ballooned to $4.6 billion in 2023 from $335 million in 2022, according to a report by the Brookings Institution. But that’s still a fraction of the over $70 billion that VCs invested in defense tech in roughly the same period, according to PitchBook.

Still, the biggest question of military AI use is not budget — it’s ethics. Founders and policymakers alike grapple with whether to allow completely autonomous weapons, meaning the AI itself decides when to kill. Lately, some founders’ rhetoric appears to be on the side of building such weapons.

A few days ago, for instance, Anduril’s Luckey claimed there was “a shadow campaign being waged in the United Nations right now by many of our adversaries” to trick Western countries into not aggressively pursuing AI. He implied that fully autonomous AI was no worse than land mines. He didn’t mention, however, that the U.S. is among over 160 nations that agreed to ban the use of anti-personnel land mines in the vast majority of places.

Tseng is firmly opposed to fully autonomous weapons. “I’ve had to make the moral decision about utilizing lethal force on the battlefield,” he said. “That is a human decision and it will always be a human decision. That is Shield AI’s standpoint. That is also the U.S. military’s standpoint.” 

He’s right that the U.S. military does not currently purchase fully autonomous weapons, although it does not ban companies from developing them. What if the U.S. changed its standpoint? “I think it’s a crazy hypothetical,” he answered. “Congress doesn’t want that. No one wants that.” 

So if he doesn’t foresee an army of Terminator-like killers, what does he envision? “A single person could command and control a million drones,” Tseng said. “There’s not a technological limitation on how much a single person could command effectively on the battlefield.”

It’s going to be akin to “Ender’s Game,” he said, referencing the 1985 sci-fi classic in which a child military commander directs legions of space fleets with the wave of a hand.

“Except instead of actual humans that he was commanding, it’ll be f—ing robots,” Tseng said.
