UW–Madison researcher warns of dual-use AI risks from deepfakes to drone swarms

October 10, 2025 | Sheboygan City, Sheboygan County, Wisconsin

Dave Schroeder, director for national initiatives at the University of Wisconsin–Madison and an officer in the Wisconsin Army National Guard, told a public audience at Mead Public Library that recent advances in generative artificial intelligence present both powerful tools and new national-security risks.

Schroeder said generative AI can “sift through massive amounts of data” to find patterns useful to medicine and intelligence, but warned the same capabilities can enable mass surveillance, automated cyberattacks and realistic deepfakes that erode public trust. “AI is just that. It mimics human intelligence,” Schroeder said.

Schroeder illustrated the technology's speed and dual use with several examples from research and industry. He described a Wisconsin company, Rake Labs, whose CEO used an image model to quickly locate a suspected surveillance balloon in commercial satellite imagery. He also demonstrated AI-generated video clips and called attention to new commercial products, referenced in his talk as "Sora 2" and other tools, that make realistic video and audio creation widely accessible.

The lecture covered five principal security concerns Schroeder said policymakers and the public should weigh:

- Deepfakes and disinformation: Schroeder warned that generative models can produce convincing text, images, audio and video, and be used to run bot networks that amplify false narratives. He said foreign adversaries have used these tactics to seed and escalate social-media controversies.

- Autonomous and semi-autonomous small unmanned aerial systems (small UAS): Using examples from the Russia–Ukraine conflict, Schroeder described how low-cost, automatable drones and drone swarms change operational dynamics by being inexpensive, hard to interdict and capable of conducting reconnaissance or attacks. He said such drones can cost “a thousand [to] a few thousand dollars per drone.”

- Cybersecurity and AI-enabled offensive operations: Schroeder described AI agents that can scan networks, identify vulnerabilities and report findings to their operators, effectively acting as automated reconnaissance for attackers.

- Command-and-control and high-consequence automation: He raised the risks of overreliance on AI in time-sensitive military decision-making, noting that models can produce deceptive or incorrect outputs in testing and that frontier models “will lie” in controlled scenarios.

- Concentration of compute and geopolitical competition: Schroeder summarized investment growth he cited in the talk (from $4 billion to $109 billion to $350 billion across recent years) and said export controls on specialized processors, and large data-center builds, are part of a broader competition with China for AI leadership.

On defensive responses, Schroeder said U.S. efforts include programs to develop domestically sourced small UAS (he named a program he called Replicator 2) and export controls intended to restrict access to specialized GPUs used to train large models. He also said some work aims to run models locally on devices (an approach he attributed to Apple-style on-device efforts) to reduce cloud dependence and privacy exposure.

During a question-and-answer period, audience members raised recent drone-swarm incidents in Denmark and unexplained aerial sightings in New Jersey. Schroeder said some widely reported sightings were later identified as helicopters, commercial aircraft or observational errors, but that at minimum the publicly visible incidents point to adversary testing and “an area of concern.” He repeated that the U.S. lacks a full domestic small-UAS industrial ecosystem and that smaller innovative firms and startups will be central to catching up.

Schroeder closed by urging communities to preserve trust in institutions and to weigh tradeoffs between rapid AI development and safety measures. He said these tradeoffs — involving private industry, governments and global competition — will shape how widely and safely AI is deployed in civilian and military contexts.
