Category: What’s Your Big Idea

What’s Your Big Idea?
How can you use AI to develop an early warning system for issues, campaigns, and staff security?
Time to show off your stuff.

Why this question now
Nonprofit and civic organizations have always operated in contested space. You advocate for things powerful interests would rather leave alone. You organize people who have been told their voices don’t matter. You run campaigns that make someone uncomfortable enough to push back.
That has always been true. What is different in 2026 is the speed, the scale, and the sophistication of what pushing back looks like.
A coordinated narrative attack that used to take weeks to build can now be assembled overnight. A disinformation campaign that used to require resources only well-funded opposition could afford is now accessible to anyone with a credit card and a grudge. A staff member’s personal information that used to require real effort to surface can now be found, compiled, and weaponized in hours.
Most organizations are still running 2018 threat awareness on 2026 threats. The gap between what is being done to mission-driven organizations and what those organizations are equipped to detect is widening every cycle.
An early warning system does not close that gap by itself. But it changes the timeline. And in a threat environment where the first 48 hours determine whether you are managing a situation or reacting to a crisis, timeline is everything.
The three threat categories — what they are and how they usually arrive
Issue threats
These are attacks on your cause, your credibility, or your organizational narrative. They do not always look like attacks when they start.
They arrive as a sudden spike in negative commentary across platforms. As a coordinated pattern of similar talking points appearing independently from accounts that do not know each other. As a story that begins in a fringe outlet and migrates toward mainstream coverage faster than organic sharing would explain. As an old controversy resurfaced at a moment that seems too convenient to be accidental.
The traditional response is monitoring — someone on staff or a consultant checking mentions, flagging things that look concerning, escalating when something feels wrong. The problem is that human monitoring is retrospective. By the time something feels wrong it has usually already moved.
An AI-assisted early warning system watches for pattern changes, not just volume changes. It notices when the sentiment around a keyword shifts before the volume does. It flags when similar language appears across accounts that have no visible connection. It catches the leading edge of a coordinated campaign before the campaign has momentum.
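If you want to see what "pattern changes, not just volume changes" could look like in practice, here is a minimal sketch in Python. Everything in it is illustrative: it assumes mentions are already being collected somewhere, that each one carries a sentiment score from whatever model or service you use, and the thresholds are placeholders you would tune against your own baseline.

```python
# Minimal sketch: flag a sentiment shift around a keyword before volume moves.
# Assumes mentions are already collected (e.g. from a social listening export)
# and each carries a sentiment score in [-1, 1] from whatever model you use.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Mention:
    hour: int          # hours since the start of the monitoring window
    sentiment: float   # -1 (negative) .. 1 (positive)

def hourly_buckets(mentions, hours):
    buckets = [[] for _ in range(hours)]
    for m in mentions:
        if 0 <= m.hour < hours:
            buckets[m.hour].append(m.sentiment)
    return buckets

def early_warning(mentions, hours=48, baseline_hours=36, z_threshold=2.0):
    """Return alerts where sentiment departs from baseline even if volume doesn't."""
    buckets = hourly_buckets(mentions, hours)
    baseline = [s for b in buckets[:baseline_hours] for s in b]
    if len(baseline) < 10:
        return []  # not enough history to say anything useful
    base_mean, base_sd = mean(baseline), pstdev(baseline) or 0.01
    base_volume = len(baseline) / baseline_hours
    alerts = []
    for hour in range(baseline_hours, hours):
        bucket = buckets[hour]
        if not bucket:
            continue
        z = (mean(bucket) - base_mean) / base_sd
        volume_ratio = len(bucket) / base_volume
        if z <= -z_threshold:  # tone moved sharply negative
            alerts.append({
                "hour": hour,
                "sentiment_z": round(z, 2),
                "volume_ratio": round(volume_ratio, 2),
                # the interesting case: tone shifts before volume does
                "leading_edge": volume_ratio < 1.5,
            })
    return alerts
```

The leading_edge flag is the whole argument of this section in one field: tone that moves before volume does is the signal human monitoring tends to miss.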
Campaign threats
These are more specific and more operational. They target your ability to run a campaign — to organize, to turn out, to communicate, to maintain coalition.
They arrive as voter or donor list manipulation. As targeted suppression of your communications — emails flagged as spam at unusual rates, social posts throttled or reported in coordinated waves. As false information about your events, your endorsements, or your positions seeded into the communities you are trying to reach. As impersonation — fake accounts, fake websites, fake communications that appear to come from you.
The 2026 version of this is harder to detect because the tools generating it are better at mimicking authenticity. A fake endorsement used to look fake. Now it looks like something your communications director would have written. A fake event listing used to have tells. Now it does not.
Campaign threat monitoring requires watching not just what is being said about you but what is being said as you — and flagging the gap between what your organization actually produced and what is circulating under your name.
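One rough way to operationalize that gap is sketched below, using only the Python standard library. The registry of official content and the feed of items attributed to you are assumptions about what you already keep; a real system would use embeddings or a proper similarity service rather than difflib, but the logic is the same: if it circulates under your name and matches nothing you actually published, it gets flagged.

```python
# Minimal sketch: flag content circulating "as you" that you never published.
# Assumes you keep a registry of everything the org actually put out
# (emails, posts, event listings) and a feed of items attributed to you.
from difflib import SequenceMatcher

def best_match(text, official_texts):
    """Highest similarity between a circulating item and any official item."""
    return max(
        (SequenceMatcher(None, text.lower(), o.lower()).ratio() for o in official_texts),
        default=0.0,
    )

def flag_impersonation(attributed_items, official_texts, threshold=0.85):
    """Anything attributed to the org that doesn't closely match an official record."""
    flagged = []
    for item in attributed_items:
        score = best_match(item["text"], official_texts)
        if score < threshold:
            flagged.append({"source": item["source"], "match_score": round(score, 2)})
    return flagged

# Hypothetical example: a fake event listing that never appeared in the registry
official = ["Join us Saturday at 10am at the downtown library for our housing forum."]
circulating = [
    {"source": "facebook",
     "text": "Join us Saturday at 10am at the downtown library for our housing forum."},
    {"source": "unknown-site",
     "text": "Event moved! The housing forum is now Sunday at the fairgrounds."},
]
print(flag_impersonation(circulating, official))  # flags only the unknown-site item
```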
Staff security threats
This is the category organizations are least prepared for and most reluctant to talk about.
Staff at advocacy organizations, campaigns, and nonprofits working on contested issues are targets. Not abstractly. Concretely. Their home addresses get posted. Their family members get contacted. They get hit with coordinated harassment across platforms, designed to make them feel unsafe, visible, and alone.
This is called doxing when it happens all at once. But it often happens incrementally — a piece of information here, a pattern of low-level contact there — in ways designed to stay below the threshold that would trigger a formal response while still achieving the goal of making someone want to stop doing the work.
An early warning system for staff security watches for the aggregation of personally identifiable information across public sources. It monitors for unusual contact patterns. It flags when a staff member’s name starts appearing in spaces where it has not appeared before — particularly in spaces associated with previous targeting campaigns. It gives staff members a way to report low-level incidents before they escalate, and it gives the organization a way to see whether isolated incidents are actually part of a pattern.
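Here is a deliberately small sketch of the "isolated incidents or a pattern" piece. The staff names, channels, and thresholds are placeholders, and it assumes nothing more sophisticated than a shared incident log that people actually use.

```python
# Minimal sketch: see whether isolated, low-level incidents form a pattern.
# Assumes staff report incidents into a simple shared log; the field names
# and thresholds here are placeholders, not a real reporting tool.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Incident:
    staff_member: str
    day: int          # days since the start of the log
    channel: str      # "email", "social", "phone", "pii_posted", ...

def cluster_alerts(incidents, window_days=14, min_count=3, min_channels=2):
    """Flag a staff member when incidents cluster in time and spread across channels."""
    by_person = defaultdict(list)
    for i in incidents:
        by_person[i.staff_member].append(i)
    alerts = []
    for person, items in by_person.items():
        items.sort(key=lambda i: i.day)
        for idx, start in enumerate(items):
            window = [i for i in items[idx:] if i.day - start.day <= window_days]
            channels = {i.channel for i in window}
            if len(window) >= min_count and len(channels) >= min_channels:
                alerts.append({
                    "staff_member": person,
                    "incidents_in_window": len(window),
                    "channels": sorted(channels),
                })
                break  # one alert per person is enough to escalate
    return alerts
```

The design choice worth noticing is that no single incident triggers anything. The alert fires on aggregation, which is exactly the behavior that is built to stay below a human threshold.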
What makes 2026 different
Three things have changed in ways that matter for how you think about this.
The cost of a sophisticated operation has collapsed. What used to require a funded opposition research shop or a foreign state actor now requires a subscription and an afternoon. The barrier to running a coordinated narrative campaign, generating convincing fake content, or compiling a detailed profile of a staff member is essentially gone. This means the threat is no longer concentrated in high-profile targets. Any organization doing work that matters to someone is a potential target.
The speed of escalation has compressed. A coordinated campaign used to take days to reach critical mass. Now it can reach it in hours. The window between “something is happening” and “this is a crisis we are managing publicly” has shrunk to the point where traditional monitoring — which is designed to catch things at the “something is happening” stage — often catches them at the crisis stage instead.
AI-generated content has made authenticity unreliable as a signal. Organizations used to be able to tell that something was inauthentic because it looked inauthentic. That heuristic is no longer reliable. Fake communications, fake accounts, fake endorsements, and fake events can now be generated at a quality level that passes casual inspection. The signal has to shift from “does this look real” to “does this fit the pattern of how we actually operate” — and that is a question AI is much better at answering than a human doing spot checks.
Angles and thoughts to take into your answer
You do not have to answer all of these. Pick the one that connects most directly to your experience and go deep on it.
What would an early warning system have caught in a situation your organization — or one like it — actually faced? Work backward from a real incident and identify the leading indicators that were visible in retrospect but missed in real time.
Where is the highest-leverage point of intervention in your threat category? For issue threats it might be narrative pattern detection. For campaign threats it might be impersonation monitoring. For staff security it might be aggregation alerts. What would catching it early actually change about the outcome?
What does your organization currently do that passes for early warning, and where does it break down? Most organizations have some version of monitoring — Google alerts, a staff member watching social, a communications person with good instincts. Where does that system fail, and what would fill the gap?
What is the realistic version of this for a small organization with no dedicated security staff and a limited budget? The threat environment is real but so is the resource constraint. What does a minimum viable early warning system look like for a three-person shop?
How do you build staff buy-in for security practices in an organization that has never had to think about this before? The technical question is only half the problem. The cultural question — how do you get people to report low-level incidents before they escalate, how do you talk about threat monitoring without making people feel surveilled — is often harder.
The underlying design question
Every answer to this question is really an answer to a harder question underneath it: what is the earliest point at which you can intervene in a threat, and what does intervention look like at that point?
The value of AI in an early warning system is not that it catches everything. It is that it catches things earlier than human monitoring does — which means you are making decisions when you still have options rather than when you are already in a corner.
That is the frame to bring to your answer. Not “can AI protect us” but “where in the timeline does earlier detection change what we can do about it?”
What’s Your Big Idea?

First New Feature Discussion is Always Public
How I Accidentally Walked into the Perfect Solution to Deepfakes
How my weird brain works, and how it might help you approach problem-solving with LLMs.
Part 1 — The Accidental Lawmaker
This is the first in a four-part series on the thinking behind the 48-Hour Deepfake Takedown Act — a framework built not in a law school or a legislative office, but in the middle of a vortex of curiosity that started with Denmark. This wasn't a revelation; it was an example of the connections you can make when you think through solutions with AI tools. The machine didn't think of this. I did. Together, the machine and I added depth, thought through problems, and saved time.
I didn’t set out to write a law.
I was doing what I usually do when something bothers me — reading, following threads, asking questions nobody around me was asking yet. The something that was bothering me was deepfakes. Not the sensational version of the problem, the celebrity scandals and the viral outrage. The quieter version. The one where ordinary people — a teacher, a candidate for local office, a nonprofit director — wake up to a version of themselves they never made, saying things they never said, and have absolutely nowhere to go.
That bothered me more than I could explain at the time. It still does.
So I kept reading.
The Denmark thread started the way most good ideas do — sideways.
I came across a piece about how Denmark had changed its copyright law to give every citizen in the country something called NIL status. Name, image, and likeness. The same framework American college athletes had been fighting for. Denmark just gave it to everyone, by law, and then extended it to cover voice — NILV. Your voice, your face, your likeness. Yours. Not the platform’s. Not the model’s. Yours.
I filed that away and went looking for what Michigan had on the same question.
What I found surprised me. Michigan already had NIL protection in the common law. Not a statute. Not something the legislature had passed after years of advocacy. It was already there, sitting in the legal tradition, waiting to be modernized.
That was the first moment I thought: we might actually have something to work with here.
The problem with most deepfake legislation — and there is more of it every session, in more states, with more urgency — is that it focuses on the wrong end of the pipeline.
Most of it asks: how do we stop deepfakes from being made?
That is the wrong question. Not because it is unimportant, but because it is unanswerable at scale. You cannot regulate creation without either missing most of it or regulating so broadly you catch things you never intended. The platforms are global. The tools are cheap. The bad actors are diffuse. Trying to stop creation is like trying to stop water with a screen door.
The right question — the one I kept coming back to — is different.
What has to happen in the first 48 hours after a deepfake goes live?
That question has a tractable answer. It has leverage points. It has a pipeline you can actually regulate. And it puts the obligation where it belongs — not on the victim to prove something impossible, but on the platforms and the generators who profited from building the infrastructure that made the harm possible.
The administrative court idea did not come from a legal textbook. It came from frustration.
I had been thinking through APIs and image identification algorithms, trying to imagine how a takedown system would actually work in practice. Who verifies the claim? Who decides it’s a match? How do you stop BigAI from doing what BigAI always does when faced with an inconvenient obligation — slow pay, no pay, drag it out until the victim gives up or runs out of money?
Every version I worked through had the same problem. The moment you put humans in the middle of the verification process, you create delay. Delay is the strategy. Delay is how you win if you are a company with more lawyers than the victim has years.
And then it hit me. What if the computers talk to each other?
Not humans filing paperwork and waiting for a hearing date. Not a lawsuit that takes three years and costs more than the victim will ever see. An electronic administrative court where state law mandates adherence — once the law passes — and where the verification is algorithmic, the timeline is fixed, and the human element comes in at the beginning and the end. Humans see the evidence going in and coming out. A human can appeal, but prevailing on appeal takes better evidence. But the middle — the matching, the verification, the decision — that runs without the delay that human process invites.
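To make the shape of that concrete, here is a toy sketch of the algorithmic middle, in Python. The hash-distance check is a stand-in for whatever image or voice matching a real tribunal would mandate, and every field name is hypothetical. The point is the structure: evidence in, a mechanical match decision, an evidence record out for humans to act on or appeal.

```python
# Toy sketch of the "computers talk to each other" flow, not a real filing system.
# The perceptual-hash-style match is a stand-in for whatever image/voice matching
# the tribunal would actually require; every name here is hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    claimant: str
    registered_original_hash: str   # hash of the original the claimant registered
    reported_content_hash: str      # hash of the circulating deepfake
    consent_on_record: bool

def hash_distance(a: str, b: str) -> int:
    """Stand-in similarity check: count of differing characters."""
    return sum(c1 != c2 for c1, c2 in zip(a, b)) + abs(len(a) - len(b))

def verify(claim: Claim, max_distance: int = 8) -> dict:
    """Algorithmic middle of the process: match or no match, no hearings, no delay."""
    match = hash_distance(claim.registered_original_hash,
                          claim.reported_content_hash) <= max_distance
    return {
        "claimant": claim.claimant,
        "verified": match and not claim.consent_on_record,
        # humans come back in here: this record goes to the platform and to any appeal
        "evidence_record": {
            "original": claim.registered_original_hash,
            "reported": claim.reported_content_hash,
            "consent_on_record": claim.consent_on_record,
        },
    }
```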
Keeping the scope narrow was not a compromise. It was the insight. The narrower the scope, the harder it is to game. The more you try to do everything, the more surface area you give the opposition to work with.
What happens to the victim while the process runs? If the answer is “they wait,” the system fails. Waiting is losing when you are a private citizen and a version of you is spreading across the internet in real time.
But relief is not automatic. It is not enough to claim harm. The system has to first confirm that what you are reporting is actually a deepfake — that the image or video in question matches a registered original and was generated without consent. That verification is what the electronic court is for.
Once it confirms a match, the clock starts. The platform has 48 hours to take the content down and stop future generation. They have the evidence. They have the match. They have everything they need to act. There is no ambiguity to hide behind, no discovery process to drag out, no procedural delay available to them. Just a deadline and a decision.
If they miss it, they pay. Not a nuisance fee. Not heaven and earth. Not a cost-of-doing-business fine that gets folded into a quarterly report. Real money for real damages, drawn from an escrow fund required by law and fully funded by the tech companies as a condition of operating in the state.
The fund exists because victims cannot wait for a judgment that takes years to collect. But the trigger is not the claim — it is the failure to act after verification. You had the evidence. You had the time. You chose not to move. That choice has a price. The sequence matters. Claim. Verify. Remedy. In that order. The speed of the system is what protects victims — not the looseness of the standard.
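The sequence reduces to something you could almost write on an index card. The sketch below assumes hours as the unit and a made-up penalty figure; nothing here comes from actual statutory language. What it shows is where the obligation sits: the clock only starts after verification, and the escrow only pays out when the platform lets the clock run out.

```python
# Toy sketch of the sequence described above: claim, verify, remedy, in that order.
# Timestamps are hours; the penalty figure is a placeholder, not a number from the framework.
def remedy_status(verified_at_hour, taken_down_at_hour=None, now_hour=0,
                  window_hours=48, daily_penalty=10_000):
    """After verification, the platform has a fixed window; missing it draws on escrow."""
    deadline = verified_at_hour + window_hours
    if taken_down_at_hour is not None and taken_down_at_hour <= deadline:
        return {"state": "complied", "hours_used": taken_down_at_hour - verified_at_hour}
    if now_hour <= deadline and taken_down_at_hour is None:
        return {"state": "clock_running", "hours_remaining": deadline - now_hour}
    hours_late = max(now_hour, taken_down_at_hour or now_hour) - deadline
    return {
        "state": "escrow_draw",
        "hours_late": hours_late,
        "owed_from_escrow": round(hours_late / 24 * daily_penalty, 2),
    }

# Example: verified at hour 0, still up at hour 60 -> 12 hours past the deadline
print(remedy_status(verified_at_hour=0, now_hour=60))
```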
I want to be honest about something. I am not a lawyer. I am not a legislator. I did not build this framework in a law school clinic or a policy shop with a staff of researchers.
I built it the way most useful things get built — by following a question further than was probably reasonable, reading everything I could find, thinking out loud with AI tools that helped me pressure-test the logic, and refusing to stop at the point where it got complicated.
What I ended up with is a framework that uses Michigan’s existing legal foundation, learns from what Denmark figured out about voice and likeness protection, and builds something new in the middle — an electronic administrative court that keeps the scope narrow, the timeline fixed, and the power where it belongs.
Is it perfect? No. Is it gameable? Everything is gameable. The question is whether the cost of gaming it exceeds the cost of complying. That is what Parts 2, 3, and 4 of this series are about.
The first question I want this community to sit with — the one that will open our first moderated thread in What’s Your Big Idea? — is this:
How can we use AI to fight the excesses of AI?
Not as a rhetorical provocation. As a genuine design question. The framework I built uses algorithmic matching to verify claims, electronic filing to prevent delay, and automated logs to create an evidentiary record that humans never could. It fights the tool with the tool.
There are other ways to do that. There are versions of this idea that apply to fields you know better than I do — healthcare, education, housing, civic engagement. There are applications we have not thought of yet.
That is what the next two weeks are for.
Part 2 goes deeper into the architecture — the registry, the tribunal, the escrow fund, and why the 48-hour window is the number that makes everything else possible.
If you are not already in the room, this is a good time to walk through the door.
What’s Your Big Idea? is a free community for nonprofit and civic people figuring out AI together. Entry works differently than it does in most communities.