
What's Your Big Idea?
PUBLIC: How can you use AI to develop an early warning system for issues, campaigns, and staff security?
Time to show off your stuff.

Nonprofit and civic organizations have always operated in contested space. You advocate for things powerful interests would rather leave alone. You organize people who have been told their voices don't matter. You run campaigns that make someone uncomfortable enough to push back.
That has always been true. What is different in 2026 is the speed, the scale, and the sophistication of what pushing back looks like.
A coordinated narrative attack that used to take weeks to build can now be assembled overnight. A disinformation campaign that used to require resources only well-funded opposition could afford is now accessible to anyone with a credit card and a grudge. A staff member's personal information that used to require real effort to surface can now be found, compiled, and weaponized in hours.
Most organizations are still running 2018 threat awareness on 2026 threats. The gap between what is being done to mission-driven organizations and what those organizations are equipped to detect is widening every cycle.
An early warning system does not close that gap by itself. But it changes the timeline. And in a threat environment where the first 48 hours determine whether you are managing a situation or reacting to a crisis, timeline is everything.
Issue threats
These are attacks on your cause, your credibility, or your organizational narrative. They do not always look like attacks when they start.
They arrive as a sudden spike in negative commentary across platforms. As a coordinated pattern of similar talking points appearing independently from accounts that do not know each other. As a story that begins in a fringe outlet and migrates toward mainstream coverage faster than organic sharing would explain. As an old controversy resurfaced at a moment that seems too convenient to be accidental.
The traditional response is monitoring — someone on staff or a consultant checking mentions, flagging things that look concerning, escalating when something feels wrong. The problem is that human monitoring is retrospective. By the time something feels wrong, it has usually already moved.
An AI-assisted early warning system watches for pattern changes, not just volume changes. It notices when the sentiment around a keyword shifts before the volume does. It flags when similar language appears across accounts that have no visible connection. It catches the leading edge of a coordinated campaign before the campaign has momentum.
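To make one of those signals concrete, here is a minimal sketch in Python of the middle one: similar language appearing across accounts with no visible connection. Everything here is an illustrative assumption, including the account names, the 0.85 threshold, and the idea that you already export mentions as (account, text) pairs. A production system would use embeddings or MinHash rather than pairwise string comparison, but the standard library is enough to show the shape of the check.

```python
# Minimal sketch: flag near-identical wording across unrelated accounts,
# one leading indicator of a coordinated push. Organic agreement shares
# ideas; coordinated campaigns tend to share phrasing.
from difflib import SequenceMatcher
from itertools import combinations
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so cosmetic edits don't hide reuse."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower())

def coordinated_pairs(posts, threshold=0.85):
    """Return pairs of posts from *different* accounts whose wording is
    suspiciously similar. The threshold is an illustrative starting point."""
    flagged = []
    for (acct_a, text_a), (acct_b, text_b) in combinations(posts, 2):
        if acct_a == acct_b:
            continue  # one account repeating itself is not coordination
        score = SequenceMatcher(None, normalize(text_a), normalize(text_b)).ratio()
        if score >= threshold:
            flagged.append((acct_a, acct_b, round(score, 2)))
    return flagged

# Illustrative data; in practice this comes from your mention export.
posts = [
    ("@acct_one", "This org has been hiding its donor list for years. Ask why."),
    ("@acct_two", "This org has been hiding its donor list for years! ask WHY"),
    ("@acct_three", "Great turnout at the river cleanup this weekend."),
]
for a, b, score in coordinated_pairs(posts):
    print(f"similar wording from unconnected accounts: {a} <-> {b} ({score})")
```

The design choice that matters is comparing wording rather than sentiment: two accounts agreeing is normal, two accounts agreeing in nearly the same words is the signal.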
Campaign threats
These are more specific and more operational. They target your ability to run a campaign — to organize, to turn out, to communicate, to maintain coalition.
They arrive as voter or donor list manipulation. As targeted suppression of your communications — emails flagged as spam at unusual rates, social posts throttled or reported in coordinated waves. As false information about your events, your endorsements, or your positions seeded into the communities you are trying to reach. As impersonation — fake accounts, fake websites, fake communications that appear to come from you.
The 2026 version of this is harder to detect because the tools generating it are better at mimicking authenticity. A fake endorsement used to look fake. Now it looks like something your communications director would have written. A fake event listing used to have tells. Now it does not.
Campaign threat monitoring requires watching not just what is being said about you but what is being said as you — and flagging the gap between what your organization actually produced and what is circulating under your name.
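Here is a sketch of what flagging that gap can look like, assuming you keep a registry of everything your organization actually published. The three-way classification, the thresholds, and the registry format are illustrative assumptions, not a finished tool.

```python
# Minimal sketch: check whether content circulating under your name
# matches anything you actually published. Exact match = genuine;
# close-but-edited = possible doctored copy; no match = possible fabrication.
from difflib import SequenceMatcher

def classify(circulating: str, published: list[str],
             edited_threshold: float = 0.7) -> str:
    """Compare circulating content against the registry of real output.
    Thresholds are illustrative assumptions, not tuned values."""
    norm = " ".join(circulating.lower().split())
    best = 0.0
    for original in published:
        score = SequenceMatcher(None, norm,
                                " ".join(original.lower().split())).ratio()
        best = max(best, score)
        if score > 0.98:
            return "matches a genuine communication"
    if best >= edited_threshold:
        return f"resembles a genuine communication but altered (similarity {best:.2f})"
    return "no match in registry: possible fabrication circulating under your name"

# Illustrative registry entry and a doctored copy of it.
published = [
    "Our statement on the housing bill: we support amendment 4 as written.",
]
print(classify("Our statement on the housing bill: we OPPOSE amendment 4.", published))
```

The useful case is the middle one: content that is almost yours is often a doctored copy, and it is exactly the case a human spot check is most likely to wave through.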
Staff security threats
This is the category organizations are least prepared for and most reluctant to talk about.
Staff at advocacy organizations, campaigns, and nonprofits working on contested issues are targets. Not abstractly. Concretely. Their home addresses get posted. Their family members get contacted. They get coordinated harassment across platforms designed to make them feel unsafe, visible, and alone.
This is called doxing when it happens all at once. But it often happens incrementally — a piece of information here, a pattern of low-level contact there — in ways designed to stay below the threshold that would trigger a formal response while still achieving the goal of making someone want to stop doing the work.
An early warning system for staff security watches for the aggregation of personally identifiable information across public sources. It monitors for unusual contact patterns. It flags when a staff member's name starts appearing in spaces where it has not appeared before — particularly in spaces associated with previous targeting campaigns. It gives staff members a way to report low-level incidents before they escalate, and it gives the organization a way to see whether isolated incidents are actually part of a pattern.
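The reporting-and-pattern piece is the most buildable part of that, even for a small shop. Here is a minimal sketch, assuming incidents get logged as structured records; the field names, the 30-day window, and the three-incident threshold are illustrative assumptions you would tune with staff input.

```python
# Minimal sketch: aggregate low-level incident reports so isolated
# events become visible as a pattern.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    staff_member: str
    source: str        # where it happened: platform, forum, email, phone
    detail: str
    reported_at: datetime

def flag_patterns(incidents, window_days=30, threshold=3):
    """Group incidents per staff member and flag anyone whose reports
    within the window cross the threshold: the point where 'isolated
    events' should start being treated as a campaign."""
    by_person = defaultdict(list)
    for inc in incidents:
        by_person[inc.staff_member].append(inc)
    alerts = []
    cutoff = datetime.now() - timedelta(days=window_days)
    for person, events in by_person.items():
        recent = [e for e in events if e.reported_at >= cutoff]
        if len(recent) >= threshold:
            sources = sorted({e.source for e in recent})
            alerts.append(f"{person}: {len(recent)} incidents in "
                          f"{window_days} days across {sources}")
    return alerts

# Illustrative reports: each one minor alone, a pattern together.
reports = [
    Incident("J. Rivera", "forum", "home address posted",
             datetime.now() - timedelta(days=2)),
    Incident("J. Rivera", "phone", "anonymous call to family member",
             datetime.now() - timedelta(days=9)),
    Incident("J. Rivera", "social", "name surfacing in targeting thread",
             datetime.now() - timedelta(days=20)),
]
for alert in flag_patterns(reports):
    print(alert)
```

None of those three reports would trigger a formal response alone; the aggregation is the alarm.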
Three things have changed in ways that matter for how you think about this.
The cost of a sophisticated operation has collapsed. What used to require a funded opposition research shop or a foreign state actor now requires a subscription and an afternoon. The barrier to running a coordinated narrative campaign, generating convincing fake content, or compiling a detailed profile of a staff member is essentially gone. This means the threat is no longer concentrated in high-profile targets. Any organization doing work that matters to someone is a potential target.
The speed of escalation has compressed. A coordinated campaign used to take days to reach critical mass. Now it can reach it in hours. The window between "something is happening" and "this is a crisis we are managing publicly" has shrunk to the point where traditional monitoring — which is designed to catch things at the "something is happening" stage — often catches them at the crisis stage instead.
AI-generated content has made authenticity unreliable as a signal. Organizations used to be able to tell that something was inauthentic because it looked inauthentic. That heuristic is no longer reliable. Fake communications, fake accounts, fake endorsements, and fake events can now be generated at a quality level that passes casual inspection. The signal has to shift from "does this look real" to "does this fit the pattern of how we actually operate" — and that is a question AI is much better at answering than a human doing spot checks.
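That last shift is worth making concrete. A pattern-of-operation check does not ask whether a message looks real; it asks whether the message behaves the way your organization behaves. The profile fields, the names ORG_PROFILE and fit_anomalies, and the example domains below are all hypothetical; a real profile would be built from your actual sent history.

```python
# Minimal sketch of "does this fit how we actually operate": score a
# suspect communication against a profile of real behavior instead of
# judging whether it *looks* authentic. Profile values are hypothetical.
ORG_PROFILE = {
    "sending_domains": {"ourorg.org"},
    "channels": {"email", "mastodon", "newsletter"},
    "signs_statements": True,   # every real statement carries a named signer
}

def fit_anomalies(message: dict) -> list[str]:
    """Return the ways a message deviates from how the org operates.
    An empty list means it fits the pattern, not that it is genuine."""
    anomalies = []
    domain = message.get("from_address", "").rsplit("@", 1)[-1]
    if domain not in ORG_PROFILE["sending_domains"]:
        anomalies.append(f"sent from unrecognized domain: {domain}")
    if message.get("channel") not in ORG_PROFILE["channels"]:
        anomalies.append(f"appeared on a channel we do not use: {message.get('channel')}")
    if ORG_PROFILE["signs_statements"] and not message.get("signer"):
        anomalies.append("statement carries no named signer")
    return anomalies

# Illustrative suspect: a lookalike domain and no signer.
suspect = {"from_address": "press@ourorg-statements.com",
           "channel": "email", "signer": None}
for a in fit_anomalies(suspect):
    print("anomaly:", a)
```

A passing check proves nothing on its own; the point is that deviations from how you operate are machine-checkable in a way that visual authenticity no longer is.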
You do not have to answer all of the questions below. Pick the one that connects most directly to your experience and go deep on it.
What would an early warning system have caught in a situation your organization — or one like it — actually faced? Work backward from a real incident and identify the leading indicators that were visible in retrospect but missed in real time.
Where is the highest-leverage point of intervention in your threat category? For issue threats it might be narrative pattern detection. For campaign threats it might be impersonation monitoring. For staff security it might be aggregation alerts. What would catching it early actually change about the outcome?
What does your organization currently do that passes for early warning, and where does it break down? Most organizations have some version of monitoring — Google alerts, a staff member watching social, a communications person with good instincts. Where does that system fail, and what would fill the gap?
What is the realistic version of this for a small organization with no dedicated security staff and a limited budget? The threat environment is real but so is the resource constraint. What does a minimum viable early warning system look like for a three-person shop?
How do you build staff buy-in for security practices in an organization that has never had to think about this before? The technical question is only half the problem. The cultural question — how do you get people to report low-level incidents before they escalate, and how do you talk about threat monitoring without making people feel surveilled — is often harder.
Every answer to this question is really an answer to a harder question underneath it: what is the earliest point at which you can intervene in a threat, and what does intervention look like at that point?
The value of AI in an early warning system is not that it catches everything. It is that it catches things earlier than human monitoring does — which means you are making decisions when you still have options rather than when you are already in a corner.
That is the frame to bring to your answer. Not "can AI protect us" but "where in the timeline does earlier detection change what we can do about it?"