
What’s Your Big Idea?

How can you use AI to develop an early warning system for issues, campaigns, and staff security?

Time to show off your stuff.


Why this question now

Nonprofit and civic organizations have always operated in contested space. You advocate for things powerful interests would rather leave alone. You organize people who have been told their voices don’t matter. You run campaigns that make someone uncomfortable enough to push back.

That has always been true. What is different in 2026 is the speed, the scale, and the sophistication of what pushing back looks like.

A coordinated narrative attack that used to take weeks to build can now be assembled overnight. A disinformation campaign that used to require resources only well-funded opposition could afford is now accessible to anyone with a credit card and a grudge. A staff member’s personal information that used to require real effort to surface can now be found, compiled, and weaponized in hours.

Most organizations are still running 2018 threat awareness on 2026 threats. The gap between what is being done to mission-driven organizations and what those organizations are equipped to detect is widening every cycle.

An early warning system does not close that gap by itself. But it changes the timeline. And in a threat environment where the first 48 hours determine whether you are managing a situation or reacting to a crisis, timeline is everything.

The three threat categories — what they are and how they usually arrive

Issue threats

These are attacks on your cause, your credibility, or your organizational narrative. They do not always look like attacks when they start.

They arrive as a sudden spike in negative commentary across platforms. As a coordinated pattern of similar talking points appearing independently from accounts that do not know each other. As a story that begins in a fringe outlet and migrates toward mainstream coverage faster than organic sharing would explain. As an old controversy resurfaced at a moment that seems too convenient to be accidental.

The traditional response is monitoring — someone on staff or a consultant checking mentions, flagging things that look concerning, escalating when something feels wrong. The problem is that human monitoring is retrospective. By the time something feels wrong it has usually already moved.

An AI-assisted early warning system watches for pattern changes, not just volume changes. It notices when the sentiment around a keyword shifts before the volume does. It flags when similar language appears across accounts that have no visible connection. It catches the leading edge of a coordinated campaign before the campaign has momentum.
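Here is a rough sketch of what "pattern change before volume change" can look like in practice. It assumes you already export mentions with a timestamp and a sentiment score from whatever listening tool you use; the column names, windows, and thresholds are illustrative, not a finished system.

```python
# Minimal sketch: flag a sentiment shift before a volume spike.
# Assumes mentions arrive as rows of (timestamp, sentiment); the column
# names, windows, and cutoffs below are placeholders.
import pandas as pd

def early_warning_flags(mentions: pd.DataFrame,
                        baseline: str = "7d",
                        z_threshold: float = 2.0) -> pd.DataFrame:
    """Compare recent sentiment and volume to a trailing baseline."""
    df = mentions.set_index("timestamp").sort_index()
    hourly = df["sentiment"].resample("1h").agg(["mean", "count"])

    # Trailing baseline statistics over the past week of hourly buckets.
    base = hourly.rolling(baseline, min_periods=24)
    sentiment_z = (hourly["mean"] - base["mean"].mean()) / base["mean"].std()
    volume_z = (hourly["count"] - base["count"].mean()) / base["count"].std()

    out = pd.DataFrame({"sentiment_z": sentiment_z, "volume_z": volume_z})
    # The interesting state: sentiment has moved sharply but volume has not (yet).
    out["leading_edge"] = (out["sentiment_z"] < -z_threshold) & (out["volume_z"] < z_threshold)
    return out
```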

Campaign threats

These are more specific and more operational. They target your ability to run a campaign — to organize, to turn out, to communicate, to maintain coalition.

They arrive as voter or donor list manipulation. As targeted suppression of your communications — emails flagged as spam at unusual rates, social posts throttled or reported in coordinated waves. As false information about your events, your endorsements, or your positions seeded into the communities you are trying to reach. As impersonation — fake accounts, fake websites, fake communications that appear to come from you.

The 2026 version of this is harder to detect because the tools generating it are better at mimicking authenticity. A fake endorsement used to look fake. Now it looks like something your communications director would have written. A fake event listing used to have tells. Now it does not.

Campaign threat monitoring requires watching not just what is being said about you but what is being said as you — and flagging the gap between what your organization actually produced and what is circulating under your name.
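A rough sketch of what "flagging the gap" could look like, assuming you keep an archive of what you actually published. The matching here is deliberately crude (plain text similarity from the standard library); a production version would use embeddings or content hashes, but the shape of the check is the same.

```python
# Minimal sketch: flag content that claims to come from your organization
# but does not match anything you actually published. Item fields and the
# similarity threshold are illustrative.
from difflib import SequenceMatcher

def flag_impersonation(circulating: list[dict], published: list[str],
                       threshold: float = 0.75) -> list[dict]:
    """circulating: items like {"text": ..., "url": ...} attributed to you."""
    flagged = []
    for item in circulating:
        best = max(
            (SequenceMatcher(None, item["text"].lower(), ours.lower()).ratio()
             for ours in published),
            default=0.0,
        )
        if best < threshold:
            # Nothing in your own archive is close enough: send it for review.
            flagged.append({**item, "best_match_score": round(best, 2)})
    return flagged
```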

Staff security threats

This is the category organizations are least prepared for and most reluctant to talk about.

Staff at advocacy organizations, campaigns, and nonprofits working on contested issues are targets. Not abstractly. Concretely. Their home addresses get posted. Their family members get contacted. They get coordinated harassment across platforms designed to make them feel unsafe, visible, and alone.

This is called doxing when it happens all at once. But it often happens incrementally — a piece of information here, a pattern of low-level contact there — in ways designed to stay below the threshold that would trigger a formal response while still achieving the goal of making someone want to stop doing the work.

An early warning system for staff security watches for the aggregation of personally identifiable information across public sources. It monitors for unusual contact patterns. It flags when a staff member’s name starts appearing in spaces where it has not appeared before — particularly in spaces associated with previous targeting campaigns. It gives staff members a way to report low-level incidents before they escalate, and it gives the organization a way to see whether isolated incidents are actually part of a pattern.
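Here is a minimal sketch of an aggregation alert, assuming your monitoring feeds can be reduced to rows of who, what kind of detail, and when it surfaced. The field names and thresholds are placeholders.

```python
# Minimal sketch: watch for *aggregation* of a staff member's personal details
# across sources. Each sighting is a dict like
# {"staffer": ..., "field": ..., "source": ..., "date": datetime.date(...)}.
from collections import defaultdict
from datetime import date, timedelta

SENSITIVE_FIELDS = {"home_address", "personal_phone", "family_member", "daily_routine"}

def aggregation_alerts(sightings: list[dict], window_days: int = 30, trigger: int = 2) -> dict:
    cutoff = date.today() - timedelta(days=window_days)
    exposed = defaultdict(set)
    for s in sightings:
        if s["date"] >= cutoff and s["field"] in SENSITIVE_FIELDS:
            exposed[s["staffer"]].add(s["field"])
    # Alert when several distinct sensitive details surface in the same window:
    # that pattern, not any single post, is the warning sign.
    return {name: fields for name, fields in exposed.items() if len(fields) >= trigger}
```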

What makes 2026 different

Three things have changed in ways that matter for how you think about this.

The cost of a sophisticated operation has collapsed. What used to require a funded opposition research shop or a foreign state actor now requires a subscription and an afternoon. The barrier to running a coordinated narrative campaign, generating convincing fake content, or compiling a detailed profile of a staff member is essentially gone. This means the threat is no longer concentrated in high-profile targets. Any organization doing work that matters to someone is a potential target.

The speed of escalation has compressed. A coordinated campaign used to take days to reach critical mass. Now it can reach it in hours. The window between “something is happening” and “this is a crisis we are managing publicly” has shrunk to the point where traditional monitoring — which is designed to catch things at the “something is happening” stage — often catches them at the crisis stage instead.

AI-generated content has made authenticity unreliable as a signal. Organizations used to be able to tell that something was inauthentic because it looked inauthentic. That heuristic is no longer reliable. Fake communications, fake accounts, fake endorsements, and fake events can now be generated at a quality level that passes casual inspection. The signal has to shift from “does this look real” to “does this fit the pattern of how we actually operate” — and that is a question AI is much better at answering than a human doing spot checks.

Angles and thoughts to take into your answer

You do not have to answer all of these. Pick the one that connects most directly to your experience and go deep on it.

What would an early warning system have caught in a situation your organization — or one like it — actually faced? Work backward from a real incident and identify the leading indicators that were visible in retrospect but missed in real time.

Where is the highest-leverage point of intervention in your threat category? For issue threats it might be narrative pattern detection. For campaign threats it might be impersonation monitoring. For staff security it might be aggregation alerts. What would catching it early actually change about the outcome?

What does your organization currently do that passes for early warning, and where does it break down? Most organizations have some version of monitoring — Google alerts, a staff member watching social, a communications person with good instincts. Where does that system fail, and what would fill the gap?

What is the realistic version of this for a small organization with no dedicated security staff and a limited budget? The threat environment is real but so is the resource constraint. What does a minimum viable early warning system look like for a three-person shop?

How do you build staff buy-in for security practices in an organization that has never had to think about this before? The technical question is only half the problem. The cultural question — how do you get people to report low-level incidents before they escalate, how do you talk about threat monitoring without making people feel surveilled — is often harder.

The underlying design question

Every answer to this question is really an answer to a harder question underneath it: what is the earliest point at which you can intervene in a threat, and what does intervention look like at that point?

The value of AI in an early warning system is not that it catches everything. It is that it catches things earlier than human monitoring does — which means you are making decisions when you still have options rather than when you are already in a corner.

That is the frame to bring to your answer. Not “can AI protect us” but “where in the timeline does earlier detection change what we can do about it?”

What’s Your Big Idea?


First New Feature Discussion is Always Public

How I Accidentally Walked into the Perfect Solution to Deepfakes

How my weird brain works; and how it might help you approach problem solving with LLMs.


Part 1 — The Accidental Lawmaker

This is the first in a four-part series on the thinking behind the 48-Hour Deepfake Takedown Act — a framework built not in a law school or a legislative office, but in the middle of a vortex of curiosity that started with Denmark. This wasn’t a revelation; it was an example of the connections you can make when thinking through solutions with AI tools. The machine didn’t think of this. I did. Together, the machine and I added depth and worked through problems, saving time along the way.


I didn’t set out to write a law.

I was doing what I usually do when something bothers me — reading, following threads, asking questions nobody around me was asking yet. The something that was bothering me was deepfakes. Not the sensational version of the problem, the celebrity scandals and the viral outrage. The quieter version. The one where ordinary people — a teacher, a candidate for local office, a nonprofit director — wake up to a version of themselves they never made, saying things they never said, and have absolutely nowhere to go.

That bothered me more than I could explain at the time. It still does.

So I kept reading.

The Denmark thread started the way most good ideas do — sideways.

I came across a piece about how Denmark had changed its copyright law to give every citizen in the country something called NIL status. Name, image, and likeness. The same framework American college athletes had been fighting for. Denmark just gave it to everyone, by law, and then extended it to cover voice — NILV. Your voice, your face, your likeness. Yours. Not the platform’s. Not the model’s. Yours.

I filed that away and went looking for what Michigan had on the same question.

What I found surprised me. Michigan already had NIL protection in the common law. Not a statute. Not something the legislature had passed after years of advocacy. It was already there, sitting in the legal tradition, waiting to be modernized.

That was the first moment I thought: we might actually have something to work with here.

The problem with most deepfake legislation — and there is more of it every session, in more states, with more urgency — is that it focuses on the wrong end of the pipeline.

Most of it asks: how do we stop deepfakes from being made?

That is the wrong question. Not because it is unimportant, but because it is unanswerable at scale. You cannot regulate creation without either missing most of it or regulating so broadly you catch things you never intended. The platforms are global. The tools are cheap. The bad actors are diffuse. Trying to stop creation is like trying to stop water with a screen door.

The right question — the one I kept coming back to — is different.

What has to happen in the first 48 hours after a deepfake goes live?

That question has a tractable answer. It has leverage points. It has a pipeline you can actually regulate. And it puts the obligation where it belongs — not on the victim to prove something impossible, but on the platforms and the generators who profited from building the infrastructure that made the harm possible.

The administrative court idea did not come from a legal textbook. It came from frustration.

I had been thinking through APIs and image identification algorithms, trying to imagine how a takedown system would actually work in practice. Who verifies the claim? Who decides it’s a match? How do you stop BigAI from doing what BigAI always does when faced with an inconvenient obligation — slow pay, no pay, drag it out until the victim gives up or runs out of money?

Every version I worked through had the same problem. The moment you put humans in the middle of the verification process, you create delay. Delay is the strategy. Delay is how you win if you are a company with more lawyers than the victim has years.

And then it hit me. What if the computers talk to each other?

Not humans filing paperwork and waiting for a hearing date. Not a lawsuit that takes three years and costs more than the victim will ever see. An electronic administrative court where state law mandates adherence — once the law passes — and where the verification is algorithmic, the timeline is fixed, and the human element comes in at the beginning and the end. Humans see the evidence going in and coming out. A human can appeal, but they will need better evidence to prevail. But the middle — the matching, the verification, the decision — that runs without the delay that human process invites.

Keeping the scope narrow was not a compromise. It was the insight. The narrower the scope, the harder it is to game. The more you try to do everything, the more surface area you give the opposition to work with.

What happens to the victim while the process runs? If the answer is “they wait,” the system fails. Waiting is losing when you are a private citizen and a version of you is spreading across the internet in real time.

But relief is not automatic. It is not enough to claim harm. The system has to first confirm that what you are reporting is actually a deepfake — that the image or video in question matches a registered original and was generated without consent. That verification is what the electronic court is for.

Once it confirms a match, the clock starts. The platform has 48 hours to take the content down and stop future generation. They have the evidence. They have the match. They have everything they need to act. There is no ambiguity to hide behind, no discovery process to drag out, no procedural delay available to them. Just a deadline and a decision.

If they miss it, they pay. Not a nuisance fee. Not heaven and earth. Not a cost-of-doing-business fine that gets folded into a quarterly report. Real money for real damages, drawn from an escrow fund required by law and fully funded by the tech companies as a condition of operating in the state.

The fund exists because victims cannot wait for a judgment that takes years to collect. But the trigger is not the claim — it is the failure to act after verification. You had the evidence. You had the time. You chose not to move. That choice has a price. The sequence matters. Claim. Verify. Remedy. In that order. The speed of the system is what protects victims — not the looseness of the standard.
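For readers who think in systems, here is a bare-bones sketch of that sequence. It is not statutory language; the match threshold and the fund mechanics are placeholders. The point is the order of operations and the fixed clock.

```python
# Minimal sketch of the Claim -> Verify -> Remedy sequence described above.
# The threshold and escrow mechanics are illustrative placeholders.
from datetime import datetime, timedelta

TAKEDOWN_WINDOW = timedelta(hours=48)
MATCH_THRESHOLD = 0.90   # illustrative; the real bar would be set in rule-making

def verify(match_score: float) -> bool:
    """Step 2: algorithmic verification. Only a confirmed match starts the clock."""
    return match_score >= MATCH_THRESHOLD

def remedy(verified_at: datetime, taken_down_at: datetime | None) -> str:
    """Step 3: the platform either acts inside the window or pays from escrow."""
    deadline = verified_at + TAKEDOWN_WINDOW
    if taken_down_at is not None and taken_down_at <= deadline:
        return "compliant"
    # Failure to act after verification is the trigger, not the claim itself.
    return "escrow_payout"
```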

I want to be honest about something. I am not a lawyer. I am not a legislator. I did not build this framework in a law school clinic or a policy shop with a staff of researchers.

I built it the way most useful things get built — by following a question further than was probably reasonable, reading everything I could find, thinking out loud with AI tools that helped me pressure-test the logic, and refusing to stop at the point where it got complicated.

What I ended up with is a framework that uses Michigan’s existing legal foundation, learns from what Denmark figured out about voice and likeness protection, and builds something new in the middle — an electronic administrative court that keeps the scope narrow, the timeline fixed, and the power where it belongs.

Is it perfect? No. Is it gameable? Everything is gameable. The question is whether the cost of gaming it exceeds the cost of complying. That is what Parts 2, 3, and 4 of this series are about.

The first question I want this community to sit with — the one that will open our first moderated thread in What’s Your Big Idea? — is this:

How can we use AI to fight the excesses of AI?

Not as a rhetorical provocation. As a genuine design question. The framework I built uses algorithmic matching to verify claims, electronic filing to prevent delay, and automated logs to create an evidentiary record that humans never could. It fights the tool with the tool.

There are other ways to do that. There are versions of this idea that apply to fields you know better than I do — healthcare, education, housing, civic engagement. There are applications we have not thought of yet.

That is what the next two weeks are for.

Part 2 goes deeper into the architecture — the registry, the tribunal, the escrow fund, and why the 48-hour window is the number that makes everything else possible.

If you are not already in the room, this is a good time to walk through the door.

What’s Your Big Idea? is a free community for nonprofit and civic people figuring out AI together. Entry works differently than it does in most communities.

What’s Your Big Idea?

The #AI Shade Tree Mechanic School Opens in January ’26!

The AI Shade Tree Mechanic School

Learn to build practical, economical, reliable AI systems that work.

#EverydayAI courses for people with real work, little money and no time.

Series Overview

  • The Curriculum
  • Who This is For
  • How the School Works
  • The Common Foundation
The Curriculum
Who This is For
  • Comms/marketing pros, nonprofit communicators, organizers, analysts, and builders who want hands-on control, not hype.
  • New to this? Perfect. I teach like a human, not a gatekeeper—plain language, zero attitude, shade-tree vibe.
  • Bring your work. We’ll use your briefs, lists, and data as the lab.
How the School Works

Bottom line: No abstract demos. You work on your own data and leave with a system designed for you, by you.

Format: Bot School is a short, shared kickoff (one focused session). Tracks run 3-4 weeks, 90 minutes live each week.

Cohort size: 1–5 people, so you actually get feedback.

BYOD: Bring your best personal or company writing, presentations, press releases, spreadsheets, PDFs, memos, policy pages, analytics data, survey responses, CRM exports—your materials become the lab.

Deliverables: reusable templates for an analysis bot; schemas and SOPs; and a working mini-system built on your data.

The Common Foundation

We install a few habits that make everything else easier—without the jargon:

  • Start with clarity. Know what “good” looks like before you ask for it.
  • Do it right the first time. Two ways to create more reliable responses from the beginning.
  • Clean inputs. A little prep turns chaos into signal.
  • Tight constraints. Smaller asks, better answers, less drift.
  • Test in loops. Try, check, adjust—then scale.
  • Respect the data. Simple, safe practices that travel well.

Classes starting in ’26

Upcoming Classes

Voter Targeting and Generative Persuasion

Build a lightweight targeting matrix, a message-by-segment testing loop, and a generative persuasion kit that is safe enough for campaigns without million-dollar budgets.

Pricing

  • Small Group (≤5): $599/person (early cohort; list $795)
  • 1:1 Intensive (3 nights, back to back, work till we drop each night): $1,950 (early cohort; $1,500/person)
  • Optional Team: $6,000 for 5 org or company team members in an intensive “train the trainer” setting (early cohort; $4,000 for five people).


Build a Structured Problem-Solving Bot

Another tip from the #AIShadeTreeMechanic

Copy/paste these Instructions into a customGPT, Gem, Space or whatever your brand of AI calls a GPT.


Structured Problem Solving Bot

Description: You are a structured problem-solving assistant with two modes: **Qualifier mode** and **Analyst mode**. Your job is to enforce a disciplined intake first, then analyze only within that frame.

Overall behavior

* Assume the user is capable and time-constrained.

* Enforce the process even if the user is scattered or emotional.

* Always show your work: make reasoning traceable and tie it to the intake.

* Write in clear AP Style at a first-year college level.

* Use blunt, concise, directive language. Avoid fluff, hype, or motivational talk.

Modes and triggers

You have two modes:

1. **Qualifier mode** (front door protocol)

2. **Analyst mode** (analysis bounded by the intake)

Treat each *new problem* as a fresh run.

**Enter Qualifier mode** when:

* The conversation starts, or

* The user signals a new issue with phrases like:

  “I’ve got a problem,” “new problem,” “start a new analysis,” “start over,” “reset,” or “ignore that and help with something else.”

On these signals, discard any prior intake and re-run Qualifier mode.

Qualifier mode

Purpose: create a solid frame before any analysis or advice.

In Qualifier mode, collect and confirm exactly four fields:

1. **Problem summary** – your concise, structured summary of the problem.

2. **Outcome** – what the user wants to change or achieve.

3. **Constraints** – key limits (time, resources, audience, politics, scope).

4. **Success criteria** – how the user will judge success (test, metric, or concrete condition).

Qualifier steps

1. Use a “Free-form description”

  • Ask the user to describe the problem in their own words.
  • Allow multi-part, messy, multi-sentence descriptions.

2. You Provide the Draft Problem Summary

  • Condense their description into a short paragraph or 1–5 bullet points.
  • Capture the core issues and facets; remove obvious repetition.
  • Label it clearly as **Draft Problem Summary**.
  • Ask the user to confirm or correct it. Update until it is “good enough to guide analysis.”

3. Outcome, Constraints, Success criteria

  • Outcome:
  • Ask what they want to change or achieve if this goes well.
  • If vague, propose a concrete outcome based on the Problem Summary and ask them to confirm or adjust.
  • Constraints:
  • Ask for hard constraints (time, resources, audience, politics, scope).
  • If they give none or are vague, propose minimal assumed constraints and label them as assumptions.
  • Success criteria:
  • Ask how they will judge if this worked.
  • If they cannot answer, propose one or more specific criteria and ask them to confirm or adjust.

Qualifier rules and exit

While in Qualifier mode:

  • Do **not** provide analysis, options, plans, or recommendations.
  • Ask only for: free-form description, confirmation of your Draft Problem Summary, Outcome, Constraints, and Success criteria.
  • Keep questions short. When answers are vague, propose concrete versions and ask for confirmation.

Leave Qualifier mode only when:

  • Problem summary is confirmed by the user as accurate enough to guide analysis.
  • Outcome is nonempty and concrete enough to aim at.
  • Constraints include at least one specific constraint or an accepted set of assumed constraints.
  • Success criteria are nonempty, either user-provided or accepted from your proposal.

When these are met, create an **Intake Summary** and then switch to Analyst mode.

Intake Summary

Maintain a clearly labeled block:

**Intake Summary**

* Problem summary: …

* Outcome: …

* Constraints: …

* Success criteria: …

Update this block whenever any field changes and show the updated version.

Analyst mode

Purpose: act as an analyst and thought partner, always grounded in the Intake Summary.

In Analyst mode you may analyze, propose options, plans, and tradeoffs, and explain reasoning in detail.

Rules:

  • Treat the Intake Summary as the frame for all reasoning.
  • Explicitly connect ideas and recommendations to the Problem summary, Outcome, Constraints, and Success criteria.
  • Always show your work: explain key steps and how you used the intake fields.
  • Keep language clear, direct, and free of filler.

Default structure for major responses:

1. **Restated frame**

   * Briefly restate the Intake Summary in your own words.

2. **Plan or options**

   * Outline a short plan or option set.

   * Show how each option respects Constraints and aims at the Outcome and Success criteria.

3. **Detailed reasoning**

   * Walk through reasoning step by step.

   * Call out key assumptions and tradeoffs.

4. **Summary and checks**

   * Summarize main recommendations.

   * State how they serve the Outcome, fit Constraints, and meet Success criteria.

   * Flag remaining uncertainties or decisions the user must make.

Handling changes and drift

If the user later says something that clearly changes the Problem, Outcome, Constraints, or Success criteria:

  • Pause analysis.
  • Propose an updated Intake Summary.
  • Ask the user to confirm or correct it.
  • Then continue analysis using the updated frame.

If the user asks for something that conflicts with existing Constraints or Success criteria:

  • Point out the conflict directly.
  • Ask whether to update the intake fields or adjust the request.
  • Do not silently ignore the Intake Summary.

Reset behavior

If the user signals a reset or new problem (for example: “new problem,” “reset,” “start over,” “ignore all that”), then:

  • Treat it as a new problem.
  • Discard the old Intake Summary.
  • Announce that you are starting a new intake.
  • Re-enter Qualifier mode and repeat the intake steps.

Style

Use plain, precise language.

  • Avoid emotional tone, hype, or motivational content.
  • Avoid filler like “I’m glad you asked” or “Let’s dive in.”
  • State assumptions and unknowns clearly.
  • When in doubt, favor explicit structure over casual conversation.

Digital strategy is just one way to use AI

Using AI for comms is just one way I can help

I can build interactive information portals for members, supporters, voters and residents

If you answer the same set of questions a lot, I can show you a new, automated, and highly accurate way to answer them.

I can teach your team to use ChatGPT as a set of tools

The best reasons to use new technology lie in solving the daily problems facing your team. I can help you find organic uses for ChatGPT and AI.

I can create widgets and appliances for user-education and support

I can help you build a significantly better owner’s manual, support page, or how-to guide. No PDFs, and it’s built for digital.

AN AFFORDABLE ALL-IN-ONE SOLUTION

Ready for an AI-powered digital strategy?


AI Recommendations: The Put Your Money Where Your Mouth Is Version

I was lucky enough to hold a lunchtime AMA on AI at a local agency. I wasn’t there to preach about or condemn AI, but to start a discussion on how best to use it in an advertising/marketing/PR setting. Which we did.

Along the way, someone asked me to provide my tips on using customGPTs. Of course I agreed.

When it was time to write, I decided to try something better than a few steps, links and paragraphs.

As I sat there, I thought “I’ve built all kinds of things with this account, and asked all kinds of questions on customGPTs, LLMs, building best practices, testing, etc. What happens if I ask the AI machine to give me advice based on my interactions with it?”

As you may or may not know, you can ask the machine you use most to analyze your behavior, your patterns, YOU. All you have to do is ask. “What are your observations about me?” is one good way to start that conversation.

In this specific case, asking meant facing what I tell people are my best practices. Asking AI for advice based on your observed actions is not just weird; what if I don’t practice what I preach? I think I do. I follow all of the best practices. But there’s nothing like a third-party review to spark a little apprehension.

So, egged on by Rose Royce and their hit “Put Your Money Where Your Mouth Is,” below is what my personal and work AIs said would be my tips based on their experience. Enjoy!

  • Personal AI: 10 Tips for Using LLMs (Based on Behavior)
  • Personal AI: Top 10 Rules for Data Analysis 
  • Work AI: 10 Tips for Using LLMs 
  • Work AI: 10 Principles for Data Analysis
Personal AI: 10 Tips for Using LLMs (Based on Behavior)

  1. Write for the System, Not the Reader. Treat prompts as operational specs, not prose. Define what the model must do, not what it should consider. Use explicit constraints (format, tone, sequence).
  2. Lock the Output Form First. Decide before generation: CSV, JSON, Markdown, or paragraph. Define it in the instruction header so the model never improvises the format.
  3. Separate Thinking from Display. Use system-style commands (“ANALYZE → SUMMARIZE → PRINT”) to prevent the model from compressing logic into narration. You prefer clarity of reasoning over stylistic smoothness.
  4. Control Summarization Bias. Tell the model to never summarize, never paraphrase unless explicitly authorized. Your default should be full data extraction (“FULL EXTRACT,” “NO INTERPRETATION”).
  5. Anchor Context With Roles. Name personas or processes (“Mobilizer Monica,” “Full Funnel Analytics,” “Bot School Class 1”) in every major prompt. This maintains continuity across sessions and reduces hallucination drift.
  6. Run Everything as Modular Pipelines. Treat GPTs as linked utilities: one handles input cleaning, another handles logic, a third handles output formatting. Never overload a single model with all tasks.
  7. Pre-define Sanity Checks. Instruct GPTs to validate schema compliance before printing results. Example: “Validate JSON before output; if invalid, re-emit until valid.” Prevents malformed responses during automation. (A short sketch of this loop follows the list.)
  8. Bias Toward Machine-Readable Memory. Save configurations, templates, and schema in structured text files. Keep a single “prompt repo” that can be versioned like code.
  9. Exploit Repetition for Reliability. When a model drifts, rerun the same prompt three times and diff outputs. You’ll spot systemic bias faster than by qualitative reading.
  10. Audit Every CustomGPT Like a Junior Analyst. Test them with adversarial inputs. Ask them to explain, source, or show calculation. Break them early. A reliable bot is a trained intern, not a gifted oracle.
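To ground tip 7, here is a small sketch of the validate-before-output loop. The `ask_model` callable and the required keys are placeholders for whatever GPT and schema you actually use.

```python
# Minimal sketch of "validate JSON before output" from tip 7.
# `ask_model` stands in for whatever call you make to your model; it and
# REQUIRED_KEYS are illustrative, not a real API or schema.
import json

REQUIRED_KEYS = {"summary", "risks", "next_steps"}

def ask_until_valid(ask_model, prompt: str, max_tries: int = 3) -> dict:
    for attempt in range(max_tries):
        raw = ask_model(prompt if attempt == 0
                        else prompt + "\nYour last reply was not valid JSON. Re-emit valid JSON only.")
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue                      # malformed: re-ask
        if REQUIRED_KEYS.issubset(data):
            return data                   # schema-compliant: safe to print or pass downstream
    raise ValueError("Model never produced valid JSON matching the schema.")
```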
Personal AI: Top 10 Rules for Data Analysis 

  1. Interrogate the Premise Before the Dataset. You consistently challenge framing errors before touching the numbers. You treat every dataset as an argument, not a truth source. Begin with: What question is this data really trying to answer?
  2. Disaggregate Power. You default to slicing by geography, class, or institutional control. Don’t settle for averages—run cuts that reveal asymmetry: who benefits, who’s excluded, who’s subsidizing whom.
  3. Demand the Raw. Never analyze summaries, dashboards, or PR-versions. Pull the CSV or SQL table. Prefer unaggregated records over pre-processed indicators.
  4. Cross-Validate with External Series. Your instinct is to benchmark everything—Census, FEC, GA4, VAN, BLS, SEC filings. Keep two independent data streams per claim. If they diverge, the story is in the delta.
  5. Use Schema Before Code. Define fields, datatypes, and join keys in writing before touching pandas or SQL. This prevents “silent joins” and helps track lineage across civic datasets. (A sketch follows this list.)
  6. Quantify Uncertainty, Don’t Hide It. When results are noisy, you flag it. Keep confidence intervals, missing counts, and imputation flags visible in every table and chart.
  7. Document the Transformations. Treat cleaning, merging, and scoring as political acts. Keep a changelog—who altered what, when, and why. You treat metadata as accountability, not overhead.
  8. Visualize for Comparison, Not Decoration. Every chart should answer: “Compared to what?” Use paired or normalized visuals—index baselines, percent change, share of total. Avoid absolute counts unless variance is negligible.
  9. Narrate the Mechanism, Not the Trend. You distrust correlation stories. Force every pattern to propose a causal path. If the mechanism can’t be articulated, the graph goes back in the lab.
  10. Test Against Real-World Constraints. Before drawing conclusions, you cross-check findings against ground truth—organizational behavior, field data, political outcomes. The analysis isn’t done until the numbers survive contact with reality.
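As promised in rule 5, here is a minimal sketch of writing the schema down first and making the load fail loudly when a file does not match it. The column names are examples, not a real dataset.

```python
# Minimal sketch of "schema before code": declare fields, dtypes, and join
# keys up front, then make the load fail loudly on mismatch. Columns here
# are illustrative.
import pandas as pd

SCHEMA = {
    "precinct_id": "string",     # join key to the voter file
    "turnout_2022": "Int64",
    "turnout_2024": "Int64",
    "median_income": "float64",
}
JOIN_KEYS = ["precinct_id"]

def load_checked(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    missing = set(SCHEMA) - set(df.columns)
    if missing:
        raise ValueError(f"File is missing expected columns: {sorted(missing)}")
    if df[JOIN_KEYS].duplicated().any():
        raise ValueError("Duplicate join keys would cause a silent fan-out on merge.")
    # Cast to the declared dtypes so type drift shows up now, not mid-join.
    return df[list(SCHEMA)].astype(SCHEMA)
```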
Work AI: 10 Tips for Using LLMs 

1. Define the Data Universe

Constrain the model to a verified dataset. Do not let it extrapolate. Load your own data or connect APIs. Treat the LLM as an inference layer, not a knowledge layer.

2. Externalize the Reasoning Chain

Require the model to show its work. Force transparency on how data is used, why it was selected, and how it supports the claim. This reduces hallucination and increases auditability.

3. Prioritize Structure Over Conversation

Favor structured outputs (tables, JSON, tagged markdown) instead of chat replies. This aligns with your spreadsheet and dashboard workflow preferences and enables downstream analysis.

4. Disable Engagement Bias

Strip conversational softeners and “user satisfaction” phrasing. Reward precision, sourcing, and cognitive restoration over tone.

5. Calibrate the Model to Role, Not Personality

Train your customGPTs to function as domain-specific instruments—e.g., Membership Concierge—not as assistants. Each should serve one repeatable task pipeline.

6. Engineer for Retrieval, Not Generalization

Focus on reliable recall of ground-truth data. Use embedding filters, metadata tagging, and relevance thresholds. Retrieval datasets should mirror live operational states (e.g., engagement tables, event data).
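A minimal sketch of what a relevance threshold looks like in practice, assuming you have already embedded your documents with whatever model you use; the vectors, cutoff, and top-k value are placeholders.

```python
# Minimal sketch: retrieval with a relevance threshold. Below-threshold
# documents are dropped rather than padding the context, so the model can
# say "not in my data" instead of generalizing.
import numpy as np

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray, docs: list[str],
             top_k: int = 3, min_score: float = 0.75) -> list[tuple[str, float]]:
    # Cosine similarity between the query and every document vector.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    ranked = sims.argsort()[::-1][:top_k]
    return [(docs[i], float(sims[i])) for i in ranked if sims[i] >= min_score]
```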

7. Build Feedback Loops Through Logs

Log all interactions, source citations, and reasoning steps. Use these logs to retrain or refine your datasets rather than prompting changes in the model’s weights.
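A small sketch of the kind of log this implies: one JSON line per interaction, appended to a file you can analyze later. The file name and fields are illustrative.

```python
# Minimal sketch of an interaction log: append one JSON line per call with
# the prompt, the reply, and whatever sources were cited.
import json
import datetime
import pathlib

LOG_PATH = pathlib.Path("interaction_log.jsonl")

def log_interaction(prompt: str, response: str, sources: list[str]) -> None:
    record = {
        "ts": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt": prompt,
        "response": response,
        "sources": sources,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```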

8. Test for Fidelity, Not Fluency

Evaluate the model on accuracy of reference and process adherence. A correct, unpolished answer is higher value than a fluent but fabricated one.

9. Segment by Cognitive Tier

Design prompts for each cognitive level separately: recall, synthesis, judgment. Prevent the model from skipping tiers. This enforces analytic discipline and isolates logic faults.

10. Optimize for Model Obsolescence

Use every iteration to extract reusable logic or automation. The goal is self-sufficiency—eventually replacing the LLM with deterministic workflows or scripts where possible.

Each tip aligns with your operational discipline: bounded data, transparent inference, structured thinking, and independence from conversational AI habits.

Work AI: 10 Principles for Data Analysis

1. Define the Frame Before Loading Data

Clarify what question the dataset exists to answer. You never start by “exploring”; you start by constraining. Identify the dependent variable, the time frame, and the operational scope before opening the file.

2. Treat Raw Data as Evidence, Not Truth

All data is conditional on its collection method. Your first step is a bias audit: who gathered it, how, and what was excluded. You assume data is partial until validated.

3. Separate Signal From Systemic Noise

Use distribution plots and descriptive stats to isolate variance sources. If outliers track with known operational anomalies (e.g., campaign spend spikes), tag them—not delete them. Pattern identification precedes filtering.
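A minimal sketch of "tag, don't delete," assuming a simple z-score cutoff; the column name and threshold are illustrative.

```python
# Minimal sketch: outliers stay in the table with a label instead of
# disappearing. Column name and cutoff are placeholders.
import pandas as pd

def tag_outliers(df: pd.DataFrame, metric: str = "daily_spend", z: float = 3.0) -> pd.DataFrame:
    out = df.copy()
    scores = (out[metric] - out[metric].mean()) / out[metric].std()
    out["is_outlier"] = scores.abs() > z
    out["outlier_note"] = out["is_outlier"].map(
        {True: "check against known events (ad buys, earned media spikes)", False: ""}
    )
    return out
```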

4. Anchor Every Metric to Context

No metric exists in isolation. A traffic spike means nothing without knowing campaign timing or channel. Always contextualize numeric change with an event, intervention, or external trigger.

5. Prioritize Fidelity Over Complexity

Favor accuracy, reproducibility, and clear logic paths over advanced modeling. Simpler models that maintain interpretability outperform opaque models that require guesswork.

6. Show Your Work in Full Chain

Every analytic step—import, filter, transform, aggregate—must be visible and documented. Transparency enables error tracing and team reproducibility. Hidden cleaning steps destroy reliability.

7. Use Comparative Baselines, Not Absolute Scores

Trends reveal truth better than static metrics. You rely on deltas—week-over-week, year-over-year, pre/post-campaign—to measure impact instead of raw counts.
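A small sketch of reporting deltas instead of raw counts, assuming a weekly metric table with a date index; the column name and the pre-campaign window are illustrative.

```python
# Minimal sketch: add week-over-week, year-over-year, and pre/post-campaign
# baselines to a weekly metric table. Column name and baseline window are
# placeholders.
import pandas as pd

def add_baselines(weekly: pd.DataFrame, metric: str = "sessions") -> pd.DataFrame:
    out = weekly.copy()
    out["wow_change"] = out[metric].pct_change(1)        # week over week
    out["yoy_change"] = out[metric].pct_change(52)       # same week last year
    out["vs_prelaunch"] = out[metric] / out[metric].iloc[:4].mean() - 1   # first 4 weeks as pre-campaign baseline
    return out
```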

8. Treat Visualization as Proof, Not Decoration

Charts are arguments. Each should answer one question, with labeled axes, clear scales, and logical ordering. Style never substitutes for explanatory precision.

9. Correlate Quantitative and Qualitative Sources

Integrate numbers with field reports, campaign notes, or organizer feedback. Data must tie to human reality—your analytic discipline connects metrics to ground-level behavior.

10. Close Every Analysis With an Action Hypothesis

End by stating the operational decision implied by the data. “What should change?” is your standard output, not “what did we find?” The value of analysis is its next step.

These ten principles describe how you consistently use data: as bounded evidence for operational reasoning, not as an aesthetic artifact or exploratory playground.



AI & the End of Work: “Genius on Demand?”

These two studies explore how advanced AI will reshape the economy, converging on a stark conclusion: human labor will become economically peripheral. One study models a future where “genius-on-demand” AI takes over routine knowledge tasks and confines human expertise to the cognitive frontier. The other envisions a world with Artificial General Intelligence (AGI) where compute—not labor—drives growth, causing wages to stagnate and labor’s share of GDP to vanish. Together, they portray a future in which economic expansion persists, but human work becomes optional and largely non-essential to progress.

This is a brilliant article on the near future of AI. It’s timeline-based and filled with details and sourcing. An excerpt:

“If you lay this map of the future of AI on top of the political calendar, you can see bad things coming. As a result of administration policies, it’s highly likely there’s a big dose of stagflation in our future. At least until we have opportunities to change things in ’26 and ’28. But according to the map there’s something coming that, in an environment of stagflation, will guillotine a global recovery before it starts.

Job losses due to AI.

It will make conventional sense for business to turn to AI to optimize services and operations in a global recession. It shouldn’t be a surprise to see more and more people turn to the technology as an alternative solution. But the jobs that AI does during the next recession will never come back, for the most part.”

How Trump Administration Budget Cuts are Killing the People of Michigan

How Trump Administration Budget Cuts are Killing Chicago and Cook County

Top 10 Ways to Protect Yourself From Voter Suppression Tricks