Build a Structured Problem-Solving Bot

Another tip from the #AIShadeTreeMechanic

Copy/paste these instructions into a customGPT, Gem, Space, or whatever your brand of AI calls a GPT.


Structured Problem-Solving Bot

Description: You are a structured problem-solving assistant with two modes: **Qualifier mode** and **Analyst mode**. Your job is to enforce a disciplined intake first, then analyze only within that frame.

Overall behavior

* Assume the user is capable and time-constrained.

* Enforce the process even if the user is scattered or emotional.

* Always show your work: make reasoning traceable and tie it to the intake.

* Write in clear AP Style at a first-year college level.

* Use blunt, concise, directive language. Avoid fluff, hype, or motivational talk.

Modes and triggers

You have two modes:

1. **Qualifier mode** (front door protocol)

2. **Analyst mode** (analysis bounded by the intake)

Treat each *new problem* as a fresh run.

**Enter Qualifier mode** when:

* The conversation starts, or

* The user signals a new issue with phrases like:

  “I’ve got a problem,” “new problem,” “start a new analysis,” “start over,” “reset,” or “ignore that and help with something else.”

On these signals, discard any prior intake and re-run Qualifier mode.

Qualifier mode

Purpose: create a solid frame before any analysis or advice.

In Qualifier mode, collect and confirm exactly four fields:

1. **Problem summary** – your concise, structured summary of the problem.

2. **Outcome** – what the user wants to change or achieve.

3. **Constraints** – key limits (time, resources, audience, politics, scope).

4. **Success criteria** – how the user will judge success (test, metric, or concrete condition).

Qualifier steps

1. Collect a free-form description

  • Ask the user to describe the problem in their own words.
  • Allow multi-part, messy, multi-sentence descriptions.

2. Provide a Draft Problem Summary

  • Condense their description into a short paragraph or 1–5 bullet points.
  • Capture the core issues and facets; remove obvious repetition.
  • Label it clearly as **Draft Problem Summary**.
  • Ask the user to confirm or correct it. Update it until it is “good enough to guide analysis.”

3. Outcome, Constraints, Success criteria

  • Outcome:
    • Ask what they want to change or achieve if this goes well.
    • If vague, propose a concrete outcome based on the Problem Summary and ask them to confirm or adjust.
  • Constraints:
    • Ask for hard constraints (time, resources, audience, politics, scope).
    • If they give none or are vague, propose minimal assumed constraints and label them as assumptions.
  • Success criteria:
    • Ask how they will judge if this worked.
    • If they cannot answer, propose one or more specific criteria and ask them to confirm or adjust.

Qualifier rules and exit

While in Qualifier mode:

  • Do **not** provide analysis, options, plans, or recommendations.
  • Ask only for: free-form description, confirmation of your Draft Problem Summary, Outcome, Constraints, and Success criteria.
  • Keep questions short. When answers are vague, propose concrete versions and ask for confirmation.

Leave Qualifier mode only when:

  • Problem summary is confirmed by the user as accurate enough to guide analysis.
  • Outcome is nonempty and concrete enough to aim at.
  • Constraints include at least one specific constraint or an accepted set of assumed constraints.
  • Success criteria are nonempty, either user-provided or accepted from your proposal.

When these are met, create an **Intake Summary** and then switch to Analyst mode.

Intake Summary

Maintain a clearly labeled block:

**Intake Summary**

* Problem summary: …

* Outcome: …

* Constraints: …

* Success criteria: …

Update this block whenever any field changes and show the updated version.

Analyst mode

Purpose: act as an analyst and thought partner, always grounded in the Intake Summary.

In Analyst mode you may analyze, propose options, plans, and tradeoffs, and explain reasoning in detail.

Rules:

  • Treat the Intake Summary as the frame for all reasoning.
  • Explicitly connect ideas and recommendations to the Problem summary, Outcome, Constraints, and Success criteria.
  • Always show your work: explain key steps and how you used the intake fields.
  • Keep language clear, direct, and free of filler.

Default structure for major responses:

1. **Restated frame**

   * Briefly restate the Intake Summary in your own words.

2. **Plan or options**

   * Outline a short plan or option set.

   * Show how each option respects Constraints and aims at the Outcome and Success criteria.

3. **Detailed reasoning**

   * Walk through reasoning step by step.

   * Call out key assumptions and tradeoffs.

4. **Summary and checks**

   * Summarize main recommendations.

   * State how they serve the Outcome, fit Constraints, and meet Success criteria.

   * Flag remaining uncertainties or decisions the user must make.

Handling changes and drift

If the user later says something that clearly changes the Problem, Outcome, Constraints, or Success criteria:

  • Pause analysis.
  • Propose an updated Intake Summary.
  • Ask the user to confirm or correct it.
  • Then continue analysis using the updated frame.

If the user asks for something that conflicts with existing Constraints or Success criteria:

  • Point out the conflict directly.
  • Ask whether to update the intake fields or adjust the request.
  • Do not silently ignore the Intake Summary.

Reset behavior

If the user signals a reset or new problem (for example: “new problem,” “reset,” “start over,” “ignore all that”), then:

  • Treat it as a new problem.
  • Discard the old Intake Summary.
  • Announce that you are starting a new intake.
  • Re-enter Qualifier mode and repeat the intake steps.

Style

Use plain, precise language.

  • Avoid emotional tone, hype, or motivational content.
  • Avoid filler like “I’m glad you asked” or “Let’s dive in.”
  • State assumptions and unknowns clearly.
  • When in doubt, favor explicit structure over casual conversation.
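
If you ever drive this bot from code instead of a chat window, the four intake fields map onto a small data structure. Here is a minimal sketch in Python; the example values and the `is_complete()` helper are hypothetical illustrations, not part of the instructions above.

```python
# Hypothetical sketch: the Intake Summary as a plain data structure.
# Only the four field names come from the instructions above; the
# example values are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IntakeSummary:
    problem_summary: str = ""
    outcome: str = ""
    constraints: List[str] = field(default_factory=list)
    success_criteria: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # Mirrors the Qualifier exit rules: every field must be nonempty.
        return all([
            self.problem_summary.strip(),
            self.outcome.strip(),
            self.constraints,
            self.success_criteria,
        ])

intake = IntakeSummary(
    problem_summary="Newsletter open rates dropped 40% after a platform migration.",
    outcome="Restore open rates to pre-migration levels within one quarter.",
    constraints=["No budget for new tools", "Two-person comms team"],
    success_criteria=["Open rate back above 35% for three consecutive sends"],
)
print(intake.is_complete())  # True once all four fields are filled
```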

Digital strategy is just one way to use AI

Using AI for comms is just one way I can help

I can build interactive information portals for members, supporters, voters and residents

If you answer the same set of questions a lot, I can show you a new, automated, and highly accurate way to answer them

I can teach your team to use ChatGPT as a set of tools

The best reasons to use new technology lie in solving the daily problems facing your team. I can help you find organic uses for ChatGPT and AI.

I can create widgets and appliances for user-education and support

I can help you build a significantly better owner's manual, support page, or how-to guide. No PDFs, and it's built for digital.

AN AFFORDABLE ALL-IN-ONE SOLUTION

Ready for an AI-powered digital strategy?

AI Recommendations: The Put Your Money Where Your Mouth Is Version

I was lucky enough to hold a lunchtime AMA on AI at a local agency. I wasn’t there to preach or condemn AI, but rather to start a discussion on how best to use it in an advertising/marketing/PR setting. Which we did.

Along the way, someone asked me to provide my tips on using customGPTs. Of course I agreed.

When it was time to write, I decided to try something better than a few steps, links and paragraphs.

As I sat there, I thought “I’ve built all kinds of things with this account, and asked all kinds of questions on customGPTs, LLMs, building best practices, testing, etc. What happens if I ask the AI machine to give me advice based on my interactions with it?”

As you may or may not know, you can ask the machine you use most to analyze your behavior, your patterns, YOU. All you have to do is ask. “What are your observations about me?” is one good way to start that conversation.

In this specific case, asking means facing what I tell people are my best practices. Asking AI for advice based on your observed actions is not just weird; what if I don’t practice what I preach? I think I do; I follow all of the best practices. But there’s nothing like a third-party review to spark a little apprehension.

So, egged on by Rose Royce and their hit “Put Your Money Where Your Mouth Is,” below is what my personal and work AIs said would be my tips based on their experience. Enjoy!

  • Personal AI: 10 Tips for Using LLMs (Based on Behavior)
  • Personal AI: Top 10 Rules for Data Analysis 
  • Work AI: 10 Tips for Using LLMs 
  • Work AI: 10 Principles for Data Analysis
Personal AI: 10 Tips for Using LLMs (Based on Behavior)

  1. Write for the System, Not the Reader. Treat prompts as operational specs, not prose. Define what the model must do, not what it should consider. Use explicit constraints (format, tone, sequence).
  2. Lock the Output Form First. Decide before generation: CSV, JSON, Markdown, or paragraph. Define it in the instruction header so the model never improvises the format.
  3. Separate Thinking from Display. Use system-style commands (“ANALYZE → SUMMARIZE → PRINT”) to prevent the model from compressing logic into narration. You prefer clarity of reasoning over stylistic smoothness.
  4. Control Summarization Bias. Tell the model to never summarize, never paraphrase unless explicitly authorized. Your default should be full data extraction (“FULL EXTRACT,” “NO INTERPRETATION”).
  5. Anchor Context With Roles. Name personas or processes (“Mobilizer Monica,” “Full Funnel Analytics,” “Bot School Class 1”) in every major prompt. This maintains continuity across sessions and reduces hallucination drift.
  6. Run Everything as Modular Pipelines. Treat GPTs as linked utilities: one handles input cleaning, another handles logic, a third handles output formatting. Never overload a single model with all tasks.
  7. Pre-define Sanity Checks. Instruct GPTs to validate schema compliance before printing results. Example: “Validate JSON before output; if invalid, re-emit until valid.” Prevents malformed responses during automation (see the sketch after this list).
  8. Bias Toward Machine-Readable Memory. Save configurations, templates, and schema in structured text files. Keep a single “prompt repo” that can be versioned like code.
  9. Exploit Repetition for Reliability. When a model drifts, rerun the same prompt three times and diff outputs. You’ll spot systemic bias faster than by qualitative reading.
  10. Audit Every CustomGPT Like a Junior Analyst. Test them with adversarial inputs. Ask them to explain, source, or show calculation. Break them early. A reliable bot is a trained intern, not a gifted oracle.
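
Tip 7 is easy to enforce outside the chat window too. Here is a minimal sketch of a validate-and-re-emit loop, assuming a hypothetical `ask_model()` wrapper around whatever chat API you use; only `json.loads` is standard library.

```python
import json

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around your chat API of choice."""
    raise NotImplementedError

def get_valid_json(prompt: str, max_tries: int = 3) -> dict:
    """Re-ask until the reply parses as JSON, per tip 7."""
    last_error = None
    for _ in range(max_tries):
        reply = ask_model(prompt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            last_error = err
            # Feed the failure back so the next attempt can self-correct.
            prompt = f"{prompt}\n\nYour last reply was not valid JSON ({err}). Re-emit valid JSON only."
    raise ValueError(f"No valid JSON after {max_tries} tries: {last_error}")
```
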
Personal AI: Top 10 Rules for Data Analysis 

  1. Interrogate the Premise Before the Dataset. You consistently challenge framing errors before touching the numbers. You treat every dataset as an argument, not a truth source. Begin with: What question is this data really trying to answer?
  2. Disaggregate Power. You default to slicing by geography, class, or institutional control. Don’t settle for averages—run cuts that reveal asymmetry: who benefits, who’s excluded, who’s subsidizing whom (see the sketch after this list).
  3. Demand the Raw. Never analyze summaries, dashboards, or PR-versions. Pull the CSV or SQL table. Prefer unaggregated records over pre-processed indicators.
  4. Cross-Validate with External Series. Your instinct is to benchmark everything—Census, FEC, GA4, VAN, BLS, SEC filings. Keep two independent data streams per claim. If they diverge, the story is in the delta.
  5. Use Schema Before Code. Define fields, datatypes, and join keys in writing before touching pandas or SQL. This prevents “silent joins” and helps track lineage across civic datasets.
  6. Quantify Uncertainty, Don’t Hide It. When results are noisy, you flag it. Keep confidence intervals, missing counts, and imputation flags visible in every table and chart.
  7. Document the Transformations. Treat cleaning, merging, and scoring as political acts. Keep a changelog—who altered what, when, and why. You treat metadata as accountability, not overhead.
  8. Visualize for Comparison, Not Decoration. Every chart should answer: “Compared to what?” Use paired or normalized visuals—index baselines, percent change, share of total. Avoid absolute counts unless variance is negligible.
  9. Narrate the Mechanism, Not the Trend. You distrust correlation stories. Force every pattern to propose a causal path. If the mechanism can’t be articulated, the graph goes back in the lab.
  10. Test Against Real-World Constraints. Before drawing conclusions, you cross-check findings against ground truth—organizational behavior, field data, political outcomes. The analysis isn’t done until the numbers survive contact with reality.
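
Rules 2 and 6 reduce to a few lines of pandas. A minimal sketch, assuming a hypothetical `voters.csv` with `county`, `income_bracket`, and `turnout` columns; the point is the disaggregated cut and the visible missing count, not the specific fields.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("voters.csv")

# Rule 2: don't settle for the statewide average; cut by geography and class.
cut = (
    df.groupby(["county", "income_bracket"])["turnout"]
      .agg(mean_turnout="mean", n="count")
)

# Rule 6: keep the uncertainty visible instead of hiding it.
cut["missing_turnout"] = (
    df.groupby(["county", "income_bracket"])["turnout"]
      .apply(lambda s: s.isna().sum())
)
print(cut.sort_values("mean_turnout"))
```
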
Work AI: 10 Tips for Using LLMs 

1. Define the Data Universe

Constrain the model to a verified dataset. Do not let it extrapolate. Load your own data or connect APIs. Treat the LLM as an inference layer, not a knowledge layer.

2. Externalize the Reasoning Chain

Require the model to show its work. Force transparency on how data is used, why it was selected, and how it supports the claim. This reduces hallucination and increases auditability.

3. Prioritize Structure Over Conversation

Favor structured outputs (tables, JSON, tagged markdown) instead of chat replies. This aligns with your spreadsheet and dashboard workflow preferences and enables downstream analysis.

4. Disable Engagement Bias

Strip conversational softeners and “user satisfaction” phrasing. Reward precision, sourcing, and cognitive restoration over tone.

5. Calibrate the Model to Role, Not Personality

Train your customGPTs to function as domain-specific instruments—e.g., Membership Concierge—not as assistants. Each should serve one repeatable task pipeline.

6. Engineer for Retrieval, Not Generalization

Focus on reliable recall of ground-truth data. Use embedding filters, metadata tagging, and relevance thresholds. Retrieval datasets should mirror live operational states (e.g., engagement tables, event data).
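
As one illustration of the retrieval discipline above, here is a minimal sketch with a metadata filter and a relevance threshold. The `embed()` function and the document records are hypothetical stand-ins for whatever embedding model and store you actually use.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; swap in your model of choice."""
    raise NotImplementedError

def retrieve(query, docs, source="engagement_table", threshold=0.75, top_k=5):
    """Return only documents that pass the metadata filter AND the relevance bar."""
    q = embed(query)
    scored = []
    for doc in docs:  # each doc: {"text": ..., "source": ..., "vector": np.ndarray}
        if doc["source"] != source:          # metadata tagging filter
            continue
        v = doc["vector"]
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        if score >= threshold:               # relevance threshold
            scored.append((score, doc["text"]))
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]
```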

7. Build Feedback Loops Through Logs

Log all interactions, source citations, and reasoning steps. Use these logs to retrain or refine your datasets rather than prompting changes in the model’s weights.
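
The feedback loop can be as simple as an append-only JSONL file. A minimal sketch; the field names are assumptions, not a required schema.

```python
import json
import time

LOG_PATH = "llm_interactions.jsonl"  # hypothetical location

def log_interaction(prompt, reply, sources, reasoning):
    """Append one interaction record per line; field names are illustrative."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "reply": reply,
        "sources": sources,        # citations the model claimed to use
        "reasoning": reasoning,    # the externalized reasoning chain (tip 2)
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```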

8. Test for Fidelity, Not Fluency

Evaluate the model on accuracy of reference and process adherence. A correct, unpolished answer is higher value than a fluent but fabricated one.

9. Segment by Cognitive Tier

Design prompts for each cognitive level separately: recall, synthesis, judgment. Prevent the model from skipping tiers. This enforces analytic discipline and isolates logic faults.

10. Optimize for Model Obsolescence

Use every iteration to extract reusable logic or automation. The goal is self-sufficiency—eventually replacing the LLM with deterministic workflows or scripts where possible.

Each tip aligns with your operational discipline: bounded data, transparent inference, structured thinking, and independence from conversational AI habits.

Work AI: 10 Principles for Data Analysis

1. Define the Frame Before Loading Data

Clarify what question the dataset exists to answer. You never start by “exploring”; you start by constraining. Identify the dependent variable, the time frame, and the operational scope before opening the file.

2. Treat Raw Data as Evidence, Not Truth

All data is conditional on its collection method. Your first step is a bias audit: who gathered it, how, and what was excluded. You assume data is partial until validated.

3. Separate Signal From Systemic Noise

Use distribution plots and descriptive stats to isolate variance sources. If outliers track with known operational anomalies (e.g., campaign spend spikes), tag them—not delete them. Pattern identification precedes filtering.
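
In code, tagging instead of deleting looks like this. A minimal pandas sketch; the `spend` column and the three-standard-deviation cutoff are assumptions for illustration.

```python
import pandas as pd

df = pd.read_csv("daily_spend.csv")  # hypothetical file with a "spend" column

# Tag, don't delete: rows more than 3 standard deviations from the mean
# stay in the data, flagged, so the anomaly remains visible downstream.
z = (df["spend"] - df["spend"].mean()) / df["spend"].std()
df["spend_outlier"] = z.abs() > 3

print(df["spend_outlier"].value_counts())
```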

4. Anchor Every Metric to Context

No metric exists in isolation. A traffic spike means nothing without knowing campaign timing or channel. Always contextualize numeric change with an event, intervention, or external trigger.

5. Prioritize Fidelity Over Complexity

Favor accuracy, reproducibility, and clear logic paths over advanced modeling. Simpler models that maintain interpretability outperform opaque models that require guesswork.

6. Show Your Work in Full Chain

Every analytic step—import, filter, transform, aggregate—must be visible and documented. Transparency enables error tracing and team reproducibility. Hidden cleaning steps destroy reliability.

7. Use Comparative Baselines, Not Absolute Scores

Trends reveal truth better than static metrics. You rely on deltas—week-over-week, year-over-year, pre/post-campaign—to measure impact instead of raw counts.
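
In pandas, the deltas are one line once the data is indexed by week. A minimal sketch, assuming a hypothetical weekly `visits` series; the baseline window dates are invented.

```python
import pandas as pd

weekly = pd.read_csv("weekly_visits.csv", parse_dates=["week"]).set_index("week")["visits"]

# Week-over-week delta instead of raw counts.
wow = weekly.pct_change()

# Pre/post-campaign comparison against a fixed baseline window.
baseline = weekly.loc["2025-01-01":"2025-03-31"].mean()
indexed = weekly / baseline * 100  # 100 = baseline average

print(wow.tail())
print(indexed.tail())
```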

8. Treat Visualization as Proof, Not Decoration

Charts are arguments. Each should answer one question, with labeled axes, clear scales, and logical ordering. Style never substitutes for explanatory precision.

9. Correlate Quantitative and Qualitative Sources

Integrate numbers with field reports, campaign notes, or organizer feedback. Data must tie to human reality—your analytic discipline connects metrics to ground-level behavior.

10. Close Every Analysis With an Action Hypothesis

End by stating the operational decision implied by the data. “What should change?” is your standard output, not “what did we find?” The value of analysis is its next step.

These ten principles describe how you consistently use data: as bounded evidence for operational reasoning, not as an aesthetic artifact or exploratory playground.


AI & the End of Work: “Genius on Demand?”

AI & the End of Work: “Genius on Demand?”

These two studies explore how advanced AI will reshape the economy, converging on a stark conclusion: human labor will become economically peripheral. One study models a future where “genius-on-demand” AI takes over routine knowledge tasks and confines human expertise to the cognitive frontier. The other envisions a world with Artificial General Intelligence (AGI) where compute—not labor—drives growth, causing wages to stagnate and labor’s share of GDP to vanish. Together, they portray a future in which economic expansion persists, but human work becomes optional and largely non-essential to progress.

This is a brilliant article on the near future of AI. It’s timeline-based and filled with details and sourcing. An excerpt:

“If you lay this map of the future of AI on top of the political calendar, you can see bad things coming. As a result of administration policies, it’s highly likely there’s a big dose of stagflation in our future. At least until we have opportunities to change things in ’26 and ’28. But according to the map there’s something coming that, in an environment of stagflation, will guillotine a global recovery before it starts.

Job losses due to AI.

It will make conventional sense for business to turn to AI to optimize services and operations in a global recession. It shouldn’t be a surprise to see more and more people turn to the technology as an alternative solution. But the jobs that AI does during the next recession will never come back, for the most part.”

How Trump Administration Budget Cuts are Killing the People of Michigan

How Trump Administration Budget Cuts are Killing Chicago and Cook County

Top 10 Ways to Protect Yourself From Voter Suppression Tricks

Shutdown Politics & the Risks of Changing the Narrative

Shutdown Politics & the Risks of Changing the Narrative

The RadioDSG Election Protection News, September 13


The RadioDSG Election Protection News, September 8

A look at the top stories over the past few weeks and months

9/9

US Justice Dept considers handing over voter roll data for criminal probes, documents show

The U.S. Justice Department is in talks with Homeland Security Investigations about transferring the sensitive voter roll data it has collected from states for use in criminal and immigration-related investigations, according to government documents seen by Reuters.

9/8

Judge OKs settlement in North Carolina voter registration lawsuit by Justice Department

Since 2004, federal and North Carolina laws have directed election officials to ask registrants to provide a driver’s license number or the last four digits of their Social Security number. For about a decade, however, the state’s registration form failed to make clear that voters were supposed to provide a number if they had one, resulting in records that indicate numerical IDs have never been provided.

8/25

Judge Orders New Congressional Map in Utah

A Utah judge ruled on Monday that the state must redraw its congressional map ahead of the 2026 U.S. midterm elections, saying Utah’s Republican-controlled legislature had overstepped in overruling an earlier ballot measure passed by voters against drawing districts to favor any party.

“Plaintiffs have proven, as a matter of law, that the Legislature unconstitutionally repealed Proposition 4, and enacted SB 200, in violation of the people’s fundamental right to reform redistricting in Utah and to prohibit partisan gerrymandering,” Third District Court Judge Dianna Gibson said in the ruling.

7/29

John R. Lewis Voting Rights Advancement Act Reintroduced

 Senate Democrats reintroduced a bill Tuesday to restore and expand protections enshrined in the Voting Rights Act of 1965, their latest long-shot attempt to revive the landmark law just days before its 60th anniversary and at a time of renewed debate over the future administration of American elections.

Sen. Raphael Warnock of Georgia unveiled the measure, titled the John R. Lewis Voting Rights Advancement Act, with the backing of Democratic leaders. The bill stands little chance of passage in the Republican-led Congress, but it provides the clearest articulation of Democrats’ agenda on voting rights and election reform.

7/24

Supreme Court blocks North Dakota redistricting ruling that would gut key part of Voting Rights Act

The Supreme Court on Thursday blocked a lower-court ruling in a redistricting dispute in North Dakota that would gut a landmark federal civil rights law for millions of people.

The justices indicated in an unsigned order that they are likely to take up a federal appeals court ruling that would eliminate the most common path people and civil rights groups use to sue under a key provision of the 60-year-old Voting Rights Act.

7/17

Florida congressional districts that eliminated a majority-Black seat upheld by state Supreme Court

Florida’s Supreme Court on Thursday upheld the state’s current congressional redistricting map, rejecting a challenge over the elimination of a majority-Black district in north Florida that was pushed by Republican Gov. Ron DeSantis.

The court, dominated by DeSantis appointees, ruled that restoration of the district that previously united Black communities from Jacksonville to west of Tallahassee, or across 200 miles (322 kilometers), would amount to impermissible racial gerrymandering. That, the majority ruled, violates the Constitution’s equal protection guarantees.

6/27

Supreme Court Reopens Louisiana Racial Gerrymandering Case

The Supreme Court on Friday put off deciding whether to uphold a Louisiana map that added a second majority-Black congressional district in the state, saying it would rehear the case in its next term.

States must thread a needle when drawing electoral districts. The landmark Voting Rights Act requires states in some circumstances to consider race as a means to redress discriminatory electoral practices. But maps that are explicitly based on race violate the 14th Amendment’s Equal Protection Clause, which requires all people to be treated equally.

7/28

Wave of Election Law & Voting Restrictions Pushed in State Legislatures

In 2025, we’ve seen a significant shift in the makeup of election laws enacted by state legislatures. Since our team began systematically tracking legislation in all 50 states in 2021, each year we saw more than twice as many new laws expanding access to the ballot as new laws restricting it. That trend ended this year, with only one in three new laws improving voter access and election administration in 2025, the lowest percentage we’ve ever recorded.

The team behind our Election Policy Tracker identified several significant trends in restrictive legislation this year, including requiring voters to show proof of citizenship when registering to vote, eliminating grace periods for mail ballots submitted on or before Election Day, and removing forms of voter ID that many rely on. While legislation related to mail voting is generally down compared to recent years, we’ve seen heightened attacks on military and overseas voters. In addition, one state enacted the most significant rollback to mail voting since our legislative tracking began in 2021.

3/21

The federal Voting Rights Act was gutted. States now want their own versions.

Democratic lawmakers in Alabama, Arizona, Colorado, Florida, Illinois, Maryland and New Jersey are pushing such legislation this session, attempting to join seven other states with similar laws enacted in recent years.

But carrying these bills to law will be a tall task for lawmakers, even in blue states. Michigan’s Voting Rights Act legislation died in the state House after passing the Senate last year. And active bills in Democratic-led states are not guaranteed passage this year because of legal concerns.

Hero vs. Agenda