AI Recommendations: The Put Your Money Where Your Mouth Is Version

I was lucky enough to hold a lunchtime AMA on AI at a local agency. I wasn't there to preach about or condemn AI; I was there to start a discussion on how best to use it in an advertising/marketing/PR setting. Which we did.

Along the way, someone asked me to provide my tips on using customGPTs. Of course I agreed.

When it was time to write, I decided to try something better than a few steps, links and paragraphs.


As I sat there, I thought "I've built all kinds of things with this account, and asked all kinds of questions on customGPTs, LLMs, building best practices, testing, etc. What happens if I ask the AI machine to give me advice based on my interactions with it?"

As you may or may not know, you can ask the machine you use most to analyze your behavior, your patterns, YOU. All you have to do is ask. "What are your observations about me?" is one good way to start that conversation.

In this specific case, asking means facing what I tell people are my best practices. Asking AI for advice based on your observed actions isn't just weird; it's a little fraught. What if I don't practice what I preach? I think I do; I follow all of the best practices. But there's nothing like a third-party review to spark some apprehension.

So, egged on by Rose Royce and their hit "Put Your Money Where Your Mouth Is," here is what my personal and work AIs said my tips would be, based on their experience with me. Enjoy!

  • Personal AI: 10 Tips for Using LLMs (Based on Behavior)
  • Personal AI: Top 10 Rules for Data Analysis 
  • Work AI: 10 Tips for Using LLMs 
  • Work AI: 10 Principles for Data Analysis
Personal AI: 10 Tips for Using LLMs (Based on Behavior)


  1. Write for the System, Not the Reader. Treat prompts as operational specs, not prose. Define what the model must do, not what it should consider. Use explicit constraints (format, tone, sequence).
  2. Lock the Output Form First. Decide before generation: CSV, JSON, Markdown, or paragraph. Define it in the instruction header so the model never improvises the format.
  3. Separate Thinking from Display. Use system-style commands (“ANALYZE → SUMMARIZE → PRINT”) to prevent the model from compressing logic into narration. You prefer clarity of reasoning over stylistic smoothness.
  4. Control Summarization Bias. Tell the model to never summarize, never paraphrase unless explicitly authorized. Your default should be full data extraction (“FULL EXTRACT,” “NO INTERPRETATION”).
  5. Anchor Context With Roles. Name personas or processes (“Mobilizer Monica,” “Full Funnel Analytics,” “Bot School Class 1”) in every major prompt. This maintains continuity across sessions and reduces hallucination drift.
  6. Run Everything as Modular Pipelines. Treat GPTs as linked utilities: one handles input cleaning, another handles logic, a third handles output formatting. Never overload a single model with all tasks.
  7. Pre-define Sanity Checks. Instruct GPTs to validate schema compliance before printing results. Example: “Validate JSON before output; if invalid, re-emit until valid.” Prevents malformed responses during automation; a sketch of this loop follows the list.
  8. Bias Toward Machine-Readable Memory. Save configurations, templates, and schema in structured text files. Keep a single “prompt repo” that can be versioned like code.
  9. Exploit Repetition for Reliability. When a model drifts, rerun the same prompt three times and diff the outputs; you’ll spot systemic bias faster than by qualitative reading. This, too, is sketched after the list.
  10. Audit Every customGPT Like a Junior Analyst. Test them with adversarial inputs. Ask them to explain, source, or show calculation. Break them early. A reliable bot is a trained intern, not a gifted oracle.
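
For tip 7, here is a minimal sketch of the validate-before-print loop, assuming a hypothetical call_model() function standing in for whatever API or customGPT endpoint you actually use:

```python
import json

MAX_RETRIES = 3

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for your real LLM call (API, customGPT, etc.)."""
    raise NotImplementedError("wire this to your own endpoint")

def get_valid_json(prompt: str) -> dict:
    """Ask for JSON; if the reply doesn't parse, re-ask until it does or we give up."""
    for _ in range(MAX_RETRIES):
        reply = call_model(prompt)
        try:
            return json.loads(reply)  # the sanity check: valid JSON or nothing
        except json.JSONDecodeError as err:
            # Feed the parse error back so the model can self-correct on the next pass.
            prompt += f"\n\nYour last output was invalid JSON ({err}). Re-emit valid JSON only."
    raise ValueError(f"no valid JSON after {MAX_RETRIES} attempts")
```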
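
And tip 9's rerun-and-diff check, reusing the same hypothetical call_model() stand-in from above:

```python
import difflib

def diff_three_runs(prompt: str) -> None:
    """Run one prompt three times and diff consecutive outputs to surface drift."""
    runs = [call_model(prompt).splitlines() for _ in range(3)]
    for i in range(2):
        diff = difflib.unified_diff(
            runs[i], runs[i + 1],
            fromfile=f"run {i + 1}", tofile=f"run {i + 2}", lineterm="",
        )
        print("\n".join(diff) or f"runs {i + 1} and {i + 2} are identical")
```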
Personal AI: Top 10 Rules for Data Analysis 


  1. Interrogate the Premise Before the Dataset. You consistently challenge framing errors before touching the numbers. You treat every dataset as an argument, not a truth source. Begin with: What question is this data really trying to answer?
  2. Disaggregate Power. You default to slicing by geography, class, or institutional control. Don’t settle for averages; run cuts that reveal asymmetry: who benefits, who’s excluded, who’s subsidizing whom.
  3. Demand the Raw. Never analyze summaries, dashboards, or PR versions. Pull the CSV or SQL table. Prefer unaggregated records over pre-processed indicators.
  4. Cross-Validate with External Series. Your instinct is to benchmark everything: Census, FEC, GA4, VAN, BLS, SEC filings. Keep two independent data streams per claim. If they diverge, the story is in the delta.
  5. Use Schema Before Code. Define fields, datatypes, and join keys in writing before touching pandas or SQL. This prevents “silent joins” and helps track lineage across civic datasets. A sketch of this check follows the list.
  6. Quantify Uncertainty, Don’t Hide It. When results are noisy, you flag it. Keep confidence intervals, missing counts, and imputation flags visible in every table and chart.
  7. Document the Transformations. Treat cleaning, merging, and scoring as political acts. Keep a changelog: who altered what, when, and why. You treat metadata as accountability, not overhead.
  8. Visualize for Comparison, Not Decoration. Every chart should answer: “Compared to what?” Use paired or normalized visuals: index baselines, percent change, share of total. Avoid absolute counts unless variance is negligible.
  9. Narrate the Mechanism, Not the Trend. You distrust correlation stories. Force every pattern to propose a causal path. If the mechanism can’t be articulated, the graph goes back in the lab.
  10. Test Against Real-World Constraints. Before drawing conclusions, you cross-check findings against ground truth: organizational behavior, field data, political outcomes. The analysis isn’t done until the numbers survive contact with reality.
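
Rule 5 in practice might look like this: a sketch with invented field names, checking a written schema before any pandas work happens.

```python
import pandas as pd

# The schema, written down first: field -> expected dtype. Names are illustrative only.
SCHEMA = {"voter_id": "int64", "precinct": "object", "turnout_2024": "float64"}
JOIN_KEY = "voter_id"

def load_checked(path: str) -> pd.DataFrame:
    """Refuse to analyze a file that doesn't match the written schema."""
    df = pd.read_csv(path)
    missing = set(SCHEMA) - set(df.columns)
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    df = df[list(SCHEMA)].astype(SCHEMA)  # explicit casts: no silent type drift
    if df[JOIN_KEY].duplicated().any():
        raise ValueError(f"duplicate values in {JOIN_KEY}: a join would fan out")
    return df
```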
Work AI: 10 Tips for Using LLMs 

1. Define the Data Universe

Constrain the model to a verified dataset. Do not let it extrapolate. Load your own data or connect APIs. Treat the LLM as an inference layer, not a knowledge layer.

2. Externalize the Reasoning Chain

Require the model to show its work. Force transparency on how data is used, why it was selected, and how it supports the claim. This reduces hallucination and increases auditability.

3. Prioritize Structure Over Conversation

Favor structured outputs (tables, JSON, tagged markdown) instead of chat replies. This aligns with your spreadsheet and dashboard workflow preferences and enables downstream analysis.
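
As a sketch of what that enables downstream (the task and field names here are invented): demand JSON rows in the prompt, and the reply drops straight into a DataFrame instead of a chat transcript.

```python
import json
import pandas as pd

# Illustrative prompt: the instruction header pins the output form.
PROMPT = (
    "Return ONLY a JSON array; each element: "
    '{"channel": str, "visits": int, "conversions": int}. '
    "No prose, no markdown fences."
)

def reply_to_table(reply: str) -> pd.DataFrame:
    """Parse a JSON-only reply into a table for downstream analysis."""
    rows = json.loads(reply)  # fails loudly if the model chatted instead of complying
    return pd.DataFrame(rows, columns=["channel", "visits", "conversions"])
```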

4. Disable Engagement Bias

Strip conversational softeners and “user satisfaction” phrasing. Reward precision, sourcing, and cognitive restoration over tone.

5. Calibrate the Model to Role, Not Personality

Train your customGPTs to function as domain-specific instruments—e.g., Membership Concierge—not as assistants. Each should serve one repeatable task pipeline.

6. Engineer for Retrieval, Not Generalization

Focus on reliable recall of ground-truth data. Use embedding filters, metadata tagging, and relevance thresholds. Retrieval datasets should mirror live operational states (e.g., engagement tables, event data).
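
A minimal sketch of the metadata filter plus relevance threshold, with invented structures; it assumes you already have document embeddings from whatever embedding model you use:

```python
import numpy as np

def retrieve(query_vec: np.ndarray, docs: list[dict], tag: str,
             threshold: float = 0.8) -> list[dict]:
    """Return only docs that carry the required tag AND clear the similarity threshold."""
    hits = []
    for doc in docs:  # each doc: {"text": str, "vec": np.ndarray, "tags": set[str]}
        if tag not in doc["tags"]:
            continue  # metadata filter first: cheap and exact
        sim = float(np.dot(query_vec, doc["vec"])
                    / (np.linalg.norm(query_vec) * np.linalg.norm(doc["vec"])))
        if sim >= threshold:  # below the threshold, better to return nothing
            hits.append({**doc, "score": sim})
    return sorted(hits, key=lambda d: d["score"], reverse=True)
```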

7. Build Feedback Loops Through Logs

Log all interactions, source citations, and reasoning steps. Use these logs to retrain or refine your datasets rather than prompting changes in the model’s weights.
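
At its simplest, that log can be an append-only JSONL file; a sketch, with invented field names:

```python
import json
import time

LOG_PATH = "interaction_log.jsonl"

def log_interaction(prompt: str, reply: str, sources: list[str]) -> None:
    """Append one interaction per line; the file doubles as a refinement dataset."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "prompt": prompt,
        "reply": reply,
        "sources": sources,  # the citations the model claimed, for later fidelity audits
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```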

8. Test for Fidelity, Not Fluency

Evaluate the model on accuracy of reference and process adherence. A correct, unpolished answer is higher value than a fluent but fabricated one.

9. Segment by Cognitive Tier

Design prompts for each cognitive level separately: recall, synthesis, judgment. Prevent the model from skipping tiers. This enforces analytic discipline and isolates logic faults.

10. Optimize for Model Obsolescence

Use every iteration to extract reusable logic or automation. The goal is self-sufficiency—eventually replacing the LLM with deterministic workflows or scripts where possible.

Each tip aligns with your operational discipline: bounded data, transparent inference, structured thinking, and independence from conversational AI habits.

Work AI: 10 Principles for Data Analysis

1. Define the Frame Before Loading Data

Clarify what question the dataset exists to answer. You never start by “exploring”; you start by constraining. Identify the dependent variable, the time frame, and the operational scope before opening the file.

2. Treat Raw Data as Evidence, Not Truth

All data is conditional on its collection method. Your first step is a bias audit: who gathered it, how, and what was excluded. You assume data is partial until validated.

3. Separate Signal From Systemic Noise

Use distribution plots and descriptive stats to isolate variance sources. If outliers track with known operational anomalies (e.g., campaign spend spikes), tag them—not delete them. Pattern identification precedes filtering.
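
A sketch of tag-don't-delete, using a simple z-score flag on an invented spend column:

```python
import pandas as pd

def tag_outliers(df: pd.DataFrame, col: str = "daily_spend", z: float = 3.0) -> pd.DataFrame:
    """Flag outliers in a new column instead of dropping rows."""
    out = df.copy()
    mean, std = out[col].mean(), out[col].std()
    out["is_outlier"] = (out[col] - mean).abs() > z * std
    return out  # any filtering comes later, and only after the tags are reviewed
```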

4. Anchor Every Metric to Context

No metric exists in isolation. A traffic spike means nothing without knowing campaign timing or channel. Always contextualize numeric change with an event, intervention, or external trigger.

5. Prioritize Fidelity Over Complexity

Favor accuracy, reproducibility, and clear logic paths over advanced modeling. Simpler models that maintain interpretability outperform opaque models that require guesswork.

6. Show Your Work in Full Chain

Every analytic step—import, filter, transform, aggregate—must be visible and documented. Transparency enables error tracing and team reproducibility. Hidden cleaning steps destroy reliability.

7. Use Comparative Baselines, Not Absolute Scores

Trends reveal truth better than static metrics. You rely on deltas—week-over-week, year-over-year, pre/post-campaign—to measure impact instead of raw counts.
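
In pandas terms, a sketch computing week-over-week deltas on an invented daily-visits series (indexed by date) rather than reporting raw counts:

```python
import pandas as pd

def weekly_deltas(daily_visits: pd.Series) -> pd.DataFrame:
    """Resample a date-indexed series to weeks; report percent change, not absolutes."""
    weekly = daily_visits.resample("W").sum()
    return pd.DataFrame({
        "visits": weekly,
        "wow_pct_change": weekly.pct_change() * 100,  # the week-over-week delta
    })
```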

8. Treat Visualization as Proof, Not Decoration

Charts are arguments. Each should answer one question, with labeled axes, clear scales, and logical ordering. Style never substitutes for explanatory precision.

9. Correlate Quantitative and Qualitative Sources

Integrate numbers with field reports, campaign notes, or organizer feedback. Data must tie to human reality—your analytic discipline connects metrics to ground-level behavior.

10. Close Every Analysis With an Action Hypothesis

End by stating the operational decision implied by the data. “What should change?” is your standard output, not “what did we find?” The value of analysis is its next step.

These ten principles describe how you consistently use data: as bounded evidence for operational reasoning, not as an aesthetic artifact or exploratory playground.

Digital strategy is just one way to use AI

Using AI for comms is just one way I can help

I can build interactive information portals for members, supporters, voters and residents

If your team answers the same set of questions a lot, I can show you a new, automated, and highly accurate way to answer them

I can teach your team to use ChatGPT as a set of tools

The best reasons to use new technology lie in solving the daily problems facing your team. I can help you find organic uses for ChatGPT and AI.

I can create widgets and appliances for user-education and support

I can help you build a significantly better owner's manual, support page, or how-to guide. No PDFs; it's built for digital.

AN AFFORDABLE ALL-IN-ONE SOLUTION

Ready for an AI-powered digital strategy?

COMING SOON:

HOW YOUR NON-PROFIT CAN IMPLEMENT ARTIFICIAL INTELLIGENCE WITHOUT LOSING ITS SOUL

Based on my upcoming training at Netroots Nation '25 in August, it's a method that any non-profit can use to take advantage of force-multiplying technology in a responsible, human-first way.