
I was lucky enough to hold a lunchtime AMA on AI at a local agency. I wasn't there to preach about AI or to condemn it, but to start a discussion on how best to use it in an advertising/marketing/PR setting. Which we did.
Along the way, someone asked me to provide my tips on using customGPTs. Of course I agreed.
When it was time to write, I decided to try something better than a few steps, links and paragraphs.
As I sat there, I thought "I've built all kinds of things with this account, and asked all kinds of questions on customGPTs, LLMs, building best practices, testing, etc. What happens if I ask the AI machine to give me advice based on my interactions with it?"
As you may or may not know, you can ask the machine you use most to analyze your behavior, your patterns, YOU. All you have to do is ask. "What are your observations about me?" is one good way to start that conversation.
In this specific case, asking meant facing up to what I tell people are my best practices. Asking AI for advice based on your observed actions isn't just a little weird; it raises the question of whether I actually practice what I preach. I think I do; I follow all of the best practices. But there's nothing like a third-party review to spark a little apprehension.
So, egged on by Rose Royce and their hit "Put Your Money Where Your Mouth Is," below is what my personal and work AIs said my tips would be, based on their experience with me. Enjoy!
Constrain the model to a verified dataset. Do not let it extrapolate. Load your own data or connect APIs. Treat the LLM as an inference layer, not a knowledge layer.
Require the model to show its work. Force transparency on how data is used, why it was selected, and how it supports the claim. This reduces hallucination and increases auditability.
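Here's a minimal sketch, in Python, of what those first two tips can look like in practice. Everything in it is invented for illustration (the facts, the IDs, the rules), and build_messages() only assembles the prompt; wire it to whatever chat call you actually use.

```python
# A sketch only: the facts, IDs, and rules below are invented for illustration.
VERIFIED_FACTS = [
    {"id": "F1", "text": "Membership renewals rose 12% after the March campaign."},
    {"id": "F2", "text": "Event attendance data is refreshed nightly from the CRM."},
]

SYSTEM_RULES = (
    "Answer ONLY from the numbered facts provided. "
    "If the facts do not cover the question, say so. "
    "For every claim, name the fact ID that supports it and explain why it applies."
)

def build_messages(question: str) -> list[dict]:
    """Assemble a prompt that carries the verified facts and the transparency rules."""
    fact_block = "\n".join(f"[{f['id']}] {f['text']}" for f in VERIFIED_FACTS)
    return [
        {"role": "system", "content": SYSTEM_RULES},
        {"role": "user", "content": f"Facts:\n{fact_block}\n\nQuestion: {question}"},
    ]

for message in build_messages("What happened to renewals this spring?"):
    print(message["role"].upper(), message["content"], sep="\n", end="\n\n")
```

The point is that the model only ever sees what you loaded, and every claim has to point back at a fact ID.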
Favor structured outputs (tables, JSON, tagged markdown) instead of chat replies. This aligns with your spreadsheet and dashboard workflow preferences and enables downstream analysis.
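A quick sketch of that, assuming you've asked the model to reply in JSON with an agreed set of fields (the field names here are examples):

```python
import json

# The required fields are an assumption for this example; match them to your sheet.
REQUIRED_KEYS = {"metric", "period", "value", "source_id"}

def parse_structured_reply(raw_reply: str) -> dict:
    """Reject anything that is not valid JSON with the agreed-upon fields."""
    data = json.loads(raw_reply)  # raises ValueError if the reply is chat, not JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Reply missing fields: {sorted(missing)}")
    return data

reply = '{"metric": "renewals", "period": "2025-03", "value": 0.12, "source_id": "F1"}'
print(parse_structured_reply(reply))
```

If the reply doesn't parse, or a field is missing, it never reaches the spreadsheet.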
Strip conversational softeners and “user satisfaction” phrasing. Reward precision, sourcing, and cognitive restoration over tone.
Train your customGPTs to function as domain-specific instruments—e.g., Membership Concierge—not as assistants. Each should serve one repeatable task pipeline.
Focus on reliable recall of ground-truth data. Use embedding filters, metadata tagging, and relevance thresholds. Retrieval datasets should mirror live operational states (e.g., engagement tables, event data).
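A toy version of that retrieval filter, with three-dimensional stand-in vectors instead of real embeddings and made-up metadata tags:

```python
import numpy as np

# Stand-in records: in practice the vectors come from your embedding model.
RECORDS = [
    {"text": "Q2 engagement table", "tags": {"engagement"}, "vec": np.array([0.9, 0.1, 0.0])},
    {"text": "2024 gala event data", "tags": {"events"},     "vec": np.array([0.1, 0.9, 0.0])},
    {"text": "Old draft notes",      "tags": {"archive"},    "vec": np.array([0.3, 0.3, 0.3])},
]

def retrieve(query_vec, allowed_tags, min_score=0.8):
    """Return records that match a metadata tag AND clear the similarity threshold."""
    hits = []
    for rec in RECORDS:
        if not (rec["tags"] & allowed_tags):
            continue  # metadata filter first
        score = float(np.dot(query_vec, rec["vec"]) /
                      (np.linalg.norm(query_vec) * np.linalg.norm(rec["vec"])))
        if score >= min_score:  # relevance threshold
            hits.append((score, rec["text"]))
    return sorted(hits, reverse=True)

print(retrieve(np.array([1.0, 0.0, 0.0]), allowed_tags={"engagement"}))
```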
Log all interactions, source citations, and reasoning steps. Use these logs to retrain or refine your datasets rather than prompting changes in the model’s weights.
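A bare-bones sketch of that logging habit; the file name and fields are just examples:

```python
import json
import datetime

def log_interaction(prompt, reply, sources, reasoning, path="interaction_log.jsonl"):
    """Append one exchange to a JSON Lines file for later dataset refinement."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "reply": reply,
        "sources": sources,      # which records the answer cited
        "reasoning": reasoning,  # the model's stated steps
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_interaction("What changed in Q2?", "Renewals rose 12% [F1].", ["F1"],
                "Matched the question's period to fact F1's campaign window.")
```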
Evaluate the model on accuracy of reference and process adherence. A correct, unpolished answer is higher value than a fluent but fabricated one.
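One way to make that concrete is a tiny scorecard that checks citations and process separately. The test cases below are invented:

```python
# Assumed conventions for this sketch: answers must cite allowed source IDs and
# include explicit "cite" and "explain" steps.
ALLOWED_SOURCES = {"F1", "F2"}
REQUIRED_STEPS = ["cite", "explain"]

test_cases = [
    {"answer": "Renewals rose 12%. cite: F1. explain: the campaign window matches.",
     "cited": {"F1"}},
    {"answer": "Revenue doubled, trust me.", "cited": set()},
]

def score(case):
    """Did the answer cite only allowed sources, and did it follow the steps?"""
    citations_ok = bool(case["cited"]) and case["cited"] <= ALLOWED_SOURCES
    process_ok = all(step in case["answer"] for step in REQUIRED_STEPS)
    return {"citations_ok": citations_ok, "process_ok": process_ok}

for case in test_cases:
    print(score(case))
```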
Design prompts for each cognitive level separately: recall, synthesis, judgment. Prevent the model from skipping tiers. This enforces analytic discipline and isolates logic faults.
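A sketch of keeping the three tiers as separate calls so the model can't jump straight to a verdict. ask_model() here is a stand-in for whatever chat call you actually use:

```python
def ask_model(prompt: str) -> str:
    """Placeholder so the sketch runs without an API; swap in your real call."""
    return f"[model reply to: {prompt[:60]}...]"

def three_tier(question: str, facts: str) -> dict:
    # Tier 1: recall only the relevant facts.
    recall = ask_model(f"List ONLY the facts relevant to: {question}\n{facts}")
    # Tier 2: synthesize findings, no recommendations yet.
    synthesis = ask_model(f"Combine these facts into findings. Do not recommend anything yet:\n{recall}")
    # Tier 3: judgment, grounded only in the synthesized findings.
    judgment = ask_model(f"Given only these findings, what should change and why?\n{synthesis}")
    return {"recall": recall, "synthesis": synthesis, "judgment": judgment}

print(three_tier("Did the spring campaign work?", "[F1] Renewals rose 12% in March."))
```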
Use every iteration to extract reusable logic or automation. The goal is self-sufficiency—eventually replacing the LLM with deterministic workflows or scripts where possible.
Each tip aligns with your operational discipline: bounded data, transparent inference, structured thinking, and independence from conversational AI habits.
Clarify what question the dataset exists to answer. You never start by “exploring”; you start by constraining. Identify the dependent variable, the time frame, and the operational scope before opening the file.
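That can be as simple as writing the constraints down as a spec before opening the file. The field names and guardrail below are illustrative, not a template:

```python
# Invented example spec: pin down the question, the dependent variable,
# the time frame, and the operational scope before loading anything.
ANALYSIS_SPEC = {
    "question": "Did the March email campaign lift membership renewals?",
    "dependent_variable": "weekly_renewals",
    "time_frame": ("2025-01-01", "2025-06-30"),
    "operational_scope": "email channel only, US chapters",
}

def in_scope(column_names, data_start, data_end, spec=ANALYSIS_SPEC):
    """Cheap guardrail: refuse to proceed if the data can't answer the question."""
    return (spec["dependent_variable"] in column_names
            and data_start <= spec["time_frame"][0]
            and data_end >= spec["time_frame"][1])

print(in_scope({"week", "weekly_renewals", "channel"}, "2024-12-01", "2025-07-15"))
```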
All data is conditional on its collection method. Your first step is a bias audit: who gathered it, how, and what was excluded. You assume data is partial until validated.
Use distribution plots and descriptive stats to isolate variance sources. If outliers track with known operational anomalies (e.g., campaign spend spikes), tag them—not delete them. Pattern identification precedes filtering.
No metric exists in isolation. A traffic spike means nothing without knowing campaign timing or channel. Always contextualize numeric change with an event, intervention, or external trigger.
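Putting the last two tips together, here's a small pandas sketch that flags outliers with a basic IQR rule and tags the ones that line up with known operational events, rather than dropping them. The numbers are invented:

```python
import pandas as pd

df = pd.DataFrame({
    "week": ["W1", "W2", "W3", "W4", "W5"],
    "sessions": [1200, 1150, 1300, 4800, 1250],
})
known_anomalies = {"W4": "paid campaign spend spike"}  # from campaign notes

# Flag outliers with a simple IQR rule; keep the rows, don't delete them.
q1, q3 = df["sessions"].quantile([0.25, 0.75])
iqr = q3 - q1
df["outlier"] = (df["sessions"] < q1 - 1.5 * iqr) | (df["sessions"] > q3 + 1.5 * iqr)

# Attach the operational context so the spike reads as an event, not noise.
df["anomaly_note"] = df["week"].map(known_anomalies).fillna("")

print(df)                          # W4 stays in the table, flagged and explained
print(df["sessions"].describe())   # descriptive stats alongside the flags
```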
Favor accuracy, reproducibility, and clear logic paths over advanced modeling. Simpler models that maintain interpretability outperform opaque models that require guesswork.
Every analytic step—import, filter, transform, aggregate—must be visible and documented. Transparency enables error tracing and team reproducibility. Hidden cleaning steps destroy reliability.
Trends reveal truth better than static metrics. You rely on deltas—week-over-week, year-over-year, pre/post-campaign—to measure impact instead of raw counts.
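A short pandas sketch of that, with invented numbers: week-over-week change plus a pre/post-campaign comparison.

```python
import pandas as pd

weekly = pd.DataFrame({
    "week": pd.date_range("2025-03-03", periods=6, freq="W-MON"),
    "renewals": [80, 82, 79, 110, 115, 112],
})
weekly["wow_change"] = weekly["renewals"].pct_change()  # week-over-week delta

campaign_start = pd.Timestamp("2025-03-24")             # illustrative date
pre = weekly.loc[weekly["week"] < campaign_start, "renewals"].mean()
post = weekly.loc[weekly["week"] >= campaign_start, "renewals"].mean()

print(weekly)
print(f"pre-campaign avg: {pre:.1f}, post-campaign avg: {post:.1f}, delta: {post - pre:+.1f}")
```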
Charts are arguments. Each should answer one question, with labeled axes, clear scales, and logical ordering. Style never substitutes for explanatory precision.
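For example, a chart built to answer exactly one question, with labeled axes and nothing decorative (matplotlib, invented data):

```python
import pandas as pd
import matplotlib.pyplot as plt

weeks = pd.date_range("2025-03-03", periods=6, freq="W-MON")
renewals = [80, 82, 79, 110, 115, 112]
campaign_start = pd.Timestamp("2025-03-24")

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(weeks, renewals, marker="o")
ax.axvline(campaign_start, linestyle="--", label="campaign start")
ax.set_title("Did weekly renewals move after the March campaign?")
ax.set_xlabel("Week")
ax.set_ylabel("Renewals (count)")
ax.legend()
fig.tight_layout()
fig.savefig("renewals_trend.png")  # or plt.show()
```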
Integrate numbers with field reports, campaign notes, or organizer feedback. Data must tie to human reality—your analytic discipline connects metrics to ground-level behavior.
End by stating the operational decision implied by the data. “What should change?” is your standard output, not “what did we find?” The value of analysis is its next step.
These ten principles describe how you consistently use data: as bounded evidence for operational reasoning, not as an aesthetic artifact or exploratory playground.
Using AI for comms is just one way I can help
If you answer the same set of questions a lot, I can show you a new, automated, and highly accurate way to answer them.
The best reasons to use new technology lie in solving the daily problems facing your team. I can help you find organic uses for ChatGPT and AI.
I can help you build a significantly better owner's manual, support page, or how-to guide. No PDFs; it's built for digital.
AN AFFORDABLE ALL-IN-ONE SOLUTION
COMING SOON:
HOW YOUR NON-PROFIT CAN IMPLEMENT ARTIFICIAL INTELLIGENCE WITHOUT LOSING ITS SOUL
Based on my upcoming training at Netroots Nation '25 in August, it's a method any non-profit can use to take advantage of force-multiplying technology in a responsible, human-first way.