
How I Accidentally Walked into the Perfect Solution to Deepfakes

How my weird brain works, and how it might help you approach problem-solving with LLMs.


Part 1 — The Accidental Lawmaker

This is the first in a four-part series on the thinking behind the 48-Hour Deepfake Takedown Act — a framework built not in a law school or a legislative office, but in the middle of a vortex of curiosity that started with Denmark. This wasn't a revelation; it was an example of the connections you can make when you think through solutions using AI tools. The machine didn't think of this. I did. Together, the machine and I added depth and worked through problems, saving time along the way.


I didn't set out to write a law.

I was doing what I usually do when something bothers me — reading, following threads, asking questions nobody around me was asking yet. The something that was bothering me was deepfakes. Not the sensational version of the problem, the celebrity scandals and the viral outrage. The quieter version. The one where ordinary people — a teacher, a candidate for local office, a nonprofit director — wake up to a version of themselves they never made, saying things they never said, and have absolutely nowhere to go.

That bothered me more than I could explain at the time. It still does.

So I kept reading.

The Denmark thread started the way most good ideas do — sideways.

I came across a piece about how Denmark had changed its copyright law to give every citizen in the country something called NIL status. Name, image, and likeness. The same framework American college athletes had been fighting for. Denmark just gave it to everyone, by law, and then extended it to cover voice — NILV. Your voice, your face, your likeness. Yours. Not the platform's. Not the model's. Yours.

I filed that away and went looking for what Michigan had on the same question.

What I found surprised me. Michigan already had NIL protection in the common law. Not a statute. Not something the legislature had passed after years of advocacy. It was already there, sitting in the legal tradition, waiting to be modernized.

That was the first moment I thought: we might actually have something to work with here.

The problem with most deepfake legislation — and there is more of it every session, in more states, with more urgency — is that it focuses on the wrong end of the pipeline.

Most of it asks: how do we stop deepfakes from being made?

That is the wrong question. Not because it is unimportant, but because it is unanswerable at scale. You cannot regulate creation without either missing most of it or regulating so broadly you catch things you never intended. The platforms are global. The tools are cheap. The bad actors are diffuse. Trying to stop creation is like trying to stop water with a screen door.

The right question — the one I kept coming back to — is different.

What has to happen in the first 48 hours after a deepfake goes live?

That question has a tractable answer. It has leverage points. It has a pipeline you can actually regulate. And it puts the obligation where it belongs — not on the victim to prove something impossible, but on the platforms and the generators who profited from building the infrastructure that made the harm possible.

The administrative court idea did not come from a legal textbook. It came from frustration.

I had been thinking through APIs and image identification algorithms, trying to imagine how a takedown system would actually work in practice. Who verifies the claim? Who decides it's a match? How do you stop BigAI from doing what BigAI always does when faced with an inconvenient obligation — slow pay, no pay, drag it out until the victim gives up or runs out of money?

Every version I worked through had the same problem. The moment you put humans in the middle of the verification process, you create delay. Delay is the strategy. Delay is how you win if you are a company with more lawyers than the victim has years.

And then it hit me. What if the computers talk to each other?

Not humans filing paperwork and waiting for a hearing date. Not a lawsuit that takes three years and costs more than the victim will ever see. An electronic administrative court that platforms are bound by state law to obey once the law passes, where the verification is algorithmic, the timeline is fixed, and the human element comes in at the beginning and the end. Humans see the evidence going in and coming out. A human can appeal, though the appeal needs better evidence to prevail. But the middle — the matching, the verification, the decision — that runs without the delay that human process invites.
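To make that shape concrete, here is a minimal sketch of the human-in, machine-middle, human-out flow. Everything in it (the stage names, the claim fields, the 0.95 threshold) is my own illustration for this post, not anything drawn from a draft bill.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto

class Stage(Enum):
    FILED = auto()      # human: victim submits the claim and evidence
    VERIFYING = auto()  # machine: algorithmic match against the registry
    ORDERED = auto()    # machine: takedown order issued, clock running
    CLOSED = auto()     # human: outcome reviewed; appeal still possible

@dataclass
class Claim:
    claimant_id: str
    content_url: str
    filed_at: datetime
    stage: Stage = Stage.FILED

def advance(claim: Claim, match_score: float, threshold: float = 0.95) -> Claim:
    """The middle of the pipeline: no hearing date, no human queue.

    A match at or above the threshold moves straight to a takedown
    order; anything else closes the claim and hands it back to humans.
    """
    if claim.stage is Stage.FILED:
        claim.stage = Stage.VERIFYING
    if claim.stage is Stage.VERIFYING:
        claim.stage = Stage.ORDERED if match_score >= threshold else Stage.CLOSED
    return claim
```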

Keeping the scope narrow was not a compromise. It was the insight. The narrower the scope, the harder it is to game. The more you try to do everything, the more surface area you give the opposition to work with.

What happens to the victim while the process runs? If the answer is "they wait," the system fails. Waiting is losing when you are a private citizen and a version of you is spreading across the internet in real time.

But relief is not automatic. It is not enough to claim harm. The system has to first confirm that what you are reporting is actually a deepfake — that the image or video in question matches a registered original and was generated without consent. That verification is what the electronic court is for.
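What might "confirm a match" look like in code? The framework does not name an algorithm, so treat this as one plausible candidate among several: face-embedding comparison using the open-source face_recognition library. The file paths are placeholders, and the 0.6 tolerance is simply that library's conventional default, not a legal standard.

```python
import face_recognition

def likeness_matches(reported_path: str, registered_path: str,
                     tolerance: float = 0.6) -> bool:
    """Does a face in the reported media match the registered likeness?"""
    registered = face_recognition.load_image_file(registered_path)
    reported = face_recognition.load_image_file(reported_path)
    registered_encodings = face_recognition.face_encodings(registered)
    reported_encodings = face_recognition.face_encodings(reported)
    if not registered_encodings or not reported_encodings:
        return False  # no face found in one of the images
    # Distance between 128-dimensional face embeddings; smaller is closer.
    distances = face_recognition.face_distance(
        registered_encodings, reported_encodings[0])
    return bool(min(distances) <= tolerance)
```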

Once it confirms a match, the clock starts. The platform has 48 hours to take the content down and stop future generation. They have the evidence. They have the match. They have everything they need to act. There is no ambiguity to hide behind, no discovery process to drag out, no procedural delay available to them. Just a deadline and a decision.

If they miss it, they pay. Not a nuisance fee. Not heaven and earth. Not a cost-of-doing-business fine that gets folded into a quarterly report. Real money for real damages, drawn from an escrow fund required by law and fully funded by the tech companies as a condition of operating in the state.

The fund exists because victims cannot wait for a judgment that takes years to collect. But the trigger is not the claim — it is the failure to act after verification. You had the evidence. You had the time. You chose not to move. That choice has a price. The sequence matters. Claim. Verify. Remedy. In that order. The speed of the system is what protects victims — not the looseness of the standard.
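The deadline logic is deliberately simple enough to sketch in a few lines, which is part of the point. The only number below that comes from the framework is the 48 hours; the function name and fields are hypothetical.

```python
from datetime import datetime, timedelta

TAKEDOWN_WINDOW = timedelta(hours=48)

def escrow_payout_due(verified_at: datetime,
                      taken_down_at: datetime | None,
                      now: datetime) -> bool:
    """The trigger is not the claim; it is inaction after verification.

    The clock starts when the court confirms the match. A takedown
    inside the window means no payout. A late takedown, or no takedown
    once the window lapses, pays the victim from the escrow fund.
    """
    deadline = verified_at + TAKEDOWN_WINDOW
    if taken_down_at is not None:
        return taken_down_at > deadline  # late takedown still pays
    return now > deadline                # no takedown and time is up
```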

I want to be honest about something. I am not a lawyer. I am not a legislator. I did not build this framework in a law school clinic or a policy shop with a staff of researchers.

I built it the way most useful things get built — by following a question further than was probably reasonable, reading everything I could find, thinking out loud with AI tools that helped me pressure-test the logic, and refusing to stop at the point where it got complicated.

What I ended up with is a framework that uses Michigan's existing legal foundation, learns from what Denmark figured out about voice and likeness protection, and builds something new in the middle — an electronic administrative court that keeps the scope narrow, the timeline fixed, and the power where it belongs.

Is it perfect? No. Is it gameable? Everything is gameable. The question is whether the cost of gaming it exceeds the cost of complying. That is what Parts 2, 3, and 4 of this series are about.

The first question I want this community to sit with — the one that will open our first moderated thread in What's Your Big Idea? — is this:

How can we use AI to fight the excesses of AI?

Not as a rhetorical provocation. As a genuine design question. The framework I built uses algorithmic matching to verify claims, electronic filing to prevent delay, and automated logs to create an evidentiary record that humans never could. It fights the tool with the tool.
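One way to build that automated record, sketched under my own assumptions rather than any spec, is a hash-chained log: each entry commits to the one before it, so altering any earlier entry breaks every link after it and the tampering shows.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_log_entry(log: list[dict], event: str, detail: dict) -> dict:
    """Append a tamper-evident entry; each record hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "detail": detail,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

# Every machine step leaves a record no human clerk could produce at
# this speed or consistency. The event names here are illustrative.
log: list[dict] = []
append_log_entry(log, "claim_filed", {"claim_id": "C-001"})
append_log_entry(log, "match_verified", {"score": 0.97})
append_log_entry(log, "takedown_ordered", {"deadline_hours": 48})
```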

There are other ways to do that. There are versions of this idea that apply to fields you know better than I do — healthcare, education, housing, civic engagement. There are applications we have not thought of yet.

That is what the next two weeks are for.

Part 2 goes deeper into the architecture — the registry, the tribunal, the escrow fund, and why the 48-hour window is the number that makes everything else possible.

If you are not already in the room, this is a good time to walk through the door.

What's Your Big Idea? is a free community for nonprofit and civic people figuring out AI together. Entry works differently here than in most communities.
