  1. Claim: What exactly is being asserted?
  2. Incentives: Who benefits if you believe it?
  3. Evidence: What's primary, what's derivative?
  4. Pressure Test: What would falsify it?
  5. Translation: The clean explanation in plain language.

Step 1: Claim

What exactly is being asserted?

This sounds simple, but it's where most analysis fails before it starts. Headlines imply things they don't state. Tweets compress information in ways that change meaning. Press releases use language designed to sound like claims while committing to nothing.

Before I can evaluate a claim, I need to nail down what the claim actually is. Not what it sounds like, not what you assume it means—the literal assertion being made.

Questions I ask:

  • What is being stated vs. implied?
  • Is this a claim about fact, prediction, or value?
  • What scope is actually covered? (All? Some? This case?)
  • What would make this claim true vs. false?

Step 2: Incentives

Who benefits if you believe this claim?

This isn't about assuming bad faith. It's about understanding the landscape. Everyone has a position. That position shapes what they notice, what they emphasize, and what they leave out.

A pharmaceutical company has an incentive to emphasize positive trial results. A politician has an incentive to characterize opponents unfavorably. A journalist has an incentive to make stories dramatic. None of this means they're lying—it means their perspective is shaped by their position.

Questions I ask:

  • Who made this claim originally?
  • What do they gain if the claim is believed?
  • What do they lose if the claim is disbelieved?
  • Who is repeating the claim, and why?
  • What would change if the opposite were true?

Step 3: Evidence

What's primary, what's derivative?

Most of what you read is not primary reporting. It's reporters summarizing other reporters summarizing other reporters, all the way down until you hit someone who actually talked to a primary source—if anyone did.

Primary sources include original documents, raw data, firsthand accounts, peer-reviewed research, and official records. Derivative sources include news articles, summaries, interpretations, and "studies show" claims without a linked study.

Questions I ask:

  • Where did this claim originate?
  • Is the primary source accessible?
  • How many layers of interpretation exist between me and the source?
  • What does the primary source actually say vs. what the summary claims it says?
  • What methodology was used? What were its limitations?

Step 4: Pressure Test

What would falsify this claim?

This is the question that separates empirical claims from faith positions. If I ask "what evidence would convince you that you're wrong?" and the answer is "nothing," then we're not dealing with a testable claim.

Unfalsifiable claims aren't necessarily wrong—they might be value statements, definitions, or articles of faith. But we should know when we're in that territory, because the rules are different.

Questions I ask:

  • What evidence would prove this claim false?
  • Has anyone looked for that evidence?
  • What would change the claimant's mind?
  • Are there alternative explanations for the evidence cited?
  • What's the strongest version of the opposing argument?

Step 5: Translation

The clean explanation in plain language.

After all that analysis, what do we actually know? What don't we know? How confident should we be?

Translation isn't dumbing down; it's stripping away the jargon that obscures more than it clarifies. If I can't explain something in plain language, I probably don't understand it well enough.

What I deliver:

  • What's verified (high confidence, strong evidence)
  • What's inferred (reasonable conclusion, incomplete evidence)
  • What's uncertain (could go either way)
  • What's performative (language designed to persuade rather than inform)

Why this avoids hot takes

Hot takes are fast. This method is slow. Hot takes feel good. This method feels like work.

But hot takes are almost always wrong in ways that matter. They oversimplify, they assume bad faith, they substitute confidence for accuracy, and they optimize for engagement rather than truth.

The method is designed to produce fewer conclusions with higher reliability. That's less exciting but more useful if you actually need to make decisions based on what's true.

How I label uncertainty

Every claim gets a confidence label:

  • High confidence: Multiple independent primary sources, consistent evidence, no serious counterarguments
  • Medium confidence: Some primary sources, reasonable inference, minor counterarguments or gaps
  • Low confidence: Limited or derivative sources, significant uncertainty, serious counterarguments exist

I'm explicit about which category each claim falls into. If I'm guessing, I'll say I'm guessing.

What counts as a source

Not all sources are equal. Here's my hierarchy:

  1. Primary documents: Original research, raw data, official records, firsthand accounts
  2. Peer-reviewed research: Academic papers that survived expert scrutiny (still flawed, but better than press releases)
  3. Quality journalism: Reporters with track records, named sources, clear methodology
  4. Expert opinion: People with demonstrated expertise in the relevant domain
  5. Derivative reporting: News that summarizes other news (useful for awareness, not for confidence)

I try to trace everything back to the highest-quality source available.

What I refuse to do

  • Claim certainty I don't have. If the evidence is thin, I'll say so.
  • Assume bad faith without evidence. Incentives explain behavior; they don't prove malice.
  • "Both-sides" things that aren't balanced. If the evidence strongly favors one side, I'll say so.
  • Optimize for engagement. I'd rather be boring and right than viral and wrong.
  • Pretend I'm neutral. I have positions. I try to be transparent about them.

See it in action

Watch how the method applies to real claims.