Algorithmic bias
Systematic errors in automated systems that produce unfair outcomes, usually because the training data reflects historical human biases or because the optimization target doesn't align with fairness.
Common misuse: Treating bias as a bug to be fixed rather than an inherent property of optimization. All models have biases—the question is whether those biases produce unacceptable outcomes.
See: EP. 014 — The AI Hiring Tools No One Audits
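A toy sketch of how bias enters through training data (the dataset, groups, and numbers here are invented for illustration, not drawn from the episode): a "model" that simply learns historical hire rates reproduces whatever skew those records encode.

```python
from collections import defaultdict

# Hypothetical historical records: (group, hired). The skew below is the
# bias baked into the training data, not a property of any applicant.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def fit_hire_rates(records):
    """Learn P(hired | group) from historical outcomes."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: h / n for g, (h, n) in counts.items()}

rates = fit_hire_rates(history)
# Otherwise-identical applicants now get scored differently by group alone:
print(rates)  # {'A': 0.8, 'B': 0.3}
```

Nothing here is a "bug" in the code; the optimization faithfully fit its data. That's the glossary's point: the bias is a property of what was optimized, not a defect to patch out.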
Confidence (as used here)
A calibrated assessment of how likely a claim is to be true given available evidence. High confidence = multiple independent sources, consistent evidence. Low confidence = limited sources, significant uncertainty.
Common misuse: Conflating confidence (epistemic state) with certainty (claiming to know). Confidence is probabilistic; certainty is binary.
See: The Method
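The definition above can be read as a rough mapping from evidence to a confidence label. This sketch is my own illustration of that mapping; the thresholds and labels are assumptions, not the site's actual rubric.

```python
def assess_confidence(independent_sources: int, evidence_consistent: bool) -> str:
    """Coarse, probabilistic confidence label. Note there is no
    'certain' output: confidence is graded, never binary."""
    if independent_sources >= 3 and evidence_consistent:
        return "high"
    if independent_sources >= 2:
        return "medium"
    return "low"

print(assess_confidence(4, True))   # high
print(assess_confidence(1, True))   # low
```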
Derivative reporting
News coverage that summarizes other news coverage rather than going to primary sources. When multiple outlets cite each other, you can get an illusion of independent confirmation that doesn't exist.
Common misuse: Treating "multiple sources report" as confirmation when all sources trace back to the same original report.
See: EP. 001 — Why Second Opinion Exists
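The "illusion of independent confirmation" can be made concrete by tracing each report back through its citations. The citation map below is invented for the example: three outlets appear to confirm a story, but every chain terminates at the same original report.

```python
# Who each source is summarizing (None = an original, primary report).
cites = {
    "outlet_a": "wire_report",   # summarizes the wire story
    "outlet_b": "outlet_a",      # summarizes outlet A's summary
    "outlet_c": "outlet_b",      # summarizes the summary of the summary
    "wire_report": None,         # the one primary report
}

def trace_origin(source: str) -> str:
    """Follow the citation chain until it reaches an original report."""
    while cites[source] is not None:
        source = cites[source]
    return source

origins = {trace_origin(s) for s in cites}
# Four "sources", but only one distinct origin:
print(len(origins))  # 1
```

The number that matters for confirmation is `len(origins)`, not the number of outlets repeating the claim.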
Falsifiability
The property of a claim that makes it possible to prove false. If no conceivable evidence could disprove a claim, it's not a testable empirical claim—it might be a belief, value, or definition instead.
Common misuse: Treating unfalsifiable claims as especially strong ("nothing can disprove it!") when unfalsifiability actually means the claim isn't making an empirical assertion.
See: The Method — Step 4: Pressure Test
Goodhart's Law
"When a measure becomes a target, it ceases to be a good measure." People optimize for metrics, and optimization often produces gaming rather than genuine improvement.
Common misuse: Using this to dismiss all measurement. The point isn't that measurement is bad—it's that targets create incentives to manipulate the measurement itself.
See: Management
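A toy simulation of the gaming dynamic (the scenario and numbers are my own illustration): suppose "tickets closed" starts as a measure of real fixes, then becomes a target. Effort shifts from fixing to closing, so the metric holds steady while the quality it proxied collapses.

```python
def work(effort_fix: int, effort_game: int):
    true_quality = effort_fix          # only real fixes help users
    metric = effort_fix + effort_game  # the measured number counts both
    return true_quality, metric

before = work(effort_fix=10, effort_game=0)  # a measure, not yet a target
after = work(effort_fix=3, effort_game=7)    # same total effort, now a target

print(before)  # (10, 10): metric tracks quality
print(after)   # (3, 10): metric unchanged, quality collapsed
```

The measurement itself was fine in both runs; the target created the incentive to manipulate it, which is the law's actual claim.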
Incentive tracing
The practice of asking "who benefits if this claim is believed?" Not to assume bad faith, but to understand how positions shape what gets noticed, emphasized, or omitted.
Common misuse: Treating incentive analysis as proof of deception. Having an incentive to believe something doesn't make it false—it just means you should seek independent verification.
See: How to Think
Primary source
Original material that hasn't been filtered through interpretation: raw data, original documents, firsthand accounts, peer-reviewed research, official records. The thing that derivative sources are summarizing.
Common misuse: Calling any source you trust "primary." A newspaper article is not primary even if it's reliable—it's someone's interpretation of primary material.
See: The Method — Step 3: Evidence
Security theater
Security measures that provide the appearance of security without meaningfully reducing risk. Often implemented because they're visible and reassuring rather than because they're effective.
Common misuse: Dismissing all visible security as theater. Some visible measures are both symbolic AND effective. The question is whether they address actual threat models.
See: Security & Risk
Steelmanning
Presenting the strongest version of an opposing argument, not the weakest. The opposite of strawmanning. If you can't describe an opposing view in terms its proponents would accept, you don't understand it well enough to disagree with it.
Common misuse: Performatively steelmanning bad-faith arguments that don't deserve the effort. Some positions are just wrong, and treating them as worthy of steelmanning can be its own form of distortion.
See: EP. 001 — Why Second Opinion Exists
Validated (in AI/ML context)
Tested against real-world outcomes to verify that a model's predictions actually work as claimed. Not the same as "accurate on test data" or "passed internal review."
Common misuse: Accepting vendor claims of "validated" AI that only mean the model performed well on curated test sets, not in production deployment with diverse real-world inputs.
See: EP. 014 — The AI Hiring Tools No One Audits
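The gap between "accurate on test data" and "validated" is easy to show with an invented example: a classifier that latched onto a spurious token scores perfectly on the curated set it was checked against, then fails on production-style inputs where the shortcut breaks.

```python
def accuracy(model, data):
    """Fraction of (text, label) pairs the model gets right."""
    return sum(model(text) == label for text, label in data) / len(data)

# Hypothetical "model": a learned shortcut keyed on one token.
model = lambda text: "URGENT" in text

# Curated test set, where the shortcut happens to work every time:
curated = [("URGENT offer", True), ("team meeting", False),
           ("URGENT invoice", True), ("lunch plans", False)]
# Production-style inputs, where it fails every time:
production = [("limited time offer", True), ("URGENT: board notes", False),
              ("final notice invoice", True), ("URGENT team survey", False)]

print(accuracy(model, curated))     # 1.0, "validated" on the curated set
print(accuracy(model, production))  # 0.0, useless on real-world inputs
```

Only the second number speaks to validation in the glossary's sense: tested against real-world outcomes, not against the set the vendor chose.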