Fundamentally Different

Distribution

April 2026 · 11 min read

For most of the internet’s history, the limiting factor was supply. Making something, recording it, editing it, getting it out: all of it required time, skill, and money. The printing press required capital. Television required studios. Even early internet publishing required enough technical literacy to find a host and learn some HTML.

When making things was hard, the things that got made were the things worth distributing. The filter was upstream.

That filter is gone.

What changed

The marginal cost of content (the cost of producing one additional unit, after fixed costs are paid) has approached zero in a way that broke the economics that gave distribution its meaning.

The point is the quality floor, not the volume. A language model can now produce an essay that reads like a competent writer’s in seconds, and produce a thousand of them, and produce a credible news summary, a functional marketing email, and a reasonable explanation of a complicated concept in the same minute. When everything can be competent, competence stops being a signal.

This is the structural break. Distribution systems (editorial processes, platform algorithms, word of mouth) were all, in different ways, filters for quality. They worked against a background assumption that not everything was worth surfacing, and that the hard part was identifying what was.

When the supply of competent content becomes infinite, those filters change what they optimize for.

Where the bottleneck moved

It is tempting to say the new bottleneck is attention. Attention is finite; content is infinite; therefore attention is the constraint. This is incomplete.

Attention was always finite. The interesting question is how it gets allocated, and the mechanism of allocation has changed.

Before platforms, allocation was editorial. A magazine editor decided what its readers would see. An acquisition editor decided what books existed. The gatekeeper was human, with taste, incentives, and identifiable blind spots.

Platforms moved allocation to algorithms. Algorithms have no taste. They optimize for measurable proxies of engagement: clicks, watch time, shares, replies. For a decade, the proxy held reasonably well. Engagement correlated with enough of what people actually wanted that the system worked. The things that spread were often, if imperfectly, interesting.

The proxy is now breaking down under load.

When AI-generated content can optimize for engagement metrics better than humans (and it can, because it has been trained on the full distribution of what has historically performed well), the signal function of those metrics collapses. If the algorithm rewarded headlines that created urgency, AI can generate those headlines at scale. This is Goodhart’s law[2] at platform scale: the metric that was a proxy for something real becomes a target, and hitting the target severs its connection to the underlying thing.

“When a measure becomes a target, it ceases to be a good measure.”
Goodhart’s law, in the formulation Marilyn Strathern gave it in 1997

Attention is not the bottleneck. The bottleneck is trust. Specifically: prior trust in a specific person.

Trust is not belief. It is a shortcut that makes evaluation unnecessary. When you trust someone’s judgment, you do not have to verify their conclusions; their endorsement functions as a prior strong enough to skip the work of assessment. In a world where every piece of content is plausibly competent, that shortcut becomes the only economical way to read.

The shifting filter

[Figure: Editorial (pre-2005) filters for quality → Algorithmic (2005–2024) filters for engagement → Trust networks (now) filter for identity]
Each transition removed one filter and installed another, stricter one

The mechanism

Here is what actually happens when a piece of content gets read today.

Someone shares it, or recommends it, or cites it in something else they wrote. The reader receives it as an endorsement from someone whose judgment they have already validated, and the content carries the recommender’s credibility as a prior. The reader does not encounter the piece on its own terms; they encounter it through a pre-existing trust relationship.

Word of mouth always worked this way. What has changed is that the alternatives stopped working. When feeds, search, and editorial channels could distinguish quality from noise, trust networks were one mechanism among several. Once those alternatives collapse, trust is the only one left. The secondary mechanism becomes the primary mechanism by elimination.

Distribution used to be about content. Now it is about who you are before the content exists. Eugene Wei has called this dynamic status as a service: social platforms are utilities for accumulating, signaling, and trading status, and attention follows status more reliably than it follows quality.[4]

The internet was supposed to dissolve the gatekeepers. Publishers replaced by blogs. Newspapers replaced by Twitter. Each transition was described as democratization, and each was, in fact, re-centralization around a different bottleneck.

Publisher gatekeepers gave way to platform algorithms, which gave way to identity-based trust networks. Each step removed one filter and installed another. The current filter is: whose trust have you already earned?

This is a stricter filter than any of the previous ones, because trust accumulates slowly and does not transfer. A piece can spread within a trust network and be invisible outside it. You can have something true and carefully made, and if you have not built prior relationships, the mechanism to move it anywhere does not exist.

What the systems optimize for

Social platforms optimize for engagement. That was always imperfect, but it was directionally reasonable.

AI-powered discovery (agents doing research, retrieval-augmented generation, summarization systems) optimizes for something different: relevance to a specific query. An agent looking for the best explanation of a concept ignores how many likes an article received. It cares whether the article actually explains the concept correctly.

This is the first time, at scale, that distribution might reward correctness over engagement. Competent-but-generic content will be undifferentiated. Content that is precisely correct about something specific will have a structural advantage. For one narrow slice of distribution, expert-seeking queries, the incentive structure improves.

That slice is small. The majority of distribution still runs through social surfaces, inboxes, and feeds, all of which filter for identity over content.

What becomes defensible

If the mechanism is trust-based and trust accumulates slowly, what are the actual durable assets?

Platform audiences are rented. The platform changes its ranking and the asset evaporates. No platform audience is owned.

Viral reach is one-time. It does not compound. A piece that spreads widely rarely converts into an audience that returns; reach and relationship behave differently.

What compounds: direct relationships, consistent output over time, a recognizable position on something specific, and a reputation for being right about things. None of these are fast. All of them require doing the thing before building on it.

This produces an uncomfortable asymmetry. The people best positioned for trust-based distribution are the ones who were already trusted before the model shifted. Anyone trying to establish themselves now faces a higher barrier than someone who established themselves five years ago, and the barrier is rising as the signal-to-noise ratio in every channel declines.

Clay Shirky argued in 2008 that the internet removed the institutional gatekeepers.[1] He was right about the gatekeepers and wrong about what replaced them. The field did not flatten. A new hierarchy formed: organized around identity rather than institution, informal rather than formal, but a hierarchy all the same.

Why publish

The obvious implication of everything above is: don’t bother.

If distribution requires prior trust, and trust takes years, and the noise floor is rising, then publishing something on a site no one knows about is an act of optimism that the evidence does not obviously support.

That conclusion follows from the model. It is not obviously correct.

Publishing is, before anything else, a forcing function for thought. The constraint of the essay form (you have to finish a thought, follow it to its conclusion, commit to a position) produces a different quality of thinking than the conversation or the note or the abandoned draft. The essay exists whether or not anyone reads it. What it does to the thinking that produced it is independent of its reach.

There is also a version of distribution that has nothing to do with volume. The question is whether the right person sees it.

The right person for a given piece of writing is the one for whom it is relevant in a way that changes something. They were already looking for something like this, and they find it. The mechanism is a search, or a link in another piece, or a coincidence. For any single essay it is slow and low-probability. Over time, it accumulates.

This is a theory of trail-building, not of mass distribution. You put enough things in enough places that form a coherent shape, and the people who were looking for that shape eventually walk the trail. You are constructing rather than broadcasting.

The question that remains

If the bottleneck is trust, and trust takes years, then the real question shifts. It stops being “how does this piece reach people” and becomes “what kind of person am I building the credibility to be.”

That question has an answer. It just takes longer than publishing an essay.

There are two ways to build that identity, and they are easy to confuse.

The first is performed. You announce yourself. You frame your work as a personal narrative. You spend a measurable share of your time on the visibility of the work rather than on the work. This is the version most people mean when they say “personal brand,” and it is, on the evidence, effective. It compresses the timeline. It is also load-bearing on the performance: stop performing, and the identity erodes faster than the work can replenish it.

The second is accumulated. You do specific work on a specific thing, correctly, for long enough that the work itself becomes the identity. The reputation is a residue rather than a product. It is slower, harder to compress, and does not depend on the performance to persist. The work, if it is real, keeps being real after you stop talking about it.

Two kinds of identity work

Performed: the visibility of the work becomes the identity; erodes when the performance stops; fast to build, fast to lose.
Accumulated: the work itself becomes the identity, as residue; persists past the attention; slow to build, slow to lose.

Both produce trust. Both function as distribution in the new regime. The difference is what they leave behind when the attention moves on.

The performed version works. It is not the version I want to build.

The structural break this essay describes (filters shifting from quality to engagement to identity) is a constraint on everyone publishing now. The choice it leaves is not whether to play, but which identity to accumulate. McLuhan’s line, that the medium is the message,[5] applies here in a sharper form: when the medium has become identity, the message is whoever you have already become by the time the reader arrives.

Sources

  [1] Clay Shirky. Here Comes Everybody: The Power of Organizing Without Organizations. Penguin Press, 2008.
  [2] Marilyn Strathern. "'Improving ratings': audit in the British University system." European Review, 1997. (Source of the canonical formulation of Goodhart's law.)
  [3] Ben Thompson. "Aggregation Theory." Stratechery, July 21, 2015.
  [4] Eugene Wei. "Status as a Service (StaaS)." Remains of the Day, February 19, 2019.
  [5] Marshall McLuhan. Understanding Media: The Extensions of Man. McGraw-Hill, 1964.