You Can't Go Home Again

The anxiety is real. You hear it at conferences, in Slack channels, in the conversations that happen after the formal sessions end — between people who have built careers on the kind of thinking that AI tools are now doing faster, cheaper, and without complaining about the scope of the task. The worry isn't abstract. It's about mortgages and career trajectories and the skills a person spent a decade building and whether those skills still mean what they used to mean.

That anxiety deserves to be taken seriously. It also deserves a direct answer. Thomas Wolfe provided the most useful one in a novel published in 1940. And the series that follows this one will build something more concrete: a model for what the forward direction actually looks like, in numbers an organization can evaluate.

The Wolfe Insight

Near the end of You Can't Go Home Again, Wolfe writes:

"You can't go back home to your family, back home to your childhood... back home to the old forms and systems of things which once seemed everlasting but which are changing all the time."

The novel follows George Webber, a writer who returns to his hometown of Libya Hill after a long absence and finds it unrecognizable — not because the geography has changed, but because the assumptions everyone held about it no longer hold. The town has been gripped by speculative fever; everyone is getting rich on real estate, certain the boom will last forever. Webber can see what the others can't: the reality underneath the assumptions has shifted, and the assumptions are now floating free of anything solid.

He can't go home again — not because home is gone, but because the person he is doesn't fit the place he remembered, and the place has changed in ways that make the old ways of seeing it useless.

That is the precise situation facing knowledge workers today.

What "Home" Actually Was

The "home" that feels lost is a specific era of knowledge work — call it the pre-LLM model — in which certain kinds of cognitive labor were unambiguously human work and commanded stable economic value because of it.

In that model, the analyst who could synthesize complex data into a coherent report was valuable because synthesis was hard and slow and required trained judgment. The consultant who could produce a first-pass strategic assessment was valuable because producing it required accumulated expertise and time. The writer who could draft clear, structured communications was valuable because drafting required skill most people didn't have. These capabilities weren't just useful — they were scarce, and scarcity is what creates durable market value.

Large language models are not eliminating these capabilities. They are eliminating the scarcity.[1]

A first-pass synthesis that took a senior analyst three hours can now be generated in minutes. A structured draft that required a skilled communicator can now be produced on demand. The capability still exists. The scarcity doesn't. Research systematically evaluating LLM task-substitution potential finds the highest exposure — the greatest elimination of scarcity — concentrated precisely in the knowledge-synthesis and information-processing roles that prior automation waves largely left untouched.[2] The professional synthesis capabilities that became economically valuable because writing, the printing press, and the database created so much information to manage are now the capabilities most directly targeted by the next infrastructure layer.

That is what knowledge workers are mourning. And the mourning is appropriate. That era was real, the skills were real, and the market value those skills commanded was real. It isn't irrational to grieve the passing of something that genuinely existed.

But you can't go home again.

The Webber Problem

Here is the deeper Wolfe insight that the title alone doesn't capture: Webber can't return to Libya Hill not only because the town has changed, but because he has changed. The mental models he built while away, the ways of seeing he developed, the understanding of the world he accumulated — none of it maps cleanly onto the place he left. Even if Libya Hill were exactly as he remembered it, he wouldn't fit.

That is the position knowledge workers are in now. The mental models built during the prior era — "synthesis is my competitive advantage," "the ability to produce coherent first drafts is scarce," "organizations need me to do this work that they can't do themselves" — are assumptions that no longer hold against the current reality.

But the Webber Problem runs deeper than just updating your assumptions. The prior articles in this series have built a specific vocabulary for what changed: the bottleneck in knowledge work shifted from information access to synthesis; AI is the first infrastructure layer that directly targets the synthesis operation; and the diagnostic that matters is not "how hard is the task?" but "does the synthesis pattern exist before the work begins?"[3]

The mental models that no longer fit are specifically the ones that treated all synthesis as equally scarce and equally valuable. They are the models that couldn't distinguish between the variance report (defensible-but-not-differentiating, established-pattern synthesis) and the judgment call that determines what the variance means and what to do about it. When those were bundled together — because the only way to get the judgment was to have humans produce the whole package — the scarcity of one protected the value of both. Unbundling them is what LLMs do. And it cannot be undone.

The anxiety is partly about AI tools. But it is also about the discovery that the map you were using doesn't match the territory anymore — and specifically, that the map was accurate until very recently, which makes the mismatch feel like a betrayal rather than an ordinary update.

The Historical Pattern Is Not Comfort — It Is Information

Earlier in this series, the mechanical loom story established the pattern: skilled textile workers were right to fear that automation would change the composition of their work. It did. What they couldn't see was the direction. The loom didn't shrink the industry — it transformed it, expanded it, and shifted human skill toward higher-complexity work. Total employment in textile-related work grew after mechanization. The craft moved up the complexity curve.[4]

This is not offered as comfort. "The pattern suggests things will probably be fine" is not a useful thing to say to someone whose specific skills are currently being devalued. What the historical pattern provides is not reassurance but information: the direction of the transformation, and what adaptation has looked like when it worked.

The consistent finding across every prior mechanization wave is that automation substitutes for routine tasks while complementing workers in non-routine analytical roles — raising the productivity and market value of the work that remains once routine synthesis is handled.[3] This mechanism is not a prediction about AI; it is a documented pattern from the 1960s through the 1990s, and the early evidence from LLM deployments is consistent with it.[5]

What worked was not trying to re-establish the old scarcity. The workers who adapted moved toward work the machine couldn't do — the judgment calls, the creative direction, the exception handling, the decisions that required someone to understand what the right question was before the synthesis could begin. The workers who adapted didn't do less cognitive work; they did more complex cognitive work, on harder problems, with higher stakes. And those harder problems generated higher returns per unit of cognitive labor than the defensible-but-not-differentiating work that automation displaced.[6]

Fear, Uncertainty, and Doubt Are the Wrong Map

There is an orientation toward this transition that gets organizations and individuals nowhere: treating it primarily as a threat to be managed, a risk to hedge, a danger to contain. This is the FUD orientation — fear, uncertainty, and doubt as the primary frame.

The FUD orientation fails for a specific reason. It is analytically wrong about where value is going. Cognitive value is not distributed uniformly across the tasks that knowledge workers perform. The prior articles in this series have made this precise: the established-pattern synthesis that AI handles most reliably is, by definition, the lowest-differentiating portion of the role. The judgment work that follows it — synthesis that determines the pattern rather than executing one — is where error prevention, organizational influence, and genuine strategic advantage actually live.

When you treat the AI transition as a threat to manage, you defend the wrong territory. You protect the task inventory rather than the judgment capability. You resist the automation of the variance report while the capacity for genuine strategic insight — the capacity that was always the actual source of value — sits underused.

The workers who thrive in the post-LLM environment are not the ones who successfully avoided the transition. There is no version of Libya Hill that survived intact. They are the ones who recognized early that the composition of their work was changing, identified which parts of their cognitive labor were defensible-but-not-differentiating, and redirected their attention and development toward the synthesis tasks that produce patterns rather than execute them.

The organizations that get leverage from this transition are the ones asking the composition question: which parts of our cognitive burden have established synthesis patterns, and which parts require the kind of judgment that Polanyi described as resistant to codification?[7] That question has a specific, answerable structure. It is not a question you can ask well if you're primarily oriented around fear.
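For readers who want to see what "answerable structure" looks like in practice, here is a minimal sketch of the composition question applied to a hypothetical role, using the diagnostic from the prior article (does the synthesis pattern exist before the work begins?) and picking up the variance-report example from earlier. The task names, hours, and classifications below are invented for illustration, not drawn from any real audit.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_month: float
    pattern_exists_before_work: bool  # the diagnostic from the prior article

# Hypothetical inventory for a single analyst role; names and hours are invented.
inventory = [
    Task("Monthly variance report", 12, True),                      # established-pattern synthesis
    Task("Stakeholder status updates", 8, True),
    Task("Interpret the variance and recommend action", 6, False),  # judgment work
    Task("Frame next quarter's key questions", 4, False),
]

established = [t for t in inventory if t.pattern_exists_before_work]
judgment = [t for t in inventory if not t.pattern_exists_before_work]

def hours(tasks):
    return sum(t.hours_per_month for t in tasks)

share = hours(established) / hours(inventory)
print(f"Established-pattern synthesis: {hours(established):.0f} h/month ({share:.0%} of the role)")
print(f"Judgment-intensive synthesis:  {hours(judgment):.0f} h/month")
```

The point is not the particular percentages, which are placeholders, but the shape of the answer: the composition question resolves into a partition of hours, and a partition of hours is something an organization can evaluate and act on.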

Forward Is the Only Direction With Traction

The question "how do I protect what I built?" is the wrong question. It assumes you can return to the conditions that made those things valuable, and you can't. The Libya Hills of the pre-LLM knowledge work era are not coming back.

The question with traction is: "What does the work look like at the next level of complexity, and what do I need to develop to do it?"

That question points toward the cognitive labor that remains unambiguously human: the framing of novel problems, the judgment calls that require accountability, the synthesis that produces the pattern rather than executing a pre-existing one. It points toward the skills that AI acceleration makes more valuable, not less — because when routine synthesis is handled, the organizations that can apply genuine judgment to harder problems gain compounding advantage over those that can't.[6]

Thomas Wolfe knew the answer to that, too: "I believe that we are lost here in America, but I believe we shall be found." Forward is not a consolation prize. It is the only direction that has ever led anywhere.

What the Next Series Builds

The articles so far have established the mechanism. Cognitive labor is the engine that transforms information into coherence. Every infrastructure layer expands its scope. AI is the first layer that targets the synthesis operation directly. The diagnostic for where AI can and cannot substitute is whether the synthesis pattern exists before the work begins. The historical pattern is transformation, not elimination — and what survives is always the work that required the kind of judgment that cannot be pre-specified.

What the next series builds is the model that turns mechanism into decision.

Knowing that cognitive labor divides into routine and non-routine is the first cut. The second cut — the one that makes it possible to build a business case — is understanding that the non-routine tier itself has meaningful structure. The judgment required to handle an ambiguous case is different in kind from the judgment required to manage a portfolio, recognize a systemic pattern, or decide what question deserves attention in the first place. Each level commands a different relationship between labor cost and business value generated.
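To make "meaningful structure" slightly more concrete before the next series formalizes it, here is a purely illustrative sketch. The layer names follow the examples in the paragraph above; the cost and value figures are placeholders, not the model's actual numbers.

```python
from dataclasses import dataclass

@dataclass
class JudgmentLayer:
    name: str
    cost_per_hour: float    # placeholder loaded labor cost
    value_per_hour: float   # placeholder business value generated

# Illustrative layers of non-routine judgment; the numbers are invented.
layers = [
    JudgmentLayer("Handle an ambiguous case",      120,   360),
    JudgmentLayer("Manage a portfolio",            150,   900),
    JudgmentLayer("Recognize a systemic pattern",  180,  2700),
    JudgmentLayer("Decide which question matters", 220,  8800),
]

for layer in layers:
    print(f"{layer.name:<32} value/cost ~ {layer.value_per_hour / layer.cost_per_hour:.0f}x")
```

The invented ratios matter only in their direction: if the relationship between labor cost and value generated changes by layer, then redirecting development toward higher-order judgment work is an economic argument rather than mere reassurance, which is exactly the claim the next series sets out to quantify.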

The Cognitive Labor Value Model series starts where the anxiety ends: not with whether this transformation is happening, but with what it's worth — by layer, by role, by dollar of investment. That is where the forward direction becomes operational.



References & Notes

[1] AI eliminating scarcity, not capability. The distinction between capability (what a knowledge worker can do) and scarcity (how rare that capability is) is the article's central framing. The relevant historical parallel is clear in the physical labor case — mechanical weaving didn't eliminate the ability to weave by hand, it eliminated the premium that hand-weaving commanded because it was the only method available. The same mechanism applies to knowledge synthesis at the LLM inflection point.

[2] LLMs specifically eliminate synthesis scarcity in knowledge work. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). "GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models." arXiv:2303.10130 [econ.GN]. DOI: 10.48550/arXiv.2303.10130.

Eloundou et al. find that approximately 80% of U.S. workers have at least 10% of their tasks exposed to LLMs, and that the highest exposure is concentrated in higher-income, higher-education knowledge-synthesis occupations — inverting the pattern of prior automation waves that concentrated displacement risk in manual and clerical labor. This is the empirical anchor for the article's core claim: LLMs target the synthesis capabilities that prior infrastructure layers left untouched, specifically eliminating the scarcity that made those capabilities economically valuable. This is not a speculative concern about future AI; it is a documented capability profile of current LLMs.

[3] The synthesis bottleneck and the established-pattern criterion. The prior articles in this series, Cognitive Labor: The Mental Work Behind Knowledge Work and The Mechanical Loom of Mental Synthesis, drawing on Autor, D.H., Levy, F., & Murnane, R.J. (2003). "The Skill Content of Recent Technological Change: An Empirical Exploration." Quarterly Journal of Economics, 118(4), 1279–1333. NBER Working Paper 8337.

Autor, Levy, and Murnane (ALM) establish that routine cognitive tasks follow explicit rules and are substitutable by technology, while non-routine analytical tasks resist automation because they require judgment that cannot be pre-specified. The "established pattern before the work begins" criterion developed in Article 3 is this article's application of ALM's routine/non-routine line to the specific question of what LLMs can and cannot reliably perform. The claim that LLMs unbundle established-pattern synthesis from judgment-intensive synthesis is the direct consequence of this framework applied to current AI capabilities.

[4] Historical pattern: textile mechanization transformed rather than eliminated the craft. The claim that total textile employment grew after mechanization — despite displacement of handloom weavers — is supported by British occupational census data from 1801–1851. See the companion article What Happened to Physical Labor Roles After Mechanization and Wikipedia: Economic history of the United Kingdom. The mechanism: cheaper cloth expanded demand beyond what cottage industry could satisfy, creating factory employment that exceeded the displaced handloom workforce. The craft moved up the complexity curve; the function survived even as the method was transformed.

[5] Automation substituting for routine tasks while complementing non-routine analytical work. Autor, D.H. (2015). "Why Are There Still So Many Jobs? The History and Future of Workplace Automation." Journal of Economic Perspectives, 29(3), 3–30. DOI: 10.1257/jep.29.3.3.

Autor surveys multiple mechanization waves and documents the consistent finding that automation of routine tasks raised the wage premium for non-routine analytical work. The complement/substitute mechanism — that automating established-pattern tasks frees and concentrates human attention on judgment-intensive tasks — is the empirical pattern on which the "forward direction" argument rests. Early LLM deployment evidence is consistent with this pattern: Brynjolfsson, E., Li, D., & Raymond, L.R. (2023). "Generative AI at Work." NBER Working Paper 31161 — which finds the largest productivity gains in judgment-intensive interactions, not routine ones, when AI handles the established-pattern synthesis.

[6] Adaptation means moving toward harder, higher-value cognitive work. Acemoglu, D., & Restrepo, P. (2019). "Automation and New Tasks: How Technology Displaces and Reinstates Labor." Journal of Economic Perspectives, 33(2), 3–30. NBER Working Paper 25684. DOI: 10.1257/jep.33.2.3.

Acemoglu and Restrepo formalize the reinstatement effect: when automation displaces workers from specific tasks, new tasks emerge at higher complexity levels where humans retain comparative advantage. Over historical time periods, reinstatement effects have largely offset displacement effects, supporting the article's claim that forward-adapting knowledge workers move toward harder problems with higher stakes — not toward unemployment. The article's claim that "harder problems generate higher returns per unit of cognitive labor than defensible-but-not-differentiating work" is consistent with this mechanism: the tasks that emerge at the automation frontier are precisely those where human comparative advantage is strongest and where error costs (and therefore value of correct judgment) are highest.

[7] Polanyi's Paradox and the composition question. Autor, D.H. (2015), cited above [5]. The composition question — which synthesis tasks have established patterns vs. which require judgment that resists codification — is the operational version of Polanyi's Paradox applied to organizational decision-making. Asking "where does synthesis pattern exist before the work begins?" is the analytical move that converts the FUD orientation into a productive one: it replaces "how do I protect my role?" with "which parts of my role are automatable, which parts are not, and how do I redirect my development accordingly?"

Next in Series

What Happened to Physical Labor Roles After Mechanization

The empirical record of how physical labor roles transformed after mechanization — and what the pattern predicts for cognitive labor roles under AI.
