
The Agency Trap

April 2026

Every few months, someone prominent announces that AGI — Artificial General Intelligence — is close.

It's right around the corner, and when it arrives, the argument goes, it will render most human labor economically obsolete.

Not some labor — most of it.

The knowledge worker is slated to be replaced, not elevated, they say.

This argument has a structure problem. It isn't wrong because AGI is impossible. It's wrong because it smuggles in a series of assumptions — about accountability, legal standing, moral responsibility, and social legitimacy — and treats them as already resolved.

They aren't.

And they won't be resolved automatically by capability improvements, however large.

This is the Agency Trap: assuming that if an AI system can perform a task, it can also own the task — bearing the accountability, authority, and institutional standing that human workers carry.

Capability and agency are not the same thing.

The argument that AGI ends human labor conflates the two, and that's where it goes wrong.

The Target Keeps Moving

Start with the claim itself: "AGI" has no agreed definition (and neither does reasoning, if you dig into it).

Depending on who you ask, "AGI" means: human-level performance across benchmarks, general reasoning in novel domains, conscious experience, or recursive self-improvement. Each definition implies a different threshold and a different timeline.

François Chollet, who developed the ARC benchmark specifically to test general reasoning, argues that current AI systems demonstrate impressive performance on trained distributions but fail to show the fluid generalization that "general" intelligence requires.[1]

Other researchers have found that AI systems consistently solve the specific problems they were trained on while struggling with adjacent problems that humans handle easily.[2]

The capability gains are real. The generality is not — yet. And "around the corner" is not a timeline; it's just framing. The goalposts move with the technology.

This is the same pattern we’ve seen every time AI makes a big leap: people quietly move the definition of “general intelligence” to whatever AI still can’t do.

Capability Without Agency: The Hard Problems Remain

Alright, let's play the game. Suppose there is a system with human-level or greater task performance across all cognitive domains. Does that resolve the labor question?

Nope.

We're still stuck with three issues that don't go away, no matter how capable the system becomes:

  • Legal accountability
  • Moral responsibility
  • Institutional legitimacy

First: accountability cannot be automated away. When an AI system causes harm — a wrong coverage decision, a defective legal brief, a misdiagnosis — someone is responsible. Not the system. It comes back to a basic principle of agency law: an agent must be capable of bearing legal liability for its own conduct.[3]

Software can’t be sued, fined, or sent to jail. When something goes wrong, the responsibility still lands on the people who built, deployed, or approved the system. So the human worker has not really disappeared — they have just been hidden inside a more complicated chain of accountability. As Raji and her co-authors have shown, when AI systems cause harm, responsibility often gets spread across developers, deployers, and users in ways that courts and regulators are becoming less willing to accept.[4]

However it is diffused, legal accountability sticks to someone, somewhere.

Second: moral responsibility requires more than getting the right result.

As Harry Frankfurt argued[5], what separates a person from a mechanism is the ability to reflect on their own motives, accept or reject them, and act from values they have genuinely made their own.

Iason Gabriel makes a similar point in his writing[6] on AI alignment: current AI systems, and even the kinds of systems we presently know how to build, do not have values in that sense. They optimize, but they do not deliberate, and that difference matters when responsibility is on the line.

Third: institutional legitimacy is something people choose to grant. It is not a technical threshold that a system eventually crosses.

Even if a system could act on its own over time instead of just reacting, that would create new problems, not solve the old ones.

If it really had its own preferences, then alignment would become the main problem.[7]

And whether a system’s decisions are accepted as valid is still something humans decide through laws and institutions, not something the system earns just by becoming more capable. Regulators in areas like insurance, medicine, and finance have made this clear by requiring documented human accountability for decisions that affect other people. That is a political and ethical choice, and it does not disappear just because the technology improves.

It's just not easy to separate agency from capability.

What If It Really Could Act?

The deeper version of the AGI argument sometimes acknowledges the accountability problem and responds: but what if legal frameworks adapted to recognize AI agents as legal persons?

This is a serious argument, but it creates as many problems as it solves.

Non-human entities can have legal personhood — corporations do, for example — but that status is created and limited by human institutions, not something that arises automatically from capability.

And Nick Bostrom's treatment of the question is worth taking seriously: a genuinely autonomous system with its own goals is not obviously a tool any longer.[8] It may have interests that conflict with human interests.

The question of who it serves stops being practical and becomes fundamental.

None of it works if you can't solve accountability — and none of the people making this argument have tried.

The Persistent Minimum

What the history of knowledge work actually shows — from spreadsheets to relational databases to today’s wave of LLM tools — is that automation changes the kind of work people do.

It does not eliminate the need for people.

It follows the same pattern seen with the mechanical loom in the textile industry two centuries ago: machines take over the repeatable, structured tasks, and humans are left with the parts the machine cannot handle — judgment, quality control, exceptions, and decisions that do not fit an established pattern.[9]

The spreadsheet is worth dwelling on because we have fifty years of hindsight. VisiCalc didn't eliminate accountants. It created the modern CFO role — freeing analysts from arithmetic and pointing them toward judgment.

The firms that once resisted spreadsheets are now the ones that rely on financial modeling as a competitive advantage. The fear of replacement was right that disruption was coming, but wrong about where it would lead.

The same dynamic holds for cognitive labor.

Each time a new information tool shows up — whether it is the filing cabinet, the spreadsheet, the database, or the large language model — the routine synthesis work moves to the tool, and people spend more of their time on work that requires real judgment. The ceiling on output goes up and the need for skilled judgment does not fall; it grows, because the tool lets the organization take on more. That is not speculation. It is the pattern every earlier wave has followed.

The Minimum Viable Person persists at every level of automation, because capability without accountability is not a replacement.

It is a tool.

The Agency Trap is the belief that capability is enough.

It isn't. And it never was.

References

  1. Chollet, F. (2019). "On the Measure of Intelligence." arXiv:1911.01547. Introduces the ARC benchmark and argues that current AI demonstrates narrow performance, not general fluid intelligence.
  2. Mitchell, M. (2021). "Why AI Is Harder Than We Think." arXiv:2104.12871. Surveys systematic generalization failures revealing the gap between task performance and general intelligence.
  3. American Law Institute. Restatement (Third) of Agency § 1.01 (2006). An agent must be capable of bearing legal liability for their own wrongful conduct.
  4. Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). "The Fallacy of AI Functionality." ACM FAccT Conference, 959–972. Accountability diffuses unacceptably when AI systems produce harmful outputs.
  5. Frankfurt, H. G. (1971). "Freedom of the Will and the Concept of a Person." Journal of Philosophy, 68(1), 5–20. Genuine agency requires second-order volitions. Optimization is not deliberation.
  6. Gabriel, I. (2020). "Artificial Intelligence, Values, and Alignment." Minds and Machines, 30(3), 411–437. No current AI architecture produces genuine values in the sense required for moral agency.
  7. Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. The alignment problem: specifying what a capable system should optimize for in a way that produces outcomes humans actually want.
  8. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. A genuinely autonomous system with its own goal structure raises questions harder than labor economics.
  9. See "The Mechanical Loom of Mental Synthesis" for the full analysis of how automation reshapes cognitive labor composition without eliminating the demand for human judgment.
