The AI Competency Tax*: How LLMs Are Rewarding Power Users

We’re living through a strange moment in technological history. Artificial intelligence was supposed to democratize expertise, making advanced capabilities accessible to everyone. Instead, it’s unexpectedly creating a chasm between power users and casual adopters.

This isn’t happening despite AI’s current limitations; it’s happening because of them. The very flaws that dominate headlines – hallucinations, missing context, extreme verbosity – act as an invisible filter. Those who know how to steer around them see their output multiply, while others end up underwhelmed or distracted. We can call this phenomenon the ‘AI Competency Tax’.

Choose Your Flywheel

AI tools today offer you two very different flywheels:

Flywheel | What AI Does | Long-Run Effect
Focused Leverage | Generates first drafts, code stubs, structured summaries | Frees experts for judgment → sharper output → deeper learning → smarter prompts → compounding gains
Ambient Distraction | Curates infinite, personalized novelty | Distracts attention → weakens self-regulation → tighter algorithmic grip → shrinking capacity to guide AI

Both loops feed on identical mechanisms—personalized inputs, rapid generation and collation, ubiquitous access to the world’s knowledge base—yet they compound in opposite directions. The difference lies not in access to the technology, but in the user’s discipline and existing domain expertise: the capacity to direct the tool and extract value from it.

Hallucinations: A Filter Only Experts Can Afford

Consider the asymmetry: as AI makes personalized distraction infinitely cheap to produce, the opportunity cost of unfocused attention approaches infinity. A seasoned professional can use LLMs to accelerate research and compress learning curves. A casual user gets served perfectly tailored content that keeps them scrolling for hours (look no further than TikTok, which has become an endless doom loop of low-quality AI-generated content).

Conventional wisdom treats AI hallucinations as a pure liability, an error rate to be minimized before ‘general adoption’ can begin. But this misses a crucial dynamic: hallucinations currently function as a competency tax that only experts can afford to pay.

Case studies to illustrate the point:

Pros:

  • A Harness case study indicated a 10.6% increase in pull requests with GitHub Copilot, alongside a 3.5-hour reduction in cycle time, suggesting productivity gains for engineers using AI-assisted coding tools.
  • Google’s CEO Sundar Pichai has stated that internal AI coding assistants, like “Goose,” make engineers roughly 10% more productive.

Cons:

  • Stanford’s research, including the study “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries,” found legal AI tools hallucinate citations over 17% of the time, with real-world examples showing fabricated cases and misrepresentations.
  • Pattern Data’s medical record reviews also surfaced risks, noting LLMs struggling with clinical notes, potentially affecting patient safety and legal outcomes.

Regardless of the scenario above, positive or negative, one insight emerges: verification costs scale inversely with domain expertise. For a seasoned engineer, lawyer, or doctor, fact-checking AI-generated output is cheaper and faster than starting from scratch; for a domain expert, spotting hallucinations becomes pattern recognition*. For novices, the penalties for errors remain high, and verification will likely cost more than original research, creating a negative return on time spent with AI assistance.

*(Caveat: The end use case is also extremely important. LLMs today are great for coding but may not be great at other tasks.)

Trampolines for Experts, Treadmills for Novices

Large language models excel at making the abstract concrete. Hand them a half-baked concept, scatter-brained notes, or a simple outline and they’ll return a coherent draft in seconds. High-intent users treat this as a cognitive trampoline, launching from rough ideas to structured output. Without domain expertise to guide the process, however, the same capability becomes a liability. Users may find themselves caught in cycles of polished but superficial content, where each iteration sounds increasingly authoritative without adding any real substance. The technology that accelerates experts toward breakthrough thinking traps novices on an endless treadmill of refining mediocre, low-value inputs.

This creates compounding advantages for high-agency users, with domain expertise:

  • Rapid Ideation: Jump from vague hypothesis to structured argument without staring at a blank page.
  • Compound Learning: Each revision teaches the model (and you) what ‘good’ looks like, shortening the cycle to the next high-value output.
  • Cognitive Leverage: Experts offload low-value monotonous work to AI while reserving human attention for high-value decisions—strategy, creativity, judgment calls.
  • Domain Expansion: Strong knowledge in one area becomes a platform for AI-accelerated learning in adjacent fields.

The Coming Divergence

We’re approaching an inflection point. Current AI limitations won’t persist forever. Retrieval-augmented generation and model upgrades are reducing hallucinations. Better interfaces are being designed to support sustained attention. The current ‘AI Competency Tax’ on novice users will decrease. But by the time AI becomes genuinely accessible to casual users, early adopters will have accumulated massive advantages. The compounding returns of skilled AI use (better prompts, refined workflows, expanded domain expertise) create gaps that become harder to close as the technology improves.

The most important insight about AI adoption isn’t technological; it’s behavioral. The tools that will reshape entire industries are already in everyone’s pocket. The question isn’t whether AI will change your industry; it’s whether you’ll be among the few who shaped that change or the many who merely experienced it. Start treating AI like a skill that compounds, not a toy that entertains. Your future self will thank you for the reps you put in today. The coming divide isn’t about AI access. It’s about AI agency.

To close, here are some thoughts on agency from Andrej Karpathy, a Slovak-Canadian computer scientist and co-founder of OpenAI.

*Disclaimer: If you made it this far, this entire piece was iterated on with AIs – from the text to the images! Tools/models used include ChatGPT o3, Claude Sonnet 4, and Adobe Firefly. I’ve been using AI to iterate on ideas I have throughout the day (e.g., “hallucinations as an advantage vs. limitation” thought stream) and that’s what motivated this post and the creation of the ‘AI Competency Tax’ concept.
