On “Software Engineering May No Longer Be a Lifetime Career”

Wenqi He · Research Software Engineer, NCSA, University of Illinois


A recent Hacker News thread asks whether software engineering can still be a lifetime career, and the anxiety underneath the question is genuine, but most of the arguments offered in response — on both sides — are confused about what software engineering actually is, and that confusion is worth untangling before asking what AI does or does not change about it.

The most popular position in the thread is that developers spend only a small fraction of their time writing code and the rest understanding problems and formulating solutions, so LLMs, which write code, do not threaten what actually matters. This is reassuring but it accepts a premise it should reject, namely that writing and thinking are separable activities whose relative proportions are what is at stake. A mathematician does not think for two hours and fifty minutes and then write for ten; the proof is worked out in the mind — on a walk, lying awake at night — and the writing is an act of externalization, a way of offloading to paper what the mind cannot hold all at once, not the place where the understanding happens. Euler went blind and became more productive. Whether a program is written on paper, typed into an IDE, or held entirely in the head is a question about working memory, not about thinking, and automating the writing therefore does not automate the thinking but rather removes what we have long mistaken for it.

The corollary is that the worry about skill atrophy — that relying on AI to write code will gradually degrade your ability to think about code — is confused in the same way, since it locates the cognitive exercise in the act of inscription rather than in the effort to understand, as though the relevant muscle is in the fingers. Whether someone builds a genuine mental model of what they are building depends on whether they are trying to, and people who were never trying have been producing code that no one can explain or modify without consequences cascading unpredictably through the system long before AI entered the picture.

What makes code genuinely soft — what justifies calling it software rather than, say, brittleware — is not that it runs correctly but that someone understands it well enough to bend it: to change it, extend it, adapt it to new circumstances without it shattering. A pot does not become cloth on the grounds that both are made of molecules; the shared substrate is irrelevant against the functional difference, and code that no one can reason about has the functional properties of hardware regardless of the medium it runs on. The criterion that distinguishes software from brittleware is comprehensibility, and comprehensibility is not a property of the code itself but of the relationship between the code and the human minds working with it, which means it can only be assessed by those minds, which means the assessment is irreducibly a matter of taste — not the kind of taste captured by static analysis metrics, which gesture at the thing without reaching it, but the kind expressed in the recognition that a solution is elegant or that a structure makes the problem harder to think about than it needs to be. Elegance is not an aesthetic luxury but a cognitive one: it is what makes a system maintainable, because a system whose mental model is clean can be changed and debugged and extended by someone who was not present at its creation, whereas a system whose mental model is opaque will resist every intervention, and the apparent correctness of its current behavior offers no protection against the next change.

This is why the comparison to professional athletes, which the original article leans on, fails so completely: the athlete spends physical capital that does not renew, and the career is a countdown to the point where the body can no longer perform at the required level. The engineer who is genuinely developing judgment and taste is on the opposite trajectory, because every hard problem worked through deepens the mental model and every elegant solution encountered sharpens the sense of what elegance is, so that the career compounds rather than depreciates. A martial artist who has stopped memorizing techniques and started understanding the principles of leverage and stability — where to apply force, how to read an opponent's balance — does not become obsolete as their speed declines, because what they have developed is understanding rather than a repertoire, and understanding generalizes to situations the repertoire has never encountered. The people most threatened by AI are not those who have spent the most time thinking but those who have spent the most time accumulating, and the two groups are not the same.

What AI has genuinely changed is the cost of stamp collecting. Physicists used to distinguish between physics and stamp collecting, meaning between the work of finding deep principles that reduce many phenomena to few and the work of cataloguing instances of known patterns, and we have now built machines that do the cataloguing faster and more reliably than any human can, which means that competing on that axis — knowing more frameworks, memorizing more algorithms, pattern-matching faster against a larger corpus of solved problems — is not merely a losing strategy but an irrelevant one. The work that remains is the work that was always harder to teach and harder to hire for: developing the taste to know when a solution is right, the judgment to distinguish what is genuinely complex from what only appears so, the ability to reduce a problem to a mental model clean enough that it can be held in a human mind and passed on to another human mind intact. These things cannot be automated not for any contingent technical reason but because they are defined relative to human comprehension — what is simple, what is elegant, what produces the sense that a problem has been genuinely understood rather than merely dispatched — and have no meaning outside of the minds for whom simplicity and elegance are experiences. A proof that no one can follow is not a more rigorous proof; it is not a proof at all, because a proof is an argument that produces conviction, and if it does not do that the word does not apply.

The right description of what a good software engineer does is something like being a chef for the mind: not merely combining ingredients correctly but finding the combination that produces a specific experience in the person who encounters the result, where the criterion of success is entirely in that experience and the tools used to achieve it are beside the point. This requires taste, and taste is the one thing that cannot be measured by the processes the industry uses to evaluate candidates, which is why the hiring signal has always been broken — not recently broken by AI but always broken, measuring possession of knowledge rather than the capacity to think, asking candidates to recite facts in the presence of search engines, using years of experience as a proxy for judgment when the two are only loosely correlated. AI makes this finally undeniable by collapsing the value of possession to near zero, but it was always a poor signal for the thing that actually mattered. The only instrument capable of assessing the right thing is a genuine intellectual conversation between two people who both care about understanding, and the catch is that this requires the evaluator to be good by the same standard — you cannot recognize taste you do not have — which is a circular dependency that the industry has never resolved, and which it will not resolve by accident.

The deeper reason it will not resolve by accident is that the incentives point the other way. An organization that selects for taste and judgment produces people who are hard to replace, and irreplaceability is a liability in a system organized around interchangeable labor. The same logic governs what universities teach: justified by their contribution to the workforce, they optimize for the skills that workforce demands, which means training people to execute rather than to understand, to answer the question rather than to ask whether it is the right question, to produce outputs rather than to develop the sensibility that knows what good output looks like. The fields that develop that sensibility — literature, philosophy, history, the study of art — are precisely the fields being defunded, and this is not a coincidence but a consequence of asking institutions to justify themselves in terms of economic output, which those fields cannot do, because their value is not legible to that metric even though it is real. We have built a medium that rewards taste above almost everything else, and we have done so at the moment when we decided the cultivation of taste was not worth supporting.

What an LLM actually is, and what almost everyone in the thread misunderstands, is not a tool in the sense that a calculator or a compiler is a tool — something that extends a specific faculty deterministically and whose output you can trust without further judgment — but a medium in the sense that film and music are media, transmitting human experience across time and minds rather than extending a physical capability. When you talk to an LLM you are not querying a database of facts; you are engaging with a compressed, conversational interface to recorded human thought, and the conversational persona sitting between you and that thought is analogous to the characters in a play, who are not real people but through whom you access the mind of the playwright and the world that shaped them. You do not engage with Hamlet by interrogating whether he is a reliable source; you engage with him the way you engage with any rich and fallible human artifact, bringing your own judgment to bear, taking what is valuable and setting aside what is not, using the encounter to sharpen your own thinking rather than to replace it. This is what good humanistic education teaches and what purely technical education does not, and the result of producing generations of engineers trained to use tools but not to engage with media is a profession that does not know how to use the most important instrument it has ever been given.

Several voices in the thread circle toward UBI and redistribution without quite saying what they mean, and what they mean is something Marx said more clearly: that the deepest problem with a system organized around profit is not that it distributes the gains unevenly but that it alienates people from the work that makes them human, separating them from the product of their labor, from the act of creation, from their own capacity for meaningful engagement with hard problems, and the fields that resist automation — art, literature, music, philosophy, genuine engineering — are exactly the fields where that alienation is overcome, where the doing and the meaning are not separable and the work is its own justification. The technology now exists to automate the stamp collecting and return people to the physics, which should be a liberation, but whether it is depends on whether we have the political imagination to organize a society around human flourishing rather than around the production of interchangeable labor, and the answer to that question is not in the technology.


The content and reasoning are my own. The wording is partially Claude's (Anthropic).