I Have 30 Years of Career Left. AI Made Me Rethink All of Them.
On judgment, hype, the joy of still building things, and learning to prepare for a future nobody can predict.

I’m turning 40 this year. That means, if I’m lucky, I have roughly 30 more working years ahead of me. Thirty years of building things, making career decisions, and trying to stay relevant in an industry that reinvents itself every five to seven years.
Until recently, that felt manageable. I’ve been in software engineering for over 20 years. I’ve survived the transition from monoliths to microservices, the mobile revolution, the cloud migration wave, the DevOps transformation. Each one felt significant at the time. Each one changed what we built or how we built it. But none of them changed whether we were needed.
AI does. And that’s a fundamentally different kind of shift.
The part that’s actually different this time
Every previous technology wave I’ve lived through followed the same pattern: new tools arrived, the work changed shape, and engineers adapted. You learned new frameworks, new paradigms, new infrastructure patterns. The underlying deal stayed the same. Companies needed people to build software, and if you kept your skills current, you’d be fine.
What makes AI different isn’t that it changes the tools. It’s that it changes the leverage. When one engineer with AI can do the work that used to require three, the math changes at the org level. Companies don’t just need different engineers. They need fewer of them.
I watched this play out in real time. Teams getting restructured not because the work disappeared, but because the same work now required fewer hands. Job postings that quietly raised the bar, expecting senior-level output at mid-level headcount. Entire categories of tasks (boilerplate code, documentation drafts, test generation) moving from “junior engineer’s job” to “AI’s job” almost overnight.
And the hype makes everything worse. AI is genuinely transformative, but somewhere between “this is a useful tool” and “this will replace all engineers within five years,” the conversation went off the rails. The loudest voices in the room (often the ones furthest from the actual work) started treating AI capabilities as a foregone conclusion rather than a trajectory. CEOs read a blog post about AI agents replacing entire engineering teams and suddenly that’s the planning assumption. Headcount gets cut not because AI actually replaced those people, but because someone in leadership bought the narrative that it will.
That’s the part that keeps me up at night. Not AI itself, but the decisions being made on the back of AI hype by people who don’t understand what software engineering actually involves. The gap between what AI can do today and what executives think it can do today is enormous, and real careers are getting caught in that gap.
I sat down one evening and tried to project what my career looks like in 2035, and for the first time in two decades, I had no credible model for it. Not a pessimistic model, not an optimistic one. Just a blank space where the plan used to be. Not because the technology scared me, but because I couldn’t predict which version of the story the industry would choose to believe.
That blank space is what got me moving.
I’m betting on judgment, not output
What AI can’t do (at least not yet, and I’d argue not for a long time) is exercise judgment in context.
Here’s what made it click for me. I’ve been using Claude Code lately, and it’s good. Not “neat party trick” good. Actually good. The kind of good where I ask it to build something and the code that comes back is clean, well-structured, and works on the first run more often than I’d like to admit. A year ago I could dismiss AI-generated code as a rough draft that needed heavy editing. Now? Now it writes code that looks like something I’d write. Sometimes better.
That realization forced a question I’d been avoiding: if the code itself is no longer the hard part, what am I actually being paid for?
The answer, I think, is judgment. Knowing which thing to build. Understanding why one technically correct approach is wrong for this particular team, this codebase, this set of business constraints. Seeing the second and third-order consequences of a technical decision before they show up in production. That’s where experience lives, in the space between “this works” and “this is right for the situation.”
So I’m doubling down there. On understanding business context. On learning domains deeply. On being the person who can evaluate what AI produces and say “this looks right but it’s wrong, and here’s why.” That instinct doesn’t come from tutorials or certifications. It comes from watching systems succeed and fail in production for 20 years, from understanding not just how things work but why they were built that way.
But here’s the thing about that kind of judgment: it doesn’t develop in a vacuum. It develops through building things. Which is why I still code, even though my current role doesn’t require it.
I’m working as a developer relations manager focused on content now (which is terrifying and exciting in equal measure), so I’m not writing code all day anymore. Most of my work is writing, and I use AI to help with it. But here’s what’s interesting: AI can help me find the right words, tighten a paragraph, suggest a better structure. What it can’t do is decide what’s worth writing about, or know which angle will resonate with a senior engineer who’s been through three rewrites of the same system, or recognize when a piece of technical content is subtly misleading in ways that only someone with domain experience would catch. I bring the judgment. AI helps with the execution.
And the exact same thing applies to coding. I still code because it’s fun, but also because I’ve realized the relationship with AI works the same way there. AI can write the code. It can’t architect the system. It can’t decide which tradeoffs to make, or know that the elegant solution it just generated will fall apart at scale, or understand why the team chose a boring technology stack on purpose. The person guiding the work, deciding what to build and what not to build, evaluating whether the output actually solves the problem, that’s where experience lives.
In both cases, you learn the same thing: how to decompose a vague problem into concrete steps, how to hold a complex system in your head and reason about its edges, how to develop an instinct for where things are likely to break. It’s not a coding skill or a writing skill. It’s a thinking skill. And if you don’t have it, you can’t meaningfully evaluate what AI gives you. You can look at the output and think “that seems fine.” But you can’t see the subtle N+1 query hiding in the data access pattern, or the race condition that only shows up under load, or the security assumption baked into a convenience method.
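To make the N+1 query concrete: it’s the pattern where code fetches a list in one query, then quietly issues one more query per item in a loop. Here’s a minimal, self-contained sketch using an in-memory “database” with a query counter standing in for real round trips (the helper names `fetch_posts`, `fetch_author`, and `fetch_authors` are made up for illustration):

```python
# Simulated database: 10 posts, 2 authors. QUERY_COUNT stands in
# for network round trips to a real database.
QUERY_COUNT = 0
AUTHORS = {1: "Ada", 2: "Grace"}
POSTS = [{"id": i, "author_id": 1 + i % 2} for i in range(10)]

def fetch_posts():
    global QUERY_COUNT
    QUERY_COUNT += 1          # one query for the whole list
    return list(POSTS)

def fetch_author(author_id):
    global QUERY_COUNT
    QUERY_COUNT += 1          # one query PER post -- the hidden cost
    return AUTHORS[author_id]

def fetch_authors(author_ids):
    global QUERY_COUNT
    QUERY_COUNT += 1          # one batched query for all authors
    return {aid: AUTHORS[aid] for aid in author_ids}

# N+1 version: reads fine in review, but costs 11 queries for 10 posts.
posts = fetch_posts()
slow = [(p["id"], fetch_author(p["author_id"])) for p in posts]
n_plus_one_cost = QUERY_COUNT

# Batched version: same result, 2 queries total.
QUERY_COUNT = 0
posts = fetch_posts()
authors = fetch_authors({p["author_id"] for p in posts})
fast = [(p["id"], authors[p["author_id"]]) for p in posts]
batched_cost = QUERY_COUNT

print(n_plus_one_cost, batched_cost)  # 11 vs 2
```

Both versions produce identical output, which is exactly the point: the difference never shows up in a diff or a unit test, only in production load.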
Learn to code. Keep coding. Not because you’ll write every line yourself for the next 30 years, but because it trains the kind of thinking that makes everything else you do more valuable.
I’m building things that are mine
I used to pour everything into my employer. My professional identity, my network, my reputation, my growth, all of it lived inside one company’s walls. That felt normal. It’s what everyone around me was doing.
Then I watched a round of layoffs hit people I respected. People with deep expertise and years of institutional knowledge. And yes, their skills transferred, their experience was real, their ability to do the work hadn’t changed overnight. But something else had. The ground they were standing on vanished. The internal reputation, the relationships with leadership, the security of knowing where you fit, all of that evaporated in a single meeting. And suddenly they were competing in a market that had gotten significantly more crowded, against people with similar resumes and similar experience, in a hiring landscape where being talented wasn’t enough anymore. You had to be visible. You had to be connected. You had to be someone the market already knew, not someone it had to discover from a cold application.
That’s when I started thinking about professional gravity differently. Not as something your employer gives you, but as something you build that exists independent of any single company.
I’ve always been a writer. Blog posts, technical articles, documentation, the kind of writing that lives inside a company’s content strategy and serves someone else’s goals. But I’d stopped writing for myself. So I picked it back up, this time with a different purpose. Not as a hobby, not as a creative outlet, but as a deliberate investment. A newsletter about the things I think about anyway: engineering careers, leadership in the age of AI, the unspoken tensions of navigating a rapidly changing industry with decades of runway still ahead of you. Published thinking that shows people how I reason, not just what I’ve done. A network of people who know my perspective because they’ve read it, not because we happened to work on the same Jira board.
That same logic extends to money. Income diversification is the area where I’ve historically been the worst. One paycheck, one employer, one industry. I never seriously thought about what happens if that stream dries up, because it never did. I just wasn’t wired to think about money strategically, and I suspect a lot of engineers are the same. We talk about total comp and RSU vesting schedules, but we rarely talk about income resilience.
So I’m learning (slowly, awkwardly) how to diversify. Talks and workshops where two decades of experience becomes a product instead of just a resume line. A professional network that creates optionality for consulting if I ever need it. None of these produce meaningful income right now. That’s fine. I have 30 years. The goal isn’t to replace my salary tomorrow. It’s to make sure that if something changes suddenly, I don’t get caught with no options and no runway to react.
I don’t have it all figured out, and that’s the point
I want to be clear about the limits of what I’m sharing here, because I think unfinished thinking is more useful than pretending I have a polished playbook.
I don’t know how to plan a technical career when the half-life of technical skills is shrinking this fast. I don’t know what engineering leadership looks like in five years, whether managers become AI-team leads or the role gets compressed because there are fewer humans to manage. I don’t know if 30 years from now, the career I’ve built will look anything like what I imagined when I started.
That used to scare me. It doesn’t anymore, and here’s why.
Every major technology shift in my career has created more opportunity than it destroyed. Not immediately, and not for everyone, but eventually and overwhelmingly. The web didn’t kill software. Mobile didn’t kill the web. Cloud didn’t kill infrastructure. Each wave created entirely new categories of work that nobody predicted from the inside.
I believe AI will do the same. The possibilities opening up right now are extraordinary. We’re going to build things in the next decade that we can barely imagine today. Entirely new categories of work will emerge, just like they always have. That’s not a threat. That’s what makes this the most exciting time to be working in technology.
But exciting doesn’t mean safe. The opportunities will be there. They just won’t show up automatically at your door.
I don’t know what the future will bring. But I know what I’ll keep doing: coding, teaching, explaining, exploring, and building. Those are the things that got me here, and they’re the things that still make me want to sit down at my desk every morning. I hope I get to keep doing them as a profession for the next 30 years. I think I will. But in the meantime, I’m making sure that if the rules change, I’m not standing still wondering what happened.
That’s the bet. I’m genuinely excited about it. I’ll let you know how it goes.
The Long Commit
Practical, no-fluff writing on engineering careers in the age of AI. Weekly notes from 20+ years in the field.