How my software engineering workflow has changed in the past year
Back in April 2025, I wrote a post about how AI had changed my software engineering workflow. However, the release of models like Opus 4.6 since then has changed the landscape of the industry so dramatically that most of what I wrote is already irrelevant! So here is an updated version covering what has changed since then.
A note on job prospects
In my previous post, I wrote that AI wouldn't replace software engineering jobs because, unlike humans, the models cannot make sense of vague requests. This is still largely true. Although the coding capabilities of the models have evolved dramatically, they still require the user to know what they are doing, as those capabilities are only unlocked when you provide clear instructions to the model. This makes the idea of a product manager with zero programming experience vibe-coding their way into a billion-dollar business just as ridiculous as it was back then.
However, although the job will still exist, what it comprises is radically changing. The archetype of the ultraspecialist who focuses their entire career on a single platform or language, which became very popular around a decade ago when mobile engineering was on the rise, is no longer a good career path. There is little point in becoming a specialist in a particular tool when anyone can now guide an AI model to the same result with a fraction of the experience.
What's meta now in the engineering industry in the wake of AI is to T-shape into broad knowledge of all the different systems and products developed by your company. That way, while the AI focuses on the deep aspects of the code, you can focus on what the AI currently cannot do: making sure that what it implements is in line with what's being developed for other aspects of the product. It's a blend of software engineer and product manager, which The Pragmatic Engineer calls a product engineer.
I predict that companies will soon require everyone to move into this role, with those who fail to do so being at risk of future layoffs. Those at the senior level likely already perform this role in some capacity, but it could be a large barrier to clear for more junior developers.
How my workflow has changed
In my previous post, I mentioned that the bulk of my work happened in Cursor's tab completion, with Claude handling only a small share of more special cases. This has now flipped. I would say I nowadays do 95% of my work by prompting Claude, with the remaining 5% being manual coding to occasionally make small adjustments to the AI's output. I also stopped using ChatGPT entirely in favor of using Claude for everything.
The reason for this change is that the newest models are so capable that it is now much faster to simply ask the model to do the work than to do it myself. Where the models used to make all sorts of mistakes, nowadays they get pretty much everything right if you give them clear instructions. In repos that have well-defined AGENTS.md files and skills, I can even go a step further and use the --dangerously-skip-permissions flag to let the agent do its work, commit, and open a PR without any supervision.
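To illustrate what "well-defined" means here, a minimal AGENTS.md might look something like the sketch below. All the project details (directory layout, commands, branch name) are hypothetical; the point is simply to document the repo's real commands and conventions so the agent can verify its own work before committing:

```markdown
# AGENTS.md

## Project overview
A TypeScript web service. Application code lives in `src/`, tests in `test/`.

## Commands
- Install dependencies: `npm install`
- Run the test suite: `npm test`
- Lint the code: `npm run lint`

## Conventions
- Every change must include tests and pass lint before being committed.
- Open pull requests against `main` with a short summary of the change.
```

With a file like this in place, the agent knows how to build, test, and validate its changes on its own, which is what makes unsupervised runs with --dangerously-skip-permissions reasonably safe in practice.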
It's even scary at times: sometimes I prompt absolute garbage full of typos, yet somehow the model still understands exactly what I meant.
Many folks are using all kinds of complex architectures and plugins to interact with agents, but I run a fully vanilla setup without any of that fuss. I find that those special setups are completely unnecessary to get good performance out of the agent.
The use case for agents that I find most powerful was, and still is, making sense of complex codebases. Recently I've been constantly doing tasks that require me to work in multiple codebases I've never seen before, and what would previously have been a multi-month task requiring several engineers can now be done in mere minutes by simply asking Claude some questions with the code cloned locally. Thanks to AI, I've been able to deliver extremely intricate improvements to some of our systems not only completely by myself, but also at an unprecedented speed. It's incredible how much this technology has changed the industry.
The downside: Am I really having fun?
What draws me to software engineering is the problem solving. I really enjoy trying to make sense of complicated issues. But now that AI does all of that for me, I'm not getting that kick anymore. Yes, I can deliver much faster, but I also feel that I no longer really understand what I'm delivering. Struggling to solve something was a very important part of the learning process, and with that out of the way, I feel that I'm not really learning anything anymore, despite technically being able to deliver much more than I used to. I know many others feel the same way.
This leaves me unsure about the future of the industry. The AI models are smart because they were trained on decades of human content and documentation. But if no one produces that content anymore because everything is now made by AI, how will future models be trained? They cannot train on their own output. Does this imply there is an upper bound to how smart coding AIs can be?
The future is very uncertain right now, but I hope we find ways to continue having fun in the midst of it.