The Year I Stopped Chatting and Started Managing
2025 in Review: AI, Developer Experience, and How Work Is Changing in 2026
Now that 2026 is here, I’ve been reflecting on what really changed for me in 2025. I’m not talking about big headlines or new model releases (we all heard more than enough about those), but about how daily work actually felt.
If I had to summarise the year in one sentence, it would be this:
We stopped chatting with AI and started managing it.
That shift did not arrive all at once. It snuck in quietly, through tooling decisions, workflow changes, and a growing realisation that the role of an engineer was being subtly redefined.
From Autocomplete to Delegation
For a long time, AI in development felt like a smarter autocomplete: useful, occasionally impressive, but ultimately limited. In 2024, it became genuinely usable. By the beginning of 2025, it became structural.
The real shift wasn’t about raw model intelligence. It was about how work was handed off.
By the end of the year, the question was no longer “How do I write this function?” but “How do I frame this problem so an agent can plan it, execute it, and let me review the result?” That distinction matters. It moves you from being the default executor to being responsible for intent, constraints, and final judgment.
Once that clicked, my own tooling preferences started to change as well.
When LLMs Became Workflow
What changed for me is less about which tool I used and more about how I use these tools day to day. My workflow gradually shifted from asking for help in small chunks to relying on longer, uninterrupted reasoning sessions. Architecture decisions, large refactors, and system-level changes require continuity. Once that became the dominant mode of work, friction around limits, resets, and broken context started to matter more than UI polish or feature checklists (which, to me, is the easy part).
I also found myself thinking less in terms of prompts and more in terms of intent. Instead of asking for an answer, I wanted to hand off a responsibility. Plan this change. Carry it through. Validate the result. That style of interaction aligns better with how experienced engineers already operate when the problem space is large and ambiguous. It reduces the mental overhead of micromanaging each step, but it also raises the bar for trust and review.
At the same time, this shift exposed a subtle DX tension. Autonomy is useful, but it is not always what you want. There are many moments where the fastest path forward is simply understanding how something works. No plans, no execution, no artifacts. Just context. When tools default to action, even curiosity starts to feel expensive.
Late in the year, this evolution in how I worked is what led me to move from Cursor to Antigravity (higher limits on premium models were the main reason, frankly). Cursor’s ASK mode remains one of the best examples of low-friction exploration I have used, and it is something I still miss. In Antigravity, I ended up writing a small prompt to force the system into a pure explanation mode when needed. The prompt itself matters less than what it represents: a way to pull the system back into a quiet, exploratory mode when autonomy becomes overhead.
For those interested in the prompt:
Act in ASK mode.
Do not create plans, outlines, or step-by-step solutions. Just respond to me conversationally.
When I ask about the codebase:
- Ask clarifying questions first if needed
- Answer briefly and directly
- Focus on understanding, intent, and implications
- Avoid long explanations unless I explicitly ask for them
Treat this as an interactive back-and-forth, not a planning exercise.
The DX Paradox of 2025
This was one of the bigger themes of the year. Yes, agents reduce busywork, and we typed less boilerplate. But we also ended up reviewing more diffs, double-checking more claims, and spending more time considering whether the system had actually resolved the problem (or created new ones).
Things got faster, but confidence didn’t just come along for the ride.
The teams that thrived weren’t just shipping quicker. They were intentional about what AI handled and where humans stayed in the loop. Transparency mattered. Proof mattered. Tests, artifacts, traceability…they all became more important.
Trust became, and still is, the top concern.
Looking Ahead to 2026
If 2025 was about agents inside the IDE, 2026 feels like the year they start moving outward.
Anthropic’s release of Claude Cowork is an early hint of this shift. The focus isn’t just on writing code faster anymore; it’s about delegating work across files, tools, and environments. It’s horizontal assistance, not just narrow optimization. I’m testing Cowork now. Desktop-level agents promise real leverage, but they also magnify mistakes. It will take time to see whether the friction outweighs the benefit.
One thing already feels clear: the story is about how our role is evolving. Engineers are now more accountable for framing problems clearly, exercising judgment, and owning the outcomes. AI can execute, but it doesn’t care, and it can’t make the nuanced calls we do.
We’re not being displaced; we’re being repositioned. And that shift in how we think about responsibility and collaboration is likely to matter far more than any single tool release.
That’s all folks!
DevNama is about tech research, product growth, developer productivity, and engineering excellence. Subscribe now to stay in the loop!
If you know someone who might like this read, consider sharing it with them:
Thanks for reading!
— Ibtihaaj