Calibrating Your Care

I've been a subscriber to the r/ExperiencedDevs subreddit for a few months, and I've noticed a common pattern.

Someone will post a question to the group, desperately seeking advice about some stressful situation at their job that they are clearly very worked up about. The specific concern or complaint could be just about anything, but the general feeling is that, from the author's point of view, everything is on fire and none of their peers or managers are doing anything about it.

Being around social media for many years trains one to expect posts like these to be opportunities for commenters to rage about the injustice on display and to pile on with their own indignation about incompetent managers and clueless coworkers.

Instead, many of the highest-voted comments on the post will be something to the effect of:

With all due respect, I think you care too much about this.

Or...

You seem to be taking this personally, and there's no need to do that.

I was definitely surprised at first by responses like this being highly upvoted, but I just kept seeing them from different people on different posts, with very little pushback from other commenters.

Early in my career, I took things at work very personally. I saw it as a badge of honor to care more than the people around me. I felt I should be mad if things were not being done the right way.

The unpalatable truth, in many cases, is that this thing that seems of grave importance to you...actually doesn't matter as much as you think it does. And the clueless coworkers and managers around you actually understand something important that you don't. In fact, you might be the clueless one.

It's easy to get caught in the false-consensus effect, where you assume your thoughts and feelings about a situation are the right ones. And it's easy to believe that your view of what's important is shared by everyone.

If your ground-level belief is that everything is on fire and no one cares, you're probably right that no one else cares. But you have to keep in mind that no one else sees the world through your eyes. You may be reading the room inaccurately. Your boss may have more context on the project than your daily experience gives you.

One could take the advice that "You care too much" as being told "You should be dead inside." I would say, just be prepared to lower the stakes in your mind. Be humble about your own role in the world of people around you. Get some mental distance from the situation and try to read the room.

Everybody gets to decide what they care about. Calibrate your care.

Don't Let an LLM Write Your Unit Tests

Writing unit tests is one of those tasks that software engineers often see as tedious. In fact, I've heard from fellow engineers that unit test generation is one of their favorite uses of LLMs. Let the AI write them! I've always been deeply uncomfortable with this use of LLMs, even though I'm all for reducing the tedious aspects of work in general.

Unit tests are not boilerplate code. If you've read a lot of bad unit tests, though, you may get that impression. If your only goal in writing unit tests is “Coverage number goes up!” then by all means let an LLM write your unit tests.

But I would argue that writing great unit tests means thinking carefully about what is valuable about the system under test. An LLM is great at generating reams of code in an instant, but it will never know anything about the value of the product your codebase represents.

If our goal instead is “Let’s ensure that our product retains its value as we add more code to our codebase,” then a human who understands the value of the product should write the tests.

Unit tests have yet another critical function, which is to provide runnable documentation about the system for other engineers to read. Want to know how some part of the codebase is supposed to work? Go read the unit tests; they represent the valuable features of the software that an engineer wanted to point out.
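To make the "runnable documentation" idea concrete, here's a minimal sketch in Python (the apply_discount function and its pricing rules are invented for illustration). Notice that each test name states a rule the business actually cares about, so reading the tests tells you how the feature is supposed to behave:

    import unittest

    def apply_discount(subtotal: float, loyalty_years: int) -> float:
        """Loyalty discount: 5% per year of membership, capped at 20%."""
        rate = min(0.05 * loyalty_years, 0.20)
        return round(subtotal * (1 - rate), 2)

    class LoyaltyDiscountRules(unittest.TestCase):
        # Each test documents one business rule by name.

        def test_new_customers_pay_full_price(self):
            self.assertEqual(apply_discount(100.00, loyalty_years=0), 100.00)

        def test_discount_grows_five_percent_per_year(self):
            self.assertEqual(apply_discount(100.00, loyalty_years=2), 90.00)

        def test_discount_never_exceeds_twenty_percent(self):
            # The cap is a deliberate product decision; this test guards it.
            self.assertEqual(apply_discount(100.00, loyalty_years=10), 80.00)

    if __name__ == "__main__":
        unittest.main()

An LLM could happily generate tests that exercise every branch of apply_discount, but it has no way of knowing that the 20% cap is sacred to the business. Deciding which behaviors are worth pinning down is the human part of the job.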

Code is written once, but read many times. And test coverage is just a number. If you want tests that matter, write them yourself.

Copilot Is Outsourcing 2.0

Decision-makers can remain irrational longer than you can remain solvent. (Or, in this context, remain employed.)

- Will Larson, "Career Advice in 2025"

"AI" coding tools like Copilot went from a mere curiosity to full-blown silver bullet territory in a few short years. I'm all for reducing the accidental complexity Fred Brooks wrote about in his classic essay on software engineering, but what’s driving me a little bit crazy lately is talk amongst people (mostly non-engineers) around the industry that Copilot and the like are magical productivity machines.

I'm not the only seasoned engineer to notice that this present moment feels eerily similar to the outsourcing craze of the 2000s. The dream of every non-technical executive who had a bunch of software engineers on their payroll was to lay them off and replace them with people in faraway countries with a lower standard of living who they could pay less to do the same job. Well...it didn't exactly work out that way. It turns out that software engineers are not, in fact, fungible code-typing machines. Outsourcing jobs to cheaper parts of the world is very much still a thing, but it took several years of reality colliding with the dream to bring expectations down to a reasonable level.

In 2025, the executive's dream is back, and bigger than ever. What's better than paying humans in other parts of the planet less to do the same job? We can get AI™ to do the job for even less!

The outsized hype around AI coding tools is premised on the idea that software engineers are primarily code-writers. If we could get machines to type the code, then we could save a ton of money, right? The reality, in my experience, is that typing code faster is not the bottleneck for any software engineer you would want working on your product. Writing code is a small percentage of what a software engineer actually does.

I remember a friend of mine from college asking me many years ago, when he learned that I was studying Computer Science, whether I was worried about destroying my hands from typing so much. He apparently imagined that the career I was training for was similar to a court stenographer's. In the software industry, we used to refer to engineers who produced huge volumes of repetitive code quickly as "code monkeys". This matches the outside perception of someone furiously hacking away at a keyboard, with typing speed driving the overall impression.

The thing is, a “code monkey” is replaceable by AI. If there's anything we've learned over the last 200 years or so, it's that machines can do repetitive, mindless tasks faster and more cheaply than humans. If the work of a software engineer really looked like:

  1. Do a Google search.
  2. Copy and paste code from your web browser into your code editor.
  3. Hit a button to commit to your team's code repository.

...then heck yeah, a machine can do those things way faster than a human.

I don't fear being replaced by an AI that can copy and paste code from the internet faster than I can. I do worry about non-technical executives firing me, or not hiring me, based on marketing, or on the belief that this caricature is what software engineers primarily do to create software that is valuable to an actual business.

Just as the executive's dream of outsourcing faded to a modest incremental reduction in the cost of developing software, I believe that so too will the dream of the AI engineer. I don't need to ask Fred Brooks what he would have thought (RIP).

Engineering-led Research Tickets

One of my optimistic beliefs about software engineers is that we're full of good ideas about how things around us could be improved. We're smart people, problem solvers. We read tech news, we're friends with engineers at other companies, we bring diverse career experiences to each job--in other words, we're at least vaguely aware of better ways of doing things that are happening in other places.

If we want to go beyond just talking about better ways of doing things, we need to empower engineers to capture their ideas for improvement in the backlog right next to ideas from the product management side. It's critical, of course, that items in the backlog generated by engineers are refined, estimated, and actually included in sprints on a regular basis.

And we want to go beyond the all-too-easy step of adding a one-line ticket at the end of our backlog that says “do cool thing X”. Since no one has the foggiest idea how we’ll actually do that cool thing, the backlog item just sits there, and no one ever puts it into a sprint, because it’s too nebulous and scary and we don’t know how long it will take or whether it’s even possible in a sprint's worth of time.

Make that “do cool thing X” a research ticket, and time-box the effort. Now we can put it into a future sprint, because at the end of the time box (which fits within one sprint) we will have a documented outcome that gets us closer to the goal than we were before, and we’ll have a much better idea of the actual steps to get there, captured as "real" tickets representing the work.

The output of a research ticket can be as simple as a comment saying “we shouldn’t do this, and here’s why.” Or it can be a series of new tickets the engineer creates, one for each step toward the ultimate goal...or toward a nice intermediate goal.
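For illustration, a research ticket along these lines might look something like this (the specifics are invented):

    Title: RESEARCH (time-boxed): How would we do cool thing X?
    Time box: 3 days, within a single sprint
    Definition of done: a documented outcome, meaning either
      - a comment explaining why we shouldn't pursue this, or
      - a set of follow-on tickets, one per concrete step toward
        the ultimate goal (or a nice intermediate goal)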

Even if we end up looking at the list of steps for follow-on work and think, “Geez! That’s a lot of damn work to get the outcome we want!” and we decide it’s not worth the effort, that's still a great outcome.


Software Engineer Career Levels at Companies You've Heard Of

I found a site called Progression.fyi that curates a list of "career frameworks": in other words, the level-progression systems of titles that a bunch of tech companies use for their software engineers. It's pretty interesting to peruse.

Several of these companies define what the often nebulous title of Staff Engineer means to them, which I find particularly compelling.

Here are a few companies you've probably heard of, with links to their pages where they define their career progressions for software engineers:


Heisenberg Effect in Software

One of the greatest services that software engineers provide is that we help people understand their own thought processes. We're transcribers of human thought, in a way. We have to understand a human process so intricately that we can write down instructions in another language that the dumbest, most literal being on earth (a computer) can follow.

We hold a mirror up to the user: We built this system in your image--does this look like you? Is the image distorted? If so, how? In the Agile philosophy, we hold this mirror up as early and often as we can.

Another metaphor I like comes from an old interview with Andy Hunt and Dave Thomas (of "The Pragmatic Programmer" fame), where they described a Heisenberg effect in the process of building software for people:

Software [has] a Heisenberg effect, where delivering the software changes the user's perception of the requirements.

Users need to see our interpretation of what they told us. That helps them understand themselves better, and then, we can understand them better.

Are You a Stable or Volatile?

I've written before on this blog about the different personality traits of software engineers, and how different traits balance each other on a team.

Some of the spectra I came up with were:

  • Dreamers ↔ Pragmatists
  • Big Picture ↔ Detail-Oriented
  • Move Fast & Break Things ↔ Slow & Methodical
  • Optimists ↔ Pessimists
  • Answerers ↔ Questioners

Recently I read a comment on Hacker News (which I can't find now) that mentioned a 2012 blog post on Rands In Repose, a blog I've been familiar with for years, yet somehow this post had escaped me.

The author, Michael Lopp, introduces the Stables and Volatiles divide, which feels to me like a meta-category that encompasses the ones I listed above.

You can probably guess just from the names, but Stables tend to like having a well-defined plan, are more risk-sensitive, predictable, and reliable. Volatiles are disruptive, like to blaze their own trail, dislike process, and have a strong bias for shipping something (even if it's ugly and unstable).

Lopp says:

Stables will feel like they’re endlessly babysitting and cleaning up Volatiles’ messes, while Volatiles will feel like the Stables’ lack of creativity and risk acceptance is holding back the company and innovation as a whole. Their perspectives, while divergent, are essential to a healthy business.

I believe a healthy company that wants to continue to grow and invent needs to equally invest in both their Stables and their Volatiles.

The Stable/Volatile concept feels very true to my experience, and it's one I'll keep with me. Assuming that everyone on a team is equally intelligent, competent, and well-intentioned, you need people of both the Stable and Volatile persuasions. Even though they annoy each other, a project staffed by only one type or the other is doomed.