Who Wrote This Crap?

I remember many years ago interviewing at a software consultancy where one of the consultants in the meeting was talking about a project they had going on. He was marveling at how fast the team of junior developers on the project was "cranking...out...code." I'll never forget the way he said it, and the look of awe on his face as he slowly shook his head back and forth.

This of course happened well before the advent of LLM assistants that can produce code at an astonishing speed. I wonder if that consultant's head would have literally exploded if I had shown him GitHub Copilot or Claude Code at the time.

Hell, I remember being gobsmacked as a junior engineer myself by CodeSmith, which was an early code generator product for C#. It could automatically spit out huge amounts of boilerplate code that an engineer would typically write by hand. I heard about it from a coworker at my very first job in the software industry, and I remember how unnerved I felt, thinking about a program that can automatically write code (isn't that why I'm here?).

The next code generator that came into my life was Ruby on Rails, which I first tried a couple years later. By that time, I had experienced what it was like to write the most tedious code in most applications, which was code wiring up database tables to web forms, and all the plumbing to pipe the data through each layer. It was a mind-numbing yet essential task, and almost every application needed it. Rails came at me like a breath of fresh air. At this point, I was like, oh yeah, this rules. I hated writing all that data access CRUD, and Rails made it largely disappear. Code generation clicked for me.

In the .NET world, where I've mostly worked for my career, we had a series of object-relational mappers that could generate classes from your database schema, meaning engineers had to write far less of the code that shuttles data from a web application to a database and back. To name a few, we had NHibernate, SubSonic, LINQ to SQL, and then Entity Framework. I welcomed these tools with open arms, but I do remember pushback at the time from more senior people at my companies who were accustomed to writing this kind of code by hand and didn't trust what the tools were doing under the covers.

The common denominator in my early experience with code generation was eliminating the manual work of writing a lot of extremely predictable, relatively dumb, utterly tedious code that was also essential. This usually took the form of data access code that moved data between layers of a web application, from the front-end to the database, where you had SQL tables that corresponded to C# classes, which corresponded to web forms. Classic CRUD. It's usually highly predictable stuff, and ripe for code generation. Yes, there were times when the tooling would break down and an engineer would have to get under the covers and debug an edge case the tool couldn't handle automatically, but in my experience this was relatively rare.

What I'm seeing in the industry now with AI coding assistants like Copilot feels fundamentally different to me. I've witnessed fellow engineers in recent years generating code more akin to "business logic". This is the kind of code that's specific to the domain of the company and not transferable from codebase to codebase. I've also seen fellow engineers letting AI write the code for areas of the codebase that they don't understand well. For example, they may not know how to do a certain thing with React, so they describe what they're trying to do to Copilot, which then generates code that the engineer accepts without understanding, as long as it seems to work.

What's fundamentally different between the scenarios I just described and the code generation scenarios of yore is that the pre-AI code generators were deterministic, and they were applied only to non-domain-specific logic.

What happens when a codebase is peppered with business logic that no human working at the company wrote, and hence cannot definitively explain? I have already seen firsthand cases where bugs in important processes went undiscovered until the AI-generated code had been in production for weeks. The engineer committing the code did not know that what Copilot generated did not match the logic they intended. Maybe I can write another whole blog post about this topic, but I'll say briefly here that in many scenarios, increasing the speed at which the code is produced is less important than a human understanding what it does at a granular level. In other words, speed of coding is not a bottleneck.

Another consideration is that AI code generators like Copilot are non-deterministic. As in, you can run them multiple times with the same input and they will produce different results. Going back to my examples before of pre-AI tools, the code generation features of CodeSmith and Entity Framework are deterministic. You can run them multiple times with the same input, and they will give you the same output every time. This is because a human software engineer wrote the code behind those tools, and the rules are directly and unambiguously traceable back to the lines of code a human wrote while designing them.
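The contrast is easy to see in code. Here's a toy sketch of the kind of deterministic, template-style generation those older tools performed; the schema shape and emitted class are purely illustrative, not the actual output of CodeSmith or Entity Framework:

```typescript
// A deterministic code generator is just a pure function from
// schema to source text: same input, same output, every time.

type Column = { name: string; tsType: string };
type Table = { name: string; columns: Column[] };

function generateEntityClass(table: Table): string {
  const props = table.columns
    .map((c) => `  ${c.name}: ${c.tsType};`)
    .join("\n");
  return `class ${table.name} {\n${props}\n}`;
}

const users: Table = {
  name: "User",
  columns: [
    { name: "id", tsType: "number" },
    { name: "email", tsType: "string" },
  ],
};

console.log(generateEntityClass(users));
// class User {
//   id: number;
//   email: string;
// }
```

Run it a thousand times and you get the same class a thousand times, and every rule in the output traces back to a line a human wrote in the generator. An LLM assistant offers no such guarantee.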

I can't help but wonder if, as an industry, we're hurtling toward a future where many production codebases will be littered with code that no human at the company understands or could definitively explain, not years later, but even weeks later. My personal relationship to AI-generated code as a working software engineer is that I will not commit code that I cannot explain. And when doing code reviews, I cannot accept the explanation that code included in the pull request was AI-generated and hence the submitter does not know what it does.

I also have to wonder if the AI slop era we're all in at the moment says something about the illusory nature of quality. Maybe quality was just an unintended side effect of manual coding that business leaders never really cared much about in the first place. In my multi-decade career in the software industry, the emergence of Copilot represents the first time I've ever experienced non-technical people mandating the use of a particular tool to software engineers. It seems that the idea of faster code production was so mouthwatering that quality flew out the window within seconds.

Who wrote this crap? Maybe the answer never mattered.


Don't Mess With Delivery

If your team has consistent, reliable delivery of working software to production, you're crushing it. Don't mess with it. It's astonishing how many teams in real-world software development can't do this. I've witnessed it over and over and over in my career.

If you have a backlog, sprint planning, a dependable QA function (even if manual and/or slow), and scripted deployments on-demand, you are very fortunate.

Of course, we don't want our engineers working in a feature factory, so we make sure they have input into what they're building and they know why they're building it. That's the motivation to keep delivering consistently. 

So how do teams "mess with" delivery? For teams that already have consistent delivery, the way they tend to mess it up is by prioritizing speed over consistency.

I think that continuous deployment is an incredible technical capability to possess. It's amazing to have a fully automated pipeline capable of taking a code commit and moving it fully out to customers without manual intervention. This is huge for production bugs and emergency fixes.

My belief is that feature work should prioritize ease of communication over raw speed of deployment. Just because it's technically possible to deploy new features every day doesn't mean it's actually beneficial to users. I've seen firsthand the chaos that results when new features appear in front of users without people beyond the immediate team knowing about them.

Every organization wants to "go faster", but I believe once you're touching production, communication matters more than speed. If you're consistently getting new features in front of users every two weeks, you're already doing so well. Optimize something else.

If you've got a reliable, dependable, consistent pipeline of new features to real users on a reasonable sprint duration, it's not worth the communication overhead of going out-of-band. Please don't mess with delivery.

User Advocate vs. Front-End Engineer

I really enjoy doing UI work: being close to the user's mind, thinking deeply about interaction design, and how small changes can massively improve someone's day. But marketing oneself as a “Front-End Engineer” in the current landscape requires dogged attention to fashion trends, and I'm on Team Evergreen, baby.

Early in my career in web development, I loved reading Jakob Nielsen and Steve Krug, but at some point I had to accept that maintaining intimate awareness of the specifics of JavaScript Framework Du Jour was not sustainable for me. Call me "full-stack", that's fine.

Front-end is the area most ripe for résumé-driven engineering. Default skepticism is always warranted for new technologies, but in front-end it is truly a necessity to avoid lighting your company's money on fire.

I found a rather cathartic post by Marco Rogers on Hacker News recently called The Frontend Treadmill. I fully agree with Marco's take: 

If you are building a product that you hope has longevity, your frontend framework is the least interesting technical decision for you to make. And all of the time you spend arguing about it is wasted energy.

I would not necessarily describe myself as a React fan, but I am very happy that it feels like we finally landed on a framework that can survive for several years, with wide adoption by startups and enterprises alike. Let's go, Lindy effect. Maybe React can do for frameworks what jQuery did for libraries.

Above All Else, Sustainability

From the principles of the Agile Manifesto:

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

Extreme measures are always doomed beyond a limited time horizon. Whether we're talking about diets or software development practices, without sustainability you have nothing.

Initiatives to increase productivity, improve quality, lower costs, or any other "good thing" simply don't matter if they're not sustainable.

A common topic in Agile circles is "sustainable pace", which usually focuses on the futility of working overtime, typically over 40 hours a week. I would argue that sustainable pace is about more than just the number of hours clocked in a work day or work week. 

Is the work emotionally sustainable? Do people hate working here? Do people end the work week feeling a sense of accomplishment? Do people feel like their leaders have reasonable expectations? Are they constantly battling reality?

Sometimes the degree to which an enforcer must continuously apply pressure to get people to follow a process indicates how sustainable the process is. Powering through is not a sign of discipline, it’s a sign of delusion.

Ignorance of reality is unsustainable. Coercion is unsustainable. Surveillance is unsustainable.

Can I and the people around me keep this up forever? If not, it's time to pause and reflect. Above all else, sustainability.

Agile as Progressive Enhancement

When I was coming up as a young web developer, the concept of progressive enhancement was kind of at the hipster vanguard of web development. This was the phase of the web when JavaScript and CSS feature parity amongst browsers could not be taken for granted the way it is today.

The idea of progressive enhancement was that you started by making your web application work on an essential informational level with semantic markup, then you layered on presentational and dynamic niceties with CSS and JavaScript respectively, if available to the user agent. At each layer, or each stage, you left the user with something usable and valuable that stood on its own without the other stuff.
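The layering can be sketched as a tiny pure function. The capability flags and layer names here are my own, purely illustrative shorthand, not any real API:

```typescript
// A toy sketch of progressive enhancement as layers over a baseline.

type Capabilities = { css: boolean; js: boolean };

// Start from semantic markup, which always stands on its own,
// then layer on presentation and behavior only where supported.
function layersFor(caps: Capabilities): string[] {
  const layers = ["semantic-html"];
  if (caps.css) layers.push("presentation");
  if (caps.js) layers.push("dynamic-behavior");
  return layers;
}

// Even the most limited user agent still gets something usable:
console.log(layersFor({ css: false, js: false })); // value: ["semantic-html"]
```

The key property is that removing an upper layer never breaks the layers beneath it; the baseline is valuable on its own.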

Learning about the concept of progressive enhancement early in my career colored my interpretation of the Agile philosophy forever. The iterative nature of Agile methodologies, with "Working software [as] the primary measure of progress" is essential to how I approach making software for users to this day. In my mind's eye, I still visualize iteration as progressive enhancement. We start with the essential core of value, and ship that. If the core of the idea resonates with real users, then we iteratively layer on enhancements to the core, shipping to users at each stage.

In order to work this way, you have to accept that quality and feature completeness are not the same thing. At each layer, we're delivering a scope with high quality always, but with no ambition of completeness. We're delivering shippable increments of software as layers over a core idea of value that we've already shipped. We build out our product backlog as all the nice things we could layer over the core, as we think of them, but they are not essential. One of the amazing things about working this way is that users are delighted by it. The features just keep getting nicer before their eyes.

Legibility

I recently came across a blog post called Seeing Like a Software Company by Sean Goedecke on Hacker News. The post hooked me right away by introducing to my vocabulary the term "legibility": 

By “legible”, I mean work that is predictable, well-estimated, has a paper trail, and doesn’t depend on any contingent factors (like the availability of specific people). Quarterly planning, OKRs, and Jira all exist to make work legible. Illegible work is everything else: asking for and giving favors, using tacit knowledge that isn’t or can’t be written down, fitting in unscheduled changes, and drawing on interpersonal relationships.

One of the advantages that small companies have is that they can achieve greater speed compared to a large company by eschewing legibility. When you know everyone else in the company by name, or maybe you all work in the same room together, you don't need standardized processes to go about your work; in fact, they just slow you down. Work happens through what the author calls illegible backchannels:

An engineer on team A reaches out to an engineer on team B asking “hey, can you make this one-line change for me”. That engineer on team B then does it immediately, maybe creating a ticket, maybe not. Then it’s done! This works great, but it’s illegible because the company can’t expect it or plan for it - it relies on the interpersonal relationships between engineers on different teams, which are very difficult to quantify.

One of the struggles that small companies face as they grow into larger companies is that they can't get by anymore without legibility. Maybe they've grown to hundreds of employees or they're working with people distributed across multiple timezones.

Part of growing up as a company is accepting that the loss of speed in individual tasks is outweighed by the predictability of process. As the author writes:

The processes that slow engineers down are the same processes that make their work legible to the rest of the company. And that legibility (in dollar terms) is more valuable than being able to produce software more efficiently.

I feel like this is a hard concept to sell to people who are used to working in small companies. In the long run, going slower is actually better for the company. Writing things down and organizing the work in a more structured way reduces the chaos left in the wake of localized, marginal speed-ups.

The author concedes that illegible work is necessary in small doses even in large companies. In cases of show-stopping production bugs, for example, large companies create temporary sanctioned zones of illegibility in which they gather together a strike team of experienced people to swarm on a critical issue until it's resolved. They then return to legibility.

Legibility is a massively useful concept that I will carry with me. It explains so much about how companies function internally, in often counterintuitive ways.

The Financial Architecture of Software

Conway's Law is foundational to software engineering. It says:

Organizations which design systems...are constrained to produce designs which are copies of the communication structures of these organizations.

If you've ever had any job in the software industry, you've almost certainly seen this play out in the architecture of your codebase. Group A works on Thing X and Group B works on Thing Y, so Thing X is in one [project, repository, web service] and Thing Y is in another [project, repository, web service]. You can read the organizational structure in the structure of the technology. It helps to make sense of why things are the way they are. The silos of communication are reflected in the solutions.

But I recently came across this interview on InfoQ with Ian Miell, who I've mentioned on this blog before. He has interesting things to say about the sociology of software development, and in this interview he talks about a book he's writing about the financial architecture of software:

The material aspects of the world ultimately determine the software we build and specifically the decisions we make at a grand scale about the software that we build. 

I remember being frustrated and confused as a junior engineer, not understanding why certain things were prioritized in a company and others weren't, why obvious best practices from the industry at large didn't get traction in certain places, why what seem like universal goods such as "quality" didn't seem to matter, or why values spoken about at the company level seemed unevenly distributed and applied.

Ian explains...

When I have a code base, I want to make sure all the tabs are correctly aligned before I do anything else that might take time, or I wanted to make sure the naming paths are consistent. But you get larger scale problems where engineers say, "No, we have to fix this now because it's a huge thing". And you say, "Well, I get it, but that's going to take us a month and in that month people are going to be looking at me as the leader of this team and saying, 'What have you delivered?' And I can try and explain to them what I've done, but it's not going to apply".

Hard work is not always rewarded. It depends on who sees the work, what their individual priorities are, and how much money they control. My advice to my younger self would be to pay close attention to what your specific boss thinks is important, and show yourself working on those things. The next level is understanding what your boss's boss thinks is important and by what criteria they're judging your boss; then you'll really understand what kind of work is rewarded.

As Ian describes...

Who owns the budget is a fascinating question. ...I start the book with a story about a very successful engineer, I call her Jan. She becomes a leader of a small group of engineers and she builds a platform in her spare time with those engineers, like a Kubernetes platform. And she does it within a central IT function and she builds it and she takes it, she shows it to her manager and the manager's like, "Well, this doesn't help me hit my goals for the year. This doesn't help me get a promotion, this doesn't help me get a pay rise". And she's nonplussed, like, "I'm trying to deliver faster, cheaper, better. That's what the company is, we're supposed to be agile. I'm trying to help the whole business".

And it might sound naive, but a lot of people think this way. I certainly did. Where you're thinking, well, I'm doing what's good for the company, but the system is not structured in that way. It's not designed to be run that way because people are complicated and their interactions are complicated, so you silo things and you have different budgets and so on, and the end result is that you are slaving away at the central IT function, trying to make the business better as a whole, but it's not accounted for.

Understanding where the money comes from sheds light on the technology decisions, where quality matters a lot and where it's almost an afterthought, and why different teams are more scrutinized than others. In the end, follow the money.