Reading List

The most recent articles from a list of feeds I subscribe to.

Optimize for optionality and build towards checkpoints

In a project management-themed Hackers Incorporated episode, Adam Wathan introduced derisking projects with save points. The entire episode is definitely worth your time, but that specific piece of advice has changed the way I work as a developer and make decisions as a project manager.

In practice, it has taught me to optimize for optionality, not efficiency.

The monolithic branch

To illustrate what it means, we’ll build a user account section as an example. The new account section will be shipped to users as one big launch. A user can…

  • Update their basic profile information
  • Add their address with Google Maps autocomplete
  • Upload a profile picture
  • Connect social login accounts
  • Reset their password

Let’s make an estimate:

Task                   Estimate
Basic information      6h
Address autocomplete   12h
Profile picture        4h
Social login           8h
Reset password         4h

Summing up: 6 + 12 + 4 + 8 + 4 = 34 hours, so let’s call it a 40-hour estimate. (Gotta add that margin!) We’ll check out a new feature/user-account branch, implement, and ship. The final logs in our time tracker:

Task                   Estimate   Actual
Basic information      6h         3h
Address autocomplete   12h        14h
Profile picture        4h         6h
Social login           8h         18h
Reset password         4h         2h

3 + 14 + 6 + 18 + 2 = 43. Not too far off our (padded) estimate! Some tasks were easier to implement than expected; some took a lot longer. But we didn’t ship 43 working hours later: it took 4 weeks to reach production. Why?

  • We realized our application needed to be approved by our social login provider, which took a week. After that, the implementation itself took a lot longer than expected.
  • When we started to implement the new reset password flow we found a few issues in the design, so the design team had to update the Figma file. The implementation went smoother than expected, but we had to wait for the changes.
  • Of course, the usual bugs and small feature requests that creep into our schedule pushed this project further down the road.

That users had to wait longer than estimated for the new account feature isn’t the problem. This post isn’t a manifesto for shipping incremental changes over big-bang launches; I’ll leave that decision to the product managers. Shoving everything into one branch is just plain easy, and because we weren’t going to ship anything individually, we were allowed to! What I care about is how it affects everything else. Because meanwhile…

  • Another developer needed address autocompletion on the shipping page. Now we have two implementations and need to trash one.
  • Someone else refactored the user model and related code, causing a bunch of merge conflicts throughout the 4 weeks of development.
  • Another timeline-like feature relies on profile pictures, but we wanted to wait until this branch was merged so we had access to the profile picture components that were already set up.

Checkpoints

Let’s treat each task as an individual project. It will probably take longer—at least on our timesheet. We need to create a separate branch for every task and can’t implement changes in bulk. Instead of one big feature/user-account merge, we created, reviewed, and merged a bunch of smaller branches over the weeks.

feature/user-basic-information-views
refactor/share-google-maps-api-credentials
feature/google-maps-autocomplete
feature/profile-picture
refactor/user-authentication-changes
feature/social-login
fix/profile-picture-upload-bug
feature/reset-password-update
feature/reset-password-update-2
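In git terms, the branch list above boils down to a loop of small branch-and-merge cycles. A minimal sketch of one such cycle (the repo and file names here are hypothetical, not from the original project):

```shell
# Sketch of the checkpoint workflow: each task lives on its own small
# branch and is merged back into main as soon as it is mergeable.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "init"

# Checkpoint 1: the smallest mergeable chunk of the account section
git checkout -q -b feature/user-basic-information-views
echo "<form>profile</form>" > basic-info.html
git add basic-info.html
git commit -q -m "Add basic information views"

# Merge it back right away instead of letting it age in a branch
git checkout -q main
git merge -q --no-ff -m "Merge basic information views" feature/user-basic-information-views
git log --oneline
```

Repeat for every checkpoint; each merge turns work in progress into a citizen of the main branch.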

But what have we gained from this approach?

  • If the address autocompletion had been merged earlier, we wouldn’t have ended up with two implementations. Even if it wasn’t shipped to production yet, it would have been a usable component in the codebase.
  • We wouldn’t have spent as much time on merge conflicts because frequent merges keep them small or non-existent.
  • We could have decided to postpone social login if we had known it was such a big investment. We didn’t, because it was too entangled in the monolithic branch.

Working towards mergeable chunks doesn’t only give the developer more optionality, it makes the entire organization more flexible.

Hindsight is 20/20. It could have gone differently: maybe working in the dark for a prolonged time wouldn’t have affected anyone. But you never know ahead of time. Priorities change, and you don’t control the external factors that force them to. If you were halfway through the monolithic user account branch when a critical bug or a competitor forced your hand, all of your code would be held hostage until you got back to it. Even if you don’t release it to your users, code is more valuable sitting in your main branch.

When code is in a feature branch, it doesn’t contribute to the rest of the codebase. This 40-hour project is an innocent example; we often fall into the same trap for 250-hour projects. That’s a month or two of work—a month or two of work-in-progress code dilly-dallying. Code rots over time. Work-in-progress code rots significantly faster.

To paraphrase Eliyahu M. Goldratt (I’m a sucker for the theory of constraints): large amounts of work in progress mask inefficiencies and bottlenecks in the production process. Reducing work in progress improves cash flow: as work in progress is converted into finished goods and sold more quickly, the cash conversion cycle accelerates. In our language: code only becomes an asset after it’s merged.

Action items

To make this plea actionable: treat each chunk of work as something that should be merged by the end of the week. That doesn’t mean it needs to be “done” or available to the end user, but it does need to become a citizen of The Codebase.

The hard part is finding your checkpoints. Identify the critical path. What can you strip from a feature while keeping it useful? Do that last. Take a step back every few hours and ask yourself what the least amount of work would be to make what you’re doing mergeable. Have a bug-free staging environment from day one and keep it that way, so frequent merges don’t affect quality (read about the broken window theory).

It takes a while to get used to, and will feel uncomfortable at first. But do this enough, and you’ll see checkpoints all over the place. Working in small chunks means individual tasks may take longer. But in the long term you and the team as a whole will see gains in flexibility, optionality, and efficiency. These benefits vastly outweigh the time you would win from a bulk discount.

Some desert wisdom to close:

Arrakis teaches the attitude of the knife—chopping off what’s incomplete and saying: “Now, it’s complete because it’s ended here.”

Paternity Leave: Month 1

Highlights

  • My wife and I became parents.
  • I realized that caring for a newborn takes more time than I expected.
  • I’m unsure what to do with my partially-finished Hacker News course.

Goal grades

At the start of each month, I declare what I’d like to accomplish. Here’s how I did against those goals:

Finish recording my course

  • Result: Baby arrived early, and I only recorded 20% of the material.
  • Grade: N/A

Recording the course took longer than I thought, and the baby arrived a few weeks earlier than we expected, so I didn’t get to all the material.

Xecast Episode 4: A Psychic Whiplash Week

Xe reflects on a week of intense ups and downs, navigating a whirlwind of job offers, contract work, and personal projects.

Reflection is cooked

Going Buildless

The year is 2005. You're blasting a pirated mp3 of "Feel Good Inc" and chugging vanilla coke while updating your website.

It’s just a simple change, so you log on via FTP, edit your style.css file, hit save - and reload the page to see your changes live.

Did that story resonate with you? Well then congrats A) you’re a nerd and B) you’re old enough to remember a time before bundlers, pipelines and build processes.

Now listen, I really don’t want to go back to doing live updates in production. That can get painful real fast. But I think it’s amazing when the files you see in your code editor are exactly the same files that are delivered to the browser. No compilation, no node process, no build step. Just edit, save, boom.

There’s something really satisfying about a buildless workflow. Brad Frost recently wrote about it in “raw-dogging websites”, while developing the (very groovy) site for Frostapalooza.

So, how far away are we from actually working without builds in HTML, CSS, and Javascript? The idea of “buildless” development isn’t new - but there have been some recent improvements that might get us closer. Let’s jump in.

The obvious tradeoff for a buildless workflow is performance. We use bundlers mostly to concatenate files for fewer network requests, and to avoid long dependency chains that cause "loading waterfalls". I think it's still worth considering, but take everything here with a grain of performance salt.

HTML

The main reason for a build process in HTML is composition. We don’t want to repeat the markup for things like headers, footers, etc. on every single page - so we keep these in separate files and stitch them together later.

Oddly enough, HTML is the one where native imports are still an unsolved problem. If you want to include a chunk of HTML in another template, your options are limited:

  • PHP or some other preprocessor language
  • server-side includes
  • frames?

There is no real standardized way to do this in just HTML, but Scott Jehl came up with this idea of using iframes and the onload event to essentially achieve HTML imports:

<iframe
    src="/includes/something.html"
    onload="this.before((this.contentDocument.body||this.contentDocument).children[0]);this.remove()"
></iframe>

Andy Bell then repackaged that technique as a neat web component. Finally Justin Fagnani took it even further with html-include-element, a web component that uses native fetch and can also render content into the shadow DOM.

For my own buildless experiment, I built a simplified version that replaces itself with the fetched content. It can be used like this:

<html-include src="./my-local-file.html"></html-include>

That comes pretty close to actual native HTML imports, even though it now has a Javascript dependency 😢.
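The post doesn’t show the component’s internals, but a self-replacing include element can be sketched in a few lines. This is my own assumption of how it might look, not the author’s actual implementation:

```html
<!-- Hypothetical sketch of a self-replacing <html-include> element.
     It fetches the document in its src attribute and swaps itself
     for the parsed markup. Browser-only code. -->
<script type="module">
    class HTMLInclude extends HTMLElement {
        async connectedCallback() {
            const src = this.getAttribute('src')
            if (!src) return
            try {
                const response = await fetch(src)
                if (!response.ok) throw new Error(`HTTP ${response.status}`)
                // parse the fetched HTML into an inert fragment
                const template = document.createElement('template')
                template.innerHTML = await response.text()
                // replace the element itself with the fetched content
                this.replaceWith(template.content)
            } catch (err) {
                console.error('html-include failed:', err)
            }
        }
    }
    customElements.define('html-include', HTMLInclude)
</script>
```

Using a `<template>` keeps the fetched markup inert until it’s inserted, and `replaceWith` removes the custom element from the final DOM entirely.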

Server-Side Enhancement

Right, so using web components works, but if you want to nest includes (fetch a piece of content that itself contains an html-include), you can run into waterfall situations again, and you might see things like layout shifts when it loads. Maybe progressive enhancement can help?

I’m hosting my experiment on Cloudflare Pages, and they offer the ability to write a “worker” script (very similar to a service worker) to interact with the platform.

It’s possible to use an HTMLRewriter in such a worker to intercept requests to the CDN and rewrite the response. So I can check whether the request is for a piece of HTML and, if so, look for html-include elements in there:

// worker.js
export default {
    async fetch(request, env) {
        const response = await env.ASSETS.fetch(request)
        const contentType = response.headers.get('Content-Type')

        if (!contentType || !contentType.startsWith('text/html')) {
            return response
        }

        const origin = new URL(request.url).origin
        const rewriter = new HTMLRewriter().on(
            'html-include',
            new IncludeElementHandler(origin)
        )

        return rewriter.transform(response)
    }
}

You can then define a custom handler for each html-include element it encounters. I made one that pretty much does the same thing as the web component, but server-side: it fetches the content defined in the src attribute and replaces the element with it.

// worker.js
class IncludeElementHandler {
    constructor(origin) {
        this.origin = origin
    }

    async element(element) {
        const src = element.getAttribute('src')
        if (src) {
            try {
                const content = await this.fetchContents(src)
                if (content) {
                    element.before(content, { html: true })
                    element.remove()
                }
            } catch (err) {
                console.error('could not replace element', err)
            }
        }
    }

    async fetchContents(src) {
        const url = new URL(src, this.origin).toString()
        const response = await fetch(url, {
            method: 'GET',
            headers: {
                'user-agent': 'cloudflare'
            }
        })
        const content = await response.text()
        return content
    }
}

This is a common concept known as Edge Side Includes (ESI), used to inject pieces of dynamic content into an otherwise static or cached response. By using it here, I can get the best of both worlds: a buildless setup in development with no layout shift in production.

Cloudflare Workers run at the edge, not on the client. But if your site isn’t hosted there, it should also be possible to use this approach in a regular service worker: once installed, the service worker could rewrite responses to stitch HTML imports into the content.

Maybe you could even cache pieces of HTML locally once they've been fetched? I don't know enough about service worker architecture to do this, but maybe someone else wants to give it a shot?

CSS

Historically, we’ve used CSS preprocessors or build pipelines to do a few things the language couldn’t do:

  1. variables
  2. selector nesting
  3. vendor prefixing
  4. bundling (combining partial files)

Well good news: we now have native support for variables and nesting, and prefixing is not really necessary anymore in evergreen browsers (except for a few properties). That leaves us with bundling again.
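For instance, variables and nesting now work natively, no preprocessor required (the selector and property values below are just illustrative):

```css
/* Native custom properties and native CSS nesting */
:root {
    --color-accent: #7b3ff2;
}

.card {
    border: 1px solid var(--color-accent);

    & .card-title {
        color: var(--color-accent);
    }

    &:hover {
        border-width: 2px;
    }
}
```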

CSS has had @import support for a long time - it’s trivial to include stylesheets in other stylesheets. It’s just … really frowned upon. 😅

Why? Damn performance waterfalls again. Nested levels of @import statements in a render-blocking stylesheet give web developers the creeps, and for good reason.

But what if we had a flat structure? If you had just one level of imports, wouldn’t HTTP/2 multiplexing take care of that, loading all these files in parallel?

Chris Ferdinandi ran some benchmark tests on precisely that and the numbers don’t look so bad.

So maybe we could link up a main stylesheet that contains the top-level imports of smaller files, split by concern? We could even use that approach to automatically assign cascade layers to them, like so:

/* main.css */
@layer default, layout, components, utils, theme;

@import 'reset.css' layer(default);
@import 'base.css' layer(default);
@import 'layout.css' layer(layout);
@import 'components.css' layer(components);
@import 'utils.css' layer(utils);
@import 'theme.css' layer(theme);

Design Tokens

Love your atomic styles? Instead of Tailwind, you can use something like Open Props to include a set of ready-made design tokens without a build step. They’ll be available in all other files as CSS variables.

You can pick-and-choose what you need (just get color tokens or easing curves) or use all of them at once. Open props is available on a CDN, so you can just do this in your main stylesheet:

/* main.css */
@import 'https://unpkg.com/open-props';
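Once imported, the tokens can be used like any other custom property. The selector below is illustrative; the variable names are actual Open Props tokens:

```css
/* Open Props tokens consumed as plain CSS variables */
.button {
    padding: var(--size-2) var(--size-3);
    border-radius: var(--radius-2);
    box-shadow: var(--shadow-2);
}
```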

Javascript

Javascript is the one where a build step usually does the most work. Stuff like:

  • transpiling (converting modern ES6 to cross-browser supported ES5)
  • typechecking (if you’re using TypeScript)
  • compiling JSX (or other non-standard syntactic sugars)
  • minification
  • bundling (again)

A buildless workflow can never replace all of that. But it may not have to! Transpiling, for example, is no longer necessary in modern browsers. As for bundling: ES Modules come with a built-in composition system, so any browser that understands module syntax…

<script src="/assets/js/main.js" type="module"></script>

…allows you to import other modules, and even lazy-load them dynamically:

// main.js
import './some/module.js'

if (document.querySelector('#app')) {
    import('./app.js')
}

The newest addition to the module system is Import Maps, which essentially let you define a JSON object that maps dependency names to a source location. That location can be an internal path or an external CDN like unpkg.

<head>
    <script type="importmap">
        {
            "imports": {
                "preact": "https://unpkg.com/htm/preact/standalone.module.js"
            }
        }
    </script>
</head>

Any Javascript on that page can then access these dependencies as if they were bundled with it, using the standard syntax: import { render } from 'preact'.
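For example, a module script on the same page could then do the following (this assumes the import map above; the htm/preact standalone build exports both html and render):

```html
<script type="module">
    // "preact" is a bare specifier, resolved through the import map above
    import { html, render } from 'preact'
    render(html`<h1>Hello, buildless world</h1>`, document.body)
</script>
```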

Conclusion

So, can we all ditch our build tools soon?

Probably not. I’d say for production-grade development, we’re not quite there yet. Performance tradeoffs are a big part of it, but there are lots of other small problems that you’d likely run into pretty soon once you hit a certain level of complexity.

For smaller sites or side projects though, I can imagine going the buildless route - just to see how far I can take it.

Funnily enough, many build tools advertise their superior “Developer Experience” (DX). For my money, there’s no better DX than shipping code straight to the browser and not having to worry about some cryptic node_modules error in between.

I’d love to see a future where we get that simplicity back.
