Reading List
The most recent articles from a list of feeds I subscribe to.
Neither artificial, nor intelligent
Large Language Models (LLMs) and tools based on them (like ChatGPT) are all that some in tech talk about today. I struggle with the optimism I see from businesses, the haphazard deployment of these systems and the seemingly ever-expanding boundaries of what we are prepared to call “artificially intelligent”. They bring interesting capabilities, sure, but arguably they are neither artificial, nor intelligent.
Let me start with where I'm coming from. In 2008 I started studying for a degree called Cognitive Artificial Intelligence. Of a number of Dutch universities offering degrees in artificial intelligence, mine was the one most focused on psychology and philosophy. Others were more geared towards linguistics and/or computer science. We had courses all across these different fields, as well as mathematics. This made it super interesting. Case in point: I learned then that artificial intelligence, as a field, isn't easily defined. It comprises a lot of things. It attracts people with a wide range of interests. And it has all sorts of applications, from physical robots to neural networks and natural language processing. Towards the end of the first year, I realised I had different interests (primarily in philosophy) and skills (not a programmer at heart, or a mathematician, and my grades and a BSA agreed). I ended up switching to philosophy full time, specialised in AI, language and ethics (of course, philosophy is great for generalists, too).
Fast-forward almost 15 years… my knowledge at this point is a bit rusty, but I am no less interested in the subject. Today, there is a lot of hype around Language Models (LMs), a specific technique in the field of “artificial intelligence”, which Emily Bender and colleagues define as ‘systems trained on string prediction tasks’ in their paper ‘On the dangers of stochastic parrots: can language models be too big?’ (one of the co-authors was Timnit Gebru, who had to leave her AI ethics position at Google over it). Hype isn't new in tech, and many recognise the patterns in vague and overly optimistic thought leadership (‘$thing is a bit like when the printing press was invented’, ‘if you don't pivot your business to $thing ASAP, you'll miss out’). Beyond the hype, it's essential to calm down and understand two things: do LLMs actually constitute AI, and what sort of downsides could they pose to people?
Artificial intelligence, in one of its earliest definitions, is the study of machines that are, in language, indistinguishable from humans. In 1950, Alan Turing famously proposed an imitation game as a test for this indistinguishability. More generally, AIs are systems that think or act like humans, or that think or act rationally. According to many, including OpenAI, the company behind ChatGPT and Whisper, large language models are AI. But that's a company: a non-profit with a for-profit subsidiary—of course they would say that.
Not artificial
In one sense, “artificial” in “AI” means non-human. And yes, of course, LLMs are non-human. But they aren't artificial in the sense that their knowledge has clear, non-artificial origins: the input data that they are trained with.
OpenAI stopped openly disclosing where they get their data after GPT-3 (how Orwellian). But it is clear that they gather data from all over the public web, places like Reddit and Wikipedia. Earlier, they used filtered data from Common Crawl.
First, there are long-term consequences for quality. If this tooling results in large-scale flooding of the web with AI-generated content, and the models keep being trained on the contents of that same web, it could result in a “misinformation shitshow”. The web also seems like a source that could dry up once people stop asking questions there and interact with ChatGPT directly instead.
Second, it seems questionable to build off the fruits of other people's work. I don't mean your employees' work, that's just capitalism—I mean the work of other people whose data you scrape without their permission. It was controversial when search engines took the work of journalists; this is that on steroids.
Not intelligent
What about intelligence? Does it make sense to call LLMs and the tools based on them intelligent?
Alan Turing
Alan Turing suggested (again, in 1950) that machines can be said to think if they manage to trick humans such that ‘an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning’. So maybe he would have regarded ChatGPT as intelligent? I guess someone familiar with ChatGPT's weaknesses could easily ask the right questions and identify it as non-human within minutes. But maybe it's good enough already to fool average interrogators? And a web flooded with LLM-generated content would probably fool (and annoy) us all.
Still, I don't think we can call bots that use LLMs intelligent, because they lack intentions, values and a sense of the world. The sentences systems like ChatGPT generate today merely do a very good job at pretending.
The Stochastic Parrots paper (SP) explains why pretending works:
our perception of natural language text, regardless of how it was generated, is mediated by our own linguistic competence
(SP, 616)
We merely interpret LLMs as coherent, meaningful and intentional, but it's really an illusion:
an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.
(SP, 617)
They can make it seem like you are in a discussion with an intelligent person, but let's be real, you aren't. The system's replies to your chat prompt aren't the result of understanding or learning (even if the technical term for the process of training these models is ‘deep learning’).
But GPT-4 can pass standardised exams! Isn't that intelligent? Arvind Narayanan and Sayash Kapoor explain that the challenge posed by standardised exams happens to be one that large language models are naturally good at:
professional exams, especially the bar exam, notoriously overemphasize subject-matter knowledge and underemphasize real-world skills, which are far harder to measure in a standardized, computer-administered way
Still great innovation?
I am not too sure. I don't want to spoil any party or take away useful tools from people, but I am pretty worried about large scale commercial adoption of LLMs for content creation. It's not just that people can now more easily flood the web with content they don't care about just to increase their own search engine positions. Or that the biases in the real world can now propagate and replicate faster with less scrutiny (see SP 613, which shows how this works and suggests more investment in curating and documenting training data). Or that criminals use LLMs to commit crimes. Or that people may use it for medical advice and the advice is incorrect. In Taxonomy of Risks Posed by Language Models, 21 risks are identified. It's lots of things like that, where the balance is off between what's useful, meaningful, sensible and ethical for all on the one hand, and what can generate money for the few on the other. Yes, both sides of that balance can exist at the same time, but money often impacts decisions.
And that, lastly, can lead to increased inequity. Monetarily, e.g. what if your doctor's clinic offers consults with an AI by default, but you can press 1 to pay €50 to speak to a human (as Felienne Hermans warned Volkskrant readers last week)? And also in terms of the effect of computing on climate change: most large language models benefit those who have the most, while their effect (on climate change) threatens marginalised communities (see SP 612).
Wrapping up
I am generally very excited about applications of AI at large, like in cancer diagnosis, machine translation, maps and accessibility. And even about the capabilities that LLMs and tools based on them bring. This is a field that can genuinely make the world better in many ways. But it's super important to look beyond the hype and into the pitfalls. As my feeds naturally contain a lot of optimistic technologists, I have consciously added many more critical journalists, scientists and thinkers to them. If this piques your interest, one place to start could be the Distributed AI Research Institute (DAIR) on Mastodon. I also recommend the Stochastic Parrots paper (and/or the NYMag feature on Emily Bender's work). If you have any recommended reading or watching, please do toot or email.
Originally posted as Neither artificial, nor intelligent on Hidde's blog.
My ideal accessible components resource is holistic, well tested and easy to use
It would be easier to have an accessible web if all we did with it was publish documents. Blobs of text, images here and there. But modern sites (or “applications”) do a lot more than documents. Often for better, sometimes for worse. To improve accessibility of the web as it is today, I feel we dearly need accessibility guidance that is holistic, well tested and easy to use.
Web sites or applications often come with menus, tooltips, dialogs, drag and drop, tabs and emoji pickers. Some say this is unnecessary and argue interfaces must be simpler. Sometimes they are right—a lot of interfaces can benefit from being simpler. Yet, complex UI components are often genuinely helpful to users. Like, it's good if not all information is always visible at the same time. Hiding less important stuff in less prominent places can help hierarchy. It can be good if, instead of selecting a city from a <select> of thousands, there's some comboboxing going on. ‘No that's an input!’, you say… yeah, maybe, but it could be important for the business to have a city chosen out of a predefined list of options. And that can give certainties that are beneficial to users too.
So, complex UI patterns (widgets, components, etc) are sometimes needed. A lot of sites have them, for reasons ranging from good to bad. A lot of organisations hand-build these components as part of a design system effort, to make their investments reusable. Reusable components can have a huge impact on accessibility. Because reuse means repetition: of good patterns, bad patterns, accessible patterns, inaccessible patterns… the stakes are high!
Over the years I've seen and heard a lot of developers talk about building components. I heard them speak about the developer experience of building these components. When I was working on guidance at WAI, I listened extra carefully. From what I gathered, many actually want to truly invest in the quality of their components, including the accessibility of those components. But the official guidance they can find is lacking: WCAG's supporting documents are often unclear (with reading levels, for what they are worth, up to professor grade), impractical (a lot more words than concrete advice) and outdated (eg still recommending the title attribute). WCAG still works best for the web as a series of documents.
In other words, to better match the realities of people making websites, I feel the W3C's accessibility guidance should be more holistic, well-tested and easy to use.
Holistic
The closest thing to a guide on building accessible components is the ARIA Authoring Practices Guide (“APG”). It's a super useful resource for finding out how to build components with ARIA, but it isn't “holistic”.
By holistic advice, I mean advice that considers the reader within their entire environment as a developer. Advice that builds on the best that can be done with HTML, CSS, JavaScript, WAI-ARIA and SVG (technologies websites commonly rely upon). The WAI-ARIA Authoring Practices Guide isn't holistic in that sense: it focuses on patterns built with ARIA only. From developer-who-wants-advice or user-who-needs-a-good-experience perspectives, that's not ideal. As accessibility specialists learn again and again, WAI-ARIA isn't what makes components accessible, it's merely one of the languages (ok, an ontology) that can help us build accessibly (see also: the first rule of ARIA). I don't mean to say any of these specifications is inherently better, I mean the most accessible component is always some combination of these specs.
If that's the case, you may wonder, why does APG focus on ARIA only? There's no bad intent here… I think it is simply because it is written by a subgroup of the ARIA Working Group. That Working Group specifies ARIA and it has a deliverable to show how to use it. This makes good sense. But again, it isn't ideal if the intention is guidance that helps developers build the very best for users with disabilities (which I think is the goal we should really want to optimise for). Nobody seems to have that as a deliverable.
There is a W3C/WAI resource that is holistic in the way I described: WAI Tutorials. Shoutout to the great work of Eric Eggert and EOWG! It's a good resource, but it did not get expanded or updated much after the initial release.
There are resources outside of W3C/WAI that I can recommend, such as:
- Sarah Higley's posts on tooltips and custom selects
- Sara Soueidan on icon buttons and custom checkboxes/radiobuttons
- Accessible Components by Scott O'Hara
- Adrian Roselli's “Under-Engineered” series
- A complete guide to accessible front-end components by Vitaly Friedman
Well tested
Web accessibility ultimately is about whether people can use a thing. Some friends in the accessibility community like to emphasise that it's about people, not meeting standards. I get that sentiment, but there's a false dichotomy at play: making the web more usable for people with disabilities is a central goal for accessibility standards at organisations like W3C/WAI. They are a means to the same end. Either way, web accessibility is all about people. So, yes, user testing matters. We've got to find out which specific patterns are most likely to work well for people.
While it's essential and beneficial to involve people with disabilities in user tests, there can be challenges:
- just like one single user doesn't speak for all users, one person with a disability doesn't speak for everyone with that disability; you'll want larger samples;
- there are many disabilities; sometimes people with different disabilities could even have “competing” needs. For instance, high contrast benefits some, but could constitute a barrier to others;
- it may be more difficult to recruit users with disabilities and ensure the group you recruit for a given project is the right group in terms of what you want to test.
My friend Peter has documented some of his approach to testing with disabled users and WAI itself has a great page on involving users with disabilities too. Others have blogged about their user testing efforts: Fable tested Loom's video clipping functionality and the GOV.UK Design System team documented what they found testing conditionally revealing questions. These posts show there is a lot of nuance in determining if a complex pattern is accessible. But they also show this nuance can be described and help inform people.
As an aside: another aspect of testing guidance for accessible components is how well they perform across different browsers and assistive technologies. Bocoup, Meta and others are doing great work in this area in the ARIA-AT effort: they define tests (over a thousand drafted), but also pioneer ways to test assistive technologies automatically. I believe the plan is (was?) to show the results of this project next to code examples, which is great.
Easy to use
‘Developer experience’ is a phrase sometimes frowned upon, especially when contrasted with user experience. If we had to choose between them, of course, user experience would be the first choice. But the choice isn't binary like that. If the stars are aligned, one can lead to the other. Companies that make developer-focused products (like CMSes, version control, authentication, payment providers, databases etc) usually have a dedicated “developer experience” department that ensures developers can use the product well. Among other things, they try to reduce friction.
Friction could result in income loss for these companies. If the tool constantly displays unhelpful error messages, has code examples that are tricky to follow or documentation that is out of date, developers might look for a different tool. And the opposite is true too: if this payment provider makes it super easy to implement secure payments, a developer will likely want to use it again.
Friction could also cause a product to be “used wrong”. For instance, what if large groups of developers easily get started with this cool new authentication product, but the docs are so unclear that they miss important security steps? Or, in a CI/CD product, developers manage to get started quickly, but a majority does so in a way that uses way too many resources, because the example projects do? If the company then charges unexpected overages, it may upset customers; if it doesn't, it could end up costing the company too much.
I'll admit it is a bit of a stretch: what if both of these frictions are at play with accessibility standards? And instead of looking for different standards, developers choose the “easier” route of inaccessibility? This could happen in places where leadership or organisational procedures don't enforce accessibility. They'll get away with it. It could also happen in places that do have a mature accessibility program or even a handful of accessibility-minded individual developers. If the most accessible solution isn't easy to learn (e.g. they get lost between different kinds of guidance), it could still result in inaccessibility, even with the best intentions.
I believe effective accessibility guidance answers “how easy will this make it for people to get it right?”, and probably also “how will this prevent people from taking the wrong turn?”.
Some examples of what could constitute good developer experience (dreaming aloud here):
- easy to copy examples that closely match real-world components people are building, like privacy setting banners and comboboxes (just to name two examples of major barriers I saw blind users encounter in a user test)
- full example projects for some popular frameworks and languages, eg here's how to build an accessible blog with Next.js, or how to report errors in a form in vanilla JS + 5 popular frameworks
- a specific focus on eliminating inconsistencies in documentation (“boring” work, maybe, but inconsistencies inevitably creep into any set of content—the more inconsistencies are avoided, the more effective documentation is)
While these examples are developer focused, the same kind of focus could be applied to other roles like quality assurance and design (see also Roles involved in accessibility, which is a great document, though still in draft status).
I suspect many people with disabilities among us have a mental list of the accessibility barriers they encounter most often. Many who do regular accessibility audits will have a list of things they find often. Many developers will have a list of components they are unsure how to build accessibly. Et cetera, et cetera. If I had a magic wand, I would try and put all of these people in one room.
In summary
In this post, I've tried to lay out what my ideal accessibility guidance looks like. The gist of it is: make it easier for people to get accessibility right. And the opposite, too: make it harder to get it wrong. I feel the closer we can get to that, the more accessible interfaces can become. I think this is the way to go: guidance that is holistic, well-tested and optimised for developer experience (or, more broadly, the experience of anyone touching web projects in a way that can make or break accessibility).
And to be clear, this is not an invite for people to care less or circumvent the responsibilities and duties they have. Accessibility needs to be part of one's MVP. But it is an invite for people to rethink our collective efforts in improving web accessibility: WCAG 3.0 may not be it; the world may benefit more from better guidance than from better testing methodologies.
My expectations are probably a tad unrealistic. I probably missed my chance to try and materialise them more when I worked for WAI. Yet, I hope the perspective is helpful to some. Get in touch if you have thoughts!
Originally posted as My ideal accessible components resource is holistic, well tested and easy to use on Hidde's blog.
200
It's meta blogging time, because this is my 200th post. Vanity metrics, I know, but sometimes you've got to celebrate milestones. Even the merely numerical ones. When I wrote my 150th post two years ago, I described why I blog and what about. This time, I want to focus on how I do it and look at the subjects of the last 50.
A lot of posts start on my phone
There are plenty of tools for writing. Fancy physical notebooks, favourite text editors, and what not. I do use both, but usually I start a post on my phone. It somehow is my least distracting device, and very portable. My setup is that I have iA Writer, which I love for its radical simplicity and advanced typography, on my phone and computers. I've got a couple of folders that are synced through iCloud, including for posts and talk ideas. They are literally folders—iA Writer just picks them up and displays them as folders in their UI. When I start a post, I create a file. Sometimes it stays around for days, weeks or months, sometimes I finish a draft in half an hour.
When the post is almost ready, I'll usually do another round on a computer. This is essential if the post needs images or code examples, sometimes I can skip it if a post is just text. This is usually also the time when I start adding it into my website and reach out to people for feedback, if it's the kind of post that very much needs review.
Having my posts exist in a cloud service has been a game changer, because it means I can blog when inspiration strikes. When I used to go swimming, I would sometimes think up a blog post while in the water and write up a quick structure or first draft in the changing room or the cafe nearby. Sometimes I revise a draft when I sit on a tram or bus, or add some more examples when I arrive early for an appointment. Sometimes I return to it on a computer, then a phone, then a tablet.
As for the format: I use Markdown processed by Eleventy. I am aware of the disadvantages, but this is a one-person-who-is-a-developer-and-very-comfy-in-a-text-editor blog kind of use case. Still, I am pondering re-introducing a CMS so that it can manage images and history in a way that doesn't involve me committing into git (who needs commits for typos?) or compressing images by hand.
Getting the words flowing
A friend asked how I manage to write on this blog regularly, alongside other responsibilities. I don't know the secret, but I can offer two thoughts.
Firstly, my writing is usually a way to clarify my thinking, it sort of defragments thoughts, if that makes sense. It doesn't really add much to the time I would need for defragmenting thoughts anyway, if anything it speeds that process up. If I spiral in circles about a subject, jotting my thoughts down helps me get out of that spiral. Sometimes the result is I find out I was very wrong, sometimes I get to a post I deem worthy of publishing and often I end up somewhere in between.
Secondly, I try and add ideas to drafts when they come up. Like, I had a file with ‘200’ in it for a while that eventually got a few bullets and then became this post. When I feel like making a thread on social media, I force myself to make a draft post here instead. Occasionally, like when I haven't written for a while, I'll go through the drafts. There isn't really a magic trick here either, it's a habit if anything. And I guess it helps words come to me naturally, like numbers do for others.
Thirdly, a bonus one: it helps me to keep things very simple and stay away from tweaking too many things (eg I only switched tech stack once in 15 years and kept the design roughly the same). I won't say I'm not tempted, I mean it is fun to try out new things and this blog is definitely a playground for me to test new Web Platform features, but I try and focus on the posts.
My 50 most recent posts
The cool thing about having your own blog is that it doesn't need to have a theme per se. Mine follows some of my interests and things I care about: the web, components and accessibility.
On web platform features, I wrote about spicy sections (out of date now as I updated my site and there are some different ideas and solutions for tabs on the web), selectmenu and dialogs.
As I used Eleventy more, I wrote about using it for WCAG reports and for photo blogging.
A lot of my posts were also about web accessibility, like this primer on ATAG, two posts about low-hanging fruit issues (part 1, part 2), what's new in WCAG 2.2 and the names section of ARIA. These posts usually start because I had to give some advice in an accessibility audit report I wrote, or because I couldn't find a blog post sized answer to a question I personally had.
I also covered some events, like dConstruct 2022, documentation talks at JSConf and JSNation, Beyond Tellerrand 2021 and an on-stage interview with Cecilia Kang on her fascinating book An Ugly Truth. These kinds of posts help me process what I learned at the event. While I write, I usually look up URLs speakers mentioned or try out features they discussed, so it's a bit of experiencing the whole thing twice.
This year, I hope to write more about CSS and other UI features in the browser. I did one post about using flex-grow for my book site, but want to dive deeper into subjects like scroll snapping, container queries and toggles. Even if Manuel has already covered every single CSS subject ever in the last few months (congrats, my friend!). I also want to cover design systems and Web Components more. I have some other subjects in mind too, and am open to suggestions too, just leave a comment or slide in my DMs. Thanks for reading my blog!
Originally posted as 200 on Hidde's blog.
Browser built-in search and ATAG A.3.5.1
The Authoring Tool Accessibility Guidelines (ATAG) are guidelines for tools that create web content. While reviewing this week, I wondered if the Text Search criterion (A.3.5.1) is met as soon as users can use browser built-in search. You know, Ctrl/⌘ + F. Turns out: yes, if the tool is browser-based and the UI contains all the necessary content in a way that Ctrl/⌘ + F can find it.
Let's look at it in a bit more detail. The criterion requires that text content in editing views can be searched:
A.3.5.1 Text Search: If the authoring tool provides an editing-view of text-based content, then the editing-view enables text search, such that all of the following are true: (Level AA)
(a) All Editable Text: Any text content that is editable by the editing-view is searchable (including alternative content); and
(b) Match: Matching results can be presented to authors and given focus; and
(c) No Match: Authors are informed when no results are found; and
(d) Two-way: The search can be made forwards or backwards.
(From: ATAG 2.0)
True accessibility depends on a combination of factors: browsers, authoring tools, web developers and users—they can all affect how accessible something is. ATAG is about authoring tools specifically, so my initial thinking was that for an authoring tool to meet this criterion, it would need to build in a text search functionality.
However, reading the clauses, it sounded a lot like what Ctrl + F / Cmd + F in browsers already facilitates. Great news for authoring tools that are browser-based (sorry for those that are not). Browser built-in search finds all visible text in a page (as well as invisible text inside details/summary, in some browsers), matching results are presented, “no match” is indicated and search works two ways. It doesn't find alternative text of rendered images, but if you have a text input for alternative text, it finds the contents of that.
In Chromium, when the “hidden” part of a details/summary contains the word you search for, it goes to open state (this is per spec, but Firefox and Safari don't do this yet)
Note: in screenreaders, the experience of in-page search is not ideal. I have not tested extensively and am not a screenreader user myself, but quick tests in VoiceOver + Safari and VoiceOver + Firefox indicated to me that one can't trivially move focus to a found result, and that “no match” is not announced; it takes cursor movement to find out. This seems not ideal, but again, I am not a screenreader user and may be missing something.
All in all, the in-browser search seems to satisfy the requirements. Lack of matches is indicated (though not announced) and matches can be given focus (though the browser doesn't do that; I'm not sure how it would work either, as matches will most likely not be focusable elements, so it would be a bit hacky and would not solely have advantages). These caveats are accessibility issues. I feel they'd be best addressed in browser code, not individual websites/apps, so that the solution is consistent.
Ok, so if the browser built-in search meets the criteria, let's return to the question I started with: should an authoring tool merely make sure it works with built-in search, or should it implement its own in-page search?
The unanimous answer I got from various experts: yes, in this case the browser built-in is sufficient to meet the criterion. It also seems reasonably user friendly and likely better than some tool-specific in-page search (but note the caveats in this post, especially that all alternative text would need to be there as visible text). Of course, most authoring tools have more search tools available, e.g. to let users search across all the content they can author. In today's world, they seem like an essential way to complement in-page search, especially as a lot of authoring tools aren't really page-based anymore.
Originally posted as Browser built-in search and ATAG A.3.5.1 on Hidde's blog.
Data-informed flex-grow for illustration purposes
I have this web page to display the books I've read. Book covers often bring back memories and it's been great to scroll past them on my own page. It's also an opportunity to play around with book data. This week, I added a bit of page count-based visualisation for fun.
This is what it looks like: rectangles with colours based on book covers. Each relates to a specific book, and its size is based on that book's page count.

What's flex-grow?
Flex grow is part of CSS's flex layout mode. When you add display: flex to an HTML element, it becomes a flex container, and with that, its direct children become flex items. Flex items behave differently from block or inline level children: they can be laid out in a specific direction (horizontal or vertical) and sized flexibly. This system is called the “flexible box layout model”, or simply flexbox. It comes with sensible defaults as well as full control via a range of properties.
One of those properties is flex-grow. This is what it does:
[flex-grow] specifies the flex grow factor, which determines how much the flex item will grow relative to the rest of the flex items in the flex container when positive free space is distributed
(from: CSS Flexible Box Layout Module Level 1)
So let's say you have ten spans in a div, where the div is a flex container and the spans are flex items:

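A minimal, self-contained sketch of that setup could look like this (the demo class name and styling are just for illustration, not the actual demo code):

```html
<style>
  .demo { display: flex; }
  /* every span grows equally by default in this demo */
  .demo span { flex-grow: 1; min-height: 2em; background: lightgrey; }
</style>
<!-- the div is the flex container, the ten spans are flex items -->
<div class="demo">
  <span></span><span></span><span></span><span></span><span></span>
  <span></span><span></span><span></span><span></span><span></span>
</div>
```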
If you want one to proportionally take up double the space and another to take up triple the space, you'll give them a flex-grow value of 2 or 3, respectively:
span {
flex-grow: 1; /* base grow factor; the initial value of flex-grow is 0 */
}
/* item 2 */
span:nth-child(2) {
flex-grow: 2; /* takes up double */
background: crimson;
}
/* item 6 */
span:nth-child(6) {
flex-grow: 3; /* takes up triple */
background: cornflowerblue;
}
The others will then each take up one part of the remaining space (as if set to 1):

The specification recommends generally using the flex shorthand instead of flex-grow. This shorthand can take three values: a flex-grow factor (how much the item can grow), a flex-shrink factor (how much it can shrink) and a flex-basis (what size to start with before applying the grow or shrink factors). We can also give flex just one positive number; that number is used as the flex-grow factor, with a default of 1 for flex-shrink and 0 for the basis (meaning no initial space is assigned, so all of an item's size results from the grow or shrink factors). In our case, we'll use that single-number flex shorthand, as it's a sensible default.
Flex-grow for widths, aspect ratio for heights
My site displays around 50 books for a given year, each with a different number of pages. Some are just over 100 pages, others over 500. I wanted to use that number as the flex-grow factor, meaning some books claim a space of 113, others 514:
[a-113-page-book] {
flex: 113;
}
[a-514-page-book] {
flex: 514;
}
That claims 113 parts of the available space for the one book and 514 parts for the other. There is little magic involved: I just applied an inline style attribute to each book in my template (Nunjucks), setting flex dynamically. When they all have a number, they all take a proportion; when the numbers are similar, the sizes they take up are quite similar too, except for a few outliers.

As mentioned above, these numbers represent a proportion of the total available space. You might wonder: which space? Flexbox can work in a horizontal and a vertical direction. Or, really, in the inline and block directions, where inline is the direction in which inline elements appear (like em and strong) and block is the direction in which block-level elements appear (e.g. paragraphs and headings).
Flexbox only distributes space in one direction. For my illustration, I used the default, which is the inline direction, or flex-flow: row (this is shorthand for flex-direction: row and flex-wrap: nowrap). This means my flex-grow values are a proportion of “inline” space, which on my site means horizontal space.
I also want each book to take up some vertical space: with only a width, we would not actually see anything. I could set a fixed height (in fact, I did that for the screenshot above), but in this case I want the height to have a fixed relationship to the width. The aspect-ratio property in CSS is made for that: if an item has a size in one dimension, the browser figures out what the other dimension's size needs to be to meet the ratio you provide. In this case, the browser finds a height based on the width it calculated for our proportion.
Ok, so let's add an aspect ratio:
.book {
/* flex: [set for this book as inline style] */
aspect-ratio: 1 / 4;
}

I also added a background-color that I have available in my template (it's a piece of image metadata that I get for free from Sanity).
Oh, but wait… they still have the same height! That's because, by default, items are aligned to “stretch”: they take up all available space in the opposite direction (block if items flow in the inline direction, inline if they flow in the block direction), as remaining space is assigned back to them. In this case, that means they all get the same height. That's often super useful, and was quite hard to do pre-flexbox, but here we don't want it, so we'll use align-items to align our items at the “end” (it can also be set to “start”, “center” and “baseline”):
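As a sketch, assuming a .books wrapper is the flex container (the class name is hypothetical):

```css
.books {
  display: flex;
  align-items: end; /* books line up along their bottom edge */
}
```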
What the books look like for different values of align-items
In my case, I also added a max-width to avoid extreme outliers, and I added opposite rotate transforms to even and odd items, just for fun:
.book {
transform: rotate(-2deg);
}
.book:nth-child(odd) {
transform: rotate(2deg);
}
This works with grids too
When I shared this, Vasilis rightly noted you could pull off the same trick with grids. If you add display: grid to an element, it becomes a grid container and its children become grid items. That sounds a lot like flexbox, but the main difference is that sizing isn't done from the items, it is done from the container. For a grid, you define columns (and/or rows) that have a size. The fr unit is one of the units you can use for this; it's short for “fraction”. Much like proportions in flexbox, fr lets you set a column or row size that is a proportion of the available space. If you size all your columns with fr, each one will be a proportion of all available space in the relevant direction.
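A sketch of the grid version, with made-up page counts as fr track sizes (in practice these would be generated per book in the template, like the flex values):

```css
.books {
  display: grid;
  /* three books of 113, 342 and 514 pages */
  grid-template-columns: 113fr 342fr 514fr;
}
```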
Summing up
I usually use flexbox with much lower grow factors, often under 10, but it turns out it works fine with larger numbers too. I have not done extensive performance testing with this, though; there may be issues when it's used on larger sites with more complex data.
Originally posted as Data-informed flex-grow for illustration purposes on Hidde's blog.