Automations are great. It’s pretty cool to explore building them. And it’s exhilarating when one works.
This is the appeal behind Factorio, a game entirely about building and optimizing supply chains. It’s addictive watching your expanding world of conveyor belts, resource extraction, and processing facilities. Building the POSSE script for this website has felt similar.
It’s easier than ever for knowledge workers to build automation into their workflows. We often build automations to save time, but without intentional design we inadvertently build systems that make demands on us. When you propose an automation, people respond to it based on how it can be deployed in a context where it’s used to accelerate and coordinate work. This post is a call to consider the human element when we’re building and promoting automation.
I have no training in automation or design theory, but I think there are two properties that can frame how we think about building automations: legibility and intrusion. I’ll talk about some design patterns along the way — trying to illustrate ways in which these properties matter for human experience. And I’ll conclude with why this matters for our current moment: discussions about improving efficiency are dominated by tinkerer’s tunnel vision. Building things can be so exciting that we forget that we’re building systems and shaping practices and expectations for how we work.
Centaurs and reverse centaurs
Cory Doctorow has a helpful framing of automation in terms of centaurs and reverse centaurs. The basic idea is that some automations assist humans, and the experience of using them is empowering or thrilling. You function as a human head on a tireless machine body, able to do more and move more quickly. A reverse centaur is the opposite — a technology or system that demands (or controls) human inputs to produce optimized outputs. In this situation, the experience is reversed: you are dragged along, trying to keep up with the pace or quality of outputs that the system demands.
His most helpful example involves assistive driving technologies. They can aid with driving more safely or efficiently (e.g., blind spot detection, cruise control). But they can also treat the driver as something to be measured and optimized, like monitoring a driver’s eyes and deducting points from their rating if they don’t pay attention to the road. On the reverse-centaur model, human effort is something to be modified by the system; on the centaur model, the driver is augmented by the device.
These categories are about the relationship of the automation to the person. Is it aiding their tasks, or demanding and shaping their effort in a particular way? We can distill these models into two design patterns:
- Centaurs: An automation that lets you complete a task more quickly. It accelerates your velocity when working on tasks; you are a human augmented by the machine.
- Reverse Centaurs: An automation where the machine treats human effort as a demand/input for accelerated output. It generates demands that users must conform to, and the human augments the system.
Design patterns are ways that automations can be shaped. One of the lessons from the assistive driving technology example is that this shape isn’t about the immediate things the technology offers to users, but about the ways it can be embedded in broader social systems. Part of the experience is how automations treat the humans using them.
Legibility and intrusion
I think there are two key properties of automations that shape our experience with them. If I create a bash script, I’m immediately familiar with its logic and in full control of when it runs. If I’m working in Salesforce or a complex system of Zaps, I might be completely unfamiliar with the definitions and conditions of the flows that generate notifications or tasks for me to complete. These examples differ in the legibility of the automations and in how intrusive they are. My script is extremely legible: I wrote and determined its logic. But it’s also intrusive: I have to run it in my terminal. The background flows that send an email are the opposite: their logic is hidden and they’re minimally intrusive. I may not even be aware that someone is notified.
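As a minimal sketch of the legible-but-intrusive end of the spectrum (the file layout and pandoc invocation are illustrative, not my actual script):

```bash
#!/usr/bin/env bash
# publish.sh: fully legible (I wrote every line, so the logic is
# transparent to me) and intrusive (nothing happens until I run it
# in my terminal).
set -euo pipefail

# Convert each markdown post to HTML; the directory layout here is
# hypothetical.
for post in posts/*.md; do
  pandoc "$post" -o "public/$(basename "${post%.md}").html"
done

echo "Published $(ls posts/*.md | wc -l) posts."
```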
Here’s what I mean by these two terms:
- Legibility: Can the user see the “why” and the “how” behind the automation? Can they adjust those elements, or incorporate the automation into their approach to the task?
- Intrusion: Does the system force a stop or change in the user’s path? Does it demand effort as an input before the output can proceed?
I want to stress that the degree of control a user has over a system is part of its legibility, and that intrusion measures the effort a system demands and the manner in which it interrupts a user’s experience or flow.
Here’s an example: a continuously variable transmission (CVT) in a car is low legibility and low intrusion. It transforms mechanical energy and simulates shifts, but you only need to put the car into drive before you stop thinking about it. It is background mechanical infrastructure. A bicycle’s gearing system is both high legibility (you can literally see the parts) and higher intrusion (the rider must select the gear), even though it has a similar function. It feels like a tool you need to use, and efficient use requires “how-to” knowledge. Both systems are empowering — they enable users to effectively transform mechanical energy. Where they differ is in legibility and intrusion.
For a system to be centaur-like, whatever intrusion it imposes must be engineered to empower the user. Low-intrusion systems are more likely to be centaur-like. Cruise control eliminates the cognitive load of managing your velocity. Blind-spot assist is intrusive in a way that prevents drivers from making errors of judgment. Driver rating systems, by contrast, are intrusive in a way that allows managers and insurance companies to use incentives to encourage conformance. The scores and their impact on employment or prices are intrusive in much deeper ways than a blind-spot alert, and they are often low legibility (especially with regard to how and what is being monitored). Reverse-centaurs are organized to structure the end result without regard for how people enter into the system.
We can think about this in terms of a two-by-two matrix:
|  | Low Legibility | High Legibility |
|---|---|---|
| Low Intrusion | Background infrastructure. Background processes that just work. | Assistive Aids. Systems that provide signal, adjust outputs, etc. without force or effort. |
| High Intrusion | Reverse-centaurs. Systems that demand increased outputs or compliance with opaque rules. | Tools, Frameworks. Systems that require effort but offer control and clear insight into their logic. |
If a system is highly intrusive but low legibility, it feels like a demand. If it’s high intrusion with high legibility (like the bike gears), it can feel like a tool or framework. The difference is whether the user understands the “why” behind the friction. If it’s low legibility and low intrusion, it feels like background infrastructure. If it’s high legibility and low intrusion, it feels like an aid. Fundamentally, reverse-centaurs are low legibility and high intrusion — when an automation demands a higher standard or an accelerated pace of work, that harms its legibility. Centaurs can occupy a range of other spaces, but most often they’re high legibility and low intrusion.
Types of automation
If we think of work as getting a bunch of tasks done with specific outputs, we can begin to think of automations as things that help us identify tasks, schedule and perform those tasks, clean up after tasks, and alert us when things are done. The ways we build and design these things vary along the legibility/intrusion dimensions.
In this model, there are two fundamental types of automation:
- Gates: Prevent work from going out unless it meets some condition or standard; they can also filter or limit inputs into the work process, which keeps reverse-centaurs from swamping you. Gates provide prevention and control. They range in legibility, but by their nature they generate intrusion.
- Cogs: Work inside the machine, smoothing out friction without directly intruding on the user experience. They can transform energy, change a thing, or generate new output. Cogs are the engine that maintains velocity or acceleration. Typically, cogs are lower intrusion and range in legibility.
These are similar to the basic elements of a programming language: cogs are like functions, and gates are like control statements. You can build arrangements of gates and cogs to perform more complex functions. Bash, Python, Salesforce Flows, and n8n are all pretty much the same in this respect. As design patterns, they are the basic atoms that let us build more complex systems.
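To make the analogy concrete, here’s a hedged sketch in shell (the file names and commands are illustrative): the gate is a conditional that halts the pipeline, and the cog is a transformation that passes work along.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Gate: refuse to continue unless a condition or standard is met.
if ! grep -q 'status: ready' draft.md; then
  echo "Draft is not marked ready; stopping here." >&2
  exit 1
fi

# Cog: transform the input and hand it onward, no questions asked.
pandoc draft.md -o draft.html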
Some of the more complex systems include:
- Sentinels: Put a cog after a gate so that something happens when a condition is met. The basic idea behind this is a notification system. Sentinels provide warnings. They may or may not prevent the machine from running off course. If the end goal of a sentinel is adjusting human behaviour, it’s higher intrusion.
- Gardeners: Run on a schedule or condition to delete, prune, archive, or organize. They prevent cogs and reverse-centaurs from overwhelming you (e.g., a script that cleans up my screenshots; see the sketch after this list). Think pruning shears. They might have moderate to high legibility, but lower intrusion.
- Bridges: Transform information, data, tasks, or other inputs between forms (Pandoc or ETL scripting). Typically lower intrusion and variable legibility.
- Distillers: Take a feed or stream of information and provide summaries that are useful for users. Reports and dashboards are the main examples. They extract signals from noise. Low intrusion, high legibility.
- Squires: Perform setup before you start a task. Package managers, tools to create virtual environments, etc. Typically lower intrusion, high legibility.
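Here’s the gardener sketch mentioned above (hedged: the paths and the 30-day retention window are hypothetical):

```bash
#!/usr/bin/env bash
# A gardener: run on a schedule (cron, launchd, etc.) to archive
# screenshots older than 30 days so they never pile up on the desktop.
set -euo pipefail

archive="$HOME/Pictures/screenshot-archive"
mkdir -p "$archive"
find "$HOME/Desktop" -maxdepth 1 -name 'Screenshot*.png' -mtime +30 \
  -exec mv {} "$archive/" \;
```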
These aren’t exhaustive, but I think they’re helpful for getting an idea of how far this metaphor can run. One thing that’s important is that these metaphors all have an element of responsibility — this comes from the idea of the task having a functional goal. In my own approach to docs-as-code, I have distillers (spellcheck, Vale), gardeners, sentinels, and squires squished into a build and deploy script, and the static site generator functions as a bridge between markdown and published HTML.
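For a sense of how these squish together, here is a hedged sketch of a build-and-deploy script in that spirit (the tool choices, commands, and host are illustrative, not my actual setup):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Squire: set up the environment before the real task starts.
python3 -m venv .venv && source .venv/bin/activate

# Distillers (and, under set -e, gates): surface spelling and style
# problems; a non-zero exit halts the build.
vale content/
codespell content/

# Gardener: prune stale build artifacts before rebuilding.
rm -rf public/

# Bridge: the static site generator turns markdown into HTML.
hugo --minify

# Gate plus sentinel: deploy only if the build produced output, and report.
[ -d public ] || { echo "Build produced nothing; not deploying." >&2; exit 1; }
rsync -az public/ user@example.com:/var/www/site/
echo "Deployed $(find public -name '*.html' | wc -l) pages."
```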
Here’s the thing I’m going to argue for in this post: the fact that we can build automations out of gates and cogs tells us nothing about where we fall on the centaur/reverse-centaur axis. This is because I believe technology has a fundamentally social aspect — technology (including automations) requires that we coordinate and prescribe tasks between human beings.
We often think about building automations in isolation. There is a task that I’m doing, and the automation I’ve created eliminates the need to perform that task myself. Since it accelerates what I’m doing, it’s exciting and good. This is what I’m calling tinkerer’s tunnel vision. It can lead to a range of results, some good, some bad. Let’s use Vale as an example.
Implementing Vale
Vale uses syntactic pattern matching to check whether HTML or markdown conforms to a style guide. You can define and extend sets of rules to check for things like passive voice, common misspellings and typographic errors, and whether you’ve used title case or sentence case for headings. It’s catnip for technical writers who want to ensure their articles meet certain quality standards and who are happiest working in a docs-as-code environment. You can tinker with regex to generate suggestions, warnings, and errors.
It runs on the command line, and a really common way of using it is as a gate: as part of the process for deploying your site, you require that a branch passes the relevant checks. You can also run the command ad hoc against whatever file you’re working on. And there’s an extension for VS Code that gives you live feedback as you write.
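A hedged sketch of the gate pattern (the paths are illustrative; I’m assuming Vale’s `--minAlertLevel` flag and its non-zero exit on violations):

```bash
# Vale as a gate in a deploy pipeline: the build stops if any
# error-level style violations are found in the content directory.
vale --minAlertLevel=error content/ || {
  echo "Style check failed; fix the violations before deploying." >&2
  exit 1
}
```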
Is it a tool that makes you a centaur or a reverse-centaur? Well, that depends, though I think it largely feels like a centaur. Consider these three ways you can use Vale:
- The empowered contributor. Vale is implemented as a CI/CD gate, and the writer can use the VS Code extension to get live feedback as they write. They will rarely submit pull requests with style guide violations, since the tool is available both as a final check and as an assistive aid.
- The demanding oracle. Contributors may be aware of the style guide and are responsible for passing the Vale check, but they are not provided the relevant style files or assistive tools aside from the CI/CD gate. Instead they must adapt their behaviour to the style guide based on their ability to learn the system.
- The sorcerer’s apprentice. An automation is set up that generates Jira tickets based on Vale output. We set up a new style guide and run the script for every page on our site. This generates a massive backlog of content that must be brought into alignment with the new style guide. (The mechanics are sketched after this list.)
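The mechanics of the sorcerer’s apprentice are simple, which is part of the danger. A hedged sketch, assuming Vale’s JSON output maps each file to an array of alerts, and where `new-jira-ticket` is a hypothetical stand-in for whatever actually files the tickets:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Run the new style guide against every page; vale exits non-zero when
# it finds alerts, so tolerate that here.
vale --output=JSON content/ > alerts.json || true

# One hypothetical ticket per file with violations. Nothing here paces
# the work or considers the humans on the other end of the queue.
jq -r 'to_entries[] | select(.value | length > 0) | .key' alerts.json |
  while read -r file; do
    new-jira-ticket "Bring ${file} into compliance with the style guide"
  done
```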
The demanding oracle isn’t a good way to implement the tool: it undermines developer experience and a culture of trust. It isn’t a good experience precisely because of how the social organization around the tool is structured. That is, the demanding oracle is a perfect example of how a gate with low legibility functions as a reverse centaur demanding quality. Similarly, if you implement Vale in the manner of the sorcerer’s apprentice, it is a sentinel with high intrusion. Without a policy and plan for pacing work through the backlog, this implementation quickly becomes a reverse centaur demanding quantity.
In the empowered contributor model, writers have access to the tool as an assistive technology and as a quality control gate on the final product. They know the standards and have distillers in their authoring process guiding them on how to meet those standards. And they determine the pace of their output. This becomes more centaur-like.
Interestingly, part of why skilled writers are valuable is precisely their ability to write about the subject matter within constraints on the quality of their output. Good judgment for writers involves internalizing this knowledge and having a practice that enables them to achieve that level of output. Being that sort of writer takes dedicated practice and expertise, and knowledge of how to use assistive technologies to produce quality docs is valuable. In a model where the legibility of Vale isn’t equally distributed, we might see increased demand for labour that can meet the quality standards.
Implementing Vale also imposes a new layer of work. The style guide needs to be maintained and updated. Adding Vale to the CI/CD flow and maintaining that flow requires work. Coordinating new style guide rules and propagating that knowledge to marketing or comms teams is social work, too. Automated tooling can elevate the standards and quality of your work, but these things are embedded in a larger context of effort.
Whether or not we perceive Vale as a centaur or reverse centaur fundamentally depends on legibility and intrusion. And these things are fundamentally about how a human being perceives and works with the automation. It’s about arrangements of cogs and gates in ways that help, hinder, intrude, and expedite work.
The human element of automation
So! When we discuss automation, you might wonder why people don’t respond as favorably to some approaches as others. I think this is often because something that is empowering for one model of work can be demanding for another.
What I take from the Vale example is that we can view this kind of disagreement as arising from how your proposed automation treats people. Think about whether a person is being treated as a disposable element of the machine or being protected and empowered by an arrangement of cogs, distillers, gates, and sentinels. Is the arrangement legible? Intrusive? Where is friction generated by the demands of the system? What is the person left to do in your system?
A system that only produces outputs to be validated treats the user as legs for the reverse-centaur. A system that helps you develop quality work and leaves you to exercise your judgment at key choice points is one that treats the user as a centaur. It’s easy to have a solipsistic view of the tools that you develop for yourself, but working in software is a communal effort. When we deploy an automation, whether we’re a consultant, individual contributor, or a manager, we’re coordinating tasks and trying to ensure quality of outputs.
What might seem like an empowering assistive tool to one individual could leave another at the whims of a demanding oracle. The same assistive technologies that enable me to drive more safely, when plugged into a system of labour relations and measurement, can require the Amazon driver to perform at a level beyond human endurance.
Ursula Franklin’s notion of prescriptive technologies (from The Real World of Technology) is helpful here. She defines them as:
…specialization by process […] Here, the making or doing of something is broken down into clearly identifiable steps. Each step is carried out by a separate worker, or group of workers, who need to be familiar only with the skills of performing that one step. (20)
Prescriptive technologies are means of coordination, and they require that individuals comply with a standard or process in the course of producing output:
…the process itself has to be prescribed with sufficient precision to make each step fit into the preceding and the following steps. Only in that manner can the final product be satisfactory. The work is orchestrated like a piece of music — it needs the competence of the instrumentalists, but it also needs strict adherence to the score in order to let the final piece sound like music. Prescriptive technologies constitute a major social invention. In political terms, prescriptive technologies are designs for compliance. (23)
Demanding oracles, then, are a kind of prescriptive technology that lacks legibility. Working with a standard that is opaque, or not legible, is frustrating. It’s even more reverse-centaur-like if we build sorcerer’s apprentice scenarios where the system demands increased effort in terms of quantity, without regard for the current state of the system.
I want to caution against concluding that prescriptive technologies are inherently wrong. Style guides are a perfect example of a prescriptive technology. They are tools for making the writing of a group of people look like it was written by a single person: coordinating human effort to produce a uniform quality of output. By using Vale as a gate and distiller, we empower users to meet a higher quality of output.
Software automation fits a range of design patterns. Franklin defines holistic technologies as being associated with craft — allowing skilled workers to control their work from start to finish. But useful tools for craftspeople can easily become prescriptive standards for those who want to coordinate labour. And coordination is important when communal effort toward a single output is required. We don’t get to build software, airplanes, buildings, etc. without coordination. When we have powerful holistic technologies, it’s very easy to fall into tinkerer’s tunnel vision: we get focused on the power of the cool systems of cogs and gates we can build. Being a skilled craftsperson piloting a centaur feels good.
I think this does a lot to explain the gap between adopters and resisters. It’s very easy to use generative AI to build an automation: typically, generative AI is low legibility and high intrusion. The internal logic of the chatbot is masked, but we can treat it as a gate, cog, or source of tasks. If a writer or programmer has to review mountains of AI-generated output, it feels like something has gone wrong. But for some ways of working, this is a success. Since the implementations, expectations, and use cases aren’t uniform, the conversations break down.
We’re in a social moment where people expect generative AI to accelerate the output of work. I hope that I’ve demonstrated that automations in the workplace involve machines, people, and systems of coordination. I didn’t mention LLMs, generative AI, or agents until this section because, in a way, this isn’t a problem unique to artificial intelligence systems in 2026 — the issues are fundamentally about system design.
System design often happens organically or top-down at organizations, and individual contributors often have little agency over the systems they find themselves in. Generative artificial intelligence provides a range of implementation possibilities, some of which are low legibility and high intrusion, but we don’t have to design systems like this. The intern model adopted in copilots and assistive technologies increases legibility for operators and moderates intrusion. Agents — construed as markdown files governing contributions to a codebase from an LLM — reduce intrusion for operators while moderating legibility. The extent to which we build sorcerer’s apprentice or demanding oracle situations for ourselves is a matter of design and management.
When you’re excitedly talking about the automation that you’ve built, consider whether it is treating people as pilots or as gears that need to keep pace with an accelerated system. The experience of being empowered to make your own choices as a craftsperson is different from having an increased backlog of automatically generated PRs to review.
If we view promoting automation and efficiency gains fundamentally in terms of systems of trust, responsibility, and coordination, we can build better tools, cultures, and results as technologists. By building systems through the lens of “squires” and “gardeners” instead of “demanding oracles,” we can build cultures of craft, which seems like a better way to work.