Writing about AI was hard.
I was once a TA for a philosophy course where students were encouraged to write about AI, back before the advent of ChatGPT and generative AI. It was difficult giving feedback even when navigating a reading list I was familiar with — students came from a range of backgrounds in philosophy, cognitive science, computer science, etc. I’m glad I’m not TAing for the same course now, since writing about AI feels even harder these days, for a variety of reasons.
To be honest, I feel kind of paralyzed when trying to write about AI. It’s incredibly easy to mean one thing, be heard saying another, and accidentally trigger a conflict you didn’t intend. I’ve been thinking about principles that could guide how I blog about generative AI in a way that’s responsible and precise.
The problems
The biggest reason for this linguistic minefield is that “AI” is a shortcut for a range of phenomena that are better discussed with more precise terms.
If I write a blog post about “AI” today, I could mean:
- The general concept of human-like machine intelligence.
- Generative AI, by which I mean the broader space of tools using machine learning techniques to generate text, audio, images, and video.
- Specific apps like ChatGPT, Gemini, Claude, etc.
- Specific technologies behind the apps, like LLMs, system prompts, transformer architectures, etc.
- Specific applications of these apps for productivity.
- Machine learning applications that use complicated statistics to model phenomena that have nothing to do with general-purpose computing.
The ambiguity gets even worse when you use general summary verbs. Think about the multiple meanings that could come into play for the phrase “using AI”: if I tell a junior writer they should learn to use AI, do I mean that they should interact with a chatbot during the writing process? Learn how to create scripts and tooling to enhance their workflows? Learn how to generate entire drafts through prompts? Learn how to use Code Wiki instead of talking with developers? The verb “use” is incredibly ambiguous, on top of the ambiguity of “AI”.
The ambiguity in these discussions is a mile wide and a mile deep. It makes it very easy for two people to think they are arguing about values when they haven’t even agreed on the meaning of the words. Terms that might look neutral are incredibly value-laden: “productivity”, “quality”, and “ownership” are all tied to worth in various ways in Western culture and internalized by individuals. A remark about the most productive way of doing something, or about what someone should do, quickly becomes an evaluation of worth.
And the problem compounds as our brains are primed for confirmation bias, political allegiances, and internalized norms. I’m not immune to this. I don’t think there’s an actual view from nowhere that we can occupy. So, for me at least, writing about AI is like navigating a linguistic minefield.
My goals
Here’s something I said in conversation with other technical writers about writing about AI:
I’ve struggled with writing about it [AI] too. I want to stake out an “aware, cautious, and critical” space. But this kind of writing is just extremely difficult, since the broader space is heavily value-laden.
If you say “I use it to write scripts to enhance my productivity”, that reads as wholehearted endorsement, since we occupy a professional world that values productivity. The reality is that leadership expects us [technical writers] to ship faster because of these tools (that’s what they were sold, after all!). We should have a good response (“I can do x, y, and z. But u, v, and w are productivity theatre. Here’s my evidence.”), and that involves engaging with the tools so we can discern what is what.
And I fully acknowledge that, in some way, playing this game (writing about how AI can be used) normalizes the political, ethical, ecological, and economic frameworks that make this technology so controversial. But at the end of the day, this is a complicated collective action problem. The technology is here, and leadership has expectations about how it should change my workflows.
As someone who works in software, this is a conversation that’s happening around me, so I should participate. If my employers expect me to be aware of how I can use these tools, I should be engaging with them.
I’m also someone with a deep drive for clarity, instilled by a career in philosophy, and I strive to be a critically engaged lover of technology. Isolating ideas comes naturally to me. I love the things we can do with computers, but I also want to be sensitive to questions about the value of human labour and creativity. So I try to be aware of a wide range of perspectives on AI. I found this post on AI skepticism really helpful for identifying different ways of thinking and writing about generative AI. My goals are to be aware of what I’m saying and how others might take it, and to be precise and informed in a way that avoids conflict where I don’t intend it. Essentially, I want to land close to the informed critics in that group, and I enjoy reading posts by folks like Simon Willison, Colin Fraser, and Drew Breunig.
Why personal guidelines?
Ok, all this difficulty in the language surrounding AI makes the topic intimidating to write about. But I enjoy blogging and thinking about technology. By formalizing my feelings and thoughts into a set of principles and guidelines, I can assess my own writing and check myself before posting things. Basically, this is a personal style guide for blogging about AI.
This isn’t a hard and fast set of rules for everyone. If it doesn’t work for you, that’s fine. And the technology will change, so I’m sure these guidelines will have to adapt as well.
Each point is an imperative. They’re reminders of what I need to do to achieve the goals I laid out above: avoid confusion and harm, prioritize accuracy, and remember that this topic is heavily laden with values and norms.
My personal guidelines
These ten guidelines are how I want to approach writing about AI. Each point includes a bit of explanation:
- Be as specific as practicable. If a more specific term can be substituted into my sentence, use that term instead of “AI”. If I’m really talking about an LLM, then I shouldn’t say “AI” or even “generative AI”. If I’m talking about an app or a specific feature, I should say that instead of the LLM. Grok is a feature in Twitter/X, and there is also a model called Grok 4.1; Gemini is an app, but Gemini 3 Flash is a model.
- Watch your verbs and pronouns. Avoid anthropomorphizing AI with verbs used as shortcuts. Avoid “the AI thinks”, “understands”, “knows”, “decides”, “hallucinated”. Use “the model calculates”, “predicts”, “outputs”, or “processes”. This goes double for pronouns: using personal pronouns (“you”) for these things is part of the illusion that we need to resist.
- Remember humans read your writing. People are primed for polarization, both for and against, and that includes me. Readers arrive with pre-existing fears of job loss or hype-fueled optimism. Regardless of my intent, writing about AI means navigating a minefield. Be humble and own up to errors.
- Remember your audience. Don’t assume that other people have similar pressures on how they interact with, or understand, technology. As a technologist, I have a curiosity about, and a professional responsibility to understand, how the technology works that other people don’t. When I write about it, I may need to spend more words to get readers on the same page. As a recovering philosopher, I have been trained to be painfully clear in a way that people might find frustrating.
- Be aware of background values. Saying that AI enhances productivity is a value-laden statement, or at least is easy to interpret as value-laden given our culture of capitalism. Being neutral is difficult, and I should be cautious about the normative aspects of what I’m writing.
- Be aware of background injury. I don’t depend on the value of my intellectual property for my livelihood as much as other professionals do. A photographer or illustrator has a very different relationship to content scraping.
- Center the human operator. It’s easy to omit people from the picture when AI is framed as automation, but people are ultimately responsible for the automations they create, the outputs they accept, and the affordances a feature provides. For example, “The summary is generated by the AI” is passive, “The AI summarizes the text” attributes agency to the AI, and “I used the app to generate a draft summary” centers the operator (it also reinforces that it’s the app being used, not the model!).
- Remember the parts that make the whole. In generative AI, many components are presented as a single, simple thing. Awareness of those parts helps build clarity. The model is a complex blob of linear algebra. The system prompt mediates interaction with the model. The chatbot app filters everything through additional prompts and context. Web search is a separate component from the model. The UX conspires to hide all of this. Clarity here is difficult but valuable; see the sketch after this list.
- Remember your perspective. I don’t have the skills or knowledge to assess every aspect of generative AI, let alone AI. I can contribute to a conversation as one voice, with my perspective. Other people have other skills and knowledge, and I should strive to be charitable in interpreting their perspective.
- Demand clarity, but be open to generalizations. Communication is hard. We use lossy abstractions to quickly convey ideas. It’s important to get on the same page, but remember that others may have deeper or shallower understanding of the same topic.
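To make “the parts that make the whole” concrete, here’s a minimal sketch of what a chatbot app might assemble before anything reaches the model. It’s deliberately hypothetical: web_search, call_model, and answer are placeholder names I made up, not any vendor’s actual API. The point is only that the system prompt, the conversation history, the retrieved search results, and the model itself are separate components that the UX presents as one “AI”.

```python
# A hypothetical sketch of the parts hidden behind a chat UI.
# web_search() and call_model() are stand-ins, not a real vendor's API.

def web_search(query: str) -> str:
    """A retrieval component, separate from the model; stubbed out here."""
    return f"(top results for {query!r} would go here)"

def call_model(messages: list[dict]) -> str:
    """The model itself: a next-token predictor; stubbed out here."""
    return "(a predicted continuation of the conversation would go here)"

def answer(user_message: str, history: list[dict]) -> str:
    # The system prompt is authored by the app maker, not the model.
    system_prompt = {
        "role": "system",
        "content": "You are a helpful assistant. Cite sources when possible.",
    }
    # Search results are just more text pasted into the model's context.
    retrieved = {
        "role": "user",
        "content": f"Search results:\n{web_search(user_message)}\n\nQuestion: {user_message}",
    }
    # The model only ever sees this assembled list of messages;
    # everything else is the app.
    return call_model([system_prompt, *history, retrieved])

print(answer("What is a system prompt?", history=[]))
```

Nothing in that sketch thinks or knows anything: the app concatenates strings, and the model predicts a continuation. Keeping the parts distinct is what makes the verb and pronoun guideline possible to follow.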
I’m not sure these guidelines will satisfy everyone. But I know they keep me from posting off the cuff about a sensitive and difficult topic. We’re all in this together, trying to figure it out. I plan on posting a short series about AI and the technical writing profession in 2026, so getting these principles out here is an important part of framing what I’m trying to achieve.
One last note: it’s important to distinguish between writing about AI and the act of documenting AI products. The latter is a specific technical challenge where the architecture of the product dictates the shape of the writing. Documenting a foundation model (focusing on context windows and tokenization) is fundamentally different from documenting an MCP (Model Context Protocol) implementation, an agentic workflow, or a simple AI-powered feature. Each of these “products” requires a different level of abstraction and has different audiences with different backgrounds and needs. While these guidelines help me stay responsible when blogging, they are also the foundation for the precision required when I’m actually in the trenches documenting these distinct architectures.