AI is making me a better writer

Charles Chesnut
December 1, 2025
4 min read
Writing with AI

This is a post for writers. It’s also personal: It’s about what I’ve learned as a writer through the experience of using AI tools for multiple hours a day.

I’ve been a writer all my life. I taught myself to type at the age of 12 because I was writing stories. (That’s also why I type with two fingers, but that’s another topic.) When writing with LLMs suddenly became a thing, my reaction was somewhere between an eye roll and contempt. “There’s no way,” I thought, “that an LLM can write anywhere near as well as I can.” And that’s true. It’s also irrelevant.

Think of a great actor — a Meryl Streep or a Daniel Day-Lewis. Now imagine that they went to drama school, honed their craft … but then never played a role. As gifted as they might be, that person would not be an actor. An actor becomes an actor by immersing themselves in a role, deeply understanding the character, and then using their skill to bring that character to life.

As with an actor, a writer’s skill is only apparent when they bring to life something they understand. If I write about something without really grasping it, I have to stick to generalities and gloss over the things I don’t get. Even if I do that with some skill, it won’t land. The reader will be unmoved.


Writing starts with understanding

Writing is not the act of putting words on a page. You begin to understand, and you begin to write. As you try to bring the subject to life, you get stuck. You can’t explain something because you don’t understand that bit. So you go back to the source and learn some more. You do this over and over again. One of the joys of writing is when you finally understand something and it’s still hard to express. You can happily spend an hour or more on one sentence, and it feels like magic when you get it right.

This is why I hate the often-dismissive term “wordsmithing.” Wordsmithing is idea-smithing.


How I ask LLMs to help me understand

For me, that’s where AI comes in: not in the crafting of sentences, but in the exploration of ideas. Without getting into the details of prompt engineering, this shows up a few ways:

·     “Write me an essay on x, y, z.” I never start with this, but I’ll do it later in the process (and put parameters around it, like an approach or specific points I want to make). The tool will often come up with something I didn’t think of, which I incorporate into what I wrote. My go-to for this is Claude (which is my starting point for most things).

·     “Give me information on this specific thing.” I usually know what I want to say in general, so I’ll ask for input on one point I want to flesh out or provide a citation for. I’m looking for statistics, supporting research, or a quotation I’m trying to remember. For this I tend to use ChatGPT or Perplexity.

·     “What are the best sources of information on this?” When I think I need more grounding in something, I ask who or what I should read on that topic – the thought leaders, top publications, or major research studies.

I could give more examples, but the point is that I’m not using AI to “write.” I’m using it to help me understand.

Grains of salt: How I interpret what the tools tell me

The biggest learning for me wasn’t “use AI as a thought partner” (duh); it was learning how to work with AI as a thought partner:

·     It doesn’t have to be right to be valuable. You don’t brainstorm with your colleagues because they’re infallible; you do it because you’re looking for different perspectives, and maybe some good ideas. Even a bad idea may be “good” if it makes you think of something new. Think of LLMs the same way: They’ll have bad ideas, get facts wrong, and express things imperfectly – just like a human colleague. If an LLM, or a person, tells you something and you repeat it without thinking, that’s on you. But you’re still far better off if you ask for input than if you don’t.

·     Iteration is key. This is an important corollary. When an LLM gets something wrong, or even a bit off, you can redirect it as many times as you want. “Don’t think of it that way, think of it this way.” Or “That’s not the point. What I’m getting at is …” What’s happening is that you’re thinking an idea through with a (virtual) partner. Even when it’s wrong, it stimulates your thinking. Think of it as two-way prompting.

·     Don’t get overwhelmed. This was one of my biggest challenges. I’d ask a question, get a 1,500-word answer, and shut down. I felt like I had to parse (or even edit) every sentence carefully. Sometimes I’d spend 45 minutes crafting a response to whatever the LLM said. Then I’d hit Enter — and 30 seconds later I’d get another 1,500 words. Sometimes that level of back-and-forth is valuable; very often, it isn’t. You have no obligation to grapple with everything the AI says. Extract what’s valuable and move on.

·     Don’t trust what it tells you. As I said above, fact-checking is essential. But you should also question the tool’s language. LLMs are literally designed to generate copy that sounds plausible: After every word, the AI asks itself, “What is the most likely word to follow this one?” Don’t adopt copy that “sounds OK” uncritically. In my experience, the danger isn’t copy that’s downright wrong; it’s copy that’s slightly off the point or phrases that could be sharper.

I titled this post “AI is making me a better writer” – present continuous tense – because I’m still learning every day. Given the rate at which the models evolve, I expect to be learning for a long time to come. And I’m talking here about expository writing, which is only one of the many things you can do with an LLM.

The big lesson: Embrace AI, no matter how good a writer you are.

___

P.S. In case you're wondering, I didn’t use AI for this post. I knew what I wanted to say. And the em dashes? I like em dashes. I’m not going to stop using them because of a quirk in LLMs that will probably be fixed by next week. For the love of god, can people stop talking about em dashes?
