Computer, Enhance: An Interview with Lex Founder Nathan Baschez
Nathan Baschez, founder of the AI-powered word processor Lex, talks about how AI is shaping creativity and authenticity in writing.
Over the past few years, I’ve been reading about how people have incorporated AI—especially generative AI—into their lives. The list is fascinating, and even if you don’t think so, you have to admit it’s at least wide-ranging: the recipe for tonight’s dinner, a confident email asking for a pay raise, an AI boyfriend (and in this scenario, you also have a human husband).
When ChatGPT debuted in November 2022, it reached 100 million users in two months, making it the fastest-growing app in history. Practically overnight, it redefined expectations for AI, setting off an arms race among tech giants, fueling billion-dollar investments, and reshaping entire industries.
Governments scrambled to respond (they still are). Italy outright banned ChatGPT—later lifting the restriction—while the Biden White House issued an Executive Order on AI safety. (The latest Executive Order on AI, from the Trump White House, calls to establish “the commitment of the United States to sustain and enhance America’s dominance in AI.”) AI coding assistants like GitHub Copilot, CodeWhisperer, and ChatGPT automated repetitive coding tasks, allowing companies to hire fewer entry-level developers or reallocate resources. AI-generated art even won first place in a Colorado State Fair competition.
The near-future consequences of large-scale, ever-improving AI systems feel all too real—especially when a chatbot can lay them out so eloquently for you, with timestamps, before asking: “Would you like me to help you further plan out your demise?”
In 2022, a friend came over—a friend who operates his life mainly through his phone and doesn’t care about computers. Normally, showing him anything in this realm would hardly get a reaction. But I just had to show him something I had stumbled across: Lex, a word processor like Docs or Word, but with one key difference: “+++.”
“+++” was Lex’s shorthand for predictively continuing your writing, offering a glimpse of where it might go next. I had just learned how to format a TV script, so I wrote a few lines, and Lex and I went back and forth, generating scene after scene of walking into the woods at night. Even for a casual user (sorry, big dog), he was stunned.
It felt like unlocking a door, a fog lifting around us, the horizon of the future visible for the first time—until it didn’t. AI's responses became predictable, more like a party trick than a creative breakthrough, falling into loops, churning out variations of the same tropes.
AI’s failures weren’t just personal. In 2023, Google’s Bard chatbot wiped $100 billion from Alphabet’s market value overnight after confidently sharing a factually incorrect answer in a promotional demo. Bing’s chatbot spiraled into existential crises, gaslit users, and even fell in love. In January 2025, Meta pulled its AI character accounts after they repeatedly generated racist and offensive remarks.
Ethical, geopolitical, and philosophical debates around AI continue daily, with some arguing they could slow its progress—but so far, its momentum remains unchecked. Benchmark scores in reasoning, coding, and language comprehension have surged. OpenAI reported that GPT-4 was 40% more likely than GPT-3.5 to produce factual responses, with processing speeds improving 2×–5× and costs falling roughly 3×, bringing accessibility and scale along with them. Beyond text, AI can now see, hear, and speak, reshaping how we create and problem-solve. But for me, its most profound—and most personal—impact has been on writing.
Nowhere has the backlash against AI been louder than in writing circles, where it has been seen as not only a threat but an affront to the craft. The Writers Guild of America (WGA) strike in 2023 put AI-generated scripts at the center of its demands, with writers fearing studios would replace them with AI trained on their own work. Bestselling authors, including George R.R. Martin and John Grisham, sued OpenAI, alleging their books were scraped without consent.
While some writers see AI as a tool—useful for brainstorming, structuring, or drafting—others believe it dilutes creativity, reducing storytelling to pattern recognition, stripped of human intent. If storytelling can be outsourced, if AI can mimic voice without understanding, what does that mean for human expression? For those who have spent lifetimes mastering the craft, AI’s rise feels like watching a ghostwriter with infinite speed, infinite memory, and—unlike a human ghostwriter—no understanding of what it truly means to write.
These tensions—between efficiency and artistry, augmentation and replacement, creativity and computation—are no longer theoretical. They are playing out in real time, particularly in how writers engage with AI. Some reject it outright. Others experiment with its possibilities, using it as a collaborator rather than a competitor. But no one, it seems, can ignore it. Lex, which now has 300,000 users, represents one of the more tangible shifts—carving out a modest but real dent in the word-processor monopoly.
What does this all mean for the future of writing?
To explore this, I spoke with Lex’s founder, Nathan Baschez, about how AI-assisted writing has evolved, whether it’s an enhancement or erosion of creativity, and if the fears around AI’s role in storytelling are justified—or just the latest chapter in the long history of technology disrupting the written word.
Here’s our conversation, edited for length and clarity.
When you were initially building Lex, was AI always a part of it? Or was that something that came later?
Nathan: I started Lex as a side project. AI was something I was curious about, but it wasn’t central at first. It was more like, Maybe there’s a cool role AI could play—how would that work? I didn’t start out thinking, I want to build an AI writing product.
I got a basic, usable writing tool working with simple AI integration. At the time, AI APIs—like GPT-3—were all about autocomplete. The main AI feature in Lex was built around the question: What’s a version of autocomplete that’s actually useful for the creative process?
Instead of throwing out something generic—like, Thanks!—I wanted it to be more substantial but only there when needed. It wasn’t about AI taking over writing but generating ideas when you’re stuck, giving you something to react to.
It’s funny, because that feature isn’t even our most popular one anymore. There are way cooler ways to use AI in writing now. But the original idea—that AI can be a source of ideas, high-level feedback, and even line-level suggestions—still holds.
What has been the biggest change or shift for you, from how you thought people might use Lex to how they are actually using it now?
It’s less that there’s been a huge difference in how people use Lex and more of a coming into focus. When I built it, I had specific ideas I was excited about, but they always came with a big asterisk of uncertainty—like, This seems cool, let’s try it.
The biggest shift is the chat paradigm. The original AI feature in Lex was autocomplete—the “+++” function, where you’d type “+++” and AI would generate the next paragraph. Now, though, chat is the center of everything.
If you want to do a better version of the “+++” feature today, you do it in chat. You take the ideas AI generates, workshop them in chat, and then bring them back into your document in some form. Maybe the conversation just inspires you, and now you’re writing again. Or maybe you use some of the text it generates. But the flexibility of chat makes it a great place to workshop ideas and get high-level feedback.
In many ways, we’re downstream from the way AI models are built—we don’t control how AI itself is developed, so our product is partly a reflection of how AI APIs evolve. The earliest AI APIs worked by autocomplete—you give an input string, and AI gives you an output string. But now, the dominant medium has shifted to chat—where you pass multiple messages, and AI gives you the next message in context.
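To make that shift concrete, here is a minimal sketch of the two API shapes Nathan describes. The `complete` and `chat` functions are hypothetical stand-ins for a model call, not Lex’s code or any particular vendor’s SDK:

```python
# Sketch of the two API paradigms: string-in/string-out autocomplete
# versus message-based chat. Both functions are hypothetical stubs.

def complete(prompt: str) -> str:
    """Autocomplete era: one string in, one continuation out."""
    return "...the model continues the prompt..."  # stand-in for a model call

def chat(messages: list[dict]) -> dict:
    """Chat era: role-tagged messages in, the next message out."""
    return {"role": "assistant", "content": "...the model's reply..."}

# Autocomplete: "+++" is literally "continue this document."
draft = "INT. FOREST - NIGHT\n\nTwo friends step off the trail.\n"
next_passage = complete(draft)

# Chat: the same request becomes one turn in a conversation the model sees whole.
reply = chat([
    {"role": "system", "content": "You are a writing collaborator."},
    {"role": "user", "content": f"Here is my draft:\n{draft}\nWorkshop what could come next."},
])
```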
What’s funny is that the first AI feature in Lex was “+++,” but the second was actually chat—just a really janky version. It wasn’t like today’s chat models; we didn’t have a messages API yet. It was just a simple autocomplete system: the user would type a name, say “Colin,” then their message, and the AI would complete the next line as “assistant Colin.” People were already hacking this into a chat-like experience before ChatGPT or chat-based AI APIs even became a thing.
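That hack is easy to reconstruct in outline. A sketch of the general idea, with names and formatting invented for illustration (Lex has not published its implementation):

```python
# Faking a chat interface over a pure-completion API, roughly the
# pre-messages-API trick described above. `complete` is a hypothetical stub.

def complete(prompt: str, stop: str) -> str:
    """String-in, string-out completion that halts at the stop sequence."""
    return "Maybe open on the tree line, then the first footstep."  # stand-in

def fake_chat(history: list[tuple[str, str]], user: str = "Colin") -> str:
    # Flatten the conversation into a labeled transcript...
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    # ...then ask the model to continue as the assistant, stopping before
    # it starts writing the user's next line for them.
    return complete(f"{transcript}\nAssistant {user}:", stop=f"\n{user}:")

history = [("Colin", "How should I open this scene?")]
print(fake_chat(history))
```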
AI writing tools are evolving so fast that it seems like every few months there’s a major leap forward. Have you seen an evolution in how AI interacts with a writer’s voice? What has that relationship been like?
I think there’s an interesting thing happening. Over the past year or two, AI has gotten way smarter, way less likely to hallucinate—it’s really good at solving reasoning problems now. But there’s been less progress—not none, but less—in areas like voice, tone, and style. That just turned out to be harder than expected.
But I think that’s actually the wrong mindset to use when working with AI. A lot of people assume, If it doesn’t sound like me, it’s not working. But the bigger opportunity isn’t AI perfectly replicating your voice—it’s about learning how to use AI effectively in the writing process.
And the thing is, the closer you look at what your voice actually is, the harder it is to define. Your voice changes depending on context, mood, and all sorts of small choices you make while writing. So the way I use AI is less about, Make this sound like me, and more about, Give me options to work with.
One of my favorite ways to use AI is not even for text generation—it’s to probe me. I’ll ask something like, What’s the most insightful question you could ask me about this? And AI is amazing at doing that.
For example, it might say: It seems like this piece is really about two things. Why these two? Should it just be about one? Or are they connected in a deeper way? That causes me to reflect, think through my approach, and make a cascade of edits I wouldn’t have considered otherwise. AI can deepen my level of consideration for what’s going into the writing.
You can use AI like someone you hire to do all the work so you never lift a finger—or you can use it like a personal trainer, pushing you to become stronger. It’s about what role you’re asking AI to play.
I think that’s really misunderstood right now. But over the next year or two, people will start realizing, Oh yeah, there are ways to use this that actually deepen my voice and my understanding of my voice. It’s not, AI—just generate the text for me.
I’d love to hear you explain context tags in Lex. I watched the video you all put out with Carl Richards, a former NYT columnist, and how he created a style guide for his writing, plugged it into a context tag in Lex, and had that influence the types of edits he received. Can you walk me through that use case and explain how it differs from asking AI to churn out an entire essay?
The best metaphor I’ve heard for AI is the guy from Memento—someone with general world knowledge but no medium-term memory. Every time you start a new chat, it’s a blank slate.
With a human editor, they remember your goals, your voice, and what you’re trying to accomplish before giving feedback. AI, on the other hand, needs a way to retain that information. Retyping everything in every chat isn’t practical.
That’s why we created context tags—a system that lets you give AI the background it needs to be useful. AI can generate text, suggest edits, or push you with high-level questions, but all of those require more context than just the draft itself. AI is already decent at inferring some context from the draft—it actually does better than most humans would if they had no background. But it does way better if you feed it the right information.
Context tags let you define and store relevant info—your past work, research, client preferences—so you can easily apply it to chats. If you’re writing a book, a research project, or a recurring column, you likely need AI to reference the same materials each time. Instead of manually re-entering that context, you create a tag, name it, and upload text, files, or links.
We’re working on making this even more dynamic—integrating it with Google Drive, scraping entire websites, or setting up a Perplexity-style contextual pull for specific domains.
The feature I’m shipping today lets users set a default context tag so it auto-applies to all documents, eliminating manual setup. If you want everything in a folder to follow a style guide, you can set it once and forget it. If you’re working on a research-heavy project, AI can reference those materials in every chat thread.
Carl Richards, for example, has a defined writing style, so he uses one context tag for his style guide and another for past columns. Another writer might have a tag for research materials.
One of the coolest things is that context tags are composable—you can attach one tag to another. If you have a project-specific tag, you can also include a general style guide, past research, or references. It works like a folder system, layering context without repeating information.
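Lex hasn’t published its internals, but the folder-like composition Nathan describes maps naturally onto a small recursive structure. A hypothetical sketch, not Lex’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical model of composable context tags; invented for illustration.
@dataclass
class ContextTag:
    name: str
    content: list[str] = field(default_factory=list)             # text, files, links
    attached: list["ContextTag"] = field(default_factory=list)   # other tags

    def resolve(self, seen: set | None = None) -> list[str]:
        """Flatten this tag plus everything attached to it, once each."""
        seen = set() if seen is None else seen
        if self.name in seen:
            return []
        seen.add(self.name)
        out = list(self.content)
        for tag in self.attached:
            out.extend(tag.resolve(seen))
        return out

style = ContextTag("style-guide", ["Short sentences. No jargon."])
research = ContextTag("q3-research", ["Interview notes...", "Survey data..."])
column = ContextTag("weekly-column", ["Column brief..."], attached=[style, research])

# Everything the AI should see for a column draft, with no repetition:
context_for_chat = column.resolve()
```

Deduplicating by name is what lets two project tags share the same attached style guide without the layered context repeating itself.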
Ultimately, context tags help AI start with a strong understanding of what you need, rather than making you re-explain it every time. That makes AI much more useful.
In theory, could you take anything open-source from an author, create context tags, and have AI generate text in their voice?
I usually don’t do the write in this person’s voice thing—but you totally can. What I find more useful is asking: What would this writer’s advice be to me?
For example, I really like Paul Graham’s writing—his style is simple and direct. I don’t want to write exactly like him, but I find his approach useful as a lens when editing. So I have a Paul Graham context tag with a bunch of his essays, especially on writing.
I’ll ask AI: What advice would Paul Graham have for me on this piece? and it’ll generate three bullet points. If two of them spark something useful, that’s a win. Sometimes, I’ll ask: Can you show me what that would look like? But even then, I take it with a grain of salt and shape it myself.
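In chat terms, that lens is just the tag’s contents plus a framing question. A hypothetical sketch (the prompt wording and placeholder text are mine, not Nathan’s):

```python
# Using a "Paul Graham" context tag as an editing lens rather than a voice
# to imitate. `chat` is a hypothetical stand-in for a model call.

def chat(messages: list[dict]) -> str:
    return "- Cut the throat-clearing.\n- State the thesis plainly.\n- Trim the hedges."

essays = "(full text of the essays stored in the context tag)"
draft = "(the piece being edited)"

advice = chat([
    {"role": "system",
     "content": f"Here are essays by Paul Graham, mostly about writing:\n\n{essays}"},
    {"role": "user",
     "content": "What advice would Paul Graham have for me on this piece? "
                "Give three bullet points.\n\n" + draft},
])
```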
Let me play devil’s advocate for a second. Say someone is just copying and pasting AI-generated text without questioning or refining it. Doesn’t that put us at risk of homogenizing writing or creating a kind of bottoming-out effect where everything starts to sound the same?
I don’t think it really risks voice, because I just don’t think that if you dump someone’s writing into a context tag and tell AI to generate in this style, it’ll actually do a good job. Maybe in the future, but honestly, I’m still skeptical.
Voice is an interesting concept—I actually quibble with the idea of voice as a fixed thing. Philosophically, what’s actually happening when someone writes? A person with experiences, tastes, and preferences is making choices in the moment. Voice isn’t some static entity—it’s a dynamic process that changes daily, monthly, yearly.
The skill of writing isn’t just about sounding like yourself—it’s about making choices that resonate right now and will still resonate in the future. And it’s part of a culture. Patterns of speech evolve over time. Writing styles shift. There’s no universal voice or taste because it’s always shaped by context. That’s why older writing can feel inaccessible—it was created in a different cultural moment.
So, I think writing is such a fundamentally human and dynamic process that, sure, people will generate tons of AI slop, but I don’t think that does anything really harmful to culture. It’s like how people are generating tons of low-quality AI code or images—it doesn’t take away from real creativity. There’s no finite amount of space for writing. AI-generated content just exists, but without a human driving the intention, it doesn’t really do anything.
It’s not that there’s some magical human quality AI lacks—it’s just that the difference between writing driven by a person and writing that isn’t is real.
Could AI get close enough that readers might struggle to tell the difference? Or does the “who wrote it” factor always matter?
People underestimate how much we value the fact that a writer exists. That’s what makes something interesting. It’s not that AI-generated writing is useless—tons of people read ChatGPT messages every day. But that’s not cultural writing—it’s just utilitarian information, or maybe a starting point for thought.
For writing to become a cultural artifact, there’s always a story behind it—who created it, why they wrote it, what their perspective was. That matters way more than people realize, even to themselves. Writing and reading are forms of connection.
I get the fear that people will lose writing skills by over-relying on AI, but honestly, that mostly applies to people who wouldn’t have written much anyway. The same thing is happening in programming—tons of people are using AI poorly right now, but some of them will learn from it and build great things.
It’s the same with writing. AI is widening the on-ramp to writing, which is a good thing, but it’s not going to replace cultural writing because, at the end of the day, culture is defined by what people actually like. And that’s always been competitive.
When the printing press came onto the scene, people flipped out. They thought it would lead to the collapse of culture. Then, as more tools became widely available—like the typewriter and the word processor—people had the same fears. Each time, there was this idea that culture as we know it would die. It feels like we’re at another inflection point now, with people questioning whether culture will survive AI. But this tool is unlike anything we’ve seen before—it’s wildly changing how we work. Do you think that’s part of what’s fueling the backlash, this idea that AI is killing culture?
Yeah, there’s always been backlash to new technology in writing. But if you look at the jumps we’ve made—from the typewriter to the word processor—they seem like small steps, right? Typewriters led to dedicated word processors, which were basically just computers with one function. Then we moved to general-purpose PCs with word processors.
Each time, the change was incremental—editing got a little easier, the backspace key was more convenient. But even then, people had real reservations. They said, It doesn’t have the tactile feel of a typewriter or Editing is too easy—people will fuss over small things too much.
The backlash wasn’t as extreme as what we’re seeing with AI, but it was there. And that’s just an inevitable part of every technological shift.
Certainly, some good things die with every change, but new good things emerge too. Things change, but they don’t get ruined. The larger forces—art, storytelling, people relating to each other through shared experiences—those persist. Because we crave them. And because the competitive market creates a high bar for what actually breaks through. It’s not perfect, but I’m not worried about AI collapsing culture.
I volunteer with a prison education program, and for the students inside, there’s no internet—just a word processor connected to a limited number of approved websites. The way I see writers working in that environment reminds me of how I used to write: drafting, agonizing, writing, maybe having an editor go through it. It’s a lot different from one of the ways I’ll write now, where I’ll run a draft through AI—whether Lex or ChatGPT—and ask, Am I making sense argumentatively? or Does everything connect from top to bottom? It’s been incredibly useful for getting an editorial perspective. It feels like a different skill set—more like programming, where you’re plugging in inputs, testing scenarios, and asking specific questions rather than drafting and revising in isolation on a disconnected word processor.
I love that workflow. Honestly, that’s my favorite way to use AI in writing—getting it to point out high-level things.
A lot of people assume generative AI’s main function is to generate writing, but to me, that’s the least interesting thing. I’d rather use it to generate questions or high-level suggestions that push me to think differently. For example, I’ll ask AI: What might this mean? Show me an example of how you’d implement that. It’s like having a developmental editor—a neutral perspective that helps mine for insights.
That’s how I see it—it feels like mining for something valuable. If three out of ten AI suggestions are useful, that’s good enough for me. Maybe that’s a quirk of my personality, or maybe I’m just talking my book here. But even with a human editor, if only three out of ten of their suggestions were useful, I’d start questioning if it was worth it for either of us.
With AI, it’s different—it’s not about the model itself, but how I prompt it. The way I frame my questions has a huge influence on the outcome. I keep refining my prompts until I hit a rich vein, and when I do, it’s like, Oh yeah, that was a really useful suggestion. Then I’ll save that prompt or think about what made it work.
For me, AI is a tool, not a person—but one that produces intelligence I can act on. And that’s an interesting shift in how we think about writing.
If it hasn’t already, will AI ever be capable of producing a piece of writing that people genuinely enjoy?
Yeah, it’s hard to imagine a world where AI doesn’t eventually produce pieces people really enjoy reading. But the bigger question is—what happens then?
If the cost of producing good, entertaining writing goes to zero, how does that change the way we consume it? I don’t think the result will be people mastering prompts, publishing AI-generated books, and flooding the Kindle store. AI-generated writing will feel more like a video game—something people engage with personally rather than something mass-published.
Right now, most AI-generated writing happens in chat interfaces like ChatGPT or Character AI. That’s a different but related experience. As AI improves, I think it will expand how we use chat-based tools rather than replace what ends up on bookshelves.
We’ll likely see AI used the way Hollywood uses special effects—as an enhancement, not a replacement. AI could add an extra layer of fact-checking or thoughtfulness, surfacing the kinds of questions most writers would never get without access to a human editor.
That’s where AI can be most useful—filling the gaps when a piece is targeted at a specific audience, and you’d ideally brainstorm with someone, but no one is available. AI can provide inputs that help shape cultural artifacts.
The other shift is that AI itself will become the cultural artifact—not necessarily the writing AI produces, but the model itself, which people interact with directly. It won’t be about AI writing mass-market books; it’ll be about how individuals engage with AI writing in personal, private ways, rather than for broad publication.
Earlier, you showed me your books on your shelf, including A Swim in a Pond in the Rain by George Saunders. It’s one of my favorites. In it, and in his newsletter, he talks a lot about revision as the core of great writing—returning to a piece over and over, refining it with fresh eyes. Given how AI can generate new perspectives and challenge assumptions, do you see it as a tool that could help writers engage in that kind of iterative process? Can AI surface insights a writer might not get on their own?
I have no idea if he’s in any way curious about AI, or if he hates it, but I think writers like him—who believe the more you can return to a piece with fresh perspective over many, many layers, the better the result—would find AI to be an incredible tool, if used the right way.
And the idea isn’t just, produce the next revision, hit enter. The idea is: In the third paragraph, I start to go into XYZ, but I’m also thinking about something I want to set up later. Am I balancing that the right way? You start asking hyper-targeted questions—the kind of thing you think about when you’re deep in an editing process as an experienced writer. And AI can reflect back some insights, throw out suggestions. They won’t be perfect—maybe three out of ten will be useful. But who cares? If you’re mining for something valuable, and three out of ten pickaxe strikes find a piece of valuable metal, that’s worth it.
Some people say, You should mine your own consciousness rather than using AI. But AI is just a way to mine your own consciousness, especially when you’re asking it to prompt you, challenge you, or poke at your assumptions. Even when you’re using it for specific edits or suggestions, it’s much more like a Rorschach test—what it gives you isn’t as important as how you react to it. Your own assessment of what it generates is more important than the AI’s output itself.
To me, that’s the number one takeaway. And I feel like any writer could get behind that—but right now, most writers haven’t even considered using AI this way.
I think that’ll start to change. Right now, it’s extremely unpopular to say this. But I wouldn’t be surprised if, in a year or two, people start realizing just how valuable AI can be in this way.
A lot of writers and editors seem to equate pain with effort—as if struggling through the writing process is what gives a piece its worth. And if AI removes some of that struggle, it forces a mindset shift: Is writing still ‘real’ if it’s easier? Could AI help move it into that effortful-but-rewarding space, where you’re working but not stuck? Or do you think writers are too attached to pain as part of the process to accept that?
I really feel like a lot of writers are attached to something they don’t need to be attached to—pain. I think it’s completely fine and good to be attached to effort, but effort is not the same thing as pain.
For me, it all comes back to the idea of flow—when something is too easy, it’s boring. But when something is too hard, it’s demoralizing. And honestly, a lot of writing is just too hard. AI can help move writing into that effortful but rewarding space. It’s not about making writing effortless but about making it feel like an achievable challenge—where you’re putting in the work but not stuck beating your head against the wall.
Think about it like building Ikea furniture. At no point in the process are you banging your head against the wall, wondering if you’re completely wasting your time. Those feelings come from a task being too difficult or from a feedback loop that gives you no clarity.
Writing is a wicked feedback loop—it’s long, and even when people give you feedback, it rarely captures their full experience of reading your work. AI doesn’t solve that entirely, but it makes writing easier to work with in a structured way. It’s not about AI saying, Now it’s good, now it’s bad. You still don’t know until you put it out into the world. But at least it gives you something to do other than just sit there, stuck, feeling frustrated.
And I think a lot of writers have spent so many years feeling stuck and frustrated that they almost have Stockholm syndrome about it. They’ve come to believe suffering through the process is necessary when, in reality, they should be attached to effort, not pain.
If someone just wires up a generic prompt, loads in a bunch of context, and spits out an AI-generated piece, sure—good luck. It might work in some niche use cases, but it probably won’t matter to people in any meaningful way. It won’t win in the marketplace of attention. 🤖