- Perspectives
Our Survey on Creativity, Writing, and Reading in the Age of AI
What does it actually feel like to be a writer in the age of AI?
So we ran a survey to hear directly from the people doing the work.
What we did
On April 10, 2026, we released a survey on social media—Creativity, Writing, and Reading in the Age of AI—to better understand how creatives are responding to the rise of generative AI in their creative lives. For nearly three years, we’ve been building Ellipsus alongside a growing community of writers: beginning with a few hundred people looking for a safer, more collaborative home for their creative work, and growing into a much broader space with a much wider mission—now nearly half a million writers seeking room to write without the intrusion of AI. And in that time, generative AI has gone from background noise to an unavoidable force shaping the tools, platforms, and economies of writing, affecting almost every aspect of the creative process.
And this survey is our way of asking: what does that actually feel like right now, from the inside, for the people doing the work?
The response was overwhelming.
5,202 respondents and 10.6k text responses, for a grand total of 527k words and counting (that’s approaching the length of War and Peace!)
We—the writers, the readers—are concerned about AI, to put it mildly. Yet for all the column inches and conference panels devoted to the subject, there’s been a strange and persistent lag in how seriously this moment is being understood. Platforms and technologists have been quick to experiment, and quicker to scale, while publishers and arbiters of culture have been remarkably slow to quantify the consequences—and slower still to offer meaningful alternatives.
From where we stand, in daily conversation with people who are writing, sharing, and sustaining creative communities, the disconnect is odd. It can feel, at times, like watching an entire world reorganize itself in real time, while the institutions meant to steward and critique it drift somewhere behind the horizon—possibly aware, but not yet fully awake. And writers are left disoriented, not only about the rapidly changing situation day to day, but about the future of creative work—how it’s made, how it’s valued, and how it can remain recognizably human at all.
These responses share common themes across a few big questions: how AI is affecting creative motivation; the perceived differences between human and AI writing; the possibility of protections for human work; the future of creative communities; and how people are changing their reading and writing habits in an increasingly inescapable, AI-saturated world.
Reflecting our audience, this sample skews toward fiction writers. Many of our respondents come from online writing and fan writing communities—some of the most active and engaged—as well as culturally aware—spaces for writing right now. Published authors, academics, editors, visual artists, and readers were also strongly represented.
Here are some of our major findings so far.
How AI is changing creative trust
Something fundamental has shifted. A new baseline of suspicion has taken hold, eroding trust in reading and sharing written work—not only in online writing communities and social media platforms, but across search results, media, academic texts, and information in general. Respondents write about reading in “forensic mode”—constantly second-guessing, and checking “is this real?” rather than simply engaging with the work as they once did.
Most respondents described a basic change in how they approach almost everything they read: they no longer assume there’s a human on the other end.
“In general, I would say the rise of generative AI has significantly eroded any sort of trust between readers and writers.”
AI impacts everything. If I plan to publish, I am competing against machines that can churn out something in a fraction of the time—this impacts motivation, even for people who try to focus on the story over the outcome.
AI "witch hunts"
Trust collapse has consequences. As awareness of AI-generated content rises on creative platforms, waves of false accusations follow. Writers with polished prose are being accused of using AI, often aggressively and with little evidence. This is especially pronounced in online publishing and fan writing spaces, where visibility is high and moderation can be uneven.
Nearly half of respondents worry their work could be confused for AI.
Many writers described fears of harassment campaigns that have led them to move away from large platforms, or to share more privately with trusted circles. Others say they’ve tried to preempt suspicion by deliberately altering their styles, often for the worse: using simpler vocabulary with fewer “AI-tell” words, cutting out em dashes, and injecting intentional errors in an effort to look “just human enough.”
There is such a witch hunt attitude towards AI currently in fandom spaces, with authors being publicly accused of AI… even if they are adamant that they do not use AI at any point in the creative process… there appears to be no escaping these accusations once they are made.
It's definitely kept me from writing at times, and kept me from posting any of my work online completely. I often use em dashes, the rule of three, short, punchy sentences, and several other things that are now considered "AI markers"[...]
For readers, suspicions can affect their ability to simply enjoy stories:
As a reader, I’m hesitant to jump into stories written post-AI craze… I’ve been duped before, read a wonderful story just to find the notes and credits at the very end say it was written by AI. The disappointment is gut wrenching.
As the pressure of AI prompts writers to reshape what they write, it’s also reshaping how they relate to each other. In the absence of transparency, the informal policing of “AI-like” writing has begun to undermine trust between writers and readers, and to fracture creative communities from within. There are real consequences—instead of sharing their work, some writers report withholding stories altogether in an effort to protect them from scraping or misuse.
I was considering posting my work online again sometime in 2022. Then generative AI became a thing—that idea died. I don’t want to feed someone else’s cash cow… I don’t want my words cut up and butchered and turned into milquetoast slop while real people are harmed.
Creative defiance, writing as resistance
My creative motivation has remained quite high, because I do genuinely believe that people can outdo AI… and I’m keeping the em dashes.
But many writers are choosing another path. The experience of watching “effortless slop” content flood feeds and platforms actually makes them want to write and share their work more. AI’s “generic” output becomes a kind of negative motivation, a prompt clarifying what matters most in the writing process—craft, learning, deliberation, research, and the pleasure of making something personally meaningful. For them, creating in the age of AI is an act of self-definition, reinforcing and developing the value of their process and skill.
I do the research, I have the knowledge, I know the vocabulary and the grammar. Every choice is deliberate and mine.
The rise of AI really has just... fueled my need to Write. Write the juice of a passion fruit spilling over your chin and staining your teeth. Write about how the dirt outlines the fresh scrapes on your knees.
[…] AI is all about the end result (the finished piece of art or writing), whereas human creativity is just as much about the process & journey as it is the end result. The former is bad because it commodifies art and creativity, saying that the only important thing about it is WHAT you create, rather than the fact that you created at all. I don’t care if I make bad art, at least I’m making it in the first place and I can learn something new from it.
The value of human creativity
Humans create from their real emotions, experiences, biases, things that have influenced and stuck with them. Humans dedicate years to honing their skills and mastering their craft. AI can only chew up what already exists and spit out a bastardization of someone else's stories and talent. There's no thought or inspiration or emotion or lived experience behind the words, only a math problem that's predicting what should come next.
We have sentience, life experience; we don't just process all information we receive in the same algorithmic way like machines/AI; therefore content is unique and creative and often times vastly different even when given the same parameters. Also, if you ever just experiment with AI to see what people are talking about... it has no creativity alone. It's all recycled and bland. There is no "what if this happened?" wonder with AI.
One of the common patterns across these responses is the language used to describe what’s missing from AI content. Again and again, respondents arrive at the same word: soul. It appears in nearly a third of the answers to this question—what defines human writing vs. AI? It’s a shorthand for everything difficult to quantify: emotion, perspective, memory, intent; not only what was written, but why it was written.
AI cannot be creative on its own, not in the way a human can be. And everything a human does creatively, no matter the quality, is advancing a skill, a learning opportunity, and that in itself makes it a lot more valuable than any AI content.
Human creativity is informed by the senses; the ability to be away from the machine and live without processing what is happening through the thinking mind. To lose oneself in a moment, then to come back to that moment later in a hundred different ways.
From here, the responses open up larger questions: what, exactly, is writing for? The conclusion from respondents is that writing is fundamentally about identity rather than production, and therefore incompatible with AI content. One anecdote: a writer’s partner gifted them a ChatGPT subscription for their birthday, thinking it would help their writing. The writer reported feeling “never more misunderstood or disrespected,” because for them, writing is precisely the thing they don’t want a machine to do for them.
Human creativity takes time and hard work, when that person could be doing anything else. They could've been able to do another thing they liked, but they instead chose to write and show us the work. AI doesn't have feelings, so its "art" doesn't ever have any emotion like a human's, and AI can make content in seconds. That takes away the feeling of accomplishment from real people.
The rise of AI generated content has made me think more deeply about why I value human-made art. The fact that it is not only about the consumption or reception of art, but that the production is an incredibly valuable part of the artwork itself.
Culture is the biggest protector of human creativity. The human desire to connect, to share your story, to hear the stories of others. That's true and real and something that cannot be replaced by machines.
Trust the process (not the “product”)
Many writers argue that the important difference lies in how something is made, not in the result. Human art is about process (an author’s intention, revision, decision‑making, individual quirks and voices), as well as how the work changes the person making it. An AI-generated text might masquerade as “art”, but respondents overwhelmingly say that by nature it lacks the fundamental trace of humanity—“a mind pushing against constraints.”
Making art is the process. The thing that you get at the end isn’t art because it has art’s shape, it’s art because it’s been crafted by a person that’s externalised an internal experience…
Writers often ground the difference in language of lived experience. Human creators have bodies, histories, grief, in‑jokes, hyper‑specific memories, etc. AI has “patterns in data.”
It cannot remember the thrill of energy the first time it accidentally brushed hands with someone it fancied. It cannot grieve. Its stomach muscles cannot hurt after a friend made it laugh so hard it cried.
Others mention conviction. Even the most easily influenced person, they say, still has tendencies and values that shape their work—AI, by nature, “has no personality.”
Human creativity has a personal touch and requires effort. How am I meant to care about someone's work if they didn't care enough to try to do it themselves?
The “mediocrity generator”
Respondents also judged AI writing based on how LLMs actually work: predicting the most likely next token from patterns in training data, which inevitably pushes the output toward the “generic”, “soulless”, and “bland.” It may be functionally correct, but it struggles with “creativity”, “idiosyncrasies”, “surprising metaphors”, “genuine weirdness”, and the kinds of moves that might not make statistical sense to an LLM, but feel right to readers.
I do however believe that generative AI is a mediocrity generator… the AI reworks sentences in the most predictable, pedestrian manner possible and removes any nuance [they] may have unintentionally added.
Writers and readers gave many examples of techniques and plot twists that “AI could never do,” like Douglas Adams’ line about spaceships hanging in the sky “in much the same way that bricks don’t”—the kind of move that an LLM would never intentionally make, because it lacks intentionality. One respondent used an example of a child drawing their mother with six arms, because that’s how many times a day they feel hugged. An AI system, they say, would never choose six arms on purpose—it knows “moms have two.”
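The dynamic these examples point at can be sketched in a few lines of Python. This is a deliberately toy illustration, not how any production LLM actually works: real systems sample from a learned distribution (often with temperature and other adjustments), but the underlying pull toward high-probability, statistically “safe” continuations is the same. The tokens and probabilities below are invented purely for illustration.

```python
# Toy sketch of greedy next-token selection (not a real LLM).
# A model that always picks the statistically most likely continuation will,
# by construction, favor the predictable choice over the surprising one.

# Hypothetical next-token distribution after the prompt "The sky was":
next_token_probs = {
    "blue": 0.62,                                 # the predictable, "pedestrian" choice
    "grey": 0.21,
    "on fire": 0.09,
    "hanging there the way bricks don't": 0.08,   # the rule-breaking, human choice
}

def greedy_pick(probs: dict) -> str:
    """Return the single most probable token -- the 'mediocrity' strategy."""
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # prints "blue", never the surprising option
```

Sampling tricks can surface lower-probability tokens, but they do so at random; the surprising choice is never *chosen for a reason*, which is the intentionality respondents say is missing.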
The overwhelming consensus is that AI content represents a general deadening of creativity, lacking the elements that make reading and engaging with human work deep and enjoyable.
Writing as human conversation
Across responses, writers consistently frame creativity as a form of connection—writing as a bond between creator and reader.
Many describe the act of creating as an offer of vulnerability: I made this; here is what I felt; do you feel it too? The reciprocity of the exchange (and the knowledge that another person struggled, chose, and shaped something for you to encounter, taking a risk to share it) is what gives art emotional weight.
Some writers fear AI may eventually produce human-seeming output, but still argue that the point of the aesthetic experience depends on the knowledge of a human origin—without “a human on the other end” the exchange loses meaning.
Art, writing or visual, is a conversation. You make a thing to say a thing… The robot does not fulfill any of the needs I was meeting through creativity. I am not forming a human connection with the robot.
Theft, ethics, and creative spaces
Many respondents raised broader ethical issues with AI. LLMs were trained on the work of non-consenting creators, and respondents see the resulting devaluation of creative work as a form of theft—not only metaphorical but structural, built into the training of the models themselves. They also pointed to AI’s wider societal costs, arguing that its political, economic, environmental, and infrastructure burdens are inseparable from the creative ones.
A major recurring image is one of AI severing creative work from its context and connection—AI excises all aspects of creative work aside from the product, an act that impoverishes “the commons” from which all creativity draws.
I think human creativity is suspended in a web of connection and context because nothing exists in a vacuum, and everything will influence everything else. I think AI generated content is fundamentally cheap and opportunistic in the most literal of senses: it takes the web of connection and context that human creativity has painstakingly woven over millennia, cuts out a piece of the mesh, and sells it as if it stands alone.
Human creativity is narrow and specific, concentrated if you will. AI generated content is the conventionally attractive but bland and washed out sum average of all human creativity on record. Humans have told stories about doppelgangers since the dawn of time. Things that are not human but pretend to be.
Much of the impact of creative content happens in the interpretation of the reader/viewer, regardless of author intent. The difference is that there is no community being made. Human-made content is, in the end, about connection.
What people want to protect, and how they think it could be done
I’m a lawyer, and I’ve been observing AI and thinking of it before it really became a thing. I firmly believe there should be legislation and something similar to Bioethics along the lines of Technoethics.
Among the responses, we see thousands of calls, ranging from policy analysis to urgent moral demands: for copyright reform, mandatory disclosure of AI-generated content, penalties for unauthorized data collection, data-center restrictions, and international coordination among regulatory bodies.
All LLMs should be legally required to produce their complete datasets for any reasonable legal challenge to the use of those datasets. People need to be able to find out what data has been used to train the LLMs they are using.
Legally require ‘this work is ai-generated’ disclaimers, only allow consenting creators' content to be fed into GenAI, and put regulations on resources allowed to be spent on maintaining GenAI servers.
Some respondents contributed proposals like mandatory dataset transparency for legal challenges, tiered licensing frameworks, and the extension of existing provisions to cover AI training and scraping. A significant number, while supporting legislation in principle, are skeptical about enforcement—noting that detection as it exists today is inherently flawed, and that regulatory capture by AI companies is already well underway. Many writers are familiar with how LLMs were initially trained on illicitly scraped datasets like Books1 and Books2, and on mass scraping across nearly every online platform. Their focus is on consent—who gets to decide how their work is used.
Ideally, there would be some form of legislation banning companies like Microsoft and Google from scraping the documents of their users to train their AI on. There needs to be some kind of ban on opt-out policies—consent CANNOT be assumed until revoked, it has to be assumed there is no consent until consent is actively given. There also needs to be environmental regulations that monitor the amount of water being used in ai storage facilities.
For example, I posted fanfiction online. I never intended to make a profit off of it, and I don’t lose any money from it being used to train AI. But that’s still a theft, because the AI company is still making money off of my work that it does not have the rights to use.
One writer suggests we “legally treat the bots as vampires”: unless invited in, they should not be able to consume the work. Solutions need to meet the moment, looking more like data-protection law with clear opt-ins and revocable consent, not just tweaks to existing copyright.
Make these companies pay the artists they ripped off. Make disclosure of the use of AI mandatory. Give people a viable way to opt out of seeing it, give artists a way to permanently rescind permission.
Labeling and transparency
One thing is clear—respondents want strong, standardized labeling of AI-generated or assisted content.
Writers and readers repeatedly request that creative work produced with AI assistance be clearly and prominently labeled, so that readers can make genuinely informed choices.
Many ask for a form of human‑centered verification—transparent systems which allow humans to vouch for their own work, without requiring them to modify their writing style to escape accusation, or rely on current AI detectors, which are notorious for false positives, and often fail when new LLMs evade detection.
Primarily, I believe transparency must be key. If creative projects use generative AI in any part of their work, they should need to disclose exactly how and why it was utilised.
Something like a nutrition label on creative works. AI use would have to be declared and maybe there could be a special 100% human stamp... The fight isn’t AI users versus non-AI users, it’s humanity against obsolescence.
Respondents are aware of the challenges: software boasting AI-detection capabilities is often unreliable. But the absence of transparency is arguably more frustrating. People want to know what they’re engaging with. For these writers and readers, “informed choices” are non-negotiable, even if enforcement or the systems themselves might be imperfect.
We need laws against branding AI as anything but AI. Laws against training AI on someone's work without consent and payment.
Community and cultural resistance
Beyond detection, we see a broader desire for ways to meaningfully support and define human work.
Writers and readers want to be able to trust the content they encounter, and want the platforms that host it to foster and support human work. Tools and communities that incorporate AI or AI-generated content should clearly disclose it, through simple, trustworthy signals that say: a person made this. At the same time, many are looking outside existing systems altogether, toward spaces where support for human creativity is the default.
Many respondents called for human-only creative spaces to serve as counterweights to the AI-driven monoculture:
If we can't put an end to AI, then we should separate AI and human work, and have designated spaces for each.
It’s important to not allow AI in some spaces. Companies are trying to convince that AI is a tool that you have to use by integrating it into everything, but I think it’s important to maintain spaces—especially creative ones—that don’t shove it in your face.
Continued and increased funding and support for artists at professional, amateur, and community levels… Overhaul the copyright system… Encourage the fan economy style, creative commons, transformative works.
What it means
These opinions are strongly held and widespread among creatives. Writers and readers aren’t Luddites shunning technology. They’re not so much worried about AI "coming for their jobs" as they are about losing the trust that gives their work meaning.
Unfortunately, these dynamics are already playing out, accelerated by the rapid expansion and normalization of generative AI in creative spaces. What we are witnessing is an active, ongoing crisis of culture and a widening rift in relationships—between writers and readers; between creators and the platforms they rely on; between individuals and the infrastructures that now sit between them; between the system that is attempting to overtake art and the humans who value it.
The future will be shaped by the choices we make: about where and how creative work lives, and about whether these spaces can protect human authorship, uphold trust, and stand behind the people doing the work.
Want to connect with a like-minded community and get the latest news on Ellipsus? Join our Discord to follow announcements and share your feedback.