AI writing does not fail because a sentence sounds like AI.
It fails when the human stops judging the work.
That is the real skill. Judgment. Stopping, reading, thinking, and revising. Asking whether the words actually do the job.
I have started to believe this is the dividing line between casual AI use and serious AI use. A serious AI user does not treat the output as the answer. They treat it as a draft, a proposal, a pile of material, or a first pass at something that still needs a human mind on it.
Without that step, AI becomes a way to produce more text while doing less thinking. That is how you get AI slop.
The Tell
Ethan Mollick’s LinkedIn post called out a short list of phrases that many frequent AI users now recognize on sight.
“Load bearing,” “I keep coming back to,” “Not just X, but Y”
His point was simple. When you use AI a lot, you start to see the patterns everywhere. The writing around you begins to carry the same tells: the same sentence shapes, the same polished transitions, the same tidy contrast moves.
I felt that immediately.
When I shared the post with my technology team, I wrote:
I’m there already. I cringe when I see it. When I use AI and I see this, especially “not this, but that,” I tell Codex about it and say, “how do we stop this from happening again?”
Over time, things get better.
Regardless, I read every line, and make edits and rewrite where necessary. If I don’t read every line, then it’s too much content and it needs to be reduced.
Otherwise it’s just AI slop.
That last rule has become a standard for me:
If I do not read every line, there is too much content.
If the work is too long for me to read, I should not pretend I reviewed it. If I did not review it, I should not send it. If I send it anyway, I have delegated my judgment to the machine.
This is where trust in my work starts to break.
AI Power Users Still Have To Think
Fast Company summarized Microsoft’s 2026 Work Trend Index under the headline that AI power users are pulling away from everyone else.
The useful part of that argument is how the better users work.
According to the report, Microsoft describes a group it calls “frontier professionals.” Rather than simply outsourcing every task to AI, they pause before starting work. They decide what AI should do and what should remain human. They treat AI output as a starting point. They keep some tasks human to keep their own skills sharp.
The more important gap is between people who use AI to extend their thinking and people who use AI to avoid thinking. Those behaviors lead to different results.
One produces better work.
The other produces more words.
How AI Slop Happens
AI slop usually does not look broken at first glance.
It often looks clean. It sounds organized. It uses complete sentences. It may even sound confident. But the more you read, the more the surface starts to feel disconnected from real judgment.
The signs are familiar:
- Generic claims.
- Weak evidence.
- Stock transitions.
- Neat contrast phrases.
- Polished conclusions that do not decide anything.
- Too many words for the amount of thought underneath.
The failure point is responsibility. AI can help draft the work, but a human still has to own what leaves the system.
For executive work, this is a real problem. A memo can sound polished while hiding a weak recommendation. A strategy note can summarize a lot and decide almost nothing. A project update can sound calm while avoiding the hard risk. A public post can sound insightful while repeating the same phrases everyone else is using.
That kind of writing costs trust.
It also wastes tokens.
If a prompt produces 1,200 words and only 300 are worth reading, the problem is not solely the model. The problem is the workflow. The human did not stop the machine soon enough, narrow the task enough, or judge the output hard enough.
The Skill Is Judgment
Judgment is a practical writing behavior here.
Judgment means asking:
- What am I trying to say?
- Who needs this?
- What is true?
- What is assumed?
- What did AI make sound cleaner than it really is?
- What needs evidence?
- What should be cut?
- What sounds like a pattern instead of me?
This is why I do not think the answer is an AI detector.
Detectors invite the wrong question: did AI write this?
The better question is: can I trust this writing?
That requires a different kind of system. A system that helps the writer make the work clearer, more specific, more grounded, and easier to review.
That is why I built a House Style System.
The House Style System
My House Style System started as a way to keep human-facing writing clear and useful. Over time, it became something more important: a practical quality system for AI-assisted writing.
It helps me fight the exact failure mode that bothers me when I see AI slop.
When a pattern shows up, I do not just fix that one paragraph. I ask how to reduce the chance that the same pattern comes back in the next draft.
Sometimes the answer is a rule, or a checklist, or a Vale warning. Sometimes it is just a reminder that the style gate cannot do the work a human reviewer must do.

The system has a few parts:
- A short house style guide for plain language.
- Domain modes for different kinds of writing.
- A style gate that catches objective issues.
- Examples and tests so the rules do not drift.
- Human review for facts, judgment, recommendations, and tone.
The most important boundary is simple:
The system is not an AI detector.
It does not try to prove authorship or to make AI writing pass as human. It certainly does not replace reading every line.
It is a writing quality system. Its job is to make judgment easier to apply and harder to skip.
What The System Catches
The academic assessment I saved for this work made one point that stayed with me: AI-like writing is more than vocabulary.
Experienced readers notice more than obvious phrases: sentence patterns, overly smooth structure, weak originality, and generic tone all matter, as do tidy conclusions, missing evidence, and a lack of real author judgment.
That gave me a better way to think about the House Style System.
The objective is not to ban every phrase that feels AI-adjacent. The objective is to notice where writing becomes generic, unsupported, over-polished, or too smooth for the seriousness of the work.
For example, the system can help flag:
- filler phrases,
- vague claims,
- undefined acronyms,
- long or overloaded sentences,
- repeated contrast formulas,
- conclusions that summarize without deciding,
- places where evidence and recommendation are blurred.
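To make checks like these concrete, here is a minimal sketch of an automated gate in Python. The phrase list, the contrast regex, and the sentence-length threshold are illustrative assumptions, not the actual House Style System rules:

```python
import re

# Hypothetical mini style gate. These patterns and thresholds are
# illustrative placeholders, not the real House Style System rules.
FILLER = ["in today's fast-paced world", "it's important to note", "delve into"]
CONTRAST = re.compile(r"\bnot (just |only )?\w+[^.]{0,40}, but\b", re.IGNORECASE)
MAX_SENTENCE_WORDS = 35

def style_gate(text: str) -> list[str]:
    """Return objective warnings only; judgment stays with the human."""
    warnings = []
    lowered = text.lower()
    for phrase in FILLER:
        if phrase in lowered:
            warnings.append(f"filler phrase: {phrase!r}")
    if CONTRAST.search(text):
        warnings.append("contrast formula: 'not X, but Y'")
    # Rough sentence split; good enough for a surface-level length check.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if len(sentence.split()) > MAX_SENTENCE_WORDS:
            warnings.append(f"long sentence ({len(sentence.split())} words)")
    return warnings
```

Note what this sketch cannot do: it flags surface patterns, but it has no idea whether a claim is true or a conclusion is earned. That boundary is the point.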
Those checks create friction in the right places.
But they are not enough.
A style gate owns surface checks: em dashes, filler, repeated phrasing, and long sentences. It can also catch other patterns that are easy to define.
Human review owns the harder questions: whether the claim is true, whether the evidence is strong enough, whether the conclusion has been earned, and whether the writing sounds like something I would stand behind.
The system supports judgment. The writer still owns it.
My Current Loop
The loop is simple:
1. Craft the idea, topic, problem, or concept I want to flesh out.
2. Collaborate with AI to draft, explore, compress, challenge, or restructure.
3. Read every line.
4. Cut anything I am not willing to stand behind.
5. Rewrite what sounds generic.
6. Check whether claims have support.
7. Name the pattern when something feels like AI slop.
8. Improve the system so the next draft starts from a better standard.
9. Repeat until satisfied.
Step 8 is crucial when working with AI.
If I see the same failure twice, I do not want to keep correcting it manually forever. I want the system to learn the standard.
That might mean adding a rule to the style guide, or creating a better prompt instruction. It might mean adding a Vale warning or adding an example of a bad pattern and a better rewrite.
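As one concrete example, a Vale rule targeting the contrast formula might look like the fragment below. The file path and regex are hypothetical, not copied from my repo:

```yaml
# styles/House/ContrastFormula.yml (hypothetical rule, for illustration)
extends: existence
message: "Rewrite the 'not X, but Y' contrast formula."
level: warning
ignorecase: true
raw:
  - 'not (just |only )?\w+[^.]{0,40}, but'
```

Once a pattern is encoded like this, the gate flags it in every future draft, so the same correction never has to be made by hand twice.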
The point is to turn judgment into a repeatable practice.
What AI Power Users Should Be Practicing
If you want to become better with AI, you must go beyond prompting.
Practice judgment.
Before you prompt, decide what you are asking AI to do. Is it drafting? Research framing? Editing? Compression? Counterargument? Structure? Cleanup?
After you get the output, slow down.
Read it like your name is going on it, because it is.
Ask what is missing and what sounds too smooth. Ask what the model made plausible without making true. Ask where the writing got longer because the thinking got thinner.
Then improve the loop.
That is what makes someone an AI power user: the discipline to stay responsible for the work, regardless of the tools, prompt length, or volume of output.
Where This Goes Next
I am sharing the House Style System publicly because I think more people need practical writing quality systems.
The starter repo is here: House Style System.
The House Style System is not an authorship scoring tool. It does not imply that AI-assisted writing is bad, and it does not claim that automation can replace human review.
It should give people a starting point:
- a short style guide,
- a few domain modes,
- a clear boundary around AI authorship,
- a style gate,
- rule examples,
- test fixtures,
- before-and-after rewrites,
- and a workflow that keeps judgment with the human.
The real AI writing skill is judgment.
If you use AI to write, you still have to stop, read, and think. You still have to decide what is true, what is useful, what is too much, and what is yours to stand behind.
That is the work.
AI can help with it.
It cannot own it for you.
Sources I Am Working From
- Ethan Mollick LinkedIn screenshot that prompted this essay.
- Fast Company: “AI power users are pulling away from everyone else, Microsoft says”.
- Microsoft: 2026 Work Trend Index Annual Report.
- Internal source pack: House Style System snapshots and academic-style assessment.