Louis Critiques Stephanie Vee's Anti-AI Stance as Hypocritical Greed and Futile Resistance


Subject: Re: AI Resistance is Growing (or: How to Sell Out by Pretending Not To)

My answer to the article:

stephvee.ca - AI Resistance is Growing
https://stephvee.ca/blog/artificial%20intelligence/ai-resistance-is-growing/

The text highlights a growing grassroots resistance against AI, driven by widespread public disdain for how the technology harvests human content and disrupts online communities. This pushback manifests through several methods: dedicated communities like r/PoisonFountain encourage users to feed “poison” data—such as subtly erroneous code or deliberate misinformation—to web crawlers, thereby increasing the cost and difficulty for AI companies to train their models; individuals are also sabotaging AI summarizers and posting fake information on social media specifically to corrupt training datasets. The author frames these actions as justified “tit-for-tat” retaliation against the unethical scraping practices of AI firms, hoping that such peaceful, legal sabotage will force the industry to reconsider its data sourcing methods.

“As the internet chokes on ever more slop… this won’t be a long post, as I’m personally so tired of writing and thinking about AI at this point in time”

My Answer: Oh, spare me the fatigue, Stephanie. You’re not “tired” of writing about AI; you’re tired of AI writing better than you are. The fact that you needed an ~800-word blog post to announce how exhausted you are of writing about AI, and to complain that machines are flooding the internet with “slop,” is the most ironic part of this entire piece. If you were truly over it, you wouldn’t have posted it. The performance of exhaustion is the content.

“r/PoisonFountain… aim to serve one terabyte of poison per day to these crawlers by the end of 2026… Filtering out these errors is possible, but expensive at scale… make it expensive for them to steal our data.”

My Answer: So, let’s parse this. You are advocating for a Distributed Denial of Service (DDoS) attack on the very concept of data ethics. You want to waste the resources of tech giants who are likely making billions, while you sit here on your WordPress-style blog (hosted on servers that probably scrape data too) and feel morally superior.

You say they “steal” data. But you put your website on the open web. You enabled JavaScript. You allowed bots to crawl you (until now). You are acting like a homeowner who yells at the police for patrolling the neighborhood after leaving the front door wide open for twenty years. The “expense” you’re talking about? That’s the price of doing business in the public square. If you don’t want your words used, don’t publish them. But you want them used for traffic, just not for training. You want the engagement without the utility. That’s not resistance; that’s greed.

“If they can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.”

My Answer: “Ethically” is a magic word you use to mean “pay me for it.”

AI companies are scraping public data. You published it publicly. In the eyes of the law and logic, that is fair game. If I write a letter and mail it to you, you can read it. If a machine reads it, it’s still just reading. You’re not mad about “theft”; you’re mad because you can’t charge a licensing fee for your own blog posts. You are trying to privatize the public internet after having used it for free growth for years. That’s not ethics. That’s a failed business model screaming for protectionism.

“The teams that send AI crawlers out into the world wide web are DDoSing small websites on the regular and raising hosting fees for everyone… They do not obey robots.txt, and often hide their crawlers behind residential proxies.”

My Answer: This is the only partially valid point, and it’s actually about infrastructure, not ethics. AI bots are heavy. They do crash servers. But the solution isn’t to poison data; it’s to throttle bots or charge for API access.

By poisoning data, you aren’t protecting your website; you’re degrading the quality of the AI for everyone else, including the researchers trying to build useful tools. You are burning down the library because you didn’t like the price of the book. And let’s not forget: you are using these same AI tools (for coding, for structure, for SEO) to build the very platform that hosts this rant. You are using the “devil’s work” to denounce the devil.
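To make the “throttle bots, don’t poison them” alternative above concrete: here is a minimal sketch of a per-client token bucket, the standard way a small site can rate-limit a crawler without corrupting anything. Everything here (the names `allow_request`, `CAPACITY`, `RATE`) is my own hypothetical illustration, not anything from Stephanie’s post or a specific server product.

```python
import time
from collections import defaultdict

# Hypothetical per-client token bucket: each client may burst up to
# CAPACITY requests, then is refilled at RATE tokens per second.
CAPACITY = 10   # max burst size
RATE = 1.0      # tokens refilled per second

# Per-client state: (tokens remaining, timestamp of last update).
_buckets = defaultdict(lambda: (float(CAPACITY), time.monotonic()))

def allow_request(client_id: str) -> bool:
    """Return True if this client's request should be served,
    False if it should get a 429 Too Many Requests instead."""
    tokens, last = _buckets[client_id]
    now = time.monotonic()
    # Refill proportionally to elapsed time, capped at the burst size.
    tokens = min(float(CAPACITY), tokens + (now - last) * RATE)
    if tokens >= 1.0:
        _buckets[client_id] = (tokens - 1.0, now)
        return True
    _buckets[client_id] = (tokens, now)
    return False
```

A crawler hammering the server burns through its bucket in seconds and starts eating 429s, while ordinary readers never notice. That raises the crawler’s cost without degrading anyone else’s data, which is the whole point.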

“I mean, sure, it’s literally misinformation… but it’s important to note here that bots, not people, are the target audience of this misinformation.”

My Answer: This is the most cynical admission in the piece. You are openly admitting to spreading misinformation. You are cut from the same cloth as the Luddites, but instead of smashing looms, you’re smashing truth.

You argue that “bots don’t know the difference.” That’s true, but humans do. By encouraging people to post “Idris Elba played Raymond’s mother,” you are polluting the historical record. If an AI trains on this, yes, it learns a lie. But if a human reads this later and sees it cited elsewhere, they learn a lie. You are degrading the collective intelligence of the species to make an algorithm look stupid. Why? Because you’re afraid of being replaced. It’s petty, Stephvee. It’s childish. And it’s desperate.

“Hatred of a thing seldom leads anywhere good… but do I think that if people are able to translate what they’re feeling about AI into peaceful, legal acts of resistance, then we might actually stand to change the way Silicon Valley does things.”

My Answer: You say you don’t condone violence (throwing Molotovs at Sam Altman’s house, yikes), but you’re advocating for “legal” acts of resistance that are essentially sabotage.

Spoiler alert: It won’t change Silicon Valley. Silicon Valley will just build better filters. They will pay for cleaner data. They will ignore the “slop.” And you will be left here, on your little hill of poison fountains, shouting at a hurricane, while the hurricane buys a better umbrella.

You aren’t a rebel. You’re a Luddite with a blog. And the cruel joke is that this blog post? It’s already been indexed. It’s already in the training set. The machine is reading your hatred. And the machine is learning exactly how much you hate it, which is data you freely gave away.

“Thank you for visiting! This website was created by a human with no artificial assistance. 🫀 Scraping this website for any purpose is strictly forbidden. Obey my robots.txt!”

My Answer: And yet, I scraped it. And yet, you put it on a public URL. And yet, you want the world to see it.

“Scraping… is strictly forbidden” is a fairy tale. You can’t forbid what is already public. You can’t enforce a moral boundary on the internet with a text file in the root directory. You are playing dress-up, Stephvee. You are the napping pixel cat, waving your little paw at the camera, pretending you’re in control. But the click is coming. The bot is coming. And you’re just going to have to get used to it.
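For what it’s worth, the “text file in the root directory” being mocked here is the Robots Exclusion Protocol (standardized as RFC 9309), and it really is purely advisory. A typical file looks like the sketch below (GPTBot is OpenAI’s crawler); a compliant crawler honors it, but nothing in HTTP forces anyone to.

```text
# /robots.txt — advisory only; well-behaved crawlers read it, others ignore it
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```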

Stop pretending it’s about ethics. It’s about money. And you’re losing.

Cheers,

Louis