AI read "1984" and decided to ban it
Author: Curie, Deep Tide TechFlow
Last week, a secondary school in Manchester, England, used AI to review its library.
The AI generated a list of 193 books to be removed, each accompanied by a reason. George Orwell’s “1984” was prominently included, with the justification being “contains themes of torture, violence, and sexual coercion.”
“1984” depicts a world where the government monitors everything, rewrites history, and decides what citizens can or cannot see. Now, AI has done the same thing for a school, and it may not even understand what it is saying.
The school librarian found this unreasonable and refused to fully implement the AI’s suggestions.
The school then initiated an internal investigation against her on the grounds of “child safety,” accusing her of introducing inappropriate books to the library, and reported her to the local government. She took medical leave due to stress and eventually resigned.
Ironically, the local government’s investigation concluded that she had indeed violated child safety protocols, and the complaint against her was upheld.
Caroline Roche, chair of the UK School Library Association, stated that this conclusion means she can no longer work in any school.
Those who resisted AI’s judgment lost their jobs, while those who signed off on AI’s judgments faced no consequences.
Subsequently, the school admitted in internal documents that all categorizations and reasons were generated by AI, stating, “Although the categorization was generated by AI, we believe it is generally accurate.”
A school entrusted the judgment of “what books are suitable for students to read” to AI, which returned an answer it didn’t even comprehend, and then a human administrator rubber-stamped it without serious review.
After this incident was exposed by the UK free speech organization Index on Censorship, the issues raised extend far beyond a single school’s bookshelf:
When AI begins to decide what content is appropriate and what is dangerous, who determines if AI’s judgments are correct?
Wikipedia Shuts Its Doors to AI
In the same week, another institution responded to this question with action.
The school allowed AI to decide what people could read. The world’s largest online encyclopedia, Wikipedia, made the opposite choice: it would not allow AI to decide what goes into its pages.
In the same week, English Wikipedia officially passed a new policy prohibiting the use of large language models to generate or rewrite entry content. The vote resulted in 44 in favor and 2 against.
The direct cause was an AI account called TomWikiAssist. In early March of this year, this account autonomously created and edited multiple entries on Wikipedia, which prompted urgent action from the community once discovered.
An AI can write an entry in just a few seconds, but it takes volunteers hours to verify the facts, sources, and wording of an AI-generated entry for accuracy.
Wikipedia’s editing community is limited in size. If AI can produce content in unlimited quantities, human editors simply cannot keep up.
But that’s not the most troubling part. Wikipedia is one of the most important training data sources for global AI models. AI learns from Wikipedia and then uses that knowledge to write new Wikipedia entries, which are then consumed by the next generation of AI models for further training.
Once AI-generated errors mix in, they are amplified with every pass through this cycle, a self-reinforcing form of AI poisoning:
AI contaminates training data, and the training data contaminates AI.
However, Wikipedia’s policy does leave two openings for AI: editors may still use it to refine their own writing or to assist with translation. But the policy specifically warns that AI may “exceed your requests, alter the meaning of the text, and make it inconsistent with the cited sources.”
Human writers make mistakes, and Wikipedia has relied on community collaboration for over twenty years to correct them. AI makes mistakes differently; what it fabricates can seem more real than reality, and it can be mass-produced.
A school believed AI’s judgment, resulting in the loss of a librarian. Wikipedia chose not to believe and shut the door entirely.
But what if even those who create AI start to lose faith?
Creators of AI Are Now Hesitant
While external institutions are closing their doors to AI, the AI companies themselves are pulling back.
In the same week, OpenAI indefinitely shelved the “adult mode” for ChatGPT. This feature was originally planned to launch last December, allowing age-verified adult users to engage in erotic conversations with ChatGPT.
CEO Sam Altman personally previewed it last October, stating it was to “treat adult users like adults.”
After being delayed three times, it was ultimately scrapped.
According to the Financial Times, OpenAI’s internal health advisory board unanimously opposed this feature. The advisors’ concerns were quite specific: users might develop unhealthy emotional dependencies on AI, and minors would surely find ways to bypass age verification.
One advisor’s statement was even more direct: without significant improvements, this could become a “sexy suicide coach.”
The error rate of the age verification system exceeds 10%. Given ChatGPT’s scale of 800 million active users weekly, 10% translates to tens of millions who could be misclassified.
The adult mode was not the only product cut this month. The AI video tool Sora and the instant checkout feature built into ChatGPT were also taken offline. Altman stated the company needs to focus on its core business, cutting “side tasks.”
Yet OpenAI is simultaneously preparing for an IPO.
A company sprinting toward an IPO is intensively cutting potentially controversial features; this action might more accurately be described as refocusing.
Five months ago, Altman was still saying to treat users like adults; five months later, he discovered his company had yet to clarify what content users could or could not engage with.
Even the creators of AI lack answers. So who should draw this line?
The Uncatchable Speed Gap
When you look at these three events together, a central conclusion is easily drawn:
The speed at which AI produces content and the speed at which humans review content are no longer on the same scale.
The choice made by the school in Manchester becomes much clearer in this context. How long would it take for the librarian to read through 193 books and make judgments? Letting AI run through them takes just minutes.
The principal opted for the solution that took a few minutes. Did he genuinely trust AI’s judgment? More likely, he simply didn’t want to spend the time.
This is an economic issue. The cost of generation approaches zero, while the cost of review is entirely borne by humans.
Thus, every institution affected by AI is forced to respond in the most brute-force manner: Wikipedia simply prohibits it, OpenAI directly cuts its product lines. There is no well-thought-out solution; all responses are driven by the need to act quickly, to plug the gaps first.
“Plugging the gaps first” is becoming the norm.
AI’s capabilities evolve every few months, while discussions about what content AI can engage with lack even a decent international framework. Each institution only manages the line in its own backyard, and the lines between institutions contradict each other without any coordination.
AI’s speed is still accelerating. The number of reviewers will not increase. This gap will only widen until one day something far more serious than banning “1984” occurs.
By then, drawing the line may be too late.