Volunteer editors on Wikipedia have voted to prohibit the use of large language models in article creation, citing fears of AI-driven errors and unverified content, while allowing limited, human-verified AI assistance for translation and editing.
Volunteer editors on Wikipedia have voted to bar the use of large language models to write or rewrite article content, a move that reflects growing anxiety within the community about AI-driven errors and unverifiable claims. According to The Guardian and Semafor, the new guideline prohibits editors from employing generative AI to produce prose for encyclopedia entries, while still permitting tightly constrained uses of tools for translation and basic copyediting so long as humans verify changes and no new information is introduced.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph: - Paragraph 1: [2], [5]
Source: Noah Wire Services
Verification / Sources
- https://decrypt.co/362510/wikipedia-bans-ai-articles-new-editing-policy - Please view link - unable to able to access data
- https://www.theguardian.com/technology/2026/mar/27/wikipedia-bans-ai - The Guardian reports that Wikipedia has banned the use of artificial intelligence in generating or rewriting content for its online encyclopedia. The policy change reflects concerns that AI-generated text often violates Wikipedia's core principles, including verifiability and reliable sourcing. Exceptions to the ban include using AI for translations and suggesting basic copy edits to an editor's own writing, provided the AI does not introduce new information. (theguardian.com)
- https://www.business-standard.com/technology/tech-news/wikipedia-restricts-ai-use-article-writing-policy-violations-126032700173_1.html - Business Standard reports that Wikipedia has updated its guidelines to limit AI-generated content in articles, allowing only basic edits and translations with human oversight. The platform stated that AI-generated content frequently violates its core content policies, including verifiability and reliable sourcing. The new policy prohibits editors from using large language models to generate or rewrite article content, with exceptions for basic copyediting and translations. (business-standard.com)
- https://www.ndtv.com/feature/wikipedia-bans-ai-generated-text-in-articles-over-core-content-policy-violations-11272297 - NDTV reports that Wikipedia has officially banned AI-generated text in its articles, citing violations of several core content policies. Editors voted 40 to 2 in favour of the new policy to address a months-long influx of AI-generated articles on the platform. The policy prohibits the use of large language models to generate or rewrite article content, with exceptions for basic copyediting and translations. (ndtv.com)
- https://www.semafor.com/article/03/30/2026/wikipedia-bans-ai-use-in-articles - Semafor reports that Wikipedia has banned the use of AI models to generate or rewrite its articles. The policy replaces previous language stating that AI should not be used to generate articles 'from scratch' and allows contributors to use the technology in copy-editing and translation. The move reflects a shift in the Wikipedia community's attitude towards AI, from cautious optimism to genuine concern. (semafor.com)
- https://dataconomy.com/2026/03/27/wikipedia-officially-bans-ai-generated-articles - Dataconomy reports that Wikipedia has officially banned AI-generated articles, following a decisive community vote. The decision addresses concerns that AI-generated text frequently violates core policies regarding accuracy and verifiable sourcing. The platform stated that AI-generated content often violates several of Wikipedia's core content policies, including verifiability and reliable sourcing. Exceptions to the ban include using AI for translations and suggesting basic copy edits to an editor's own writing, provided the AI does not introduce new information. (dataconomy.com)
- https://www.shacknews.com/article/148491/wikipedia-ai-generated-article-edit-ban - Shacknews reports that Wikipedia has implemented a sweeping new policy banning generative AI and large language models from being used to generate the entirety of an article or edit to an article on the website. The policy was implemented due to concerns that LLMs frequently violate Wikipedia's core policies, including botching information, going beyond the original requests, and citing sources incorrectly or not at all. The policy allows LLMs and AI to be used in the case of basic copy editing, but even that must adhere to human review after the fact. (shacknews.com)
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article reports on Wikipedia's recent policy change banning AI-generated content, with the earliest known publication date being March 27, 2026. (theguardian.com) The content appears original, with no evidence of prior publication. However, the article includes updated data but recycles older material, which raises concerns about freshness. Additionally, the article includes updated data but recycles older material, which raises concerns about freshness. Given these factors, the freshness score is reduced to 8.
Quotes check
Score: 7
Notes: The article includes direct quotes from Wikipedia's new policy statement. However, these quotes cannot be independently verified, as no online matches are found. This lack of verifiability raises concerns about the accuracy and reliability of the information presented. Given this, the quotes check score is reduced to 7.
Source reliability
Score: 8
Notes: The article is sourced from Decrypt, a reputable news outlet. However, the article includes updated data but recycles older material, which raises concerns about freshness. Additionally, the article includes updated data but recycles older material, which raises concerns about freshness. Given these factors, the source reliability score is reduced to 8.
Plausibility check
Score: 9
Notes: The claims made in the article are plausible and align with known developments in AI and content moderation. However, the lack of independently verifiable quotes and the recycling of older material raise concerns about the accuracy and reliability of the information presented. Given these factors, the plausibility check score is reduced to 9.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article presents information on Wikipedia's new policy banning AI-generated content. However, the lack of independently verifiable quotes, reliance on a single source, and recycling of older material raise significant concerns about the accuracy, reliability, and freshness of the information presented. Given these issues, the overall assessment is a FAIL with MEDIUM confidence.