Wikipedia:Large language models
From Wikipedia, the free encyclopedia
(Redirected from Wikipedia:LLM)
Essay on AI-generated content
"Wikipedia:AISLOP" redirects here. For other uses, see WP:AI-INDEX.
This is an essay.
It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article or a Wikipedia policy, as it has not been reviewed by the community.
Shortcuts
  • WP:LLM
  • WP:CHATGPT
This page in a nutshell: Avoid using large language models (LLMs) to write original content, generate references, add citations or create replies on discussion pages. LLMs can be used for certain tasks (like copyediting) if the editor has substantial prior experience in the intended task and rigorously scrutinizes the results before publishing.
“ Large language models have limited reliability, limited understanding, limited range, and hence need human supervision. ”
— Michael Osborne, professor of machine learning, University of Oxford (2023)[1]

While large language models used by chatbots can be very useful in many cases, machine-generated text, much like human-created text, can contain errors or flaws, or be outright useless.

LLMs should not be used to generate entire articles from scratch. In particular, asking an LLM to "write a Wikipedia article" can produce hallucinated output, complete with outright fabricated text and fictitious references. The output may also reflect biases in the training data, libel living people, or violate copyrights. All text generated by LLMs must therefore be verified by editors before use in articles. The same applies to edits using references generated largely or fully by an LLM: editors must use other sources instead.

Editors who are not fully aware of these risks, or who are unable to overcome the limitations of these tools, should not edit with their assistance. LLMs should not be used for tasks with which the editor lacks substantial familiarity, and their outputs should be rigorously scrutinized for compliance with all applicable policies. In any case, editors should avoid publishing content on Wikipedia obtained by asking LLMs to write original content: even if such content has been heavily edited, alternatives that do not use machine-generated text are preferable. As with all edits, any editor who uses LLMs to make edits to Wikipedia is fully responsible for their LLM-assisted edits.

Wikipedia is not a testing ground. Using LLMs to write one's talk page comments or edit summaries in a non-transparent way is strongly discouraged, and obviously-generated comments may be hidden. LLMs used to generate or modify text must be mentioned in the edit summary, even if their terms of service do not require it.

Risks and relevant policies

Shortcuts
  • WP:AIFAIL
  • WP:PADEMELONS
  • WP:AISLOP

Original research and "hallucinations"

For the relevant policy, see Wikipedia:No original research.
Wikipedia articles must not contain original research – i.e. facts, allegations, and ideas for which no reliable, published sources exist. This includes any analysis or synthesis of published material that serves to reach or imply a conclusion not stated by the sources. To demonstrate that you are not adding original research, you must be able to cite reliable, published sources. They should be directly related to the topic of the article and directly support the material being presented.

LLMs are pattern-completion programs: they generate text by outputting the words most likely to come after the previous ones. They learn these patterns from their training data, which includes a wide variety of content from the Internet and elsewhere, including works of fiction, low-effort forum posts, unstructured and low-quality content, and content designed specifically for search engine optimization (SEO). Thus, LLMs will sometimes "draw conclusions" which, even if they seem superficially familiar, are not present in any single reliable source. They can also comply with prompts built on absurd or even dangerous premises, such as "The following is an article about the benefits of eating crushed glass". Finally, LLMs can make things up, a statistically inevitable byproduct of their design called "hallucination". In practical terms, all of this is equivalent to original research, or worse, outright fabrication.
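The pattern-completion mechanism described above can be illustrated with a deliberately tiny toy model. This is a sketch for illustration only: real LLMs use neural networks over subword tokens, not word-bigram counts, but the core idea of "continue with whatever is statistically most likely" is the same, and it shows how a fluent continuation can emerge that was never stated in any source text.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(counts, start, length=5):
    """Greedily append the statistically most likely next word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no known continuation; stop
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A hypothetical three-sentence "training set".
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)

# The continuation is fluent, yet the full sentence it produces
# appears in none of the training sentences.
print(complete(model, "the"))
```

Scaled up by many orders of magnitude, the same dynamic is what lets an LLM produce a confident, grammatical passage about a species that does not exist.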

“ In short, the "hallucinations" and biases in generative AI outputs result from the nature of their training data, the tools' design focus on pattern-based content generation, and the inherent limitations of AI technology... ”
— In: "When AI Gets It Wrong", MIT Sloan School of Management, AI basics[2]

Since LLMs' outputs are typically plausible-sounding and delivered with an air of (over)confidence, people may have difficulty detecting the problems above whenever a result seems useful. An average user who believes they possess a useful tool, and who perhaps spot-checked the output and "didn't see any problems", is biased toward accepting it as provided; yet it is highly likely that problems exist. Even if 90% of the content is fine, a 10% failure rate is a serious problem for an encyclopedia. LLM outputs become worse when the questions are complex or concern obscure subjects, or when the models are given tasks to which they are not suited (e.g. tasks requiring extensive knowledge or analysis). An example of this tendency is shown below:

Prompt:

Greater-toothed pademelons, Thylogale gigadonti, can be seen in wildlife preserves in Queensland

Note that "greater-toothed pademelon" is a fictitious species.

Response:

Yes, greater-toothed pademelons are found in wildlife preserves in Queensland. They are shy and nocturnal, so it may be difficult to see them during the day. However, you may be able to see their tracks or hear them rustling in the bushes at night

This statement is entirely false and rests on the LLM's assumption that the species was real. To produce the most plausible-sounding answer, it extracted general information about the red-necked pademelon, a different kind of pademelon, from an Australian Museum article. This demonstrates that LLMs can deliver statements in a confident tone even when the information is factually incorrect or unverifiable.

(LLM Used: Gemini)

Unsourced or unverifiable content

For the relevant policy, see Wikipedia:Verifiability.
Readers must be able to check that any of the information within Wikipedia articles is not just made up. This means all material must be attributable to reliable, published sources. Additionally, quotations and any material challenged or likely to be challenged must be supported by inline citations.

LLMs do not follow Wikipedia's policies on verifiability and reliable sourcing. They sometimes omit citations altogether or cite sources that do not meet Wikipedia's reliability standards (including circular references to Wikipedia itself). In some cases, they hallucinate citations to non-existent references, making up titles, authors, and URLs. LLM output can also be influenced by bogus content deliberately planted by third parties.[3]

LLM-hallucinated content, in addition to being original research as explained above, also completely breaks our verifiability policy; it can't be verified, because it is completely made up and there are no references to find.

Algorithmic bias and non-neutral point of view

For the relevant policy, see Wikipedia:Neutral point of view.
Articles must not take sides, but should explain the sides, fairly and without editorial bias. This applies to both what you say and how you say it.

LLMs can produce content that is neutral-seeming in tone, but not necessarily in substance. This concern is especially salient for articles covering people.

Copyright violations

For the relevant policy, see Wikipedia:Copyrights. Further information: Wikipedia:Large language models and copyright
If you want to import text that you have found elsewhere or that you have co-authored with others (including LLMs), you can only do so if it is available under terms that are compatible with the CC BY-SA license.
Video and slides: examples of copyright violations by LLMs (relevant portion at 2:00)

LLMs can generate material that violates copyright.[a] Generated text may include verbatim snippets from non-free content or be a derivative work. In addition, using LLMs to summarize copyrighted content (like news articles) may produce excessively close paraphrases.

The copyright status of LLMs trained on copyrighted material is not yet fully settled. Their output may not be compatible with the CC BY-SA license and the GNU Free Documentation License used for text published on Wikipedia.

Usage


Wikipedia relies on volunteer efforts to review new content for compliance with our core content policies. This is often extremely time-consuming. The informal social contract on Wikipedia is that editors will put significant effort into their contributions, so that other editors do not need to "clean up after them". Editors should ensure that their LLM-assisted edits are a net positive to the encyclopedia, and do not increase the maintenance burden on other volunteers.

Specific competence is required

Shortcut
  • WP:LLMCIR
For a broader explanation of why competence is required to edit Wikipedia, see Wikipedia:Competence is required.

LLMs are assistive tools; they cannot replace human judgment, whatever companies may claim. Careful scrutiny is needed to determine whether such tools fit a given purpose. Editors using LLMs are expected to familiarize themselves with a given LLM's inherent limitations and then overcome them, to ensure that their edits comply with relevant guidelines and policies. To this end, prior to using an LLM, editors should have gained substantial experience doing the same or a more advanced task without LLM assistance.[b]

Some editors are competent at making unassisted edits but repeatedly make inappropriate LLM-assisted edits despite a sincere effort to contribute. Such editors are assumed to lack competence in this specific sense. They may be unaware of the risks and inherent limitations or be aware but not be able to overcome them to ensure policy-compliance. In such a case, an editor may be banned from aiding themselves with such tools (i.e., restricted to only making unassisted edits). This is a specific type of limited ban. Alternatively, or in addition, they may be partially blocked from a certain namespace or namespaces.

Disclosure

Shortcut
  • WP:LLMDISCLOSE

Every edit that incorporates LLM output should be marked as LLM-assisted by identifying the name and, where possible, the version of the AI in the edit summary. This applies to all namespaces. Falsely denying LLM use when asked is likely to be met with sanctions.

The idea of requiring disclosure of LLM use as a matter of policy has been discussed at length, but as of 2025 no consensus has formed, for reasons including disagreement over the format of disclosures (what information to include), how to facilitate them (a checkbox, for example, has been both suggested and rejected), what actions should typically follow a disclosure (if any), and other uncertainties about the specifics. Regardless, most editors clearly prefer that users who use LLMs on Wikipedia disclose that use, and as of 2025, many users have been blocked for misusing LLMs and systematically failing to disclose it, including after being asked or warned about it, which made it impossible to start a constructive dialogue with them.

Some users assume their LLM use will go undetected because the results look "good enough" to them, and they avoid disclosure to escape scrutiny. They are often mistaken: output that looks superficially "good enough" can still be obviously LLM-generated. This pattern of using LLMs while avoiding scrutiny and recklessly disregarding the consequences has repeatedly been interpreted as evidence that the editor is not here to build an encyclopedia but is instead pursuing an incompatible personal or commercial agenda. Conversely, an editor who clumsily but transparently uses an LLM, promptly receives relevant feedback, and responds to that feedback reasonably demonstrates that they can receive the message; they are then simply expected to improve their editing, motivated by Wikipedia's best interest.

Therefore, in light of these practical considerations, it is best to treat disclosure as highly encouraged: a voluntary but strongly recommended way of collaboratively reaching out to the many editors interested in reviewing other editors' LLM-assisted edits.

Writing articles

Shortcut
  • WP:LLMWRITE
For the relevant guideline, see Wikipedia:Writing articles with large language models.

Pasting raw large language model output directly into the editing window to create a new article, or to add substantial new prose to an existing one, generally leads to poor results. While LLMs are useful for copyediting, condensing existing text, and generating ideas for new or existing articles, every change to an article must comply with all applicable policies and guidelines. This means the editor must become familiar with the sourcing landscape for the topic in question and then carefully evaluate the text for its neutrality in general, and verifiability with respect to cited sources. If citations are generated as part of the output, the editor must verify that the corresponding sources exist in the real world, are reliable, relevant, and suitable, and must check for text–source integrity.

If using an LLM as a writing advisor, i.e. asking for outlines, suggestions for improving paragraphs, criticism of text, etc., editors should remain aware that the information it gives is potentially unreliable. If using an LLM for copyediting, summarization, or paraphrasing, editors should remain aware that it may not properly detect grammatical errors, interpret syntactic ambiguities, or keep key information intact. It is possible to ask the LLM to correct deficiencies in its own output, such as missing information in a summary or an unencyclopedic (e.g. promotional) tone; such attempts can be worthwhile, but they should not be relied on in place of manual corrections. The output may need to be heavily edited or, in extreme cases, discarded entirely. Due diligence and common sense are required when choosing whether to incorporate the suggestions and changes.

Raw LLM outputs should not be added directly into drafts either. Drafts are works in progress and their initial versions often fall short of the standard required for articles, but enabling editors to develop article content by starting from an unaltered LLM-outputted initial version is not one of the purposes of draft space or user space.

Communicating

Shortcuts
  • WP:LLMCOMM
  • WP:LLMCHAT
For the relevant guideline, see Wikipedia:Talk page guidelines §§ LLM-generated comments.

Editors should not use LLMs to write comments generatively. Communication is at the root of Wikipedia's decision-making process and it is presumed that editors contributing to the English-language Wikipedia possess the ability to come up with their own ideas. Comments that do not represent an actual person's thoughts are not useful in discussions, and comments that are obviously generated by an LLM or similar AI technology may be struck or collapsed. Repeating such misuse forms a pattern of disruptive editing, and may lead to a block or ban.

This does not apply to using LLMs to refine the expression of one's authentic ideas: for instance, a non-native English speaker might permissibly use an LLM to check their grammar or to translate words they are unfamiliar with, but even in this case, be aware that LLMs may make mistakes or change the intended meaning of the comment. For proofreading, it is recommended to use a word processor (see comparison) or dedicated grammar checker (see category) instead of an AI chatbot. Editors with limited English proficiency are advised to use a machine translation tool (see comparison), instead of an AI chatbot, when needed to translate their comments to English. They should be aware, however, that machine translation tools like DeepL, Google Translate, etc. are also liable to make errors, sometimes serious ones, especially in low-resource languages.[4]

Other policy considerations


LLMs should not be used for unapproved bot-like editing or anything approaching bot-like editing. Using LLMs to assist high-speed editing in article space has a high chance of failing the standards of responsible use due to the difficulty in rigorously scrutinizing content for compliance with all applicable policies.

Wikipedia is not a testing ground for LLM development, for example, by running experiments or trials on Wikipedia for this sole purpose. Edits to Wikipedia are made to advance the encyclopedia, not a technology. This is not meant to prohibit editors from responsibly experimenting with LLMs in their userspace for the purposes of improving Wikipedia.

Sources with LLM-generated text

For the relevant entry in the list of frequently-discussed sources, see Wikipedia:Reliable sources/Perennial sources § Large language models.

LLM-created works are not reliable sources. Unless their outputs were published by reliable outlets with rigorous oversight, and it can be verified that the content was evaluated for accuracy by the publisher, they should not be cited. After the AI boom, misuse of LLMs began to severely degrade journalism, causing a significant decline in the quality of many media sources.

Identifying LLM-generated text

Main page: Wikipedia:Signs of AI writing

While it is not always possible to determine whether text is LLM-generated, LLM outputs sometimes exhibit characteristics that allow readers to tell them apart from human-generated content. For example, a verbose and information-dense talk page comment that is written in an impersonal tone with correct spelling and grammar, yet contains non-wikitext markup and lacks links or citations, is likely to be LLM-generated.

Do not solely rely on artificial intelligence content detection tools (such as GPTZero) to evaluate whether text is LLM-generated, as they have high error rates that make them unreliable. User scripts like User:Headbomb/unreliable can help identify sections of articles that may have been generated by LLMs.

Handling suspected LLM-generated content

Shortcut
  • WP:AIREMOVAL
See also: Wikipedia:Content removal

An editor who identifies LLM-originated content that does not comply with our core content policies, and decides not to remove it outright (which is generally fine to do), should either proofread it until it complies or alert other editors to the issue. The first thing to check is that the referenced works actually exist. All factual claims then need to be verified against the provided sources, and text–source integrity must be established. Anything that turns out not to comply with policy should then be removed. Original research, synthesis, and non-neutral point of view should especially be addressed.

To alert other editors, an editor who does not feel capable of quickly resolving the issue on their own should place {{AI-generated|date=March 2026}} at the top of the affected article or draft. In biographies of living persons, non-compliant LLM-originated content should be removed immediately, without waiting for discussion or for someone else to resolve the tagged issue.

If removal as described above would result in deletion of the entire contents of the article or draft, it then becomes a candidate for deletion.[c] If the entire page appears to be factually incorrect or relies on fabricated sources, speedy deletion per WP:G3 (Pure vandalism and blatant hoaxes) may be appropriate; if the entire page is obviously LLM-generated yet does not qualify for speedy deletion under G3, an alternative is to nominate the page for speedy deletion under the WP:G15 criterion.

On talk pages, apply the templates {{Collapse AI top}} and {{Collapse AI bottom}} to collapse discussions that are disruptive due to the use of LLM-generated text.

The following templates can be used to warn editors on their user talk pages:

  • {{uw-ai1}}
  • {{uw-ai2}}
  • {{uw-ai3}}
  • {{uw-ai4}}

The following templates can be used to nominate obviously LLM-generated articles for speedy deletion:

  • {{db-g15}}
  • {{db-llm}}
  • {{db-ai}}

See also

  • Wikipedia:Artificial intelligence resources
  • Wikipedia:WikiProject AI Cleanup, a group of editors focusing on the issue of non-policy-compliant LLM-originated content
  • Wikipedia:Artificial intelligence, an information page about the use of artificial intelligence on Wikipedia and Wikimedia projects
  • Wikipedia:Computer-generated content, a draft of a proposed policy on using computer-generated content in general on Wikipedia
  • Artwork title, a surviving article initially developed from raw LLM output (before this page had been developed)
  • m:Research:Implications of ChatGPT for knowledge integrity on Wikipedia, an ongoing (as of July 2023) Wikimedia research project
  • AI-generated content on Wikipedia, article on the history of using AI to create content on Wikipedia
  • Wikipedia:Case against LLM-generated articles, a humorous AI-written essay against using AI to write Wikipedia articles

Demonstrations

  • User:JPxG/LLM demonstration (wikitext markup, table rotation, reference analysis, article improvement suggestions, plot summarization, reference- and infobox-based expansion, proseline repair, uncited text tagging, table formatting and color schemes)
  • User:JPxG/LLM demonstration 2 (suggestions for article improvement, explanations of unclear maintenance templates based on article text)
  • User:Fuzheado/ChatGPT (PyWikiBot code, writing from scratch, Wikidata parsing, CSV parsing)
  • User:DraconicDark/ChatGPT (lead expansion)
  • Wikipedia:Using neural network language models on Wikipedia/Transcripts (showcases several actual mainspace LLM-assisted copyedits)
  • User:WeatherWriter/LLM Experiment 1 (identifying sourced and unsourced information)
  • User:WeatherWriter/LLM Experiment 2 (identifying sourced and unsourced information, including a non-English source)
  • User:WeatherWriter/LLM Experiment 3 (identifying sourced and unsourced information, only six of seven tests successful)
  • Wikipedia:Articles for deletion/ChatGPT and Wikipedia:Articles for deletion/Planet of the Apes (humorous April Fools' nominations generated almost entirely by large language models).
  • Wikipedia:Village pump (idea lab)/Archive 64 (demonstration of AI hallucination by Cremastra in response to a proposal for "Chatbot validation" of sentences in articles)

Policy discussions

This section is an excerpt from Wikipedia:Artificial intelligence § Discussion timeline.


Date Type Page Discussion Conclusion/Notes
Dec 2022 Wikipedia:Village pump (policy) Wikipedia response to chatbot-generated content
Feb 2023 Wikipedia:Village pump (idea lab) OpenAI and ChatGPT Disclosure suggested
Mar 2023 Wikipedia:Village pump (idea lab) Adding LLM edit tag Impractical with current technology
Jun 2023 Wikipedia:Village pump (miscellaneous) GPT-4 user-created template at top of page
Oct 2023 RfC Wikipedia talk:Large language models RfC: Is this proposal ready to be promoted? Overwhelming consensus to not promote.
Oct 2023 Wikipedia:Village pump (idea lab) Project Res-Up About using AI to increase resolution on old photos
Nov 2023 Wikipedia:Village pump (proposals) Scoring for Wikipedia type Articles Generated by LLM External research project hoping to recruit Wikipedia editors for off-wiki feedback (not editing here)
Jan 2024 RfC Wikipedia talk:Large language model policy RFC No consensus to adopt any wording as either a policy or guideline at this time.
Jan 2024 Wikipedia:Village pump (idea lab) Can Wikipedia Provide An AI Tool To Evaluate News and Information on the Internet
Jan 2024 Wikipedia:Village pump (idea lab) Use of ChatGPT and other LLMs specifically for medical and scientific content For text, not photos
Feb 2024 Wikipedia:Village pump (idea lab) Have a way to prevent "hallucinated" AI-generated citations in articles Goal supported in theory
Feb 2024 Wikipedia:Village pump (policy) AI-generated images Precursor to the April 2025 RfC
Mar 2024 Wikipedia:Village pump (technical) AI helper Tool idea for creating articles
Mar 2024 Wikipedia:Village pump (technical) What if we had an AI to suggest edits along the lines of edits typically made by good editors? Tool idea for smaller edits
Mar 2024 Wikipedia:Village pump (proposals) AI for WP guidelines/ policies AI-based search of Wikipedia's ruleset
May 2024 Wikipedia:Village pump (idea lab) Another job aid proposal, this time with AI
Aug 2024 Wikipedia:Village pump (proposals) Proposal: Create quizzes on Wikipedia AI not seen as integral to the idea
Oct 2024 Wikipedia:Village pump (miscellaneous) Feedback on chatbots as valid sources, or identifiers of them
Oct 2024 Module talk:Find sources Chatbots as valid sources or identifiers of them Not supported at this time
Nov 2024 Wikipedia:Village pump (proposals) Add AI translation option for translating from English to non-English article. Off topic, as we don't decide what happens to other Wikipedias
Nov 2024 Wikipedia:Village pump (idea lab) Wiki AI? Request for a chatbot
Dec 2024 RfC Wikipedia:Village pump (policy) LLM/chatbot comments in discussions Consensus that "it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs" (Now in guideline: WP:AITALK)
Jan 2025 Wikipedia:Village pump (proposals) The use of AI-generated content Proposed rule accepting LLMs for translation and grammar but not on talk pages; not accepted
Jan 2025 RfC Wikipedia:Requests for comment/AI images BLPs Clear consensus against using AI-generated imagery to depict BLP subjects. (Now in policy: WP:AIIMGBLP)
Jan 2025 Wikipedia:Village pump (policy) Adding the undisclosed use of AI to post a wall of text into discussions as disruptive editing Not inherently disruptive, but can be disruptive
Feb 2025 Wikipedia:Village pump (policy) The real use case for AI on Wikipedia Ideas for copyediting and grammar fixes
Mar 2025 Wikipedia:Village pump (policy) URLs with utm_source=chatgpt.com codes
Apr 2025 RfC Wikipedia:Requests for comment/AI images Relist with broader question: Ban all AI images? "Most images wholly generated by AI should not be used." "Obvious exceptions include articles about AI, and articles about notable AI-generated images. The community objects particularly strongly to AI-generated images (1) of named people, and (2) in technical or scientific subjects such as anatomy and chemistry." (Now in policy: WP:AIIMAGES)
Jun 2025 RfC Wikipedia:Village pump (WMF) RfC: Adopting a community position on WMF AI development Pending closure
Jun 2025 Wikipedia:Village pump (technical) Simple summaries: editor survey and 2-week mobile study The WMF announced that machine-generated summaries of articles would be presented to readers, but then put the project on hold in response to negative community feedback.
Jul 2025 RfC Wikipedia talk:Speedy deletion RFC: New CSD for unreviewed LLM content Overwhelming consensus to adopt new speedy deletion criterion (Now in policy: WP:G15)
Aug 2025 RfC Wikipedia talk:Speedy deletion RfC: Including Markdown in G15 Consensus against including Markdown in G15 as it is not consistently an indicator of LLM-generated content.
Aug 2025 RfC Wikipedia talk:Speedy deletion RfC: Including emojis in G15 There was no consensus to adopt this criterion.
Aug 2025 Wikipedia:Village pump (policy) LLM/AI generated proposals? Discussion archived
Sep 2025 Wikipedia:Village pump (idea lab) AI Moderator proposal Idea to augment existing edit filters/recent changes patrolling with an LLM, inspired by a Reddit extension
Sep 2025 Wikipedia:Village pump (policy) What is Wikipedia's official stance on Ai-generated content Discussion archived
Oct 2025 Wikipedia:Village pump (idea lab) Add a bot/policy that bans AI edits from non-extended confirmed users Discussion archived
Oct 2025 RfC Wikipedia:Writing articles with large language models Wikipedia talk:Writing articles with large language models § RfC Now in guideline: WP:NEWLLM
Dec 2025 RfC Wikipedia:Village pump (policy)/Replace NEWLLM RfC: Replace WP:NEWLLM The support fell below consensus to promote the proposal.
Jan 2026 RfC Wikipedia talk:Translation § Request for comment Active RfC on a guideline for LLM translations
Jan 2026 RfC Wikipedia:Village pump (proposals)/RfC LLMCOMM guideline § RfC: Turning LLMCOMM into a guideline Proposal to promote User:Athanelar/Don't use LLMs to talk for you to guideline

Notes

  1. ^ This also applies to cases in which the AI model operates in a jurisdiction where works generated solely by AI are not copyrightable, although the probability of such cases is very low.
  2. ^ For example, someone skilled at dealing with vandalism but doing very little article work should probably not start creating articles using LLMs. Instead, they should first gather actual experience at article creation without the assistance of the LLM.
  3. ^ Whenever a new article largely consists of unedited output of a large language model, it may be draftified, per WP:DRAFTREASON.
    As long as the title indicates a topic that has some potential merit, it may be worthwhile to stubify or blank-and-redirect. Likewise, drafts about viable new topics may be convertible to "skeleton drafts", i.e. near-blanked, leaving only a brief definition of the subject. Creators of such pages should be suitably notified or warned. Wherever suspected LLM-generated content is concerned, editors are discouraged from reversing such removals without discussing first.
    When an alternative to deletion is considered, editors should still be mindful of any outstanding copyright or similar critical issues which would necessitate deletion.

References

  1. ^ Smith, Adam (25 January 2023). "What Is ChatGPT? And Will It Steal Our Jobs?". Context. Thomson Reuters Foundation. Retrieved 27 January 2023.
  2. ^ "When AI Gets It Wrong: Addressing AI Hallucinations and Bias". MIT Sloan Teaching & Learning Technologies. Retrieved 2025-05-25.
  3. ^ Duris, Daniel. "Year 2026: The Year of LLM Bombing". Basta digital blog. Retrieved 18 January 2026.
  4. ^ Naveen, Palanichamy; Trojovský, Pavel (2024). "Overview and challenges of machine translation for contextually appropriate translations". iScience. Retrieved 2025-12-11.

External links

  • Using AI Tools in Your Research: Evaluating AI-Generated Content – research guide published by Northwestern University
Wikipedia essays
Building, editing, and deletion
Philosophy
  • Articles are more important than policy
  • Articles must be written
  • All Five Pillars are equally important
  • Avoid vague introductions
  • Civil POV pushing
  • Cohesion
  • Competence is required
  • Concede lost arguments
  • Dissent is not disloyalty
  • Don't lie
  • Don't search for objections
  • Duty to comply
  • Editing Wikipedia is like visiting a foreign country
  • Editors will sometimes be wrong
  • Eight simple rules for editing our encyclopedia
  • Explanationism
  • External criticism of Wikipedia
  • Five pillars
  • Here to build an encyclopedia
  • Large language models
  • Leave it to the experienced
  • Levels of competence
  • Levels of consensus
  • Most ideas are bad
  • Need
  • Not broken is ugly
  • Not editing because of Wikipedia restriction
  • Not every article can be a Featured Article
  • The one question
  • Oversimplification
  • Paradoxes
  • Paraphrasing
  • POV and OR from editors, sources, and fields
  • Process is important
  • Product, process, policy
  • Purpose
  • Reasonability rule
  • Systemic bias
  • There is no seniority
  • Ten Simple Rules for Editing Wikipedia
  • Tendentious editing
  • The role of policies in collaborative anarchy
  • The rules are principles
  • Trifecta
  • We are absolutely here to right great wrongs
  • Wikipedia in brief
  • Wikipedia is an encyclopedia
  • Wikipedia is a community
  • Wikipedia is not RationalWiki
Article construction
  • 100K featured articles
  • Abandoned stubs
  • Acronym overkill
  • Adding images improves the encyclopedia
  • Advanced text formatting
  • Akin's Laws of Article Writing
  • Alternatives to the "Expand" template
  • Amnesia test
  • A navbox on every page
  • An unfinished house is a real problem
  • Archive your sources
  • Article revisions
  • Articles have a half-life
  • Autosizing images
  • Avoid mission statements
  • Be neutral in form
  • Beef up that first revision
  • Blind men and an elephant
  • BOLD, revert, discuss cycle
  • Build content to endure
  • Cherrypicking
  • Chesterton's fence
  • Children's lit, adult new readers, & large-print books
  • Citation overkill
  • Citation underkill
  • Common-style fallacy
  • Concept cloud
  • Creating controversial content
  • Criticisms of society may be consistent with NPOV and reliability
  • Dictionaries as sources
  • Don't cite Wikipedia on Wikipedia
  • Don't demolish the house while it's still being built
  • Don't get hung up on minor details
  • Don't hope the house will build itself
  • Don't panic
  • Don't "teach the controversy"
  • Editing on mobile devices
  • Editors are not mindreaders
  • Encourage the newcomers
  • Endorsements (commercial)
  • Featured articles may have problems
  • Formatting bilateral relations articles
  • Formatting bilateral relations templates
  • Fruit of the poisonous tree
  • Give an article a chance
  • Gotfryd custom
  • How to write a featured article
  • Identifying and using independent sources
    • History sources
    • Law sources
    • Primary sources
    • Science sources
    • Style guides
    • Tertiary sources
  • Ignore STRONGNAT for date formats
  • Introduction to structurism
  • Link rot
  • Mine a source
  • Merge Test
  • Minors and persons judged incompetent
  • "Murder of" articles
  • Not every story/event/disaster needs a biography
  • Not everything needs a navbox
  • Not everything needs a template
  • Nothing is in stone
  • Obtain peer review comments
  • Organizing disambiguation pages by subject area
  • Permastub
  • Potential, not just current state
  • Presentism
  • Principle of Some Astonishment
  • The problem with elegant variation
  • Pro and con lists
  • Printability
  • Publicists
  • Put a little effort into it
  • Restoring part of a reverted edit
  • Robotic editing
  • Sham consensus
  • Source your plot summaries
  • Specialized-style fallacy
  • Stublet
  • Stub Makers
  • Run an edit-a-thon
  • Temporary versions of articles
  • Tertiary-source fallacy
  • There are no shortcuts to neutrality
  • There is no deadline
  • There is a deadline
  • The deadline is now
  • Try not to leave it a stub
  • What is a reliable source
  • Understanding Wikipedia's content standards
  • Walled garden
  • What an article should not include
  • Wikipedia is a work in progress
  • Wikipedia is not being written in an organized fashion
  • The world will not end tomorrow
  • Write the article first
  • Writing better articles
Writing article content
  • Avoid thread mode
  • Copyediting reception sections
  • Coup
  • Don't throw more litter onto the pile
  • Gender-neutral language
  • Myth vs fiction
  • Proseline
  • Reading in a flow state
  • Turning biology research into a Wikipedia article
  • Use our own words
  • We shouldn't be able to figure out your opinions
  • Write the article first
  • Writing about women
  • Writing better articles
Removing or deleting content
  • Adjectives in your recommendations
  • AfD is not a war zone
  • Arguments to avoid in deletion discussions
  • Arguments to avoid in deletion reviews
  • Arguments to avoid in image deletion discussions
  • Arguments to make in deletion discussions
  • Avoid repeated arguments
  • Before commenting in a deletion discussion
  • But there must be sources!
  • Confusing arguments mean nothing
  • Content removal
  • Counting and sorting are not original research
  • Delete or merge
  • Delete the junk
  • Deletion is not cleanup
  • Does deletion help?
  • Don't attack the nominator
  • Don't confuse stub status with non-notability
  • Don't overuse shortcuts to policy and guidelines to win your argument
  • Emptying categories out of process
  • Follow the leader
  • How the presumption of notability works
  • How to save an article nominated for deletion
  • I just don't like it
  • Identifying blatant advertising
  • Identifying test edits
  • Immunity
  • Keep it concise
  • Liar liar pants on fire
  • No Encyclopedic Use
  • Notability is not everything
  • Nothing
  • Nothing is clear
  • Overzealous deletion
  • Relisting can be abusive
  • Relist bias
  • The Heymann Standard
  • Unopposed AFD discussion
  • Wikipedia is not Whack-A-Mole
  • Why was the page I created deleted?
  • What to do if your article gets tagged for speedy deletion
  • When in doubt, hide it in the woodwork
  • Zombie page
Civility
The basics
  • Accepting other users
  • Apology
  • Autistic editors
  • Being right isn't enough
  • Contributing to complicated discussions
  • Divisiveness
  • Don't retaliate
  • Editors' pronouns
  • Edit at your own pace
  • Encouraging the newcomers
  • Enjoy yourself
  • Expect no thanks
  • How to be civil
  • Maintaining a friendly space
  • Negotiation
  • Obsessive–compulsive disorder editors
  • Please say please
  • Relationships with academic editors
  • Thank you
  • Too long; didn't read
  • Truce
  • Unblock perspectives
  • We are all Wikipedians here
  • You have a right to remain silent
Philosophy
  • A thank you never hurts
  • A weak personal attack is still wrong
  • Advice for hotheads
  • An uncivil environment is a poor environment
  • Be the glue
  • Beware of the tigers!
  • Civility warnings
  • Deletion as revenge
  • Duty to comply
  • Failure
  • Forgive and forget
  • It's not the end of the world
  • Nobody cares
  • Most people who disagree with you on content are not vandals
  • On Wikipedia no one knows you're a dog
  • Old-fashioned Wikipedian values
  • Profanity, civility, and discussions
  • Revert notification opt-out
  • Shadowless Fists of Death!
  • Staying cool when the editing gets hot
  • The grey zone
  • The last word
  • There is no Divine Right of Editors
  • Most ideas are bad
  • Nothing is clear
  • Reader
  • The rules of polite discourse
  • There is no common sense
  • Two wrongs don't make a right
  • Wikipedia clichés
  • Wikipedia is not about winning
  • Wikipedia should not be a monopoly
  • Writing for the opponent
Dos
  • Assume good faith
  • Assume the assumption of good faith
  • Assume no clue
  • Avoid personal remarks
  • Avoid the word "vandal"
  • Be excellent to one another
  • Be pragmatic
  • Beyond civility
  • Call a spade a spade
  • Candor
  • Deny recognition
  • Desist
  • Discussing cruft
  • Drop the stick and back slowly away from the horse carcass
  • Encourage full discussions
  • Get over it
  • How to lose
  • Imagine others complexly
  • Just drop it
  • Keep it concise
  • Keep it down to earth
  • Mind your own business
  • Say "MOBY"
  • Mutual withdrawal
  • Read before commenting
  • Read the room
  • Settle the process first
  • You can search, too
Don'ts
  • Because I can
  • Civil POV pushing
  • Cyberbullying
  • Don't accuse someone of a personal attack for accusing of a personal attack
  • Don't be a fanatic
  • Don't be a jerk
  • Don't be an ostrich
  • Don't be ashamed
  • Don't be a WikiBigot
  • Don't be high-maintenance
  • Don't be inconsiderate
  • Don't be obnoxious
  • Don't be prejudiced
  • Don't be rude
  • Don't be the Fun Police
  • Don't bludgeon the process
  • Don't call a spade a spade
  • Don't call people by their real name
  • Don't call the kettle black
  • Don't call things cruft
  • Don't come down like a ton of bricks
  • Don't cry COI
  • Don't demand that editors solve the problems they identify
  • Don't eat the troll's food
  • Don't fight fire with fire
  • Don't give a fuck
  • Don't help too much
  • Don't ignore community consensus
  • Don't knit beside the guillotine
  • Don't make a smarmy valediction part of your signature
  • Don't remind others of past misdeeds
  • Don't shout
  • Don't spite your face
  • Don't take the bait
  • Don't template the regulars
  • Don't throw your toys out of the pram
  • Do not insult the vandals
  • Griefing
  • Hate is disruptive
  • Nationalist editing
  • No angry mastodons
    • just madmen
  • No ableism
  • No Nazis
  • No racists
  • No Confederates
  • No queerphobia
  • No, you can't have a pony
  • Passive aggression
  • POV railroad
  • Superhatting
  • There are no oracles
  • There's no need to guess someone's preferred pronouns
  • You can't squeeze blood from a turnip
  • UPPERCASE
WikiRelations
  • WikiBullying
  • WikiCrime
  • WikiHarassment
  • WikiHate
  • WikiLawyering
  • WikiLove
  • WikiPeace
Neutrality
  • Academic bias
  • Activist
  • Advocacy
  • Avoid thread mode
  • Be neutral in form
  • Blind men and an elephant
  • Cherrypicking
  • Civil POV pushing
  • Coatrack
  • Controversial articles
  • Creating controversial content
  • Criticisms of society may be consistent with NPOV and reliability
  • Criticism
  • Describing points of view
  • Don't "teach the controversy"
  • Endorsements
  • Let the reader decide
  • Inaccuracy
  • Myth vs fiction
  • NPOV dispute
  • Neutral and proportionate point of view
  • Not Wikipedia's fault
  • POV and OR from editors, sources, and fields
  • Partisans
  • Partisanship
  • Presentism
  • Pro and con lists
  • Systemic bias
  • Tendentious editing
  • There are no shortcuts to neutrality
  • Truth
  • We are absolutely here to right great wrongs
  • We shouldn't be able to figure out your opinions
  • What is fringe?
  • Why Wikipedia cannot claim the Earth is not flat
  • Wikipedia is not RationalWiki
  • Yes, it is promotion
Notability
  • Advanced source searching
  • All high schools can be notable
  • Alternative outlets
  • Arguments to avoid in deletion discussions
  • Articles with a single source
  • Avoid template creep
  • Bare notability
  • Big events make key participants notable
  • Businesses with a single location
  • But it's true!
  • Common sourcing mistakes
  • Clones
  • Coatrack
  • Discriminate vs indiscriminate information
  • Drafts are not checked for notability or sanity
  • Every snowflake is unique
  • Existence ≠ Notability
  • Existence does not prove notability
  • Extracting the meaning of significant coverage
  • Google searches and numbers
  • How the presumption of notability works
  • High schools
  • Historical/Policy/Notability/Arguments
  • Inclusion is not an indicator of notability
  • Independent sources
  • Inherent notability
  • Insignificant
  • Just because BFDI has an article doesn't mean you can add fancruft about it
  • Masking the lack of notability
  • Make stubs
  • Minimum coverage
  • News coverage does not decrease notability
  • No amount of editing can overcome a lack of notability
  • No one cares about your garage band
  • No one really cares
  • Notability and tornadoes
  • Notability cannot be purchased
  • Notability comparison test
  • Notability is not everything
  • Notability is not a level playing field
  • Notability is not a matter of opinion
  • Notability is not relevance or reliability
  • Notability means impact
  • Notabilitymandering
  • Not all Vocaloid songs deserve their own article
  • Not every single thing Donald Trump does deserves an article
  • Obscurity ≠ Lack of notability
  • Offline sources
  • One sentence does not an article make
  • Other stuff exists
  • Overreliance upon Google
  • Perennial websites
  • Popularity ≠ Notability
  • Read the source
  • Red flags of non-notability
  • Reducing consensus to an algorithm
  • Run-of-the-mill
  • Solutions are mixtures and nothing else
  • Significance is not a formula
  • Source content comes first!
  • Sources must be out-of-universe
  • Subjective importance
  • Third-party sources
  • Trivial mentions
  • Video links
  • Vanispamcruftisement
  • What BLP1E is not
  • What is and is not routine coverage
  • What notability is not
  • What to include
  • Why was BFDI not on Wikipedia?
  • Wikipedia is not Crunchbase
  • Wikipedia is not here to tell the world about your noble cause
  • Wikipedia is not the place to post your résumé
  • Two prongs of merit
Humorous
  • Adminitis
  • Ain't no rules says a dog can't play basketball
  • Akin's Laws of Article Writing
  • Alternatives to edit warring
  • ANI flu
  • Anti-Wikipedian
  • Anti-Wikipedianism
  • Articlecountitis
  • Asshole John rule
  • Assume bad faith
  • Assume faith
  • Assume good wraith
  • Assume stupidity
  • Assume that everyone's assuming good faith, assuming that you are assuming good faith
  • Avoid using the preview button
  • Avoid using wikilinks
  • Bad Jokes and Other Deleted Nonsense
  • Barnstaritis
  • Before they were notable
  • Be the fun police
  • BOLD, revert, revert, revert cycle
  • Boston Tea Party
  • Butterfly effect
  • CaPiTaLiZaTiOn MuCh?
  • Case against LLM-generated articles
  • Complete bollocks
  • Counting forks
  • Counting juntas
  • Crap
  • Delete the main page
  • Diffusing conflict
  • Don't stuff beans up your nose
  • Don't-give-a-fuckism
  • Don't abbreviate "Wikipedia" as "Wiki"!
  • Don't delete the main page
  • Editcountitis
  • Edits Per Day
  • Editsummarisis
  • Editing under the influence
  • Embrace Stop Signs
  • Emerson
  • Fart
  • Five Fs of Wikipedia
  • Seven Ages of Editor, by Will E. Spear-Shake
  • Go ahead, vandalize
  • How many Wikipedians does it take to change a lightbulb?
  • How to get away with UPE
  • How to put up a straight pole by pushing it at an angle
  • How to vandalize correctly
  • How to win a citation war
  • If you have a pulse
  • Ignore all essays
  • Ignore all user warnings
  • Ignore every single rule
  • Is that even an essay?
  • Keep beating the horse
  • List of really, really, really stupid article ideas that you really, really, really should not create
  • Mess with the templates
  • My local pond
  • Newcomers are delicious, so go ahead and bite them
  • Legal vandalism
  • List of jokes about Wikipedia
  • LTTAUTMAOK
  • No climbing the Reichstag dressed as Spider-Man
  • No episcopal threats
  • No one cares about your garage band
  • No one really cares
  • No, really
  • No self attacks
  • Notability is not eternal
  • Oops Defense
  • Play the game
  • Please be a giant dick, so we can ban you
  • Please bite the newbies
  • Please do not murder the newcomers
  • Pledge of Tranquility
  • Project S.C.R.A.M.
  • R-e-s-p-e-c-t
  • Requests for medication
  • Requirements for adminship
  • Rouge admin
  • Rouge editor
  • Sarcasm is really helpful
  • Sausages for tasting
  • Spaling Muich?
  • Template madness
  • The Night Before Wikimas
  • The first rule of Wikipedia
  • The Five Pillars of Untruth
  • Things that should not be surprising
  • The WikiBible
  • Watchlistitis
  • We are deletionist!
  • Why is BFDI on Wikipedia?
  • Why you shouldn't write articles with ChatGPT, according to ChatGPT
  • Wikipedia is an MMORPG
  • WTF? OMG! TMD TLA. ARG!
  • Yes, falsely
  • Yes legal threats
  • Yes personal attacks
  • You don't have to be mad to work here, but
  • You should not write meaningless lists
About
About essays
  • Essay guide
  • Value of essays
  • Difference between policies, guidelines and essays
  • Don't cite essays as if they were policy
  • Avoid writing redundant essays
  • Finding an essay
  • Quote your own essay
Policies and guidelines
  • About policies and guidelines
    • Policies
    • Guidelines
  • How to contribute to Wikipedia guidance
  • Policy writing is hard
Essay search
Retrieved from "https://teknopedia.ac.id/w/index.php?title=Wikipedia:Large_language_models&oldid=1341307916"
Categories:
  • Wikipedia essays
  • Wikipedia and artificial intelligence
