
Website administrators have the power to prevent GPTBot from collecting information

21.08.2023

Since its debut in November 2022, the ChatGPT large language model has drawn a flurry of criticism and allegations, much of it aimed at how its training data is gathered. Its creator, OpenAI, has begun cautiously addressing these concerns.

The foundation of ChatGPT's prowess is large-scale collection of internet content, a practice OpenAI readily acknowledges. Now the company is publishing guidance for website administrators, server operators, and other internet entities on how to shield their platforms from its web-crawling technology.

In an official announcement, OpenAI sheds light on GPTBot, its specialised web crawler built to gather freely accessible internet content for ChatGPT's training. Pages visited by the "GPTBot" user agent may be used to refine future iterations of the model, according to OpenAI. The crawler is equipped with filters that skip paywalled sources, websites known to gather personal data, and content that violates the company's policies.
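
Requests from the crawler identify themselves with a user-agent string containing the "GPTBot" token, so administrators can check whether it has already visited by searching their access logs. The command below is a minimal sketch; the log path is an assumption and will differ from server to server:

    grep "GPTBot" /var/log/nginx/access.log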

OpenAI argues that allowing GPTBot access to websites can improve the accuracy of its AI models, bolstering ChatGPT's capabilities and "safety". Nevertheless, anyone disinclined to provide unpaid help with ChatGPT's training can block the crawler by adding rules to their site's "robots.txt" file that deny GPTBot access to the domain.
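
For example, the following two lines in a site's robots.txt tell GPTBot to stay away from the entire domain:

    User-agent: GPTBot
    Disallow: /

Per-path rules are also possible; the directory names below are placeholders chosen for illustration:

    User-agent: GPTBot
    Allow: /public/
    Disallow: /private/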

The robots.txt file implements the Robots Exclusion Protocol, a commonly adopted mechanism websites use to selectively permit or restrict web crawlers from scanning their content. The protocol relies on the voluntary cooperation of crawlers, and not all of them honour custom exclusion rules. OpenAI says it is committed to adhering to robots.txt directives, and has even published the IP address ranges used by its crawler to simplify blocking at the network level.
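
Because robots.txt compliance is ultimately voluntary, administrators who want a hard block can instead deny the published ranges at the server itself. The snippet below is a minimal nginx sketch; the 192.0.2.0/24 range is a placeholder, and the real values should be taken from OpenAI's published list:

    # Inside a server { } block: reject requests from GPTBot's published IP ranges.
    # 192.0.2.0/24 is a placeholder; substitute the ranges OpenAI publishes.
    location / {
        deny 192.0.2.0/24;
        allow all;
    }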

Before this new blocking option arrived, DeviantArt introduced its "NoAI" tag, which lets artists exempt their content from unpaid LLM training. The robots.txt approach, however, gives web administrators and other third parties more substantial control, assuming OpenAI remains steadfast in following its self-imposed rules.

Worth noting, too, is OpenAI's recent endorsement of a White House-proposed set of voluntary commitments to pursue AI development that is safe, secure, and trustworthy.