OpenAI has developed a new tool that could identify students who rely on ChatGPT to complete their assignments. However, the company is still deciding whether to release the tool publicly or restrict its use.
According to a company spokesperson, researchers are working on a method called “text watermarking.” This technique embeds a hidden marker within the text, making it possible to determine later whether the content was written by ChatGPT or by a human.
The spokesperson explained that while this technology is effective, it also carries certain risks. For instance, some individuals might find ways to bypass it, and it could unfairly affect non-English speakers.
OpenAI noted that it had previously launched tools aimed at detecting AI-generated text, but those tools were not highly accurate and were discontinued last year.
The new approach would detect only ChatGPT’s own output: the system would slightly alter word choices as ChatGPT writes, creating an “invisible watermark” that a detector can later check for.
The company added that this method could still fail under certain circumstances, such as when the text is translated, rewritten, or modified by another AI model.
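OpenAI has not published the details of its watermarking scheme, but the general idea of biasing word choices can be illustrated with a toy version of a well-known approach from the research literature: at each position, a hash of the previous word secretly marks half the vocabulary as "preferred," the generator picks only preferred words, and a detector recomputes the same marking and counts how often the text follows it. Everything below (the vocabulary, function names, the 0.75 threshold) is an invented sketch, not OpenAI's method.

```python
import hashlib
import random

# Toy vocabulary standing in for a real language model's token set.
VOCAB = ["the", "a", "quick", "brown", "fox", "jumps", "over", "lazy",
         "dog", "cat", "runs", "fast", "slow", "big", "small", "red"]

def green_list(prev_token: str) -> set:
    # Seed a RNG with a hash of the previous token, then mark half
    # the vocabulary as "green" (preferred) for the next position.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(start: str, length: int, seed: int = 0) -> list:
    # Toy "model": picks each next word at random, but only from the
    # green list of the previous token -- this bias is the watermark.
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    # Detector: recompute each position's green list and count hits.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / (len(tokens) - 1)

def looks_watermarked(tokens: list, threshold: float = 0.75) -> bool:
    # Human text should score near 0.5 by chance; watermarked near 1.0.
    return green_fraction(tokens) >= threshold
```

The sketch also makes the article's caveats concrete: translating or rewriting the text replaces the words the detector checks, destroying the statistical bias it relies on.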