OpenAI has missed its self-imposed deadline to deliver a tool that would allow creators to specify how their works are included in or excluded from its AI training data. The tool, known as Media Manager, was announced in May as a way to help creators manage the use of their copyrighted text, images, audio, and video by OpenAI’s AI models. Initially, OpenAI aimed for a launch by 2025 to give creators control over how their content contributed to AI development.
However, the feature remains in development with no clear timeline for release. Sources close to the company indicate that Media Manager was never considered a top priority. One former OpenAI employee noted, “I don’t think it was a priority. To be honest, I don’t remember anyone working on it.”
Moreover, Fred von Lohmann, the member of OpenAI’s legal team who was working on Media Manager, transitioned to a part-time consultant role in October, further slowing the project’s progress. OpenAI’s PR team has confirmed von Lohmann’s transition but, as of the latest inquiries in December, has provided no substantive update on Media Manager itself. The silence has fueled growing frustration among creators who feel their works are being appropriated without adequate consent mechanisms.
AI models like those developed by OpenAI learn patterns from extensive datasets to make predictions and generate new content. While the ability to generate convincing outputs based on prior data is powerful, these models can inadvertently replicate elements of the underlying proprietary content, which has prompted legal challenges. OpenAI is currently facing lawsuits from artists, writers, and media organizations who claim their works were used in AI training without permission.
OpenAI’s delayed Media Manager launch
To address these issues, OpenAI has offered short-term solutions like a submission form for artists to flag their work for exclusion from future datasets and tools for webmasters to block their domains from being scraped by OpenAI’s web-crawling bots. However, these methods have been criticized as cumbersome and insufficient.
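The domain-blocking option works through the standard robots.txt convention: OpenAI documents a crawler user agent called GPTBot that webmasters can disallow. A minimal robots.txt entry, placed at a site’s root, would look like this (the exact directives a site needs may vary):

```text
# robots.txt — ask OpenAI's crawler not to index this site's content
User-agent: GPTBot
Disallow: /
```

Critics note that this only affects future crawling by compliant bots; it does nothing about content already collected, which is part of why the approach has been called insufficient.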
Media Manager was pitched as a comprehensive solution, using advanced machine learning to allow creators to declare ownership over their creations. By collaborating with regulators, OpenAI hoped to set an industry standard. Despite this, the tool has not been publicly mentioned since its initial announcement, and skepticism about its effectiveness persists.
Legal experts doubt that Media Manager will be fully effective in addressing creators’ concerns or resolving legal uncertainties about AI and intellectual property usage. Adrian Cyhan, an IP attorney, highlighted the complexity of enforcing creator protections amidst varying legal landscapes. Ed Newton-Rex, founder of Fairly Trained, argued that putting the onus on creators to opt-out could still lead to the exploitation of their work.
The absence of Media Manager has significant implications for ongoing legal disputes. While OpenAI claims its models produce transformative works, litigation outcomes remain uncertain. The courts may ultimately decide whether OpenAI’s use of copyrighted materials for training purposes falls under fair use, similar to the precedent set by Google Books.
For now, OpenAI continues to implement filters to prevent its AI from regurgitating training examples verbatim. The company’s future legal standing and its commitment to creators’ rights remain in flux as it navigates the complex terrain of AI and intellectual property law.
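OpenAI has not disclosed how these output filters work. Purely as an illustration of the general idea, a naive approach is to measure how many of a generated passage’s word n-grams appear verbatim in a known source and suppress the output above a threshold — a hypothetical sketch, not OpenAI’s actual method:

```python
def ngram_set(text, n=5):
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(generated, source, n=5):
    """Fraction of the generated text's n-grams found verbatim in the source."""
    gen = ngram_set(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngram_set(source, n)) / len(gen)

# Hypothetical policy: block output whose overlap exceeds a threshold.
THRESHOLD = 0.5
source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over the lazy dog"
fresh = "a slow red fox walks under an energetic cat today"

print(verbatim_overlap(copied, source))  # 1.0 — every 5-gram matches
print(verbatim_overlap(fresh, source))   # 0.0 — no 5-gram matches
```

Production systems would need to scale this to billions of training documents (e.g., with hashed n-gram indexes), but the sketch captures the basic trade-off: strict thresholds block legitimate quotations, loose ones let copies through.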
Rashan is a seasoned technology journalist and visionary leader serving as the Editor-in-Chief of DevX.com, a leading online publication focused on software development, programming languages, and emerging technologies. With his deep expertise in the tech industry and his passion for empowering developers, Rashan has transformed DevX.com into a vibrant hub of knowledge and innovation. Reach out to Rashan at [email protected]