OpenAI Sets Realistic Tone with Subdued, GPT-5-Free DevDay This Fall
OpenAI’s DevDay will shift from a major event to global developer engagement sessions without announcing a new model, focusing instead on API updates, amid challenges like data access issues and controversies over safety and copyrighted data use.
OpenAI held a big press event in San Francisco last year to show off a slate of new products and tools, including the ill-fated GPT Store, an App Store-like marketplace for custom chatbots. This year, however, will be more low-key. On Monday, OpenAI announced that its DevDay conference will transition from a tentpole event to a series of on-the-road developer engagement sessions.
The company also confirmed that it will not release its next major flagship model during DevDay, instead focussing on updates to its APIs and developer tools.
“We’re not planning to announce our next model at DevDay,” said an OpenAI spokesperson. “We’ll be focused more on educating developers about what’s available and showcasing dev community stories.”
This year, OpenAI’s DevDay events will take place in San Francisco on October 1, London on October 30, and Singapore on November 21.
All will include workshops, breakout sessions, demos with OpenAI product and engineering staff, and developer spotlights. Registration costs $450, with free admission for attendees who receive scholarships, and applications close on August 15.
In recent months, OpenAI has taken more incremental steps than monumental leaps in generative AI, preferring to hone and fine-tune its tools as it trains the successor to its current top models, GPT-4o and GPT-4o mini.
OpenAI seems to have lost the technical lead in the generative AI race, at least according to some benchmarks. In the meantime, the company has improved the overall performance of its models, ensuring that they go off track less often than they used to.
One possible reason is the increasing difficulty of obtaining high-quality training data. Like most generative AI models, OpenAI trains its models on massive collections of web data that many creators choose to keep private, fearing plagiarism or not receiving credit or payment.
According to data from Originality.AI, over 35% of the world’s top 1,000 websites now block OpenAI’s web crawler. And a study by MIT’s Data Provenance Initiative found that roughly 25% of data from “high-quality” sources has been restricted from the major datasets used to train AI models.
The research group Epoch AI predicts that, if the current access-blocking trend continues, developers will run out of data to train generative AI models sometime between 2026 and 2032. Fear of copyright lawsuits, meanwhile, has pushed OpenAI into costly licensing agreements with publishers and data brokers.
OpenAI’s Ambitious Plans and Controversies
OpenAI claims to have developed a reasoning technique that could improve its models’ responses to specific questions, particularly math questions, and the company’s CTO, Mira Murati, has promised a future model with “Ph.D.-level” intelligence.
(OpenAI announced in a May blog post that it had started training its next “frontier” model.) That’s a big commitment, and there’s a lot of pressure to follow through. OpenAI is reportedly losing billions of dollars while training its models and hiring high-paid research staff.
Many controversies still surround OpenAI, including its use of copyrighted data for training, restrictive employee NDAs, and the effective sidelining of safety researchers.
The slower product cycle may have the advantage of countering the narrative that OpenAI has deprioritized AI safety work in favour of more capable, powerful generative AI technologies.