To cap off an afternoon of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday.
OpenAI finds itself in a bit of a precarious position. It's battling the perception that it's ceding ground in the AI race to Chinese companies like DeepSeek, which OpenAI alleges might have stolen its IP. The ChatGPT maker has been trying to shore up its relationship with Washington and simultaneously pursue an ambitious data center project, all while reportedly laying the groundwork for one of the biggest financing rounds in history.
Altman admitted that DeepSeek has lessened OpenAI's lead in AI, and he said he believes OpenAI has been "on the wrong side of history" when it comes to open sourcing its technologies. While OpenAI has open sourced models in the past, the company has generally favored a proprietary, closed source development approach.
"[I personally think we need to] figure out a different open source strategy," Altman said. "Not everyone at OpenAI shares this view, and it's also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years."
In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said that OpenAI is considering open sourcing older models that aren't state of the art anymore. "We'll definitely think about doing more of this," he said, without going into greater detail.
Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, like the o3-mini model released today, show their "thought process." Currently, OpenAI's models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. In contrast, DeepSeek's reasoning model, R1, shows its full chain of thought.
"We're working on showing a bunch more than we show today — [showing the model thought process] will be very very soon," Weil added. "TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we'll find a way to balance it."
Altman and Weil attempted to dispel rumors that ChatGPT, the chatbot platform through which OpenAI launches many of its models, will increase in price in the future. Altman said he'd like to make ChatGPT "cheaper" over time, if feasible.
Altman previously said that OpenAI was losing money on its priciest ChatGPT plan, ChatGPT Pro, which costs $200 per month.
In a somewhat related thread, Weil said that OpenAI continues to see evidence that more compute power leads to "better" and more performant models. That's in large part what's necessitating projects such as Stargate, OpenAI's recently announced massive data center project, Weil said. Serving a growing user base is fueling compute demand within OpenAI as well, he continued.
Asked about recursive self-improvement that might be enabled by these powerful models, Altman said he thinks a "fast takeoff" is more plausible than he once believed. Recursive self-improvement is a process in which an AI system can improve its own intelligence and capabilities without human input.
Of course, it's worth noting that Altman is notorious for overpromising. It wasn't long ago that he lowered OpenAI's bar for AGI.
One Reddit user asked whether OpenAI's models, self-improving or not, could be used to develop destructive weapons, specifically nuclear weapons. This week, OpenAI announced a partnership with the U.S. government to give its models to the U.S. National Laboratories in part for nuclear defense research.
Weil said he trusted the government.
"I've gotten to know these scientists and they are AI experts in addition to world class researchers," he said. "They understand the power and the limits of the models, and I don't think there's any chance they just YOLO some model output into a nuclear calculation. They're smart and evidence-based and they do a lot of experimentation and data work to validate all their work."
The OpenAI team was asked several questions of a more technical nature, like when OpenAI's next reasoning model, o3, will be released ("more than a few weeks, less than a few months," Altman said); when the company's next flagship "non-reasoning" model, GPT-5, might land ("don't have a timeline yet," said Altman); and when OpenAI might unveil a successor to DALL-E 3, the company's image-generating model. DALL-E 3, which was released around two years ago, has gotten quite long in the tooth. Image-generation tech has improved by leaps and bounds since DALL-E 3's debut, and the model is no longer competitive on a number of benchmark tests.
"Yes! We're working on it," Weil said of a DALL-E 3 follow-up. "And I think it will be worth the wait."