"In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate. Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which A.I. might not get much better than this.
OpenAI didn’t want to wait nearly two and a half years to release GPT-5. According to The Information, by the spring of 2024, Altman was telling employees that their next major model, code-named Orion, would be significantly better than GPT-4. By the fall, however, it became clear that the results were disappointing. “While Orion’s performance ended up exceeding that of prior models,” The Information reported in November, “the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4.”
Orion’s failure helped cement the creeping fear within the industry that the A.I. scaling law wasn’t a law after all. If building ever-bigger models was yielding diminishing returns, the tech companies would need a new strategy to strengthen their A.I. products. They soon settled on what could be described as “post-training improvements.” The leading large language models all go through a process called pre-training, in which they essentially digest the entire internet to become smart. But it is also possible to refine models afterward, to help them make better use of the knowledge and abilities they have absorbed. One post-training technique is to apply reinforcement learning, a machine-learning method, to teach a pre-trained model to behave better on specific types of tasks. Another lets a model spend more computing time generating responses to demanding queries.
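To picture that second idea, consider a minimal sketch of one simple form of extra answer-time computation, best-of-n sampling: the model drafts several candidate responses and a scoring function keeps the strongest. The `generate` and `score` functions below are invented stand-ins for a real language model and a real reward model, not any lab’s actual system.

```python
import random

# A hypothetical illustration, not OpenAI's method: generate() and score()
# are stand-ins for a real pre-trained model and a real reward model.

def generate(prompt: str) -> str:
    # Stand-in for a pre-trained model producing one candidate answer.
    return f"{prompt} -> candidate #{random.randint(0, 9999)}"

def score(answer: str) -> float:
    # Stand-in for a learned scorer that rates the quality of an answer.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # "Spend more computing time": sample n full responses instead of one,
    # then keep whichever answer the scorer rates highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("Explain why the sky is blue.", n=16))
```

Roughly the same kind of scoring signal, applied during training rather than at answer time, is what reinforcement-learning-based post-training relies on: instead of merely ranking finished answers, it nudges the model’s weights toward the behavior the scorer rewards.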
https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
