
"In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate. Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which A.I. might not get much better than this.

OpenAI didn’t want to wait nearly two and a half years to release GPT-5. According to The Information, by the spring of 2024, Altman was telling employees that their next major model, code-named Orion, would be significantly better than GPT-4. By the fall, however, it became clear that the results were disappointing. “While Orion’s performance ended up exceeding that of prior models,” The Information reported in November, “the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4.”

Orion’s failure helped cement the creeping fear within the industry that the A.I. scaling law wasn’t a law after all. If building ever-bigger models was yielding diminishing returns, the tech companies would need a new strategy to strengthen their A.I. products. They soon settled on what could be described as “post-training improvements.” The leading large language models all go through a process called pre-training in which they essentially digest the entire internet to become smart. But it is also possible to refine models later, to help them better make use of the knowledge and abilities they have absorbed. One post-training technique is to apply a machine-learning tool, reinforcement learning, to teach a pre-trained model to behave better on specific types of tasks. Another enables a model to spend more computing time generating responses to demanding queries."

newyorker.com/culture/open-que

The New Yorker · "What If A.I. Doesn’t Get Much Better Than This?" By Cal Newport

"My gloss is that GPT-5 had become something of an albatross around OpenAI’s neck. At this particular juncture, not long after inking big deals with Softbank et al. and riding as high on its cultural and political trajectory as it’s likely to get—and perhaps seeing declining rates of progress on model improvement in the labs—a calculated decision was made to pull the trigger on releasing the long-awaited model. People were going to be disappointed no matter what; let them be disappointed now, while the wind is still at OpenAI’s back, and it can credibly make a claim to providing hyper-advanced worker automation.

I don’t think the GPT-5 flop ultimately matters all that much to most folks, and it can certainly be papered over well enough by a skilled salesman in an enterprise pitch meeting. Again, all this is clarifying: OpenAI is again centering workplace automation, while retreating from messianic AGI talk."

bloodinthemachine.com/p/gpt-5-

Blood in the Machine · "GPT-5 is a joke. Will it matter?" By Brian Merchant

#GPT5: »Here’s NBC, in a story about how #Altman has apparently and rather suddenly (once again) changed his stance on AGI:

“I think it’s not a super useful term,” Altman told CNBC when asked whether the company’s latest GPT-5 model moves the world any closer to achieving AGI.

Remember, just last February Altman published an essay on his personal blog that opened with the line “Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.” Suddenly it’s not a super useful term? Another bad joke, surely. If anything, it’s clear that Altman knows just what a super useful term #AGI is, at least when it comes to attracting investment capital.«

bloodinthemachine.com/p/gpt-5-

Thanks to @brianmerchant


By rights, Altman’s reputation should by now be completely burned. This is a man who joked in September 2023 that “AGI has been achieved internally”, and who told us in his blog in January of this year that “We are now confident we know how to build AGI as we have traditionally understood it”. Just two days ago he told us (as quoted above) that interacting with GPT-5 is “like talking to … legitimate PhD level expert in anything”.
#AI #technology #AGI #OpenAI #fraud
open.substack.com/pub/garymarc

Marcus on AI · "GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it." By Gary Marcus

The success stories about #AI in medical diagnosis are not evidence of #AGI but are due to:
1) Physicians blowing off patient concerns (unfortunately, it's common in our profession to call people whose symptoms we cannot explain "crazy")
2) Upsampling of "zebras" in #LLM training datasets