Artificial Intelligence thread

mossen

Junior Member
Registered Member
o3 pro has been released by OpenAI. Seems underwhelming. Costlier but no real performance increase compared to "regular" o3.



It's also not multimodal. OpenAI is still the firm to beat, but their progress seems to be slowing.

P.S. The "o3-preview" version was never released publicly because of its prohibitive cost. Seemed designed to just game the benchmark.
 

dropout003

New Member
Registered Member
Any LLM skeptics here? To me LLMs seem like advanced pattern matchers that can query and interpolate across all available human knowledge, and thus won't ever lead to "exponential intelligence growth". Are there labs in China that are studying alternative architectures (like Lecun's Jepa)?
 

Maikeru

Major
Registered Member
Any LLM skeptics here? To me LLMs seem like advanced pattern matchers that can query and interpolate across all available human knowledge, and thus won't ever lead to "exponential intelligence growth". Are there labs in China that are studying alternative architectures (like Lecun's Jepa)?
I read an X post, on the back of the recent Apple AI report, that described LLMs as basically very quick-learning parrots. They could only solve problems that had been posted about on the web, and struggled with anything novel that required original reasoning. I think there's something to this.
 

huemens

Junior Member
Registered Member
Any LLM skeptics here? To me LLMs seem like advanced pattern matchers that can query and interpolate across all available human knowledge, and thus won't ever lead to "exponential intelligence growth". Are there labs in China that are studying alternative architectures (like Lecun's Jepa)?

In my opinion, LLMs do not look like the path that will take us to AGI. They may even have slowed down research on more promising paths, because most of the available capital is being poured into LLMs and similar approaches due to the FOMO around LLMs and chatbots. But they are good enough to keep people engaged in AI research and keep the momentum going. Eventually they'll peak, we'll start to see diminishing returns, and other architectures will start to get more attention.
 

BlackWindMnt

Captain
Registered Member
Any LLM skeptics here? To me LLMs seem like advanced pattern matchers that can query and interpolate across all available human knowledge, and thus won't ever lead to "exponential intelligence growth". Are there labs in China that are studying alternative architectures (like Lecun's Jepa)?
Always have been, or at least I'm skeptical of the hyped-up claims that this is the route to AGI. I think the best use case right now is LLMs replacing search engines like Google/Stack Overflow.

I can actually see people who were too intimidated to start coding projects now finding the motivation to just start, because an LLM can create a rough starting point. That means there will be more shitty software running in the world. The more software that is running in the world, the more developers will be needed to keep this house of cards going.
 

mellowcookie

New Member
Registered Member
Always have been, or at least I'm skeptical of the hyped-up claims that this is the route to AGI. I think the best use case right now is LLMs replacing search engines like Google/Stack Overflow.

I can actually see people who were too intimidated to start coding projects now finding the motivation to just start, because an LLM can create a rough starting point. That means there will be more shitty software running in the world. The more software that is running in the world, the more developers will be needed to keep this house of cards going.
LLMs are not just useful for starting shitty programming projects. Although IMO they are unlikely to lead to AGI, they already have a bunch of use cases and can be an extremely potent tool and productivity gain. LLMs cover a really broad range of domains and hold a lot of basic knowledge about a variety of topics, such as the different frameworks, syntax, library functions, etc. of a language. They handle the finer details of domain knowledge well. What they do not do well is long-time-horizon thinking and symbolic reasoning.

To use LLMs efficiently for things like coding and other jobs, you can't just give them an abstract goal or task and let the LLM do as it pleases - you have to put in guardrails and work step by step, verifying program correctness along the way. This is slower than just vibe coding, but still a lot faster than writing the code by hand. While this is not AGI, it is going to be extremely disruptive for many industries (especially SWE). As in the past, this will lead to a lot of productivity gains - i.e. job losses, but potential salary increases for the people who can leverage this technology best, or are the most experienced with it.
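That guardrails-plus-verification workflow can be sketched in a few lines. This is only an illustration, not a real integration: `ask_llm` is a hypothetical stand-in for an actual model API call (here stubbed to return a buggy draft first, then a fixed one), and the point is the loop around it - every candidate the model produces gets run against caller-supplied tests before it is accepted.

```python
# Sketch of a guarded generate-then-verify loop for LLM-assisted coding.
# `ask_llm` is a hypothetical stub standing in for a real model API call.

def ask_llm(prompt: str, attempt: int) -> str:
    """Stub model: returns a plausible-looking buggy draft first, a fix on retry."""
    if attempt == 0:
        return "def add(a, b):\n    return a - b"  # wrong on purpose
    return "def add(a, b):\n    return a + b"

def generate_verified(prompt: str, tests, max_attempts: int = 3) -> dict:
    """Ask for code, run the caller's correctness checks, retry until they pass."""
    for attempt in range(max_attempts):
        source = ask_llm(prompt, attempt)
        namespace: dict = {}
        exec(source, namespace)      # load the candidate implementation
        try:
            tests(namespace)         # guardrail: caller-supplied assertions
            return namespace         # accepted only after tests pass
        except AssertionError:
            continue                 # reject the draft and retry
    raise RuntimeError("no candidate passed the tests")

def add_tests(ns):
    assert ns["add"](2, 3) == 5
    assert ns["add"](-1, 1) == 0

verified = generate_verified("write an add(a, b) function", add_tests)
```

The key design choice is that the human (or a test suite) stays in the loop: the model's output is never trusted directly, only after the step's correctness checks succeed.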

Furthermore, it is still entirely possible that AGI comes into existence at some point in the near future. It is just not certain to come from LLMs/Transformer Architectures. I would not neglect using LLMs and developing skills for interacting with them.
 