Dan O
2 min read · Oct 24, 2024


Ignacio, you frame this as two sides of a great debate, yet I wonder... it seems all the serious scientists are on the same side here. I think this may be a non-debate, at least among scientists. We are all shocked at how much of human reasoning can be done by pattern matching (and, likely, at how much of our own logical reasoning is really just pattern matching). Still, there is real Type II reasoning, and these LLMs cannot do it. But which scientist is claiming they can?

I think they are claiming that many PRACTICAL reasoning tasks can be solved with larger LLMs, and this might be true, even though I am confident the models are not doing Type II reasoning and thus will continue to fail on other tasks.

To support this claim, I looked at the article you reference for Andy and found this quote:

... but the fact they cope far better is actually - to me - a very optimistic sign, indicating that the world's largest and most sophisticated models may really exhibit some crude reasoning - at least, enough to get over deliberately confounding benchmarks!

Sure, he is optimistic. But not optimistic that LLMs can reason. Indeed, it is quite a humble claim:

some crude reasoning - at least, enough to get over deliberately confounding benchmarks!

It seems he is saying: hey, if we build larger LLMs, they can do Type I reasoning better. But the way he phrases this, it's clear he is not claiming they can do Type II reasoning. Understand, though, that from a PRACTICAL perspective he thinks bigger models can "do the trick" on lots of practical problems... and maybe they can.


Written by Dan O

Startup Guy, PhD AI, Kentuckian living in San Fran
