Dan O
1 min read · Sep 3, 2024


Valid experiment. Completely invalid conclusion.

The experiment shows that LLMs alone do not learn new things beyond what they absorbed during training. Yes, they are quite limited in what they can learn post-training. As such, LLMs alone will not eclipse us. They are a very good representation of type-1 reasoning (the reasoning humans do in under one second).

They are not a good model for human type-2 reasoning. But many smart people see pathways for extending type-1 into type-2 reasoning. I am guessing we will have a working version within a decade, but that is my personal opinion, and it is a guess.

Still, measuring the ability of current LLM type-1 reasoners and then concluding AI is no threat because these systems cannot do type-2 reasoning is just GARBAGE LOGIC.

We cannot rest easy unless we can convince ourselves that type-2 reasoning will somehow remain out of reach (which I doubt is true).

I don't think you are in a great position to report on AGI. This is subtle stuff. But the conclusions we draw will, I think, have a profound impact on humanity.


Written by Dan O

Startup Guy, PhD AI, Kentuckian living in San Fran
