Dan O
2 min read · Jan 12, 2022


Uri, I tend to agree. Still, after having simulated such a thing enough times, it seems we can somehow generalize the reasoning pattern without requiring the simulation step.

So I think there is still a profound gap to be filled between DL, simulation, and logical representations of the same things.

You are an NLP guy and a DL guy, so what are your thoughts on this:

It seems to me we have evidence that the human mind is a generalizer over sequences of lingual/semantic tokens. For example, I can tell you when one should insert the determiner 'the' in a phrase, yet it takes WORK for me to state (or invent) the rule I am using.
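As a toy illustration of that point, a masked language model shows the same kind of implicit knowledge: it places 'the' correctly without anyone having handed it the rule. A minimal sketch, assuming the Hugging Face transformers package and the public bert-base-uncased checkpoint (the example sentences are my own):

```python
# A toy probe of implicit grammar knowledge: a masked language model
# scores where the determiner 'the' fits, without ever being given a rule.
# Assumes the Hugging Face `transformers` package and the public
# `bert-base-uncased` checkpoint; the sentences are illustrative only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# 'the' is natural before a specific common noun...
print(fill("She parked [MASK] car in the driveway.", targets=["the"]))

# ...but much less so before a proper noun.
print(fill("She flew to [MASK] Paris last week.", targets=["the"]))
```

In both cases the model's score for 'the' tracks the judgment a native speaker makes instantly, and, like us, the model cannot articulate why.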

This suggests to me that perhaps generalizing such grammars is the native ability, and that reasoning, logic, algorithm following, and the like are co-opted versions of this underlying capacity.

If this hypothesis is correct, then the gap to be filled is the mechanism that drives executive reasoning in a way that follows grammar structure, except that here it is the grammar of thinking patterns.

The question for me is how to train such systems. With language it is easy: we have many examples of language captured in digital form. But thinking is not as easy to capture.

It is difficult for a person to even speak or write thoughts as fast as they come, and even then, we have much smaller corpora to work with.

How do babies do it? Do you think humans ruminate aloud enough that their children hear enough of the reasoning pathways to generalize the thinking paths (rules) just by listening to adults?

