Dan O
3 min read · Sep 10, 2021


> if ToM does have to come first ...

I don't mean to say that exactly. I think ToM of OTHER will come before ToM of SELF.

Here is the progression I see; it is really just based on the logical dependencies between concepts:

-- Generally, I think the idea of functional systems (e.g., gravity pulled the cup to the ground, the light switch activated the light) is a base-level ability.

-- Then this functional modelling ability informs theory of mind: e.g., we can think of an agent as a functional system where the eyeball things connect to the brain thing, which connects to the arm and leg things.

-- Of course ToM goes deeper; we need to invent the idea of goals, etc., but those are all built on top of a functional model of what is happening.

-- In parallel to this entire thread of concepts building on top of concepts, there would be a very direct model of self. As you noticed with this idea of "attention control", the modelling engine would realize early on that the "close eyes" action trigger is somehow connected to a MASSIVE change in sensory input signals. And it would learn to organize visual sensory input as different from auditory sensory input on that basis (among many others) -- see the first sketch after this list.

-- None of this is theory of mind at all. It is just correlational observations and models that happen to be talking about the self, but it wouldn't even know that yet.

-- Along this second line of concept building, it WOULD learn facts like head orientation, etc., even without knowing that it had a head or even that there was a 3-space world. It would all be associations without any theory backing them.

-- THEN, slowly, it would see the parallels between the third person theories that had been developed and portions of the first person correlational models.

-- Indeed, long before getting to theory of mind, simply realizing that this head position thing is really a vector in 3-space, with all of the conclusions that it has learned in third person about such vectors, would be a big jump in comprehension.

-- It could, for example, reason about whether the eyes would be able to see the lion, and know what will and will not happen when the eyes see the lion, even before it has any idea of self (see the second sketch after this list).

-- AND as those more sophisticated third person models become understood, it can see that different parts of the self correlational model actually map onto those third person models.

-- It is a kind of zippering of first and third person models.
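To make the "close eyes" point concrete, here is a minimal sketch of that correlational self-discovery. Everything in it (the channel names, the toy sensor model, the 0.5 threshold) is my own assumption for illustration, not a claim about any real system: the agent simply notices which sensory channels co-vary with an action trigger, with no theory of self involved.

```python
# Minimal sketch: an agent discovers which sensory channels are
# "tied to" its close-eyes action trigger, purely from correlation.
import random

random.seed(0)
N_STEPS = 1000
CHANNELS = ["cam_left", "cam_right", "mic", "touch"]  # hypothetical names

def read_channel(name, eyes_closed):
    """Toy sensor model: camera channels collapse when eyes are closed."""
    if name.startswith("cam") and eyes_closed:
        return random.gauss(0.0, 0.01)   # near-silence
    return random.gauss(1.0, 0.3)        # normal noisy signal

# Record (action_state, reading) pairs for every channel.
history = {c: [] for c in CHANNELS}
for _ in range(N_STEPS):
    eyes_closed = random.random() < 0.5  # agent toggles the trigger at random
    for c in CHANNELS:
        history[c].append((eyes_closed, abs(read_channel(c, eyes_closed))))

# Correlational self-model: how much does each channel's magnitude
# differ between "trigger on" and "trigger off"? No theory needed.
for c in CHANNELS:
    on = [v for closed, v in history[c] if closed]
    off = [v for closed, v in history[c] if not closed]
    effect = abs(sum(off) / len(off) - sum(on) / len(on))
    tag = "tied to close-eyes" if effect > 0.5 else "independent"
    print(f"{c:10s} effect={effect:.2f} -> {tag}")
```

Run this and the two camera channels get grouped together while the mic and touch channels don't. That grouping is exactly the kind of "visual vs. auditory" organization described above, reached without the system knowing it has eyes.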
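And here is a sketch of the lion example: once "head orientation" is recognized as an ordinary vector in 3-space, checking whether the eyes could see the lion is just third person geometry (a field-of-view cone test). The coordinates, FOV angle, and function names are assumptions for illustration.

```python
# Minimal sketch: "can the eyes see the lion?" as a gaze-cone test.
import math

def can_see(head_pos, gaze_dir, target_pos, fov_deg=120.0):
    """True if target_pos falls inside the gaze cone from head_pos."""
    to_target = [t - h for t, h in zip(target_pos, head_pos)]
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    if norm(to_target) == 0:
        return True  # target is at the head itself
    cos_angle = (sum(g * t for g, t in zip(gaze_dir, to_target))
                 / (norm(gaze_dir) * norm(to_target)))
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

lion = (10.0, 0.0, 0.0)
print(can_see((0, 0, 0), (1, 0, 0), lion))   # facing the lion -> True
print(can_see((0, 0, 0), (-1, 0, 0), lion))  # facing away     -> False
```

The point is that nothing in this computation is first person: it is the same vector math the system would already have learned from watching other objects and agents.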

Finally, I believe it is a very deep insight to realize that this theory of mind can be applied to the self. Therefore I think the theory will need to be pretty well understood as a third person model, and only then is it understood well enough to be applied to the dramatically distinct-looking first person model.

~~~~

You made the comment that maybe it takes humans until they are 2 or 3 before they are conscious. I think that is right. Really, even later. What we think of as consciousness is not a single capability, but rather a constellation of abilities. It seems that even a fish has basic awareness, but it probably does not have enough of a theory of mind to really cognitively understand that it is alive in the way that we do. The same goes for a child... full consciousness means having ALL of those theories connected and being able to reason with them. That only happens after you have built those theories.

You can't be afraid of death if you don't understand death.

So a dog is conscious, but not as conscious as you are. She understands pain and danger, but not death.

So what about these sci-fi stories? Have you published them anywhere?
