
TL;DR Knowledge and Truth

jonnymind · Mar 16, 2019, 11:57:36 AM

In my recent blogs, I wrote about the evolution of truth, the emergence of meaning and the structure of knowledge. Here I sum it up as much as possible.

The things I wrote about are little understood, as they are still the object of study of a small group in academia. This required following a line of reasoning that starts from afar and proceeds at length. In the spirit of a TL;DR, here I will just write the conclusions, in the plainest language I can use. Those who want to go down the rabbit hole can do so by following the links to the main articles I included here, and from there, to the literature I cite.

Also, in the sequence of articles I wrote, I started from the end result: truth, and our sense for it, then descended into the depths of how the structure of knowledge develops truth, and with it, meaning.

Here I will do the reverse: I will start from the mechanics of our knowledge, and explain how it builds our sense of meaning, and then, of truth.

Knowledge-based Agents

We define those entities that:

1. exist in a world of facts;

2. internally represent those facts as events;

3. learn how to interpret events, in order to alter the world they exist in

as Knowledge-based Agents. This definition includes all sentient beings (humans, apes, cats, down possibly to the flatworm), and all those beings that are not self-conscious and yet adapt to the environment by learning how to perform some task. Trees that receive hormone signals from nearby trees and adapt in order to defend against a parasite, or AI programs, are examples of those.

The knowledge structure of knowledge-based agents can be represented through the Observation-Inference-Casting (OIC) model of knowledge:


- Facts are what happens in the world.

- Events are how facts are perceived by the agent.

- Observations are atomic blocks of knowledge, automatically generated by observing one or more contextual events.

- Inferences are elaborations of observations, which put one or more observations in some sort of relation. The relation could be temporal (I noticed this always happens after that), conditional (this won’t happen unless that happens first), exclusive (if this happens, that cannot happen), and so on.

- Castings are knowledge about how to act to modify the world, that is, to cause facts to happen. They require one or more inferences: if that always follows this, then if I do this, that will happen.
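As a toy illustration of how the OIC chain fits together (the class names and structure here are my own, not part of the model's formal definition), the nodes can be sketched as a small pipeline of typed records:

```python
from dataclasses import dataclass

# Toy sketch of the OIC chain: events -> observations -> inferences
# -> castings. All names are illustrative, not the model's notation.

@dataclass
class Event:
    """An agent-internal perception of a fact."""
    label: str

@dataclass
class Observation:
    """An atomic block of knowledge built from contextual events."""
    events: list

@dataclass
class Inference:
    """A relation (temporal, conditional, exclusive...) over observations."""
    relation: str          # e.g. "after", "unless", "excludes"
    observations: list

@dataclass
class Casting:
    """Actionable knowledge: which action should cause which fact."""
    action: str
    expected_fact: str
    inferences: list

# Example: the agent notices the light always follows the switch,
# and turns that inference into a casting it can act on.
e1, e2 = Event("switch flipped"), Event("light on")
obs = Observation([e1, e2])
inf = Inference("after", [obs])
cast = Casting("flip switch", "light on", [inf])
print(cast.expected_fact)
```

The point of the sketch is the dependency direction: a casting cannot exist without at least one inference behind it, just as the article states.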

In the OIC model, the so-called rest nodes (/R) are components of concepts which are not explained by their known constituents:

- Events/R are all those events that are not generated by facts; for example, hallucinations.

- Observations/R are all those observations that are not caused by events; for example, automatic stimulus-response reactions, where a sound presented each time food is given makes an animal “think” food will be coming. In this case, the rest of the observation is provided by memory.

- Inferences/R are inferences that are not derived from observations. Biases, hopes, miscalculations; here again, memory can be a component, but incorrect assumptions, logical errors, and physical damage to the computational structure of the agent can be a part of it too.

- Castings/R are assessments about how the agent can affect the world that are not derived from inferences. Thinking that a rain dance can affect the weather might be one, unless that belief was derived from incorrect inferences first (i.e. noticing that every time I dance, it starts raining).

The Holistic OIC model condenses each type of node into an entity that represents them all as a set:

Here, we add the retroaction of castings into facts, and add the Facts/R concept: all those facts the agent cannot act upon.

Resolving this graph with linear algebra for facts (or solving it graphically with CR-Algebra), we obtain the following function:

where uppercase letters are shorthand for a function applied to a node: F actually means f(F). This formula expresses the relation between the quality of facts and that of the other nodes (similar functions can be built for every node), where the quality of a node is a function giving a result between 0 and 1.

In the main article, we used the probability of survival, but it could be any qualitative measure, such as the ability of an AI to correctly recognize a pattern.
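One way to picture such a 0-to-1 quality measure (a sketch under my own assumptions; the article's actual formula is derived with CR-Algebra and is not reproduced here) is as a simple success rate over the node's outcomes:

```python
def quality(successes: int, trials: int) -> float:
    """Quality of a node as a value in [0, 1]: e.g. the fraction of
    patterns an AI recognizes correctly, or a survival rate.
    Illustrative only; not the article's CR-Algebra derivation."""
    if trials == 0:
        return 0.0
    return successes / trials

# An agent whose castings produced the intended fact 41 times out of 50:
print(quality(41, 50))
```

Any such function works for the argument that follows: what matters is only that each node's quality can be scored on a common 0-to-1 scale.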

The Emergence of Meaning

The OIC model doesn’t require consciousness, and doesn’t explain it. This is the reason why we can apply it to the collective mind without having to determine whether the larger society has a consciousness of its own or not.

Bacteria reacting to their environment are knowledge-based agents, but they don’t learn: they don’t improve their quality function, because they can’t observe it.

The evolutionary advantage of introspection consists in the ability to evaluate the impact of castings on facts. In other words, to take the interaction of castings and facts as a new event, to be computed upon.
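A minimal way to picture this retroaction (purely illustrative; the agent, world, and scoring below are my own inventions, not the article's formalism) is an agent that feeds the outcome of each casting back into itself as a new event, and scores itself on it:

```python
# Illustrative sketch: the casting/fact interaction re-enters the
# agent as a new event, letting it evaluate its own knowledge.

class IntrospectiveAgent:
    def __init__(self):
        self.events = []      # internal representations of facts
        self.scores = []      # self-evaluations of past castings

    def cast(self, action, world):
        """Act on the world and observe the resulting fact."""
        return world(action)

    def introspect(self, action, intended_fact, world):
        """Take the casting/fact interaction itself as a new event."""
        fact = self.cast(action, world)
        self.events.append(("casting", action, fact))   # retroaction
        self.scores.append(1.0 if fact == intended_fact else 0.0)
        return sum(self.scores) / len(self.scores)      # running quality

# A toy world where flipping the switch reliably turns the light on:
world = lambda action: "light on" if action == "flip switch" else "dark"
agent = IntrospectiveAgent()
print(agent.introspect("flip switch", "light on", world))
```

The bacteria of the previous paragraph lack the `introspect` step: they can `cast`, but the outcome never becomes a new event they can compute upon.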

As the computational advantage of this retroaction becomes larger, the computational structure (the brain) grows more complex, and becomes able to look further down into the quality of the inferences, the observations, and lastly the events. A human can tell a hallucination from reality, and there is some evidence that great apes can too; a cat certainly can’t.

As you might have guessed, at some point the whole knowledge structure becomes a fact to be dealt with:

As the loops of interactions between observations, facts, and consequences of actions become larger, they get abstracted into increasingly complex levels of meanings.

- Distinction of the self: The creature computes its own actions, and can distinguish itself from the environment.

- Potentiality of the self: the creature can now improve its own castings in order to be more effective. It can fantasize about how to act.

- Quality of the potential: The creature can evaluate how well it determines its own castings; it can grade its own cognitive process, and try to improve it. This involves being conscious of one’s own inferences down to the events, with decreasing degrees of awareness the deeper we go into the automated knowledge processes.

- Future potential: At this level, the creature must start computing on the meta-facts. Now, the structure of knowledge becomes fractal.

- Quality of the future potential: the different futures imagined by the creature, thanks to the projection of the meta-facts into new inferences, can now be ranked in order of preferability. Here, meanings become values, as the creature can and must decide what meaning it prefers.

- Quality of the self: the level only humans have reached; now the creature can evaluate the quality of its evaluation of future outcomes. It doesn’t just rank its futures based on its values, but evaluates the values themselves, consciously choosing 1) which criteria will be applied to the selection of the alternative futures, and 2) how good that selection itself is.

At level six, we are not just able to imagine how good a future we pictured would be for us, but also how well we can make that estimate.

We are not just able to say that being rich and healthy would be good, but also that, maybe, this wouldn’t make us as happy as we might expect.

The Sense for Truth

We learned to perform evaluation as introspective knowledge agents: the quality of the super-facts, and then of the meta-facts, and then the quality of our evaluations of their utility in “making us better”. How well we score at that is the deepest meaning of what we call, in common parlance, “truth”.

In the knowledge structure realm, truth is the level of coupling of the knowledge agent with its environment. How well it copes with facts, how well facts are represented by events, summarized in observations, rationalized in inferences and acted upon via castings.

Truth is how well the internal representation of reality matches the world we operate in.

Our sense for truth, the striving for truth every sentient being feels, has been burned into us by evolution. As creatures, we might not have needed to know reality at all to survive; but it turns out that those creatures that knew the environment better than others survived over them.

And then, the creatures knowing how to rate their knowledge better than others survived over the former.

Conclusion

The fear of being wrong has the same source as the fear of death in the game of evolution, because they are basically one and the same.

We rose to the sixth level of meaning extraction because those creatures climbing to that peak of self-awareness were better at coping with the environment, at handling the facts coming their way, and so survived and thrived.

Truth is knowing reality, which includes the reality we create. We were created through truth, in order to be better and better at it, first across generations, then within single lives. And this means that, as far as knowledge agents and evolutionary biology go...

Truth is life. Life is truth.

Links

The origin of our longing for truth

The truth of the self

The Pyramid of Meaning

Zeitgeist 1: Complex Logic

Zeitgeist 2: The Structure of Knowledge

Zeitgeist 3: The End of the Lies