
Zeitgeist 2: The Structure Of Knowledge

jonnymind · Feb 9, 2019, 11:58:10 AM

In this second article of my series about the “end of the lies”, I explore how perception builds knowledge. This is the tool I will use to demonstrate that we’re at the beginning of the end of the age of propaganda.

To model how the building of knowledge works, I will leverage the CR-Algebra introduced in the previous blog of the series. The advantage of this approach is that it doesn’t need precision to be accurate: it generates approximately correct results, and the degree of approximation is tunable. Deeper analysis and more research can reduce the approximation and produce more precise models.

Everything I will present here is speculation based on my theoretical expertise in the fields of cybernetics, psychology and evolutionary biology. It’s a hypothesis I have empirically tested time and time again, but it has never undergone a procedure of scientific assessment.

A Pragmatic Approach to Knowledge

The model of knowledge I am going to develop here is a pragmatic one: it describes how knowledge is acquired by any knowledge-driven agent in order to act in its environment; in its simplest form, to survive. This includes simple creatures with central nervous systems, humans and AIs.

The OIC Model

The model I am developing is called Observation, Inference, Casting or OIC for short.

Knowledge starts with observing the environment. The evolution of the brain started when primordial creatures found a need to react to their surroundings: first, the need to react when a source of food is in proximity, then the need to move towards a source of food, then to flee from a predator, then to search for a mating partner, and so on.

Then, the observations are organised into inferences: deductions about how they correlate. For example: food is near; food makes me feel good; therefore I am quite OK for the day.

Finally, inferences are used in order to act, to project, to change the status of the environment; in one word, to cast the acquired knowledge back at the environment. For example, take the knowledge necessary to move towards a food source: food makes me feel good; if I eat again, I will feel good again; right now I am not feeling good; so I need to move and get that food.
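
Read as a pipeline, the three stages can be sketched in a few lines of Python. This is a toy of my own, with purely illustrative names and rules, not part of the model itself:

```python
# A minimal, hypothetical sketch of the OIC pipeline.
# Names and rules are illustrative only.

def observe(events):
    # Events merge into an atomic observation ("food is near").
    return "food is near" if "food_scent" in events else "nothing relevant"

def infer(observation):
    # Observations are correlated into an abstraction.
    return "eating will make me feel good" if observation == "food is near" else None

def cast(inference, feeling_good):
    # The inference is cast back at the environment as an action.
    if inference and not feeling_good:
        return "move towards the food"
    return "stay put"

print(cast(infer(observe({"food_scent"})), feeling_good=False))
# -> move towards the food
```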

Observations

Knowledge, that is, a conscious mapping of something that is not me, first requires the ability to determine something about the environment. I’ll call these initial concepts about the nature of the environment observations.


[Figure: The Observation Network]

Observations are not objective and absolute: they are already part of the knowledge built by an observer. For example, a human in a forest is able to pick up some aspects of reality that most animals are oblivious to, such as colour and certain shapes, while some animals can pick up aspects that humans have no ability to perceive, for example certain scents or sounds.

I call these aspects events: events are facts (things that happen in some reality) that can be picked up and recognised by the observer.

Events are not necessarily facts known via biological senses only. Normally, we are oblivious of the atoms forming the chair we’re sitting on, but the motion of the particles in the chair remains a fact, and it becomes an event when we gather information about it, e.g. through a particle accelerator. Of course, ultimately, even the scientists reading the data are doing so via their biological senses; but for the sake of this exposition, we can abstract this process away as “firmware”, or “simply there”.

Note: even these “low-level” facts/events would become relevant in other contexts (in complex logic, technically called “comprehension frames”).

So, the observational layer is formed by facts of reality, experienced as events, which are organised together as observations.

Well, this, and something else, called the residual.

Residual Observations

Let’s take a concrete example: “a bird just flew into a bush” is an observation: an atomic knowledge construct about “what’s going on”. It was built automatically by the interaction of multiple events: a shadow passed swiftly in front of your eyes, a rustling sound of leaves came from a bush, and, turning your head in that direction, you saw the leaves still moving. These events are how your senses gathered some facts about the structure of reality: the atoms of the bird’s body moved through space, pushing aside the atoms of the air and those of the leaves, which then vibrated, causing the atoms of the air to vibrate back, and so on; but those facts are irrelevant to your knowledge. Only the facts that you recognise as events will merge (through some complex relation) into the observation.

Suppose that, actually, it wasn’t a bird flying into the bush: it was a rock. Yet, you had the impression of seeing a bird flying, right in front of your eyes. Your brain built part of the final observation out of information that the events alone could not provide.

I call this other factor the residual observation (observation/r in the above graph). It’s “everything else” concurring in creating the observation, other than the “events” registered by your senses or your devices.

At times, the residual can be pretty overwhelming: in a hallucination there is no event at all; the whole of the observation is produced by the mind through other mechanisms that are too complex to be analysed here.

But this is exactly the key of this analysis: we put all the complexity of “everything else” into the /r node, and into its relation with the main concept. The result will not be perfect, but it can be improved by taking more and more out of the residual and analysing it little by little, until we’re satisfied with the achieved precision.

The complexity of relations

Events and the residual form the observation through complex relations. At this level of the analysis, this amounts to saying that an observation is formed in some way by events and something else.

If this seems somewhat vague, well, it is. But at the same time it’s a definition that is both complete and informative. It’s complete because it comprises everything that might ever contribute to forming an observation. It’s informative because we now have some information that we didn’t have before: how the things we know and the things we don’t know each influence the observations.

Suppose, for example, that we know that an observation is 70% determined by one event and 20% by another. The remaining 10% must be due to something else, whose details we don’t know and, at the moment, don’t care about. We know that what we know of, the events, determines 90% of the observation (whatever this means in the specific context we’re analysing), and that the remaining 10% is due to factors that are out of our control.

Suppose also that there is another observation for which the residual weighs 90%. We might know very little of this entity, but we do know a crucial thing: we know that we don’t know enough about it. If we think we need to know more, we can invest our resources in looking at what’s inside the residual element, breaking it down to get a better idea about that something else that’s contributing to our problem. Also, each improvement gives us the measure of our success. Each time we better our knowledge, and detail an event that was previously in the residual, we know how much is left unexplained, and how much gain (in terms of incremental knowledge) there is to be had in spending more time, energy and money on research.
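
As a toy illustration of this bookkeeping (the weights are made up), the residual of a concept is simply the share of its formation not accounted for by the known contributors:

```python
# Toy illustration: the residual weight of a concept is whatever share
# of its formation is not explained by its known contributors.
def residual_weight(known_weights):
    return round(1.0 - sum(known_weights.values()), 2)

bird_in_bush = {"shadow passed": 0.70, "leaves rustled": 0.20}
print(residual_weight(bird_in_bush))     # 0.1: well understood

strange_feeling = {"faint sound": 0.10}
print(residual_weight(strange_feeling))  # 0.9: we know we don't know enough
```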

A Word on The "Some Way"

Breaking down the residual element into better-defined concepts is not the only direction in which the analysis might progress. Besides understanding the something else that is influencing our observation, we might analyse the some way in which the influence is expressed.

Will that be a direct causal relation? (Rustling of leaves, could be a predator).

Will there be an inverse relation? (Absence of wind, there might be a predator)

Will there be a combination of influences? (Rustling of leaves and absence of wind, now it could really be a predator).

If the relations get tangled enough, it might be worth breaking them down and treating them as concepts in their own right. The combination of the relations between the absence of wind and the rustling of leaves could instead become a new event: the event incongruence, which might excite other observations.

Again, the analysis can be detailed as needed: if a simple combination of relations does the trick, fits the experimental data and produces adequate predictions, in short, if it’s good enough, so be it. If not, the observation layer might grow richer, and explain what’s going on in greater detail.
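
Here is a sketch of that promotion, with hypothetical names of my own: the tangled combination of relations becomes a derived event that other observations can consume directly:

```python
# Hypothetical sketch: a tangled combination of relations is promoted
# to a derived event, "incongruence", with a meaning of its own.
def incongruence(rustling_leaves, wind):
    # Leaves moving without wind: something other than the wind moved them.
    return rustling_leaves and not wind

def predator_nearby(events):
    # The observation no longer untangles the raw relations; it simply
    # reacts to the derived event.
    return incongruence(events["rustling_leaves"], events["wind"])

print(predator_nearby({"rustling_leaves": True, "wind": False}))  # True
print(predator_nearby({"rustling_leaves": True, "wind": True}))   # False
```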

Inferences

Once the environment is observed, and facts about it become part of the knowledge, those observations are correlated in order to produce an abstraction. Single observations are merged into coherent constructs through different kinds of relations. For example, we observe something fall, and we hear that thing make a sound when it hits the ground. From that, we infer that falling things make a sound, and we’ll now be careful not to drop objects while hunting a deer.

I will name these nodes of higher knowledge inferences.


[Figure: The Inference Network]


Residual Inferences

As for the observation layer, not all the inferences are perfectly explained by observations. No biological creature is perfectly rational, and if evolution is good at its job, probably no AI will be either. There will always be a component of the inference that is not due to observation: experience, prudence, fear, biases, and the other activities the brain is performing at the same time, which might short-circuit or drain the computational resources needed for a correct assessment of the observations. All these factors might influence the inferences.

As I did for the observation layer, I will capture all these factors in the residual inference node.

A Fractal Design

As for the observation layer, both the residual and the complex relations in the inference layer could be further explored, as deeply as needed (and as allowed by the available resources). As we repeat the same kind of analysis here, with similar results, similar structures would form at this level as well.

The final result might well be a fractal tree of levels.
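
One possible encoding of this fractal structure, assuming the weighted-contribution reading used above (the code is my own sketch, not the article’s formalism): every node owns a residual, and refining the analysis carves named children out of it, each carrying a residual of its own:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the fractal design: every node, at every
# layer, owns a residual; refining moves weight out of the residual
# into named children, which carry residuals of their own.
@dataclass
class Node:
    name: str
    children: dict = field(default_factory=dict)  # name -> (Node, weight)
    residual: float = 1.0                         # unexplained share

    def refine(self, child, weight):
        """Carve a named child out of this node's residual."""
        if weight > self.residual:
            raise ValueError("cannot explain more than the residual")
        self.children[child.name] = (child, weight)
        self.residual = round(self.residual - weight, 6)

bird = Node("a bird flew into the bush")
bird.refine(Node("shadow passed"), 0.7)
bird.refine(Node("leaves rustled"), 0.2)
print(bird.residual)  # 0.1 still unexplained; analyse further if needed
```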

A subtle distinction

The acute reader might have noticed that an inference seems nothing but a more subtle observation. The distinction is indeed somewhat arbitrary and artificial, and depends on the structure of the analysis.

I consider observations to be those concepts that emerge directly from the recognition of multiple events, without the intervention of higher cognitive functions. Inferences, on the other hand, are those concepts that take one or more observations and elaborate on them through non-automatic mental processes.

With a software analogy, an observation is produced by the firmware of a brain, and comes immediately formed, ready for conscious analysis, and atomic in its structure. The sensation that a bird entered a bush is complete in itself, and can’t be taken apart. You can reason about what caused it after the fact, you can assess its validity, or you can articulate it in words later on; but the idea of a bird flying into a bush, as the events were presented to your brain, was an atomic concept, a non-repeatable, non-simplifiable unit of cognition, which your consciousness could not take apart without completely dissolving it and turning it into a rather different construct.

The inference, instead, is a conscious evaluation of the atomic observations. The process of evaluation itself might be unconscious and inaccessible (and usually it is), but the necessity, the act of deciding to evaluate, and the results of that process are all conscious. It’s like the flight of a hummingbird: the bird knows that it wants to fly, it knows why, and it knows its destination. So, it just “tells its wings” to start humming, and off it goes. While its brain controls its muscles, and every single wing beat, its consciousness is as oblivious of this process as it is of the beating of its heart. The same happens here: cognition starts and directs the process of evaluating the observations, but it doesn’t know how they are mixed together in order to generate the inference. We can analyse, after the fact, whether an observation influenced an inference by 10% or by 90%; but as the inference is formed, consciousness doesn’t know it.

Continuing the IT metaphor, the process of evaluation is like a library function in a program. The high-level program prepares the parameters, which are the observations, and invokes the library function that processes them into a single inference. The internal workings of the function are totally hidden from the program, which will only know the final result.
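
Taken literally, the metaphor looks like this (names and the evaluation rule are mine, purely for illustration):

```python
# Literalising the library-function metaphor; names and logic are
# illustrative only.

def _evaluate(observations):
    # The library's internals: hidden from the caller, just as the
    # evaluation process is inaccessible to consciousness.
    if {"something fell", "a sound was heard"} <= observations:
        return "falling things make a sound"
    return None

def infer(*observations):
    # The public entry point: the "program" prepares the parameters
    # (the observations) and only ever sees the finished inference.
    return _evaluate(set(observations))

print(infer("something fell", "a sound was heard"))
# -> falling things make a sound
```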

Castings

Once an inference is made, it can be pragmatically used, by itself or in relation with others, to project the creature into its own environment. For example: “a falling shell breaks” is an inference derived from the observation of an event. “Inside a shell, there’s food” is another inference derived from observations. “I can make a shell fall to get food” is the basic idea some sea-faring bird has (maybe encoded genetically in its brain, at least in part) when hunting for food. As these interconnections of inferences have the purpose of acting on the surroundings, or projecting the creature into the environment, in order to achieve a practical effect, I will name them castings.


[Figure: The Casting Network]
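
Sketching the shell example above in the same toy style (again with hypothetical names), a casting chains existing inferences into a plan of action:

```python
# Hypothetical sketch: a casting combines inferences into a plan.
inferences = {
    "a falling shell breaks": True,
    "inside a shell there is food": True,
}

def cast_drop_shell(inferences):
    # An abstraction about action: "what would happen if I dropped it?"
    needed = ("a falling shell breaks", "inside a shell there is food")
    if all(inferences.get(k) for k in needed):
        return "carry the shell up and drop it on the rocks"
    return None

print(cast_drop_shell(inferences))
```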


Again, all that’s been said for the other layers applies here as well: in a deepening of the fractal design, everything that’s not accounted for, everything that is not determined by inferences and their complex relations, is grouped into the residual concept. This residual node can be further analysed as much as needed and possible, and the same can be done with the relations, which, if complex enough, could spawn intermediate inferences.

Again, the difference between castings and inferences is subtle: while inferences are abstractions about facts, for example generalisations, castings are abstractions about actions: fantasies about “what would happen if I...”.

Also, while the process of evaluating observations into inferences is unconscious, the process of casting is always conscious. Even when it becomes as natural as an instinct, so that part of the execution is automated, or requires less mental effort, each part, each one of the relations between inferences and castings, is still orchestrated and controlled consciously. The application of the relation, that is, computing how each inference contributes to the casting, becomes easier and easier, but the act of performing the computation remains voluntary.

The Casting Feedback

This is the complete network of events, observations, inferences and castings that I’ll use in the next article to demonstrate that the era of lies has come to an end.


[Figure: The Knowledge Network]

In this graph, there’s one line missing: the influence of the casting on the events. As the casting concept is computed, it is often (although not necessarily) actually applied in the environment. This causes something to happen, some fact to be generated.

Not all the facts turned into events by the senses are given by the environment: some are caused by the very knowing entity that is observing them. When a predator jumps to catch a prey, it is applying its casting. The success or failure of the action, along with other details, such as the physical sensations its muscles provide during the jump, the tactile sensation while grabbing or grazing the prey, the sounds, the visual assessment of the position of the prey, and so on, are all facts, which become observable events, which are then turned into observations that influence the inferences that will inform the next casting: whether the jump was good enough to catch the prey or not, and what could be done better next time.

The feedback line is not there because we don’t need it. For what concerns our analysis, the residual observation node will also include any possible effect of the casting feedback.

Conclusions

The OIC model of knowledge describes how information about the environment is turned into knowledge: how facts are turned into observations, inferences and castings.

In the next article, I’ll compute the critical point at which everything that is not fact-driven (the residual nodes in each layer) becomes so influential as to make the final casting ineffective. In other words: when reality comes back with a vengeance after you fail to recognise it.