
Zeitgeist 3: The End of Lies

jonnymind, Mar 9, 2019

In the third part of this article, we extend the OIC model of perception to the wider society, to its effect on reality, and we find the limits of deception.

This article builds heavily on the previous ones: Complex Logic and The Structure of Knowledge. Here, we bring the complex theory of the structure of knowledge to its conclusions: that there is a point past which deception ceases to work, and that our society is currently around that point.

The premises of the other two articles hold: there is, as yet, no academic research backing the analysis I propose here. My speculations are a solid hypothesis based on my expertise in the field.

Extreme Approximation

I will now try to build a model I can use to prove my hypothesis.

A perfect representation of the knowledge of a single creature, based on the OIC model introduced in part 2, would be an immense network of events, observations, inferences and castings, with a relatively small number of *rest* nodes, ultimately connected to the facts that are then turned into events again. But to prove my point in this particular reference frame, I don't need a perfect model; I need something accurate enough for our purpose, and compact enough to be handled in a few paragraphs.

Following the precepts of the epistemology of complexity, I will use approximation rather than the classical theoretical device of simplification. We need to take a lot into account, possibly everything, while retaining a certain practicality of use.

That’s what I’ll be doing now.

The holistic OIC model

So, let's represent the whole of the castings, inferences, observations, events and facts as sets: a single node for each of them.


The Holistic OIC Model


The complexity of their interactions will be summarised, or hidden if you like, in the value of each node and in the relations between them.

Each node will be the sum total of all the concepts it refers to. The value of the sum-node is the effect of their interactions.

In the HOIC model, the rest nodes represent whatever is not determined by the previous level; so Events/R represents the set of events that are not explained by the facts directly known as events, Observations/R represents all those observations that are not substantiated by events, and so on. Most importantly, Facts/R is all those facts that cannot be influenced by castings: it's the sum total of the world outside the sphere of influence of the entity the graph refers to.

Now, with all this complexity hidden in each node, we can extract many aspects of their complex value. We could count how many concepts are in each node, or how strongly they are correlated, or how fast change happens within each set. But for the purpose of determining whether the graph is possible, whether it can exist, we must determine if the represented system is stable: in other words, if it can survive.

The Survivability Function

Suppose we have a function called survivability, or S, which tells us if a node is "good enough" for the system to survive beyond the horizon of the reference frame. Such a function would tell us if the facts support the life of the owner of the system: S(F) = 1 means that the creature thrives in its environment, while S(F) = 0 means that the creature is dead. The value 0.5 indicates that the owner of the system is barely surviving; any lower value means that the environment is detrimental to the creature and will cause its death in the long run. The nearer to 0, the sooner that will happen.

Facts are partially influenced by what a creature does, and partially by factors it cannot control. Let's call this proportion influence, or I: it's a measure of how well the castings can transfer their effect onto the facts. So:

S(F) = Ic * S(C) + (1-Ic) * S(Fr)

This reads as: whether the facts are good enough for the survival of a creature depends, on one side, on how good its knowledge of how to operate in its environment is, and how well it can apply that knowledge to the environment; and on the other side, on how forgiving the part of the environment its actions cannot influence is.

Suppose we have a creature that is not very smart, but quite powerful, living in a reasonably forgiving environment. Its castings are not top notch (0.6), and the environment provides a fair part of what it needs to survive (0.4). With its great influence on the facts it controls (0.9), the final result is

S(F) = 0.9*0.6 + (1-0.9)*0.4 = 0.54 + 0.04 = 0.58

Our creature is doing somewhat fine. A very smart creature (0.9), with much lower influence (0.3) but in a more forgiving environment (0.6), can also do pretty well:

S(F) = 0.3*0.9 + (1-0.3)*0.6 = 0.27 + 0.42 = 0.69

But if the environment is very hostile (0.1), things get hard even for a smart (0.7) and powerful (0.7) creature:

S(F) = 0.7*0.7 + (1-0.7)*0.1 = 0.49 + 0.03 = 0.52
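
For readers who prefer code to arithmetic, here is a minimal Python sketch of my own (not part of the original article) that encodes the survivability formula and reproduces the three worked examples above:

```python
def survivability_of_facts(s_castings, s_rest, influence):
    """S(F) = Ic * S(C) + (1 - Ic) * S(Fr), as defined above."""
    return influence * s_castings + (1 - influence) * s_rest

# The three worked examples from the text:
print(round(survivability_of_facts(0.6, 0.4, 0.9), 2))  # 0.58: strong, not very smart, modest environment
print(round(survivability_of_facts(0.9, 0.6, 0.3), 2))  # 0.69: smart, weak, forgiving environment
print(round(survivability_of_facts(0.7, 0.1, 0.7), 2))  # 0.52: smart and strong, hostile environment
```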

We have found a way to determine whether a creature will survive; or better, a measure of how much it will thrive or struggle, all condensed in a little magic number between 0 and 1. Can something so complex really be that simple?

Yes, if you just ask the right question.

Selecting the reference frame

Indeed, it is this simple; a creature thrives or struggles depending on how much the environment supports its survival, and how well it can fix what's lacking.

With time, a creature improves its adaptation to the environment, increasing the survivability of its castings and its influence on the facts; even the simplest creature is a... different beast when taken in its infancy versus at the peak of its development. Also, entire species can become better at castings and influence with each generation.

But this doesn't mean that our equation is incorrect; it just means that it's valid only for understanding part of the problem. In particular, the question it answers is: "As things are now, how would this creature fare?"

The key to our ability to understand that problem is a correct definition of that now.

As described in the previous article, as we analyse reality more and more deeply, and enlarge our time horizon, there isn't any independent variable anymore. If we take into consideration all of space and time, there is no fact that could be considered external to our system; there is no "rest". It all becomes a single, infinitely large system, where every node influences every other. But we don't have an infinity of time to invest in our research, and such perfect knowledge would not even be all that useful in the now we want to know about.

We need to determine when the analysis is good enough. In other words, we need to set the reference frame.

For example: suppose we want to determine the survivability of a certain creature during the upcoming summer. We will have to consider how clement the weather will be, what the creature knows now and how that knowledge will change in the next three months, and what actions it will likely take in order to apply its knowledge to the environment before the winter comes. We will then crystallize this information into three numbers that tell us how well the creature is suited to survive the next three months.

If we want to repeat the exercise for a whole year, we must also consider how hostile the winter might be and how well the creature is suited to survive it; whether there will be enough food during the autumn, or whether the spring will bear fruit soon enough; and how the creature will grow, or weaken, in a year's time. Now we'll want to use these new numbers instead, and they will certainly be different. Moreover, they will be more approximate.

We can go as far as assigning a probability that our evaluations will hold: in 3 months, in 1 year, in 5 years and so on. For those versed in higher mathematics and statistics: we might even associate a probability function with our evaluation; and if the probability function is correct, our evaluation will be as good as it can get.

But for the scope of this article, we don't need to go there. We just need to define a reference frame that we are in a good position to evaluate, where we can assign good numbers to all the components of our equations.

The Whole System

So, once we establish our reference frame, facts are determined only by the castings and by the part of the facts that can't be controlled (the rest); events by facts and their rest; observations by events and their rest; inferences by observations and their rest; and finally, castings by inferences and their rest. So the facts ultimately influence the castings... which influence the facts.

In traditional logic, we are at an impasse: we have an effect that is the cause of its own cause. It's a chicken-and-egg situation, which defied classical logic for a long time. But we are using complex logic, and circular reasoning is no longer an unbeatable foe, nor an error. Actually, it's proof that we're on the right track (as long as the loop we found can be matched against an observation... and the loop goes on, ad lib).

A bit of simple linear algebra fixes this problem for us; what we have here is nothing but a system of linear equations. To simplify the expression, we use the letter indicating each node to signify its survivability. From now on, C means “survivability of the castings”, F means “survivability of the facts”, and so on.

F = Ic * C + (1-Ic) * Fr
E = If * F + (1-If) * Er
O = Ie * E + (1-Ie) * Or
I = Io * O + (1-Io) * Ir
C = Ii * I + (1-Ii) * Cr

Since we want to know how well the system of knowledge grown by the creature makes the facts good enough for its survival, we can solve the system for this variable.

The result is the Knowledge-Survival Relationship Function, or more simply, the Knowledge Function:


The Knowledge Function
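
Solving the system as written for F, by substituting each equation into the next from the bottom up, I get the following closed form (my own rearrangement; the original figure may present it differently):

F = [ Ic*Ii*Io*Ie*(1-If)*Er + Ic*Ii*Io*(1-Ie)*Or + Ic*Ii*(1-Io)*Ir + Ic*(1-Ii)*Cr + (1-Ic)*Fr ] / (1 - Ic*Ii*Io*Ie*If)

For those who would rather let a machine do the algebra, here is a minimal Python sketch of my own (not part of the article) that derives the same expression with sympy:

```python
# Solve the five-equation system above for F symbolically.
import sympy as sp

F, E, O, I, C = sp.symbols('F E O I C')
Fr, Er, Or_, Ir, Cr = sp.symbols('Fr Er Or Ir Cr')
Ic, If_, Ie, Io, Ii = sp.symbols('Ic If Ie Io Ii')

system = [
    sp.Eq(F, Ic * C + (1 - Ic) * Fr),
    sp.Eq(E, If_ * F + (1 - If_) * Er),
    sp.Eq(O, Ie * E + (1 - Ie) * Or_),
    sp.Eq(I, Io * O + (1 - Io) * Ir),
    sp.Eq(C, Ii * I + (1 - Ii) * Cr),
]

solution = sp.solve(system, [F, E, O, I, C], dict=True)[0]
print(sp.simplify(solution[F]))
```

The denominator, 1 minus the product of all the influences, is what makes the loop of facts, events, observations, inferences and castings converge instead of exploding: as long as at least one influence is below 1, the circular system has a single, stable solution.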


We are all one

Can our society sustain the current level of deception? How can the Knowledge Function help us answer this question?

The HOIC model condenses sets of concepts into specific nodes, accepting an increased approximation. All the observations are summarised in the "observations" node, all the inferences in the "inferences" node, and so on. This makes the model less detailed, but not less correct: its correctness depends on how we define the values of the nodes, not on how much detail it carries.

If we can condense all the concepts living in a single cognitive system, can we do the same for a system comprising multiple cognitive sub-systems? Can we do this for the collective mind?

The collective mind, which is the cognitive system of the larger society, operates on its environment through the collective corpus of knowledge known as technology, which is based on the inferences called science and culture, which are drawn from gathered observations of events, which is how it perceives the facts it influences.

Yes, indeed, the OIC model, and the approximated holistic OIC, apply to any cognitive system, including the collective mind.

The Reference Frame

What I want to demonstrate is that we have recently passed the tipping point in the cultural landscape, and that further "pushing of narratives" devoid of reality is no longer sustainable behavior. My reference frame will be the past year: 2018 and the beginning of 2019. I will not consider humanity at large, but the western cultural sphere only; not the relationship of the human species with nature and natural resources, but the capacity of our culture to survive its reality.

Plugging in the numbers

Now that we have the formula, we can set a reference frame and "make up" some numbers. What we want is an estimation of the values to experiment with.

The value of the rest elements is the survivability of what we don't control: how well the facts, events, observations, inferences and castings that our culture doesn't control (in the reference frame) support its survival. An example of Events/R would be the meme sub-culture, which creates new events that can go viral and influence the culture at large, but are completely random and unmanaged (in the very restricted reference frame we have chosen). An example of uncontrolled inferences could be a viral commentary, or a book, that was not inspired by the publicly available and shared observations of the time; the works of Jordan Peterson might be just that. Uncontrolled castings would be the hijacking of communication platforms and political power by subversive, unrestrained groups that are not influenced by the "common sense" (inferences) of our time.

With these general principles explained, let's try to evaluate the survivability of the rests.

Facts/R: 0.65. In general, the facts happening outside the direct control of our culture have been quite favorable to it. On the other hand, uncontrolled immigration and multiculturalism, injected into the mainstream culture by elites with a different agenda, are a threat; I still see the overall set of facts as a net help to our civilization, but not overwhelmingly so.

Events/R: 0.5. Events not determined by facts, those "perceived facts" that are actually not there, don't seem to play a particularly positive or negative role in our culture by themselves. In general, we are very good at reading facts, and events that are simply not justified by facts don't seem to destabilize society on their own.

Observations/R: 0.2. With observations, the story changes, literally. Fact: a policeman shoots a robber. Event: the TV broadcasts a policeman shooting a robber. Observation: the policeman overstepped the bounds of proportional response and is guilty of unwarranted use of deadly force. This is where the bad narratives that are tearing our culture apart originate: in the misreading of events; and recently, nearly all of these misreadings have gone in the direction of being inflammatory, hyperbolic and disruptive.

Inference/R: 0.45. What's the effect of inferences that were not derived from observations (no matter whether those observations are themselves factual or not)? Although there has been some work in the direction of revitalizing our culture, such as Peter Boghossian's "grievance studies" project, or Lindsay Shepherd's exposé of the corruption at her university, I still think the overall balance is slightly negative. For example: #metoo, the women's march, "punch a nazi", etc., are all self-sustaining constructs that weren't originated by observations: a single observed event might have started them, but they grow as inferences outside the influence of the mass of ordinary observations, and that is what makes them "rest". There's a mix of bad and good there, but I sense that the bad is still a bit predominant.

Casting/R: 0.25. The effect of unwarranted policies, such as senseless immigration, predatory welfare, and anti-cyclical economic policies, has been extremely bad for our culture; this has even been recognized by some of the actors who implemented those baseless policies, such as Angela Merkel.

Now for the influences. The numbers here represent how strongly each step influences the next: how many facts turn into events, how well events translate into observations, how many inferences are informed by factual observations, how much our policies are based on inferences, and finally the actual power of our policies to shape our reality.

Influence of Facts: 0.75. In the current year, most of the things that actually happen end up somewhere where our collective mind can perceive them.

Influence of Events: 0.3. This represents how well the gathered news is interpreted in terms of basic observations; for the collective mind, it would be the media opinion, plus the base opinions shared by the public. We have grown quite terrible at this, even going so far as to deconstruct the logical processes that should guide the formation of well-assessed opinions.

Influence of Observations: 0.75. How observations contribute to generating inferences. Most intellectuals, and the general ethos, are greatly influenced by the available observations. There is much work going on around perceived facts, such as the gender gap, the perceived differential of oppression, infringements of the Bill of Rights, or perceived censorship. All these observations, true or not, tend to generate inferences quite strongly and promptly.

Influence of Inferences: 0.65. Lawmakers, companies and academia are relatively fast and thorough in reacting to the inferences; not as fast as the inferences are generated from the observations, but still quite responsive.

Influence of Castings: 0.5. Policies seem to be hit and miss, with some highly effective in achieving the desired result, and some completely insubstantial in their effects.

Plugging these numbers into the formula, I get a result of 0.493: barely tipped under the resilience point.
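
As a rough cross-check, here is a minimal Python sketch of my own (not the author's spreadsheet) that plugs these estimates into the closed form derived earlier. How each "Influence of X" maps onto a coefficient is my reading of the article, so the exact figure may differ a little from the 0.493 above, but it lands in the same place: just under the 0.5 threshold.

```python
def knowledge_function(Fr, Er, Or_, Ir, Cr, Ic, If_, Ie, Io, Ii):
    """Closed-form solution of the five-equation system for F."""
    numerator = (
        Ic * Ii * Io * Ie * (1 - If_) * Er
        + Ic * Ii * Io * (1 - Ie) * Or_
        + Ic * Ii * (1 - Io) * Ir
        + Ic * (1 - Ii) * Cr
        + (1 - Ic) * Fr
    )
    return numerator / (1 - Ic * Ii * Io * Ie * If_)

result = knowledge_function(
    Fr=0.65, Er=0.5, Or_=0.2, Ir=0.45, Cr=0.25,  # survivability of the rests, as estimated above
    Ic=0.5, If_=0.75, Ie=0.3, Io=0.75, Ii=0.65,  # influences, mapped as described in the text
)
print(round(result, 3))  # lands just below the 0.5 resilience point
```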

You can play with those numbers and plug in your evaluations here: The Knowledge Formula

Conclusions

The knowledge formula is telling us that, in the past year to year and a half, we have just tipped under the stable survival level of 0.5. Our culture is starting to break down, and this is mainly due to the level of the "influence of events": that's where the lies rest. It is a measure of the willful falsehood, or misconception, involved in understanding reality as we already perceive it and turning it into useful observations on which to base our judgments (inferences).

Just tweaking that value from 0.3 to 0.4 would bring us back in the green.

My point here is that we cannot afford any more lies or deceptions, as they are starting to break our culture down; and starting to demand the truth is the best way forward, if we want our civilization to survive.

The cycle

And now that the conclusions are drawn, you might want to re-read the introductory paragraph of the first article, to appreciate the premise anew, and close the complex-logic cycle that article opened.