*This cycle of three articles is about the reason why, in this age where we have developed the instruments for maximal deception, we have also produced the greatest sense for truth, and how, ultimately, lies are going to lose.*

But to get there, I first need to introduce some analysis tools powerful enough to support our hypothesis. Don’t worry: I’ll keep it slim, and just point at the sources in case you want to dig deeper into each topic.

In this cycle of articles, we’ll talk a lot about cycles. Indeed, we’ll proceed through interrelated logic cycles, chains of thought where each link is complete in itself, yet interrelated with the next. And the whole chain itself is closed on our conclusion: *the age of deception has come to an end*.

There is a reason, a deep, important reason why I am not starting from a set of hypotheses and working my way up to the thesis through logic and deduction in the reasoning I am proposing here. But for the moment, I just ask you to stick with me, because, you’ll see, this will make sense only when the whole reasoning is complete: when the chain is closed.

I will then ask you to read this paragraph again, after reading the whole cycle of the three articles; at that point, you will see how *the process is part of the demonstration*. In this case, *the form is the substance*.

And starting from the end, here is a summary of what is going to happen:

1. In part 1, I describe the tools I am going to use.

2. In part 2, I will use the tools described here to explain how our mind builds a map of reality.

3. In part 3, I will show how we price the errors in drawing such a map of reality, and how this cost has been steadily increasing, to the point of becoming unbearable.

I can’t do any of this using an axiomatic logic system. While my method is compatible with the scientific method, my procedure is not compatible with traditional logic. I will be dealing with such *complexity* that it can only be addressed through *complexity science*.

Reality is complex. We know that instinctively, but the great success of the hard sciences has created the strong illusion that reality can be axiomatised.

The Hilbert Program, at the very beginning of the previous century, consisted in producing a system of symbols able to explain itself through itself only. It was the dream of a mathematical language that wouldn’t require any notion that could not be expressed through that language. A dream that was nearly fulfilled by Bertrand Russell and Alfred North Whitehead in their *Principia Mathematica* (1910).

The only flaw was what became known as “the barber paradox”. Suppose we have two sets of men in a small town: those who shave themselves, and those who are shaved by the town barber; and suppose the barber shaves exactly those men who don’t shave themselves. In which set does the barber belong? Whichever set we pick, he ends up belonging to the other.

This is the set theory version of the ancient conundrum of the liar uttering the sentence “I can only say lies” (actually, in the original Epimenides’ version, it’s the Cretan saying “all Cretans are liars”).

Russell was sure that, with some more effort, this small glitch in his matrix would be solved. After all, he had mathematically demonstrated that one plus one equals two, and that is a fairly hard matter, probably the hardest matter ever faced by mathematics.

But in 1931, Kurt Gödel finds a way to express, through the same mathematical language used by Russell, a sentence declaring that the sentence itself can’t be proven. By doing this, he mathematically demonstrates that... mathematics can’t demonstrate itself. More precisely, the *incompleteness theorems* demonstrate that any formal symbol system rich enough to express arithmetic is either *inconsistent* or *incomplete*.

In short, Gödel demonstrates that mathematics can’t be the only epistemology of science. We also need a *pragmatic* component, in order to advance scientific knowledge through the scientific method of the experimental falsification of a hypothesis. We can express everything through mathematics, but that will never be all that must be done: we then need to go out in the world and test our findings. Indeed, the root of modern science, the Galilean method, is itself based on pragmatism, and the attempt to remove the final, practical, worldly step of a concrete demonstration was undertaken long after Galileo died.

However incomplete our senses may be, however “unreal” what we experience might be, there is no self-evident axiom. To be science, any mathematical construct must be validated through the *insertion of a symbol external to the symbolic language that must be proven*. Logic can never prove an axiom-free truth: at some point there will be an axiom that can’t be proven through logic itself, and that axiom can be validated only through experimental observation.

Or in other words, by trying it out and seeing if it works.

Starting from the early sixties, two branches of mathematics start to explore “the world of reality”: a world that is not based on a set of axioms, a non-Euclidean space where the “common sense” of the basic a-priori assumptions makes no sense.

*Chaos Mathematics* and *Cybernetics* attack the problem from two different sides. The first explores self-replicating mathematical structures, such as *fractals* (Mandelbrot, 1967): entities that never start nor end, that need no axioms, as they are self-defined. The second does the same with logic, describing the effect of elementary computational steps applied to other elementary computational steps (possibly, the first application of *recursion* in cybernetics is in a discipline called *system dynamics*, Forrester, late 1950s, finally formalized in 1969 in “Urban Dynamics”).

Soon, the potential of this new approach to mathematics is explored by physics (e.g. *fluid dynamics*), chemistry (Thermodynamic Theory of Structure, Stability, and Fluctuations, Prigogine, 1971), economics (Kenneth Arrow, General Economic Equilibrium, 1974), cognitive science (Maturana & Varela, *The Tree of Knowledge*, 1984), with impacts on ecology, biology, sociology and countless applications in other disciplines.

A *pattern emerges*. A common thread in the *philosophy* of this new approach to the scientific method: the surrender of an axiomatic point of view to embrace the self-organising, self-defining nature of *complex phenomena*, studying them not as simplified *samples of themselves*, but as a full, *complex reality*, as the “thing that’s really out there”.

The study of this new point of view on science itself is called *science of complexity*.

On one side of the ocean, the American school studies it through a more pragmatic lens, less concerned with the philosophical underpinnings and interdisciplinary commonalities of the approach. The peak of the research in the field is the *Santa Fe Institute*.

The European approach takes the route of studying the deep significance of this way of doing science: an *epistemology of complexity* is born with the work of Edgar Morin (1977): **La Méthode**, “The Method”.

On the basis of an epistemology designed to capture the *complexity of reality*, Morin developed a *scientific method*, a revised Galilean method to bring science past the limits of the last century.

The details are not relevant here: for the time being we will just lean on the Morinian concept of *complex logic*.

Circularity is a fallacy in traditional logic, but it’s the only possible state of a valid expression in complex logic.

Traditional logic is axiomatic: given a set of statements or symbols that are “assumed” true, formally called *axioms*, it proceeds through transformations dictated by formal rules, obtaining results formally called *theorems*.

Complex logic is circular. We define *entities* and establish *relations* between them. The network of relations is valid only when it is either *actually* closed, with the last concept being in some sort of relation with the first, or when it’s *validly assumed to be closed*. In this second case, we accept having initial and final concepts, under the assumption that they are a partial, temporary view of a larger closed network of concepts.

As the logic is circular, it cannot be used to prove itself (actually, we can establish “connection rules” that must be respected for a specific logic network to be formally valid, but that’s a technicality we can ignore for the purpose of these articles).

The validity of a complex logic network of concepts and relations is proven only *pragmatically*, by matching it with observed facts, and/or verifying the results of the predictions it suggests *in practice*. This process is called **coupling**.

Traditional logic is verified through **derivation**, which is the application of rules to axioms and already proven theorems. Complex logic is verified through **coupling**, that is, the verification of the accuracy of descriptions (*normative models*) and predictions (*predictive models*) against observations of the reality the structure models.

If you think this is a circular definition... well... it is.

A complex description of a complex phenomenon can never be perfectly true. Not dissimilarly to what happens in quantum mechanics, where it is not possible to know everything about a particular quantum particle, it is impossible to perfectly describe a complex system: by the time a perfect description had been generated, the complex system would already be different enough to make its description less than perfect.

For this reason, the principle of **coupling** accepts *degrees* of truth, rather than requiring perfect truth.

The existence of degrees of truth in logical mathematics was already explored by *fuzzy logic*; in this theory, the degree of truth is a property of each single predicate, and the truth value of a whole expression is determined by applying the logic operators to each fuzzy predicate.

For example, in fuzzy logic I can say that “if today it’s clear and warm, I’ll go play football”. So, if the sky is somewhat cloudy (truth of “clear day”: 50%), and it’s just lukewarm (truth of “warm day”: 50%), the implication (“I’ll go play football”) is just *somewhat true* (25%).
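To make the arithmetic explicit, here is a minimal sketch of that evaluation in Python. One assumption of mine: the 25% above corresponds to the *product* rule for the fuzzy “and” (Zadeh’s original min-based “and” would yield 50% instead):

```python
# Fuzzy "and" computed with the product t-norm: each partially-true
# predicate scales the overall degree of truth.
def fuzzy_and(*truths):
    result = 1.0
    for t in truths:
        result *= t
    return result

clear_day = 0.5  # the sky is somewhat cloudy
warm_day = 0.5   # it's just lukewarm

# Degree of truth of the implication "I'll go play football"
print(fuzzy_and(clear_day, warm_day))  # 0.25
```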

The truth of a complex logic expression measures “how well” it couples with reality.

However, there is a *pragmatic paradox* in this definition of truth: not dissimilarly to what happens in quantum mechanics, because of the very formal definition of both, the truer a complex logic expression is, the less useful it is.

It’s easy to build an expression that is certainly true; for example: “we exist in a universe”. This is certainly true, but it doesn’t give us much information, and is not of much utility. On the other hand, a statement as precise as “where I stand, gravity acceleration is 9.8 m/s²” is certainly false, as it’s impossible to measure gravity acceleration exactly at any given instant in a specific point; but however “untrue” it is, it’s *true enough* to be helpful in many situations.

Coupling accepts a degree of truth that is *true enough* for the appointed purpose; when it’s not true enough, we work on the complex logic statement, refining it until the cost of making it truer is greater than the benefit we achieve from pushing it further. And if we can’t reach that point, it means the statement is *not true enough*, and we discard it, in search of a better interpretation of reality.

It’s an epistemology rooted in *pragmatism*, and that’s exactly the point: this approach to science has brought us here, typing on our devices and sending our thoughts into a network of thoughts, where they are free to search for cures to an ever-shrinking number of illnesses and to project mankind to the Moon, to Mars, peeking into the stars, piercing the very first instant of this universe, and unravelling the infinity of infinite universes, of which ours is but an infinitesimal infinity.

Or, using a wonderfully recursive complex logic statement, *pragmatism just works*.

A *Concept-Relation Graph* is a representation of a complex-logic structure, where entities called *concepts* are connected through functions called *relations*. *Concept-Relation Algebra* is a formal algebra that computes on those *relations*, eventually producing a *value* for each *concept* in the graph.

In short, CR-Graphs are a *mathematical formalisation of complex logic statements*.

This is the tool we’ll use in the next articles of the cycle. To show what it’s about, I have prepared a couple of simple examples.

Suppose I am really fond of gummy bears. Each unit of gummy bears gives me a unit of happiness (ignore, for the sake of this example, the complexity of the concept of happiness quantization). But gummy bears are not good for my health: each unit of gummy bears I eat subtracts a unit of health. On the other hand, being happy is good for my health: each unit of happiness gives me a unit of health.

Also, eating a unit of healthy food replenishes my health, at a rate of 3 units.

Now, to buy gummy bears and healthy food, I need to put in some extra work. But I do that only if I am happy, and I need 5 units of happiness to get a unit of extra work.

This is a CR-Graph expressing just one kind of relation: causality. The relation is transitive: if A causes B, and B causes C, then A causes C. Also, the graph presents several *recursions*, where a thing is a distant cause of itself. Although self-causing things are hard to conceptualize, they are omnipresent both in reality and in mathematics. A simple equation system, for example a set of equations where x = 3y + 2 and y = -2x + 1, implies that x is a partial determinant of y, and the other way around.
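The mutual determination in that little system is easy to check numerically; a quick sketch, using NumPy purely as an illustration:

```python
import numpy as np

# x = 3y + 2 and y = -2x + 1, rewritten in matrix form:
#    x - 3y = 2
#   2x +  y = 1
A = np.array([[1.0, -3.0],
              [2.0,  1.0]])
b = np.array([2.0, 1.0])

# The unique point where each variable "partially determines" the other
x, y = np.linalg.solve(A, b)
print(x, y)  # x = 5/7, y = -3/7
```

Substituting either value back into the other equation returns the remaining one, which is exactly the circular determination described above.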

Indeed, this graph might be expressed as a system of equations, where, for example, Happiness = 3 Gummy Bears + 2 Health.

My question is: What's the effect of extra work on happiness?

To know that, you just have to expand all the equations like the one above, starting from Extra Work (W) to get to Happiness (H). So, if you go like W = 0.2H -> W = 0.2(3G + 2h) and so on, you'll find out that each extra unit of work generates 1.25 units of happiness.
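In graph terms, that expansion is a sum of weight products over all causal paths. Below is a minimal sketch of such a computation; the `CRGraph` class and the toy weights are my own invention for illustration (the article's actual coefficients live in its graph figure, not reproduced here), and a depth cutoff keeps the recursions finite:

```python
from collections import defaultdict

class CRGraph:
    """Toy CR-Graph: concepts connected by weighted causal relations."""
    def __init__(self):
        self.relations = defaultdict(list)  # cause -> [(effect, weight)]

    def relate(self, cause, effect, weight):
        self.relations[cause].append((effect, weight))

    def effect(self, cause, target, depth=10):
        """Total effect of a unit change of `cause` on `target`: the sum,
        over every causal path of at most `depth` steps, of the product
        of the weights along the path (transitivity of causality)."""
        if depth == 0:
            return 0.0
        total = 0.0
        for nxt, w in self.relations[cause]:
            if nxt == target:
                total += w
            total += w * self.effect(nxt, target, depth - 1)
        return total

g = CRGraph()
g.relate("A", "B", 2.0)   # one unit of A causes two units of B
g.relate("B", "C", 0.5)   # one unit of B causes half a unit of C
g.relate("C", "A", 0.1)   # a feedback loop: C weakly reinforces A

print(g.effect("A", "C"))  # 1.11: the direct path plus two passes of the loop
```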

Notice that we have to cut the loop between happiness and work at some point here: it's a technicality due to the fact that we didn't put any "external" factor in the graph -- no uncaused cause. The graph is a set of interconnected loops. This is what the whole of the universe (or the whole of *a universe*) looks like; and if you solve this equation system, you'll end up with an undefined result: mathematically, this turns out to be an *indeterminate system*, where any set of solutions is possible.

A valid CR-Graph will have at least some uncaused cause: while nothing is actually disconnected, we stop our analysis where we have enough grasp of reality to safely assume that some part of the graph is *actually* as we declare it to be. But we always know that the uncaused cause, were we to look deeper, would turn out to be just part of a larger loop.

CR-Algebra doesn't deal with just one kind of (linear) relation: its power is in being able to deal with more complex, unstable relations. And what better example of unstable relations than diplomacy?

In our example, we'll deal with two relations: alliance and enmity.

An unstable relation is a relation that changes nature as it's applied through a CR-Graph. The enmity relation changes as we apply it, so that if A is an enemy of B, and B is an enemy of C, A and C are somewhat allied. So, expressing alliance and enmity with the letters "a" and "e", the rules regulating the transitivity of relations are:

aa = 1/2 a

ae = ea = 1/2 e

ee = 1/2 a

Also, if B and C are allies of A, but D is an ally of C and an enemy of B, what's D's diplomatic relation with A?

Since D is the ally of one ally and the enemy of another, it is reasonable to think that these different positions will cancel out, and A will be neutral to D. Similarly, if a country is allied with several of our allies, the alliance will grow stronger; conversely, enmity will grow when a country is allied with several of our enemies, or the enemy of several of our allies. So:

*n*a + *m*a = (*n*+*m*) a

*n*e + *m*e = (*n*+*m*) e

*n*a + *m*e = (*m*-*n*) e when *m*>*n*, or (*n*-*m*) a otherwise

Now, we can solve the political dilemma of the continent of Bellica:

The nations of Bollia, Cammia and Domia are interlocked in a reciprocal enmity; Farsia is allied with Cammia, and Antasia is allied with Bollia. What's the diplomatic relation between Antasia and Farsia?

Writing down the graph equations, we see that:

* A = aB

* B = aA + eD

* C = eB + eD + aF

* D = eB + eC

* F = aC

Unlike the previous example, these relations are bi-directional, which produces a slightly simplified mathematics when dealing with recursions, and requires some special operations when producing an equation for distant relations. Walking the graph back from F to A, the equation turns out to be:

relation(A,F) = aC = a(eB + eD) = a(ea + eeB) = a(ea + eea)

Solving the relations as per the above definitions, we obtain

relation(F,A) = a(1/2 e + 1/2 aa) = a(1/2 e + 1/4 a) = (1/4 e + 1/8 a) = 1/8 e

So, the diplomatic relation between Antasia and Farsia turns out to be a tenuous enmity.
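For the curious, the whole Bellica computation can be mechanised. Here is a sketch under one encoding assumption of mine: alliance and enmity become signed magnitudes (positive for "a", negative for "e"), so that sign products reproduce the transitivity table (aa -> a, ae -> e, ee -> a) and signed sums reproduce the cancellation rules:

```python
A_UNIT, E_UNIT = 1.0, -1.0  # unit alliance "a" and unit enmity "e"

def compose(r1, r2):
    """Transitivity: chain two relations; magnitudes halve, signs multiply."""
    return r1 * r2 / 2.0

# relation(A, F) = a(ea + eea), expanded as in the article:
ea = compose(E_UNIT, A_UNIT)   # ea = 1/2 e  -> -0.5
eea = compose(E_UNIT, ea)      # ee(a) = 1/4 a -> +0.25
inner = ea + eea               # opposite signs partially cancel -> -0.25
relation_AF = compose(A_UNIT, inner)

print(relation_AF)  # -0.125, i.e. 1/8 e: the tenuous enmity found above
```

The same two functions evaluate any walk through the diplomacy graph, which is what makes the signed-magnitude encoding a convenient sketch of this corner of CR-Algebra.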

Full disclosure: CR-Algebra is my pet theory, and it is being reviewed as I write this article. The latest draft of my paper can be found here. It's solid math, but it has not yet been vetted by the scientific community, although it has already been under its scrutiny for some time, and no objection has been raised to date.

But, since it's the only tool I know of that can work through the kind of complexity we'll be dealing with when analysing the interactions between abstract social constructs, I will shamelessly plug it in.

CR-Algebra itself is not concerned about proving its truth, or about truth values at all. I’ll be leaving the **coupling** of our work as an “exercise for the reader”, or, more precisely, I’ll invite the readers to improve our work.

I’ll fire the first shot, and then I’ll invite the readers to refine our work, adding new concepts, fine-tuning the relations, evaluating new results, until we reach a consensus over the **coupling** of our work with the observed reality.

I don’t have the resources to perform measurements to determine how well our graphs will couple with reality, but a collective consensus is already a form of measurement. Not the best one, admittedly, but CR-Algebra, and complex logic at large, are somewhat self-correcting, and advance through selection and refinement. In fact, we will talk about the evolution of correctness in our representation of reality, which extends to complex logic.

So, I don’t expect to give you perfect numbers; but with your help, I expect to provide you with a CR-Graph **true enough** to explain what’s going on in the Zeitgeist of the culture.
