Complexity

Hadi · 23 min read, 4416 words


Let's define a new variable for complexity.


[Header wallpaper image; reference 1]

The question is: what is complexity? A while ago I watched this

The Biggest Gap in Science: Complexity

I am happy that Sabine pointed out its importance. That was when I decided to write this post, but I had other priorities until now. It's not too late! The other interesting aspect of this video is that, in my humble opinion, it has all the dots we need to connect to find the answer.

Here, even though you will find the math pretty straightforward, you read it here first, so remember that I want the credit for it!

Introduction

I mentioned the dots in the video, so let's list them first. Then I'll add more dots to our graph, and in the end, the conclusion will be in front of us, hopefully!

First, at the start of Sabine's video there was a cup of coffee to which milk had recently been added. The patterns, as she described them, emerged as the system went from low entropy to high entropy, but in the middle of the process the complexity apparently has a peak. This cup of coffee may be familiar to you too, because this surge of complexity in the middle of the process is comparable to the emergence of life while the entropy of the Earth is increasing. And guess what! Life is complex.

The second is the rock, clock, and baby example. The question in that example was: which one is the most complex? I hope everyone agrees that the baby is the most complex, even though we still don't have a measure for it. I am sure even the geologist would agree!

In the middle of the video, she explains her desired properties of complexity. These requirements are as follows.

  • Emergent properties
  • Edge of chaos
  • Evolution

Before going further, I would argue that evolution is an emergent property, so it's already included in the first point. Also, the structures in the cup of coffee-and-milk mixture are not evolving into anything! This reduces our list to

  • Emergent properties
  • Edge of chaos

This is very interesting, because I don't see why she couldn't see the answer at this point! When I was watching it, I was like: she knows, she definitely knows, ..., she doesn't know!

Now that we've reached the edge of chaos, I have to show you yet another gold of a video, which is also related to this, though not at first glance! Before watching it, I should mention that matter near its critical point is a chaotic system; therefore, based on Chaos Theory 2, it can be modeled with fractals, as you will see in the video below. Phase transitions are what we're looking at to give us a language to talk about the edge of chaos. There could be other videos that talk about this stuff, but I found this one on another level. Hopefully, we all agree that the brain is the most complex part of our body, and this video explains that it is indeed at the edge of chaos.

Brain Criticality - Optimizing Neural Computations

Artem mentioned fractals, which are invariant under scaling. What we need from this video is an understanding of the Ising model, where the critical point lies between the two states of the system, and also the derivation of the power law 3 below. He defines a variable, \(P_x\), for the probability of observing a cluster of size \(x\) in the Ising model. This probability is a function, \(f(x)\), that is invariant under scaling

\[ \frac{f(kx)}{f(x)}=g(k) \]

Here, \(k\) is the scaling factor. \(g(k)\) should not depend on \(x\) because the structure is scale invariant. Mathematically, it can be proven that only the functions below satisfy the above equation. This means

\[ f(x)=Ax^{-\gamma}, g(k)=k^{-\gamma} \]

Where \(\gamma\) is a constant. Keep these in mind, but before going further, let's mention the rest of the details we need to reach a conclusion here. As mentioned before, the scale invariance of these structures means they are some kind of fractal 4. A structure is a fractal if it has a fractal dimension 5. So let's list the resources about fractal dimension that I'll use later. I'll recall my previous work on "The dimension of spacetime is a measure of entropy" 6, where I need entropy as the measure of fractal dimension. However, this Hadi is no longer the one who treated his hard work as a treasure to be kept, who feared he would lose credit for it and, consciously or unconsciously, made the explanation vague. Here, I am going to explain it as clearly as possible.

It would be educational to check out Grant Sanderson's (3Blue1Brown) video

Fractals are typically not-self-similar

This one also describes the power law

\[ N = cs^{D} \]

Where \(s=1/k\) is the scaling factor, \(N\) is the number of measurement units (sticks, squares, boxes, etc.), and \(D\) is the fractal dimension. Additionally, he explains the log-log plot 7 as well. Indeed, this also confirms that the critical point, the edge of chaos, in the Ising model is a fractal.
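
To make the log-log plot concrete, here is a minimal sketch in Python (my own illustration, with made-up counts) of reading the dimension \(D\) off as the slope of \(\log N\) versus \(\log s\).

```python
# A minimal sketch: estimate D in N = c * s**D by fitting a line in log-log space.
# The (s, N) values below are made up purely for illustration.
import numpy as np

s = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # scaling factors
N = np.array([3.0, 12.0, 49.0, 195.0, 790.0])   # counts of measurement units

# The slope of log N vs. log s estimates the fractal dimension D;
# the intercept gives log c.
D, log_c = np.polyfit(np.log(s), np.log(N), deg=1)
print(f"estimated fractal dimension D ≈ {D:.2f}, c ≈ {np.exp(log_c):.2f}")
```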

And last but not least, we need to recall my "Reproducibility" post 8, because, as mentioned before, we need to talk about the emergent properties of matter to define complexity, and in that post I explained that reductionism stands in the way of doing so. Even Sabine, in her video above, explained that the entropies of the rock, the clock, and the baby are not that different, because the underlying structure of all of them is atoms and molecules. That argument is based on reductionism. This is the problem we should get rid of by throwing reductionism away.

Complexity

We don't have a lot of choices when we want to model complexity with fractals. Fractals are defined by their fractal dimension, so the quantity of complexity must be the fractal dimension, or something derived from it. But how do we measure that? In Grant's video above, he uses the Minkowski–Bouligand dimension, or box-counting dimension 9. However, in reality, it's less obvious how you are going to count the boxes! For instance, for the rock, the clock, or the baby in Sabine's video, how could we possibly count the boxes?
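
For a concrete picture of what that counting looks like when it is possible, here is a minimal box-counting sketch for a 2D binary image; it follows the standard Minkowski–Bouligand approach from Grant's video, and the toy image is my own assumption. It is shown only for contrast with the next paragraph.

```python
# A minimal sketch of the box-counting (Minkowski–Bouligand) dimension for a
# 2D binary image. Shown for contrast: the next paragraph argues this is not
# the counting we actually need.
import numpy as np

def box_count(image: np.ndarray, box_size: int) -> int:
    """Count boxes of side box_size containing at least one occupied pixel."""
    h, w = image.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if image[i:i + box_size, j:j + box_size].any():
                count += 1
    return count

def box_counting_dimension(image: np.ndarray, box_sizes=(1, 2, 4, 8, 16)) -> float:
    counts = [box_count(image, b) for b in box_sizes]
    # N(b) ~ b**(-D), so the slope of log N vs. log(1/b) estimates D.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), deg=1)
    return slope

# Toy example: a filled square should come out close to dimension 2.
img = np.zeros((64, 64), dtype=bool)
img[16:48, 16:48] = True
print(f"box-counting dimension ≈ {box_counting_dimension(img):.2f}")
```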

The definition of dimension that we are seeking here cannot be box counting, because it depends on the coordinate system we choose, since the boxes are aligned with that coordinate system. Instead of boxes, we need to count some kind of structure in the system.

The interesting part is that the small change we need to apply to the current standard definition of entropy is exactly what is listed in our requirements: the emergent property of complexity. That's why I think Sabine is on the right track, if I am!

In the Reproducibility post 8, we pointed out the contradiction in reductionism when we think about the calculation of entropy in the current standard way. The contradiction is that we count the states of molecules and atoms, while there are smaller, so-called elementary, particles whose states we could count instead. In other words, atoms and molecules are structures that emerge out of gluons, electrons, and quarks; therefore, in the standard approach to calculating entropy, we count the states of emergent structures. This clearly breaks the claim of reductionism that we can rely only on the elementary particles to describe systems.

If we already use emergent structures to count the entropy, then why not leverage this to count the states of other emergent structures? It's not just leveraging; it's a necessity for a consistent theory. On the other hand, we should not forget to include the states of gluons and quarks in our calculation. Notice that counting the states of elementary particles is not something new. In the standard approach, we argue that in the end we need the differential of the entropy, a.k.a. \(dS\), and as long as no nuclear reaction is going on in the system, the terms related to that counting are constant and thus contribute nothing to our overall results.

Did you hear that some physicists are claiming we already know everything about our surroundings, because we have thoroughly tested everything down to the elementary particles? I smiled when I heard that the first time, simply because of how reductionism's failure has failed some of us. My personal approach is that if we understood everything locally, we would understand the whole universe.

Nonetheless, the new idea here is that now we can count other emergent structures in our calculation as well, because reductionism has failed. Indeed, complexity must be related to counting the emergent structures of the system.

Emergent structure

We argued that atoms and molecules are emergent structures built from electrons, gluons, quarks, and the other so-called elementary particles. Notice that Sabine's requirement for complexity was emergent properties, not emergent structures; however, the idea in this post is that we need emergent structures to quantify complexity, rather than only emergent properties. The question is: are there other kinds of emergent structures besides atoms and molecules? In general, my personal direction is that all emergent structures have the same math behind the scenes, but it's a work in progress, even though it already has a name, Constructivity. My almost-latest math that can potentially describe emergent structures relies on the metric tensor and a spacetime curvature that does not depend on mass in the same way gravity, in General Relativity, depends on mass. I explained some corners of it in the Superconductors' puzzle pieces post 10.

But what are the other emergent structures?! Folks, there are emergent structures all around us. The first example could be the structures that appear in the middle of mixing coffee with milk in the cup of coffee; recall Sabine's example. The hidden claim in this post is that objects we categorize as complex must have emergent structures, which leads to an increase in their fractal dimensions. The nice thing, which I already mentioned in the Reproducibility post 8, is that these emergent structures must hold energy, due to the first law of thermodynamics 11.

\[ \Delta U=T\Delta S- P\Delta V \]

Where \(U\) is the internal energy of the system. This means that if we have emergent structures in the system, they affect the entropy of the system, and the internal energy changes accordingly, depending on the temperature, pressure, and change in volume. That makes this claim measurable.

That said, I have some speculations about other emergent structures with the same math, but it's a work in progress. I don't see any barrier to claiming that Dark matter 12, Galaxy filaments 13, the Cosmic web, galaxies' disks, Fermi bubbles 14, etc. in cosmology are all the same kind of emergent structure. That's why Dark matter is not a particle, yet you can still count it as matter. I would like to meet a brave experimentalist who would measure the energy of filaments inside complex matter. By filaments inside complex matter, I am referring to more speculation: the van der Waals force 15, and also the Casimir effect 16, can be described with the same kind of math as emergent structures like Galaxy filaments. Perhaps the structures in the mixture of coffee and milk are visible van der Waals filaments. Now you're sure that I am high, but I am not!

Emergent entropy

As mentioned before, we need to change the way we calculate entropy a little bit to extract a measure of complexity. So let's give it a new name, emergent entropy, even though in the Reproducibility post 8 I was still calling it entropy.

Before diving into the math, the idea of emergent entropy is that we count all the structures in the system and add them up. By all, I mean all: even the states of the building blocks of spacetime, then electrons, gluons, quarks, etc., then atoms and molecules, then van der Waals filaments, then the emergent boundary structures around the object 10, e.g. the boundaries of cells if the system is alive, then probably the cell filaments inside the brain, then emergent structures like trees, humans, pencils, or even a Tesla Model Y, then probably the galaxies, and after that galaxy filaments, etc. I hope that gives you the big picture. The point is that if you only calculate the terms related to the emergent entropy of spacetime, you will get almost \(3\), the dimensionality of the spacetime fractal, all over the place, which is a constant, and as before it contributes nothing to the differential of the entropy, a.k.a. \(dS\). Therefore, the results we had before stay the same, as long as the spacetime dimension is almost constant.

By the way, if you cannot imagine counting filaments, take a look at Artem's video, where he counts clusters, which are built out of some kind of filament.

[Figure: clusters, from Artem's video]

Indeed, we'll stick with "cluster" to describe what we're counting.

Emergent fractal

Now let's complain!! As a lot of physicists and mathematicians have complained before, we don't have the correct tools to describe complexity yet! Or, as a person who builds tools for a living, let's build some tools.

The current definition of fractals depends on the limit of the scale parameter going to infinity. At the end of Grant's video, you can see how we only accept the slope of a curve in a log-log plot as a fractal dimension if it asymptotically has a constant slope. But we know that in reality we can only scale down to a little below the size of atoms. So scaling down to zero is pragmatically out of reach and useless. Therefore, we need to stop seeking the scale invariance property of fractals and instead introduce the concept of emergent fractals. Drawing the log-log plot the same way as before, an emergent fractal is a structure that can have different slopes as the scale parameter changes. This condition is so weak that any structure I can imagine is an emergent fractal! Cool! So we can use it for a theory of everything!! haha!!

But we have yet to mention how we count in the first place, before we can draw the log-log plot. For the dimension of an emergent fractal, we count very similarly to what Artem did in his video: we count the number of what he called clusters of a specific size, which are basically closed loops, closed surfaces, or closed hyper-surfaces in higher dimensions, made out of the emergent structures. This is the algorithm to calculate the emergent fractal dimension. We choose a cluster and call it the zeroth order cluster; therefore, when the scale parameter, \(s\), equals \(s_0\), the number of clusters is exactly \(N(s_0,s_0)=0\), if we avoid counting the container itself. Then we count the clusters inside that zeroth order cluster, \(N(2s_0,s_0)\), whose size is bigger than, or equal to, half of the zeroth order cluster, which means their scale parameter must satisfy \(s_0 < s \leq 2 s_0\). Recall that \(s=1/k\): increasing \(k\) scales up, but increasing \(s\) scales down. We then proceed recursively, setting the scale parameter to \(2 s_0 < s \leq 4 s_0\) to calculate \(N(4s_0,s_0)\). Thus, the count of \(i\)th order clusters, \(N^{(i)}(s_0)=N(2^i s_0, s_0), i > 0\), is calculated by counting in the range \(2^{i-1}s_0 < s \leq 2^i s_0\). Additionally, we can argue that \(N^{(0)}(s_0)=0\). To make it clear, \(s_0\) is the size of the container structure surrounding the system under study.
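
Here is a minimal sketch of that counting step (my own illustration, with hypothetical cluster sizes): given the sizes of the clusters found inside the container, bin them into the dyadic ranges \(2^{i-1}s_0 < s \leq 2^i s_0\) to get the \(i\)th order counts \(N^{(i)}(s_0)\).

```python
# A minimal sketch of the dyadic counting described above. Cluster sizes are
# expressed in the same units as s0, and larger s means a smaller structure (s = 1/k).
import math

def order_counts(cluster_sizes: list[float], s0: float, max_order: int) -> list[int]:
    """Return [N^(0), N^(1), ..., N^(max_order)], with N^(0)(s0) = 0 by definition."""
    counts = [0] * (max_order + 1)
    for s in cluster_sizes:
        if s <= s0:
            continue  # the container itself is not counted
        order = math.ceil(math.log2(s / s0))  # i such that 2**(i-1)*s0 < s <= 2**i*s0
        if order <= max_order:
            counts[order] += 1
    return counts

# Hypothetical cluster scale parameters, in units of the container scale s0 = 1.
sizes = [1.6, 1.9, 2.5, 3.0, 3.9, 5.0, 6.5, 7.9]
print(order_counts(sizes, s0=1.0, max_order=3))  # -> [0, 2, 3, 3]
```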

Here, let me remind you of the power law in Artem's video, but now \(N(s,s_0)\) is not scale invariant; thus, it depends on \(s_0\).

\[ N(s,s_0) = {\left(\frac{s}{s_0}\right)}^{\gamma} \]

and it would be like below for the log-log plot

\[ \log N(s,s_0)=\gamma\log \frac{s}{s_0} \]

where we have one unknown, \(\gamma\), but in emergent fractals, unlike fractals, there is more than one point, which means the known variables far outnumber the unknowns once we get rid of scale invariance; therefore, the natural extension of this equation would be

\[ N(s,s_0) = {\left(\frac{s}{s_0}\right)}^{\gamma^{(0)}+\gamma^{(1)}\left(\log\frac{s}{s_0}\right)+...} - 1= {\left(\frac{s}{s_0}\right)}^{\sum_{i=0}\gamma^{(i)}{\left(\log\frac{s}{s_0}\right)}^{i}} - 1 \]

Where the \(-1\) fixes the initial point at the \(0\)th order cluster. Now we can have the same number of unknowns, \(\gamma^{(i)}\). By the way, the log-log plot shows why it's "natural" without forcing us to question our life choices!

\[ \log (N(s,s_0)+1)=\sum_{i=0}\gamma^{(i)}{\left(\log \frac{s}{s_0}\right)}^{i+1} \]

Just a heads-up: \(\gamma^{(1)}\) is related to the curvature of the emergent fractal, while \(\gamma^{(0)}\) is its dimension. At this point, let's write down these equations in terms of the known variables, \(N^{(m)}(s_0)\), the counts of \(m\)th order clusters.

\[ \log (N^{(m)}(s_0)+1) = \sum_{i=0}\gamma^{(i)}{\left(\log \frac{2^m s_0}{s_0}\right)}^{i+1}= \sum_{i=0}\gamma^{(i)}{(m\log 2)}^{i+1} \]

It's easier to find the \(\gamma^{(i)}\) by writing these equations in matrix form.

\[ [\log (N^{(m)}(s_0)+1)]_m = [{(m\log 2)}^{i+1}]_{m,i}\times[\gamma^{(i)}]_i \]

Hence, the unknown variable matrix could be found like this

\[ [\gamma^{(i)}]_i= [{(m\log 2)}^{i+1}]_{m,i}^{-1}\times[\log (N^{(m)}(s_0)+1)]_m \]
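
As a sanity check, here is a minimal sketch of that matrix inversion (the counts are hypothetical, and the trivial \(m=0\) equation, which reads \(0=0\), is dropped so the system stays square and invertible).

```python
# A minimal sketch: recover the gamma^(i) coefficients from the order counts N^(m)(s0).
import numpy as np

def fit_gammas(N_m) -> np.ndarray:
    """Solve log(N^(m)+1) = sum_i gamma^(i) * (m log 2)^(i+1) for m = 1..M."""
    N_m = np.asarray(N_m, dtype=float)
    M = len(N_m) - 1                       # highest order present
    m = np.arange(1, M + 1)                # the m = 0 row is trivially 0 = 0, so skip it
    i = np.arange(M)
    A = (m[:, None] * np.log(2.0)) ** (i[None, :] + 1)   # [(m log 2)^(i+1)]_{m,i}
    b = np.log(N_m[1:] + 1.0)                            # [log(N^(m)+1)]_m
    return np.linalg.solve(A, b)

counts = [0, 2, 3, 3]      # hypothetical N^(0)..N^(3) from the counting step above
gammas = fit_gammas(counts)
print(gammas)              # gammas[0] is the dimension, gammas[1] relates to curvature
```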

Now, we are ready to calculate emergent entropy.

Calculating the emergent entropy

In the above algorithm for counting \(N\), there are lots of cases where, among the \(i\)th order clusters, we can assign types to different kinds of clusters. These types can be defined based on different properties of those clusters. For instance, we can consider hydrogen atoms as clusters whose property is a specific set of spectral lines. We can also refine this partition by specifying their energy, so we could have a type for hydrogen atoms within a specific kinetic-energy range.

Let's tag the different types with numbers, like \(N_{m,s_0,1}, N_{m,s_0,2},... , N_{m,s_0,n}, ...\). Notice that the dependence on \(m\) shows the counts can differ between \(m\)th order clusters, because the system is not scale invariant, and \(s_0\) marks the local boundary inside which we count. This typing partitions the clusters in a way for which we already have a lot of examples in statistical mechanics. Each of these types makes its own emergent partition with its own dimension \(\gamma^{(0)}_n\), curvature \(\gamma^{(1)}_n\), etc.

\[ \log (N^{(m)}(s_0,n)+1)=\sum_{i=0}\gamma^{(i)}_n{(m\log 2)}^{i+1} \]

On the other hand, the entropy, based on how Boltzmann's entropy formula 17 is defined, looks like the following.

\[ S=k_B\log \Gamma \]

Where \(k_B\) is the Boltzmann constant, which we set to one for the rest of this post, and \(\Gamma\) is the number of all possible configurations of the system. This means the count of configurations is the product of the counts of each partition; therefore, \(\log \Gamma\) must be the sum of those counts. Nevertheless, we need to count the number of configurations inside each partition as well, and there could be different relationships among the inner clusters of a partition. For instance, fermions and bosons are the famous different countings we could have in a partition. Let's say we can multiply the number of clusters in a partition by a coefficient, \(f(m,n,N^{(m)}(s_0,n))\), to get the number of all configurations of that partition. For instance, it could be \(f(m, n,N^{(m)}(s_0,n))=N^{(m)}(s_0,n)!\) for a partition whose clusters are identical, where \(!\) is the factorial.

Notice that \(m\) and \(n\) can be highly correlated, which means that for a specific \(n\), like hydrogen atoms, there's almost only one possible size; therefore, only one \(m\), among all \(N^{(m)}(s_0,n)\), would be non-zero. This means \(\log N^{(m)}(s_0,n)\) gets out of hand for the \(m\) and \(n\) where \(N^{(m)}(s_0,n)\) is zero. To avoid such a situation, we shift \(N^{(m)}(s_0,n)\) by one; for the huge numbers that we usually work with in statistical mechanics, the difference can be easily ignored. Thus, we have the following for all the configurations of \(m\)th order.

\[ S^{(m)}(s_0)=\sum_nf(m,n,N^{(m)}(s_0,n))\times\log (N^{(m)}(s_0,n)+1) \]

And of course you can sum them up to get the total emergent entropy.

\[ S(s_0)=\sum_mS^{(m)}(s_0)=\sum_{m,n}f(m,n,N^{(m)}(s_0,n))\times\log (N^{(m)}(s_0,n)+1) \]

Here, if we want to capture the complexity of the structures that emerge when adding milk to coffee, we need complexity to depend on \(m\) and \(n\). Hence, I introduce this definition of complexity.

\[ \Chi^{(m)}(s_0,n)=f(m,n,N^{(m)}(s_0,n))\times\log (N^{(m)}(s_0,n)+1) \]

Or

\[ \Chi^{(m)}(s_0,n)= f(m,n,\exp({\sum_i\gamma^{(i)}_n{(m\log 2)}^{i+1}})-1)\times\sum_i\gamma^{(i)}_n{(m\log 2)}^{i+1} \]

Therefore, complexity is the emergent entropy of a partition, a.k.a. a single type of cluster. The sum of all complexities is the total emergent entropy

\[ S(s_0)=\sum_{m,n}\Chi^{(m)}(s_0,n) \]
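
To show how these sums fit together in practice, here is a minimal sketch (my own illustration, with made-up counts and the simplest choice \(f=1\); the post notes \(f\) could instead be, e.g., \(N!\) for identical clusters).

```python
# A minimal sketch of Chi^(m)(s0, n) = f(m, n, N) * log(N + 1) and of the total
# emergent entropy S(s0) as the sum of all Chi terms. Here f is taken to be 1.
import math

def chi(N: int, f: float = 1.0) -> float:
    """Complexity of one partition; N is shifted by 1 inside the log, as in the text."""
    return f * math.log(N + 1)

# counts[m][n] = N^(m)(s0, n): hypothetical counts per order m and cluster type n.
counts = {
    0: {"atoms": 0, "filaments": 0},
    1: {"atoms": 120, "filaments": 4},
    2: {"atoms": 0, "filaments": 7},
}

complexities = {(m, n): chi(N) for m, per_type in counts.items() for n, N in per_type.items()}
total_emergent_entropy = sum(complexities.values())
print(complexities)
print(f"S(s0) = {total_emergent_entropy:.3f}")
```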

This is interesting because you can write the First law of Thermodynamics 11 like this

\[ \Delta U=- P\Delta V+\sum_{m,n}T\Delta \Chi^{(m)}(n) \]

Notice we dropped \(s_0\) this time, because this form feels more familiar; however, we still count inside the zeroth order cluster as before. The First law of Thermodynamics models an irreversible process, which is particularly useless if we want to capture the structure of the coffee and milk mixture, for obvious reasons. On the other hand, if you stare at it long enough, it'll confess that the temperature, \(T\), is the average energy per configuration. Therefore, we need to extend it to a more general case. One way to do that is to assign different temperatures to different partitions, \(n\).

\[ \Delta U=- P\Delta V+\sum_{m,n}T^{(m)}(n)\Delta \Chi^{(m)}(n) \]

So \(T^{(m)}(n)\) is the average energy that the emergent structures in partition \(n\) are holding. It's all good and ready for experiment!

The last thing I need to mention is that this definition can capture the complexity of software by measuring something like the Shannon entropy 18 of the inputs and outputs of a piece of software. The orders, \(m\), could be defined by the index on the stack memory while calling a function, method, procedure, etc. with specific inputs and outputs. The emergent structures are the inputs and outputs, whose uncertainty we count using a function space 19, which is basically defined as: the set of functions from \(X\) to \(Y\), which may be denoted \(Y^X\). This by itself shows why statically typed programming languages are less complex: the uncertainty of inputs and outputs is lower than in dynamically typed ones. This leads to lower emergent entropy, or complexity, for each function, and for their summation, the total emergent entropy. But you knew that, right?
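
As a rough illustration of the software analogy (the function, the observed calls, and the choice to report the input and output entropies separately are all my own assumptions), a sketch could look like this.

```python
# A minimal sketch: empirical Shannon entropy of the values seen at a function's
# input and output. Tighter (statically typed) signatures constrain these
# distributions, which is the post's argument for lower complexity.
import math
from collections import Counter

def shannon_entropy(samples) -> float:
    """H = -sum(p * log2 p) over the empirical distribution of the samples."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical observed calls to some function: (input, output) pairs.
calls = [(1, "a"), (2, "b"), (2, "b"), (3, "a"), (1, "a"), (4, "c")]
inputs, outputs = zip(*calls)

print(f"H(inputs)  = {shannon_entropy(inputs):.3f} bits")
print(f"H(outputs) = {shannon_entropy(outputs):.3f} bits")
```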

To calculate complexity in any other domain, you just need a tree graph, which in software is given by the stack memory, and a space in which to count the emergent structures.

That being said, we're ready for the conclusion.

Conclusion

In the end, we can argue that \(\Chi^{(m)}(n)\) satisfies the requirements mentioned by Sabine.

  • The defined complexity relies on emergent structures; therefore, we can argue it's an emergent property.
  • It's defined at the edge of chaos, because emergent fractals, built from the emergent structures, are an extension of fractals, and fractals are the standard models for describing chaos in Chaos Theory 2.

Additionally,

  • It can describe the coffee and milk mixture experiment, life forms, brain, etc.
  • These emergent structures could be a solution to long-standing questions like what Dark matter is, or how the van der Waals force works.
  • It's measurable, because these emergent structures can hold energy; therefore, we can measure that energy, or at least its average, a.k.a. \(T^{(m)}(n)\).
  • It can be generalized to the information realm.

References

Cite

If you found this work useful, please consider citing:

@misc{hadilq2024Complexity,
    author = {{Hadi Lashkari Ghouchani}},
    note = {Published electronically at \url{https://hadilq.com/posts/complexity/}},
    gitlab = {Gitlab source at \href{https://gitlab.com/hadilq/hadilq.gitlab.io/-/blob/main/content/posts/2024-02-07-complexity/index.md}},
    title = {Complexity},
    year={2024},
}