Type Mechanics

Hadi published on
49 min, 9726 words

Categories: Math Physics


Did you ever wonder why Quantum mechanics1 works, at least at the calculation level, even though it does not follow Constructivity's principles2? I mean, the "shut up and calculate" approach of the Copenhagen interpretation3 worked, right? Notice this mostly refers to the gaps between the computations, which lack a comprehensive explanation, rather than to the computations of Quantum mechanics themselves. After all, nobody has a problem with the calculation part, since it is constructive and boring4!

Anyway, I hope it will be clear to you by the end of this post why it even works! Type Mechanics is inspired by the Matrix Mechanics of Heisenberg5, and also by its predecessor, Statistical Mechanics6. I would say that this is the third major revision of this concept. Obviously, there are some differences, which will probably help us decide how to experimentally validate them.


Above wallpaper reference7

Obviously, Type Mechanics must follow Constructivity's principles2, so I could argue it is worth working on. There is no new assumption or principle in Type Mechanics; mostly, reducing principles will be the case. As a reminder, principles MUST be observable in Constructivity2, which is the main reason we can get rid of some non-constructive principles, such as the Pauli exclusion principle8. Anyway, before diving into the main goal, we have to play with some ideas and construct some tools, especially mathematical ones. So this post is a continuation of previous ones in my blog, especially the Counting space in Geometrical Probability9, the Constructivity2, and the Structurism4.

If you have questions like why I think I can develop a theoretical framework outside of academia, you can read the acknowledgment section at the end.

A brief notice is needed here though! In my mind, the path to perfection is zig-zag, so you are not about to read a perfect version of this theory, the same as my other writings on this blog! Small mistakes, typos, grammar errors, even small mistakes in calculation are expected here! In fact, there has never been scientific writing without mistakes, so if you have a scientist's mindset and act like a search engine, just like me, you are prepared to find them, and some of them are there intentionally to increase the fun! Hilbert's writings had mistakes; what did you expect from me?! Just find the mistakes and count your points. If you are kind, ping me on social networks, and I will iterate on it if it is not intentional! This is part of my rebellion against academia's bureaucracy, especially the so-called scientific format and peer-review process. Thanks!

Let's dive in! Starting with Algebraic Data Types (ADT)10 in Type Theory11.

Algebraic Data Types in Type Theory

Here we don't want to dive deep into Type Theory11; instead, we just want to review sum-types12 and product-types13. These two will build Algebraic Data Types (ADT)10 for us. Hence, if you are familiar with these concepts, feel free to jump to the next section. Just be mindful that these two kinds of types are capable of building up anything we need in systems of formal logic, as the Curry-Howard correspondence14 proved.

Before explaining them, you should know that a type is a property we assign to objects. I would say it is the next version of Set Theory since, as discussed in previous posts2, sets are not constructable. But here we will construct types after the Generating functions section.

Sum-type:

If you have two types, \(A\) and \(B\), and you want to build another type out of them, \(C=A+B\), then an object of type \(C\) is either of type \(A\) \(or\) of type \(B\). Notice, the \(+\) operator computes the sum-type of \(A\) and \(B\). The \(or\) is very important here, which shows the relationship to systems of formal logic. We call it a sum-type12 because if there are \(n\) objects of type \(A\) and \(m\) objects of type \(B\) in a given problem, then there are \(n+m\) objects of type \(C\), so the count of objects is computed with addition.

Product-type:

Very similarly for the product-type. If you have two types, \(A\) and \(B\), and you want to build another type out of them, \(C=A\times B\), then an object of type \(C\) is a pair of objects of type \(A\) \(and\) type \(B\). Notice, the \(\times\) operator computes the product-type of \(A\) and \(B\). As before, the \(and\) is very important here. We call it a product-type13 because if there are \(n\) objects of type \(A\) and \(m\) objects of type \(B\) in a given problem, then there are \(n \times m\) objects of type \(C\), so the count of objects is computed with multiplication.
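As a minimal sketch (my own illustration, with hypothetical objects), the two counting rules above can be checked in a few lines: a sum-type holds objects from either collection, a product-type holds pairs.

```python
from itertools import product

# Hypothetical example: n = 3 objects of type A, m = 2 objects of type B.
A = ["a1", "a2", "a3"]
B = ["b1", "b2"]

# Sum-type C = A + B: an object is EITHER from A OR from B.
# We tag each object so the two sides stay distinct even if values repeat.
C_sum = [("A", a) for a in A] + [("B", b) for b in B]

# Product-type C = A x B: an object is a pair (a, b).
C_prod = list(product(A, B))

print(len(C_sum))   # 5, i.e. n + m
print(len(C_prod))  # 6, i.e. n * m
```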

Generating functions

The Generating functions15 have always seemed magical to a lot of us, but the underlying mechanism is just easy algebra! These tools are simply power series whose coefficients are a sequence of numbers related to the problem at hand. We are especially interested in their use in Enumerative combinatorics16, where you can clearly see their relationship to the sum-type12 and product-type13. Enumerative combinatorics is where we solve problems related to counting combinations and permutations. To give you a hint, the counting is the interesting part, since we want to relate these tools to the Counting Space9. But let's not rush it!

In Enumerative combinatorics, the generating function of the \(f_n\) series is of the form

\[ F(x) = \sum_{n}f_nx^n \]

or

\[ F(x) = \sum_{n}f_n\frac{x^n}{n!} \]

These could be used to solve different kinds of combinatorics problems, and both of them show interesting properties. I am rewriting the textbook Union16 and Pair16 properties in the language of Type Theory11, so we can unite these two ideas in one powerful framework; then we will talk nature with this framework.

Given two types, \(A\) and \(B\), with \(m\) and \(n\) objects of these types respectively. We call them atomic types, since we cannot divide \(m\) and \(n\) into further classifications with their own types. If in the future our knowledge about the system increases and we can classify them, then the new underlying types would be the new atomic types; therefore, atomic types are NOT promised to be concrete. In fact, based on the argument about newly found classifications above, we can write the old atomic types in terms of the new atomic types and their algebraic operations. By the way, these atomic types are also called primary types.

Notice, "classification" is the terminology used in the textbook study of generating functions; however, you can rephrase the sentences above to use "abstraction" instead, as the way we build these atomic types. Hence, an abstraction hides the properties of the objects of an atomic type in a way that you cannot distinguish them in any manner. However, if you find new abstractions to split the objects, you will have new atomic types, as explained before.

So we can write the \(F(x)\) and \(G(x)\) generating functions for these atomic types respectively. Since they are atomic and we only have one number in each sequence, \(F(x)=m\) and \(G(x)=n\) are constant generating functions.

Now let's count the number of objects of type \(A + B\) again. As before, the number of objects in \(A+B\) is \(m+n=F(x)+G(x)\). Very similarly for the product-type, \(A\times B\): the number of objects in \(A\times B\) is \(m \times n = F(x) \times G(x)\).

To make it easier to talk about the generating functions of types, define \(\eta\) to map types to their generating functions. Thus for the simple case above we have

\[ \begin{array}{ll} \eta(A)=F(x), &\eta(B)=G(x), \\ \eta(A+B)=F(x)+G(x), &\eta(A\times B)=F(x)\times G(x) \end{array} \]

Thus, \(\eta\) is an isomorphism17 for both \(+\) and \(\times\) operators.

Here we can build more types on top of the atomic types by using these operators. Let two types be exclusive if they don't have a common object. In Enumerative combinatorics16, it's a common approach to force their objects to be distinct even if they are equal! Given a series of exclusive atomic types, \(A_i\), with \(f_i\) objects each, we can create a generating function like

\[ F(x) = \sum_{n}f_nx^n \]

Thus, for a given problem that has the \(f_i\) counting series, the generating function of \(A_0+A_1+...\) would be this \(F(x)\), since the power series doesn't let the \(f_i\) coefficients mix up. In other words

\[ \eta(A_0+A_1+...)=\sum_{n}f_nx^n \]

Notice the total number of objects in \(A_0+A_1+...\) can be calculated by simply setting \(x=1\)

\[ \sum_{n}f_n \]

Given two problems with two counting series, \(f_i\) and \(g_i\), over the common exclusive atomic types \(A_i\), we can build their generating functions as follows.

\[ F(x) = \sum_{n}f_nx^n, G(x) = \sum_{n}g_nx^n, \]

These correspond to type \(B=A_0+A_1+...\) for problem one, and type \(C=A_0+A_1+...\) for problem two. \(B\) and \(C\) have the same structure; the only difference is the number of objects with these types in those problems. We're curious to combine these two problems though. Now we are interested in finding the generating function of type \(B+C\), which is the sum-type of all of those exclusive atomic types. Since this is a sum-type, the number of objects of each type adds up; therefore, the generating function is

\[ \begin{array}{ll} \eta(B+C)&=\sum_{n}(f_n+g_n)x^n = \sum_{n}f_nx^n + \sum_{n}g_nx^n \\ & = F(x)+G(x)=\eta(B)+\eta(C) \end{array} \]

Thus the generating function of a sum-type is the sum of the generating functions, as we expected from an isomorphism, \(\eta\).

It's all good, but we have to add one more restriction to fit into this picture. Let \(A_i\times A_j = A_{i+j}\). Now we can continue with the product-type. The generating function of \(B\times C\) would be the product of the generating functions, as follows.

\[ \begin{array}{ll} \eta(B\times C)&=\sum_{n}(\sum_{i+j=n} f_i\times g_j)x^n = \sum_{i}f_ix^i \times \sum_{j}g_jx^j\\ &= F(x)\times G(x) = \eta(B)\times \eta(C) \end{array} \]

Notice we used \(A_i\times A_j = A_{i+j}\) as assumed before.
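As a quick sanity check (my own sketch, with hypothetical coefficient lists), the sum rule is component-wise addition of coefficients, and the product rule is exactly the Cauchy convolution that polynomial multiplication performs:

```python
def gf_add(f, g):
    """Coefficients of F(x) + G(x): the sum-type counts add up."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def gf_mul(f, g):
    """Coefficients of F(x) * G(x): the product-type counts convolve,
    using the A_i x A_j = A_{i+j} assumption (exponents add)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

f = [1, 2, 1]  # F(x) = 1 + 2x + x^2
g = [0, 3]     # G(x) = 3x

print(gf_add(f, g))  # [1, 5, 1]
print(gf_mul(f, g))  # [0, 3, 6, 3]
```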

Theorem 1

\(\eta\) is an isomorphism for both \(+\) and \(\times\) operators.

Proof

It's just proved above!

Notice the generating functions form an algebraic ring18, which means the \(+\) and \(\times\) operators, as you expect, have a ring structure18, since they are also associative, commutative, and distributive.

But even if we avoid the \(A_i\times A_j = A_{i+j}\) assumption, we would be fine if we modify the generating functions' definition a little bit. This extension will also be the way we construct the ADTs, as promised before.

At this point I want to give you some more hints about how we're going to reason about Quantum mechanics1 by using bra-ket notation for the generating functions. This choice allows us to make the connection to the quantum theories easily. For instance, remember to think about the following sentence while reading the rest of this post: "quantum superposition is a sum-type".

Notice, here we are in Constructivity's framework, so these bra-kets must be constructable, right? Obviously yes! One can build them from matrices, or in special cases like the above from power forms, but as any abstraction would do, we'll try to hide this important fact, even though we are aware of it, and we don't add any abstraction that violates it.

I don't know if I can call them generating functions anymore, since they will no longer be only power series; therefore, we will call them Type Generating Functions (TGF). Hence, the type generating function of the \(f_n\) series would be

\[ |F\rangle = \sum_{n}\oplus f_n|n\rangle \]

Notice, this is not a wavefunction in Quantum mechanics1, or even a vector in a Hilbert space19. In fact, we will show they are different objects even though they are very similar; however, feel free to think of it as an object in the Counting space9 as described before. This is due to the fact that in the Counting space we add up the coefficients to calculate the counting, but in the Hilbert space the sum of coefficients doesn't have any meaning; instead, we have to calculate the squared norm of each coefficient and then add them up. So there's a fundamental difference here.

With this tweak, we can still say the TGF of a sum-type is the sum of TGFs, \(\eta(A+B)=\eta(A)\oplus \eta(B)\); however, the TGF of a product-type will be the tensor product of TGFs, \(\eta(A\times B)=\eta(A)\otimes \eta(B)\). The proofs are the same as before though! We just need to notice the tensor product creates a product-type that does not necessarily follow \(A_i\times A_j = A_{i+j}\); instead it's just \(\eta(A_i\times A_j)=\eta(A_i)\otimes\eta(A_j)\).
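A small pure-Python sketch (my own encoding): represent a TGF over the basis \(|0\rangle, |1\rangle, ...\) as a coefficient list. Sum-types add coefficient lists; product-types take the tensor (Kronecker) product, so the \(A_i\times A_j=A_{i+j}\) assumption is no longer needed.

```python
def tgf_sum(f, g):
    """eta(A + B): coefficients add component-wise."""
    return [a + b for a, b in zip(f, g)]

def tgf_tensor(f, g):
    """eta(A x B): tensor product of coefficients, one slot per pair (i, j)."""
    return [a * b for a in f for b in g]

F = [2, 1]   # |F> = 2|0> (+) 1|1>   (3 objects in total)
G = [0, 3]   # |G> = 3|1>            (3 objects in total)

print(tgf_sum(F, G))     # [2, 4] -- total 6 objects, i.e. 3 + 3
print(tgf_tensor(F, G))  # [0, 6, 0, 3] -- total 9 objects, i.e. 3 x 3
```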

Since \(\eta\) is an isomorphism, we are talking here about an additional correspondence alongside the Curry-Howard correspondence14, once we add one more piece: update functions.

Update function

To build the update functions, we need to define \(\langle n\lceil\) to be able to decompose any TGF. The definition is as usual, keeping the \(|n\rangle\) orthogonal.

\[ \langle n|m\rangle = \langle n\lceil|m\rangle=\delta_{mn} \]

Where \(\delta_{mn}\) is the Kronecker delta20. We will call this definition the orthogonality of the basis. Notice, \(\lceil\) is used above because, as you will see, the \(\langle n\lceil\)s are operators acting to the right.

To bring back the \(A_i\times A_j = A_{i+j}\) assumption whenever we need it, we can define an update function, \(U\), as

\[ U = \sum_i\sum_j\oplus | i+j\rangle\otimes\langle i\lceil\otimes\langle j\lceil = \sum_i\sum_j | i+j\rangle\langle i\lceil\langle j\lceil \]

So whenever we need to apply this assumption to simulate the power series, we just apply this function to the TGF, like this: \(U|F\rangle\). Notice, we replace \(\oplus\) and \(\otimes\) with the ordinary symbols of addition and multiplication whenever there's no confusion.
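A hedged sketch of what \(U\) does, in my own encoding: it collapses the tensor coefficients \(c_{ij}\) of \(|i\rangle\otimes|j\rangle\) onto \(|i+j\rangle\), which is exactly the power-series convolution.

```python
def U(tensor_coeffs, n, m):
    """Apply U = sum_{i,j} |i+j><i|<j| to the coefficients of
    sum_{i,j} c_{ij} |i>(x)|j>, stored row-major with shape n x m.
    Returns the coefficients of the collapsed |i+j> basis."""
    out = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            out[i + j] += tensor_coeffs[i * m + j]
    return out

# Tensor product of |F> = 2|0> + |1> and |G> = 3|1>, then U:
prod = [2 * 0, 2 * 3, 1 * 0, 1 * 3]   # coefficients of |i>(x)|j>
print(U(prod, 2, 2))                  # [0, 6, 3] -- matches (2 + x)(3x) = 6x + 3x^2
```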

Obviously, we can define more flexible update functions in the future. This update function is linear, but in general update functions can be non-linear. One could say the update function is the same concept as a rewrite rule in proof systems, the evolution equation in quantum theories, or the role of the lambda calculus.

Since it's a proof system, as an exercise we can have types for True and False, as we sometimes do in type theory to simulate formal logic. By the way, most of the time we don't really need to simulate formal logic to convey logical meaning in Type theory.

\[ \begin{array}{ll} True = |+\rangle,& False = |-\rangle \end{array} \]

Then an operator like \(not\) would be an update function like

\[ \begin{array}{lll} not&=|+\rangle\otimes\langle-\lceil\ominus|-\rangle\otimes\langle+\lceil&=|+\rangle\langle-\lceil-|-\rangle\langle+\lceil \end{array} \]

A negative coefficient in the Counting space means the number of objects with this type is measured by an observer that will start counting in the future. In other words, a negative number of objects of that type expresses that it's in debt. Other than that, it's a choice we made here. Similarly, an operator like \(and\) would be

\[ \begin{array}{ll} and&=|+\rangle\otimes\langle+\lceil\otimes\langle+\lceil\\ &\ominus|-\rangle\otimes\langle-\lceil\otimes\langle+\lceil\\ &\ominus|-\rangle\otimes\langle+\lceil\otimes\langle-\lceil\\ &\oplus |-\rangle\otimes\langle-\lceil\otimes\langle-\lceil \end{array} \]

The same for the \(or\)

\[ \begin{array}{ll} or&=|+\rangle\otimes\langle+\lceil\otimes\langle+\lceil\\ &\ominus|+\rangle\otimes\langle-\lceil\otimes\langle+\lceil\\ &\ominus|+\rangle\otimes\langle+\lceil\otimes\langle-\lceil\\ &\oplus |-\rangle\otimes\langle-\lceil\otimes\langle-\lceil \end{array} \]

And for the \(xor\)

\[ \begin{array}{ll} xor&=|-\rangle\otimes\langle+\lceil\otimes\langle+\lceil\\ &\ominus|+\rangle\otimes\langle-\lceil\otimes\langle+\lceil\\ &\ominus|+\rangle\otimes\langle+\lceil\otimes\langle-\lceil\\ &\oplus |-\rangle\otimes\langle-\lceil\otimes\langle-\lceil\\ &=(-not)\otimes\langle+\lceil\ominus\left(|+\rangle\langle+\lceil\ominus|-\rangle\langle-\lceil\right)\otimes\langle-\lceil \end{array} \]

Did you see the partial factorization of \(xor\), where \(-not\) pops out of the terms acting through \(\langle+\lceil\)? Amazing!
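A small sketch under my own encoding assumptions: take \(|+\rangle=[1,0]\) and \(|-\rangle=[0,1]\), so the operators above become integer matrices, and the debt convention shows up as negative entries.

```python
# |+> = [1, 0], |-> = [0, 1]; column order of inputs is (+, -).
PLUS, MINUS = [1, 0], [0, 1]

def apply(op, vec):
    """Matrix-vector product: apply an update function to a state."""
    return [sum(r * v for r, v in zip(row, vec)) for row in op]

def kron(u, v):
    """Tensor product of two states; index i*2+j is the pair (i, j)."""
    return [a * b for a in u for b in v]

# not = |+><-| (-) |-><+|  -- rows are the output basis |+>, |->.
NOT = [[0, 1],
       [-1, 0]]

# and = |+><+|<+| (-) |-><-|<+| (-) |-><+|<-| (+) |-><-|<-|
# columns ordered (+,+), (+,-), (-,+), (-,-).
AND = [[1, 0, 0, 0],
       [0, -1, -1, 1]]

print(apply(NOT, PLUS))               # [0, -1], i.e. -|-> : False, in debt
print(apply(AND, kron(PLUS, MINUS)))  # [0, -1], i.e. and(T, F) = False (in debt)
print(apply(AND, kron(PLUS, PLUS)))   # [1, 0],  i.e. and(T, T) = True
```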

Theorem 2

The TGFs, together with the \(\eta\) mapping into the Counting space, build a proof system.

Proof

We need to show \(\eta\) is one-to-one and onto, i.e., bijective. The onto property is clear, since we can write down the TGF for any type. We need to focus on the one-to-one part only.

With the orthogonality property, we can extract the number of objects of a type from a TGF; thus we can construct the inverse function of \(\eta\), \(\eta^{-1}\), and so TGF and Type Theory are successfully mapped.

\[ \eta^{-1}(\sum_i c_i|i\rangle)=\sum_i c_i \begin{cases} \eta^{-1}(|i\rangle), & \langle i|F\rangle \neq 0 \\ 0, & \langle i|F\rangle = 0 \end{cases} \]

and similarly

\[ \eta^{-1}(\prod_i \otimes c_i |i\rangle)=\prod_i c_i \begin{cases} \eta^{-1}(|i\rangle), & \langle i|F\rangle \neq 0 \\ 0, & \langle i|F\rangle = 0 \end{cases} \]

Hence, one can dig into a TGF until reaching the atomic types. Here the proof is complete.

The idea that you can prove using the algebra of TGFs is awesome! It's like Euler's idea of using the generating functions themselves to solve combinatorics problems with algebra, or Laplace's idea of solving differential equations with the Laplace transform, which leads to an algebraic equation.

Here, we are in a position to throw away all the above notation of Type Theory11, and the \(\eta\), and instead use bra-ket notation (the type generating functions) for types!

Given that \(|a\rangle\) and \(|b\rangle\) are types now, the sum-type of the two is \(|a\rangle\oplus|b\rangle\), and their product-type is \(|a\rangle\otimes|b\rangle\). The coefficient of each type is the number of objects of that type in the given problem; thus, this object belongs to the Counting space9.

If you check the post where I defined the Counting space9, you will notice that the definition of dimensions in the Counting space relies on the event-type, which is a type for events. It makes more sense here, after proving this correspondence.

Additionally, the Ring18 properties of being associative, commutative, and distributive are built into the TGFs, the same as their previous version, the generating functions. These enable us to think about them like tree graphs.

Generating functions and tree graphs

If you have ever studied generating functions, you have encountered how good this tool is at counting objects that are structured on a tree graph. Even though TGFs are an extension of the generating functions, they easily support this tree-graph understanding. As a reminder, check this tutorial video, Counting with Calculus: The Magic of Generating Functions, to see how, by using the logical relation of nodes, one can deduce a Recurrence relation21 for the generating function; then, simply by solving an algebraic equation, we can count the objects on each node. So basically, if I give you the logic of the problem in the type system, you can build a mapper from each node to its children to count the number of objects on each node!
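The node-to-children idea can be sketched in code (my own example, with a hypothetical `tree_counts` helper): a binary-tree type satisfying \(T = 1 \oplus node \otimes T \otimes T\) gives the generating-function equation \(T(x)=1+xT(x)^2\), whose coefficients follow a convolution recurrence.

```python
def tree_counts(n_max):
    """Coefficients of T(x) = 1 + x*T(x)^2: the number of binary trees
    with n internal nodes, computed by the convolution recurrence."""
    t = [1]  # t_0 = 1: the empty tree
    for n in range(n_max):
        # coefficient of x^{n+1} in x*T(x)^2 is the degree-n self-convolution
        t.append(sum(t[i] * t[n - i] for i in range(n + 1)))
    return t

print(tree_counts(5))  # [1, 1, 2, 5, 14, 42] -- the Catalan numbers
```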

This also works in reverse, since this is algebra! If I give you a bulk of logical relations, you can factorize its elements and build a tree graph. Just remember, the part that can be factored out is the logic of the parent node.

It's also extremely interesting if you notice we have a language at hand to define abstraction. Since types can be defined by grouping objects by their common properties, if you can factor a TGF out of the bulk TGF of the whole problem/domain, then you have a well-defined abstraction in that domain, which means the factored terms have common properties among all the types in the type generating function of the domain. The factored terms form an abstracted type, and also the parent node in the tree graph we can construct. So basically, if we write down the logic the domain works by using TGFs, and then simplify the terms using factorization, we have developed a theory for that domain.

Simplification and factorization usually go together, and you can connect the dots if you notice that making theories is all about compressing and simplifying the bulk of information we gather through our experiments.

The tree graph is important, since you know you have understood a subject if you can draw its structure as a tree. Just open any book humanity has ever written! You will find the table of contents, or you can generate one, where each item in the list can have more child items. That's the tree graph, since a genuine author wanted to make his/her thoughts understandable to you.

So this is not only how we develop theories; this is how we organize knowledge, by factorizing parent nodes out of the bulk of TGFs in our hand. This is why we think the TGF is powerful enough that we can talk nature with it.

Numbers

As mentioned, a type is a property we assign to objects, but how do we show a specific type is assigned to an object in the TGF framework? It will become clear when we investigate numbers.

In general, numbers are power series of digits, just like generating functions.

\[ \sum_i d_i p^i \]

Where the \(d_i\) are the digits of the number and \(p\) is the base of its representation. However, since in Constructivity2 we must be able to construct all terms of the series, we can only iterate for a finite number of terms.

\[ \sum_{i=a}^b d_i p^i \]

Where \(a\) and \(b\) are two integers. Basically, we have just defined rational numbers, where you only need to replace the base of the number with a variable to get their corresponding generating function. So far so good! Next we want to extend this to the TGF framework.

When you read an analog clock, watch the gauge of a thermometer to measure temperature, or pick a number on a ruler, you count the number of hundreds, then the number of tens, then the number of ones, etc. These are the digits of a number, right? Thus, each of them has a type, and you can put the number of objects with that type as its coefficient, which is a digit. There's also one operation that we call normalization, which carries the extra counting to the left to normalize the number, so we can apply it with an update function as before. Given the following number

\[ |n\rangle=\sum_{i=a}^b\oplus d_i|p\rangle^{i}=\sum_{i=a}^b\oplus d_i\prod_{j=1}^i\otimes|p\rangle \]

Here \(p\) is the base for this number as before.

Be aware that all digits of the number, \(d_i\), are counting numbers, as explained before, since they count the number of ones, tens, hundreds, etc.

The carrying to the left can be constructed using two update functions. The first looks like this

\[ carry_1 = \sum_{i=a}^b\oplus |p\rangle^i\otimes \langle p\lceil^i \ \mathbb{mod}\ p \]

Where the \(\mathbb{mod}\ p\) means that applying it to a number replaces each digit with its remainder modulo \(p\). The second update function is

\[ carry_2 = \frac{1}{p}|p\rangle\otimes(1-carry_1) \]

This takes the difference between the initial digits and their remainders, divides it by the base, and carries the result one step to the left. Then the more normalized \(|n\rangle\) would be

\[ \begin{array}{ll} norm_1|n\rangle &= carry_2 |n\rangle + carry_1 |n\rangle\\ &= (carry_1 + carry_2) |n\rangle\\ &= \left(carry_1 + \frac{|p\rangle}{p}\otimes(1-carry_1)\right) |n\rangle\\ &= \left(carry_1 + \frac{|p\rangle}{p}-\frac{|p\rangle}{p}\otimes carry_1\right) |n\rangle\\ &= \left(\frac{|p\rangle}{p}+\left(1-\frac{|p\rangle}{p}\right)\otimes carry_1\right) |n\rangle\\ \end{array} \]

But doing this once will not normalize \(|n\rangle\) completely, so we need to iterate it a sufficient number of times. Finally, the normalized \(|n\rangle\) can be achieved by applying the following update function.

\[ \begin{array}{ll} norm|n\rangle &= \left(\frac{|p\rangle}{p}+\left(1-\frac{|p\rangle}{p}\right)\otimes carry_1\right)^{b-a} |n\rangle \end{array} \]

It's not the most efficient recipe, since it may produce the normalized \(|n\rangle\) after a few iterations and then do nothing, so not all \(b-a\) iterations are needed, but it's a general recipe.
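The carrying-to-the-left recipe can be sketched concretely (my own helper, hypothetical name `normalize`): each pass keeps the remainder modulo the base in place and moves the quotient one digit to the left, iterated enough times in the worst case.

```python
def normalize(digits, p):
    """Normalize a little-endian digit list in base p by repeated carrying:
    digits[i] is the coefficient of p**i and may start out over-full."""
    d = digits[:] + [0] * len(digits)      # headroom for the carries
    for _ in range(len(d)):                # enough passes in the worst case
        for i in range(len(d) - 1):
            carry, d[i] = divmod(d[i], p)  # remainder stays, quotient carries
            d[i + 1] += carry
    return d

# 27 written with an over-full digit: 17*1 + 1*10
print(normalize([17, 1], 10))  # [7, 2, 0, 0], i.e. 7*1 + 2*10
```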

I don't know if you noticed, but in type theory objects have a type among their properties, including numbers; yet in the above recipe the atomic types are so refined that you can distinguish the objects by their types alone. In other words, numbers are types themselves, so they can hold values only if the atomic abstractions hide a little bit less! That being said, you can conclude that the notion of both objects and their types can be expressed using TGFs!

For instance, take \(u32\), which is an unsigned binary integer with 32 bits. You can think of it as \(2^{32}\) distinct objects with their own types, where \(u32\) is the sum-type of all of them. In other words, a \(u32\) can hold either \(0\), or \(1\), or \(2\), or ... This means you cannot have two numbers in the storage occupied by a \(u32\) at once; there should be only one number there! Thus, you can write \(u32\) as below.

\[ |u32\rangle = \sum_{i=0}^{2^{32}-1}|i\rangle \]

In other words, \(|u32\rangle\) can be one of the \(|i\rangle\)s at a time, which is the definition of a variable. Notice again, each number has its own type now. We explained before that atomic types can be broken down into more refined ones as your knowledge increases; this is an example of that. However, based on what we learned from generating functions, to count all of them we just need to compute their formal power series; for instance, by applying the update function below to replace types with the \(z^i\)s, we can compute a generating function for this problem.

\[ pow=\sum_i\oplus z^i \langle i\lceil \]

Then evaluate it at \(z=1\) to get the total number of integers of type \(|u32\rangle\), which obviously is \(2^{32}\), since all the coefficients in \(|u32\rangle\) are ones, and there are \(2^{32}\) of them.

However, to pinpoint one of these types we need an update function, right? Something like

\[ M_j= |j\rangle \langle j\lceil \]

We already had the hint that "quantum superposition is a sum-type". There's also the idea of measurement in Quantum Mechanics, which has its own problem22, and we will attempt to address it by using this mathematical tool. One of the many goals of this post is to show there's nothing strange about the wavefunction of the quantum theories, so let's start now! In quantum terms, the \(|u32\rangle\) in our computers collapses into a number, for instance it collapses to \(|5\rangle\); then to address the Measurement problem22 we should answer how this happened. The answer clearly lies in the structure of \(M_j\), but let's postpone figuring out its structure for later. The point here is that \(|u32\rangle\) is a totally well-understood, constructable, classical object in your computer that acts exactly like what we call a wavefunction in the quantum world!
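A toy sketch of this measurement-as-update idea (my own example, on a hypothetical tiny "u3" with 8 values so it fits on one line): the variable is the sum-type of all its possible values, and \(M_j\) projects out a single one.

```python
N = 8                    # a "u3": 2**3 possible values
u3 = [1] * N             # |u3> = sum_i |i>, all coefficients 1

def M(j, state):
    """The projector M_j = |j><j|: keep only the j-th coefficient."""
    return [c if i == j else 0 for i, c in enumerate(state)]

print(sum(u3))     # 8 -- the total count, i.e. the generating function at z=1
print(M(5, u3))    # [0, 0, 0, 0, 0, 1, 0, 0] -- "collapsed" to |5>
```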

The measurement happened when we applied \(M_j\) above. And you will see the update functions can be considered rewrite rules in the Cellular Automaton of our reality. Hence, the Cellular Automaton model of reality would update itself all the time using these update functions. It's NOT something magical from the world below, as you expect from a constructive theory!

Let's combine all of these. If we write each digit in this form

\[ d_i = \left(\langle p\lceil^i\right)|n\rangle \]

Then any number can be written with its digits as coefficients, like below.

\[ |n\rangle =\sum_{i=0}\oplus d_i|p\rangle^{i} \]

Nice!

Calibration

We are not done yet with numbers! The next question is: if we find a better instrument for our measurement, how do we combine the results? So imagine you first have a measurement that gives you the following number.

\[ |n\rangle_1=\sum_{i=a}^b\oplus d_i|p\rangle^{i} \]

Then the next measurement, with a more precise instrument that is already calibrated against the first instrument, gives you

\[ |n\rangle_2=\sum_{i=b-c}^e\oplus d_i|p\rangle^{i} \]

So \(e>b\), and \(c\) determines the common terms in both measurements. This means that for the same measurand, the \(d_i\) must be the same on the common terms, since we calibrated the instruments, right? This would be our ladder to a more precise measurement, a ladder like the Cosmic distance ladder23. Beyond checking the compatibility of the two measurements, we don't need the duplicated terms in the second number other than for calibration purposes, so let's write the result of the second measurement like below.

\[ |n \rangle_3 = \sum_{i=b+1}^e\oplus d_i|p\rangle^{i} \]

You can write the final result of both measurements as below.

\[ |n\rangle=|n\rangle_1\oplus |n\rangle_3 \]

The conclusion is that the logic of a more precise measurement is an \(or\) operation, since each digit was measured either with instrument one or with instrument two.

Number theory

Here, I just want to show an interesting example of TGF in the study of numbers, so feel free to jump to the next section if you are not interested.

The example is Euler's famous method connecting the prime numbers to the Riemann zeta function24. The series was not called the Riemann zeta function at the time, but this is what Euler did.

\[ \sum_{n=1}^\infty\frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1-p^{-s}} \]

The intermediate step is like

\[ \sum_{n=1}^\infty\frac{1}{n^s} = \prod_{p \text{ prime}} \sum_{i=0} p^{-is} \]

Can you see the types? You can define a type for a prime number raised to the power \(i\), something like this:

\[ |p^i\rangle \]

Then choosing one of these types out of the sum-type of

\[ \sum_{i=0}\oplus |p^i\rangle \]

For each \(p\), this will be the type of a node in the tree graph of its TGF. To express how many numbers sit on one of the leaves of this tree graph, we multiply their counts on the nodes. It's like pairing up the coordinates of one specific leaf on the tree. Thus we count them using the product-type as follows.

\[ \prod_{p \text{ prime}} \otimes \sum_{i=0}\oplus |p^i\rangle \]

After expanding this expression, the coefficient of each type in this TGF counts the number of numbers we can build from the \(|p^i\rangle\) atomic types, and Euclid proved it will be one! He also proved it gives us all the possible numbers. This all means the expression can be expanded to

\[ \prod_{p \text{ prime}} \otimes \sum_{i=0}\oplus |p^i\rangle = \sum_{n=1} |n\rangle \]
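This counting claim can be checked numerically (my own sketch, truncating the product at a hypothetical bound `N`): expanding \(\prod_p (1 \oplus |p\rangle \oplus |p^2\rangle \oplus ...)\) as formal counts, every integer up to the bound is built in exactly one way from prime powers.

```python
N = 100
primes = [p for p in range(2, N + 1)
          if all(p % q for q in range(2, int(p**0.5) + 1))]

counts = {1: 1}                      # coefficient of |1>: the empty product
for p in primes:
    new = dict(counts)
    for n, c in counts.items():
        pk = p
        while n * pk <= N:           # multiply in |p^k> for k >= 1
            new[n * pk] = new.get(n * pk, 0) + c
            pk *= p
    counts = new

# Unique factorization: every n in 1..N is counted exactly once.
print(all(counts.get(n) == 1 for n in range(1, N + 1)))  # True
```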

Obviously, one can extend this argument to \(|\frac{1}{n^s}\rangle\) in the same way. It may not be much, but the counting and the existence of the tree graph under the hood are so satisfying to me! Another application of the TGF in this domain could be translating the problem of finding the roots of any function into finding a binary tree.

Functions

Here we want to investigate the connection between multivariable functions and TGFs. You probably already guessed that, as long as the TGF holds the structure of the logic, it can be constructed from any kind of orthogonal basis. For instance, you can have an update function like

\[ fourier=\sum_n e^{in\pi z}\langle n\lceil \]

This maps TGFs of all kinds, including numbers, into Fourier series25. We can write the basis of the Fourier series like this:

\[ e^{in\pi z}=|n\rangle_f \]

Hence, the orthogonality property will be satisfied using

\[ \langle n\lceil_f=\frac{1}{a}\int_0^a dz e^{-in\pi z} \]

These expansions are not exclusive to the Fourier series. We previously mentioned the \(pow\) update function that maps TGFs into power series. By power series we are also referring to the Taylor series26. As before, its update function would be

\[ taylor=\sum_n (z-a)^{n}\langle n\lceil=\sum_n |n\rangle_t\langle n\lceil \]

Where clearly the \(|n\rangle_t\) are the basis for the Taylor series. Thus, the orthogonality property is the following.

\[ \langle n\lceil_t=\frac{1}{n!}\left.\frac{d^n}{dz^n}\right\vert_{z=a} \]

This means you need to compute the nth derivative of the function at the point \(a\). In addition to these series, the Laurent series27 is also invited to this party! The update function for the Laurent series would be

\[ laurent= \sum_n (z-a)^{n}\langle n\lceil=\sum_n |n\rangle_l\langle n\lceil \]

The same as before, the \(|n\rangle_l\) are the basis for the Laurent series, where the orthogonality property can be achieved by using

\[ \langle n\lceil_l=\frac{1}{2\pi i}\oint \frac{dz}{(z-a)^{n+1}} \]
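To see the contour projection in action, here's a numerical sketch (my own check, with a made-up function): sample the unit circle around \(a=0\) and the sum recovers each Laurent coefficient.

```python
import cmath

# <n|_l f = (1/2πi) ∮ f(z)/(z-a)^(n+1) dz, discretized on a circle of N points.
def laurent_coeff(f, n, a=0.0, N=4096, radius=1.0):
    total = 0j
    for k in range(N):
        theta = 2 * cmath.pi * k / N
        z = a + radius * cmath.exp(1j * theta)
        dz = radius * 1j * cmath.exp(1j * theta) * (2 * cmath.pi / N)
        total += f(z) / (z - a) ** (n + 1) * dz
    return total / (2j * cmath.pi)

# a made-up Laurent polynomial: c_{-1} = 1, c_0 = 2, c_1 = 3
f = lambda z: 1 / z + 2 + 3 * z
```

Equally spaced samples on the circle make this projection exact (up to roundoff) for Laurent polynomials of low degree.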

Needless to say, these maps are onto and bijective. Someone can find the reverse update function for these maps, since each of these series is designed to provide unique coefficients.

This is not restricted to Power series and Fourier series; it works on the entire Hilbert space.

Therefore, TGFs are one language to talk about numbers, functions, type systems, logic, and proofs, which makes them a powerful framework.

Entropy

Just by looking at the patterns, we can conclude that someone can compute the entropy of a TGF by following how the Von Neumann entropy28 works. Given the following TGF.

\[ |F\rangle = \sum_n a_n|n\rangle \]

Someone can construct its density matrix like this.

\[ \rho = \sum_n |n\rangle a_n\langle n \lceil \]

Where its entropy would be

\[ S=-tr(\rho \ln \rho) \]

Where \(tr\) denotes the trace of the input matrix. Notice, we can construct the density matrix using an update function like the one below.

\[ \begin{array}{ll} density &=\sum_n (|n\rangle \langle n \lceil)\otimes \langle n\lceil=\sum_n |n\rangle \otimes\langle n \lceil\otimes \langle n\lceil\\ density|F\rangle &= \rho \end{array} \]

And the reverse update function like

\[ \begin{array}{ll} density^{-1}&=\sum_n |n\rangle\\ \rho\ density^{-1}&=|F\rangle \end{array} \]

Where it acts from the right. By the way, notice that even update functions can be residents of the Counting space. Thus, the relationship between the density matrix and its TGF is onto and bijective.
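A minimal sketch of the round trip above (my own illustration, assuming the coefficients \(a_n\) are normalized probabilities so the Von Neumann entropy reduces to \(-\sum_n a_n \ln a_n\)): the diagonal density matrix keeps all the information of \(|F\rangle\), so the map is reversible here.

```python
from math import log

# |F> = Σ_n a_n |n>, with made-up normalized coefficients a_n
coeffs = {0: 0.5, 1: 0.25, 2: 0.25}

# density|F>: a diagonal ρ = Σ_n a_n |n><n|, stored as its diagonal
rho = dict(coeffs)

# S = -tr(ρ ln ρ) for a diagonal ρ is just -Σ a_n ln a_n
entropy = -sum(a * log(a) for a in rho.values() if a > 0)

# ρ · density⁻¹: reading the diagonal back gives |F> again (bijective here)
recovered = dict(rho)
```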

Evolution

It's kinda obvious the update function can give us Evolution: someone can define an update function related to the environment of each species, so applying that update function moves the TGF of a kind to its next generation. Iterating an update function would then be the definition of evolution. These update functions even have mathematical fixed points, where different species fall into them, which is the subject of Convergent evolution29, or sometimes these TGFs orbit around the Attractor30 of these update functions.
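A hedged sketch of the fixed-point idea (the environment matrix here is entirely made up): iterating a linear update function drives any starting distribution of traits to the same fixed point, which echoes how different starting species can converge.

```python
# A made-up environment update U (columns sum to 1, so totals are conserved).
U = [[0.9, 0.2],
     [0.1, 0.8]]

def update(state):
    # one generation: apply the update function to the trait distribution
    return [sum(U[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]          # start entirely in trait 0
for _ in range(200):
    state = update(state)
# the fixed point solves U v = v; for this U it is (2/3, 1/3)
```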

The main issue I can see these TGFs solving is people's underestimation of the probability of occurrence of special events, since the claim is that evolution is based on the randomness of events. The problem is that people tend to calculate the probability of events in a flat structure, while events have assembly lines and a tree-graph-like structure, which means some events depend on the occurrence of other events. The Assembly theory31 studied these tree graphs at the molecular level, but this strategy can be applied to all kinds of events, which makes TGFs very relevant for these studies.

Quantum Theory

By Quantum theory we are referring to Quantum Mechanics, QED, and QFT. As mentioned, Quantum Mechanics as a social phenomenon is very interesting, but if you are interested in the real underlying reality, you can only find some interesting tools in it, like the wavefunction. The wavefunction has properties like Quantum superposition32 and Quantum entanglement33, which by now you should understand in terms of TGF. Yes! Quantum superposition is equivalent to sum-types and the \(or\) operator, or \(union\) operator, since it means one of the states in a superposition will be observed. Quantum entanglement is equivalent to product-types and the \(and\) operator, or \(pairing\) operator, since it declares that multiple spatially separated measurements pick a correlated state, right? Not so fast though! As mentioned before, the wavefunction is not an object in the Counting space. Given a wavefunction like

\[ |\phi\rangle = \sum_n \phi_n|n\rangle \]

Someone can construct their density matrix like

\[ \rho = \sum_n |n\rangle \phi_n\langle n | \]

Which means

\[ \rho'=\rho^\dagger\rho =\sum_n |n\rangle {|\phi_n|}^2 \langle n | \]

Thus, the \(\rho'\) itself is the density matrix inside the Counting space. Notice, we already gave a similar update function above to jump from the wavefunction to the density matrix and back. However, here you cannot construct \(\rho\) out of \(\rho'\), which means we can turn wavefunctions into objects inside TGF, but we cannot do it in reverse. Also be aware that the bases of both objects are the same, and the only important difference is the type of the coefficients. Disregarding the coefficients, Quantum superposition and Quantum entanglement are \(or\) and \(and\) operators, since their bases reside inside the Counting space after all.
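The irreversibility is easy to see numerically (a sketch with made-up coefficients): two wavefunctions that differ only in the phases of their coefficients map to exactly the same \(\rho'\), so no function can take \(\rho'\) back to a unique \(\rho\).

```python
import cmath

# Two wavefunctions with the same magnitudes but different phases.
phi_a = [0.6, 0.8j]
phi_b = [0.6 * cmath.exp(1j), -0.8]

# ρ' = Σ_n |φ_n|² |n><n| keeps only the magnitudes, stored as the diagonal.
rho_a = [abs(c) ** 2 for c in phi_a]
rho_b = [abs(c) ** 2 for c in phi_b]
# rho_a == rho_b even though phi_a != phi_b: the phase information is gone
```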

Transformation

The Mie scattering, or Mie theorem, is an underrated wave transformation. This is another case where so-called classical waves transform exactly like quantum wavefunctions do. It's also a very powerful mathematical tool to explain a lot of experimental scattering cross sections. In summary, the Mie theorem determines the coefficients of the transformation from an incident plane wave to spherical waves. Here, we just want to express this transformation in TGF as a very good example of how these transformations work. Let \(|plane, \omega\rangle\) be the basis for the incident plane wave, and \(|spherical, \omega, n\rangle\) be the basis for the spherical ones. Then we have

\[ |plane, \omega\rangle = \sum_n c_{\omega, n}|spherical, \omega, n\rangle \]

Where to obtain these coefficients we use the orthogonality of the basis of the spherical waves, of course!

\[ c_{\omega, n}= norm\langle spherical, \omega, n\lceil |plane, \omega\rangle \]

In the Mie theorem this projection is an integral, since these bases live in the Hilbert space. However, the basis of a TGF in general is not always in the Hilbert space; sometimes it is the basis of a number, or the basis of a Taylor series. Since TGFs are linear, transformations are always associated with rewriting the old basis in terms of the new basis and then expanding.

The Mie theorem is one example, which is well tested experimentally. For more examples, you can think of Euler's method of expanding the trigonometric functions with the Taylor series, which resulted in solving the Basel problem34. Or you can think of expressing the basis of a function in terms of numbers, which is their expansion into the basis of numbers, and results in the evaluation of the function at one number. However, you cannot reverse this transformation from a function to a number, since by rewriting the basis of a function into numbers you lose at least one degree of freedom of the TGF.
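A tiny sketch of that last point (my own illustration, with coefficients picked just for the example): evaluating at one number rewrites the Taylor basis into the basis of numbers, and distinct coefficient lists collapse to the same value, so the transformation cannot be reversed.

```python
# Evaluating Σ c_k z^k at a single z: a transformation from the Taylor basis
# into the basis of numbers, which drops degrees of freedom.
def evaluate(coeffs, z):
    return sum(c * z ** k for k, c in enumerate(coeffs))

f = [1, 2, 3]     # 1 + 2z + 3z²
g = [6, 0, 0]     # a different TGF
value_f = evaluate(f, 1)
value_g = evaluate(g, 1)
# both evaluate to 6 at z = 1, so the value alone cannot tell f and g apart
```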

Theorem 3

A deterministic transformation of a TGF is only possible if the number of degrees of freedom of a TGF stays constant or decreases.

Proof

Here we just need to exclude the case in which the degree of freedom increases. A transformation that increases the degree of freedom of a TGF means the output of a deterministic function must have changed even though the inputs stayed the same, which contradicts determinism. Proved.

After this we only talk about deterministic transformations, so there's no need to repeat the "deterministic" adjective.

Theorem 4

A transformation of a TGF is reversible if and only if the transformation maintains a constant number of degrees of freedom.

Proof

No need to repeat what's in Theorem 3. Proved.

That being said, an evaluation of a function is a transformation in the TGF framework.

Multipole expansion

The Multipole expansion is a very useful starting point to show how thinking in TGFs is much more useful than treating everything as if it is always in the Hilbert space. It will also come in very handy when we think about the Stern–Gerlach experiment35 in future posts.

The ordinary definition of this expansion is like the following.

\[ V_{2^l-pole}(R,\Theta,\Phi)=\frac{1}{4\pi\epsilon_0}\sum_{m=-l}^{+l}M_{l,m}\sqrt{\frac{4\pi}{2l+1}}\frac{Y_{l,m}(\Theta, \Phi)}{R^{l+1}} \]

Where \(V_{2^l-pole}\) is the \(2^l\)–pole potential, and \(M_{l,m}\) is the spherical multipole moment, which can be calculated using

\[ M_{l,m}=\sqrt{\frac{4\pi}{2l+1}}\int\int\int d^3\text{Vol}(r,\theta, \phi) r^l\times Y_{l,m}^*(\theta, \phi)\times \rho(r,\theta, \phi) \]
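For discrete point charges the integral becomes a sum, and the lowest moments come out in a familiar form. A minimal sketch (my own, assuming \(Y_{0,0}=1/\sqrt{4\pi}\) and \(Y_{1,0}=\sqrt{3/(4\pi)}\cos\theta\), so the prefactor \(\sqrt{4\pi/(2l+1)}\) cancels the normalization): \(M_{0,0}\) is the total charge and \(M_{1,0}\) is the dipole z-moment. The charge list is made up.

```python
from math import cos, pi

# M_{l,0} for point charges given as (q, r, θ), keeping only l = 0 and l = 1.
def M(l, charges):
    if l == 0:
        return sum(q for q, r, th in charges)          # total charge
    if l == 1:
        return sum(q * r * cos(th) for q, r, th in charges)  # dipole z-moment
    raise NotImplementedError("only l = 0, 1 in this sketch")

# ±1 charges at z = ±1: a pure dipole, so M_00 = 0 and M_10 = 2
charges = [(+1.0, 1.0, 0.0), (-1.0, 1.0, pi)]
```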

This looks good until you notice \(M_{l,m}\) is designed to calculate the potential, which is not exactly talking about the charge distribution. It would be more useful if we could expand the charge distribution itself using coefficients like \(M_{l,m}\). It's not very out of reach, since the integration and \(Y_{l,m}^*(\theta, \phi)\) belong to the Hilbert space, so we can start to think that \(M_{l,m}\) is indeed the coefficient of an expansion, until we check the radial coordinate, \(r^l\)! In fact, \(r^l\) is the core idea of multipoles, yet it's not a basis in the Hilbert space.

Here, the TGF comes to the rescue, since the \(r^l\) are the basis of the Laurent series27. Thus we can expand the charge distribution using a TGF, which we can call the TGF of multipoles.

\[ M_{n,l,m}=\sqrt{\frac{4\pi}{2l+1}} \frac{1}{2\pi i}\oint dr r^{n} \int\int d^2\Omega(r,\theta, \phi) Y_{l,m}^*(\theta, \phi)\times \rho(r,\theta, \phi) \]

Notice, the integration on \(r\) is happening on the complex plane, but we are okay with complex numbers, since the \(Y_{l,m}(\theta,\phi)\) are also complex. And finally, the charge distribution expands as follows.

\[ \rho(R,\Theta,\Phi)=\sum_{n=-\infty}^\infty\sum_{l=0}\sum_{m=-l}^{+l}\sqrt{\frac{4\pi}{2l+1}}\frac{Y_{l,m}(\Theta, \Phi)}{R^{n+1}} \times M_{n,l,m} \]

This expansion cannot fit inside the Hilbert space, but it resides inside the Counting space.

Classical wavefunctions

One of the goals here is to show there's nothing strange about reality, especially the quantum world. Selling strangeness belongs to movie theaters, not science! If this is true, we should be able to have a classical wave with quantum properties. For this session let's forget everything about TGF, and focus on addressing one specific claim that is taught in all Quantum Mechanics classes. Even so, you can obviously track the following argument back to TGF to understand it better. The strangest property of a quantum system, as it's taught, is the collapse of the wavefunction and the related measurement problem. It famously is the spooky action at a distance that Einstein warned us about. However, it's easier to start from Quantum superposition, and specifically Quantum entanglement. The claim is that they are quantum in nature and we cannot build them for classical objects! Hold my beer!

Given a string on a guitar, which we all know is a classical object, we can have standing waves on it. Since the string has boundaries, it resonates with certain orthogonal modes. This orthogonality is a general property of the solutions of Laplace's equation36 with boundaries. Since this equation is linear, its solutions provably can be written as a sum of some orthogonal functions, which can be used to build the Hilbert space. These are all well studied.

Laplace's equation is the base for all the classical wave equations. Even the angular part of the Hydrogen atom's solution of the Schrödinger equation37 is a solution of Laplace's equation. So they are everywhere. This means what we explain here can be generalized to any system that follows the Laplace equation.

The Laplace equation is linear, so we can add or remove solutions independently. Being able to write classical waves as a sum of some orthogonal modes is exactly the same as Quantum superposition. However, a few moments after the wave starts running on the string, the modes with higher frequency will die, since they pull and push the boundaries more than modes with lower frequency. This results in transferring more energy to the boundaries and losing energy faster. The pulling and pushing argument is based on the fact that if you increase the frequency of the modes of solutions of the Laplace equation, provably the slope of the wave reaching the boundaries will be steeper. The end result is that the wave collapses to its lowest frequency mode. If it were taught in a quantum class it would be the measurement problem, but here it's a classical collapse that you can verify, because the note you hear from a guitar is related to the mode with the lowest frequency. Not to mention that no Pauli exclusion principle8 is used to get this result. Anyway! There's nothing strange here!
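The collapse argument can be sketched numerically (my own toy model: the damping rate \(\gamma n^2\), growing with frequency, is an assumption standing in for the boundary-loss argument above, not derived from it).

```python
from math import exp

# Three modes start with equal amplitude; higher modes are damped faster.
amplitudes = {1: 1.0, 2: 1.0, 3: 1.0}
gamma, t = 0.5, 10.0   # made-up damping constant and elapsed time

# amplitude of mode n after time t under frequency-dependent damping γ n²
after = {n: a * exp(-gamma * n ** 2 * t) for n, a in amplitudes.items()}
dominant = max(after, key=after.get)
# the lowest mode dominates by orders of magnitude: the classical "collapse"
```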

So we have dealt with Quantum superposition and the collapse of the wavefunction so far. In fact, in a lot of cases Quantum superposition is not considered the strange part of Quantum Mechanics, but Quantum entanglement is almost always considered an unimaginable quantum effect. Watch this!

The original wave of the string on the guitar above, which is a sum of all modes, is like the real part of the function below.

\[ F(x)=\sum_n f_n e^{i 2\pi n x /L} \]

Where \(f_n\) are the coefficients for each mode, and \(L\) is the length of the string defined by the boundaries. But as mentioned, it will collapse to the lowest frequency modes very fast.

\[ F(x)=f_0 + f_1 e^{i 2\pi x /L} \]

Let's make it easy by changing the origin of the reference frame to remove \(f_0\). Also, let's write it in bra-ket notation, so the generalization will be very smooth.

\[ |F\rangle= |1\rangle \]

Now it's time to measure. Let your finger land on a point on the string while the wave is already vibrating. That finger will be a measurement device. The string gradually stops vibrating under your finger by transferring energy, and momentum, from the wave to your finger. Finally, the string will stop moving under your finger, which makes that point part of the boundary of the new wave. Since the boundary conditions have changed, we have new solutions, which we denote by \(|n\rangle'\); however, they also collapse to the lowest frequency mode in the new boundaries. Thus, we have the following.

\[ |F\rangle'=\sum_n |n\rangle'\langle n\lceil'|1\rangle \]

Which will collapse to its lowest frequency mode.

\[ |F\rangle'=|1\rangle'\langle 1\lceil'|1\rangle \]

All good in our thought experiment, except that for entanglement we need to show a correlation between two measurements. So let's start over, but instead of putting one or two fingers, we take two springs, attach hooks on one side and weights on the other side, so at the moment of the measurement we just hang them from the former string, which we want to measure, in a way that forces the string to stop moving under the hooks. In this way each of the measurement springs has its own solutions to the Laplace equation, its own modes, and its own energy levels to absorb energy from the former string. Let's name these modes using \(|n\rangle_1\) for the first measurement spring, and \(|n\rangle_2\) for the second measurement spring.

Here we can only judge the former string, \(|F\rangle\), by what we measure with \(|n\rangle_1\) and \(|n\rangle_2\), so in our view this measurement is a pair of modes on the two measurement springs. However, the fun part of Quantum entanglement is that it maintains a conservation, such as of energy, momentum, or spin. Here we can design the experiment so that the energy required to excite the lowest frequency mode of each measurement spring is the same, and also the same as the energy of the lowest frequency mode of the former string that is lost in the measurement. In this way, after hooking the measurement springs, one of them could get excited. The other measurement spring must not get excited for that tuned mode. The lowest frequency mode of the former string that is lost is \(|2\rangle"\), where the \(|n\rangle"\) are the solutions for the former string after the new boundary conditions. Thus the measurement can be described as the following.

\[ |1\rangle_1\otimes|0\rangle_2 \oplus|0\rangle_1\otimes|1\rangle_2 \]

Where \(|n\rangle_1\) and \(|n\rangle_2\) are the modes of the first and second measurement springs, respectively.

Here you must have noticed that the modes we used to write the entanglement are the modes of the measurement devices, not the former string that was the subject of the measurement. In fact, we don't have any specific details about the wave on the former string, except the energy of one of its lost modes. Nonetheless, the excitations of the measurement springs are correlated.

What has been mentioned in the last paragraph is true of all entangled measurements of Quantum theories. Again! Nothing is strange about quantum waves. In fact, if you make the above thought experiment a little bit more complicated, I bet you can reproduce the result of Bell's theorem38 with the springs.

Conclusion

The Type mechanics is the next iteration in understanding the computation of Quantum theories.

To reach this conclusion we developed a very powerful mathematical tool, named Type generating functions, TGFs for short, that can be used to work with all kinds of waves, functions, numbers, etc.

The study of TGFs shows the Quantum superposition is a sum-type, and the Quantum entanglement is a product-type.

This explains why it works even though the most popular interpretations are not constructive. The conclusion is that there's nothing special about Quantum theory that classical Physics cannot explain.


Acknowledgment

Here let's take care of some people's question of why I think I can develop a theoretical framework outside of academia. I am not interested in these questions, but I am open to any question, so let's address them! I already explained in some other posts that the brain is a deterministic machine. Being deterministic means if you give it the same input again and again, the output will always be the same. The brain is also chaotic, in the Chaos Theory39 sense. Being chaotic means if you change the inputs a little bit, the output will change unpredictably, but the output still only depends on its inputs, which is the definition of being deterministic. I found out some philosophers are defining a deterministic system with some nonsense buzzwords to justify that what we have is not deterministic, but if you just follow the simple input/output definition above, then the brain is deterministic, and chaotic, as is the whole world we are living in. Therefore, all of the brain's randomness depends on some input at some point of your life, which we call seeds, the same as the seed we use when we discuss so-called Pseudorandomness systems40. I don't want to argue here that all the randomness you can ever get is what they call Pseudorandomness, so it's not really "pseudo", but let that be a fight for another day!

Anyway! Those seeds are coming from your life. Academia is just one profession a human can have, which is mostly teaching, which also shows why in the last century science has been falling into bureaucratic processes, such as peer-review! Teachers teach their way, obviously! Being an engineer is another profession, which I chose, so the input seeds of my brain are in a totally different category, thus the output will be different.

All top mathematicians, such as Euler and Gauss, were taking tasks from ordinary people who had ordinary problems, even though they were not engineers. Graph Theory was famously created by Euler because of these day-to-day questions. Gauss developed the normal distribution while he was working on a mapping project. Galilei funded his projects by selling telescopes.

That being said, the attack surface of problems is humongous, so a brain with different randomized inputs/seeds can generate different, and maybe better, guesses. And everyone knows that guessing is an important step in the Scientific Method41. Funnily, I just noticed that in Wikipedia they refer to guessing as "Form an explanatory hypothesis"! Buzzwords! After all these explanations, I hope you got closer to why I think I can solve big Physics and Mathematics problems where very clever people in academia cannot!

This also lets me take the risk of my theory being wrong, instead of pushing the risk, and its cost, onto other people in society by asking for funds from governments and institutions. Taking risk is the lost part of making theories, which I think whoever is serious about their research should do. If you ask me, the reason that nobody is accountable for scientific theories, which means nobody took the risk, is that we created an abstraction in society which replaces the real risk and pressure of living in nature with all kinds of social pressures. For instance, if the relativities are wrong we know Einstein is accountable, even though he's not alive now! But if quantum theories are wrong, no individual developed all the underlying principles to be accountable for their inconsistencies. It's not a problem with scientists! It's a problem of how social pressures pushed people to do so. And guess what! Some people love to ignore the pain of their choices by avoiding any accountability, so here we are!

With this mindset, I am publishing without bureaucratic processes, such as peer-review. If you read it and enjoyed it, spread the word and refer to me. This would be a better process than peer-review, since it's based on a concept I call the Network of Trust. This is how science worked before 1970, and even after that I give the already existing Network of Trust the credit for any progress in science, not any bureaucracy, such as peer-review!

References


Cite

If you found this work useful, please consider citing:

@misc{hadilq2025TypeMechanics,
    author = {{Hadi Lashkari Ghouchani}},
    note = {Published electronically at \url{https://hadilq.com/posts/type-mechanics/}},
    gitlab = {Gitlab source at \href{https://gitlab.com/hadilq/hadilq.gitlab.io/-/blob/main/content/posts/2025-12-26-type-mechanics/index.md}},
    title = {Type Mechanics},
    year={2025},
}