Understanding Quantum Mechanics
“I think I can safely say that nobody really understands quantum mechanics,” observed the physicist and Nobel laureate Richard Feynman. But was he only talking about the Copenhagen interpretation?
No! Before giving a complete answer, let's first review how we learned about quantum experiments. We started with experiments whose behaviour we couldn't predict. Then we created a very clever calculation that could predict those experiments, but we don't understand the calculation itself. There have been multiple attempts to make sense of it, and we call them interpretations. Generally, we admire this method of building a theory, and it looks like the way to go for future theories too. But we have repeated the statement that quantum mechanics is not understandable so often that it has become a branding slogan for the theory. And you know how cognitive biases can affect our decisions: nowadays, everyone "knows" that quantum is not understandable. Let's stop repeating that statement and instead focus on understanding it, to complete what Einstein, Planck, Bohr, Heisenberg, Schrödinger, Feynman, and the other quantum legends started.
This article is written with the assumption that the reader already knows quantum mechanics, but to keep the discussion self-contained we have to repeat some obvious details. Also, this is a decade-old idea, so apologies if the author's knowledge is rusty. If you spot any problem, please raise it on Twitter; it's a privilege when someone finds your mistake. As a confession, I'm writing this to record that I once understood quantum mechanics thoroughly, so if you want to make sense of it, please reach out.
Okay, let's start. In fact, there are some problems in quantum mechanics that persist in all currently existing interpretations and stop us from understanding our calculation! Here, we will develop a new interpretation that does not touch the basic calculation, so it is fully compatible with it, but it may add some corner cases that can be subjected to experiment. We'll also see how this interpretation could lead us toward a theory of quantum gravity; the author plans to write about his ideas in that area in detail in future articles.
Now that you know the plan includes a version of quantum gravity, let us make clear what we mean here by gravity. Gravity in General Relativity has two parts. The first part is the Equivalence Principle, which leads to Riemannian geometry; the second part is the Einstein field equations. Here we will ignore the second part, but we will accept the first: space-time can have curvature even over short distances, without the need for any huge mass. This simply means we believe the Einstein field equations have a modified version yet to be discovered, but we don't need it here.
On the quantum mechanics side, we'll stick to the Uncertainty principle to the end; anything else can change in this interpretation. It states that the standard deviation of measured position, \( \sigma_x \), and the standard deviation of measured momentum, \( \sigma_p \), satisfy the following relation.
\[ \sigma_{x}\sigma_{p} \geq \frac{\hbar}{2} \]
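As a quick sanity check, this bound can be probed numerically. The sketch below is a minimal illustration, assuming natural units (\( \hbar = 1 \)) and an arbitrary Gaussian width: it builds a Gaussian wave packet on a grid, computes \( \sigma_x \) directly and \( \sigma_p \) from the Fourier transform, and confirms their product sits at the lower bound \( \hbar/2 \).

```python
import numpy as np

hbar = 1.0                         # natural units (assumption)
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
sigma = 2.0                        # chosen width of the Gaussian packet

# normalized Gaussian wave packet on the grid
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# position spread straight from |psi|^2
prob_x = np.abs(psi)**2 * dx
sigma_x = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x)**2)

# momentum spread from the Fourier transform, with p = hbar * k
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= prob_p.sum()
sigma_p = np.sqrt(np.sum(prob_p * (hbar * k)**2) - np.sum(prob_p * hbar * k)**2)

print(sigma_x * sigma_p)  # about hbar/2 = 0.5: the Gaussian saturates the bound
```

A Gaussian packet saturates the inequality; any other shape would give a strictly larger product.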
Let's continue with quantum mechanics. What are the problems that stopped us from understanding it? Let's first stick to the Copenhagen interpretation while describing them, as it is the original and still the most widely accepted one. Generally, these problems were not solved in the other interpretations either; where they claimed to solve some of them, they raised other problems instead.
Wave–particle duality
Wave–particle duality, according to Wikipedia, is
Wave–particle duality is the concept in quantum mechanics that every particle or quantum entity may be described as either a particle or a wave.
which is a simple paradox, because anything that is a particle cannot be a wave, and vice versa! Note that we do all of our calculations with waves, until we measure the wave; then we see particles.
Quantum tunnelling
Quantum tunnelling allows a particle to pass through a barrier that it classically could not cross. However, if we accept that particles are waves, understanding this one is pretty straightforward.
Measurement problem
According to Wikipedia's Measurement problem article,
In quantum mechanics, the measurement problem considers how, or whether, wave function collapse occurs.
This one is tough. As mentioned, quantum tunnelling is not a big deal when we deal only with waves, and the particle side of wave-particle duality only appears when we measure the waves and see particles; otherwise everything is some kind of wave. Notice that waves are fully compatible with our assumption, the Uncertainty principle, but particles are not. So if we could understand why we see particles when we measure waves, all the previously mentioned problems would be solved, and we really could understand quantum mechanics. Here we therefore focus on this problem and assume we only have irreducible waves in small structures.
It is worth mentioning that other interpretations, such as the Many-worlds interpretation, have tried to tackle quantum mechanics' reputation for being non-understandable by solving the measurement problem before. They were so successful at it that they inspired the present idea, but many-worlds cannot provide any measurement of the existence of the many worlds. We don't want that!
Additionally, there are some experiments that look non-understandable, but when you look carefully, there is nothing strange about them! For instance, the Quantum eraser experiment looks mysterious until you read The Notorious Delayed-Choice Quantum Eraser, which makes it entirely understandable. Another example is the Elitzur–Vaidman bomb tester, where nothing surprising remains once you notice the bomb is a kind of detector!
Fraunhofer lines
To solve these problems and understand quantum mechanics, we have to go back into history and walk through the experiments until the idea becomes clear. Here is the start. The Fraunhofer lines are a set of spectral absorption lines in sunlight.
These lines correspond to the absorption spectra of atoms and molecules in the path of the sun's white light from emission to detection. Notice that the emission and absorption lines of the same atoms and molecules coincide. For instance, the following are the emission lines of Hydrogen.
These lines are some of the missing wavelengths in sunlight. In the case of Hydrogen, they can be calculated by the Rydberg formula.
This formula led us in the direction of developing the Schrödinger equation. But are these lines really one-dimensional lines? Of course not, as one can see in the picture of the Fraunhofer lines: they look mostly like peaks on a graph.
What the author sees in this picture is a set of resonances. Can you see them? The aim of this article is to make that clear for others too. But quantum mechanics is far from resonances, right? In fact, it's very likely we're not the first to think about this. We can guess a lot of brilliant minds came to this idea before, but rejected it because of a simple fact: the lowest energy of the Rydberg formula corresponds to a wavelength of 91.13 nm, which is much bigger than the Hydrogen atom's diameter of about 0.1 nm, so that structure cannot emit or absorb that wavelength in the sense of a resonance, right?
Here we want to build a bridge between Quantum Mechanics and General Relativity, so a redshift in light's wavelength as photons climb out of the curved space-time of a Hydrogen atom is not something strange! In fact, we should be thankful for anything that acts like a lens to stretch out what's going on inside atoms.
As you can read in my previous post, The Science, because I am an amateur theoretical physicist I have nothing to lose from such mistakes, which lets me look at problems with an open mind. Therefore, we move forward with this idea, even though at first glance it clearly looks like a mistake!
There is more to our assumption, but unfortunately our integration with GR ends here; we just needed to fill the gap above. Further use of curved space-time at small scales will need another article.
Resonance
According to Wikipedia,
Resonance describes the phenomenon of increased amplitude that occurs when the frequency of a periodically applied force (or a Fourier component of it) is equal or close to a natural frequency of the system on which it acts.
To describe resonance, textbooks start from a second-order equation, Newton's second law for a driven, damped oscillator of unit mass:
\[ \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = F_0 \sin(\omega t)-kx-c\frac{\mathrm{d}x}{\mathrm{d}t} \]
By solving it, we can find an amplitude like
\[ G(\omega) = \frac{\omega_0^2}{\sqrt{\left(2\omega\omega_0\zeta\right)^2 + (\omega_0^2 - \omega^2)^2}} \]
which, plotted against \( \omega \), shows a single peak close to the natural frequency \( \omega_0 \).
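To see this peak concretely, here is a minimal numerical sketch; \( \omega_0 \) and \( \zeta \) are arbitrary illustrative values, not tied to any physical system.

```python
import numpy as np

omega0 = 2.0   # natural frequency (illustrative value)
zeta = 0.05    # damping ratio (illustrative value)

def G(omega):
    # frequency-dependent amplitude of the driven, damped oscillator above
    return omega0**2 / np.sqrt((2 * omega * omega0 * zeta)**2
                               + (omega0**2 - omega**2)**2)

omega = np.linspace(0.1, 4.0, 4000)
peak = omega[np.argmax(G(omega))]
print(peak)  # slightly below omega0: damping shifts the peak a little
```

Note that the maximum lands slightly below \( \omega_0 \) because of the damping, a detail that will matter later.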
But this is a very special case of resonance. What about the general case? Given the natural frequencies of a system, how can we find these amplitudes? And what are the natural frequencies of a system, and how can we find them in the first place?
Resonance in Classical Mechanics
Let's take a chain of classical particles with the following equations of motion.
\[ [m] \frac{\mathrm{d}^2\vec{x}}{\mathrm{d}t^2} + [c]\frac{\mathrm{d}\vec{x}}{\mathrm{d}t} + [k] \vec{x} = \vec{F} \]
where \( [m] \), \( [c] \), and \( [k] \) are called the mass, damping, and stiffness matrices, and \( \vec{x} \) and \( \vec{F} \) are the vector of positions and the vector of external forces, respectively. They are given by
\[ [m] = \begin{bmatrix} m_1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & m_2 & 0 & \cdots & 0 & 0 \\ 0 & 0 & m_3 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & m_n \end{bmatrix} \]
and
\[ [c] = \begin{bmatrix} c_1+c_2 & -c_2 & 0 & \cdots & 0 & 0 \\ -c_2 & c_2+c_3 & -c_3 & \cdots & 0 & 0 \\ 0 & -c_3 & c_3+c_4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -c_n & c_n+c_{n+1} \end{bmatrix} \]
and finally
\[ [k] = \begin{bmatrix} k_1+k_2 & -k_2 & 0 & \cdots & 0 & 0 \\ -k_2 & k_2+k_3 & -k_3 & \cdots & 0 & 0 \\ 0 & -k_3 & k_3+k_4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -k_n & k_n+k_{n+1} \end{bmatrix} \]
These equations have a simple solution if we assume the particles undergo simple harmonic motion, as do the external forces, i.e. \( \vec{x}=\hat{x}e^{i\omega t} \) and \( \vec{F}=\hat{F}e^{i\omega t} \); then we have
\[ ([m]^{-1}[k]+i\omega [m]^{-1}[c]-\omega^2)\hat{x}=[m]^{-1}\hat{F} \]
Therefore
\[ \hat{x}={([m]^{-1}[k]+i\omega [m]^{-1}[c]-\omega^2)}^{-1}[m]^{-1}\hat{F} \]
In these equations \( Z = [m]^{-1}[k]+i\omega [m]^{-1}[c]-\omega^2 \) is the impedance matrix, and we will come back to it. For now, let's calculate the natural frequencies. To do so, we remove the external forces and the friction, which gives
\[ ([m]^{-1}[k]-\omega^2)\hat{x}=0 \]
This is an eigenvalue problem, and it defines the natural frequencies of the system through the following equation.
\[ det([m]^{-1}[k]-\omega^2)=0 \]
where \( det \) is the determinant. The eigenvalues of \( [m]^{-1}[k] \) are \( \omega_1^2, \omega_2^2, ..., \omega_n^2 \), so the natural frequencies are \( \omega_1, \omega_2, ..., \omega_n \). After solving this equation and finding the eigenvalues and eigenvectors, we can transform the matrix \( [m]^{-1}[k]-\omega^2 \) to a diagonal matrix using those eigenvectors. For simplicity, assume this transformation also diagonalizes the friction term \( [m]^{-1}[c] \); in other words, assume \( [m]^{-1}[k] \) and \( [m]^{-1}[c] \) commute
\[ [[m]^{-1}[k], [m]^{-1}[c] ] = 0 \]
where \( [a,b]=ab-ba \) is the commutator. Let's name the diagonal elements of the transformed \( [m]^{-1}[c] \) matrix \( \zeta_i \). In the transformed basis the impedance is diagonal, and things get interesting when we see that the determinant of \( Z \) appears in the denominator of \( Z^{-1} \). In the new basis this implies
\[ \hat{x}=Z^{-1}[m]^{-1}\hat{F} = \frac{...}{|det(Z)|} [m]^{-1}\hat{F}= \\ \frac{ ... \times [m]^{-1}\hat{F}}{\sqrt{({(\omega-\omega_1)}^2+{(\zeta_1\omega)}^2)({(\omega-\omega_2)}^2+{(\zeta_2\omega)}^2)...({(\omega-\omega_n)}^2+{(\zeta_n\omega)}^2)}} \]
which shows that the amplitude \( \hat{x} \) has peaks when \( \omega \) is close to one of the \( \omega_i \), the natural frequencies of the system.
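The whole chain calculation above can be sketched numerically. The values below (unit masses, spring constants, small dampings, and the driving force) are arbitrary illustrative choices.

```python
import numpy as np

n = 3
m = np.eye(n)                          # unit masses (illustrative)
k_s = [2.0, 1.0, 1.0, 2.0]             # spring constants k_1 .. k_{n+1} (illustrative)
c_s = [0.02, 0.01, 0.01, 0.02]         # small dampings c_1 .. c_{n+1} (illustrative)

def tridiag(vals):
    # nearest-neighbour coupling matrix with the [k]/[c] structure above
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = vals[i] + vals[i + 1]
        if i + 1 < n:
            M[i, i + 1] = M[i + 1, i] = -vals[i + 1]
    return M

K, C = tridiag(k_s), tridiag(c_s)
minv = np.linalg.inv(m)

# natural frequencies: square roots of the eigenvalues of m^-1 k
nat = np.sort(np.sqrt(np.linalg.eigvals(minv @ K).real))

# sweep the driving frequency and record the response amplitude |x|
F = np.array([1.0, 0.5, 0.25])         # a force that excites every mode
omegas = np.linspace(0.2, 3.0, 3000)
amp = np.array([
    np.linalg.norm(np.linalg.solve(minv @ K + 1j * w * (minv @ C) - w**2 * np.eye(n),
                                   minv @ F))
    for w in omegas
])
print(nat)  # the response |x| peaks near these frequencies
```

Sweeping \( \omega \) and solving \( Z\hat{x}=[m]^{-1}\hat{F} \) at each step reproduces one peak per natural frequency, as the determinant argument predicts.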
Let's come back to the definition of resonance. It looks like there is a better definition for it.
A resonance is a peak in a frequency-dependent amplitude close to a natural frequency of a stationary solution of a system.
where a stationary solution is an eigenfunction/eigenvector solution and its natural frequency is its eigenvalue. It's worth mentioning that if someone wanted to define a clock in the most general sense, that stationary solution would fit the definition perfectly. So here we will think of resonances as peaks in the amplitude associated with a clock, and vice versa. Later we will come back to this definition.
This is what we were looking for. One could fit the Fraunhofer-line data to the amplitude above, but there are problems here. What are the \( \hat{x} \)s? Is there any classical vibration going on inside atoms? The Uncertainty principle clearly states that we just have waves down there, so there cannot be anything to vibrate.
Another problem is that the frequency at which each peak of the amplitude reaches its maximum does not exactly match the corresponding natural frequency. Therefore we need to modify and generalize this calculation, while sticking to its main point:
There are natural frequencies that can be found as the eigenvalues of an equation.
Generalization of Resonance
We can start by simplifying
\[ det([m]^{-1}[k]-\omega^2)=0 \]
by assuming a matrix \( H \) whose square is \( H^2 = [m]^{-1}[k] \); then
\[ det((H-\omega)\times(H+\omega))=det(H-\omega)\times det(H+\omega)=0 \]
So it can be simplified as
\[ det(H-\omega)=0 \]
This simplification will fix our problem with the amplitude peaks that were not exactly at the natural frequencies, because it leads us to
\[ H\hat{u}=\omega\hat{u} \]
At this point we don't yet know what the eigenvectors \( \hat{u} \) mean! Next, we bring the hypothesised friction back into the equation. We cannot bring back \( i\omega [m]^{-1}[c] \), because then we could factor \( \omega \) out and redefine \( H \), recovering the same frictionless equation as above; we don't want that! Instead, we do our best by adding a frequency-independent imaginary term \( iF \) to do the job. Then we have
\[ (H-i F-\omega)\hat{u}=\hat{v} \]
so the amplitude \( \hat{u} \) is inversely related to \( |det(H-iF-\omega)| \), and because the imaginary part is not a function of \( \omega \), the maxima of the peaks now sit exactly at the natural frequencies determined by \( H \). So we fixed one problem. Ta-da!
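A quick numerical sketch of this step, with an arbitrary illustrative \( [m]^{-1}[k] \) matrix: building \( H \) from the eigendecomposition shows that its eigenvalues are the natural frequencies directly, with no squaring involved.

```python
import numpy as np

# illustrative m^-1 k matrix for a small coupled chain (symmetric here)
mk = np.array([[3.0, -1.0, 0.0],
               [-1.0, 2.0, -1.0],
               [0.0, -1.0, 3.0]])

# build H with H @ H = m^-1 k from the eigendecomposition of mk
lam, V = np.linalg.eigh(mk)
H = V @ np.diag(np.sqrt(lam)) @ V.T

freqs = np.sort(np.linalg.eigvalsh(H))
print(freqs)                   # the natural frequencies appear directly, unsquared
print(np.allclose(H @ H, mk))  # sanity check that H really squares to m^-1 k
```

Since the eigenvalues of \( [m]^{-1}[k] \) are \( \omega_i^2 \), the eigenvalues of \( H \) come out as the \( \omega_i \) themselves, which is exactly what \( det(H-\omega)=0 \) asserts.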
On the other hand, the classical resonance equations assumed simple harmonic motion, but there is a nice generalization of these solutions, called a Hilbert space, that frees us from thinking about vibrations smaller than our fundamental waves. The vectors become functions in the Hilbert space. Nice! Now the possible generalization of \( H \) seems clear.
Our generalization is not complete, and it was not the only way we could have generalized classical resonances, but it gives us some hints. There were other hints that the brilliant minds of the 1900s followed to reach the Schrödinger equation. Here, we just wanted to show that the Schrödinger equation is a natural generalization of calculating resonances.
Quantum Mechanics
It starts with the eigenstates (another name for the eigenfunctions) of the Hamiltonian, \( H \), for a particular space-time and its boundary. Then, by matching the eigenvalues and their units with measurements, we arrive at the Schrödinger equation.
\[ i\hbar \frac{\partial}{\partial t} \Psi\left(\mathbf{r},t\right) = H \Psi\left(\mathbf{r},t\right) \]
which in Hilbert Space, by using Dirac notation, can be written as
\[ i \hbar \frac{\partial}{\partial t}|\psi(t)\rangle= H|\psi(t)\rangle \]
where
\[ H= -\frac{\hbar^2}{2\mu} \Delta+V(\mathbf{r}) \]
and there is no need to introduce \( \hbar \) and \( \mu \) here. Schrödinger found the \( V(\mathbf{r}) \) that makes his equation generate the correct energy levels, \( E_n \), of the Hydrogen atom as eigenstates of \( H \).
\[ H|n,l,m\rangle = E_n|n,l,m\rangle \]
where
\[ E_n = - \frac{ m_e e^4}{2 ( 4 \pi \epsilon_0)^2 \hbar^2 } \frac{1}{n^2} \]
for the Hydrogen atom, in which \( l,m \) are two quantum numbers that don't affect the energy levels. This was a great discovery that supported the theory. The result matches the Rydberg formula, if only we could understand how to subtract two energy levels of two waves, not particles. We'll come back to this problem soon.
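A short numerical check of this match, using standard CODATA constants; the transition chosen, \( n = 3 \to n = 2 \), is just an illustrative example.

```python
import numpy as np

# CODATA constants in SI units
m_e = 9.1093837015e-31     # electron mass (kg)
e = 1.602176634e-19        # elementary charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
hbar = 1.054571817e-34     # reduced Planck constant (J s)
h = 2 * np.pi * hbar
c = 2.99792458e8           # speed of light (m/s)

def E(n):
    # hydrogen energy levels from the formula above, in joules
    return -m_e * e**4 / (2 * (4 * np.pi * eps0)**2 * hbar**2) / n**2

print(E(1) / e)            # ground state in eV: about -13.6

# wavelength of the n = 3 -> n = 2 line (Balmer H-alpha), via E = h c / lambda
lam = h * c / (E(3) - E(2))
print(lam * 1e9)           # about 656 nm, as the Rydberg formula predicts
```

The difference of two levels reproduces the familiar red Balmer line near 656 nm; how such a subtraction makes sense for waves is exactly the question addressed below.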
Notice that the above form of \( H \) could be subject to change if we considered curved space-time, but that's not what we want to deal with here. Another possible modification could be adding at least two more integers to the eigenstates, \( |p,q,n,l,m \rangle \), to support all kinds of atoms, where \( p \) is the number of protons and \( q \) the number of neutrons of the solution. Nonetheless, we could also add the number of quarks, etc. to the list.
Also notice that the solutions of the Schrödinger equation with respect to time are just
\[ |\psi(t)\rangle=e^{-\frac{i}{ \hbar} t \times H}|\psi\rangle \]
which we'll call the Schrödinger evolution. Indeed, this equation looks more essential than the differential form of the Schrödinger equation above, because its exponential form clearly shows that for an eigenstate the evolution of \( |\psi(t) \rangle \) periodically returns to its initial state, which is why we called these stationary solutions clocks.
Therefore, the eigenstates of the Schrödinger equation are stationary solutions, which builds a great connection with resonance. Now we know particles are stationary solutions of the wavefunction, so resonances naturally reveal the particle structure of the system. This is interesting because we noted that these stationary solutions are just clocks; in fact, particles are just clocks.
Projection
The incredible idea of Hilbert space is that we can define a kind of inner product for vectors in that space, so we can think of the projection of one vector onto another. In the Copenhagen interpretation, these projections carry a probability meaning: they can be the probability of an event happening, a probability distribution over events, and so on. Here, we don't have to think of them as probabilities. They are what we classically considered as amplitudes. Increasing or decreasing the amplitude makes some probabilities increase or decrease respectively, but probability is not the core of our understanding, as the Copenhagen interpretation suggests. The probability arises in the same way that tossing a die has probabilistic properties. The only difference is that when tossing a die we calculate the probability of falling into a static solution, while here we calculate the probability of falling into a stationary solution. For instance, if each side of the die after the fall can be represented by \( |n\rangle \), then while the die is in the air, before falling, the wavefunction is \( |\psi\rangle \); the amplitude of \( |\psi\rangle \) on each basis vector is \( \langle n|\psi\rangle \), which is the projection of \( |\psi\rangle \) on that basis vector, \( |n\rangle \), and gives the probability of the die landing on that side. You know how to generalize this to quantum states.
We also notice that the projection of a wavefunction on itself is a free parameter that we always set to one; we call this normalization.
\[ \langle \psi|\psi\rangle = 1 \]
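The die analogy can be made concrete in a few lines; the unnormalized amplitudes below are arbitrary illustrative numbers.

```python
import numpy as np

# a toy "die in the air" state in a 6-dimensional Hilbert space;
# the unnormalized amplitudes are arbitrary illustrative numbers
psi = np.array([1.0, 2.0, 1.0, 0.5, 1.5, 1.0], dtype=complex)
psi /= np.linalg.norm(psi)          # normalization: <psi|psi> = 1

# the basis states |n> are the standard basis, so <n|psi> is just a component
probs = np.abs(psi)**2              # emergent probability of each face

print(np.vdot(psi, psi).real)       # 1.0, the normalization above
print(probs)                        # probabilities of the six faces, summing to 1
```

Normalization is what makes the squared projections behave as probabilities: once \( \langle\psi|\psi\rangle = 1 \), the six numbers automatically sum to one.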
This is not the only use of projection. Remember that the Rydberg formula needs the subtraction of energy levels, which was hard to picture without considering the electron a particle? But the electron is a wave, so we cannot simply subtract its energy levels. Instead we have to calculate the amplitude of the evolution of the electron wave as follows.
\[ amplitude = \langle final |initial \rangle \]
The initial state is the electron in the \( n_1 \) energy level
\[ |initial \rangle = |n_1\rangle \]
And the final state is
\[ |final \rangle = |n_2\rangle \otimes |emitted\rangle \]
where \( \otimes \) is the tensor product, \( n_2 \) with \( n_2 < n_1 \) is the energy level of the final state, and the emitted light state, \( |emitted \rangle \), is the solution of the Schrödinger equation for empty, boundaryless space. Therefore the emitted light state is
\[ \langle \mathbf{r}|emitted\rangle = e^{i\mathbf{k}.\mathbf{r}} \]
and by considering its time evolution
\[ \langle \mathbf{r},t|emitted\rangle = e^{-i\left(\frac{t}{\hbar}E_e -\mathbf{k}.\mathbf{r}\right)} \]
Now it's time to put all of this together and calculate the amplitude
\[ \langle final |initial \rangle = \left(\langle n_2| \otimes \langle emitted |\right)1_{2+1}|n_1\rangle \]
where \( 1_{2+1} \) is a transformation that makes sense of this inner product. We can use the following identity to calculate its value
\[ 1_{2+1}=\int d\mathbf{r}dt \left(|\mathbf{r},t\rangle \otimes |\mathbf{r},t\rangle \right) \langle \mathbf{r},t| \]
then we can see
\[ \langle final |initial \rangle = \int d\mathbf{r}dt \langle n_2|\mathbf{r},t\rangle \langle emitted |\mathbf{r},t\rangle\langle \mathbf{r},t|n_1\rangle \\ =\int d\mathbf{r}dt \langle n_2|\mathbf{r},t\rangle e^{i\left(\frac{E_e}{\hbar}t -\mathbf{k}.\mathbf{r}\right)} \langle \mathbf{r},t|n_1\rangle \]
Thus, because of the Schrödinger evolution, the above amplitude becomes
\[ \langle final |initial \rangle =\int d\mathbf{r}e^{-i\mathbf{k}.\mathbf{r}}\int dt \langle n_2|\mathbf{r},t\rangle\langle \mathbf{r},t|n_1\rangle e^{i\left(\frac{E_e}{\hbar}t \right)} \\ =\int d\mathbf{r}e^{-i\mathbf{k}.\mathbf{r}}\langle n_2|\mathbf{r}\rangle\langle \mathbf{r}|n_1\rangle\int dt e^{\frac{i}{\hbar}\left(E_2+E_e-E_1 \right)t} \]
where, using the Dirac delta function, it simplifies to
\[ \langle final |initial \rangle =2\pi\hbar \, \delta\left(E_2+E_e-E_1 \right) \int d\mathbf{r}e^{-i\mathbf{k}.\mathbf{r}}\langle n_2|\mathbf{r}\rangle\langle \mathbf{r}|n_1\rangle \]
which shows the amplitude is non-zero if and only if \( E_e = E_1-E_2 \). So no magic or particle is needed to subtract the energy levels of two eigenstates, i.e. waves. Notice that this calculation is valid for the eigenstates of any other Hamiltonian, not just the Hydrogen atom's.
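The delta function above can also be seen emerging numerically: replacing the infinite time integral by a finite window \( [-T, T] \) gives \( 2\hbar\sin(\Delta T/\hbar)/\Delta \) with \( \Delta = E_2+E_e-E_1 \), which peaks ever more sharply at \( \Delta = 0 \) as \( T \) grows. The energies below are illustrative, in natural units.

```python
import numpy as np

hbar = 1.0
E1, E2 = 5.0, 3.0        # illustrative energy levels, E1 > E2
T = 200.0                # half-width of the time window; larger T -> sharper peak

def time_integral(E_e):
    # finite-time version of  int dt exp(i (E2 + E_e - E1) t / hbar)
    delta = E2 + E_e - E1
    if abs(delta) < 1e-12:
        return 2 * T
    return 2 * hbar * np.sin(delta * T / hbar) / delta

E_e = np.linspace(0.0, 4.0, 2001)
vals = np.abs([time_integral(E) for E in E_e])
best = E_e[np.argmax(vals)]
print(best)              # the peak sits at E_e = E1 - E2 = 2.0
```

So energy conservation here is literally a resonance in the emitted-light energy: only \( E_e = E_1 - E_2 \) survives the time integration.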
By the way, this use of projection to calculate amplitudes is nothing new; in QFT we do it all the time for different kinds of problems. In QFT, we just start from the path integral formulation and derive these projections from it. That's no problem for the resonance interpretation as long as the final calculation is the same, and the path integral itself has no strange meaning in this interpretation.
Conservation
Before going further, we need to clarify that no magic is going on! The subtraction of energies above is just conservation of energy. How did we make that happen?
Let's start with the single wavefunction of the system, \( |\Psi\rangle \), and some operators, \( \hat{P_i} \), describing its symmetries. The order of applying these operators to the wavefunction must not change the result. In other words, for all symmetry operators \( \hat{P_i}, \hat{P_j} \)
\[ \hat{P_i}\hat{P_j}|\Psi\rangle = \hat{P_j}\hat{P_i}|\Psi\rangle \]
which means \( \hat{P_i} \) and \( \hat{P_j} \) commute
\[ [\hat{P_i},\hat{P_j}]=0 \]
If these operators are the generators of translations along the coordinates, like
\[ \hat{P_i}=-i\hbar\frac{\partial}{\partial x^i} \]
they all satisfy their defining equations
\[ -i\hbar \frac{\partial}{\partial x^{i}} \Psi\left(\mathbf{r},t\right) = \hat{P_{i}} \Psi\left(\mathbf{r},t\right) \]
which leads us to their evolution equation
\[ \Psi(x^{i})=e^{\frac{i}{\hbar} x^{i} \hat{P_{i}}}\Psi \]
Notice that they all share the same wavefunctions as their eigenstates, which means
\[ \hat{P_{i}}\Psi_{n} = p_{i,n} \Psi_{n} \]
where the \( p_{i,n} \) are the conserved quantities. Notice \( \hat{P_{i}} \) are Hermitian operators, \( \hat{P_{i}}^\dagger=\hat{P_{i}} \), so the \( p_{i,n} \) are real numbers.
For instance, \( \hat{L_z} = \hat{P^{\phi}} = \hat{P_{\phi}} \) and \( \hat{H} = \hat{P^{t}} = -\hat{P_{t}} \), where upper indices denote the contravariant versions of the conserved quantities. So the Schrödinger equation is just a consequence of symmetry along the time direction.
Notice that the evolution equation above is all we need for calculations like those of the previous section, which subtract the conserved values; that subtraction is indeed what we mean by conservation.
Double-slit experiment
The double-slit experiment is the basic experiment we want to understand, and understanding it will lead us to an understanding of measurement, solving the measurement problem. In this experiment we have three parts of space-time: one from the electron gun to the double slit, another from the double slit to the screen, and the last one on the screen. Let's assume the double slit itself is so thin that it has no resonance modes of its own, but be aware that adding any instrument to verify the particle hypothesis and check which slit the particles go through breaks this assumption; we would need to add another part, with its own eigenstates, just for that. For each of these parts, one needs to solve the Schrödinger equation with different boundary conditions. However, the first two parts are empty space-time, and we know their solutions exactly.
\[ \langle \mathbf{r},t|\Psi^{(1)}\rangle = e^{-i\left(\frac{E}{\hbar}t-\mathbf{k^{(1)}}.\mathbf{r}\right)} \]
for the first part,
\[ \langle \mathbf{r},t|\Psi^{(2)}\rangle = e^{-i\left(\frac{E}{\hbar}t-\mathbf{{k_{1}^{(2)}}}.\mathbf{r}\right)} + e^{-i\left(\frac{E}{\hbar}t-\mathbf{{k_{2}^{(2)}}}.\mathbf{r}\right)} \]
for the second part, where \( \mathbf{k_{1}^{(2)}} \) points along the direction of propagation from the first slit and \( \mathbf{k_{2}^{(2)}} \) along the direction of propagation from the second slit. For the third part, the screen, we don't know the solutions' exact relation to \( |\mathbf{r},t\rangle \), but we know the eigenstates, the resonance modes of the screen, are arranged on a grid. One could call these eigenstates pixels.
\[ |\Psi_{i,j}^{(3)}\rangle \]
Even though these eigenstates can overlap, we expect them to be orthogonal and complete.
\[ \langle\Psi_{k,l}^{(3)}|\Psi_{i,j}^{(3)}\rangle=\delta_{k,i}\delta_{l,j} \]
In the end, the amplitude \( \langle \Psi_{i,j}^{(3)}|\Psi^{(2)}\rangle \) is directly related to the probability of exciting the \( |\Psi_{i,j}^{(3)}\rangle \) mode of the screen, which results in a black dot at pixel \( i, j \). Therefore, there is no need to think about particles; the probabilistic nature of the process is the same as the probabilistic nature of tossing a die. There is still an explanation needed for why, among all the states of the screen, the system picks one of them; we'll come back to that below.
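The two-term wavefunction of the second part already contains the fringes. The sketch below evaluates \( |\langle \mathbf{r}|\Psi^{(2)}\rangle|^2 \) along a screen, modelling each slit as a point source (a standard far-field approximation of the two-wave sum above); the wavelength, slit separation, and distances are arbitrary illustrative choices.

```python
import numpy as np

# illustrative geometry (metres): slits at +/- d/2, screen at distance L
lam = 500e-9             # wavelength
k = 2 * np.pi / lam
d = 50e-6                # slit separation
L = 1.0                  # slit-to-screen distance

y = np.linspace(-0.05, 0.05, 20001)   # positions along the screen

# each slit acts as a point source, so the phase is k times the path length
r1 = np.sqrt(L**2 + (y - d / 2)**2)
r2 = np.sqrt(L**2 + (y + d / 2)**2)
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # the two-term wave of part (2)
intensity = np.abs(psi)**2                       # excitation strength per pixel

# the bright fringes should be spaced by about lam * L / d
is_peak = (intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])
spacing = np.mean(np.diff(y[1:-1][is_peak]))
print(spacing)           # close to lam * L / d = 0.01 m
```

Each bright fringe is where a column of screen pixels has a large excitation amplitude; the interference pattern needs nothing beyond the two-term wave.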
But the screen is a 2D surface; what about the tracks of particles in 3D detectors, such as bubble chambers? A particle track in a bubble chamber is basically built of bubbles along the path of a wavefunction. Similar to the screen, the eigenstates of the bubble chamber can be described by \( |\Psi_{i,j,k}^{(C)}\rangle \), which one could call 3D pixels. When these 3D pixels get excited by a passing wavefunction, they create bubbles, which can be detected. So there is nothing new about bubble chambers.
Entanglement
In the double-slit experiment above, we reached the point where the final result has amplitude \( \langle \Psi_{i,j}^{(3)}|\Psi^{(2)}\rangle \). Because all the \( | \Psi_{i,j}^{(3)}\rangle \) are orthogonal and complete in the Hilbert space, we can expand the wavefunction of the second part in the eigenstates of the third part:
\[ |\Psi^{(2)}\rangle = \sum_{i,j}|\Psi_{i,j}^{(3)}\rangle \langle\Psi_{i,j}^{(3)}|\Psi^{(2)}\rangle \]
We can do the same for every other experiment too. We can assume the experiment runs in the bulk of a space-time, while all the measurements, the results, come from the boundary of this space and time. This doesn't mean the measurement devices are 2D; they are 3D devices sitting on the 2D boundary of the experiment. Let the eigenstates of the bulk be \( |\Psi^{b}\rangle \) and the eigenstates of the boundary, the measurement instruments, be \( |\Psi^{m}\rangle \); then we can describe the wavefunction of the experiment in the eigenstates of the boundary as
\[ |\Psi^{b}\rangle = \sum_{i}|\Psi_{i}^{m}\rangle \langle\Psi_{i}^{m}|\Psi^{b}\rangle \]
where \( \sum_{i} \) runs over all eigenstates of the boundary, the measurement instruments. One could describe this as the superposition principle, but it is not really different from quantum entanglement once the boundary hosts two or more measurement devices with their own eigenstates, like
\[ |\Psi_{i,j, ...}^{m}\rangle=|\Psi_{i}^{m_{1}}\rangle\otimes |\Psi_{j}^{m_{2}}\rangle\otimes ... \]
Notice that the \( |\Psi_{i}^{m_{j}}\rangle \) are the measurement devices' eigenstates, not particles. Therefore,
\[ |\Psi^{b}\rangle = \sum_{i_1,i_2,...}\prod_j\otimes \left(|\Psi_{i_{j}}^{m_{j}}\rangle \langle\Psi_{i_{j}}^{m_{j}}|\right)|\Psi^{b}\rangle \]
is the famous entanglement. However, a lot of these terms will be zero, simply because they would violate conservation properties of the bulk. For instance, in the simple case of two screens measuring a light wavefunction with up and down polarizations, \( |+\rangle, |-\rangle \), we can write the wavefunction of the bulk as
\[ |\Psi\rangle = |+\rangle\otimes|-\rangle+|-\rangle\otimes|+\rangle=|+-\rangle+|-+\rangle \]
This is totally compatible with Bell's theorem, where we showed that in the bulk of the experiment we can have superpositions of what we measure. Be careful: projection is not a process, so it is not something with a calculable duration. However, the excitation of one of the resonance modes does take a while, because the wavefunction needs to propagate through the measurement device. We will explain more about this in the following sections.
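The anti-correlation encoded in \( |+-\rangle+|-+\rangle \) can be checked directly by projecting onto the four product eigenstates of the two screens; this is just linear algebra on the state written above.

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# the bulk state from the text: |+-> + |-+>, normalized
psi = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)

# project onto the four product eigenstates of the two screens
for a, la in ((up, '+'), (down, '-')):
    for b, lb in ((up, '+'), (down, '-')):
        amp = np.vdot(np.kron(a, b), psi)
        print(la + lb, abs(amp)**2)

# |++> and |--> have probability 0: the two screens are perfectly anti-correlated
```

The terms the text says are killed by conservation are exactly the zero projections here; only the anti-correlated outcomes survive.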
The next related subject is Schrödinger's cat. The eigenstates on the boundary are the alive and dead states of the cat, since we observe the cat to see whether it is alive or dead. But wait: the eigenstates depend on the Schrödinger equation, which depends on the geometry, so the eigenstates of the bulk, which is the inside of the box including the cat, depend on the geometry of the cat. Therefore, there is no way to rotate the eigenstates into an alive-dead eigenstate in the bulk, because the geometry of the cat demands that its eigenstates be alive states or dead states, not superpositions of them. So we have no trouble understanding the Schrödinger's cat experiment here. The problem arose in the Copenhagen interpretation because there probability is an essential ingredient in defining the eigenstates; we can argue that is where the problem starts. Here, probability is an emergent property of the amplitude of a wavefunction, so nothing non-understandable remains. Also, for reasons unknown to the author, people do not usually separate such systems into parts and calculate their eigenstates separately, as we did above for the double-slit experiment.
Wave–particle mystery
The answers above, which treat everything as waves, may not look satisfying once we consider a double-slit experiment performed with atoms, the very things we described as particles above. So wave-particle duality still sticks, right? No! Let's rethink it. We just showed an atom is a stationary solution of a wavefunction, so technically atoms are wavefunctions with a larger amplitude around a specific location, and one can think of that location as the position of the particle. Therefore, even atoms are waves that can satisfy the double-slit experiment.
The photon is a very interesting case, and once we make it clear, the same method can be applied to all variants of hypothesized particles. To make it clear, let's try to describe the original experiments in which the photon revealed itself: the black-body experiment, explained by Planck's law, and the photoelectric experiment. You're familiar with them, right? The black body is a cavity with a hole in it, and to derive Planck's law we needed to assume \( E=nh\nu \), where Einstein later identified \( n \) with the number of photons. But we really don't need to assume that! By Noether's theorem, each symmetric coordinate is paired with exactly one conserved variable, no more, no less. This means that if you find the conserved variable paired with the time coordinate, while time is a symmetry of the system, then this conserved variable must be the energy. On the other hand, the eigenstates of the Hamiltonian for empty space-time
\[ -\frac{i}{\hbar}H=\frac{\partial}{\partial t} \]
are
\[ \langle t|\omega\rangle = e^{-i\omega t} \]
where we showed that wavefunctions of this kind carry a conserved value, which in this case is \( \omega \). So \( \omega \) is the conserved variable paired with time; therefore \( \omega \) must be the energy, up to a constant.
\[ E=\hbar \omega \]
This simple step bypasses the need for the photon assumption in the photoelectric experiment, where \( E_2-E_1=h\nu = \hbar\omega \). Remembering how hard it was to find a convincing explanation of this formula in the textbooks, this becomes even more important. However, we still need to explain what more than one photon means. That situation arises in the black-body experiment. The wavefunction inside the cavity must be expandable as a Fourier series, which spans the Hilbert space of eigenstates of the Schrödinger equation in empty space, with eigenvalues \( n\omega \), where \( n \) is a natural number. Thus any wavefunction inside the cavity, because its boundary conditions are not at infinity, can be written as
\[ \Psi(t)=\sum_n a_n e^{-in\omega t} \]
so the energy of excited modes of the system inside the cavity are
\[ E_n=n\hbar\omega \]
which is exactly what we were looking for. Notice that we didn't need to assume any particle, including the photon, to derive this equation. The same holds for other waves and their associated particles.
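As a minimal numerical sketch of the expansion above (the mode amplitudes, the fundamental frequency, and the \( \hbar=1 \) units below are made up for illustration), we can build a cavity wavefunction out of two modes and check that a discrete Fourier analysis recovers only integer multiples of the fundamental frequency:

```python
import numpy as np

# Made-up cavity parameters, hbar = 1 units: fundamental angular frequency
# omega with period T = 1, sampled at N points over one period.
omega = 2.0 * np.pi
N = 1024
t = np.arange(N) / N

# A cavity wavefunction excited in modes n = 1 and n = 3:
#   Psi(t) = sum_n a_n exp(-i n omega t)
psi = 1.0 * np.exp(-1j * 1 * omega * t) + 0.5 * np.exp(-1j * 3 * omega * t)

# The inverse DFT recovers the mode amplitudes a_n (ifft matches the
# exp(-i n omega t) sign convention used above).
a = np.fft.ifft(psi)
excited = [n for n in range(N) if abs(a[n]) > 1e-6]

hbar = 1.0
print(excited)                              # -> [1, 3]
print([n * hbar * omega for n in excited])  # energies E_n = n * hbar * omega
```

Only the integer bins \( n=1 \) and \( n=3 \) survive, with energies \( \hbar\omega \) and \( 3\hbar\omega \), matching \( E_n=n\hbar\omega \) without any particle assumption.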
As you can see, the photon's particle nature is totally different from an atom's particle nature! A photon is a stationary state. It's a clock. But it's not something with a center, as one would expect from a particle. Or maybe our definition of a particle is just some kind of wave that is stationary! Either way, they are totally different concepts, and it's bad practice to put them under one name, particle!
I would suggest renaming all the Particle Physics departments to Wave Physics, if we believe the Uncertainty Principle is correct.
Friction
Let's go back to the subtraction of energies, aka conservation of energy, in the calculation above. Did you notice that the Dirac delta function is not what we actually measure?
Do you remember that in the Generalization of Resonance section above we followed the classical steps and added friction to the equations with \( -iF \), to avoid infinities at the resonance frequencies? We can do the same here, and the effect would be
\[ \langle final |initial \rangle =\int d\mathbf{r}e^{-i\mathbf{k}.\mathbf{r}}\langle n_2|\mathbf{r}\rangle\langle \mathbf{r}|n_1\rangle\int dt e^{\frac{t}{\hbar}\left(i\left(E_2+E_e-E_1\right) +\zeta_2+\zeta_e-\zeta_1 \right)} \]
if we assume \( \zeta_1 \), \( \zeta_2 \), and \( \zeta_e \) are constant during the interval between the \( initial \) and \( final \) states. Of course, after the \( final \) state our solution will be an eigenstate of the system and there will be no friction by definition, so \( \zeta_1 \), \( \zeta_2 \), and \( \zeta_e \) will all be zero. This awesome result lets us fit all of our calculations to our measurements without assuming anything magical, like the Dirac delta function! And remember, you are an expert in quantum mechanics, so you know this result can only come out of the resonance interpretation, not the other interpretations, as far as the author knows.
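To see what the damping does to the time integral above, here is a small numerical sketch (the damping rate and the \( \hbar=1 \) units are made-up values, and the damping is taken as decaying, \( e^{-\zeta t/\hbar} \)): without friction the integral sharpens into a Dirac delta in the energy mismatch, while with friction it becomes a finite Lorentzian line shape of width \( \sim\zeta \), which is the kind of peak we actually measure:

```python
import numpy as np

hbar = 1.0
zeta = 0.2                               # made-up net damping rate
t = np.linspace(0.0, 200.0, 200_001)     # long compared with hbar / zeta
dt = t[1] - t[0]

def line_shape(delta_E):
    """|integral_0^T exp((i*delta_E - zeta) * t / hbar) dt|^2 for an
    energy mismatch delta_E, with the damping taken as decaying."""
    integrand = np.exp((1j * delta_E - zeta) * t / hbar)
    return abs(np.sum(integrand) * dt) ** 2

# Without damping this integral sharpens into a Dirac delta at delta_E = 0;
# with damping it stays a finite Lorentzian, hbar^2 / (delta_E^2 + zeta^2):
for dE in (0.0, 0.2, 1.0):
    print(dE, line_shape(dE), hbar**2 / (dE**2 + zeta**2))
```

The numerical values track the analytic Lorentzian closely, so the measured line width directly encodes the friction rather than an idealized delta function.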
For details, let's review our old example of throwing a dice. What makes the wavefunction of the falling dice collapse onto one of its eigenstates, aka the sides of the dice, is the friction of the dice against the ground. Do we have friction in a theory that wants to explain nature fundamentally? A complete theory should explain where the energy lost to friction goes. If that energy turns into heat, heat is defined by the movement of particles, and our stationary solutions describe those particles, then we have a loop of effects that cannot be satisfied. It looks like there is no room for friction inside a hydrogen atom or the other small structures we want to describe with quantum mechanics. Indeed, in the other interpretations there is no friction in quantum mechanics. But in our classical generalization of resonance we had friction; we removed it to calculate the natural frequencies of the system, and after that we added the friction back to get a complete description of the system. So the difference between this interpretation and the others is the friction, which can be added to the Schrödinger equation as below to give a general description of the system. Also, the heat energy is simply interpreted as extra mass of the particles, based on mass–energy equivalence, \( E = mc^2 \), so fundamentally we don't need to assign the heat to moving particles. Thus the evolution of the system follows
\[ |\psi(t)\rangle=e^{-\frac{t}{ \hbar} (i H + F) }|\psi\rangle \]
But it's not that simple in the details! The friction operator \( F \) depends on the configuration of the system, and the configuration of the system depends on time, which is why the dice falls onto a different basis at different times. So, more correctly, we have
\[ |\psi(t)\rangle=e^{-\frac{t}{ \hbar} (i H + F(t)) }|\psi\rangle \]
This is where we choose the recursive presentation of the Schrödinger evolution rather than its differential-equation presentation, because the two are no longer the same after adding time-dependent friction, and the recursive presentation keeps part of the solution as a clock, stationary, which is what we're interested in. The friction \( F(t) \) depends entirely on \( H \), which means \( F=F(t,H) \). It describes how each stationary solution absorbs or emits energy into or out of its mass, so it should make no difference which of the two is applied first to a wavefunction. In other words, they should commute. We present this as
\[ [H,F(t, H)]=0 \]
where for each eigenstate of \( H \), like \(|n\rangle \), \( F \) has an eigenvalue like
\[ F(t)|n\rangle=\zeta_n(t)|n\rangle \]
Then we can calculate the evolution of any wavefunction as
\[ |\psi(t)\rangle=\sum_n e^{-\frac{t}{\hbar}(iE_n+\zeta_n(t))}|n\rangle\langle n|\psi\rangle \]
Thus, if one of the \( \zeta_n(t) \) is the minimum throughout the time interval \( T=[t_1,t_2] \), we can define \( \zeta_{n_{min}}(t) = \min_n \zeta_n(t) \), so that \( \zeta_n(t)-\zeta_{n_{min}}(t) > 0 \) for \( n \neq n_{min} \). Then, in the interval \( T \), we can write
\[ |\psi(t \in T)\rangle=e^{-\frac{t}{\hbar}\zeta_{n_{min}}(t)}\bigg(e^{-\frac{t}{\hbar}iE_{n_{min}}}|n_{min}\rangle\langle n_{min}|\psi\rangle \] \[ +\sum_{n\neq n_{min}} e^{-\frac{t}{\hbar}iE_{n}}e^{-\frac{t}{\hbar}\left(\zeta_n(t)-\zeta_{n_{min}}(t)\right)}|n\rangle\langle n|\psi\rangle\bigg) \]
Since by definition \( \zeta_n(t)-\zeta_{n_{min}}(t) > 0 \) for \( n \neq n_{min} \), all terms except \( |n_{min}\rangle \) decay, and if \( T \) is long enough they are effectively gone, so after a while we are effectively left with just
\[ |\psi(t \rightarrow t_2)\rangle=e^{-\frac{t}{\hbar}\left(iE_{n_{min}}+\zeta_{n_{min}}(t)\right)}|n_{min}\rangle\langle n_{min}|\psi\rangle \]
But remember that \( |n_{min}\rangle \) is one of the stationary clocks of the system, so after \( t_2 \), when only \( |n_{min}\rangle \) has survived, the system no longer loses energy to heat, by the definition of stationary clocks, which means \( \zeta_{n_{min}}(t>t_2) = 0 \). So the final state of the system will be
\[ |\psi(t \geq t_2)\rangle=e^{-\frac{it}{\hbar}E_{n_{min}}}|n_{min}\rangle\langle n_{min}|\psi\rangle \]
Oh my! We have just described the evolution of the collapsing states of the system. Notice that in the Copenhagen interpretation this collapse is sudden, non-local, and involves no evolution at all. In the Many-worlds interpretation there is no sudden collapse and everything evolves just as it does here, but all the states survive and end up in different hypothesized worlds, meaning the world branches off into many worlds.
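The decay of every eigenstate except the minimum-friction one can be sketched numerically (a two-level toy system with made-up energies and friction eigenvalues, \( \hbar=1 \)):

```python
import numpy as np

hbar = 1.0
E = np.array([1.0, 2.0])      # made-up eigenenergies E_n
zeta = np.array([0.0, 0.4])   # friction eigenvalues zeta_n; n_min = 0

# Equal superposition of the two eigenstates as the initial wavefunction.
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)

def psi_t(t):
    """|psi(t)> = sum_n exp(-t/hbar * (i E_n + zeta_n)) |n><n|psi0>."""
    return np.exp(-t / hbar * (1j * E + zeta)) * psi0

for t in (0.0, 5.0, 25.0):
    p = psi_t(t)
    w = np.abs(p) ** 2 / np.sum(np.abs(p) ** 2)   # renormalized weights
    print(t, w)

# The weight of the frictional state decays like exp(-2 zeta_1 t / hbar),
# so effectively only |n_min> survives: collapse as ordinary evolution.
```

After a long enough interval the renormalized weight of \( |n_{min}\rangle \) is indistinguishable from 1, which is exactly the surviving-state behavior derived above.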
Notice that we concluded \( \zeta_{n_{min}}(t>t_2) = 0 \) after one of the eigenstates survives, since we then have a stationary solution. But let's be open-minded! All the conclusions above would still hold if \( \zeta_n(t>t_2) \) were merely very small with respect to the other quantities with energy units in our calculation, \( \zeta_n(t>t_2) \cong 1/ O(a\ggg 0) \), which means even atoms can decay, and we have official support for that process!
\( F \) is a Hermitian operator, which makes \( iF \) a Skew-Hermitian operator. This means the \( \zeta_n(t) \) are real numbers, avoiding any conflict with the stationary solutions. Also notice that when \( F \) depends on time, the evolution equation does not work the same forward in time as backward, which means it does not preserve the information of the system over time and breaks time symmetry. So, because we added friction and heat to our theory, we support the Second law of thermodynamics, which any good theory must support, because clearly time has an arrow.
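That the evolution with \( F \neq 0 \) is not unitary, and hence does not preserve information, can be checked directly (again a two-level toy system with made-up values, \( \hbar=1 \); since \( H \) and \( F \) commute and are written in their common eigenbasis, the matrix exponential reduces to an elementwise exponential of the diagonal):

```python
import numpy as np

hbar = 1.0
# H and F in their common eigenbasis, so [H, F] = 0 as required; F is
# Hermitian with real eigenvalues zeta_n >= 0 (made-up values).
E = np.array([1.0, 2.0])
zeta = np.array([0.0, 0.4])

psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)

def U(t):
    """exp(-t/hbar * (iH + F)): diagonal, since H and F commute here."""
    return np.diag(np.exp(-t / hbar * (1j * E + zeta)))

norms = [np.linalg.norm(U(t) @ psi0) for t in (0.0, 1.0, 5.0)]
print(norms)

# The norm shrinks over time: with F != 0 the evolution is not unitary and
# does not preserve information, unlike purely Hermitian evolution.
```

A unitary \( U(t) \) would keep the norm at exactly 1; the monotonic shrinkage is the information loss discussed here.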
Quantum mechanics is not supposed to lose information, right? But isn't information lost in the other interpretations too? Let's be honest with ourselves. The other interpretations, which all suffer from the Measurement problem, have to explain our measurements, in which we end up with a single surviving state. For instance, both interpretations discussed here, the Copenhagen interpretation and the Many-worlds interpretation, describe this process one way or another. The point is that collapsing or branching DOES make our theory lose information. So I have no idea why we act as if nature shouldn't lose information. Also, as a benefit of accepting this fact, there would be no Black hole information paradox in our dictionary. That doesn't mean black holes are valid solutions, but there is no information paradox there! Information will simply be lost.
As a bonus point, we're not the first to think about a Skew-Hermitian term in the evolution of wavefunctions. Feshbach–Fano partitioning has a similar idea; that approach allows one to define the concept of resonance in quantum mechanics rigorously, but only within scattering theory. Here, in the resonance interpretation, we defined resonance more deeply, as a fundamental concept of quantum mechanics.
Summary
An article to record my thoughts on how one can understand Quantum Mechanics. It starts by listing the problems that stopped us from understanding this wonderful theory, then reaches the point that they are all related to the Measurement problem. It therefore proposes the Resonance interpretation to resolve this problem, adding the building blocks of the theory step by step to shed light on all the dark corner cases. In the end, measurement turned out to look like a simple classical process, with nothing in it that is hard to understand. This means Quantum Mechanics works through waves with very classical properties, so we can understand it thoroughly.
Some concepts, like Spin, the Dirac equation, or the conservation and quantization of charge, are totally ignored in this article, but they are the subject of future articles. Hopefully, we will understand them soon.
References
- Richard Feynman
- Copenhagen interpretation
- List of cognitive biases
- Einstein
- Planck
- Bohr
- Heisenberg
- Schrödinger
- General Relativity
- Equivalence Principle
- Riemannian geometry
- Einstein field equations
- Wave–particle duality
- Quantum tunnelling
- Measurement problem
- Quantum eraser experiment
- The Notorious Delayed-Choice Quantum Eraser
- Elitzur–Vaidman bomb tester
- Fraunhofer lines
- The Science
- Rydberg formula
- Schrödinger equation
- Newton's second law
- Simple harmonic motion
- Eigenvalues and eigenvectors
- Determinant
- Uncertainty principle
- Laplace operator
- Fourier transform
- Fourier series / Hilbert space
- Hydrogen Atom
- QFT
- Dirac notation / inner product
- Normalization
- Tensor product
- Mass–energy equivalence
- Hamiltonian
- Hermitian
- Covariance and contravariance of vectors
- Black hole information paradox
- Dirac delta function
- Path integral formulation
- Laplace's equation
- Laplace–de Rham operator
- Double-slit experiment
- Bubble chamber
- Bell's theorem
- Superposition principle
- Schrödinger's cat
- Photon
- Black body experiment
- Planck's law
- Photoelectric experiment
- Natural number
- Special relativity
- Skew-Hermitian
- Feshbach–Fano partitioning
- Scattering theory
- Dirac equation
Cite
If you found this work useful, please consider citing:
@misc{hadilq2021Quantum,
author = {{Hadi Lashkari Ghouchani}},
note = {Published electronically at \url{https://hadilq.com/posts/understanding-quantum-mechanices/}},
gitlab = {Gitlab source at \href{https://gitlab.com/hadilq/hadilq.gitlab.io/-/blob/main/content/posts/2021-11-29-understanding-quantum-mechanices/index.md}},
title = {Understanding Quantum Mechanics},
year={2021},
}