Basic quantum circuit simulation in Python

I’ve always been a proponent of the idea that one of the best ways to learn about a topic is to code up a simple example that uses that idea/concept/algorithm.  In conversations I’ve had with students recently, I’ve realized there is some interest in playing with quantum computing, quantum circuits, and quantum simulation without a full background in quantum physics.  I thought a good way to help bridge this gap would be to provide some pedagogical examples of how one might go about simulating a quantum circuit.  This won’t focus on computing performance or approaches to simulating large systems (for information on that topic, this paper is a good reference), but rather on simple ways to simulate actions on a small number of qubits.

Brief background

A classical digital computer is built upon a foundation of bits, which take on values of 0 or 1, and store information.  Clearly we also need to manipulate and work with that data, and actions on bits are performed by classical logic gates, such as the “AND” gate.  In analogy to classical bits, in quantum computing we have quantum bits, which take on values \ket{0} and \ket{1}.  Similarly, we need to manipulate the data stored in quantum bits, and the equivalent operations are called quantum gates.  Quantum bits have a number of nice properties absent in their classical counterparts, such as superposition and entanglement, and quantum gates are designed to preserve and exploit these features, a property known as unitarity.  For the purposes of this simple post, the exact details of these properties will be unimportant, as a correct mathematical representation of qubits and quantum gates gives us these features naturally in our numerical simulations.

One Qubit

The state of a qubit may be represented as a column vector, and conventionally

(1)   \begin{align*} \ket{0} &= \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \\ \ket{1} &= \left( \begin{array}{c} 0 \\ 1\end{array} \right) \end{align*}

In Python, we can represent such states as arrays in Numpy with the correct dimensions as:
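A minimal version might look like the following, with the names Zero and One chosen to match the discussion below:

```python
import numpy as np

# Column vectors representing the computational basis states |0> and |1>
Zero = np.array([[1.0],
                 [0.0]])
One = np.array([[0.0],
                [1.0]])
```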

which gives us a good representation of either the \ket{0} (Zero) or \ket{1} (One) state. Say we wanted to create a superposition of \ket{0} and \ket{1}, for example the standard \ket{+} state, which is defined by \ket{+}= 1/\sqrt{2} (\ket{0} + \ket{1}). The factor 1/\sqrt{2} is called a normalization factor, and helps ensure the relation to a probability distribution. One way to create this state in Python is to define a normalization function, and feed it the raw, unnormalized state as
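For example, something like this (NormalizeState and Plus are just illustrative names):

```python
def NormalizeState(state):
    # Divide by the 2-norm so the squared amplitudes sum to one
    return state / np.linalg.norm(state)

Plus = NormalizeState(Zero + One)
print(Plus)
```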

Now that we know how to create single qubit states, we might want to be able to perform interesting actions on them. Quantum gates acting on single qubits are given by two-by-two matrices. One interesting gate is the Hadamard gate, given by

(2)   \begin{align*} \frac{1}{\sqrt{2}} \left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right) \end{align*}

we can represent this gate in Python and examine its action on the state \ket{0} (that is, H \ket{0} = ?) through the following:
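One way this might look:

```python
Hadamard = 1.0 / np.sqrt(2.0) * np.array([[1.0, 1.0],
                                          [1.0, -1.0]])

# Apply H to |0> via matrix multiplication
NewState = np.dot(Hadamard, Zero)
print(NewState)
```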

where “np.dot” in Python serves the role of matrix multiplication for Numpy arrays. Take a look at NewState; you should find it bears a striking resemblance to a state we examined just a few lines ago…

More than one qubit

While one qubit is interesting, more than one is certainly more interesting. How do we generalize what we just learned about representing one qubit to many qubits? States of many qubits have a few different notations, for example \ket{001} or \ket{0} \otimes \ket{0} \otimes \ket{1}, which denote the same state. The symbol \otimes is a source of confusion to many people, as rapid switches between its implicit and explicit use are rampant in quantum computing, and the symbol is essentially absent in some fields of physics, despite the operation being ubiquitous. The symbol \otimes stands for the “tensor product” between two states, and it allows for a correct description of many-particle or many-qubit states. That is, the space occupied by 2 qubits is the tensor product of the spaces of two single qubits; similarly, the space of N qubits is the N-fold tensor product of N single-qubit spaces. Time spent on the Wikipedia page for the tensor product is prone to causing excessive head-scratching over what the “universal property” is and how it’s relevant. Fear not: most of these abstract properties are not required reading for working with a practical example of a tensor product, which is what we will do here.

A practical example of a tensor product that we will use here is the Kronecker product, which is implemented in Numpy, Matlab, and many other code packages. We will rely on the fact that it faithfully takes care of the important properties of the tensor product for us to represent and act on many-qubit states. So how do we build a correct representation of the state \ket{00} or \ket{++} using it in Python? Using the Kronecker product, it’s as easy as
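Perhaps as simply as:

```python
ZeroZero = np.kron(Zero, Zero)   # the state |00>
PlusPlus = np.kron(Plus, Plus)   # the state |++>
```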

What about an entangled Cat state such as 1/\sqrt{2} (\ket{00} + \ket{11}) you ask?
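Reusing our normalization function, one way might be:

```python
# |Cat> = (|00> + |11>) / sqrt(2)
CatState = NormalizeState(np.kron(Zero, Zero) + np.kron(One, One))
```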

And if we want to create a 5-qubit state, such as \ket{10101}? We can simply chain together Kronecker products. A simple helper function makes this look a bit less nasty
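One possible helper (the name NKron is just a choice):

```python
from functools import reduce

def NKron(*args):
    """Chain a Kronecker product over a variable number of inputs."""
    return reduce(np.kron, args, np.array([[1.0]]))

FiveQubitState = NKron(One, Zero, One, Zero, One)   # the state |10101>
```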

Take note of how the size of the state grows as you add qubits, and be careful not to take down Python by asking it to make a 100-Qubit state in this way! This should help to give you an inkling of where quantum computers derive their advantages.

So how do we act on multi-qubit states with our familiar quantum gates like the Hadamard gate? The key observation for our implementation here is the implicit use of identity operators and the Kronecker product. Say H_0 denotes a Hadamard gate on qubit 0. How do we represent H_0? The answer is: it depends, specifically on how many total qubits there are in the system. For example, if I want to act H_0 on \ket{00}, then implicitly I am acting H_0 \otimes I_1 \ket{00}, and if I am acting on \ket{10101} then implicitly I am performing H_0 \otimes I_1 \otimes I_2 \otimes I_3 \otimes I_4\ket{10101}, and this is important for the numerical implementation. Let’s see the action H_0 \ket{10101} in practice,
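Sketched in code, reusing the names from above (HadamardZeroOnFive is an illustrative name):

```python
Id = np.eye(2)   # single-qubit identity operator

# H_0 on five qubits: H (x) I (x) I (x) I (x) I
HadamardZeroOnFive = NKron(Hadamard, Id, Id, Id, Id)
NewState = np.dot(HadamardZeroOnFive, FiveQubitState)
print(NewState)
```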

Luckily, that’s basically all there is to single qubit gates acting on many-qubit states. All the work is in keeping track of “Id” placeholders to make sure your operators act on the same space as the qubits.

Many-qubit gates require a bit more care. In some cases, a decomposition into gates acting on only a few qubits is available, which makes the operation easy to define on an arbitrary set of qubits. In other cases, it’s easier to define the operation on a fixed set of qubits, then swap the other qubits into those slots, perform the operation, and swap back. Without covering every edge case, we will use what is perhaps the most common 2-qubit gate, the CNOT gate. The action of this gate is to enact a NOT operation on one qubit, conditional on the state of another qubit. The quantum NOT operation is given by the Pauli X operator, defined by

(3)   \begin{align*} X = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) \end{align*}

In order to perform this action dependent on the state of another qubit, we need a sort of quantum IF statement. We can build this from projectors of the form P_0 = \ket{0}\bra{0} and P_1 = \ket{1}\bra{1}, which tell us whether a qubit is in state 0 or 1. In our representation, the so-called “bra” \bra{0} is just the conjugate transpose of the “ket” \ket{0}, so these projectors can be written as outer products

(4)   \begin{align*} P_0 = \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \left( \begin{array}{cc} 1 & 0 \end{array} \right) \\ P_1 = \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \left( \begin{array}{cc} 0 & 1 \end{array} \right) \\ \end{align*}

With this, we can define a CNOT gate on qubits 0 and 1 as CNOT=P_0 \otimes I + P_1 \otimes X. So if we wanted to perform a CNOT operation, controlled on qubit 0, acting on qubit 3 of our 5-qubit state, we could do so in Python as
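Roughly as follows, building the projectors as outer products:

```python
P0 = np.dot(Zero, Zero.T)   # |0><0|
P1 = np.dot(One, One.T)     # |1><1|
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Control on qubit 0, target on qubit 3, identities everywhere else
CNOT03 = NKron(P0, Id, Id, Id, Id) + NKron(P1, Id, Id, X, Id)
NewState = np.dot(CNOT03, FiveQubitState)
```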

This covers almost all the requirements for simulating the action of a quantum circuit of your choosing, assuming you have the computational power to do so and can reduce the circuit you are interested in to an appropriate set of 1- and 2-qubit gates. Doing this in the generic case is still a bit of a research topic, but many algorithms can already be found in this form, so it should be reasonably easy to find a few to play with!

Measurement

Now that we’ve talked about how to simulate the action of a circuit on a set of qubits, it’s important to think about how to actually get information out! This is the topic of measurement of qubits. At a basic level, we will want to be able to ask if a particular qubit is 0 or 1, and simulate the resulting state of the system after receiving the result of that measurement. Since qubits can be in superpositions of 0 and 1 (remember the state \ket{+}), when we ask a qubit for its value, we get 0 or 1 with some probability. The probability of qubit i being in state 0 is given by the weight of the state consistent with that outcome, which can be concisely expressed as

(5)   \begin{align*} \text{Tr}\left[\ket{\Psi}\bra{\Psi} P_0^i \right] \end{align*}

where \ket{\Psi} is some generic state of qubits and P_0 is the same projector we had above, acting on the i‘th qubit. If we measure a 0, then the resulting state of the system is given by

(6)   \begin{align*} P_0^i \ket{\Psi} / || P_0^i \ket{\Psi}||_2 \end{align*}

recalling that wherever the number of qubits doesn’t match up, we are following the generic rule of inserting identity operators to make sure everything works out! In Python, we could simulate measurements on a Cat state using the above rules as follows
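One way to piece this together from the objects we already have:

```python
# Density matrix |Psi><Psi| of the two-qubit Cat state
Rho = np.dot(CatState, CatState.conj().T)

# Probability of finding qubit 0 in |0>: Tr[|Psi><Psi| (P0 (x) I)]
Prob0 = np.trace(np.dot(Rho, NKron(P0, Id))).real

# Draw a random outcome, then project and renormalize accordingly
if np.random.rand() < Prob0:
    Result = 0
    NewState = NormalizeState(np.dot(NKron(P0, Id), CatState))
else:
    Result = 1
    NewState = NormalizeState(np.dot(NKron(P1, Id), CatState))
print("Qubit 0 measured as:", Result)
print(NewState)
```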

If you run this a few times, you will see that upon measurement the state collapses to either the state \ket{00} or \ket{11} with equal probability, meaning that a subsequent measurement of the second qubit will yield a deterministic result! This is a signature of entanglement between the two qubits!

Summary

So in this short post we’ve covered how to represent qubits, quantum gates and their actions, and measurement of qubits in Python by using a few simple features in Numpy/Scipy. These simple actions are the foundation of numerical simulation of quantum circuits and hopefully give you some idea of how these simulations work in general. If you have any questions or want me to elaborate on a specific point, feel free to leave a comment and I will update the post or make a new one when I get the chance!

Posted in Code, Science

VQE, Excited States, and Quantum Error Suppression

It’s been some time since our original hybrid quantum-classical algorithm, the variational quantum eigensolver (VQE), made its debut on a photonic quantum device.  Since that time there have been many developments, some of which I outlined in a previous post here.  I’m happy to say we’ve finally developed a reasonable way to access excited states without requiring more coherence time, and at the same time we managed to mitigate the effects of detrimental quantum channels with only an efficient number of additional Pauli measurements and classical computation (all with no formal quantum error correction).  Overall, I think our new paper, entitled “Hybrid Quantum-Classical Hierarchy for Mitigation of Decoherence and Determination of Excited States”, contains a lot of interesting new insights into both variational quantum algorithms and ways to reduce the effects of noise, and I hope that you will check it out!

Posted in Publications, Science

A quick molecule in Processing.js and Python

As an extension to my last post on rendering Processing code in an IPython notebook, I thought it might be fun to play a bit with the 3D functionality and see how easy it would be to build an extremely basic molecule viewer! I didn’t spend much time polishing it, but the basic rendering turned out to be straightforward, and linking it with Python made pre-processing a breeze. Perhaps someone out there will find it useful! So let’s get into the anatomy of rendering a molecule in 3D with a combination of IPython and Processing.js.

We’ll use the same format as before, that is, a Python string of Processing code that we modify using the format method.  We’ll start with a little bit of Processing setup,
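Something along these lines; this sketch assumes a draw_body placeholder that we will fill in later:

```python
processing_setup = """
void setup() {{
  size(400, 400, P3D);
}}

void draw() {{
  background(0);
  translate(width / 2.0, height / 2.0, 0);
  // Basic mouse interactivity: rotate the scene with the cursor
  rotateY(mouseX * 0.01);
  rotateX(mouseY * 0.01);
  {draw_body}
}}
"""
```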

Which sets us up with a basic 3D scene and a bit of mouse interactivity. Note that the double braces are there so that Python’s format method treats them as literal braces rather than substitution targets. Now we just need to place something functional into the draw() method. For example, we’ll need to be able to draw an atom, which is simple enough
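For instance, appending to our Processing string (drawAtom is an assumed name):

```python
processing_setup += """
void drawAtom(float x, float y, float z, float r) {{
  pushMatrix();
  translate(x, y, z);
  noStroke();
  sphere(r);
  popMatrix();
}}
"""
```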

and a bond between two points, which takes a bit more code in the Processing style
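A stroked line is the simplest stand-in here (the original may well have drawn a proper cylinder, which takes more geometry):

```python
processing_setup += """
void drawBond(float x1, float y1, float z1,
              float x2, float y2, float z2) {{
  stroke(180);
  strokeWeight(6);
  line(x1, y1, z1, x2, y2, z2);
  noStroke();
}}
"""
```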

With these primitive rendering functions defined, let’s turn to where we get our data and how to prepare it. I’m going to use a simple XYZ list of coordinates for adenine as an example. That is, I have a file adenine.xyz that reads as follows
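The coordinates themselves aren’t reproduced here, but the XYZ convention is simple: an atom count, a comment line, then one element symbol and three coordinates per line:

```
15
adenine
N    x1    y1    z1
C    x2    y2    z2
...  (one line per atom)
```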

To render something in 3D with Processing, the coordinate system starts from 0 and is oriented perhaps in the opposite way you’d expect. As a result, it’s prudent to load this data in, shift it, and rescale it to fit inside a unit cube. Moreover, xyz files don’t contain information about bonds, so we quickly infer where they might exist based on a distance calculation. There are certainly better ways to do this, and bond distance should depend on atom types, but this works for a quick pass. We can do this in Python as follows:
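A sketch of that preprocessing; the bond cutoff here is an arbitrary knob in rescaled units:

```python
import numpy as np

def load_xyz(filename):
    atoms, coords = [], []
    with open(filename) as f:
        lines = f.readlines()
    # Skip the atom-count and comment lines of the XYZ format
    for line in lines[2:]:
        parts = line.split()
        if len(parts) == 4:
            atoms.append(parts[0])
            coords.append([float(v) for v in parts[1:]])
    coords = np.array(coords)
    coords -= coords.min(axis=0)   # shift so coordinates start at 0
    coords /= coords.max()         # rescale to fit inside a unit cube
    return atoms, coords

atoms, coords = load_xyz('adenine.xyz')

# Guess bonds from pair distances; a single crude cutoff for all atom types
cutoff = 0.2
bonds = [(i, j)
         for i in range(len(atoms))
         for j in range(i + 1, len(atoms))
         if np.linalg.norm(coords[i] - coords[j]) < cutoff]
```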

From here, all we have to do is prepare a small template string, loop through the atoms and bonds, and render:
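Something like the following, where the overall scale factor is an arbitrary choice:

```python
scale = 300.0                      # map the unit cube onto canvas units
centered = (coords - 0.5) * scale  # center the molecule on the origin

draw_body = ""
for x, y, z in centered:
    draw_body += "drawAtom({:.1f}, {:.1f}, {:.1f}, 12);\n".format(x, y, z)
for i, j in bonds:
    draw_body += "drawBond({:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f});\n".format(
        *centered[i], *centered[j])

processing_code = processing_setup.format(draw_body=draw_body)
```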

Now we make an HTML template, and ask IPython to render it:
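Borrowing the compile-it-manually template from the Processing.js notebook post (the element id and the use of json.dumps are choices of this sketch):

```python
import json
from IPython.display import HTML

html_template = """
<script type="text/javascript" src="processing.min.js"></script>
<canvas id="molecule"></canvas>
<script type="text/javascript">
  var code = {code};
  var canvas = document.getElementById("molecule");
  // Compile the Processing source manually and attach it to the canvas
  new Processing(canvas, Processing.compile(code));
</script>
"""

HTML(html_template.format(code=json.dumps(processing_code)))
```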

If everything went as planned, hopefully you’ll now have a somewhat crude rendering of adenine to play with and rotate. You could even put it on your webpage with Processing.js if you dump the code to a file like
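For instance:

```python
# Dump the generated Processing source for use outside the notebook
with open('molecule.pde', 'w') as f:
    f.write(processing_code)
```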

which hopefully gives you the crude rendering below (try clicking and dragging)

[Interactive Processing.js canvas: crude adenine rendering]

Posted in Code, Science, Visualization

Processing.js in an IPython Notebook

I’ve been playing a bit with generative art recently, and in this domain the Processing language is a popular choice.  Processing allows fairly seamless creation of both 2D and 3D images as well as natural interactivity.  I had some interest in linking it with Python to make artistic renderings on the fly of work I was already doing in Python, or more specifically, IPython notebooks.  Luckily, the growing popularity of using HTML5 as the graphical medium of choice has led to some nice packages such as Processing.js, which interpret existing Processing code and render it using the increasingly standard canvas element in HTML5.  As IPython notebooks are similarly centered around web output, I thought linking the two might be reasonably straightforward.  It took a bit of playing with, as the most obvious solutions did not work, but I finally arrived at a pretty minimal working example which I hope is of use to you as well.

The steps to integrating Processing.js into an IPython notebook locally are as follows.

First, download the processing.min.js file from Here, and place it in the same directory as your IPython notebook. No other installation is required.  Now import the only IPython function we need, the HTML display function, as
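```python
from IPython.display import HTML
```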

From here, we define our Processing code as a Python string (I won’t talk about the Processing code itself, but nice tutorials exist for the interested reader).
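For example, a small interactive sketch (the contents here are just an illustration, not the original):

```python
processing_code = """
int radius = 40;

void setup() {
  size(400, 200);
}

void draw() {
  background(20);
  fill(255, 120);
  noStroke();
  // An ellipse that follows the mouse over the canvas
  ellipse(mouseX, mouseY, radius, radius);
}
"""
```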

One could use Python code to generate this Processing code on the fly from data if desired as well.  With the Processing code in place, we need some template to render it.  This is where my tweaking came in, as the most obvious solutions based on Processing.js examples failed to render for me.  I found that manually calling the compile function fixed my woes, however, and it only required one or two additional lines of code.  So now we define the following HTML template as a Python string as well, with brackets in place for easy substitution.
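A version of that template might read as follows, with the compile call made explicitly (the element id is an arbitrary choice):

```python
html_template = """
<script type="text/javascript" src="processing.min.js"></script>
<canvas id="sketch"></canvas>
<script type="text/javascript">
  var code = {code};
  var canvas = document.getElementById("sketch");
  // Manually compile the Processing source and bind it to the canvas
  new Processing(canvas, Processing.compile(code));
</script>
"""
```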

Finally, simply render the Processing code using IPython’s HTML function.
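Here json.dumps (one choice among several) safely embeds the source as a JavaScript string literal:

```python
import json

HTML(html_template.format(code=json.dumps(processing_code)))
```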

If everything went as planned, you should have the following Processing.js element rendered at the end of your IPython notebook. Play around with it by moving your mouse over the top of the canvas.


[Interactive Processing.js canvas: the example sketch]

Posted in Code, Visualization

Simple bash-parallel commands in python

One of the benefits of using a primitive system like collections of flat files for data storage is the ability to trivially do work in parallel on them through the shell.  This seems to be a relatively common workflow in both computational and data science.  A quick Google search on the topic reveals a number of people asking about this on StackOverflow, as well as an assortment of tools, including GNU Parallel at a base level and more sophisticated workflow tools like Fireworks, to address similar issues.  These tools are great and much more full featured than what I’m about to offer, however sometimes you want to stay within the Python ecosystem while not introducing too many dependencies.

For those reasons, the following short code snippet (and variants thereof) has become surprisingly common in some of my prototyping.
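A variant of it, reconstructed from the description below; the program name, arguments, and path are placeholders:

```python
import glob
import subprocess
from multiprocessing import Pool

def run_on_file(filename):
    # Launch the external program on one file and return its exit code
    return subprocess.call(['program', 'other', 'arguments', filename])

if __name__ == '__main__':
    files = glob.glob('/path/to/files/*')
    # Pool defaults to one worker per core, and map hands files
    # to workers as they become free
    with Pool() as pool:
        exit_codes = pool.map(run_on_file, files)
```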

This simple example runs a fictional program called “program” on all the files in the specified path, with any ‘other’ ‘arguments’ you might want as separate items in a list.  I like it because it’s trivial to modify and include in a more complex Python workflow.  Moreover, it automatically manages the number of tasks/files assigned to the processors, so you don’t have to worry too much about different files taking different amounts of time or weird bash hacks to take advantage of parallelism.  It is, of course, up to the user to make sure this is used in cases where correctness won’t be affected by running in parallel, but exploiting simple parallelism is a trivial way to turn a week of computation into a day on a modern desktop with 12 cores.  Hopefully someone else finds this useful, and if you have your own solution to this problem, feel free to share it here!

Posted in Code

Integrating over the unitary group

Quantifying the volumes of different types of quantum states with respect to each other is an interesting tool for analysis that I’ve recently become interested in.  For example, did you know the volume of separable states is super-doubly-exponentially small with respect to all quantum states? That means that if I choose a state of 100 qubits randomly from Hilbert space, with overwhelming probability that state will be entangled.  It also raises the bar for any would-be entanglement witness: if the test set is all possible quantum states, it must beat the performance of an algorithm which blindly guesses “entangled” for every state.  Of course, if you talk to any person working in an experimental quantum lab, they would certainly like it if almost everything they did resulted in an entangled state; unfortunately that’s not really reality, and we know that the set of physical states we expect to see is a tiny subset of all possible states.  Coming up with a measure that reflects that reality and readily enables computation would be valuable indeed.

Regardless, one of the tools used in many of these analyses at the moment is integration over the unitary group.  This is because the unitary group gives us a way to generate all possible quantum states from some choice of starting state.  To complete these analyses one must also integrate over the simplices of eigenvalues corresponding to physical states, but for now I want to focus on the unitary group.  To talk about integration, we need some way of quantifying volume, or a measure.  The unitary group has a natural measure, its unique Haar measure, that treats all unitary operators equally.  It does this by being invariant under the action of the unitary group, in the same way you wouldn’t want the volume of a box to change if you translated it through space.  Mathematically we can phrase this as saying \mu(V) = \mu(UV) \ \forall \  U,V \in U(N), which leads to a corresponding Haar integral with the defining property

(1)   \begin{align*} \int_{U(N)} f(UV) d\mu(U) = \int_{U(N)} f(U) d\mu(U) = \int f(U) dU \end{align*}

for any integrable function f and group elements V,U \in U(N); the last equality is written to match the shorthand commonly found in papers.  We should also note that it is conventional to normalize the measure so that the integral of the identity function over U(N) is 1.  So that’s all very interesting, but something harder to dig up is how to actually compute that integral for an actual function f that has been given to you, or that you find interesting.  I’ll show a few options that I’ve come across or thought about, and if you know some more, I’d love it if you would share them with me.

Integration through invariance

Some integrals over the unitary group can be done using only the invariance property of the Haar integral given above, and reasoning about what it implies.  This is best illustrated through an example, which I borrow from this fine set of notes by Patrick Hayden.  Suppose that f(U) = UXU^\dagger, where \dagger indicates Hermitian conjugation, and the integral I am interested in is

(2)   \begin{align*} D(X) = \int_{U(N)} UXU^\dagger dU \end{align*}

Now take any V \in U(N) and start using that integral invariance property as

(3)   \begin{align*} VD(X)V^{\dagger} &= V \left[ \int UXU^\dagger dU \right] V^\dagger \notag \\ &= \int (VU)X(VU)^\dagger dU  \text{ (By integral linearity)} \notag \\ &= \int UXU^\dagger dU  \text{ (By integral invariance)} \notag \\ &= D(X) \end{align*}

Thus we find that VD(X) = D(X)V for all V \in U(N), implying that D(X) \propto I.  If a matrix is proportional to the identity, then we can characterize it completely by its trace, which we then evaluate as

(4)   \begin{align*} \text{Tr}[D(X)] &= \text{Tr} \left[ \int UXU^\dagger dU \right] \notag \\ &= \int \text{Tr} \left( UXU^\dagger \right) dU \text{ (by integral linearity)} \notag \\ &= \int \text{Tr} \left( X \right) dU \text{ (by cyclic invariance of the trace)} \notag \\ &= \text{Tr}(X) \end{align*}

From these two results, we conclude that

(5)   \begin{align*} D(X) = \int_{U(N)} UXU^\dagger dU = \frac{\text{Tr}(X)}{N} I \end{align*}

which was evaluated entirely through the defining invariance of the integral.

Monte Carlo Numerical Integration

For some problems of interest, there is no obvious way to perform the integral analytically or reach a closed-form result.  In order to perform a numerical integration, one could choose some parameterization and attempt integration with a quadrature, but this is both cumbersome and often runs out of steam as the dimension of the problem of interest grows.  Monte Carlo integration offers a straightforward way to at least attempt these integrations for high-dimensional problems of interest, and is often less human work even for low-dimensional problems.  Monte Carlo integration is simple, and approximates the integral as

(6)   \begin{align*} \int_{U(N)} f(U) dU \approx\frac{1}{M}\sum_{i=1}^M f(U_i) \end{align*}

where M is the number of sample points you take, and U_i is a randomly drawn unitary that is drawn with uniform probability according to the Haar measure.  How to draw unitaries uniformly with respect to the Haar measure is not entirely obvious, but luckily this has been worked out, and there are a few ways to do it.  One of these requires only two simple steps, available in most modern math libraries:

  1. Fill an N\times N matrix with complex Gaussian IID values; call it Z.
  2. Perform a QR decomposition of the matrix Z, and let U_i=Q.

where U_i is now the unitary of interest randomly distributed according to the Haar measure on U(N).  Here is a Python function that can do this for you if the above steps don’t make any sense.
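One possible implementation; note it adds a small phase correction on top of the two steps above, which is needed for the output to be exactly Haar distributed (see Mezzadri, arXiv:math-ph/0609050), along with a quick Monte Carlo check against eq. (5):

```python
import numpy as np

def haar_random_unitary(n):
    # Step 1: an n x n matrix of IID complex Gaussians
    z = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2.0)
    # Step 2: QR decompose and keep Q, fixing the phase ambiguity
    # of the decomposition using the diagonal of R
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

# Sanity check: the Monte Carlo estimate of eq. (2) should approach
# the analytic result Tr(X)/N * I from eq. (5)
n, samples = 4, 5000
x = np.random.randn(n, n)
d_est = sum(np.dot(np.dot(u, x), u.conj().T)
            for u in (haar_random_unitary(n) for _ in range(samples))) / samples
print(np.round(d_est, 2))
print(np.trace(x) / n)
```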

Weyl’s Integration Formula

This formula is a bit of a sledgehammer for driving a nail, but it exists for all compact Lie groups, and for the unitary group it takes on the specific form

(7)   \begin{align*} &\int_{U(n)} f(A) \ dA \\ &= \frac{1}{(2 \pi)^n n!} \int_{\theta_1=0}^{2 \pi} \int_{\theta_2=0}^{2 \pi} \ldots \int_{\theta_n=0}^{2 \pi} \prod_{j<k} |e^{i \theta_j} - e^{i \theta_k}|^2 f(\theta_1, ..., \theta_n) \ d \theta_1 ...d \theta_n. \end{align*}

This formula also demands that f be a conjugacy-invariant function on the unitary group, as well as symmetric in its arguments.  The e^{i\theta_j} correspond to the possible eigenvalues that characterize all unitary matrices.  I’ve yet to use it for a practical calculation, but I like having a catalog of known options.

Posted in Code, Math, Science

Expressive power, deep learning, and tensors

Tensors are kind of like the wild west of applied math these days.  They are a brave new territory, and somewhat dangerous with regard to procedures one might borrow from matrices. In particular, “Most tensor problems are NP-Hard”.  Despite this pessimistic result, however, people are starting to utilize the additional structure available in tensor decompositions for great gains in machine learning, and in many-particle quantum mechanics and quantum computation they are almost unavoidable.  A recent post to the arXiv showed that the power of deep neural networks can be understood in terms of the expressiveness of tensor decompositions.  The paper is titled “On the Expressive Power of Deep Learning: A Tensor Analysis”, and in my opinion it is a paper full of great insights and tools from measure theory to aid in the way we think about both neural networks and general tensors.  Well worth a read if these topics interest you.

Posted in Literature, Science

A new paper on the VQE

Quantum computers are a big interest of mine, and in particular I think there’s a lot to be learned in using prototype devices available in the lab today. Unfortunately, a lot of the algorithms we talk about today are simply too complex to fit on even near-future quantum devices. However, by thinking about a quantum computer as a specialized computation device, sort of a QPU analogous to the GPUs we see today, and coupling it to a CPU, we can offload some of the more mundane parts of a calculation to our well-developed CPUs and utilize our QPUs only where it counts. This was the idea when we developed the original VQE (Variational Quantum Eigensolver) algorithm and its implementation on a photonic quantum computer.  Since that time, I’ve given a number of public talks on the topic, including two that can be found online here and here.  The algorithm was then developed further, including some special enhancements for ion traps, and picked up the attention of some other theory groups who worked to optimize it further.

Given the excitement around this algorithm as a potential candidate for a first algorithm to become competitive with a classical computer at scale, we wanted to expand a bit more on our views of the theory and purpose of this algorithm, as well as offer some computational enhancements.  We did just this in our new paper entitled “The theory of variational hybrid quantum-classical algorithms”, and we hope you enjoy reading it as much as we enjoyed writing it.

Posted in Publications, Science

Distance is weird in many-particle quantum mechanics

Distance measures between quantum states are interesting for all kinds of reasons.  I’ve been thinking about them a lot lately as they pertain to machine learning in and around quantum mechanics, as well as just generic approaches to studying many-body quantum systems.  One thing that has stuck with me lately is a peculiarity of distance between wavefunctions in many-particle quantum systems.  It also seems to be intimately related to the Van Vleck catastrophe, which afflicts wavefunction-based methods for studying many-body problems.  In particular, almost every pair of states seems to be equally far apart.  One could call this yet another instance of the curse of dimensionality in distance functions, witnessed in all fields from quantum mechanics to more general machine learning, but for some classes of quantum states there are other choices of distance for which such things are not true, yet they are not in common use.  Let me be a bit more specific with an example.

Consider a system of N qubits, or two-level quantum systems, with basis states \ket{0} and \ket{1}.  A generic separable state \ket{\Psi} of these qubits can be written in the form

(1)   \begin{align*} \ket{\Psi} &= \ket{\psi_1} \ket{\psi_2}...\ket{\psi_N} \\ \ket{\psi_j} &= \alpha_j \ket{0} + \beta_j \ket{1} \\ &= \cos \left( \frac{\theta_j}{2} \right) \ket{0} + e^{i \phi_j}  \sin \left( \frac{\theta_j}{2} \right) \ket{1} \end{align*}

with |\alpha_j|^2 + |\beta_j|^2 = 1.  Without paying special attention to global phase or normalization, it’s common (perhaps because of the measurement axioms of quantum mechanics, or convenience) to think of states like \ket{\Psi} as living in the complex Hilbert space of dimension 2^N, with a natural inner product between two states

(2)   \begin{align*} \braket{\Psi'}{\Psi} = \prod_i \braket{\psi'_i}{\psi_i}. \end{align*}

A quick check of this tells you that if even a single qubit state \ket{\psi_i} is orthogonal to its counterpart \ket{\psi'_i} in the primed state, the inner product is zero.  If we use this inner product to define our distance metric on the total N-particle Hilbert space, then

(3)   \begin{align*} d(\ket{\Psi}, \ket{\Psi'})^2 = ||\ket{\Psi} - \ket{\Psi'}||^2_2 &= \left( \bra{\Psi} - \bra{\Psi'} \right) \left( \ket{\Psi} - \ket{\Psi'} \right) \\ &= 2 - 2 \Re[ \braket{\Psi'}{\Psi}] \end{align*}

and we see that every state \ket{\Psi'} which has even a single qubit flipped relative to a state \ket{\Psi} has a squared distance of 2 from that state.  Recalling that this covers 2^N-1 of the possible basis states, it’s easy to see that almost every state in this space has maximal distance from every other state.  Moreover, we can see that if each qubit state is only perturbed slightly such that \braket{\psi'_i}{\psi_i} = 1 - \epsilon, the squared distance between the states, 2 - 2(1-\epsilon)^N, tends exponentially quickly towards the maximal value of 2 as a function of N.  This can be viewed as a manifestation of the Van Vleck catastrophe for many-particle systems.  While perhaps this satisfies the axioms of a distance metric, it doesn’t seem to satisfy the properties of a useful metric.  Intuitively, one would like states with only a few bits different to be somehow closer than states that are totally different, or really to express some notion of similarity beyond “this is not the same state”.
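A quick numerical illustration of how fast this saturates:

```python
import numpy as np

# Squared Hilbert-space distance between two product states whose
# single-qubit overlaps are each 1 - eps: 2 - 2 (1 - eps)^N
eps = 0.01
for n_qubits in [1, 10, 100, 1000]:
    print(n_qubits, 2 - 2 * (1 - eps) ** n_qubits)
```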

However, if we restrict ourselves only to product states and the manifold \mathcal{M} they live on, the situation is a bit different.  A product state of N qubits can be studied on the product manifold of N spheres \mathcal{S}^2.  Product manifolds have a natural distance derived from the sum of the distances on each of the manifolds that compose the product, and the surface of a sphere has a natural notion of distance defined by its geodesics.  Explicitly, the distance between two points on a unit sphere is given by

(4)   \begin{align*} d(\ket{\psi_i}, \ket{\psi_{i'}}) = \text{arccos}\left( \cos \theta_i \cos \theta_{i'} + \sin \theta_i \sin \theta_{i'} \cos (\phi_i - \phi_{i'}) \right) \end{align*}

and the metric this induces on the manifold of the full N-particle quantum state is then

(5)   \begin{align*} d(\ket{\Psi}, \ket{\Psi'}) = \sum_i d(\ket{\psi_i}, \ket{\psi_{i'}}), \end{align*}

which due to its form grows at most linearly with N, and gradually increases the distance between two states as more and more of the single-particle states differ.  In other words, it gives some meaningful notion of similarity between states.

So what we see here is a case where the same quantum state, considered in Hilbert space and on its natural product manifold, has two very different distances.  The distance in Hilbert space is somehow related to the axioms of quantum measurement, while the distance on the manifold gives us some meaningful idea of similarity between states.  This notion of meaningful distance can be extended to anti-symmetric product states as well, such as those that describe electronic systems, where the representatives are members of the Grassmann manifold (see this great paper for more information).

Unfortunately, for more interesting entangled states these notions are less straightforward to define, and often the manifold geodesics don’t have closed forms.  This brings me to an interesting question: what is the manifold of physical quantum states, and what is the best notion of distance on this manifold?  I’m not sure too much is known about it, but apparently it’s exponentially small in the convenient illusion of Hilbert space, which is interesting in its own right…

Posted in Math, Science