Quantum computing and its models

Before reviewing in more detail the most promising experimental realisations of quantum information processors, I think it is useful to recap the basic concepts and most used models of quantum computing. The models matter in particular because the physical realisations mentioned in a previous post use different but equivalent computational models, which need to be understood to make sense of their implementations.

The primary logical units of a quantum computer are quantum bits, or qubits, which are two-level quantum systems that can be used to store and process quantum information. The two levels are usually represented by |0\rangle, |1\rangle, which are known as the computational basis states (in Dirac notation). The main difference between bits and qubits is that the latter can be in a linear combination, usually referred to as superposition, of the two basis states. Hence the most general representation of a qubit is:

| \psi \rangle = \alpha |0\rangle + \beta |1\rangle

where \alpha and \beta are, in general, complex coefficients. When a qubit is measured in the computational basis, the results 0 or 1 are obtained with probability |\alpha|^2 and |\beta|^2 respectively. As these probabilities must add up to one (|\alpha|^2 + |\beta|^2 = 1), we have a normalisation restriction on the coefficients \alpha and \beta, which can be geometrically understood as the condition that the qubit’s state has length one. We can take this geometric interpretation a bit further and parametrise the quantum state in spherical coordinates

|\psi\rangle = \cos \frac{\theta}{2} |0\rangle + e^{i\phi} \sin \frac{\theta}{2}|1\rangle,

and understand the qubit as a vector on a sphere of radius one. This sphere is usually referred to as the Bloch sphere, shown in the figure below. Note that in this representation, orthogonal states are diametrically opposed (antipodal) rather than at right angles.
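As a quick numerical check of the normalisation condition and of how the Bloch angles relate to the coefficients, here is a minimal sketch (assuming Python with numpy; the particular values of \alpha and \beta are arbitrary choices for illustration):

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1>, with arbitrarily chosen coefficients
alpha, beta = 0.6, 0.8j

# Normalisation: |alpha|^2 + |beta|^2 must equal 1
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)

# Measurement probabilities in the computational basis
p0, p1 = abs(alpha)**2, abs(beta)**2          # 0.36 and 0.64

# Bloch angles: alpha = cos(theta/2), beta = e^{i phi} sin(theta/2)
# (after removing a global phase so that alpha is real and non-negative)
theta = 2 * np.arccos(abs(alpha))
phi = np.angle(beta) - np.angle(alpha)
print(p0, p1, theta, phi)
```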


The three cartesian axes of the Bloch sphere correspond to the eigenstates of the Pauli matrices: \sigma_X=\begin{pmatrix}  0 & 1 \\  1 & 0 \end{pmatrix} ,\quad \sigma_Y=\begin{pmatrix}  0 & -i \\  i & 0 \end{pmatrix} ,\quad \sigma_Z=\begin{pmatrix}  1 & 0 \\  0 & -1 \end{pmatrix},

where the matrices are written in the computational basis. The eigenstates of the \sigma_Z operator are the computational basis states {|0\rangle, |1\rangle}, whereas the eigenstates of \sigma_X and \sigma_Y are {|\pm \rangle } and {| \pm i \rangle } respectively.

Single-qubit logical gates can be understood as transformations (rotations and reflections) of the states on the Bloch sphere. The most commonly used single-qubit gates are:

• Hadamard gate: H=\frac{1}{\sqrt{2}} \begin{pmatrix}  1 & 1 \\  1 & -1 \end{pmatrix}

• Phase gate: S=\begin{pmatrix}  1 & 0 \\  0 & i \end{pmatrix}

• π/8 gate: T=\begin{pmatrix}  1 & 0 \\  0 & e^{i \pi /4} \end{pmatrix}

• Rotations about one of the cartesian axes: R_{\sigma_k} (\theta) = \cos \frac{\theta}{2}\, I - i\sin \frac{\theta}{2}\, \sigma_k, where k \in \{X, Y, Z\}.
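The following sketch (again assuming numpy; the particular identities checked are just illustrative) writes these gates out as matrices and verifies a few relations that follow directly from the definitions above:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
S = np.array([[1, 0], [0, 1j]])                       # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # pi/8 gate

def rotation(sigma, theta):
    """R_sigma(theta) = cos(theta/2) I - i sin(theta/2) sigma."""
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * sigma

assert np.allclose(T @ T, S)                     # two T gates make an S gate
assert np.allclose(S @ S, Z)                     # two S gates make a Pauli Z
assert np.allclose(H @ Z @ H, X)                 # H exchanges the X and Z axes
assert np.allclose(rotation(Z, np.pi), -1j * Z)  # a pi rotation about Z is Z, up to a global phase
```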

The most commonly used two-qubit gate is the controlled-NOT (CNOT), whose truth table is that of a reversible XOR: it flips the target (second) qubit when the control (first) qubit is in the state |1\rangle:

|00\rangle  \rightarrow |00\rangle,\  |01\rangle \rightarrow |01\rangle,\  |10\rangle \rightarrow |11\rangle\  \&\  |11\rangle \rightarrow |10\rangle .

The CNOT gate in conjunction with single-qubit gates can produce entangled states, which exhibit correlations with no classical equivalent. For example, applying a Hadamard gate to the first qubit of a pair in a computational basis state, followed by a CNOT, yields:

|00\rangle \xrightarrow{H_1} |+0\rangle =\frac{1}{\sqrt{2}}(|00\rangle +|10\rangle) \xrightarrow{CNOT} \frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)

This state, which is one of the four maximally entangled two-qubit states known as Bell states, cannot be written as a product of two single-qubit states.
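A few lines of numpy are enough to reproduce this small circuit and confirm the output amplitudes (a sketch; the ordering of the basis as |00\rangle, |01\rangle, |10\rangle, |11\rangle is an assumed convention):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0])           # |00> in the basis |00>, |01>, |10>, |11>
bell = CNOT @ np.kron(H, I) @ ket00      # Hadamard on the first (control) qubit, then CNOT

print(bell)                              # [0.707..., 0, 0, 0.707...] = (|00> + |11>)/sqrt(2)
```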

Other entangling gates such as controlled-Phase (CZ) can be obtained from combinations of CNOT gates with single qubit gates; moreover, any multi-qubit unitary can be achieved in the same way, which makes this set of gates universal. A set of gates can perform universal quantum computation if they are sufficient to approximate any unitary operation to an arbitrary accuracy via a quantum circuit. This universality is crucial as it allows any quantum algorithm to be performed and it ensures the equivalence of different models of quantum computation.
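As a concrete instance of this, the claim that a controlled-Z can be obtained from a CNOT and single-qubit gates corresponds to the identity CZ = (I \otimes H)\, \mathrm{CNOT}\, (I \otimes H), which can be checked numerically (a sketch assuming numpy and the same basis ordering as above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
CZ = np.diag([1, 1, 1, -1])

# Conjugating the target qubit of a CNOT with Hadamards turns it into a CZ
assert np.allclose(np.kron(I, H) @ CNOT @ np.kron(I, H), CZ)
```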

There are various quantum computing models that are universal, such as the circuit model, the measurement-based model, the adiabatic model, the topological model and the quantum walk model. However, in this post I will only focus on the first three, as they are the most relevant to the experimental realisations that show most promise today.

Circuit model

The circuit model is an algorithmic model for quantum computing that closely resembles classical algorithms. Single-qubit and two-qubit operations are performed in sequence on a set of qubits initialised in a fiducial state, and the results are read at the end as the outcomes of single-qubit measurements. The entanglement and interference necessary for the quantum speedup are built up during the computation, and if any ancillary qubits are used, their state must be erased (uncomputed) so that they no longer interfere with the rest of the computation.

The following circuit diagram shows the most common representation of the quantum logic gates presented earlier:


The procedure runs from left to right: preparation, single Hadamard gate, CNOT gates, rotation and phase gates, measurement.

Circuit model key facts

State Space: A quantum circuit operates on n qubits (or two-level quantum systems), and therefore its state space is a 2^n-dimensional complex Hilbert space. The computational basis states are defined as product states of the form |x_1,...,x_n\rangle, where x_i = 0, 1.

State Preparation: Any computational basis state |x_1,...,x_n\rangle can be prepared in at most n steps.

Quantum Gates: Gates from a universal family of gates can be applied to any subset of the qubits desired.

Measurements: Measurements in the computational basis can be performed on one or more qubits.

Classical Computation: In principle, it is not necessary for the computation, but it can make certain tasks much easier.

Procedure of the computation: Quantum algorithms are run by applying one-qubit and two-qubit gates to the quantum systems, building up the amount of entanglement until the final measurement in the computational basis gives the result of the computation.
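To make this procedure concrete, here is a minimal statevector-simulator sketch (numpy assumed; the helper apply_gate, the gate names and the qubit ordering are illustrative choices rather than any standard API) that runs the Bell-state circuit from earlier on a register of n = 2 qubits:

```python
import numpy as np

def apply_gate(state, gate, targets, n):
    """Apply `gate` (a 2^k x 2^k unitary) to the qubits listed in `targets`
    of an n-qubit statevector, by reshaping it into a rank-n tensor."""
    k = len(targets)
    state = state.reshape([2] * n)
    gate = gate.reshape([2] * (2 * k))
    state = np.tensordot(gate, state, axes=(range(k, 2 * k), targets))
    # tensordot moves the acted-on axes to the front; move them back into place
    state = np.moveaxis(state, range(k), targets)
    return state.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                               # preparation: all qubits in |0>
state = apply_gate(state, H, [0], n)         # Hadamard on qubit 0
state = apply_gate(state, CNOT, [0, 1], n)   # CNOT with control 0, target 1
probs = np.abs(state) ** 2                   # measurement in the computational basis
print(probs)                                 # [0.5, 0, 0, 0.5]
```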

Measurement-based quantum computation 

For a long time, the circuit model was the standard way of describing quantum computation. Measurement-based quantum computation models are radically different from the circuit model (and have no classical analogue), as the resource for the computation is prepared in advance and “offline”. This strategy has the advantage that if errors occur at the preparation stage, the prepared state can be discarded and the procedure repeated without any loss of information. There are two main approaches to measurement-based quantum computing: the generalised teleportation model and the one-way quantum computer model. The latter is the most widely used, so it is the only one we will look at here.

In the one-way model, the entire resource for the computation is supplied at the beginning of the computation, in the form of a highly entangled multi-particle state. This state is usually referred to as a cluster state, and it is the same for every computation, although it may vary in size. The information is then processed by carrying out a series of adaptive single-qubit measurements.

This highly entangled multi-particle state is prepared by applying a pattern of entangling gates to the qubits. A generic cluster state of n particles is not easy to write down in any basis, but it can be efficiently described by a graph, where each node represents a qubit and each bond denotes that the two sites have been connected by an entangling controlled-Z (CZ) operation. As the computation itself is performed by single-qubit measurements, the primary challenge of this computational model is the preparation of this highly entangled cluster state, which can be technically demanding.

The information processing is done via sequential measurement of the qubits in particular bases. The correct algorithm would be performed if every measurement gave the outcome corresponding to the +1 eigenvalue; however, given the probabilistic nature of quantum mechanics, this is not always the case. We can steer the computation back to its correct subspace by applying Pauli corrections, so the results of earlier measurements determine the basis of subsequent measurements on other qubits. Finally, the result of the computation is read out by one last measurement in the computational basis.
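The smallest instance of this procedure is a two-qubit cluster: an arbitrary input state on one qubit is linked to a |+\rangle qubit by a CZ, and the first qubit is then measured at an angle \alpha in the X-Y plane. The sketch below (numpy assumed; the input state, the angle and the phase-rotation convention R_z(\alpha) = \mathrm{diag}(1, e^{i\alpha}) are arbitrary illustrative choices) shows that, after the X^m Pauli correction conditioned on the measurement outcome m, the remaining qubit deterministically carries H R_z(\alpha)|\psi\rangle:

```python
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])

def Rz(a):
    """Phase rotation diag(1, e^{i a})."""
    return np.diag([1, np.exp(1j * a)])

psi = np.array([0.6, 0.8j])                     # arbitrary normalised input state
plus = np.array([1, 1]) / np.sqrt(2)
alpha = 0.7                                     # measurement angle in the X-Y plane

state = CZ @ np.kron(psi, plus)                 # two-qubit cluster resource

# Measurement basis for qubit 1: (|0> +/- e^{-i alpha}|1>)/sqrt(2)
basis = [np.array([1,  np.exp(-1j * alpha)]) / np.sqrt(2),
         np.array([1, -np.exp(-1j * alpha)]) / np.sqrt(2)]

# Collapse qubit 1 onto each outcome: <b|_1 (x) I_2 applied to the joint state
branches = [np.kron(b.conj(), np.eye(2)) @ state for b in basis]
p0 = np.linalg.norm(branches[0]) ** 2           # probability of outcome m = 0
m = 0 if rng.random() < p0 else 1
out = branches[m] / np.linalg.norm(branches[m])

# Pauli correction X^m steers the computation back to the intended result
corrected = (X @ out) if m == 1 else out
assert np.allclose(corrected, H @ Rz(alpha) @ psi)
print(m, corrected)
```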

In the figure below (copyright 2001 by the APS), we can see an example of a quantum algorithm performed using the MBQC model. We initially start with a rectangular cluster state of 6 × 12 qubits, where qubits are located at the vertices of the grid (grid lines are not shown).  Despite starting with 72 physical qubits, this algorithm is actually performed on only 3 logical qubits, which are shown by the three horizontal yellow arrows. The yellow paths between the logical qubits represent two-qubit gates as explained in the circuit model, and the rest of the single-qubit gates are performed by measuring the qubits in the X-Y plane.


This model of computation has an enormous technical advantage over the standard circuit model, which makes it very appealing for implementing quantum computation in certain physical systems. The cluster states can be produced offline, and the computation is carried out only once the resource has been prepared correctly. For many physical systems, performing entangling gates is the most challenging part of the computation, but if we post-select on successful preparation of the resource state, this model substantially increases the probability of a successful computation. All that is then required is the ability to perform single-qubit operations with high fidelity, which is less technologically demanding for many physical systems such as superconducting qubits, ion traps and linear optics.

MBQC model key facts

State Space: A cluster state computation operates on n\times k physical qubits (or two-level quantum systems) for a quantum algorithm with n logical qubits. Its state space is a 2^{n\times k}-dimensional complex Hilbert space. The computational basis states are defined as product states of the form |x_1, \dots , x_{n\times k}\rangle, where x_i = 0, 1. The graph state representation is usually used.

State Preparation: Before the computation starts, a resource state must be prepared. This resource state is a highly entangled multi-particle state (cluster state), which is the same (except for its size) for every computation.

Quantum Gates: An entangling gate is performed on qubits in a product state to build the resource state. Afterwards, only single qubit measurements are needed.

Measurements: Single-qubit measurements can be performed in an arbitrary basis. (Or, if only computational-basis measurements are available, single-qubit rotations must be performed beforehand.)

Classical Computation: Classical computation alongside the quantum computations is a key feature of this model, as the basis of the measurements performed sequentially depends on the results of previous measurements.

Procedure of the computation: The entire resource for the quantum computation is supplied at the beginning of the computation. The information is then processed by carrying out a series of adaptive single-qubit measurements.

Adiabatic model

This model represents a paradigm for quantum computation, proposed more recently than the circuit and measurement-based models, that is based on quantum adiabatic evolution. Computation in this model is not performed by applying gates or measurements to qubits; rather, the algorithm starts from a disordered state and arrives at a solution to the problem by performing what can be understood as a local quantum search. The procedure for the computation is as follows:

• At time t = 0, the quantum mechanical system is described by an initial Hamiltonian H_E, whose ground state is easy to prepare.

• The system is slowly transformed so that at time t = T it is described by the final Hamiltonian H_P, whose ground state encodes the solution to the problem that needs to be solved. This process can be described by a time-dependent Hamiltonian

H(t) = A(t)H_E + B(t)H_P .

A(t) and B(t) are slowly varying monotonic functions such that A(0) = 1,\  B(0) = 0 and A(T) = 0,\  B(T) = 1. According to the adiabatic theorem, if the evolution is slow enough, i.e. T is long enough, and if there is a gap between the ground-state eigenvalue and the rest of the Hamiltonian’s spectrum throughout the evolution, the state of the system at time t = T will correspond to the ground state of H_P, thus producing the solution to the problem.

• Measurement of the ground state allows for the extraction of the solution.

This model has been proven to be universal for quantum computation.
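A toy numerical sketch makes the role of the total time T explicit. The two-qubit cost function, the values chosen for it and the linear schedule A(t) = 1 - t/T, B(t) = t/T below are arbitrary illustrative choices (numpy assumed):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

# Easy initial Hamiltonian: its ground state |+>|+> is simple to prepare
H_E = -(np.kron(X, I) + np.kron(I, X))

# Problem Hamiltonian: diagonal cost over the strings 00, 01, 10, 11;
# the minimum (here the string "10") encodes the answer
H_P = np.diag([3.0, 2.0, 0.0, 1.0])

def evolve(T, steps=2000):
    """Integrate the Schrodinger equation for H(t) = A(t) H_E + B(t) H_P."""
    state = np.ones(4, dtype=complex) / 2.0          # |+>|+>, ground state of H_E
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps                        # s = t/T runs from 0 to 1
        Ht = (1 - s) * H_E + s * H_P                 # A(t) = 1 - s, B(t) = s
        w, V = np.linalg.eigh(Ht)                    # exact propagator for one small step,
        state = V @ (np.exp(-1j * w * dt) * (V.conj().T @ state))  # via the spectral decomposition
    return state

for T in (1.0, 10.0, 100.0):
    probs = np.abs(evolve(T)) ** 2
    print(f"T = {T:5.1f}  ->  P(reading out '10') = {probs[2]:.3f}")
```

For short total times the system is rushed through the evolution and ends up with significant weight outside the ground state of H_P, while for longer T the probability of reading out the correct string should approach one, as the adiabatic theorem predicts.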

Adiabatic model key facts

State Space: Assuming the n particles to which the Hamiltonian is applied are two-level systems, the state space is 2^n-dimensional.

State Preparation: The system is initialised in the ground state of the initial Hamiltonian, which is easy to prepare.

Quantum Gates: Once the state is prepared, no gates per se are applied, but rather, the Hamiltonian is adiabatically changed from the initial to the target Hamiltonian.

Measurements: A single measurement of the state of each quantum particle in the system suffices to read out the solution to the problem, which is encoded in the ground state of the final Hamiltonian.

Classical Computation: Classical computation is required to find the appropriate Hamiltonian that encodes the problem, into which the system will be adiabatically evolved.

Procedure of the computation: The computation is performed by slowly (adiabatically) varying the Hamiltonian, from the initial (easy to prepare) to the final, which encodes the problem.


Promising implementations of quantum computers

Quantum computers have attracted a lot of interest and funding over the last few years, as they are expected to unlock an entirely new set of answers in the fields of physics, computer science, chemistry and materials science. In a previous post I wrote about how we should define a quantum computer in theoretical terms, stepping away from definitions based on the hardware. How to implement one in practice is the subject of wide debate, as the scientific community does not agree on which physical system constitutes the best option for building quantum information processors, and there are currently a few candidates that show promise.
Choosing the physical system that will ultimately be the main platform for quantum computers is not easy. Technological problems that may seem insurmountable today might be solved in a few years’ time. However, as quantum computers are physical devices, the laws of physics ultimately dictate what they can and cannot do. The amount of information that classical computers are capable of processing, and the rate at which they do so, has doubled every 18 months for the last 40 years, an observation known as Moore’s law. However, Moore’s law is not a law of Nature but rather an observation of human ingenuity (and economic power), and it is expected to reach saturation soon: Intel has already confirmed that their cadence in chip production has slowed.
The largest transistor count in a single CPU today is about 5.5 billion, with individual transistors of the order of ~10 nm in size; even with quantum processors, machines with more than a trillion components do not seem physically feasible. There are other types of constraint too. If all the components need to be kept at mK temperatures, the size of the quantum computer will be restricted by the available cooling capacity (large-scale machines operating at ~2 K do exist, such as those at CERN, but they would not be considered efficient in the sense described here). The clock speed (the number of operations per second) will be limited by the amount of energy available in the system, yet more energy means more noise, and entropy limits the amount of information that can be processed. The ultimate limits of computation are given by the laws of physics, but there is no guarantee that these limits can really be reached.

Various quantum technologies have been considered as good candidates for building quantum computers. They each have their own advantages and challenges, and it is not clear today which will be the final technology; it might not even be just one but a combination of several. In this entry, I will briefly mention the four technologies that (in my view) are most promising. Despite their differences, they have one significant factor in common: they are compatible with microfabrication techniques which will allow each architecture to become modular and be made from regular-sized chips.

Ion traps with photonic links

Charged atoms can be controlled and manipulated in macroscopic traps with a very high degree of accuracy. Excellent control has been achieved in macroscopic ion traps, with nearly 100% fidelity in all gates; however, current implementations face a harsh scalability limit: only a bounded number of ions can be trapped and individually addressed in a single trap. The networked model for quantum computation, in which cells with a small number of qubits are interconnected to perform quantum computation, is particularly well suited to this technology, and full-scale architectures have been proposed. The entanglement between different cells is obtained via entangling operations on photons emitted by ions in the different traps. This operation is, however, very slow (~300 times slower than any other gate) and uses large photonic switching networks, which rapidly increase the photon loss rate. New very low-loss photonic switches and better entangling operations are needed for this technology to be feasible on a large scale. A newer approach to overcoming the scalability limit is that of integrated ion traps, in which standard semiconductor processing techniques can be used to fabricate micrometre-scale surface-chip traps.

Superconducting qubits

Superconducting systems exhibit generic quantum properties commonly associated with atoms, such as quantized energy levels, entanglement and superposition of states. As such, artificial atoms can be engineered from these systems, and exquisite control can be achieved using electromagnetic pulses. Recent demonstrations show the ability to perform single-qubit gates with 99.92% fidelity and two-qubit gates with 99.4% fidelity. Moreover, these fidelities are within the fault-tolerance threshold for the surface code, which has allowed a small five-qubit experimental implementation of the surface code. Although this implementation of quantum computing benefits from microfabrication of the devices, it has a number of shortcomings. The most important are the cross-talk between nanowires, which hinders the construction of three-dimensional qubit structures (considered more advantageous for the implementation of fault tolerance), and the fact that they operate at mK temperatures, which limits the number of qubits that can be implemented due to the limited cooling capacity.

Linear optics

Single photons are very good carriers of information, with low decoherence rates and very high single-qubit gate fidelity. Non-deterministic two-qubit operations and photon loss are a challenge for current technologies, but a series of theoretical breakthroughs in recent years (in which the required resources have been lowered by several orders of magnitude), together with technological advances, make this physical system a competitive candidate for quantum computing. This architecture uses a percolation approach to the measurement-based model in order to counteract the effect of loss and of probabilistic entangling operations. Integrated optical devices can be nanofabricated, and the ability to miniaturise on the order of a million (~1M) linear optical elements on a single chip is a very promising sign for the construction of linear-optical quantum computers with millions of elements per silicon chip in the future. An advantage of this type of architecture is that low temperatures are only required for the single-photon detectors, and it is envisioned that in the near future the entire architecture could be operated at room temperature.

Spin donors in silicon

Solid-state systems of spin donors in silicon can be used to build quantum computers. Quantum information can be encoded in single phosphorus atoms embedded in a silicon substrate, and qubit gates can be implemented by controlling the spin resonance of the donors and the exchange interaction between different donors. Excellent control fidelity has been shown for single-qubit gates; the achieved 99.6% fidelity falls within the requirements of fault-tolerant quantum computing. A scalable integrated two-qubit gate with high enough fidelity is yet to be demonstrated; however, remarkable progress has been made in recent years towards this goal. An architecture resembling the networked model has been proposed, entirely in silicon, in which donors are laid out in a 2D array. In each cell, single-qubit gates are performed by low-power electric and magnetic drive at microwave frequencies, while two-qubit gates are mediated by the induced electric dipole. The cells are connected via long-range (mm-distance) two-qubit gates enabled by microwave resonators and photonic links.

In a series of posts over the next few weeks, I will study each of these four proposals in detail, attempting to answer the following questions:

  • What are the resources used?
  • What are the key results of this architecture?
  • How do the different elements integrate together?
  • What are the open problems, and why are they open?
  • What is the rough resource count for implementing a particularly interesting problem?

Please comment below if there are any other questions you’d like me to (attempt to) answer! For each architecture, I will also provide a list of academic papers where more detailed information can be found.

What is a quantum computer?

This is a question you might get asked fairly often if you ever mention that you do research in quantum physics. You might even be asked it if you are a Prime Minister visiting a research facility on the topic. So among all the questions I would expect to be asked, this is not a particularly surprising one. Except in my PhD viva, when I really did not expect it. Particularly in the form it was phrased: “How can you tell if a machine is a quantum computer? If aliens came to Earth and tried to sell you a quantum computer, what could you do to be certain they were not fooling you?”. I didn’t expect alien sales to be part of my viva, that’s for sure! Fortunately for me, some conversations I had previously had with experimentalists at the Centre for Quantum Photonics helped me give an answer that satisfied my examiners. However, I came out of the exam with the nagging feeling that it should have been a much easier question to answer.

In any standard course on quantum computing and its implementations, one usually learns about the DiVincenzo criteria, which state that a quantum computer should have:

  1. A scalable physical system with well-characterized qubits.
  2. The ability to initialize the state of the qubits to a simple fiducial state.
  3. Long relevant decoherence times, much longer than gate operation times.
  4. A “universal” set of quantum gates.
  5. A qubit-specific measurement capability.

These criteria were formulated in the year 2000 with a specific computational model in mind, the circuit model. Since then, other ways of implementing quantum computing have sprung up, from computational models (such as the cluster-state and adiabatic models) to physical implementations that don’t deal with qubits but rather with continuous-variable systems. Many of these new developments don’t quite fit the DiVincenzo criteria, so are there any better definitions of what quantum computers are?

Searching through the scientific literature, and through statements of experts to the media, we can find five distinct ways in which to specify what a quantum computer is:

  1. Abstract theoretical definitions: Such is Deutsch’s definition in his 1985 paper, where he formulates a physical version of the Church-Turing thesis that is compatible with quantum theory: “a quantum computer is a […] quantum generalization of the class of Turing machines”. Or Feynman’s: “It’s not a Turing machine, but a machine of a different kind”. These definitions are very abstract and lack details and specifications on the structural components, making them not very useful in practical scenarios.
  2. Implicit definitions: A quantum computer is not defined necessarily by its components but rather by stating that it uses the laws of quantum mechanics to perform computation. As true as this definition is, it is no help when trying to decide whether a computer is quantum or not. How can we assert from outside which laws govern the logical operations inside?
  3. Comparative definitions: A quantum computer is a device that can outperform classical computers. While we certainly expect them to be able to solve problems that would otherwise be outside our reach if we only had access to classical computers, relying on the classification of problems in complexity theory is uncertain business, as this classification is not written in stone and evolves as the field develops.
  4. Constructive definitions: These definitions specify what a quantum computer is by stating its components or the way information is processed. For example, defining the quantum computer as a machine that fulfils the DiVincenzo criteria falls in this category. Definitions of this kind share the characteristic of being narrow and tied to a specific implementation, and are therefore not general enough to apply to all architectures, physical implementations and computational models.
  5. Operational definitions: The quantum computer is fully defined by what it does: if a machine acts like a quantum computer, it is a quantum computer. The definition makes no assumptions about the theory of computation or the nature of physical reality, and therefore different interpretations of quantum mechanics should agree that the machine is a quantum computer.

There is an excellent paper which sets out to find a useful operational definition of a quantum computer that will stand the test of time; by making no reference to how the computer actually works, it should still hold in years to come when new techniques have been developed. As end users we care mainly about performance, and not necessarily about the nitty-gritty details of how that performance is achieved. As an example, how many of you understand all the complexity of the device you are using to read this blogpost? I certainly don’t.

The proposed operational definition of a quantum computer in this paper is as follows:

“We define a quantum computer as a device that can run a quantum algorithm efficiently, where a quantum algorithm is a classical bit string that encodes a series of quantum operations. The quantum computer should be able to take this string as input and produce another one as output. The probability distribution of the output should be consistent with the predictions of quantum theory. Finally, the time it takes the computer to produce the output should be in agreement with the difficulty of the algorithm.”

Note that this definition makes no mention of the way in which the quantum operations are performed, and is therefore open to different models of computation. The time constraint in this definition is crucial, as it excludes classical computers: for example, if we ran Shor’s factoring algorithm on a quantum computer we would expect an answer in polynomial time, whereas the best known classical algorithms require super-polynomial time. This definition also doesn’t depend on the current classification of complexity problems, as “the difficulty of the algorithm” mentioned in the definition refers to the difficulty of a particular problem at the time of the test.

It is also worth noting that both the input and the output are classical bit strings. After all, the input and output are the interactions of the machine with the controller, who lives in a classical world. The quantum program is the encoding of the operations of a particular quantum algorithm, which forms part of the input string along with the initial state of the quantum computer. The instructions for generating the initial state on the quantum computer must have an efficient classical representation; this is the case for all computational models. It is worth noting that all current proposals for quantum computers assume some classical pre- and post-processing, which agrees with the above definition as long as this classical processing takes only polynomial time in the problem size.

The paper also gives a set of criteria for building a quantum computer that is general enough to fit all currently known computational models and all proposed physical implementations. It is based on four statements:

  1. Any quantum computer should have a quantum memory. Quantum memory is the broad term used to state that the quantum computer must have the capability of efficiently representing any computable pure quantum state (of a size accordant with the size of the computer) in its internal degrees of freedom. This quantum state will not have, in general, an efficient classical representation.
  2. Any quantum computer must facilitate a controlled quantum evolution of the quantum memory, which allows for universal quantum computation. By controlled quantum evolution, the authors mean that the evolution of the internal state of the quantum computer must follow the laws of quantum mechanics, and will ultimately be controlled by the end user.
  3. Any quantum computer must have a method for cooling the quantum memory. Entropy is accumulated in the quantum memory as a product of a previous computation or because of the presence of errors due to noise from the environment. Cooling refers to information-theoretic cooling, where the entropy is extracted from the quantum memory. It encompasses the process of initialisation of the quantum memory as well as the error correction procedure.
  4. Any quantum computer must provide a readout mechanism for subsets of the quantum memory. The computer must have a mechanism to translate the outcome of the computation into classical bits for the controller to obtain the result of the computation. The authors refer to subsets of the quantum memory because, during most error-correcting procedures, there are intermediate measurements used to assess the presence of errors, and these measurements are not of much use to the end user.

There are two other essential characteristics for a quantum computer the authors require:

  • Fault-tolerance: Fluctuations from the environment can cause stochastic errors to appear in the quantum computer. If a quantum computer still works according to its definition in the presence of such errors, it is deemed fault-tolerant. This will be the case if the computer uses some form of error correction with an error threshold (the maximum tolerable rate of individual errors) that is higher than the error rate caused by the environment.
  • Scalability: Currently, any claims of scalability for any particular physical implementation of a quantum computer are predictions, as no reasonably large quantum computer exists. Theoretically, a quantum computer is scalable if the resources required to run the same polynomial-time algorithm on a larger input scale polynomially with the size of the input. What makes a particular architecture scalable depends heavily on the architecture and on technological predictions, and it is difficult to make general statements without getting into the details of any given implementation. An excellent read on this subject can be found in the Quantum Pontiff blog post “Climbing Mount Scalable”.

It is perhaps because we don’t have a functioning large-scale quantum computer that it is difficult to give accurate definitions without hiding behind the “spooky” laws of quantum mechanics. At the end of the day definitions are not the most important thing, and proof of that is that Nielsen & Chuang, the must-read book for any quantum information scientist, does not define what a quantum computer is and rather lets the reader build up an intuition. But it is important to know what we talk about when we talk about quantum computers, so that we can make informed decisions about whether a machine is a quantum computer or not, in case aliens or anyone else come to us with a sales pitch.