The merging of two neutron stars

As a physicist, there are few things as exciting as experimental confirmation of our theoretical understanding of the universe. This week brought us groundbreaking news in the field of astronomy, and I didn’t want to pass up the opportunity to learn more details about it.

Artistic conception based on simulations, by Karelle Siellez.

On Monday morning, during a conference at the National Science Foundation, it was announced that the merger of two neutron stars had been observed for the first time, both through gravitational wave detection and across the entire electromagnetic spectrum. This is a tremendous achievement: it is the first time that the gravitational wave signal of the merger of such objects has been observed (until now it was only black holes), and it has also confirmed predictions about the electromagnetic emissions of such an event. Until now only a prediction, the burst of highly energetic gamma rays produced during these events has at last been observed, as well as the production of over 40 heavy elements, including gold, platinum, and uranium.

These experimental results were obtained by a combination of over 70 Earth-based observatories on all continents and seven space observatories. It is an incredible example of the power of scientific collaboration across many countries in high-risk, high-reward science, much like the LHC effort which confirmed the existence of the Higgs boson. For the first time, we have electromagnetic and gravitational data for the same astronomical event, opening the new field of multi-messenger astronomy, which will help us understand the universe at a whole new level. Our movies of the stars just got sound.

What happened and how did we see it?

On August 17 at 12:41:04 UTC, the LIGO and Virgo telescopes detected a gravitational wave signal lasting 100 seconds. This signal was much longer than previously detected ones, indicating that the objects producing it were much lighter than black holes.

About 1.7 seconds later, Fermi, the orbiting gamma-ray telescope, detected a burst of gamma rays. It was the first time that these two signals had been detected for the same astronomical event. The gamma rays and gravitational waves traveled together for 130 million years and arrived within 2 seconds of one another, both moving at the same speed, as predicted by Einstein.

The three-detector network of gravitational telescopes, together with Fermi, was able to pinpoint the region of the sky that had witnessed the event, and the astronomical observatories that detect in the electromagnetic spectrum turned their eyes to it. About 10 hours and 52 minutes after the gravitational wave detection, a new visible light source was found in the galaxy NGC 4993 in the Hydra constellation. Further observations across the entire electromagnetic spectrum followed, including infrared emission (+11 hours and 36 minutes), ultraviolet (+15 hours), X-rays (+9 days) and radio (+15 days).

Poster made by Georgia Tech Physics

Astronomers now have an understanding of what happened through the emissions detected by the telescopes. The pair of neutron stars were spinning around one another while orbiting in their galaxy. As they spun, they lost energy through the emission of gravitational waves and got closer and closer to each other. It was only when they were about 200 miles apart, with the merger imminent, that the gravitational waves they emitted became strong enough for our detectors to pick up. About ten milliseconds before the merger, the neutron stars started feeling tidal forces, and up to a few seconds after the merger, matter was violently ejected into the interstellar medium; this ejection was detected as a burst of gamma rays. The ejected matter was the perfect site for heavy elements to form. The decay of some of these elements powered the light observed in the blue, red and near-infrared spectrum. The features of the light spectrum observed in the days following the gravitational wave detection result from the combination of the spectra emitted by the different elements. We now have confirmation that over 40 elements of the periodic table, including gold, platinum, and uranium, are formed in merging neutron stars.

What are gravitational waves?

Gravitational waves are perturbations in spacetime caused by the movement of large massive objects; they are particularly strong during the collision of very large astronomical bodies, such as black holes and neutron stars. Gravitational waves travel through the fabric of spacetime like ripples in water. However, unlike water, spacetime is incredibly stiff: a gravitational wave caused by the merger of two black holes would only change the distance from Earth to Alpha Centauri by the width of a human hair. To detect these incredibly weak signals, the LIGO and Virgo observatories are miles long to boost the signal. A laser beam is split in two, and the beams travel back and forth in the two tunnels for exactly the same distance before interfering again. If the distance traveled by the beams has changed in the slightest, due to space being stretched by a gravitational wave, the interference pattern of the recombined light will give it away. The accuracy of these telescopes is mind-boggling, and the science that makes them work is even more fascinating: they use quantum phenomena to measure beyond the accuracy of our classical tools; it is the science governing the smallest pieces of the universe that helps us understand the physics of the largest bodies. But this deserves a closer look altogether. For now, we can truthfully say that these observatories are the biggest and most accurate rulers in the universe.
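
To get a feel for these numbers, here is a quick back-of-the-envelope check (my own illustration; the strain of ~10^-21 is the typical order of magnitude quoted for these detections, not a figure from the announcement):

```python
# How much does a gravitational wave of strain h ~ 1e-21 (typical for
# LIGO-band events) change the Earth-Alpha Centauri distance?

LIGHT_YEAR_M = 9.461e15                    # metres in one light year
L = 4.37 * LIGHT_YEAR_M                    # distance to Alpha Centauri in metres
h = 1e-21                                  # dimensionless strain (order of magnitude)

delta_L = h * L                            # strain is fractional: delta_L = h * L
print(f"Change in distance: {delta_L * 1e6:.0f} micrometres")
# ~40 micrometres, roughly the width of a human hair
```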

What are gamma rays?

Gamma rays are the most energetic electromagnetic emissions. We have specialized telescopes to look for these rays, both Earth-based and Earth-orbiting telescopes such as Fermi, which observes the entire sky (except the part blocked by the Earth at any given point). Fermi detects flashes of gamma rays about twice a day; these are usually caused by thunderstorms, flares from our Sun and exotic stars in our galaxy. A few hundred detections a year, however, are huge explosions coming from galaxies far away. These were conjectured to be caused by the expulsion of matter at high speed during the creation of black holes and the merger of neutron stars, but there was no experimental evidence to back that claim. Until now!

What are neutron stars?

Neutron stars are the remnants of massive stars, about ten to twenty times the mass of the Sun. These stars die after they run out of fuel for the nuclear fusion reactions that make them shine: their core collapses under its own gravity, while the outer layers are violently expelled, resulting in a supernova. The core shrinks to the size of a small city, and the pressure inside is so large that atoms disintegrate into their constituents, with protons and electrons combining into neutrons.

The event observed in August was the first direct observation of the merger of a pair of these stars. However, we have long had experimental evidence of their existence. They were first observed by Jocelyn Bell Burnell and Antony Hewish in 1967, who identified them as pulsars: radio-wave pulsing stars. In fact, they don’t emit radio waves in pulses; rather, they do so continuously. However, as they rotate around their own axis (which often does not coincide with the axis of emission), we can only observe their waves when the beam is pointed towards Earth. They are akin to lighthouses in deep space.

In 1974, Joseph Taylor and Russell Hulse discovered the first pair of neutron stars in a binary system, orbiting around one another. Radio observations showed that their orbit was unstable and spiraling inwards, at exactly the rate that Einstein predicted. Their observation provided the first firm evidence that gravitational waves must exist in nature, confirming Einstein’s predictions. Those neutron stars were over a million miles apart and won’t merge for another 300 million years. In contrast, the neutron stars of the binary system that starred in the August event started emitting detectable gravitational waves when they were 200 miles apart, and 100 seconds later they merged.

What next?

The era of multi-messenger astronomy has started, and we expect to gain a much deeper understanding of the universe by combining gravitational wave signals with detections from other sources. If anything, I hope that Monday’s announcement will inspire many old and young minds to learn more about the universe that surrounds us, because it is still full of secrets.

I thoroughly recommend watching the live announcement and sharing in the excitement of the physics community. There are, first and foremost, scientific articles describing the discovery, press releases by all parties involved, numerous first-hand accounts from the scientists who took part in this experiment, as well as many popular-science articles on the topic; I hope you find reading these accounts as interesting as I have.

For now, I will leave you with this intriguing musical piece that my talented friend Cora Miron composed, inspired by these astronomical discoveries.

Why I joined the March for Science

Last Saturday was the March for Science, and I joined millions of others in marches around the world asking our governments for more funding for science. I attended the March for Science in San Francisco, and the atmosphere could not have been better (for photos of the best posters, scroll to the bottom of the post!). It was truly inspiring to join so many people united in the belief that science keeps us safe and makes our lives better. The world we know today would not have been possible without the scientific breakthroughs of past centuries.


“Science is the organized skepticism in the reliability of expert opinion.” – Richard Feynman

The cornerstone of science is the fact that it is falsifiable, which means that any scientific theory can be proven wrong by new discoveries. Scientific theories of the natural world evolve through the ages because we learn new facts that prove the old theories wrong, and the scientific community must then seek new answers. Examples of this are Kepler’s laws of planetary motion, quantum mechanics and general relativity. It could be easy to jump to the conclusion that there are no scientific facts or absolutes, and that it’s all a matter of interpretation. This is not true: a scientific fact is a measurable and verifiable observation, such as the Earth being round, the speed of light being constant or the existence of subatomic particles. A scientific theory, on the other hand, is an interpretation of the verifiable facts, which means that if we lack a relevant fact, we might arrive at the wrong conclusion.

Scientists use the scientific method to arrive at their theories. They observe a fact of nature, formulate a hypothesis as to why it happens, determine the logical implications and make predictions following their hypothesis, and finally test their predictions. This process can be performed several times, but once all the predictions are proven correct, the theory is held as true, until a contradicting fact comes along. Any contradicting provable fact is enough to falsify a theory, and therefore scientists have to be careful and thorough in their analysis. It’s one of the things I love most about science: opinions or beliefs do not matter, only evidence. We are only explorers of the world around us, seeking understanding.

“The first principle is that you must not fool yourself, and you are the easiest person to fool.” – Richard Feynman

Understanding the principles of the natural world equips us to improve our quality of life and push our technological development; however, the importance of scientific progress has not always been understood. It was not until the twentieth century that science became more broadly publicly funded. In the past, scientists fell mainly into three categories:

  • Scientists who had the means to support themselves, or families who would foot the bill. Examples are Sir Isaac Newton, Charles Darwin and Tycho Brahe.
  • Scientists who sought patronage from the aristocracy, like René Descartes, Johannes Kepler and Galileo Galilei.
  • Scientists who worked a second job to support their scientific endeavours, such as Gregor Mendel, Benjamin Franklin and Michael Faraday.

Nowadays, governments invest a sizeable portion of their GDP in Research & Development (R&D) programs. In fact, there is a correlation between the amount of funding that a country invests in R&D and its Human Development Index (HDI). The tables below show the 20 highest-ranking countries according to their HDI (on the left) and the 20 highest-ranking countries according to R&D expenditure per capita. I’ve marked with a star those that appear in both lists: 75% of them. The other 5 countries that spend highly on R&D but do not appear in the top 20 according to their HDI are ranked in the top 30.

This is the reason I attended the March for Science: scientific progress has an unparalleled influence on our development as a society. Our ability to cure diseases, build infrastructure, mass-produce food or design electronic devices comes from basic research that sought a deeper understanding of nature. It is in our best interest to keep funding basic research.

_______________________________________________________________________________

Here is a selection of the best posters I saw at the March for Science in San Francisco; there are more excellent posters from other marches around the world on various social media platforms.

Female scientists: not martyrs, but heroes.

Last week I attended the APS March meeting in New Orleans, where there was a session in honor of the 150th anniversary of Marie Curie’s birth, in which we had talks on Marie Curie’s life as well as lively discussions on the challenges faced by women in physics today. Professor Emerita Ruth Howes gave an incredibly inspiring and entertaining talk titled “Marie Curie: physicist and woman”, in which she showed us a side of Marie that I, at least, was not aware of. The way her story was told and the details I learned about her life made me feel closer to her, not only as the pioneer she was but as a woman: the mischievous pranks of that summer in the countryside, her horror at being underprepared for the Sorbonne, the persecution by the French media when her affair with Langevin was discovered. During that talk, Marie somehow gained a new dimension, outside the well-defined outline of her scientific life. I realized this is something I had felt missing before in the biographies of trailblazing women in science. We usually put so much emphasis on highlighting the difficulties they overcame, their hard work and their scientific discoveries, that we end up making them so perfect that it is hard for girls today to identify with them. Their brilliance is an inspiration, even more so when you consider the injustices they faced, but I believe we should remind ourselves that they were women just like us, with their personal struggles and failures. They were not one-of-a-kind martyrs, but heroes that we can emulate today.

Professor Howes

As a woman and a physicist, it bothers me that other girls and women might not feel welcome in the field. We still have challenges to overcome before we are in a position of equality with our male colleagues, and I (as I am sure many others like me do) often think about the best way I can contribute. There is an ever-growing community of scientists (both men and women) who champion equal opportunities and put in place programs that help this effort. Our greatest strength is our community: the mentors who help and champion students and young scientists, the role models who increase representation in the field and encourage new generations.

We want to shine the spotlight on the achievements of female scientists, in particular those who have had to overcome especially dire circumstances. But I think we should be careful not to portray them as superhumans who succeeded where no other person could, as by doing so we risk intimidating the very people we want to encourage. Science should not be the field of the brave but of the interested. We want to inspire young minds, and we want all the young people interested to feel they have a place in the scientific community.

I loved watching the film “Hidden Figures”, and one of the things I enjoyed the most was the companionship and support the women offered each other. And I loved it not only because it was inspirational and emotive, but because this is a part of my experience as a female scientist that I don’t see highlighted often. Marie Curie, Rosalind Franklin, Emmy Noether, Lise Meitner: they all had to face challenges on their own, but nowadays we have an extraordinary community of women doing scientific research, championing each other and looking for ways of levelling the playing field in the sciences. During my research in quantum computing, I’ve met incredibly smart, kind and all-around amazing women, some of whom I’m now lucky to call my closest friends. As female scientists of 2017, we still have challenges, but we are no longer alone.


Some groups and resources for women in physics:

Emmy’s beautiful mathematics

Noether’s theorem, which links together symmetry and physics, is one of the most beautiful and elegant concepts I have ever learnt. I didn’t expect to find out that Noether was really Emmy, a woman. And yet, what took me aback was my own surprise. I had always believed that women were just as capable as men of doing anything; however, when I learnt this beautiful piece of mathematics, I believed unquestioningly that it had been written by a man. That the author could be a woman had not even crossed my mind. Of course, at the time I knew about Marie Curie, a female scientist so famous that she could be called a legend among the hallways of the physics department. But I had never heard of Noether, nor of any other woman in my physics course, and I was in the second year! This realisation opened my eyes to my unconscious bias and made me want to learn more.

\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^i} \frac{\partial q^i}{\partial s} \right) = 0

Noether was really a mathematician, one of the leading founders of abstract algebra, who approached problems in a completely novel way. Her work in physics, which had so caught my attention, was just something she did on the side, to help a poor physicist who couldn’t figure out the maths of his new theory. The result was Noether’s theorem, described technically by the equation above, which says that for every continuous symmetry of a physical system there exists an associated conservation law. Conservation laws are fundamental in physics, as they allow us to determine which phenomena can or cannot happen. Noether’s theorem links them to the symmetries of the system and allows us to determine which physical quantities are conserved purely from the properties of the Lagrangian, a function built from the energies of the system. Many conservation laws were already known, such as the conservation of energy and momentum in a closed system, but Noether’s theorem resolved paradoxes in those conservation laws arising in new theories of physics, such as General Relativity. To say this theorem is an important result in physics is an understatement.
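
To make the statement concrete, here is the textbook special case (my own worked example, not taken from the original post): translation symmetry implies conservation of momentum. If a Lagrangian L(q, \dot{q}) does not change under the translation q \rightarrow q + s, then \partial L/\partial q = 0, and the Euler-Lagrange equation gives

\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q} = 0,

so the conjugate momentum p = \partial L/\partial \dot{q} is conserved. For a free particle, L = \frac{1}{2} m \dot{q}^2 and the conserved quantity is p = m\dot{q}, the familiar linear momentum. The same logic applied to rotational symmetry yields conservation of angular momentum, and applied to invariance under time translation, conservation of energy.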

Emmy Noether
Emmy grew up in a family of mathematicians, who somehow failed to notice her aptitude and didn’t encourage her to pursue mathematics. She was training as a language teacher when she became fascinated by mathematics and started attending lectures at the University of Erlangen. As a woman, she could not officially enrol, so she would simply audit the lectures. Some years later women were officially allowed to take classes, but policy on women’s rights would always have a hard time catching up with her. For some time after completing her doctoral thesis, she was only allowed to teach classes at the University of Göttingen that were advertised as Hilbert’s. Years later, she was able to gain a position at the university, albeit badly paid. She would never become a full professor in Germany, or even earn the wages due for her work. Forbidden in 1933 from teaching at the University of Göttingen because of her Jewish heritage, she moved to Bryn Mawr, a women’s college in the USA. Her time at the college was accompanied by difficult circumstances: she couldn’t teach graduate courses or find a permanent position, she had health problems, and the political situation in Germany was increasingly bad. However, she saw things differently and said that “the last year and a half had been the happiest in her whole life, for she was appreciated in Bryn Mawr and Princeton, as she had never been appreciated in her own country”. Unfortunately, she was to die soon after, from complications of a surgery to remove a tumour.

When reading about Noether’s life, it was her personality that struck me the most. She was a fantastic woman and an incredible mathematician, whose informal lifestyle prompted many jokes that she would simply ignore. Her appearance, dress and weight were usually commented upon, as was her voice, deemed “loud and disagreeable” because it was not soft and refined like other women’s. She cared enormously for her students, with whom she shared her ideas and whom she taught with passion and enthusiasm, regardless of their political position (to the point that one of her students used to come to her house to take class wearing a Nazi brown SA shirt). Her students held her in high esteem, as she made them feel like she was one of them, “almost as if she too were thinking about the theorems for the first time.” She applied to both mathematics and life a general principle of simplification and removal of the unnecessary. She wore comfortable men’s shoes and coats, and for a time she would go six days a week to eat the same dinner, at the same time, at the same table of the same restaurant. According to Noether’s only American graduate student, “her methods of thinking and working were simply a reflection of her way of life: that is, to recognise the unnecessary, brush it aside and enter wholeheartedly into the present”.

She was also held in high esteem by her colleagues, and it was thanks to their continuous campaigning that she was able to get her teaching positions, first at the University of Göttingen and later at Bryn Mawr College and the Institute for Advanced Study in Princeton (although sadly she died before she could join the latter). Hermann Weyl, a professor at Göttingen before the Second World War, said that he was “ashamed to occupy a preferred position beside her, whom I knew to be my superior as a mathematician in many aspects”. After she made important contributions to Einstein’s theory of General Relativity, Einstein wrote to Hilbert: “Yesterday I received from Miss Noether a very interesting paper on invariants. I’m impressed that such things can be understood in such a general way. The old guard at Göttingen should take some lessons from Miss Noether! She seems to know her stuff.” He would later write, in her obituary for the New York Times: “In the judgment of the most competent living mathematicians, Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.”
Noether, unlike some of her male colleagues, did not receive much recognition during her life and was instead criticised for many unimportant things. As the historians Crease and Mann said: “Had Noether been a man, her appearance, demeanour, and classroom behaviour would have been readily recognised as one of the forms that absent-minded brilliance frequently assumes in the males of the species”. I find Noether inspiring. Inspiring because of her achievements, made all the more striking by the tidal forces she had to fight against to pursue her lifelong passion for mathematics. Inspiring for her drive, her attitude towards students and colleagues, her dismissal of criticism. Inspiring because of her beautiful mathematics.

Today’s post is a celebration of Ada Lovelace Day, an international celebration of the achievements of women in science, technology, engineering and maths (STEM)!

Most of the biographical anecdotes of this post have been obtained from the book Nobel Prize Women in Science: Their Lives, Struggles, and Momentous Discoveries by Sharon Bertsch McGrayne.

Fault-Tolerant Quantum Technologies ’16

After some weeks’ hiatus, Quanta for Breakfast is back! Today I want to give my thoughts on the Fault-Tolerant Quantum Technologies Workshop that I attended this summer in Benasque, Spain. It was my first time visiting the beautiful town and both the location and the workshop definitely lived up to my expectations.

The conference took place at the Centro de Ciencias de Benasque Pedro Pascual, a facility for hosting workshops and scientific meetings, and truly a dream come true for physicists at a conference. There were blackboards everywhere: conference theatre, meeting rooms, corridors, outside blinds… It has all the facilities needed to be a place of scientific work and discussion, and there is really no excuse not to talk about physics all day long. The building itself has a very interesting design and was built with sustainability in mind. The Centre was named in honour of the Spanish physicist Pedro Pascual, whose Quantum Mechanics book, co-authored with Alberto Galindo, I thoroughly studied as an undergrad at the Universidad Complutense de Madrid. Benasque itself is charming, full of hikers, incredible scenery and good food. There were some complaints about the time it took to have lunch, but what can I say, it’s a holiday town in Spain, and restaurants assume the diners want to relax and enjoy the food 🙂 . However, for the people who couldn’t wait to get back to the blackboards, there was always the option of grabbing some tapas.

The meeting was really fantastic, from the content of the talks to the atmosphere throughout the two weeks. On top of the welcome drinks and conference dinner, there were some great activities organised, such as a couple of group hikes, an ascent to Aneto (the tallest peak in the Pyrenees), canyoning and an AMA Reddit session.

Group hike

On top of all these activities, there was plenty of free time for work and discussion, which is mostly missing from other conferences. This free time, combined with the group discussions, truly gave us the opportunity to learn new concepts and work together. Speaking for myself, not being an expert in Quantum Error Correction, I came back from the conference with a much better understanding of many concepts, in particular cellular automata decoders, which featured in several talks (including a video demonstration by Barbara Terhal, shown in the GIF below). The concept of algorithms using cellular automata in quantum information processing is very powerful, particularly when considering cluster state computations or topological error correction, where the information is stored in global degrees of freedom and can be acted upon with local operations.

Demonstration of a cellular automata decoder
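
As a toy illustration of the idea (my own sketch of a classical analogue, not the decoder from the demonstration): a cellular automaton that applies a local majority vote on a classical repetition code removes isolated errors using only nearest-neighbour information.

```python
def majority_vote_step(bits):
    """One synchronous cellular-automaton step: each cell takes the
    majority value of itself and its two neighbours (periodic boundary)."""
    n = len(bits)
    return [1 if bits[(i - 1) % n] + bits[i] + bits[(i + 1) % n] >= 2 else 0
            for i in range(n)]

# Logical 0 encoded as the all-zeros string, with isolated bit-flip errors.
bits = [0] * 30
for i in (3, 10, 17, 24):
    bits[i] = 1

for _ in range(3):                 # iterate the purely local rule
    bits = majority_vote_step(bits)

assert sum(bits) == 0              # isolated errors are gone
# Clusters of adjacent errors survive this naive rule; designing local
# rules that also handle those is exactly what makes real CA decoders hard.
```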


The biggest highlight of this workshop was, for me, the extensive discussion around experiments. There were several talks dedicated to the topic:

– Steve Flammia: Comparing Experiments to the Fault Tolerant Threshold

– Hector Bombin: On the effect of correlated noise

– James Wootton: A decoding algorithm for proof of principle experiments

– Ying Li: Resource cost of fault-tolerant quantum computing in hybrid quantum computers and LOQC

– Niel de Beaudrap: NQIT

– Yours truly: Fault-tolerant considerations when designing a linear-optical quantum computer

– Hugo Cable: Minimised switching in linear-optical quantum computing

– James Auger: Topological Error Correction with Faulty Components

– Joe O’Gorman: Realistic magic state distillation factories

There were also technical discussions on experimental implementations of quantum computers, as well as on which codes should be the first to be implemented in small-scale experiments.

We are currently at a very exciting point in the development of quantum computers. Experiments are starting to get large enough that some small codes can be tested on them. Proof-of-principle experiments of topological codes have been implemented with superconducting qubits, as well as with photons and ion traps. However, the community is not in agreement on which codes are the most useful and what scope we have to find yet better error correction codes. On top of that, it might be the case that the different constraints of the various physical systems make it impossible for a single code to be optimal for all of them. The good news is that, now that the Quantum Error Correction and experimental communities are engaging so much with each other, we can expect vast improvements in the performance of small quantum computers thanks to codes tailored to the specific requirements of the physical systems.

Finally, I would like to thank the organisers – Dan Browne, Earl Campbell and Michael Kastoryano – for such a fantastic experience. I look forward to future editions of the workshop!


I don’t want to end the post without mentioning the game Decodoku, a browser and mobile citizen-science game based on Quantum Error Correction, which was advertised at the conference. It’s presented as a series of puzzles, reminiscent of the popular Sudoku, 2048 and Threes, but in which the problems solved mimic the effects of noise on a topological code. Good strategies for solving these puzzles efficiently could potentially become new decoding algorithms, which gives an excellent excuse for the time spent playing 😀 . If you find out you are really good at it, let the developer (James Wootton) know.

Quantum computing and its models

Before reviewing in more detail the most promising experimental realisations of quantum information processors, I think it is useful to recap the basic concepts and most used models of quantum computing. The models matter in particular because the physical realisations mentioned in a previous post use different but equivalent computational models, which need to be understood in order to comprehend their implementations.

The primary logical units of a quantum computer are quantum bits, or qubits, which are two-level quantum systems that can be used to store and process quantum information. The two levels are usually represented by |0\rangle, |1\rangle, which are known as the computational basis states (in Dirac notation). The main difference between bits and qubits is that the latter can be in a linear combination, usually referred to as superposition, of the two basis states. Hence the most general representation of a qubit is:

| \psi \rangle = \alpha |0\rangle + \beta |1\rangle

where \alpha and \beta are, in general, complex coefficients. When a qubit is measured in the computational basis, the results 0 or 1 are obtained with probabilities |\alpha|^2 and |\beta|^2 respectively. As these probabilities must add up to one (|\alpha|^2 + |\beta|^2 = 1), we have a normalisation restriction on the coefficients \alpha and \beta, which can be geometrically understood as the condition that the qubit’s state has length one. We can take this geometric interpretation a bit further and parametrize the quantum state in spherical coordinates

| \psi \rangle = \cos (\theta /2) |0\rangle + e^{i\phi} \sin (\theta /2)|1\rangle,

and understand the qubit as a vector in a sphere of radius one. This sphere is usually referred to as the Bloch sphere, shown in the figure below. Note that in this representation, orthogonal states are diagonally opposed rather than at right angles.


The three cartesian axes of the Bloch sphere correspond to the eigenstates of the Pauli matrices: \sigma_X=\begin{pmatrix}  0 & 1 \\  1 & 0 \end{pmatrix} ,\quad \sigma_Y=\begin{pmatrix}  0 & -i \\  i & 0 \end{pmatrix} ,\quad \sigma_Z=\begin{pmatrix}  1 & 0 \\  0 & -1 \end{pmatrix},

where the matrices are written in the computational basis. The eigenstates of the \sigma_Z operator are the computational basis states {|0\rangle, |1\rangle}, whereas the eigenstates of \sigma_X and \sigma_Y are {|\pm \rangle } and {| \pm i \rangle } respectively.
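
These statements are easy to check numerically; here is a small sanity check (my own snippet):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

zero = np.array([1, 0], dtype=complex)                 # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+>
plus_i = np.array([1, 1j]) / np.sqrt(2)                # |+i>

# Each state is the +1 eigenstate of the corresponding Pauli operator.
assert np.allclose(Z @ zero, zero)
assert np.allclose(X @ plus, plus)
assert np.allclose(Y @ plus_i, plus_i)

def bloch_vector(psi):
    """Cartesian Bloch-sphere coordinates (<X>, <Y>, <Z>) of a pure state."""
    return [np.real(np.conj(psi) @ P @ psi) for P in (X, Y, Z)]

print(bloch_vector(plus))   # ~[1, 0, 0]: |+> sits on the +x axis
```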

Single-qubit logical gates can be understood as transformations (rotations and reflections) of the states in the Bloch sphere. The most used single qubit gates are:

• Hadamard gate: H=\frac{1}{\sqrt{2}} \begin{pmatrix}  1 & 1 \\  1 & -1 \end{pmatrix}

• Phase gate: S=\begin{pmatrix}  1 & 0 \\  0 & i \end{pmatrix}

• π/8 gate: T=\begin{pmatrix}  1 & 0 \\  0 & e^{i \pi /4} \end{pmatrix}

• Rotations about one of the cartesian axes: R_{\sigma_k} (\theta) = \cos (\theta /2)\, I - i \sin (\theta /2)\, \sigma_k, where k \in \{X, Y, Z\}.

The most commonly used two-qubit gate, the controlled-NOT (CNOT), has the same truth table as the classical XOR gate: it flips the target (second) qubit when the control (first) qubit is in the state 1:

|00\rangle  \rightarrow |00\rangle,\  |01\rangle \rightarrow |01\rangle,\  |10\rangle \rightarrow |11\rangle\  \&\  |11\rangle \rightarrow |10\rangle .

The CNOT gate, in conjunction with some of the single qubit gates, can produce entangled states, which show correlations with no classical equivalent. For example, the action of a Hadamard gate followed by a CNOT on a pair of computational basis qubits yields:

|00\rangle \xrightarrow{H_1} |+0\rangle =\frac{1}{\sqrt{2}}(|00\rangle +|10\rangle) \xrightarrow{CNOT} \frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)

This state (which is one of the four maximally entangled states referred to as Bell pairs) cannot be written as the product of two single-qubit states.
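
This little calculation can be verified numerically (my own snippet):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0])             # |00>
bell = CNOT @ np.kron(H, I) @ ket00        # Hadamard on qubit 1, then CNOT
print(bell)                                # [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
```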

Other entangling gates, such as the controlled-Phase (CZ), can be obtained from combinations of CNOT gates with single qubit gates; moreover, any multi-qubit unitary can be approximated in the same way, which makes this set of gates universal. A set of gates can perform universal quantum computation if it is sufficient to approximate any unitary operation to arbitrary accuracy via a quantum circuit. This universality is crucial, as it allows any quantum algorithm to be performed, and it ensures the equivalence of different models of quantum computation.
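
For instance, the CZ gate is a CNOT conjugated by Hadamards on the target qubit, a standard identity that the following snippet (mine) verifies:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
CZ = np.diag([1.0, 1, 1, -1])

# (I tensor H) CNOT (I tensor H) = CZ
assert np.allclose(np.kron(I, H) @ CNOT @ np.kron(I, H), CZ)
```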

There are various universal models of quantum computing, such as the circuit model, the measurement-based model, the adiabatic model, the topological model and the quantum walk model. However, in this post I will only focus on the first three, as they are the most relevant to the experimental realisations that show the most promise today.


Circuit model

The circuit model is an algorithmic model for quantum computing that closely resembles classical algorithms. Single-qubit and two-qubit operations are performed in sequence on a set of qubits initialized in a fiducial state, and the results are read at the end as the outcome of single-qubit measurements. The entanglement and interference necessary for the quantum speedup are built up during the computation, and if any ancillary states are used, their state must be erased so that they no longer interfere with the rest of the computation.

The following circuit diagram shows the most common representation of the quantum logic gates presented earlier:


The procedure runs from left to right: preparation, single Hadamard gate, CNOT gates, rotation and phase gates, measurement.

Circuit model key facts

State Space: A quantum circuit operates on n qubits (two-level quantum systems), and therefore its state space is a 2^n-dimensional complex Hilbert space. The computational basis states are defined as product states of the form |x_1,...,x_n\rangle, where x_i = 0, 1.

State Preparation: Any computational basis state |x_1,...,x_n\rangle can be prepared in at most n steps.

Quantum Gates: Gates from a universal family of gates can be applied to any subset of the qubits desired.

Measurements: Measurements in the computational basis can be performed on one or more qubits.

Classical Computation: In principle, it is not necessary for the computation, but it can make certain tasks much easier.

Procedure of the computation: Quantum algorithms are run by applying one-qubit and two-qubit gates to the quantum systems, building up the amount of entanglement until the final measurement in the computational basis gives the result of the computation.


Measurement-based quantum computation 

For a long time, the circuit model was the most commonly used model of quantum computation. Measurement-based quantum computation models are radically different from the circuit model (and have no classical analogue), as the resource for the computation is prepared in advance and “offline”. This strategy has the advantage that if errors occur at the preparation stage, the prepared state can be discarded and the procedure repeated without any loss of information. There are two main approaches to measurement-based quantum computing: the generalised teleportation model and the one-way quantum computer model. The latter is the most widely used, and therefore we will only look at that model here.

In the one-way model, the entire resource for the computation is supplied at the beginning, in the form of a highly entangled multi-particle state. This state is usually referred to as a cluster state, and it is the same for every computation, although it may vary in size. The information is then processed by carrying out a series of adaptive single-qubit measurements.

This highly entangled multi-particle state is prepared by applying a pattern of entangling gates to the qubits. A generic cluster state of n particles is not easy to write down in any basis, but it can be efficiently described with a graph, where each node represents a qubit and each bond denotes that the two sites have been connected by an entangling controlled-Z (CZ) operation. As the computation is performed by single-qubit measurements alone, the primary challenge of this computational model is the preparation of this highly entangled cluster state, which can be technically demanding.
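
A minimal sketch (mine) of this graph description: prepare |+> on every node and apply a CZ across every edge.

```python
import numpy as np

def cz_on(n, a, b):
    """CZ between qubits a and b of an n-qubit register (a -1 phase
    on every basis state in which both qubits are 1)."""
    U = np.eye(2 ** n)
    for idx in range(2 ** n):
        if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
            U[idx, idx] = -1
    return U

n = 4
edges = [(0, 1), (1, 2), (2, 3)]           # linear graph: a 4-qubit chain

plus = np.ones(2) / np.sqrt(2)
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)           # |+> on every node

for a, b in edges:
    state = cz_on(n, a, b) @ state         # entangle along each edge

# 'state' now holds a 4-qubit linear cluster state.
```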

The information processing is done via sequential measurement of the qubits in particular bases. The correct algorithm is performed if all the measurement outcomes are +1; however, given the probabilistic nature of quantum mechanics, this is not always the case. We can steer the computation back to its correct subspace by applying Pauli corrections to subsequent measurements. Measurement results therefore determine the basis of the following measurements on other qubits. Finally, the product of the computation is read out by one last measurement in the computational basis.
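
The elementary step of the one-way model can be spelled out numerically. In this sketch (my own, for a two-qubit cluster), measuring the first qubit in the rotated basis (|0> ± e^{iφ}|1>)/√2 implements the gate H R_z(−φ) on the remaining qubit, up to an outcome-dependent Pauli X correction:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1.0, 1, 1, -1])

def rz(phi):
    return np.diag([1, np.exp(1j * phi)])

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                 # arbitrary input state on qubit 1

phi = 0.3                                  # measurement angle
plus = np.ones(2) / np.sqrt(2)
state = CZ @ np.kron(psi, plus)            # two-qubit cluster carrying |psi>

for m in (0, 1):                           # the two possible outcomes
    # <m_phi| = (<0| + (-1)^m e^{-i phi} <1|)/sqrt(2), applied to qubit 1
    bra = np.array([1, (-1) ** m * np.exp(-1j * phi)]) / np.sqrt(2)
    out = np.kron(bra, np.eye(2)) @ state  # unnormalised state of qubit 2
    out /= np.linalg.norm(out)
    target = np.linalg.matrix_power(X, m) @ H @ rz(-phi) @ psi
    assert np.isclose(abs(np.vdot(target, out)), 1.0)   # equal up to global phase
```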

In the figure below (copyright 2001 by the APS), we can see an example of a quantum algorithm performed using the MBQC model. We initially start with a rectangular cluster state of 6 × 12 qubits, where qubits are located at the vertices of the grid (grid lines are not shown). Despite starting with 72 physical qubits, this algorithm is actually performed on only 3 logical qubits, shown by the three horizontal yellow arrows. The yellow paths between the logical qubits represent two-qubit gates as explained in the circuit model, and the rest of the single-qubit gates are performed by measuring the qubits in the X-Y plane.


This model of computation has an enormous technical advantage over the classic circuit model, which makes it very appealing for implementing quantum computation in certain physical systems. The cluster states can be produced offline, and the computation is performed only once the resource has been prepared correctly. For many physical systems, performing entangling gates is the most challenging part of the computation, but if we post-select on the successful preparations of the resource state, this model substantially increases the probability of a successful computation. All that is then required is to perform single-qubit gates with high fidelity, which is less technologically demanding for many physical systems, such as superconducting qubits, ion traps and linear optics.

MBQC model key facts

State Space: A cluster state computation operates on n\times k physical qubits (or two-level quantum systems) for a quantum algorithm with n logical qubits. Its state space is a 2^{n\times k}-dimensional complex Hilbert space. The computational basis states are defined as product states of the form |x_1, \dots , x_{n\times k}\rangle, where x_i = 0, 1. The graph state representation is usually used.

State Preparation: Before the computation starts a resource state must be prepared. This resource state is a highly entangled multi-particle state (cluster state), which is the same (except for its size) for every computation.

Quantum Gates: An entangling gate is performed on qubits in a product state to build the resource state. Afterwards, only single qubit measurements are needed.

Measurements: Single-qubit measurements can be performed in an arbitrary basis. (Or, if only measurements in the computational basis can be performed, single-qubit rotations must be performed alongside.)

Classical Computation: Classical computation alongside the quantum computations is a key feature of this model, as the basis of the measurements performed sequentially depends on the results of previous measurements.

Procedure of the computation: The entire resource for the quantum computation is supplied at the beginning of the computation. The information is then processed by carrying a series of adaptive single qubit measurements.


Adiabatic model

This model represents a new paradigm for quantum computation, recently proposed, based on quantum adiabatic evolution. Computation in this model is not performed by applying gates or measurements to qubits; rather, the algorithm starts from a disordered state and arrives at a solution to the problem by performing what can be understood as a local quantum search. The procedure for the computation is as follows:

• At time t = 0, the quantum mechanical system is described by a Hamiltonian H_E, whose eigenstates are easy to compute.

• The system is slowly transformed to the final Hamiltonian H_P at time t = T, whose ground state encodes the solution to the problem that needs to be solved. This process can be described by a time-dependent Hamiltonian

H(t) = A(t)H_E + B(t)H_P .

where A(t) and B(t) are slowly varying monotonic functions such that A(0) = 1,\  B(0) = 0 and A(T) = 0,\  B(T) = 1. According to the adiabatic theorem, if the evolution is slow enough, i.e. T is long enough, and if there is a gap between the ground state eigenvalue and the rest of the Hamiltonian’s spectrum, the state of the system at time t = T will correspond to the ground state of H_P, thus producing the solution to the problem.

• Measurement of the ground state allows for the extraction of the solution.

This model has been proven to be universal for quantum computation.
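
A one-qubit cartoon of this procedure (my own sketch, taking A(t) = 1 − t/T and B(t) = t/T): sweep from H_E = −σ_x, whose ground state |+> is easy to prepare, to H_P = −σ_z, whose ground state |0> plays the role of the solution.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H_E, H_P = -X, -Z                          # initial and problem Hamiltonians

def adiabatic_sweep(T, steps=2000):
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # ground state of H_E
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps              # s = t/T runs from 0 to 1
        Ht = (1 - s) * H_E + s * H_P       # H(t) = A(t) H_E + B(t) H_P
        psi = expm(-1j * Ht * dt) @ psi    # Schroedinger evolution for one step
    return abs(psi[0]) ** 2                # overlap with |0>, ground state of H_P

print(adiabatic_sweep(T=0.1))              # nearly sudden sweep: overlap ~ 0.5
print(adiabatic_sweep(T=50.0))             # slow (adiabatic) sweep: overlap ~ 1
```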

Adiabatic model key facts

State Space: Assuming the n particles to which the Hamiltonian is applied are two-level systems, the state space is 2^n-dimensional.

State Preparation: The state is prepared in the ground state of the initial Hamiltonian, which is easy to prepare.

Quantum Gates: Once the state is prepared, no gates per se are applied, but rather, the Hamiltonian is adiabatically changed from the initial to the target Hamiltonian.

Measurements: A single measurement of the state of each quantum particle in the system suffices to read out the solution to the problem, which is encoded in the eigenstate of the final Hamiltonian.

Classical Computation: Classical computation is required to find the appropriate Hamiltonian encoding the problem, into which the system will be adiabatically evolved.

Procedure of the computation: The computation is performed by slowly (adiabatically) varying the Hamiltonian, from the initial (easy to prepare) to the final, which encodes the problem.



Promising implementations of quantum computers

Quantum computers have attracted a lot of interest and funding over the last few years, as they are expected to unlock an entirely new set of answers in the fields of physics, computer science, chemistry and materials science. In a previous post I wrote about how we should define a quantum computer in theoretical terms, stepping away from definitions based on the hardware. Its implementation in practice is subject to wide debate, as the scientific community does not agree on which physical system constitutes the best option for the implementation of quantum information processors, and there are currently a few candidates that show promise.

Choosing the physical system that will ultimately be the main platform for quantum computers is not easy. Technological problems that may seem insurmountable today might be solved in a few years’ time. However, as quantum computers are physical devices, the laws of physics ultimately dictate what they can and cannot do. The amount of information that classical computers are capable of processing, and the rate at which they do so, has doubled every 18 months for the last 40 years, an observation known as Moore’s law. However, Moore’s law is not a law of Nature but rather an observation of human ingenuity (and economic power), and it is expected to reach saturation soon: Intel has already confirmed that their cadence in chip production has slowed.

The largest transistor count in a single CPU today is about 5.5 billion, with individual transistors of the size of ~O(10) nm; even with quantum processors, machines with more than a trillion components do not seem physically feasible. There are other types of constraints too: if all our components need to be at mK temperatures, the size of the quantum computer will be restricted by cooling ability (it is true that there exist large-scale machines which operate at ~2 K, such as CERN, but they would not be considered efficient in the sense we describe here); the clock speed (number of operations per second) will be limited by the amount of available energy in the system, but more energy means more noise; and entropy limits the amount of information that can be processed. The ultimate limits of computation are given by the laws of physics, but there is no guarantee that these limits can really be reached.

Various quantum technologies have been considered as good candidates for building quantum computers. They each have their own advantages and challenges, and it is not clear today which will be the final technology; it might not even be just one but a combination of several. In this entry, I will briefly mention the four technologies that (in my view) are most promising. Despite their differences, they have one significant factor in common: they are compatible with microfabrication techniques which will allow each architecture to become modular and be made from regular-sized chips.

Ion traps with photonic links

Charged atoms can be controlled and manipulated in macroscopic traps with a very high degree of accuracy. Excellent control has been achieved in macroscopic ion traps, with nearly 100% fidelity in all gates; however, current implementations face a harsh scalability constraint: only a bounded number of ions can be trapped and individually addressed in the trap. The networked model of quantum computation, in which cells with a small number of qubits are interconnected to perform quantum computation, is particularly well suited to this technology, and full-scale architectures have been proposed. The entanglement between different cells is obtained via entangling operations on photons emitted by ions in the different traps. However, this operation is very slow (~300 times slower than any other gate) and uses large photonic switching networks, which rapidly increase the photon loss rate. New very low-loss photonic switches and better entangling operations are needed for this technology to be feasible on a large scale. A newer approach to overcoming the scalability constraint is that of integrated ion traps, in which standard semiconductor processing techniques can be used to fabricate micrometer-scale surface-chip traps.

Superconducting qubits

Superconducting systems exhibit generic quantum properties commonly associated with atoms, such as quantized energy levels, entanglement, and superposition of states. As such, artificial atoms can be engineered from these systems, and exquisite control can be achieved by using electromagnetic pulses. Recent demonstrations show the ability to perform single-qubit gates with 99.92% fidelity and two-qubit gates with 99.4% fidelity. Moreover, these fidelities are within the fault-tolerance threshold of the surface code, which has allowed a small, five-qubit experimental implementation of the surface code. Although this implementation of quantum computing benefits from microfabrication of the devices, it has a number of shortcomings. The most important are the cross-talk between nanowires, which hinders the construction of three-dimensional qubit structures (considered more advantageous for the implementation of fault tolerance), and the fact that these systems operate at mK temperatures, which limits the number of qubits that can be implemented due to the limited cooling capacity.

Linear optics

Single photons are very good carriers of information, with low decoherence rates and very high single-qubit gate fidelity. Non-deterministic two-qubit operations and photon loss are a challenge for current technologies, but a series of theoretical breakthroughs in recent years (in which the resources required have been lowered by several orders of magnitude), together with technological advances, makes this physical system a competitive candidate for quantum computing. This architecture uses a percolation approach to the measurement-based model in order to counteract the effects of loss and probabilistic entangling operations. Integrated optical devices can be nano-fabricated, and the ability to miniaturize on the order of a million linear optical elements on a single chip is a very promising sign for the construction of linear optical quantum computers with millions of elements per silicon chip in the future. An advantage of this type of architecture is that low temperatures are only required for the single-photon detectors, and it is envisioned that in the near future the entire architecture can be implemented at room temperature.

Spin donors in silicon

Solid-state systems of spin donors in silicon can be used to build quantum computers. Quantum information can be encoded in single phosphorus atoms embedded in a silicon substrate, and qubit gates can be implemented by controlling the spin resonance of the donors and the exchange interaction between different donors. Excellent control fidelity has been shown for single-qubit gates; the achieved 99.6% fidelity falls within the requirements of fault-tolerant quantum computing. A scalable integrated two-qubit gate with high enough fidelity is yet to be demonstrated; however, remarkable progress has been made in recent years towards this goal. An architecture resembling the networked model, entirely in silicon, has been proposed, in which donors are laid out in a 2D array. In each cell, single-qubit gates are performed by low-power electric and magnetic drive at microwave frequencies, while two-qubit gates are mediated by the induced electric dipole. The cells are connected via long-range (mm distance) two-qubit gates enabled by microwave resonators and photonic links.

In a series of posts over the next few weeks, I will study each of these four proposals in detail, attempting to answer the following questions:

  • What are the resources used?
  • What are the key results of this architecture?
  • How do the different elements integrate together?
  • What are the open problems, and why are they open?
  • What is the rough resource count for implementing a particularly interesting problem?

Please comment below if there are any other questions you’d like me to (attempt to) answer! For each architecture, I will also provide a list of academic papers where more detailed information can be found.