On Monday morning, during a conference at the National Science Foundation, it was announced that the merger of two neutron stars had been observed for the first time, both through gravitational wave detection and across the entire electromagnetic spectrum. This is a tremendous achievement: it is the first time that the gravitational wave signal of the merger of such objects has been observed (until now it was only black holes), and it has also confirmed predictions about the electromagnetic emissions in such events. What was until now only a prediction, we have observed directly: the burst of highly energetic gamma rays produced during these events, as well as the production of over 40 heavy elements, including gold, platinum, and uranium.

These experimental results were obtained by a combination of over 70 Earth-based observatories on all continents and seven space observatories. It is an incredible example of the power of scientific collaboration across many countries in high-risk, high-reward science, much like the LHC effort that confirmed the existence of the Higgs boson. For the first time, we have electromagnetic and gravitational data for the same astronomical event, which opens the new field of multi-messenger astronomy and will help us understand the universe at a whole new level. Our movies of the stars just got sound.

On August 17 at 12:41:04 UTC, the LIGO and Virgo detectors recorded a gravitational wave signal lasting 100 seconds. This signal was much longer than the ones previously detected, indicating that the objects producing it were much lighter than black holes.

About 1.7 seconds later, Fermi, the orbiting gamma-ray telescope, detected a burst of gamma rays. It was the first time that these two signals had been detected for the same astronomical event. The gamma rays and gravitational waves traveled together for 130 million years and arrived within 2 seconds of one another, both moving at the same speed, as predicted by Einstein.
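The arrival times quoted above imply an extraordinarily tight bound on any difference between the speed of gravity and the speed of light, which a quick back-of-the-envelope calculation makes concrete (a sketch with approximate figures, not the published analysis):

```python
# A rough bound on how much the speed of gravitational waves can differ
# from the speed of light, given that both signals traveled for about
# 130 million years and arrived within ~2 seconds of each other.
# All figures are approximate, for illustration only.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
travel_time = 130e6 * SECONDS_PER_YEAR  # journey duration in seconds
arrival_gap = 2.0                       # observed arrival delay in seconds

fractional_speed_difference = arrival_gap / travel_time
print(f"|v_gw - c| / c is at most ~{fractional_speed_difference:.0e}")
```

The two speeds agree to better than one part in a thousand trillion, which is why this single event ruled out whole families of alternative theories of gravity.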

The three-detector network of gravitational wave observatories, together with Fermi, was able to pinpoint the region of the sky that had witnessed the event, and the astronomical observatories that detect in the electromagnetic spectrum turned their eyes to it. About 10 hours and 52 minutes after the gravitational wave detection, a new visible light source was found in the galaxy NGC 4993 in the Hydra constellation. Further observations across the entire electromagnetic spectrum followed, including infrared emission (+11 hours and 36 minutes), ultraviolet (+15 hours), X-rays (+9 days) and radio (+15 days).

Astronomers now have an understanding of what happened, pieced together from the emissions detected by the telescopes. The pair of neutron stars were spinning around one another while orbiting in their galaxy. As they spun, they lost energy through the emission of gravitational waves and drew closer and closer to each other. It was only when they were about 200 miles apart, with the merger imminent, that the gravitational waves they emitted became strong enough for our detectors to pick up. About ten milliseconds before the merger, the neutron stars started feeling tidal forces, and up to a few seconds after the merger matter was violently ejected into the interstellar medium; this ejection was detected as a burst of gamma rays. The ejected matter was the perfect site for heavy elements to form, and the decay of some of these elements powered the light observed in the blue, red and near-infrared spectrum. The features of the light spectrum observed over the days following the gravitational wave detection result from the overlap of the spectra emitted by different elements. We now have confirmation that over 40 elements of the periodic table, including gold, platinum, and uranium, are formed in merging neutron stars.

Gravitational waves are perturbations in spacetime caused by the movement of massive objects; they are particularly strong during the collision of very large astronomical bodies, such as black holes and neutron stars. Gravitational waves travel through the fabric of spacetime like ripples in water. However, unlike water, spacetime is incredibly stiff: a gravitational wave caused by the merger of two black holes would only change the distance from Earth to Alpha Centauri by the width of a human hair. To detect these incredibly weak signals, the LIGO and Virgo observatories use arms that are miles long to boost the signal. A laser beam is split in two, and the two beams travel back and forth along the two tunnels for exactly the same distance before being recombined. If the distance traveled by the beams has changed in the slightest, due to space being stretched by a gravitational wave, the interference pattern of the recombined light will give it away. The accuracy of these instruments is mind-boggling, and the science that makes them work is even more fascinating: they use quantum phenomena to measure beyond the accuracy of our classical tools. It is the science governing the smallest pieces of the universe that helps us understand the physics of the largest bodies. But this deserves a closer look altogether; for now, we can truthfully say that these observatories are the biggest and most accurate rulers in existence.
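To get a feel for how small the signal is, multiply a typical detected strain by the length of a LIGO arm (a back-of-the-envelope sketch; the strain value is the order of magnitude usually quoted for detections, not data from this event):

```python
# Back-of-the-envelope estimate of the length change a gravitational wave
# induces in a LIGO arm. The strain value is the typical order of magnitude
# quoted for detected events; this is an illustrative sketch, not real data.
strain = 1e-21            # dimensionless strain h
arm_length = 4e3          # LIGO arm length in metres
proton_radius = 0.85e-15  # metres, for comparison

displacement = strain * arm_length  # change in arm length, in metres
print(f"Arm length change: {displacement:.1e} m "
      f"(~{displacement / proton_radius:.4f} proton radii)")
```

The arms stretch and shrink by a small fraction of the radius of a proton, which is why the interferometric trick of comparing the two arms against each other is essential.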

Gamma rays are the most energetic electromagnetic emissions. We have specialized telescopes to look for these rays, both Earth-based telescopes and Earth-orbiting telescopes such as Fermi, which observes the entire sky (except the part blocked by the Earth at any given point). Fermi detects flashes of gamma rays about twice a day; these are usually caused by thunderstorms, flares from our sun and exotic stars from our galaxy. A few hundred detections a year, however, are huge explosions coming from galaxies far away. These are conjectured to be caused by the expulsion of matter at high speed during the creation of black holes and the merger of neutron stars, but there was no experimental evidence to back that claim. Until now!

Neutron stars are the remnants of massive stars, about ten to twenty times the mass of the Sun. These stars die after they run out of fuel for the nuclear fusion reactions that make them shine: the core collapses under its own gravity, while the outer layers are violently expelled, resulting in a supernova. The core shrinks to the size of a small city, and the pressure inside is so large that atoms disintegrate into their constituents, with protons and electrons recombining into neutrons.

The event observed in August was the first direct observation of the merger of a pair of these stars. However, we have long had experimental evidence of their existence. They were first observed by Jocelyn Bell Burnell and Antony Hewish in 1967, who identified them as pulsars: radio-wave pulsing stars. In fact, they don’t emit radio waves in pulses; rather, they do so continuously. However, as they rotate around their own axis (which often does not coincide with the axis of the emission), we can only observe their waves when the beam is pointed towards Earth. They are akin to lighthouses in deep space.

In 1974, Joseph Taylor and Russell Hulse discovered the first binary system of neutron stars orbiting around one another. Radio observations showed that the orbit was unstable and spiraling inward, at exactly the rate that Einstein predicted. Their observation provided the first firm evidence that gravitational waves must exist in nature, confirming Einstein’s predictions. Those neutron stars were over a million miles apart and won’t merge for another 300 million years. In contrast, the neutron stars of the binary system that starred in August’s event began emitting detectable gravitational waves when they were 200 miles apart, and 100 seconds later they merged.

The era of multi-messenger astronomy has started, and we expect to gain a much deeper understanding of the universe by combining gravitational wave signals with detections from other sources. If anything, I hope that Monday’s announcement will inspire many old and young minds to learn more about the universe that surrounds us, because it is still full of secrets.

I thoroughly recommend watching the live announcement and sharing in the excitement of the physics community. There are, first and foremost, scientific articles describing the discovery, press releases by all parties involved, numerous first-hand accounts from the scientists that took part in this experiment, as well as many popular-science articles on the topic; I hope you find the reading of these accounts as interesting as I have.

For now, I will leave you with this intriguing musical piece that my talented friend Cora Miron composed, inspired by these astronomical discoveries.


“Science is the organized skepticism in the reliability of expert opinion.” – Richard Feynman

The cornerstone of science is the fact that it is falsifiable, which means that any scientific theory can be proven wrong by new discoveries. Scientific theories of the natural world evolve through the ages because we learn of new facts that prove the old theories wrong, and the scientific community needs to seek new answers. Examples of that are Kepler’s laws of planetary motion, quantum mechanics and general relativity. It would be easy to jump to the conclusion that there are no scientific facts or absolutes, and that it’s all a matter of interpretation. This is not true: a scientific fact is a measurable and verifiable observation, such as the Earth being round, the speed of light being constant or the existence of subatomic particles. On the other hand, a scientific theory is an *interpretation* of the verifiable facts, which means that if we lack a relevant fact, we might arrive at the wrong conclusion.

Scientists use the scientific method to arrive at their theories. They observe a fact of nature, formulate a hypothesis as to why it happens, determine the logical implications and make predictions following the hypothesis, and finally test those predictions. This process can be performed several times, and once all the predictions are proven correct, the theory is held as true until a contradicting fact comes along. A single contradicting, provable fact is enough to falsify a theory, and therefore scientists have to be careful and thorough in their analysis. It’s one of the things I love most about science: opinions or beliefs do not matter, only evidence. We are only explorers of the world around us, seeking understanding.

“The first principle is that you must not fool yourself, and you are the easiest person to fool.” – Richard Feynman

Understanding the principles of the natural world equips us to improve our quality of life and push our technological development; however, the importance of scientific progress has not always been understood. It was not until the twentieth century that science became more broadly publicly funded. In the past, scientists fell mainly into three categories:

- Scientists who had the means to support themselves, or whose families would foot the bill, such as Sir Isaac Newton, Charles Darwin and Tycho Brahe.
- Scientists who sought patronage from the aristocracy, like Rene Descartes, Johannes Kepler and Galileo Galilei.
- Scientists who worked a second job to support their scientific endeavours, such as Gregor Mendel, Benjamin Franklin and Michael Faraday.

Nowadays, governments invest a sizeable portion of their GDP in Research & Development (R&D) programs. In fact, there is a correlation between the amount of funding a country invests in R&D and its Human Development Index (HDI). The tables below show the 20 highest-ranking countries by HDI (on the left) and the 20 highest-ranking countries by R&D expenditure per capita. I’ve marked with a star those that appear in both lists: 75% of them. The other 5 countries that spend highly on R&D but do not appear in the HDI top 20 are ranked in the top 30.

This is why I attended the March for Science: scientific progress has an unparalleled influence on our development as a society. Our ability to cure diseases, build infrastructure, mass-produce food or design electronic devices comes from basic research that sought a deeper understanding of nature. It is in our best interest to keep funding basic research.

_______________________________________________________________________________

Here is a selection of the best posters I saw at the March for Science in San Francisco; there are more excellent posters from other marches around the world on various social media platforms.


As a woman and a physicist, it bothers me that other girls and women might not feel welcome in the field. We still need to overcome challenges before we are in a position of equality with our male colleagues, and I (as I am sure many others like me) often think about the best way in which I can contribute. There is an ever-growing community of scientists, both men and women, who champion equal opportunities and put in place programs that help this effort. Our greatest strength is our community: the mentors who help and champion students and young scientists, and the role models who increase representation in the field and encourage new generations.

We want to shine the spotlight on the achievements of female scientists, in particular, those that have had to overcome particularly dire circumstances. But I think we should be careful that we don’t portray them as superhumans who have succeeded where no other person could, as by doing so we risk intimidating the same people we want to encourage. Science should not be the field of the brave but of the interested. We want to inspire young minds, and we want *all* the young people interested to feel they have a place in the scientific community.

I loved watching the film “Hidden Figures”, and one of the things I enjoyed the most was the companionship and support the women offered each other. And I loved it not only because it was inspirational and moving, but because this is a part of my experience as a female scientist that I don’t see highlighted often. Marie Curie, Rosalind Franklin, Emmy Noether, Lise Meitner: they all had to face challenges on their own, but nowadays we have an extraordinary community of women doing scientific research, championing each other and looking for ways of levelling the playing field in the sciences. During my research in quantum computing, I’ve met incredibly smart, kind and all-around amazing women, some of whom I’m now lucky to call my closest friends. As female scientists of 2017, we still have challenges, but we are no longer alone.

Some groups and resources for women in physics:

- Expanding Your Horizons Network
- The Roster of Women and Minorities in Physics – APS
- Women in Physics Group – IOP
- Women Speakers List – APS
- Conferences for Undergraduate Women in Physics
- Women in Physics – LinkedIn group
- Women in Research – Facebook group
- Alice: Women in Quantum Information – Facebook group
- Statistics on Women in Physics
- Data on the education and employment of women in physics in the U.S.


Noether was really a mathematician, one of the leading founders of abstract algebra, who approached problems in a completely novel way. Her work in physics, which had so caught my attention, was just something she did on the side, to help a poor physicist who couldn’t figure out the maths of his new theory. The result was Noether’s theorem, which says that for every continuous symmetry of a physical system, there exists an associated conservation law. Conservation laws are fundamental in physics, as they allow us to determine which phenomena can or cannot happen. Noether’s theorem links them to the symmetries of the system and allows us to determine which physical quantities are conserved purely from the properties of the Lagrangian, a function of the energy of the system. Many conservation laws were already known, such as the conservation of energy and momentum of a closed system, but Noether’s theorem resolved paradoxes in those laws arising in new theories of physics, such as general relativity. To say this theorem is an important result in physics is an understatement.
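For readers who want the statement in symbols, here is a standard textbook form of the theorem for a point-particle Lagrangian (my notation, and a simplified mechanics version of the full field-theoretic result):

```latex
% If the Lagrangian L(q, \dot q, t) is invariant under the continuous
% transformation q^i \to q^i + \epsilon K^i(q), then the quantity
%   J = \sum_i \frac{\partial L}{\partial \dot q^i} \, K^i(q)
% is conserved along the equations of motion:
\frac{\mathrm{d}}{\mathrm{d}t} \sum_i \frac{\partial L}{\partial \dot q^i} \, K^i(q) = 0 .
```

For example, invariance under spatial translations (constant $K^i$) yields conservation of momentum, and invariance under time translation yields conservation of energy.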

Emmy grew up in a family of mathematicians who somehow failed to notice her aptitude and didn’t encourage her to pursue mathematics. She was training as a language teacher when she became fascinated by mathematics and started attending lectures at the University of Erlangen. As a woman, she could not officially enrol, so she simply audited the lectures. Some years later women were officially allowed to take classes, but policy on women’s rights would always have a hard time catching up with her. For some time after passing her doctoral thesis, she was only allowed to teach classes at the University of Göttingen that were advertised as Hilbert’s. Years later, she was able to gain a position at the university, although a badly paid one. She would never become a full professor in Germany, or even earn the wages due for her work. Forbidden from teaching at the University of Göttingen in 1933 due to her Jewish heritage, she moved to Bryn Mawr, a women’s college in the USA. Her time at the college was accompanied by difficult circumstances: she couldn’t teach graduate courses or find a permanent position, she had health problems, and the political situation in Germany was increasingly bad. However, she saw things differently and said that “the last year and a half had been the happiest in her whole life, for she was appreciated in Bryn Mawr and Princeton, as she had never been appreciated in her own country”. Unfortunately, she was to die soon after, from complications of a surgery to remove a tumour.

When reading about Noether’s life, it was her personality that struck me the most. She was a fantastic woman and an incredible mathematician, whose informal lifestyle caused many jokes that she would simply ignore. Her appearance, dress and weight were usually commented upon; so was her voice, deemed “loud and disagreeable” because it was not soft and refined like other women’s.
She cared enormously for her students, with whom she shared her ideas and whom she taught with passion and enthusiasm, regardless of their political position (to the point that one of her students used to come to her house to take class wearing a Nazi brown SA shirt). Her students held her in high esteem, as she made them feel like she was one of them, “almost as if she too were thinking about the theorems for the first time.” She applied to both mathematics and life a general principle of simplification and removal of the unnecessary. She wore comfortable men’s shoes and coats, and for a time she went six days a week to eat the same dinner at the same time, at the same table of the same restaurant. According to Noether’s only American graduate student, “her methods of thinking and working were simply a reflection of her way of life: that is, to recognise the unnecessary, brush it aside and enter wholeheartedly into the present”.

She was also held in high esteem by her colleagues, and it was thanks to their continuous campaigning that she was able to get her teaching positions, first at the University of Göttingen and later at Bryn Mawr College and the Institute for Advanced Study in Princeton (although sadly she died before she could join the latter). Hermann Weyl, a professor in Göttingen before the Second World War, said that he was “ashamed to occupy a preferred position beside her, whom I knew to be my superior as a mathematician in many aspects”. After she made important contributions to Einstein’s theory of General Relativity, Einstein wrote to Hilbert: “Yesterday I received from Miss Noether a very interesting paper on invariants. I’m impressed that such things can be understood in such a general way. The old guard at Göttingen should take some lessons from Miss Noether! She seems to know her stuff.” He was later to write, in her obituary for the New York Times: “In the judgment of the most competent living mathematicians, Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.”

Noether, unlike some of her male colleagues, did not receive much recognition during her life and was instead criticised for many unimportant things. As the historians Crease and Mann said: “Had Noether been a man, her appearance, demeanour, and classroom behaviour would have been readily recognised as one of the forms that absent-minded brilliance frequently assumes in the males of the species”. I find Noether inspiring. Inspiring because of her achievements, which are made particularly striking given the tidal forces she had to fight against to pursue her lifelong passion for mathematics. Inspiring for her drive, her attitude towards students and colleagues, her dismissal of criticism. Inspiring because of her beautiful mathematics.

Today’s post is a celebration of Ada Lovelace Day, an international celebration of the achievements of women in science, technology, engineering and maths (STEM)!

Most of the biographical anecdotes in this post come from the book *Nobel Prize Women in Science: Their Lives, Struggles, and Momentous Discoveries* by Sharon Bertsch McGrayne.


The conference took place at the Centro de Ciencias de Benasque Pedro Pascual, a facility for hosting workshops and scientific meetings, and truly a dream come true for physicists at a conference. There were blackboards everywhere: conference theatre, meeting rooms, corridors, outside blinds… It has all the facilities needed for scientific work and discussion, so there is really no excuse not to talk about physics all day long. The building itself has a very interesting design and was built with sustainability in mind. The Centre is named in honour of the Spanish physicist Pedro Pascual, whose Quantum Mechanics book, co-authored with Alberto Galindo, I thoroughly studied as an undergrad at the Universidad Complutense de Madrid. Benasque itself is charming, full of hikers, incredible scenery and good food. There were some complaints about the time it took to have lunch, but what can I say: it’s a holiday town in Spain, and restaurants assume the diners want to relax and enjoy the food. However, for the people who couldn’t wait to get back to the blackboards, there was always the option of grabbing some tapas.

The meeting was really fantastic, from the content of the talks to the atmosphere throughout the two weeks. On top of the welcome drinks and conference dinner, there were some great activities organised, such as a couple of group hikes, an ascent to Aneto (the tallest peak in the Pyrenees), canyoning and an AMA Reddit session.

On top of all these activities, there was plenty of free time for work and discussion, which is mostly missing at other conferences. This free time, combined with the group discussions, truly gave us the opportunity to learn new concepts and work together. Speaking for myself, not being an expert in Quantum Error Correction, I came back from the conference with a much better understanding of many concepts, in particular cellular automata decoders, which featured in several talks (including a video demonstration by Barbara Terhal, shown in the GIF below). The concept of algorithms using cellular automata in quantum information processing is very powerful, particularly when considering cluster state computations or topological error correction, where the information is stored in global degrees of freedom and can be acted upon with local operations.

The biggest highlight of this workshop was, for me, the extensive discussion around experiments. There were several talks dedicated to the topic:

– Steve Flammia: Comparing Experiments to the Fault Tolerant Threshold

– Hector Bombin: On the effect of correlated noise

– James Wootton: A decoding algorithm for proof of principle experiments

– Ying Li: Resource cost of fault-tolerant quantum computing in hybrid quantum computers and LOQC

– Yours truly: Fault-tolerant considerations when designing a linear-optical quantum computer

– Hugo Cable: Minimised switching in linear-optical quantum computing

– James Auger: Topological Error Correction with Faulty Components

– Joe O’Gorman: Realistic magic state distillation factories

Also, there were some technical discussions on experimental implementations of quantum computers, as well as which codes should be the first to be implemented in small scale experiments.

We are currently at a very exciting point in the development of quantum computers. Experiments are starting to get large enough that some small codes can be tested on them. Proof-of-principle experiments of topological codes have been implemented with superconducting qubits, as well as with photons and ion traps. However, the community is not in agreement on which codes are the most useful and what scope there is to find yet better error correction codes. On top of that, it might be the case that the different constraints of the various physical systems make it impossible for a single code to be optimal for all. The good news is that, now that the Quantum Error Correction and experimental communities are engaging so much with each other, we can expect vast improvements in the performance of small quantum computers thanks to codes tailored to the specific requirements of the physical systems.

Finally, I would like to thank the organisers – Dan Browne, Earl Campbell and Michael Kastoryano – for such a fantastic experience; I look forward to future editions of the workshop!

I don’t want to leave the post without mentioning the game Decodoku, a browser and mobile citizen-science game based on Quantum Error Correction, which was advertised at the conference. It’s presented as a series of puzzles reminiscent of the popular Sudoku, 2048 and Threes, but in which the problems solved mimic the effects of noise on a topological code. Good strategies for solving these puzzles efficiently could potentially become new decoding algorithms, which gives an excellent excuse for the time spent playing. If you find out you are really good at it, let the developer (James Wootton) know.


The primary logical units of a quantum computer are quantum bits, or qubits, which are two-level quantum systems that can be used to store and process quantum information. The two levels are usually represented by $|0\rangle$ and $|1\rangle$, which are known as the computational basis states (in Dirac notation). The main difference between bits and qubits is that the latter can be in a linear combination, usually referred to as superposition, of the two basis states. Hence the most general representation of a qubit is:

$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,$

where $\alpha$ and $\beta$ are, in general, complex coefficients. When a qubit is measured in the computational basis, the results 0 or 1 are obtained with probability $|\alpha|^2$ and $|\beta|^2$ respectively. As these probabilities must add up to one ($|\alpha|^2 + |\beta|^2 = 1$), we have a normalisation restriction on the coefficients $\alpha$ and $\beta$ that can be geometrically understood as the condition that the qubit’s state has length one. We can take this geometric interpretation a bit further and parametrize the quantum state in spherical coordinates

$|\psi\rangle = \cos(\theta/2)\,|0\rangle + e^{i\phi}\sin(\theta/2)\,|1\rangle,$

and understand the qubit as a vector in a sphere of radius one. This sphere is usually referred to as the Bloch sphere, shown in the figure below. Note that in this representation, orthogonal states are diametrically opposed rather than at right angles.
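The normalisation and measurement rules described above are easy to check numerically (a small NumPy sketch; the angle values are arbitrary examples):

```python
import numpy as np

# A qubit alpha|0> + beta|1> as a length-2 complex vector, parametrized
# by the Bloch sphere angles theta and phi.
theta, phi = np.pi / 3, np.pi / 4            # arbitrary example angles
alpha = np.cos(theta / 2)                    # amplitude of |0>
beta = np.exp(1j * phi) * np.sin(theta / 2)  # amplitude of |1>

p0 = abs(alpha) ** 2  # probability of measuring 0
p1 = abs(beta) ** 2   # probability of measuring 1

# Normalisation: the state has length one, so the probabilities sum to 1.
print(p0, p1, p0 + p1)
```

For any choice of the Bloch angles, the two probabilities always sum to exactly one, which is the geometric content of the normalisation condition.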

The three Cartesian axes of the Bloch sphere correspond to the eigenstates of the Pauli matrices:

$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$

where the matrices are written in the computational basis. The eigenstates of the $Z$ operator are the computational basis states $\{|0\rangle, |1\rangle\}$, whereas the eigenstates of $X$ and $Y$ are $\{|\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}\}$ and $\{|{\pm i}\rangle = (|0\rangle \pm i|1\rangle)/\sqrt{2}\}$ respectively.

Single-qubit logical gates can be understood as transformations (rotations and reflections) of the states in the Bloch sphere. The most used single qubit gates are:

• Hadamard gate: $H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$

• Phase gate: $S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}$

• π/8 gate: $T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}$

• Rotations with respect to one of the Cartesian axes: $R_j(\theta) = e^{-i\theta \sigma_j / 2}$, where $\sigma_j \in \{X, Y, Z\}$.
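The rotation gates are easy to verify numerically using the identity $e^{-ia P} = \cos(a)\,I - i\sin(a)\,P$, valid for any matrix $P$ with $P^2 = I$, which all Pauli matrices satisfy (a NumPy sketch with an arbitrary angle):

```python
import numpy as np

# Rotation about the z axis: R_z(theta) = exp(-i theta Z / 2).
# Since Z is diagonal, its matrix exponential is just the elementwise
# exponential of the diagonal entries.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

theta = 0.7  # arbitrary rotation angle

Rz = np.diag(np.exp(-1j * theta * np.diag(Z) / 2))
# The same operator via the identity e^{-i a P} = cos(a) I - i sin(a) P:
Rz_identity = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Z

print(np.allclose(Rz, Rz_identity))  # prints True
```

The same identity gives closed forms for $R_x$ and $R_y$; only the diagonal shortcut in the first expression is specific to $Z$.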

The most commonly used two-qubit gate, the controlled-NOT (CNOT), has the same truth table as the classical XOR gate: it flips the target (second) qubit when the control (first) qubit is in the state 1:

$\mathrm{CNOT}\,|a\rangle|b\rangle = |a\rangle|a \oplus b\rangle, \quad a, b \in \{0, 1\}.$

The use of the CNOT gate in conjunction with some of the single-qubit gates can produce entangled states, which show correlations with no equivalent in classical computation. For example, the action of a CNOT gate with the Hadamard gate on a pair of computational basis qubits yields:

$\mathrm{CNOT}\,(H \otimes I)\,|0\rangle|0\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right).$

This state (which is one of the four maximally entangled states referred to as Bell pairs), cannot be written as the product of two single-qubit states.
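The construction of this Bell pair can be verified numerically with plain matrix arithmetic (a NumPy sketch; the first qubit is the control):

```python
import numpy as np

# Bell pair construction: Hadamard on the first qubit of |00>, then CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],   # |00> -> |00>
                 [0, 1, 0, 0],   # |01> -> |01>
                 [0, 0, 0, 1],   # |10> -> |11>
                 [0, 0, 1, 0]])  # |11> -> |10>

ket_00 = np.array([1, 0, 0, 0])       # |00> in the computational basis
bell = CNOT @ np.kron(H, I) @ ket_00  # (|00> + |11>) / sqrt(2)

print(bell)
```

The result has equal amplitudes on $|00\rangle$ and $|11\rangle$ and none on the cross terms, which is exactly why it cannot be factored into two single-qubit states.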

Other entangling gates such as the controlled-Phase (CZ) can be obtained from combinations of CNOT gates with single-qubit gates; moreover, any multi-qubit unitary can be approximated in the same way, which makes this set of gates universal. A set of gates can perform universal quantum computation if it is sufficient to approximate any unitary operation to arbitrary accuracy via a quantum circuit. This universality is crucial, as it allows any quantum algorithm to be performed and ensures the equivalence of different models of quantum computation.

There are various quantum computing models that are universal, such as the circuit model, the measurement-based model, the adiabatic model, the topological model and the quantum walk model. However, in this post I will only focus on the first three, as they are the most relevant to the experimental realisations that show most promise today.

The circuit model is an algorithmic model for quantum computing that closely resembles classical algorithms. Single-qubit and two-qubit operations are performed in sequence on a set of qubits initialized in a fiducial state, and the results are read at the end as the outcome of single-qubit measurements. The entanglement and interference necessary for the quantum speedup are built up during the computation, and if any ancillary states are used, their state must be erased so that they no longer interfere with the rest of the computation.

The following circuit diagram shows the most common representation of the quantum logic gates presented earlier:

The procedure runs from left to right: preparation, single Hadamard gate, CNOT gates, rotation and phase gates, measurement.

• **State Space**: A quantum circuit operates on $n$ qubits (two-level quantum systems), and therefore its state space is a $2^n$-dimensional complex Hilbert space. The computational basis states are defined as product states of the form $|x_1 x_2 \ldots x_n\rangle$, where $x_i = 0, 1$.

• **State Preparation**: Any computational basis state can be prepared in at most $n$ steps.

• **Quantum Gates**: Gates from a universal family of gates can be applied to any subset of the qubits desired.

• **Measurements**: Measurements in the computational basis can be performed on one or more qubits.

• **Classical Computation**: In principle, it is not necessary for the computation, but it can make certain tasks much easier.

• **Procedure of the computation**: Quantum algorithms are run by applying one-qubit and two-qubit gates to the quantum systems, building up the amount of entanglement until the final measurement in the computational basis gives the result of the computation.

For a long time, the circuit model was the most commonly used model for quantum computation. Measurement-based quantum computation models are radically different from the circuit model (and have no classical analogue), as the resource for the computation is prepared in advance and “offline”. This strategy has the advantage that if errors occur at the preparation stage, the prepared state can be discarded and the procedure repeated without any loss of information. There are two main approaches to measurement-based quantum computing: the generalised teleportation model and the one-way quantum computer model. The latter is the most widely used, and therefore we will only look at that model here.

In the one-way model, the entire resource for the computation is supplied at the beginning of the computation, in the form of a highly entangled multi-particle state. This state is usually referred to as a cluster state, and it is the same for every computation, although it may vary in size. The information is then processed by carrying out a series of adaptive single-qubit measurements.

This highly entangled multi-particle state is prepared by applying a pattern of entangling gates to the qubits. A generic cluster state of *n* particles is not easy to write down in any basis, but it can be efficiently described with a graph, where each node represents a qubit and each bond denotes that the two sites have been connected by an entangling controlled-Z (CZ) gate. As the computation itself is performed purely by single-qubit measurements, the primary difficulty of this computational model lies in preparing this highly entangled cluster state, which can be technically demanding.
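For small graphs this construction is easy to simulate directly. The following sketch (plain NumPy; the `cz` helper is written here for illustration) builds a three-qubit linear cluster state by starting from |+⟩ states and applying a CZ along each bond of the graph:

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)

def cz(state, n, i, j):
    """Controlled-Z between qubits i and j of an n-qubit state vector:
    flips the sign of every amplitude in which both qubits are 1."""
    state = state.copy()
    for idx in range(2 ** n):
        if (idx >> (n - 1 - i)) & 1 and (idx >> (n - 1 - j)) & 1:
            state[idx] = -state[idx]
    return state

# Graph of a 3-qubit linear cluster: nodes are qubits, bonds are CZ gates.
n = 3
edges = [(0, 1), (1, 2)]

# Start from a product of |+> states and entangle along each bond.
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)
for i, j in edges:
    state = cz(state, n, i, j)

# Every amplitude has magnitude 1/sqrt(8); only the sign pattern
# (+ + + - + + - +) encodes the graph.
print(np.round(state * np.sqrt(8)).astype(int))
```

Note that the description scales with the number of bonds, while the state vector itself grows exponentially, which is exactly why the graph representation is preferred.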

The information processing is done via sequential measurement of the qubits in a particular basis. The correct algorithm is performed if all the measurement outcomes are the +1 eigenvalue; however, given the probabilistic nature of quantum mechanics, this is not always the case. We can steer the computation back into its correct subspace by applying Pauli corrections in subsequent measurements. Therefore, measurement results determine the basis of the following measurements on other qubits. Finally, the output of the computation is read out by one last measurement in the computational basis.
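A minimal example of this measurement-driven processing (a NumPy sketch under standard MBQC conventions, not from the original post) is one-bit teleportation: entangle an input qubit with a |+⟩ ancilla via CZ, measure the input in the X basis, and apply a Pauli-X correction when the −1 outcome occurs. Either branch then leaves the surviving qubit in H|ψ⟩:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# An arbitrary input state a|0> + b|1> (real amplitudes, for simplicity).
a, b = 0.6, 0.8
psi_in = np.array([a, b])

# Entangle the input with a fresh |+> qubit: a two-qubit cluster.
state = CZ @ np.kron(psi_in, plus)

# Measure qubit 0 in the X basis. Outcome m = 0 is the +1 eigenvalue (|+>),
# m = 1 is the -1 eigenvalue (|->); <s_m| (x) I projects onto qubit 1.
results = []
for m, bra in enumerate([plus, minus]):
    out = np.kron(bra, np.eye(2)) @ state
    out = out / np.linalg.norm(out)
    # Outcome m = 1 leaves qubit 1 in X H|psi>; the Pauli correction
    # X^m steers the computation back to H|psi>.
    corrected = np.linalg.matrix_power(X.astype(int), m) @ out
    results.append(corrected)

print(np.allclose(results, [H @ psi_in] * 2))  # -> True
```

Chaining such steps, with each measurement basis adapted to previous outcomes, is exactly how the one-way computer implements an arbitrary circuit.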

In the figure below (copyright 2001 by the APS), we can see an example of a quantum algorithm performed using the MBQC model. We initially start with a rectangular cluster state of 6 × 12 qubits, where qubits are located at the vertices of the grid (grid lines are not shown). Despite starting with 72 physical qubits, this algorithm is actually performed on only 3 logical qubits, which are shown by the three horizontal yellow arrows. The yellow paths between the logical qubits represent two-qubit gates as explained in the circuit model, and the rest of the single-qubit gates are performed by measuring the qubits in the X-Y plane.

This model of computation has an enormous technical advantage over the circuit model, which makes it very appealing for implementing quantum computation on certain physical systems. The cluster states can be produced offline, and the computation is performed only once the resource has been prepared correctly. For many physical systems, performing entangling gates is the most challenging part of the computation, but if we post-select on the successful preparation of the resource state, this model substantially increases the probability of a successful computation. All that is then required is the ability to perform single-qubit gates with high fidelity, which is less technologically demanding for many physical systems such as superconducting qubits, ion traps and linear optics.

• **State Space**: A cluster state computation operates on N physical qubits (or two-level quantum systems) for a quantum algorithm with *n* logical qubits, where N is in general much larger than *n*. Its state space is a 2^N-dimensional complex Hilbert space. The computational basis states are defined as product states of the form |x_1 x_2 … x_N⟩, where x_i = 0, 1. The graph state representation is usually used.

• **State Preparation**: Before the computation starts, a resource state must be prepared. This resource state is a highly entangled multi-particle state (cluster state), which is the same (except for its size) for every computation.

• **Quantum Gates**: An entangling gate is performed on qubits in a product state to build the resource state. Afterwards, only single-qubit measurements are needed.

• **Measurements**: Single-qubit measurements can be performed in an arbitrary basis. (Or, if only computational-basis measurements are available, single-qubit rotations must be performed alongside them.)

• **Classical Computation**: Classical computation alongside the quantum computation is a key feature of this model, as the basis of the measurements performed sequentially depends on the results of previous measurements.

• **Procedure of the computation**: The entire resource for the quantum computation is supplied at the beginning of the computation. The information is then processed by carrying out a series of adaptive single-qubit measurements.

This model represents a new paradigm for quantum computation that was recently proposed based on quantum adiabatic evolution. Computation in this model is not performed by applying gates or measurements to qubits, but rather the algorithm starts from a disordered state, and it arrives at a solution to the problem by performing what can be understood as a local quantum search. The procedure for the computation is as follows:

• At time t = 0, the quantum mechanical system is described by a Hamiltonian H_0, whose eigenstates are easy to compute.

• The system is slowly transformed into the final Hamiltonian H_1 at time T, whose ground eigenstates are the solution to the problem that needs to be solved. This process can be described by a time-dependent Hamiltonian

H(t) = A(t) H_0 + B(t) H_1,

where A(t) and B(t) are slowly varying monotonic functions such that A(0) = 1, B(0) = 0 and A(T) = 0, B(T) = 1. According to the adiabatic theorem, if the evolution is slow enough, i.e. T is long enough, and if there is a gap between the ground state eigenvalue and the rest of the Hamiltonian’s spectrum, the state of the system at time T will correspond to the ground state of H_1, thus producing the solution to the problem.
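This schedule is easy to simulate for a toy problem. The sketch below (NumPy only; the two-qubit cost Hamiltonian H_1 and the linear interpolation A = 1 − t/T, B = t/T are illustrative choices, not from the original post) starts in the ground state of H_0 and sweeps slowly to H_1, ending close to the ground state that encodes the solution:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Initial Hamiltonian: ground state is the easy-to-prepare |+>|+>.
H0 = -(np.kron(X, I2) + np.kron(I2, X))
# Hypothetical final Hamiltonian: a 2-bit cost function whose ground state is |11>.
H1 = np.diag([3.0, 2.0, 2.0, 0.0])

def step(Hmat, psi, dt):
    """Apply exp(-i H dt) to psi via the eigendecomposition of the Hermitian H."""
    w, v = np.linalg.eigh(Hmat)
    return v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))

# Prepare the ground state of H0 and sweep s = t/T from 0 to 1.
psi = np.full(4, 0.5, dtype=complex)      # |+>|+>
T, steps = 100.0, 4000
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps                 # A(t) = 1 - s, B(t) = s
    psi = step((1 - s) * H0 + s * H1, psi, dt)

# If T is long enough, the final state is close to the ground state of H1.
print("population of |11>:", round(abs(psi[3]) ** 2, 3))
```

Shrinking T (or closing the spectral gap) makes the final overlap with the true ground state degrade, which is the adiabatic condition in action.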

• Measurement of the ground state allows for the extraction of the solution.

This model has been proven to be universal for quantum computation.

• **State Space**: Assuming the n particles to which the Hamiltonian is applied are two-level systems, the state space is 2^n-dimensional.

• **State Preparation**: The system is initialised in the ground state of the initial Hamiltonian, which is easy to prepare.

• **Quantum Gates**: Once the state is prepared, no gates *per se* are applied; rather, the Hamiltonian is adiabatically changed from the initial to the target Hamiltonian.

• **Measurements**: A single measurement of the state of each quantum particle in the system suffices to read out the solution to the problem, which is encoded in the ground state of the final Hamiltonian.

• **Classical Computation**: Classical computation is required to find the appropriate Hamiltonian to encode the problem, into which the system will be adiabatically evolved.

• **Procedure of the computation**: The computation is performed by slowly (adiabatically) varying the Hamiltonian from the initial one (easy to prepare) to the final one, which encodes the problem.


Choosing the physical system that will ultimately be the main platform for quantum computers is not easy. Technological problems that may seem insurmountable today might be solved in a few years’ time. However, as quantum computers are physical devices, the laws of physics ultimately dictate what they can and cannot do. The amount of information that classical computers are capable of processing, and the rate at which they do so, has doubled every 18 months for the last 40 years, an observation known as Moore’s law. However, Moore’s law is not a law of Nature but rather an observation of human ingenuity (and economic power), and it is expected to reach saturation soon: Intel has already confirmed that its cadence in chip production has slowed.
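The scale of that growth is easy to sanity-check: doubling every 18 months for 40 years amounts to roughly 2^27, i.e. about a hundred-million-fold increase:

```python
# Moore's law as stated above: one doubling every 18 months, over 40 years.
doublings = 40 * 12 / 18          # ~26.7 doublings
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> growth factor {growth:.2e}")
```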

The largest transistor count in a single CPU today is 5.5 billion, with individual transistors on the order of ~10 nm. Even with quantum processors, then, machines with more than a trillion components do not seem physically feasible. There are other types of constraints too: if all the components need to be at mK temperatures, the size of the quantum computer will be restricted by the available cooling capacity (large-scale machines operating at ~2 K do exist, such as CERN’s, but they would not be considered efficient in the sense described here); the clock speed (number of operations per second) will be limited by the amount of available energy in the system, but more energy means more noise; and entropy limits the amount of information that can be processed. The ultimate limits of computation are given by the laws of physics, but there is no guarantee that these limits can actually be reached.

Various quantum technologies have been considered as good candidates for building quantum computers. They each have their own advantages and challenges, and it is not clear today which will be the final technology; it might not even be just one but a combination of several. In this entry, I will briefly mention the four technologies that (in my view) are most promising. Despite their differences, they have one significant factor in common: they are compatible with microfabrication techniques which will allow each architecture to become modular and be made from regular-sized chips.

Charged atoms can be controlled and manipulated in macroscopic traps with a very high degree of accuracy. Excellent control has been achieved in macroscopic ion traps, with nearly 100% fidelity in all gates; however, current implementations face a harsh scalability constraint: only a bounded number of ions can be trapped and individually addressed in a trap. The networked model for quantum computation, in which cells with a small number of qubits are interconnected to perform quantum computation, is particularly well suited to this technology, and full-scale architectures have been proposed. The entanglement between different cells is obtained via entangling operations on photons emitted by ions in the different traps. This operation is very slow, however (~300 times slower than any other gate), and uses large photonic switching networks, which rapidly increase the photon loss rate. New very low-loss photonic switches and better entangling operations are needed for this technology to be feasible on a large scale. A new approach to overcoming the scalability constraint is that of integrated ion traps, in which standard semiconductor processing techniques can be used to fabricate micrometer-scale surface-chip traps.

Superconducting systems exhibit generic quantum properties commonly associated with atoms, such as quantized energy levels, entanglement, and superposition of states. As such, artificial atoms can be engineered from these systems, and exquisite control can be achieved by using electromagnetic pulses. Recent demonstrations show the ability to perform single-qubit gates with 99.92% fidelity and two-qubit gates with 99.4% fidelity. Moreover, these fidelities are within the fault-tolerant threshold for the surface code, which has allowed the experimental implementation of a small five-qubit surface code. Although this implementation of quantum computing benefits from microfabrication of the devices, it has a number of shortcomings. The most important are the cross-talk between nanowires, which hinders the construction of three-dimensional qubit structures (considered more advantageous for the implementation of fault-tolerance), and the fact that they operate at mK temperatures, which limits the number of qubits that can be implemented due to the limited cooling capacity.

Single photons are very good carriers of information, with low decoherence rates and very high single-qubit gate fidelity. Non-deterministic two-qubit operations and photon loss are a challenge for current technologies, but a series of theoretical breakthroughs in recent years (in which the required resources have been lowered by several orders of magnitude), together with technological advances, make this physical system a competitive candidate for quantum computing. This architecture uses a percolation approach to the measurement-based model in order to counteract the effect of loss and probabilistic entangling operations. Integrated optical devices can be nano-fabricated, and the ability to miniaturize ~1M linear optical elements on a single chip is a very promising sign for the construction of linear optical quantum computers with millions of elements per silicon chip in the future. An advantage of this type of architecture is that low temperatures are only required for the single-photon detectors, and it is envisioned that in the near future the entire architecture could be implemented at room temperature.

Solid-state systems of spin donors in silicon can be used to build quantum computers. Quantum information can be encoded in single phosphorus atoms embedded in a silicon substrate, and qubit gates can be implemented by controlling the spin resonance of the donors and the exchange interaction between different donors. Excellent control fidelity has been shown for single-qubit gates: the achieved 99.6% fidelity falls within the requirements of fault-tolerant quantum computing. A scalable integrated two-qubit gate with high enough fidelity is yet to be demonstrated; however, remarkable progress has been made in recent years towards this goal. An architecture resembling the networked model has been proposed, entirely in silicon, in which donors are laid out in a 2D array. In each cell, single-qubit gates are performed by low-power electric and magnetic drive at microwave frequencies, while two-qubit gates are mediated by the induced electric dipole. The cells are connected via long-range (mm distance) two-qubit gates enabled by microwave resonators and photonic links.

On a series of posts over the next few weeks, I will study each of these four proposals in detail, attempting to answer the questions:

- What are the resources used?
- What are the key results of this architecture?
- How do the different elements integrate together?
- What are the open problems, and why are they open?
- What is the rough resource count for implementing a particularly interesting problem?

Please comment below if there are any other questions you’d like me to (attempt to) answer! For each architecture, I will also provide a list of academic papers where more detailed information can be found.


Cal Newport, a Georgetown professor in Computer Science and blogger at Study Hacks, wrote a book of the same title. The book’s goal is to identify the steps one can take to build a successful career. It analyses the careers of a number of people, both successful and failed, and identifies the steps taken in each case that determined the career’s fate. One of the first things that got me interested in the book is its rejection of the “*courage culture*”: people who promote the idea that the only thing standing between you and your goal (in this case your dream career) is yourself, and that all it takes is to believe in yourself and build up the courage to step off the expected path. I have always been suspicious of the belief that all it takes is courage, in any situation, and much more so with respect to a career path. First of all, because many endeavours won’t succeed unless we have a background set of skills that can support our bid, but mostly because I don’t believe most things come easy; it’s resolve and hard work that make them possible. I believe in research, preparation, planning and effort. It is true that in many cases talent plays a big role, but I’ve never regarded natural ability as the ultimate deciding factor. I wouldn’t be running if that were the case.

I must say, however, that some amount of belief in yourself and your abilities is necessary, as otherwise one might fall sick with “*imposter syndrome*”, so common in academia. In my opinion, the motto “*be so good they can’t ignore you*” can be used to soothe such feelings, as a survey of one’s worked-for abilities should put at ease any feelings of inadequacy. But we won’t speak any more of imposter syndrome here, as that deserves a whole other post by itself. Let’s turn our attention instead to Newport’s four rules that can help us build a fulfilling working life.

**Don’t follow passion**: We are told time and time again to follow our passion: “*do what you love and you won’t have to work a day in your life*”. It’s one of those inspirational sayings we see everywhere, and what makes Steve Jobs’s commencement address so popular that it has more than 24M views on YouTube. However, it is not useful advice for someone unsure about which career is best for them. I’m sure you have been surprised before at enjoying something you didn’t think you would. By solely advocating the pursuit of the things you **know** you like, many other enjoyable endeavours are left out. Moreover, there is little evidence that people have pre-existing passions, and this kind of approach to finding a fulfilling work life can lead to a lot of unhappiness. “*Choosing your career should not be treated as finding your true calling.*”

**Have the mentality of a craftsman**: Instead of focusing on whether a job fulfils any dream-job fantasies we may have, we should focus on the value we offer, enjoy the process of the work and be proud of the output generated. This is, of course, easier said than done. When our focus shifts to what we produce, the goal becomes much clearer: improve the outcome of our work. To do that, we must engage in deliberate practice, the style of difficult practice that is required to improve at a task. It is the kind of practice that involves learning new techniques, practising for hours and continually facing our ignorance. It is the difference between a master and the middle of the pack. It requires that we stretch beyond what is comfortable and are willing to accept ruthless criticism. Most of us have experienced this kind of practice when going through the educational system and trying to come to grips with material we didn’t understand. However, once we are out of education, it becomes increasingly difficult to do so, because we are not forced to, and because it’s easier to tell ourselves we have too much email. But forcing ourselves to engage in deliberate practice will increase our abilities and help us become so good that we will get noticed.

**Leverage skills to obtain more control**: Some people enjoy tremendous freedom and control over their working life, while others don’t. It begs the question: why do they get those perks? In most cases, those people have rare and valuable skills, so prized by their employers that they can leverage them to their advantage. That is what Newport calls “*career capital*”. Through deliberate practice, those people have built up career capital – a valuable set of skills that they can trade for more control of their own working life. However, attaining control is tricky. If it is attained before we have enough career capital to make it sustainable, we will fail; but also, once we have built enough career capital, we might face pushback from the very people we have become invaluable to, who might try to prevent change that benefits us and not them. So how can we decide if a bid for more control is the right step to take? Newport’s law of financial viability: “*When deciding whether to follow an appealing pursuit that will introduce more control into your working life, ask yourself whether people are willing to pay you for it. If so, continue. If not, move on*.”

**Find a mission**: “*A career mission is an organising purpose to your working life*”. Finding a mission can be condensed into two actions: making a series of little bets to scout out different ideas that might succeed, and having the mindset of a marketeer, i.e. being able to identify why some ideas catch on while others fall flat. True missions require a specific lifestyle: the patience to build career capital, as well as constantly searching for the next big idea. These ideas tend to lie in the space just beyond the cutting edge of a field, which has been referred to as the “*adjacent possible*”. Identifying these big ideas requires dedication to brainstorming and exposure to new ideas. But how can we figure out if a chosen mission is likely to succeed? The answer is given by Newport’s law of remarkability: “*For a mission-driven project to succeed, it should be remarkable in two different ways. First, it must compel people who encounter it to remark about it to others. Second, it must be launched in a venue that supports such remarking.*”

Reading this book gave me a different perspective on the career decisions that led me to where I am now. I can see how hard work and little bets allowed me to move forward, and I can also see the mistakes I made. I found the book very compelling because it gives me a framework that will help me achieve a fulfilling career; it gives me the tools to achieve that desire rather than just the inspiration. Newport also relates his personal journey as a researcher, the deliberate practices he engages in, and how he develops his career mission. These are practices I can directly apply to grow as a physicist. While reading, I felt I couldn’t wait to put this framework into practice; this blog is a result of that.


In any standard course on quantum computing and its implementations, one usually learns about the DiVincenzo criteria, which state that a quantum computer should have:

- A scalable physical system with well-characterized qubits.
- The ability to initialize the state of the qubits to a simple fiducial state.
- Long relevant decoherence times, much longer than gate operation times.
- A “universal” set of quantum gates.
- A qubit-specific measurement capability.

These criteria were formulated in the year 2000 with a specific computational model in mind, the circuit model. Since then, other ways of implementing quantum computing have sprung out, from computational models (such as cluster state and adiabatic models) to physical implementations that don’t deal with qubits but rather with continuous variable systems. Many of these new developments don’t quite fit the DiVincenzo criteria, so are there any better definitions of what quantum computers are?

Searching through the scientific literature, and through statements of experts to the media, we can find five distinct ways in which to specify what a quantum computer is:

**Abstract theoretical definitions**: Such is Deutsch’s definition in his 1985 paper, where he formulates a physical version of the Church-Turing thesis that is compatible with quantum theory: “*a quantum computer is a […] quantum generalization of the class of Turing machines*”. Or Feynman’s: “*It’s not a Turing machine, but a machine of a different kind*”. These definitions are very abstract and lack details and specifications of the structural components, making them not very useful in practical scenarios.

**Implicit definitions**: A quantum computer is defined not necessarily by its components but by stating that it uses the laws of quantum mechanics to perform computation. As true as this definition is, it is no help when trying to decide whether a computer is quantum or not. How can we assert from outside which laws govern the logical operations inside?

**Comparative definitions**: A quantum computer is a device that can outperform classical computers. While we certainly expect quantum computers to solve problems that would otherwise be outside our reach if we only had access to classical computers, relying on the classification of problems in complexity theory is uncertain business, as this classification is not written in stone and evolves as the field develops.

**Constructive definitions**: These definitions specify what a quantum computer is by stating its components or the way information is processed. For example, defining the quantum computer as a machine that fulfils the DiVincenzo criteria falls into this category. These definitions share the characteristic of being narrow and tied to a specific implementation, and are therefore not general enough to apply to all architectures, physical implementations and computational models.

**Operational definitions**: The quantum computer is fully defined by what it does: if a machine acts like a quantum computer, it is a quantum computer. This definition makes no assumptions about the theory of computation or the nature of physical reality, and therefore different interpretations of quantum mechanics should agree that the machine is a quantum computer.

There is an excellent paper which intends to figure out a useful operational definition for quantum computers that will stand the test of time, and by making no reference to how the computer exactly works, it should still hold in years to come when new techniques have been developed. As end users we care mainly about performance, and not necessarily about the nitty-gritty details of how that performance is achieved. As an example, how many of you understand all the complexity of the device you are using to read this blogpost? I certainly don’t.

The proposed operational definition of a quantum computer in this paper is as follows:

“We define a quantum computer as a device that can run a quantum algorithm efficiently, where a quantum algorithm is a classical bit string that encodes a series of quantum operations. The quantum computer should be able to take this string as input and produce another one as output. The probability distribution of the output should be consistent with the predictions of quantum theory. Finally, the time it takes the computer to produce the output should be in agreement with the difficulty of the algorithm.”

Note that this definition makes no mention of the way in which the quantum operations are performed, and is therefore open to different models of computation. The time constraint in this definition is crucial, as it excludes classical computers: for example, if we ran Shor’s factoring algorithm on a quantum computer we would expect an answer in polynomial time, whereas the best known classical factoring algorithms require super-polynomial time. This definition also doesn’t depend on the current classification of complexity problems, as “*the difficulty of the algorithm*” mentioned in the definition refers to the difficulty of a particular problem at the time of the test.

It is also worth noting that both the input and output are classical bit strings. After all, the input and output are the interactions of the machine with the controller, which lives in a classical world. The *quantum program* is the encoding of the operations of a particular quantum algorithm, which will be part of the input string along with the initial state of the quantum computer. The instructions for generating the initial state on the quantum computer must have an efficient classical representation, which is the case for all computational models. Note also that all current proposals for quantum computers assume some classical pre- and post-processing, which agrees with the above definition as long as this classical processing takes only polynomial time in the problem size.

The paper also presents a set of criteria for building a quantum computer that is general enough to fit all (currently known) computational models and all proposed physical implementations. It is based on four statements:

**Any quantum computer should have a quantum memory**. Quantum memory is the broad term used to state that the quantum computer must be capable of efficiently representing any computable pure quantum state (of a size accordant with the size of the computer) in its internal degrees of freedom. This quantum state will not, in general, have an efficient classical representation.

**Any quantum computer must facilitate a controlled quantum evolution of the quantum memory, which allows for universal quantum computation**. By controlled quantum evolution, the authors mean that the evolution of the internal state of the quantum computer must follow the laws of quantum mechanics, and will ultimately be controlled by the end user.

**Any quantum computer must have a method for cooling the quantum memory**. Entropy accumulates in the quantum memory as a product of a previous computation or because of errors due to noise from the environment. *Cooling* refers to information-theoretic cooling, where the entropy is extracted from the quantum memory. It encompasses the initialisation of the quantum memory as well as the error correction procedure.

**Any quantum computer must provide a readout mechanism for subsets of the quantum memory**. The computer must have a mechanism to translate the outcome of the computation into classical bits for the controller to obtain the result of the computation. The authors refer to subsets of measurements because, during most error correcting procedures, there are intermediate measurements used to assess the presence of errors, and these measurements are not of much use to the end user.

There are two other essential characteristics the authors require of a quantum computer:

**Fault-tolerance**: Fluctuations from the environment can cause stochastic errors to appear in the quantum computer. If a quantum computer still works according to its definition in the presence of such errors, it is deemed fault-tolerant. This will be the case if the computer uses some form of error correction with an error threshold (maximum size of individual errors) higher than the errors caused by the environment.

**Scalability**: Currently, any claims of scalability for a particular physical implementation of a quantum computer are predictions, as no reasonably large quantum computer exists. Theoretically, a quantum computer is scalable if the resources required to run the same polynomial-time algorithm on a larger input scale polynomially with the size of the input. What makes a particular architecture scalable depends heavily on the architecture and on technological predictions, and it is difficult to make general statements without getting into the details of any given implementation. An excellent read on this subject is the Quantum Pontiff blog post “Climbing Mount Scalable”.

It is perhaps because we don’t have a functioning large-scale quantum computer that it is difficult to give accurate definitions without hiding behind the “spooky” laws of quantum mechanics. At the end of the day, definitions are not the most important thing; proof of that is that Nielsen & Chuang, the must-read book for any quantum information scientist, does not define what a quantum computer is and instead lets the reader build up an intuition. But it is important to know what we talk about when we talk about quantum computers, so that we can make an informed decision about whether a machine is a quantum computer or not, in case aliens (or someone else) come to us with a sales pitch.


The idea for starting this blog comes from the group meetings we had at the Controlled Quantum Dynamics Theory Group at Imperial College, where I did my PhD. Every Wednesday morning, we’d gather in a meeting room with beautiful views over London, have breakfast and talk about physics… most of the time. Every week a member of the group would prepare a talk on a subject of their choice: it could be about their own work, an interesting paper they had read, something they had recently learnt or anything else they thought was interesting. We had fascinating talks on art, bike physics, coffee, songbirds and many other topics.

I took the approach of using my allotted talks as an opportunity to learn some new physics: what better incentive to get your facts straight than a room full of physicists ready to argue? Jokes aside, preparing an hour of spoken material on a new subject allowed me to grasp the basic concepts of each topic, and I don’t think I would have tried so hard to learn about something which had (most of the time) not much to do with my PhD work, had it not been for these talks. Also, the questions raised by members of the audience made me look at a topic in a different light or discover connections I didn’t know about. Overall, I think those talks were one of the most formative experiences I’ve ever had.

This blog will be my online replacement for those group meetings. Topic-wise, I think it’s reasonable to say that most posts will be quantum-related, but I also hope to include some posts on computer science, outreach and other miscellaneous topics. I will sometimes post about things I know about, but my intention is to learn new things for each post. So if I ever get something wrong in a post, please point it out in the comments! Hopefully, with time I will get the chance to learn from the comments on my posts as well.
