Fault-tolerant thresholds for quantum error correction with the surface code
Abstract
The surface code is a promising candidate for fault-tolerant quantum computation, achieving a high threshold error rate with nearest-neighbor gates in two spatial dimensions. Here, through a series of numerical simulations, we investigate how the precise value of the threshold depends on the noise model, measurement circuits, and decoding algorithm. We observe thresholds between 0.502(1)% and 1.140(1)% per gate, values which are generally lower than previous estimates.
PACS: 03.67.Lx, 03.67.Pp

I. Introduction
In theory, scalable quantum computation is possible if errors affecting qubits are not too strongly correlated and occur with a probability below some threshold value Aliferis06 . If the physical error rate is below the threshold, then quantum gates protected by an error-correction code can be arranged in a fault-tolerant manner such that any quantum circuit can be efficiently simulated to any accuracy Aharonov99 ; Kitaev97 ; Knill97 ; Preskill98 . The precise value of the threshold depends on an interplay between the effective noise in the quantum computer and the structure of the error-correction code in question, as well as the sophistication of the classical processing that accompanies the system Gottesman1 .
Recently, the surface code has emerged as a promising candidate for fault-tolerant quantum computation Kitaev2003 ; Bravyi2 ; Freedman1 ; Dennis2002 ; Raussendorf4 ; Raussendorf3 ; Raussendorf2007 ; Fowler1 ; Fowler2 . The surface code requires nearest-neighbor gates in two spatial dimensions with physical error rates of roughly one percent or less, depending on the noise model. These requirements compare favorably with other codes, which may require nonlocal gates Knill1 or may have significantly lower tolerance to errors Svore07 ; Stephens08 ; Stephens09 ; Spedalieri09 . For this reason, the surface code has underpinned several proposals for quantum computer architectures in a range of physical systems, including superconducting systems, atom-optical systems, trapped ions, quantum dots, and nitrogen-vacancy centers in diamond Fowler2 ; Stock09 ; Devitt09 ; Meter10 ; Yao12 ; Nickerson1 ; Monroe12 ; Nemoto13 .
This article concerns the value of the threshold error rate for the surface code. Previous numerical estimates of the threshold are in general agreement, ranging from 0.57% to 1.40% per gate, depending on various assumptions Raussendorf4 ; Raussendorf3 ; Raussendorf2007 ; Fowler1 ; Barrett1 ; Wang10 ; Wang11 ; Fowler3 ; Fowler2 . However, the use of different methods to arrive at these values makes it difficult to faithfully compare them. The threshold is an important target for experimental devices and, in part, determines the overhead of scalable quantum computation Devitt13 . Given this and considering the increasing relevance of the surface code to the development of quantum computer architectures, it is important to clearly understand how the precise value of the threshold depends on assumptions related to the noise model, measurement circuits, and decoding algorithm.
Here, through a series of numerical simulations, we investigate how the threshold is affected by these assumptions. We estimate thresholds for several syndrome measurement circuits under a range of physically motivated noise models. In general, our results highlight the dependency of the threshold on properties of the underlying physical system. In some cases, our results indicate that the threshold may be significantly lower than previously thought. Our work complements other recent results concerning the dependency of the threshold on correlated errors caused by the presence of a bosonic bath Mucciolo1 ; Mucciolo2 and on the effective noise in superconducting quantum circuits Ghosh1 .
Notwithstanding the recent development of several alternative decoding algorithms for topological codes DuclosCianci1 ; DuclosCianci2 ; Bravyi1b ; Sarvepalli1 ; Wootton1 ; Wootton2 ; Wootton3 ; Delfosse1 , we restrict ourselves to decoding via Edmonds’ minimum-weight perfect matching algorithm Edmonds1 . Also, we do not consider other topological codes, such as color codes Bombin1 ; instead, we refer the interested reader to the recent article of Landahl et al. Landahl1 .
II. The surface code
The surface code, also known as the planar code, is a variation of Kitaev’s toric code Kitaev2003 . The toric code is defined over $2d^2$ qubits located on the edges of a $d \times d$ square lattice embedded on a two-dimensional torus, where $d$ is the code distance. The four-dimensional code space is the simultaneous $+1$ eigenspace of the stabilizer generators Gottesman97 , defined as
A_v = \prod_{i \in n(v)} X_i,   (1)
and
B_f = \prod_{i \in n(f)} Z_i,   (2)
where $v$ is a vertex in the embedding, $f$ is a face in the embedding, $n(\cdot)$ refers to the four neighboring qubits, and $X_i$ and $Z_i$ are the usual single-qubit Pauli operators. The surface code is similarly defined, but its topology is modified from a torus to a two-dimensional plane with boundaries that alternate between open and closed faces. Then, the two-dimensional code space encodes a single logical qubit Bravyi2 ; Freedman1 . The logical Pauli operators are the pair of homologically nontrivial chains of $X$ and $Z$ operators that connect opposite boundaries of the same kind; these preserve the code space, as they commute with the stabilizer generators, but act nontrivially on the logical qubit. Although the logical Pauli operators can be deformed by the stabilizer generators, their minimum length is always equal to $d$. The structure of the surface code is illustrated in Fig. 1.
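The stabilizer structure of Eqs. (1) and (2) can be made concrete with a short sketch. The following snippet is illustrative only: the edge-labeling convention is our own, not taken from this article. It builds the qubit supports of the toric-code generators on a d x d periodic lattice and verifies that every X-type generator commutes with every Z-type generator by checking that they overlap on an even number of qubits.

```python
from itertools import product

def toric_stabilizers(d):
    """Qubit supports of the toric-code stabilizer generators, Eqs. (1)-(2).

    Qubits sit on the edges of a d x d square lattice with periodic
    boundaries: edge ('h', r, c) is horizontal, ('v', r, c) is vertical.
    """
    vertex, plaquette = [], []
    for r, c in product(range(d), repeat=2):
        # X-type generator A_v: the four edges meeting vertex (r, c).
        vertex.append({('h', r, c), ('h', r, (c - 1) % d),
                       ('v', r, c), ('v', (r - 1) % d, c)})
        # Z-type generator B_f: the four edges bounding face (r, c).
        plaquette.append({('h', r, c), ('h', (r + 1) % d, c),
                          ('v', r, c), ('v', r, (c + 1) % d)})
    return vertex, plaquette

vx, pz = toric_stabilizers(3)
# Each generator acts on four qubits, and any A_v and B_f overlap on an
# even number of qubits (0 or 2), so the X- and Z-type generators commute.
assert all(len(s) == 4 for s in vx + pz)
assert all(len(a & b) % 2 == 0 for a in vx for b in pz)
```

The same counting argument underlies the commutation of the surface-code generators; only the boundary cases (weight-three generators) differ.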
Universal quantum computation is achieved by manipulating the logical operators, using the techniques developed by Raussendorf et al. Raussendorf4 ; Raussendorf3 . By defining a surface code on a plane with a more complicated topology, multiple logical qubits are introduced. The various logical operators are manipulated by deforming the topology of the surface through a series of measurements Fowler1 . Here, we restrict our study to the case where a surface code encodes a single logical qubit. In particular, we are interested in the active process of quantum error correction, which is used to preserve the quantum information stored in the surface code. Since this process is largely unchanged in the presence of additional logical qubits, our results are applicable in general.
III. Measuring and interpreting the error syndrome
Pauli errors affecting qubits in the surface code anticommute with a subset of the stabilizer generators. For example, a $Z$ error anticommutes with the $X$-type stabilizer generators associated with the two adjacent vertices, which will then have eigenvalues equal to $-1$. Connected chains of errors anticommute with only the stabilizer generators at the end points of the chains, and these end points may be hidden if the chains terminate on boundaries, as shown in Fig. 1.
In order to identify errors, we measure the eigenvalues of the stabilizer generators, giving us an error syndrome. These measurements are performed by introducing ancillary qubits as shown in Fig. 1 and executing the measurement circuits shown in Figs. 2 and 3. The circuits require nearest-neighbor gates in two spatial dimensions and can be performed in parallel (with one circuit for each stabilizer generator) across the entire surface code. In general, the error syndrome may be unreliable due to errors affecting the ancillary qubits, such as measurement errors. To mitigate this, the measurement circuits are repeated $d$ times, and we record when a measurement outcome changes from its previous value, which indicates that an error of some kind has occurred. An error affecting a data qubit will cause a pair of measurements separated in space to change from their previous values, whereas an error affecting an ancillary qubit will cause a single measurement to change from its previous value and then to immediately change back again. In general, connected chains of errors can involve both kinds of errors, so the end points, indicated by the changing measurement outcomes, may be separated in both space and time. Thus, the error syndrome is the entire space-time volume of these changes.
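The rule described above — a data-qubit error changes a pair of syndrome values, while an ancilla or measurement error changes one value in one round only — can be sketched as follows. The array layout and the particular check indices are hypothetical, chosen only for illustration.

```python
import numpy as np

def detection_events(syndrome_history):
    """XOR each round of stabilizer outcomes with the previous round.

    syndrome_history: (T, n_checks) array of 0/1 measurement outcomes.
    Returns a (T, n_checks) array that is 1 wherever an outcome changed.
    """
    changes = syndrome_history.copy()
    changes[1:] ^= syndrome_history[:-1]
    return changes

T, n = 6, 5
hist = np.zeros((T, n), dtype=int)
hist[2:, 1] ^= 1   # data-qubit error at round 2: flips check 1 from then on
hist[2:, 2] ^= 1   # ... and its partner check 2
hist[4, 3] ^= 1    # measurement error at round 4: flips check 3 once

ev = detection_events(hist)
# The data error yields a space-separated pair of events in round 2;
# the measurement error yields a time-separated pair in rounds 4 and 5.
assert ev[2, 1] == ev[2, 2] == 1
assert ev[4, 3] == ev[5, 3] == 1
assert ev.sum() == 4
```

In both cases the events come in pairs, which is what makes decoding by perfect matching possible.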
Since errors perturb the state of the system from the code space, error correction involves identifying a set of corrections that will restore the state to the code space while preserving the encoded quantum information. There are several algorithms to interpret, or decode, the error syndrome, which, in general terms, balance accuracy (having a high likelihood of identifying the correct homology class of the errors) with efficiency (being capable of decoding the syndrome for large codes in a sufficiently short time) Dennis2002 ; Wang03 ; Harrington04 ; DuclosCianci1 ; DuclosCianci2 ; Bravyi1b ; Sarvepalli1 ; Wootton1 ; Wootton2 ; Wootton3 ; Delfosse1 . Here, we use a decoding algorithm that identifies the most likely set of errors consistent with the error syndrome, where we consider $X$ and $Z$ errors separately Dennis2002 ; Raussendorf3 ; Wang10 . In the algorithm, each measurement change is represented by a node in a graph. Edges between nodes are weighted to reflect the probability of the associated measurement changes being caused by a connected chain of errors. A perfect matching of the graph reveals a set of errors consistent with the error syndrome, and the minimum-weight perfect matching reveals the most likely set. From this set, an appropriate correction can be inferred. Care must be taken to account for correlated errors that arise in the measurement circuits, and edges should be appropriately weighted to account for the fact that different kinds of errors (which cause different pairs of measurement changes) may occur with different probabilities Raussendorf3 .
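As a toy illustration of the matching step, the sketch below pairs up detection events by exhaustive search over perfect matchings, with Manhattan distance in space-time standing in for the properly weighted edges described above. A real decoder would use Edmonds' blossom algorithm with circuit-derived weights and would also allow matching to the code boundaries; the coordinates here are invented for illustration.

```python
def manhattan(a, b):
    """Manhattan distance between two space-time coordinates."""
    return sum(abs(x - y) for x, y in zip(a, b))

def min_weight_matching(events):
    """Exhaustive minimum-weight perfect matching over detection events.

    A stand-in for Edmonds' blossom algorithm, tractable only for a
    handful of events; assumes an even number of events.
    """
    if not events:
        return []
    first, rest = events[0], events[1:]
    best, best_w = None, float('inf')
    for i, partner in enumerate(rest):
        # Pair `first` with `partner`, then match the remainder.
        sub = min_weight_matching(rest[:i] + rest[i + 1:])
        w = manhattan(first, partner) + sum(manhattan(a, b) for a, b in sub)
        if w < best_w:
            best, best_w = [(first, partner)] + sub, w
    return best

# Two well-separated error chains: the decoder pairs the events
# produced by each chain with each other, not across chains.
events = [(0, 0, 0), (0, 1, 0), (5, 5, 3), (5, 6, 3)]
pairs = min_weight_matching(events)
assert ((0, 0, 0), (0, 1, 0)) in pairs
assert ((5, 5, 3), (5, 6, 3)) in pairs
```

The exhaustive search scales factorially; the blossom algorithm achieves the same minimum in polynomial time, which is what makes decoding large codes practical.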
IV. Overview of numerical methods
Our aim is to determine the threshold error rate of the surface code. For physical error rates below this value, increasing the code distance (linearly) will decrease the logical error rate (exponentially). To determine the logical error rate as a function of the physical error rate, we perform Monte Carlo simulations. In each instance, a set of errors is generated based on some noise model, the error syndrome is calculated and decoded, a correction is applied, and the resulting homology class is calculated to test for the presence or absence of a logical error. For noise models in which the error syndrome is unreliable, the measurement circuits are repeated $d$ times before the error syndrome is decoded. In our simulations, minimum-weight perfect matching is performed with Kolmogorov’s implementation Kolmogorov1 of Edmonds’ perfect matching algorithm Edmonds1 , and we use a Mersenne twister pseudorandom number generator Saito1 . For each physical error rate, the logical error rate is an average over many independent instances, where we ensure that enough logical errors are observed per point to limit the statistical uncertainty.
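The structure of such a Monte Carlo estimate can be illustrated with a deliberately simple stand-in: a distance-d repetition code under i.i.d. bit-flip noise with perfect syndrome extraction, decoded by majority vote. The trial counts and error rates below are arbitrary; the surface-code simulations follow the same generate-decode-count loop with a far more involved decoding step.

```python
import random

def logical_error_rate(d, p, trials=20000, seed=1):
    """Monte Carlo estimate of the logical error rate of a distance-d
    repetition code under i.i.d. bit-flip noise with perfect syndromes.

    A toy stand-in for the surface-code simulations: generate errors,
    decode (here, by majority vote), and count the instances in which
    the correction implements a logical operator.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > d // 2:  # majority-vote decoding fails
            failures += 1
    return failures / trials

# Below threshold, increasing the code distance suppresses the
# logical error rate.
assert logical_error_rate(3, 0.1) > logical_error_rate(7, 0.1)
```

Near a threshold the curves for different d cross, which is the behavior exploited by the scaling analysis described next.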
For a local error model, decoding of the surface code can be mapped to a three-dimensional random-plaquette gauge model on classical spins, where the zero-temperature phase transition corresponds to the threshold error rate Dennis2002 ; Wang03 ; Harrington04 . Following Wang et al. Wang03 , the behavior of the logical error rate near the threshold corresponds to critical behavior in the spin model, where the spin-correlation length $\xi$ scales according to
\xi \propto |p - p_{\rm th}|^{-\nu},   (3)
where $p$ is the physical error rate, $p_{\rm th}$ is the threshold error rate, and $\nu$ is the scaling exponent corresponding to the universality class of the model. Thus, for sufficiently large $d$, the logical error rate $P_L$ should follow
P_L = f[(p - p_{\rm th})\, d^{1/\nu}].   (4)
Allowing for systematic finite-size effects, we fit our data to a quadratic universal scaling function,
P_L = A + Bx + Cx^2 + D d^{-1/\mu},  with  x = (p - p_{\rm th})\, d^{1/\nu},   (5)
from which we determine $p_{\rm th}$ and $\nu$. We perform simulations for a range of odd values of $d$. Violations of the scaling ansatz are discernible for the smallest codes, such that strong agreement between the numerical data and the ansatz requires a minimum code distance. To account for this, the values of $p_{\rm th}$ and $\nu$ are determined from a best fit of the data for code distances at or above this minimum. In every case, the fit residuals are small, indicating accurate fitting. When plotting the data in Figs. 4 and 5, the curves for these code distances follow the universal scaling function in Eq. (5). Data for smaller codes are included for completeness; however, the corresponding curves are independent polynomial fits that serve only as a guide for the eye. Our results indicate that, for the various circuit-based noise models we consider, which introduce only short-range correlated errors, the value of $\nu$ is consistent with the universality class of the strictly local three-dimensional random-plaquette gauge model Wang03 .
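A minimal sketch of this fitting procedure, assuming the quadratic ansatz of Eq. (5) without the finite-size correction term: for each candidate pair (p_th, nu) on a grid, rescale the data to x = (p - p_th) d^(1/nu), fit a quadratic by least squares, and keep the candidate with the smallest residual. The synthetic data, grids, and noise level below are invented for illustration; the article's fits use more sophisticated optimization and error analysis.

```python
import numpy as np

def fit_scaling(ps, ds, pl, pth_grid, nu_grid):
    """Grid-search fit of the quadratic universal scaling ansatz."""
    best = (None, None, np.inf)
    for pth in pth_grid:
        for nu in nu_grid:
            x = (ps - pth) * ds ** (1.0 / nu)
            # Least-squares quadratic fit; full=True returns residuals.
            _, res = np.polyfit(x, pl, 2, full=True)[:2]
            r = res[0] if len(res) else 0.0
            if r < best[2]:
                best = (pth, nu, r)
    return best[:2]

# Synthetic data generated from a known p_th = 0.029, nu = 1.5.
rng = np.random.default_rng(0)
ds = np.repeat([9, 13, 17, 21], 9).astype(float)
ps = np.tile(np.linspace(0.024, 0.034, 9), 4)
x_true = (ps - 0.029) * ds ** (1 / 1.5)
pl = 0.25 + 0.9 * x_true + 0.8 * x_true**2 + rng.normal(0, 5e-4, ps.size)

pth, nu = fit_scaling(ps, ds, pl, np.linspace(0.027, 0.031, 41),
                      np.linspace(1.2, 1.8, 25))
assert abs(pth - 0.029) < 0.002
```

Varying d is what pins down nu: a wrong exponent misaligns the rescaled curves for different code distances, inflating the residual.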
The surface code is defined by its hard boundaries. However, it has been common instead to study the threshold of the toric code, which effectively has periodic boundary conditions in two spatial dimensions. Here, we present results for the surface code. In this case, the measurement circuits at the boundaries of the surface code are modified to account for the omitted qubits, which changes their effective error rate relative to the measurement circuits in the bulk. However, we will see that the logical error rate rapidly converges to a single value at the threshold as the code distance is increased, indicating that these boundary effects are significant only for the smallest codes. This suggests that the toric code and the surface code share the same threshold. However, because the structure of the logical operators depends on the boundary conditions, the correct boundary conditions should be used when an estimate of the logical error rate is sought for some physical error rate.
Lastly, the threshold is sensitive to errors that arise in the measurement circuits, which will, in turn, depend on the set of gates native to the quantum computer. We consider three cases, which are parametrized by the overall circuit depth:

Depth-eight circuits. First, we assume the gate set consists of the preparation of the state $|0\rangle$, the single-qubit Hadamard rotation, the two-qubit controlled-NOT gate, and measurement in the $Z$ basis. Then, referring to the circuits in Figs. 2 and 3, the overall circuit depth is equal to eight. In this case, there is an asymmetry between the two measurement circuits, with the longer circuit being less reliable due to the additional gates. This causes the threshold to split into an $X$ error threshold and a $Z$ error threshold.

Depth-six circuits. Second, we assume that the gate set is extended to include the preparation of the state $|+\rangle$ and measurement in the $X$ basis. This removes the need for the Hadamard rotations in Fig. 2, and so the overall circuit depth is reduced to six.

Depth-five circuits. Third, we assume measurement is nondestructive and prepares the ancillary qubit in a known state (either $|0\rangle$ or $|1\rangle$, or $|+\rangle$ or $|-\rangle$, depending on the measurement basis). This allows the measurement and state preparation to be combined, and so the overall circuit depth is reduced to five.
In each of these cases, all measurement circuits are performed in parallel and repeated $d$ times, where identity gates are inserted whenever qubits are required to be idle. In the first case, we give the lower of the two error thresholds, which, therefore, sets the overall threshold. In all other cases, the circuits treat $X$ and $Z$ errors symmetrically, and we give the common error threshold. These thresholds set targets for the high-level gates specified in the circuits, rather than for any lower-level physical operations. Also, we have ignored gates that are not required for error correction but that may be required to achieve universality by distillation BK05 .
V. Numerical results
V.1 Code capacity noise model
We begin with an idealized case in which the error syndrome of the surface code can be measured perfectly. Single-qubit Pauli errors are applied to data qubits with probability $p$. In this case, we are effectively testing the code capacity of the surface code. Because the error syndrome is perfectly reliable, it only needs to be measured once. This eliminates the time-like aspect of the decoding algorithm, and error correction is reduced to interpreting the error syndrome in two spatial dimensions. Note that this simplified decoding problem can be mapped to the two-dimensional random-bond Ising model on classical spins Dennis2002 ; Wang03 ; Harrington04 ; Stace1 . For the code capacity noise model, we find
(6)  
(7) 
consistent with Wang et al. Wang03 . Our threshold is lower than the threshold of 0.109 found for an optimal decoding algorithm Honecker ; Merz ; Ohzeki09 ; deQueiroz09 but higher than the threshold of 0.09 found for a renormalization-group decoding algorithm DuclosCianci1 .
V.2 Phenomenological noise model
Next, we move to a case in which errors can occur on both data and ancillary qubits. Single-qubit Pauli errors are applied to all qubits with probability $p$. This noise model neglects the propagation of errors between data and ancillary qubits in the measurement circuits but captures the essential challenge of fault-tolerant error correction, where the process of error correction itself is inherently faulty. In this case, the full decoding algorithm is required to account for the unreliable error syndrome. For the phenomenological noise model, we find
(8)  
(9) 
Again, this is consistent with Wang et al. Wang03 . Our threshold is lower than the threshold of 0.033 found for an optimal decoding algorithm Ohno04 but higher than the threshold of 0.0194 found for a renormalization-group decoding algorithm DuclosCianci2 .
V.3 Standard circuit-based noise model
Next, we move to a more general noise model, assuming that all gates in the measurement circuits may introduce errors. This is the most relevant case for fault-tolerant quantum computation, although we note that the particulars of the noise model will depend on the physical system under consideration. For example, measurements may be slower and less reliable than other gates. First, we consider a so-called standard noise model. Erroneous single-qubit gates occur with probability $p$, acting ideally followed by a single-qubit Pauli error chosen uniformly at random from the set $\{X, Y, Z\}$. Similarly, erroneous two-qubit gates occur with probability $p$, acting ideally followed by a two-qubit Pauli error chosen uniformly at random from the fifteen nontrivial two-qubit Pauli operators. Lastly, erroneous initialization and measurement each occur with probability $p$, preparing or reporting the incorrect orthogonal eigenstate. Under the standard noise model, for the depth-eight circuits, we find
(10)  
(11) 
for the depth-six circuits, we find
(12)  
(13) 
and, for the depth-five circuits, we find
(14)  
(15) 
as shown in Fig. 4.
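The standard noise model just described can be sampled with a short helper. This is a sketch under the uniform-Pauli assumptions stated above, not code from the article; gate and error labels are our own.

```python
import random

SINGLE_PAULIS = ['X', 'Y', 'Z']
# The 15 nontrivial two-qubit Pauli operators.
TWO_PAULIS = [a + b for a in 'IXYZ' for b in 'IXYZ' if a + b != 'II']

def noisy_gate(p, n_qubits, rng=random):
    """Sample the Pauli fault following a gate in the standard noise model.

    With probability p the ideal gate is followed by a Pauli error drawn
    uniformly from the 3 nontrivial single-qubit Paulis (one-qubit gates)
    or the 15 nontrivial two-qubit Paulis (two-qubit gates); otherwise
    the gate is perfect ('I' / 'II').
    """
    if rng.random() >= p:
        return 'I' * n_qubits
    return rng.choice(SINGLE_PAULIS if n_qubits == 1 else TWO_PAULIS)

assert len(TWO_PAULIS) == 15
assert noisy_gate(0.0, 2) == 'II'
assert noisy_gate(1.0, 1) in SINGLE_PAULIS
```

Note that the marginal error rate on either qubit of a faulty two-qubit gate is 12/15 = 4p/5, since 3 of the 15 two-qubit Paulis act trivially on a given qubit; this observation motivates the balanced model considered next.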
V.4 Balanced circuit-based noise model
The standard noise model is somewhat unreasonable, as the qubits involved in two-qubit gates are more reliable than idle qubits. So, next, we consider a so-called balanced noise model, which ensures that idle qubits have the same probability of error as the qubits involved in two-qubit gates and which accounts for the fact that measurement is only sensitive to errors in one basis. Specifically, the standard noise model is modified so that erroneous single-qubit gates occur with probability $4p/5$ and erroneous initialization and measurement occur with probability $2p/3$. Under the balanced noise model, for the depth-eight circuits, we find
(16)  
(17) 
for the depth-six circuits, we find
(18)  
(19) 
and, for the depth-five circuits, we find
(20)  
(21) 
as shown in Fig. 5.
V.5 Perfect single-qubit gates
In some physical systems, single-qubit gates may be significantly faster and more reliable than two-qubit gates. In this case, the threshold will depend mainly on the two-qubit controlled-NOT gates in the measurement circuits. We can approximate this case by modifying the standard noise model so that all single-qubit gates (including measurement and initialization) are perfectly reliable. In this case, we find
(22)  
(23) 
V.6 Decoding algorithm with a rectilinear metric
Next, we consider the effect of simplifying the decoding algorithm. Following Raussendorf et al. Raussendorf3 , our decoding algorithm accounts for the relative probabilities of errors, including correlated errors, that arise in the measurement circuits. However, the threshold was previously estimated using a decoding algorithm that ignores these correlated errors Fowler1 ; Wang10 . This algorithm is also based on minimum-weight matching on a graph, but the weight of an edge between two nodes is simply the rectilinear distance between those nodes, reflecting the minimum number of single-qubit Pauli errors in a chain connecting the end points. Without accounting for correlated errors, the surface code corrects fewer errors than the code distance implies, negatively affecting its performance, particularly at low error rates. In fact, for the smallest code distance, the code cannot reliably correct even a single error. With this simplification, under the standard noise model, for the depth-six circuits, we find
(24)  
(25) 
Fortunately, there is no significant cost to accurately accounting for correlated errors in the surface code. Similar methods exist for accounting for correlated errors in concatenated quantum error correction, also leading to significantly improved performance Knill1 ; Poulin1 ; Evans2 .
V.7 Three-dimensional topological cluster states
Lastly, we consider an interesting and closely related scheme known as topological cluster-state quantum error correction Raussendorf4 ; Raussendorf3 . In this scheme, the measurement circuits are simulated by a series of single-qubit measurements on a particular three-dimensional cluster state Raussendorf3 . The scheme may be more practical than the surface code in some physical systems, partly due to its elegant tolerance of qubit loss, as shown by Barrett and Stace Barrett1 . A modified depth-six circuit is required to prepare the cluster state from unentangled qubits and then to measure each qubit in the appropriate basis Raussendorf3 . However, the decoding algorithm is largely unchanged from the algorithm for the surface code. Under the standard noise model, we find
(26)  
(27) 
and, under the balanced noise model, we find
(28)  
(29) 
Table 1. Thresholds for the circuit-based noise models, compared with previous estimates.

                               Standard noise model         Balanced noise model
Depth-eight circuits           ^a                           ^a
                               0.0057^{a,b} Fowler2
Depth-six circuits
                               0.0075^c Raussendorf3
Depth-five circuits
                               0.009^b Fowler3              0.0012^b Wang11
                               0.011^b Wang11
Perfect single-qubit gates                                  --
                               0.0125^b Fowler2
                               0.014^b Wang11
Rectilinear metric                                          --
  (depth-six circuits)         0.006^b Fowler1
                               0.0078^b Wang10
Topological cluster states
                               0.0063 Barrett1
                               0.0067^c Raussendorf4

^a Threshold for one type of error; for this noise model, the threshold for the other type of error is lower and, therefore, sets the overall threshold.
^b Estimated from the logical error rate per round of measurement, rather than per $d$ rounds of measurement.
^c Not directly comparable due to minor differences in the measurement circuits and noise model.
VI. Discussion
It is instructive to compare our results with a range of previous estimates of the threshold. We begin by noting that it is reasonable to expect some slight variation between estimates due to different implementations of the decoding algorithm and the numerical simulations. Nevertheless, for the code capacity and phenomenological noise models, our results are consistent with Wang et al. Wang03 . For the remaining circuit-based noise models, our results are summarized in Table 1 and are compared with a range of previous estimates. Of the values that can be directly compared, our results are consistent only with the estimate of the threshold for topological cluster-state error correction due to Barrett and Stace Barrett1 . Beyond this result, there is some variation, with our thresholds being significantly lower than those previously reported. This discrepancy appears to be independent of the particular measurement circuit, noise model, and decoding algorithm.
To investigate this discrepancy, let us consider the definition of the logical error rate. Recall that the measurement circuits are repeated to account for the fact that the error syndrome is unreliable. We define the logical error rate to be the error rate per $d$ rounds of measurement, following Raussendorf et al. Raussendorf4 ; Raussendorf3 . This definition reflects the fact that, for a roughly isotropic noise model, $d$ rounds are required to achieve the same protection against errors affecting ancillary qubits as against errors affecting data qubits. In other words, if we increase the code distance, then error correction takes more time, which should be accounted for when calculating the logical error rate. On the other hand, the estimates in Refs. Fowler1 ; Wang10 ; Wang11 ; Fowler3 ; Fowler2 share a different definition (also see Refs. Nickerson1 ; Bravyi13 ). According to this definition, the logical error rate is the error rate per round of measurement (or, equivalently, the logical error rate per round is reciprocated to give the expected number of rounds until a logical error occurs). Note that this definition is independent of the code distance $d$. In both cases, for various code distances, the logical error rate is plotted over a range of physical error rates, and the threshold is estimated to be the physical error rate at which these curves intersect.
Let us define the logical error rate to be the error rate per round of measurement, as per Refs. Fowler1 ; Wang10 ; Wang11 ; Fowler3 ; Fowler2 , and consider two surface codes with code distances $d$ and $d+2$. For some physical error rate $p^*$, the logical error rates of the two codes will be equal. However, if we fix the physical error rate at $p^*$ and perform $d$ and $d+2$ rounds of measurement, as required, then the larger surface code will be more likely to fail. In other words, according to this new definition, the two codes are equally reliable, but, according to our original definition, the larger code is less reliable. The latter implies that the threshold is actually at some physical error rate below $p^*$. If $d$ becomes larger, then the relative difference between the two code distances becomes smaller, as does the relative difference between their reliability over $d$ and $d+2$ rounds of measurement. So, as $d \to \infty$, we may expect the intersection points to approach the threshold from above. This would suggest that defining the logical error rate to be the error rate per round of measurement could lead to an overestimate of the threshold.
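The relation between the two definitions can be made explicit. Treating successive rounds as independent (a simplifying assumption), a logical error rate per round converts to a rate per d rounds as P_d = 1 - (1 - P_round)^d, so two codes with equal per-round rates differ once each is run for its own number of rounds. The numbers below are hypothetical.

```python
def per_d_rounds(p_round, d):
    """Convert a logical error rate per round into a rate per d rounds,
    treating rounds as independent (a simplifying assumption)."""
    return 1 - (1 - p_round) ** d

# Hypothetical numbers: suppose codes of distance 9 and 11 have equal
# per-round logical error rates at some physical error rate. Over their
# respective d rounds, the larger code is then more likely to fail.
p_round = 1e-3
assert per_d_rounds(p_round, 11) > per_d_rounds(p_round, 9)
```

Hence the intersection of per-round curves sits at a higher physical error rate than the intersection of per-d-rounds curves, which is the overestimate discussed above.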
To test this assertion, we return to the depth-five circuits under the standard noise model. Figure 6 shows the logical error rate per round of measurement as a function of the physical error rate for various code distances. As the code distance increases, the physical error rate at which consecutive curves intersect decreases. In particular, the intersection moves from physical error rates above 0.01 to approximately 0.0095. This is roughly consistent with the data in Refs. Wang11 ; Fowler3 , and qualitatively similar behavior can also be seen in Refs. Fowler1 ; Wang10 ; Fowler2 . In Ref. Wang11 , the threshold was estimated from the intersection of the two smallest-distance curves, and, in Ref. Fowler3 , it was estimated from the intersection of two larger-distance curves. The discrepancy between these two values was attributed to significant boundary effects for the smallest codes. However, our earlier simulations indicate that boundary effects are negligible beyond the smallest code distances, pointing to another explanation for this behavior. Also, in Ref. Fowler3 , there appears to be no consistent intersection, even for the largest codes. This implies that the threshold is actually lower than 0.009. Recall that, under the same assumptions, we found the lower threshold shown in Fig. 4(c). Ultimately, given the lack of error analysis in Refs. Fowler1 ; Wang10 ; Wang11 ; Fowler3 ; Fowler2 , it is difficult to make a conclusive statement about the discrepancy between these estimates and our results.
VII. Conclusion and further work
To summarize, we have performed a series of numerical simulations of the surface code, finding that the value of the threshold error rate varies between 0.502(1)% and 1.140(1)% per gate for typical assumptions made in studies of this kind. Our results highlight the dependency of the threshold on properties of the underlying physical system. For example, having to perform additional gates to access initialization and measurement in the conjugate basis significantly reduces the threshold. Similarly, the highest thresholds will only be realized if measurements (in both the $X$ and $Z$ bases) are nondestructive or if all single-qubit gates are effectively free from noise. However, in some cases, our results indicate that the threshold may be significantly lower than previously thought. The target for experimental devices may be lower still, assuming that gates, such as the two-qubit controlled-NOT gate, will be composed of several physical operations. The operational error rate must also be sufficiently far below the threshold to limit the overhead due to error correction. Lastly, our results indicate that the threshold for topological cluster-state error correction is lower than that for the surface code under an identical noise model. However, like other schemes based on cluster states, this scheme has several desirable properties that may offset this disadvantage in some physical systems, particularly systems with nondeterministic gates or systems significantly affected by qubit loss or leakage.
We have limited ourselves to the question of the threshold for the surface code with decoding via Edmonds’ minimum-weight perfect matching algorithm. Naturally, there are many avenues for further work. Given the recent proliferation of alternative decoding algorithms for topological codes, such as the surface code DuclosCianci1 ; DuclosCianci2 ; Bravyi1b ; Sarvepalli1 ; Wootton1 ; Wootton2 ; Wootton3 ; Delfosse1 , it would be valuable to determine circuit-level thresholds for these algorithms, making it easier to understand their practical costs and benefits. It may also be possible to improve these thresholds by accounting for additional correlations present in some noise models (for example, the correlation between $X$ and $Z$ errors in depolarizing noise) DuclosCianci1 . Comparing these thresholds in a consistent manner will be necessary to draw strong conclusions about the different approaches to error correction in the surface code.
Another important open question is the performance of the surface code at error rates well below the threshold. A greater understanding of this regime—including an understanding of how performance is affected by the introduction of additional logical qubits and nontrivial logical gates—will assist in determining the true overhead of scalable quantum computation under various assumptions. This question was recently addressed by Bravyi and Vargo for the standard noise model Bravyi13 . Expanding their work to consider a range of noise models and decoding algorithms would be instructive.
Lastly, we highlight related schemes for topological quantum error correction against noise models that differ significantly from the typical models considered here. These include schemes to tolerate high rates of qubit loss Barrett1 ; Barrett2 ; Fujii10 and a concatenated code tailored to highly dephasing-biased noise Stephens13 . Considering other physically motivated noise models may lead to new schemes that could underpin quantum computer architectures in the future.
Acknowledgements— This work was supported by the FIRST Program in Japan. Thanks to W. Munro and K. Nemoto for helpful advice and to M. Carrasco for commenting on several versions of the paper.
References
 [1] P. Aliferis, D. Gottesman, and J. Preskill, Quantum Inf. Comput. 6, 97 (2006).
 [2] D. Aharonov and M. Ben-Or, in Proceedings of the 29th ACM Symposium on the Theory of Computing (Association for Computing Machinery, New York, 1998), p. 176.
 [3] A. Y. Kitaev, Russian Math. Surveys 52, 1191 (1997).
 [4] E. Knill, R. Laflamme, and W. Zurek, Proc. R. Soc. London, Ser. A 454, 365 (1998).
 [5] J. Preskill, in Introduction to Quantum Computation and Information, edited by H.-K. Lo, T. Spiller, and S. Popescu (World Scientific, Singapore, 1998).
 [6] D. Gottesman, Proc. Symp. Appl. Math. 68, 13 (2009).
 [7] A. Y. Kitaev, Ann. Phys. 303, 2 (2003).
 [8] S. B. Bravyi and A. Y. Kitaev, arXiv:quantph/9811052.
 [9] M. H. Freedman and D. A. Meyer, Found. Comp. Math. 1, 325 (2001).
 [10] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, J. Math. Phys. 43, 4452 (2002).
 [11] R. Raussendorf, J. Harrington, and K. Goyal, Ann. Phys. 321, 2242 (2006).
 [12] R. Raussendorf, J. Harrington, and K. Goyal, New J. Phys. 9, 199 (2007).
 [13] R. Raussendorf and J. Harrington, Phys. Rev. Lett. 98, 190504 (2007).
 [14] A. G. Fowler, A. M. Stephens, and P. Groszkowski, Phys. Rev. A 80, 052312 (2009).
 [15] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, Phys. Rev. A 86, 032324 (2012).
 [16] E. Knill, Nature (London) 434, 39 (2005).
 [17] K. M. Svore, D. P. DiVincenzo, and B. M. Terhal, Quantum Inf. Comput. 7, 297 (2007).
 [18] A. M. Stephens, A. G. Fowler, and L. C. L. Hollenberg, Quantum Inf. Comput. 8, 330 (2008).
 [19] A. M. Stephens and Z. W. E. Evans, Phys. Rev. A 80, 022313 (2009).
 [20] F. M. Spedalieri and V. P. Roychowdhury, Quantum Inf. Comput. 9, 666 (2009).
 [21] R. Stock and D. F. V. James, Phys. Rev. Lett. 102, 170501 (2009).
 [22] S. J. Devitt, A. G. Fowler, A. M. Stephens, A. D. Greentree, L. C. L. Hollenberg, W. J. Munro, and K. Nemoto, New J. Phys. 11, 083032 (2009).
 [23] R. Van Meter, T. D. Ladd, A. G. Fowler, and Y. Yamamoto, Int. J. Quantum Inf. 8, 295 (2010).
 [24] N. Y. Yao, L. Jiang, A. V. Gorshkov, P. C. Maurer, G. Giedke, J. I. Cirac, and M. D. Lukin, Nature Commun. 3, 800 (2012).
 [25] N. H. Nickerson, Y. Li, and S. C. Benjamin, Nat. Commun. 4, 1756 (2013).
 [26] C. Monroe, R. Raussendorf, A. Ruthven, K. R. Brown, P. Maunz, L.-M. Duan, and J. Kim, arXiv:1208.0391.
 [27] K. Nemoto, M. Trupke, S. J. Devitt, A. M. Stephens, K. Buczak, T. Nöbauer, M. S. Everitt, J. Schmiedmayer, and W. J. Munro, arXiv:1309.4277.
 [28] S. D. Barrett and T. M. Stace, Phys. Rev. Lett. 105, 200502 (2010).
 [29] D. S. Wang, A. G. Fowler, A. M. Stephens, and L. C. L. Hollenberg, Quantum Inf. Comput. 10, 456 (2010).
 [30] D. S. Wang, A. G. Fowler, and L. C. L. Hollenberg, Phys. Rev. A 83, 020302(R) (2011).
 [31] A. G. Fowler, A. C. Whiteside, and L. C. L. Hollenberg, Phys. Rev. Lett. 108, 180501 (2012).
 [32] S. J. Devitt, A. M. Stephens, W. J. Munro, and K. Nemoto, Nat. Commun. 4, 2524 (2013).
 [33] E. Novais and E. R. Mucciolo, Phys. Rev. Lett. 110, 010502 (2013).
 [34] P. Jouzdani, E. Novais, and E. R. Mucciolo, Phys. Rev. A 88, 012336 (2013).
 [35] J. Ghosh, A. G. Fowler, and M. R. Geller, Phys. Rev. A 86, 062318 (2012).
 [36] G. DuclosCianci and D. Poulin, Phys. Rev. Lett. 104, 050504 (2010).
 [37] G. DuclosCianci and D. Poulin, Quantum Inf. Comput. 14, 721 (2014).
 [38] S. Bravyi and J. Haah, Phys. Rev. Lett. 111, 200501 (2013).
 [39] P. Sarvepalli and R. Raussendorf, Phys. Rev. A 85, 022317 (2012).
 [40] J. R. Wootton and D. Loss, Phys. Rev. Lett. 109, 160503 (2012).
 [41] A. Hutter, J. R. Wootton, and D. Loss, arXiv:1302.2669.
 [42] J. R. Wootton, arXiv:1310.2393.
 [43] N. Delfosse, Phys. Rev. A 89, 012317 (2014).
 [44] J. Edmonds, Canad. J. Math. 17, 449 (1965).
 [45] H. Bombin and M. A. Martin-Delgado, Phys. Rev. Lett. 97, 180501 (2006).
 [46] A. J. Landahl, J. T. Anderson, and P. R. Rice, arXiv:1108.5738.
 [47] D. Gottesman, Ph.D. thesis, California Institute of Technology, 1997.
 [48] J. Harrington, Ph.D. thesis, California Institute of Technology, 2004.
 [49] C. Wang, J. Harrington, and J. Preskill, Ann. Phys. 303, 31 (2003).
 [50] V. Kolmogorov, Math. Program. Comput. 1, 43 (2009).
 [51] M. Saito and M. Matsumoto, in Monte Carlo and QuasiMonte Carlo Methods 2006, edited by A. Keller, S. Heinrich, and H. Niederreiter (Springer, Berlin, 2008), Vol. 2, p. 607.
 [52] S. Bravyi and A. Kitaev, Phys. Rev. A 71, 022316 (2005).
 [53] T. M. Stace and S. D. Barrett, Phys. Rev. A 81, 022317 (2010).
 [54] A. Honecker, M. Picco, and P. Pujol, Phys. Rev. Lett. 87, 047201 (2001).
 [55] F. Merz and J. T. Chalker, Phys. Rev. B 65, 054425 (2002).
 [56] M. Ohzeki, Phys. Rev. E 79, 021129 (2009).
 [57] S. L. A. de Queiroz, Phys. Rev. B 79, 174408 (2009).
 [58] T. Ohno, G. Arakawa, I. Ichinose, and T. Matsui, Nucl. Phys. B 697, 462 (2004).
 [59] D. Poulin, Phys. Rev. A 74, 052333 (2006).
 [60] Z. W. E. Evans and A. M. Stephens, Quantum Inf. Process. 11, 1511 (2012).
 [61] S. Bravyi and A. Vargo, Phys. Rev. A 88, 062308 (2013).
 [62] Y. Li, S. D. Barrett, T. M. Stace, and S. C. Benjamin, Phys. Rev. Lett. 105, 250502 (2010).
 [63] K. Fujii and Y. Tokunaga, Phys. Rev. Lett. 105, 250503 (2010).
 [64] A. M. Stephens, W. J. Munro, and K. Nemoto, Phys. Rev. A 88, 060301(R) (2013).