Systematic quantum supremacy

I have been hearing more and more experimental groups claim that they are approaching so-called quantum supremacy, at least for analog quantum simulators. As of now, most talks I see rely on the following justification:

  1. First, a heuristic, often uncontrolled derivation is presented, arguing that an idealized model (such as the Bose-Hubbard model or the XX spin chain) can be realized in the lab.
  2. Numerical simulations of said idealized model are carried out on state-of-the-art classical computers.
  3. Data from the lab is found to agree with these simulations.
  4. The group claims that it will soon go beyond what the classical computer can simulate, thus achieving quantum supremacy.

The issue I have with the above is: how do you know your errors are under control? At least some other people seem to be thinking about this, but it does not seem to be a widely acknowledged criticism.

In particular, one would like some theoretical tool by which to systematically organize sources of error, together with observables that could be measured in the lab to quantify those errors experimentally.

One tool that comes to mind is an effective Lagrangian written only in terms of the degrees of freedom of interest. An example I am aware of where this is done explicitly is the experimental realization of the Lieb-Liniger model using bosons in an atomic waveguide. Although the leading-order result is the Lieb-Liniger model, higher-order corrections appear, and a detailed analysis shows they are suppressed by quantum degeneracy effects.
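To make this concrete, here is a schematic sketch (not taken from the specific analysis alluded to above) of what such an effective description looks like: the leading-order Lieb-Liniger Hamiltonian for a 1D bosonic field $\psi(x)$ with contact coupling $g_2$, plus the first local correction one generically expects in an effective-field-theory expansion, a three-body contact term with some coefficient $g_3$ whose value depends on the microscopic (waveguide) details:

```latex
H_{\mathrm{eff}} = \int dx \left[ \frac{\hbar^2}{2m}\,
  \partial_x \psi^\dagger\, \partial_x \psi
  + \frac{g_2}{2}\, \psi^{\dagger 2} \psi^2
  + \frac{g_3}{6}\, \psi^{\dagger 3} \psi^3
  + \cdots \right]
```

The first two terms are the Lieb-Liniger model; everything in the tail is an error term of the simulation, organized by the dimension of the local operator.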

While in that instance the parameter of interest (namely the three-body interaction) was calculated, it could also be measured, and by measuring it one could ascertain how closely one's experiment realizes the Lieb-Liniger model. A similar protocol should be applicable more generally.

The use of a Lagrangian formalism, however, is obviously limited to conservative dynamics. Experiments are often open systems, or at a bare minimum our theoretical tool should be able to parametrize their openness systematically. This leads naturally to the overlap of two fields that are usually discussed separately: open quantum systems and effective field theory (although some recent work has started to explore their shared real estate).
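One standard way to parametrize such openness (a sketch of the general framework, not a claim about any particular experiment) is the Lindblad master equation, in which each jump operator $L_k$ with rate $\gamma_k$ encodes one channel of coupling to the environment:

```latex
\dot{\rho} = -\frac{i}{\hbar}\,[H, \rho]
  + \sum_k \gamma_k \left( L_k \rho\, L_k^\dagger
  - \frac{1}{2}\,\{ L_k^\dagger L_k,\, \rho \} \right)
```

One could imagine organizing the $\gamma_k$ and $L_k$ in a systematic expansion, in the same spirit as the tower of corrections to an effective Lagrangian, with each rate in principle measurable.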

I think developing these kinds of tools is essential if one wants to quantify the errors of an analog simulation that surpasses classical computers.
