Spatial and internal control of atomic ensembles with radiofrequency and microwave driving

This week's AMOPP seminar was given by Dr German Sinuco-Leon of the University of Sussex on the topic of “Spatial and internal control of atomic ensembles with radiofrequency and microwave driving”. The abstract for this talk can be found below.

Spatial and internal control of atomic ensembles with radiofrequency and microwave driving

The ability to apply well-controlled perturbations to quantum systems is essential both to modern methodologies for studying their properties (e.g. high-precision spectroscopy) and to developing quantum technologies (e.g. atomic clocks and quantum processors). In most of the experimental platforms available today, such perturbations arise from the interaction of a quantum system with electromagnetic radiation, which creates harmonically oscillating couplings between the states of the system. Within this context, in this talk I will describe our recent studies of the use of low-frequency electromagnetic radiation to control the external and internal degrees of freedom of ultracold atomic ensembles [1,2]. I will outline the relation of this problem to Floquet engineering and the more general issue of describing the dynamics of driven quantum systems. Finally, I will explain the challenges of describing the quantum dynamics of driven systems and highlight the need to develop new conceptual and mathematical tools to identify universal characteristics and limitations of their dynamics.

[1] G. A. Sinuco-Leon, B. M. Garraway, H. Mas, S. Pandey, G. Vasilakis, V. Bolpasi, W. von Klitzing, B. Foxon, S. Jammi, K. Poulios, T. Fernholz, Microwave spectroscopy of radio-frequency dressed alkali atoms, Physical Review A, accepted (2019). [ArXiv:1904.12073].
[2] G. Sinuco-León and B.M. Garraway, Addressed qubit manipulation in radio-frequency dressed lattices, New Journal of Physics. 18, 035009 (2016)

State-selective field ionization of Rydberg positronium

All atomic systems, including positronium (Ps), can be excited to states with high principal quantum number n using lasers; these are called Rydberg states. Atoms in such states exhibit interesting features that can be exploited in a variety of ways. For example, Rydberg states have very long radiative lifetimes (on the order of 10 µs for our experiments). This is a particularly useful feature in Ps because when it is excited to large-n states, the overlap between the electron and positron wavefunctions is suppressed. The self-annihilation lifetime therefore becomes so long in comparison to the fluorescence lifetime that the effective lifetime of Ps in a Rydberg state becomes the radiative lifetime of that state: most Rydberg Ps atoms will decay back to the ground state before self-annihilating [Phys. Rev. A 93, 062513 (2016)]. The large distance between the positron and electron centers of charge in certain Rydberg states also means that they exhibit large static electric dipole moments, and thus their motion can be manipulated by applying forces with inhomogeneous electric fields [Phys. Rev. Lett. 117, 073202 (2016); Phys. Rev. A 95, 053409 (2017)].

In addition to these properties, Rydberg atoms have high tunnel ionization rates at relatively low electric fields. This property forms the basis for state-selective detection by electric field ionization. In a recent series of experiments, we have demonstrated state-selective field ionization of positronium atoms in Rydberg states (n = 18–25) in both static and time-varying (pulsed) electric fields.

The set-up for this experiment is shown below: the target (T) holds a SiO2 film that produces Ps when positrons are implanted onto it. The first grid (G1) allows us to control the electric field in the laser-excitation region, and a second grid (G2) with a variable voltage provides a well-defined ionization region. An electric field is applied either by holding a constant voltage on Grid 2, as in the static-field configuration, or by ramping the potential on Grid 2, as in the pulsed-field configuration.

Figure 1: Experimental arrangement showing separated laser excitation and field ionization regions.

In this experiment we detect the annihilation gamma rays from:

  • the direct annihilation of positronium

  • annihilations that occur when positronium crashes into the grids and chamber walls

  • annihilations that occur after the positron, released via the tunnel ionization process, crashes into the grids or chamber walls

We subtract the time-dependent gamma-ray signal recorded when ground-state Ps traverses the apparatus from the signal detected from Rydberg atoms when an electric field is applied in the ionizing region. This forms a background-subtracted signal that tells us where in time there is an excess or deficit of annihilation radiation compared to background (this SSPALS method is described further in NIM A 828, 163 (2016) and here).
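As a rough illustration of how such a background-subtracted trace might be formed (a minimal sketch only; the array names and the choice to normalise each trace to its total integral are assumptions for illustration, not our actual analysis code):

    import numpy as np

    def background_subtracted(rydberg_trace, ground_state_trace):
        # Normalise each gamma-ray time trace so the two can be compared,
        # then subtract the ground-state (background) trace from the
        # Rydberg (signal) trace, bin by bin.
        sig = np.asarray(rydberg_trace, dtype=float)
        bkg = np.asarray(ground_state_trace, dtype=float)
        return sig / sig.sum() - bkg / bkg.sum()

An excess of annihilations relative to background then shows up as positive values in the returned trace, and a deficit as negative values.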

 

Static Electric Field Configuration

In this version of the experiment, we let the excited positronium atoms fly into the ionization region, where they experience a constant electric field. In the case where only a very small electric field (~0 V/cm) is applied in the ionizing region, the excited atoms fly unimpeded through the chamber, as shown in the animation below. Consequently, the background-subtracted spectrum is identical to what we expect for a typical Rydberg signal (see the figure below for n = 20). There is a lack of ionization events early on (between 0 and 160 ns) compared to the background (ground-state) signal, which manifests itself as a sharp negative peak. This is because the lifetime of Rydberg Ps is orders of magnitude longer than the ground-state lifetime.

Later on at ~ 200 ns, we observe a bump that arises from an excess of Rydberg atoms crashing into Grid 2. Finally, we see a long positive tail due to long-lived Rydberg atoms crashing into the chamber walls.

 

Figure 2: Trajectory simulation of Rydberg Ps atoms travelling through the ~0 V/cm electric field region (left panel) and measured background-subtracted gamma-ray flux; the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).

On the other hand, when the applied electric field is large enough, all atoms are quickly ionized as they enter the ionizing region. Correspondingly, the ionization signal in this case is large and positive early on (again between 0 and 160 ns). Furthermore, instead of a long positive tail, we now have a long negative tail due to the lack of annihilations later in the experiment (since most, if not all, atoms have already been ionized). Importantly, since in this case field ionization occurs almost instantaneously as the atoms enter the ionization region, the shape of the initial ionization peak is a function of the velocity distribution of the atoms in the direction of propagation of the beam.

 

 

Figure 3: Trajectory simulation of Rydberg Ps atoms travelling through the ~2.6 kV/cm electric field region (left panel) and measured background-subtracted gamma-ray flux; the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).

We measure these annihilation signal profiles over a range of fields and calculate the signal parameter S. A positive value of S implies that there is an excess of ionization occurring within the ionization region, whereas a negative S means that there is a lack of ionization within the region with respect to background; if S is approximately equal to 0%, roughly half of the Ps atoms are being ionized. A plot of the experimental S parameter for different applied fields and for different n states is shown below.

Figure 4: Electric field scans for a range of n states from 18 to 25, showing that at low electric fields none of the states ionize (hence the negative values of S) and that, as the electric field is increased, different n states exhibit different ionizing-field thresholds.

It is clear that different n states can be distinguished using these characteristic S curves. However, the main drawback of this method is that both the background-subtracted profiles and the S curves are convolved with the velocity profile of the beam of Rydberg Ps atoms. This drawback can be eliminated by performing pulsed field ionization.
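As a sketch of how a single S value might be extracted from a background-subtracted trace of the kind described above (the integration window and percentage convention are illustrative assumptions, not necessarily those used in the published analysis):

    import numpy as np

    def s_parameter(diff_trace, t, t_start=0.0, t_stop=160e-9):
        # Integrate the background-subtracted trace over a chosen time
        # window (here 0-160 ns, matching the window quoted above) and
        # express the excess (positive) or deficit (negative) of
        # annihilations as a percentage.
        diff_trace = np.asarray(diff_trace, dtype=float)
        window = (np.asarray(t) >= t_start) & (np.asarray(t) < t_stop)
        return 100.0 * diff_trace[window].sum()

Repeating this for each applied field and each n then yields curves like those shown in Figure 4.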

Pulsed Electric Field Configuration

We have also demonstrated the possibility of distinguishing different Rydberg states of positronium by ionization in a ramped electric field. The set-up is the same as in the static-field scenario, but now, instead of fixing a potential on Grid 2, the potential on this grid is decreased from 3 kV to 0 kV, thereby increasing the field in the ionization region from ~0 V/cm to ~1800 V/cm (the initial 3 kV is necessary to help cool the Ps [New J. Phys. 17, 043059 (2015)]).

The advantage of performing state-selective field ionization this way is that we can allow most of the atoms to enter the ionization region before pulsing the field. This eliminates the dependence of the signal on the velocity distribution of the atoms, so the signal depends only on the ionization rates of the Rydberg state in the increasing electric field.
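The idea can be captured with a toy model (a sketch only: the ramp parameters and the step-like ionisation rate below are assumed for illustration and are not taken from our simulations). Each n state starts to ionise once the ramped field passes its threshold, so the time at which the ionisation signal peaks identifies the state:

    import numpy as np

    def ionisation_signal(F_threshold, F_max=1.8e5, ramp_time=1e-6,
                          gamma0=1e10, steps=2000):
        # Linear field ramp F(t) from 0 to F_max (V/m) over ramp_time (s).
        t = np.linspace(0.0, ramp_time, steps)
        F = F_max * t / ramp_time
        # Crude step-like ionisation rate: zero below the threshold field,
        # a large constant rate gamma0 (1/s) above it.
        rate = np.where(F > F_threshold, gamma0, 0.0)
        survival = np.exp(-np.cumsum(rate) * (t[1] - t[0]))
        # The ionisation-event rate peaks shortly after the threshold is crossed.
        return t, -np.gradient(survival, t)

    # A state with a lower threshold field (higher n) ionises earlier in the ramp:
    t, signal = ionisation_signal(F_threshold=5e4)
    print(t[np.argmax(signal)])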

Below is a plot of our results with a comparison to simulations (dashed lines). We see broad agreement between simulation and experiment, and we are able to distinguish between different Rydberg states according to where in time the ionization peak occurs. This means that we should be able to detect a change in an initially prepared Rydberg population due to some process such as microwave-induced transitions.

Figure 5: Pulsed-field ionization signal as a function of electric field for a range of n states.

The development of state-selective ionization techniques for Rydberg Ps opens the door to measuring the effect of blackbody transitions on an initially prepared Rydberg population and provides a methodology for detecting transitions between nearby Rydberg levels in Ps, which could also be used in electric-field cancellation methods to generate circular Rydberg states of Ps.

Ethics in research: what a student should know.

Ethics in research is something everyone is expected to understand and respect, from beginning students to established professors. However, such things are very rarely discussed formally. In effect, one is expected simply to know all about these matters by appealing to “common sense” and to the examples set by mentors. This is not an ideal situation, as is evidenced by numerous high-profile cases of scientific misconduct. The purpose of this post is to briefly mention some important aspects of ethical research practice, mostly to encourage further thought in students before they have a chance to be influenced by whatever environment they happen to be working in.

In some cases “misconduct” is easy to spot, and is then (usually) universally condemned. For example, making up or manipulating data is something we all know is wrong and for which there can never be a legitimate defense. At the same time, some students might see nothing wrong in leaving off a few outlier data points that clearly do not fit with others, or in failing to refer to some earlier paper. It is not always necessarily obvious what the right (that is to say, ethical) thing to do might be. What harm is there, after all, in putting an esteemed professor as a co-author on a paper, even though they have provided no real contribution? (Incidentally, merely providing lab space, equipment or even funding doesn’t, or shouldn’t, count as such). Well, there is harm in gift authorship; it devalues the work put in by the real authors, and makes it possible for a certain level of authority to guarantee future (perhaps unwarranted) success. And does it really matter if an extreme outlier is left off of an otherwise nice looking graph? The answer is yes, absolutely.

At the heart of most unethical behavior in science, or misconduct, is the nature of truth, and the much revered “scientific method”. Never mind that there is no such thing, or that the way science is actually done is not even remotely similar to the romantic notions so often associated with this very human activity. Nevertheless, there is a very strong element of trust implicit in the scientific endeavor. When we read scientific papers we have no choice but to assume that the experiments were carried out as reported, and that the data presented is what was actually observed. The entire scientific enterprise depends on this kind of trust; if we had to try and sort through potentially fraudulent reports we would never get anywhere. For this reason, trust in science is extremely important, and one could say that, regardless of any mechanisms of self correction that might exist in science, and the concomitant inevitability of discovery of fraud, we are almost compelled to trust scientists if we wish science to be done in a reasonable manner[a]. By the same token, those scientists who choose to betray this trust pollute not only their own research (and personal character), but all science. The enormity of this offense should not be understated. In particular, students should be aware of the fact that being a dishonest scientist is oxymoronic.

What is misconduct? A formal definition of scientific misconduct of the sort that would be necessary in legal action is an extremely difficult thing to produce. There have been many attempts, most of which are centered on the ideas of FFP: that is, fraud, falsification and plagiarism. These are things that we can easily understand and, most of the time, identify when they arise. However, things can (and invariably do) get complicated in real world situations, especially when people try to gauge intention, or when the available information is incomplete.

A definition of scientific misconduct, as it pertains to conducting experiments or observations to generate data, was given by Charles Babbage (Babbage 1830), and it still has a great deal of relevance today. Babbage is perhaps best known for his “difference engine”, a mechanical computer he designed to take the drudgery out of performing certain calculations. He was a well-respected scientist working in many areas, Lucasian Professor of Mathematics at Cambridge University and a Fellow of the Royal Society. His litany of offences, which is frequently cited in discussions of scientific misconduct, was categorized into four classes.

Babbage’s taxonomy of fraud:

1.) Hoaxing is not fraud in the usual sense, in that it is generally intended as a joke, or to embarrass or attack individuals (or institutions). A hoax is more like a prank than an attempt to benefit the hoaxer in the scientific realm, and it is not often that a hoax remains undisclosed for a long time. Hoaxes are not generally intended to promote the perpetrator, who will usually prefer to remain anonymous (at least until the jig is up). A hoax is far more likely to involve someone dressing up as Bigfoot, or lowering a Frisbee outside a window on a fishing line, than fabricating IV curves for electronic devices that have never been built.

A famous and amusing example is the so-called Sokal hoax (Sokal 1996). In 1996 Alan Sokal, a physicist from New York University, submitted a paper to the journal “Social Text”. This journal publishes work in the field of “postmodernism” or cultural studies, an area in which there had been many critiques of science based on unscientific criteria. For example, the idea that scientific objectivity was a myth, and that certain cultural elements (e.g., “male privilege” or western societal norms) are determining factors in the underlying nature of scientific discovery. This attitude vexed many physicists, leading to the so-called “science wars” (Ross 1996), and was the impetus for Sokal’s hoax. The paper he submitted to Social Text was entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. This article was deliberately written in impenetrable language (as was often the case in articles in Social Text, and other similar publications) but was in fact pure nonsense (as was also often the case in articles in Social Text and other similar publications). The paper was designed with the intention of impressing the editors, not only with the stature of the author (Sokal was then a respected physicist, and seems, at the time of writing some 16 years later, still to be one) but also by appealing to their ideological positions. Sokal revealed the hoax immediately after his paper was published; he never had any intention of allowing it to stand, but rather wanted to critique the journal. As one might expect, a long series of arguments arose from this hoax, although the main effect was to tempt more actual scientists into commenting on the “science wars” that were already occurring. In this sense Sokal may have done more harm than good with his hoax, as the inclusion of scientists in the “debate” served only to legitimize what had before been largely the province of people with no training in, or, frequently, understanding of, science.

 2.) Forging is just what it sounds like, simply inventing data and passing it off as real. Of course, one might do the same in a hoax, but the forger seeks not to fool his audience for a while, later to reveal the hoax for his or her own purposes, but to permanently perpetrate the deception. The forger will imitate data and make it look as real as possible, and then steadfastly stick to their claim that it is indeed genuine. Often such claims can stand up to scrutiny even once the forging has been exposed, since if it is done well the bogus data is in no way obviously so. This unfortunate fact means that we cannot know for sure how much forging is really going on. However, the forger does not have any new information and so cannot genuinely advance science. They may be able to appear to be doing so, by confirming theoretical predictions, for example, but such predictions are not always correct. Nevertheless, the forger must have a good knowledge of their field, and enough understanding of what is expected to be able to produce convincing data out of thin air. If one were to do this then, on rare occasions, it is even possible that one might guess at some reality, later proved, and really get away with it. However, the fleeting glory of a major discovery seems to be somewhat addictive to those so inclined, and some famous forgers have started out small, only to allow themselves to be tempted into such outrageous claims that from the outside it seems as though they wanted to get caught.

A well-known case in which exactly this happened is the “Schön affair”, which has been covered in detail in the excellent book “Plastic Fantastic” (Reich 2009). Hendrik Schön was a young and, apparently, brilliant physicist employed at Bell Labs in the early 2000s. His area of expertise was organic semiconductors, which held (and continue to hold) great promise for the production of cheap and small electronic devices. Schön published a large number of papers in the most high-profile journals (including many in Science and Nature) that purported to show organic crystals exhibiting remarkable electronic properties, such as superconductivity, the fractional quantum Hall effect, lasing, single-molecule switches and so on. At one point he was publishing almost one paper per week, which is impressive just from the point of view of being able to write at such a rate; most scientists will probably spend one or two months just writing a paper, never mind the time taken to actually get the data or design and build an experiment. Before the fraud was exposed, one professor at Princeton University[b], who had been asked to recommend Schön for a faculty position, refused to do so, and stated that Schön should be investigated for misconduct based solely on his extreme publication rate (Reich 2009).

Schön’s incredible success seemed to be predicated on his unique ability to grow high quality crystals (he was described on one occasion as having “magic hands”). This skill eluded other researchers, and nobody was able to reproduce any of Schön’s results. As he piled spectacular discovery upon spectacular discovery, suspicion grew. Eventually it was noticed that in some of his published data Schön had used the same curves, right down to the noise, to represent completely distinct phenomena, in completely different samples (that, in fact, had never really existed). After trying to claim that this was simply an accident, that he had just mislabeled some graphs and used the wrong data, more anomalies were revealed. The management at Bell labs started an investigation and Schön continued to make excuses, stating (amazingly, considering the quality of the real scientists that worked at this world leading institution) that he had not kept records in a notebook, and that he had erased his primary data to make space on his computer’s hard drive[c]!

None of these excuses were convincing, and indeed such poor research practices should themselves probably be grounds for dismissal, even in the absence of fraud. Fraud was not absent, however; it was prevalent, and on a truly astounding scale. Schön had invented all of his data, wholesale, had never grown any high-quality crystals, and had lied to many of his colleagues, at Bell Labs and elsewhere. He was fired, and his PhD was revoked by the University of Konstanz in Germany for “dishonorable conduct”. There was no suggestion that the rather pedestrian research Schön had completed for his doctorate was fraudulent, but his actual fraud was so egregious that his university felt that he had dishonored not just himself and their institution, but science itself, and as such did not deserve his doctorate.

It seems obvious in retrospect that Schön could never have expected to get away with his forging indefinitely. Even if he had been a more careful forger, and had made up distinct data sets for all of his papers, the failure of others to reproduce his work would have eventually revealed his conduct. Even before he was exposed, pressure was mounting on Schön to explain to other scientists how he made his crystals, or to provide them with samples. He had numerous, and very well respected, co-authors who themselves were becoming nervous about the increasingly frustrated complaints from the other groups unable to replicate his experiments. It was only a matter of time before it all came out, regardless of the manifestly fake published data.

This unfortunate affair raises some interesting questions about misconduct in science. For example, Schön’s many co-authors seem to have failed in their responsibilities. Although they were found innocent of any actual complicity in the fraud, as co-authors on so many, and on such astounding, papers, it seems clear that they should have paid more attention. Some of the co-authors were highly accomplished scientists, while Schön himself was relatively junior, which only compounds the apparent lack of oversight. Indeed, it is likely that the pressure of working for such distinguished people, and at such a well-respected institution as Bell Labs, was what first tempted Schön into his shameful acts.

No doubt winning numerous prizes, being offered tenured positions at an Ivy League school (Princeton; not bad if you can get an offer) and generally being perceived as the new boy-wonder physicist played a role in the seemingly insane need to keep claiming ever more incredible results. It is hard to escape the conclusion that some sort of mental breakdown contributed to this folly. However, another pathological element was also likely at play, which is that Schön was sure that his speculations would eventually be borne out by other researchers.

If this had actually happened, and if he had not been a poor forger (or, perhaps, an overworked forger, with barely enough time to invent the required data), he might have enjoyed many years of continued prosperity. As it happens, some of the fake results published by Schön have in fact been shown to be roughly correct; sometimes the theorists guess right. Ultimately though, forging will usually be found out, and the more that is forged the faster this will happen. The only way to forge and get away with it is to know in advance what nature will do, and if you know that the forging is redundant[d].

3.) Trimming is the practice of throwing away some data points that seem to be at odds with the rest of the measurements, or at least are not as close to the “right” answer as expected. In this case the misconduct is primarily designed to make the experimenter seem more competent, although actually a poor knowledge of statistics might mean that it has the opposite effect; outliers sometimes seem out of place, but, to those who have a good knowledge of statistics, their absence may be quite suspicious. Because it can seem so innocuous, trimming is probably much more common than forging. When he invents data from nothing, the forger has already committed himself to what he must surely know is an immoral and heinous action; whatever justification he might employ towards this action cannot shield him from such knowledge. The trimmer, on the other hand, might feel that his scientific integrity is unaffected, and that all he is doing is making his true data a little more presentable; so as to better explain his truth. A trimmer who believes this is wrong may well be less offensive than the forger, but his actions are still unconscionable, will still constitute a fraud, and may equally well pervert the scientific record, even when the essential facts of the measurement remain. For example, by trimming a data set to look “nicer” you might significantly alter the statistical significance of some slight effect. Indeed, in this way you can create a non-existent effect, or destroy evidence for a real one; without the truth of nature’s reference there is no way to know the difference.

The trimmer has a great advantage over the forger, insofar as he is not trying to second guess nature, but rather obtains the answer from nature in the form of his primary measurements, and then attempts to make his methodology in so doing appear to be “better” than it really was. The trimmed data, therefore, will likely not give a very different answer than the untrimmed set, but will seem to have been obtained with greater skill and precision.

There are no well-known examples of trimming because, by its very nature, it is hard to detect. Who will know if a few data points are omitted from an otherwise legitimate data set? What indicators might allow us to distinguish a careful experimenter from a carefree trimmer? Even Newton has been accused of trimming some of his observations of lunar orbits to obtain better agreement with his theory of gravitation (Westfall 1994). Objectively, even for the amoral, trimming is quite pointless when one considers that, if found out, the trimmer will suffer substantial damage to his reputation, whereas even if it is not found out, the advantage to that reputation is slight.

4.) Cooking refers to selecting data that agree with the hypothesis you are trying to prove and rejecting other data, which may be equally valid, but which don’t: in other words, cherry picking. This is similar to trimming inasmuch as it involves selecting from real data (i.e., not forged), but with no real justification for doing so other than one’s own presupposition about what the “right” answer is. It differs from trimming in that the cook will take lots of data and then select those that serve his purpose, while the trimmer will simply make the best out of whatever data he has. This is lying, just as forging is, but with the limitation that nature herself has at least suggested which lies to tell.

Cooking also touches on an ill-defined aspect of experimental science, that of the “art” of the experimentalist. It is certainly true to say that experimental physics (and no doubt other branches of science too) contains some element of artistry. Setting up a complicated apparatus and making it work is an underappreciated skill, but a crucial one. When taking data with a temperamental system it is routine to throw some of it out, but only if one can justify doing so in terms of the operation of the device. Almost all experiments include some tuning-up time in which data are neglected because of various experimental problems. However, once all of these problems have been ironed out and the data collection begins “for real”, one has to be very careful about excluding any measurements. If some obvious problem arises (a power supply failed, a necessary cable was unplugged, batboy was seen chewing on the beamline, whatever it may be) then there is no moral difficulty associated with ditching the resulting data. When this sort of thing happens (and it will), the best thing to do is to throw out all of the data and start again.

The situation gets more complicated if an intermittent problem arises during an experiment that affects only some of the data. In that case you might want to select the “good” data and get rid of the “bad”. Unfortunately all of the resulting data will then be “ugly”. This is not a good idea because it borders on cooking. It is much better to throw away all potentially suspect data and start again. In data collection, as in life, even the appearance of impropriety can be a problem, regardless of whether propriety has in fact been maintained. It may be the case that there is an easy way to identify data points that are problematic (for example, the detector output voltage might be zero sometimes and normal at others), but there will remain the possibility that the “good” data is also affected by the underlying experimental problem leading to “bad” data, but in a less obvious manner. The best thing to do in this situation will depend on many factors. It is not always possible to repeat experiments; you might have high confidence in some data because of the exact nature of your apparatus and so on. Even so, “when in doubt, throw it out” is the safest policy.

Cooking is a little like forging, but with an insurance policy. That is, since the data is not completely made up there is a better chance that it does really correspond to nature, and therefore you will not be found out later because you are (sort of) measuring something real. This is not necessarily the case, however, and will depend on the skill of the cook. By selecting data from a large array it is usually possible to produce “support” for any conclusion one wants, and as a result the cook who seeks to prove an incorrect hypothesis loses his edge over the forger.

A well-known case illustrates that it is not so simple to decouple experimental methodology from cookery. Robert Millikan is well known for his oil-drop experiment, in which he was able to obtain a very accurate value for the charge on the electron, work for which he was awarded the Nobel Prize. This work has been the subject of some controversy, however, since the historian of science Gerald Holton [Holton 1978] went through Millikan’s notebooks and revealed that not all of the available data had been used, even though Millikan specifically said that it had. Since then there have been numerous arguments about whether the selection of data was simply the shrewd action of an expert experimentalist, or an act of dishonest cooking. As discussed by numerous authors [e.g., Segerstråle, 1995] it is neither one nor the other, and the situation is more nuanced. The reality is that his answer was remarkably accurate, as we now know. The question then is: did Millikan know what to expect? If he did not, then accusations of cooking seem unfounded, since he would have had to know what he was trying to “demonstrate”. If he got the right answer without knowing it in advance, he must have been doing good science, right? WRONG! If Millikan was cooking (or even doing a little trimming) and he somehow got the right answer, this does not in any way mitigate the action. Since we cannot say whether he really did do any cooking or not, one might believe that getting the right answer implies that he was being honest; as it turns out, he would have obtained an even more accurate answer if he had used all of his data (Judson 2004).

Furthermore, the question of whether Millikan already knew the “right” answer is meaningless. The right answer is the one we get from an experiment, and we only know it to the extent that we can trust the experiment. An army of theoreticians can be (have been, perhaps now are) wrong. Dirac predicted that the electron g factor would be exactly 2, and it almost is. Knowing only this, the cook or trimmer might be sorely tempted to nudge the data to be exactly 2.0000 and not 2.0023, but that would be a terrible mistake, one that would miss out on a fantastic discovery. One can only hope that somewhere in the world there is at least one cook who allowed a Nobel Prize (or some major discovery) to slip through his dishonest fingers, and cannot even tell anyone how close he came. Did Millikan knowingly select data to make his measurements look more accurate? There is no way to know for sure.

Pathological science is a term coined by Irving Langmuir, the well-known physicist. He defined this as science in which the observed effects are barely detectable, and the signals cannot be increased, but are nevertheless claimed to be measured with great accuracy; pathological science also frequently involves unusual theories that seem to contradict what is known (Langmuir 1968). However, pathological science is not usually an act of deliberate fraud, but rather one of self-delusion. This is why the effect in question is always at the very limit of what can be detected, for in this case all kinds of mechanisms can be used (even subconsciously) to convince one that the effect really is real. In this context, then, it is worth asking: does the cook necessarily know what he is doing? That is, when one wishes to believe something so very strongly, perhaps it becomes possible to fool one’s own brain! This kind of self-delusion is more common than you might think, and happens in everyday life all the time. Although we cannot directly choose what we believe is true, when we don’t know one way or the other, our psychology makes it easy for us to accept the explanation that causes the least internal conflict (also known as cognitive dissonance).

When a researcher is so deluded as to engage in pathological science, it is difficult to categorize this activity as one of misconduct. The forger has to know what he is doing. The trimmer or cook generally tries to make his data look the way he wants, but if he does so without real justification then it is wrong. By making justifications that seem reasonable but are not, one could conceivably fool oneself into thinking that there was nothing improper about cooking or trimming. Certainly, this will sometimes really be the case, which only makes it easier to come up with such justifications.

In many regards the pathological scientist is not so different from one who is simply wrong. The most famous example of this might be the cold fusion debacle (Seife 2009). Pons and Fleischmann claimed to see evidence for room temperature fusion reactions and amazed the world with their press conferences (not so much with their data). Later their fusion claims turned out to be false, as proved by Kelvin Lynn, Mario Gai and others (Gai et al 1989). However, the effect they said they saw was also observed by some other researchers, and as a result many reasons were promulgated as to why one experiment might see it and another might not. Then, when it was pointed out that there ought to be neutron emission if there was fusion, and no neutrons were observed, it became necessary to modify the theory so as to make the neutrons disappear. This was classic pathological science, hitting just about every symptom laid out by Langmuir.

Plagiarism was not explicitly discussed by Babbage but does feature prominently in modern definitions of misconduct. Directly copying other people’s work is relatively rare for professional scientists, perhaps because it is so likely to be discovered. There have been some cases in which people have taken papers published in relatively obscure journals and submitted them, almost verbatim, to other, even more obscure, journals. Since only relatively unknown work is susceptible to this sort of plagiarism it does little damage to the scientific record, but is no less of an egregious act for this. Certainly the psychology of the “scientist” who would engage in such actions is no less damaged than that of a Schön.

In one case a plagiarist, Andrzej Jendryczko, tried to pass off the work of others as his own by translating it (from English, mostly) into Polish, and then publishing direct copies in specialized Polish medical journals under his own name (e.g., Judson 2004). This may have seemed like a safe strategy, but a well-read Polish-American physician had no trouble tracking down all the offending papers using the internet. Indeed, wholesale plagiarism is now very easy to uncover via online databases and a multitude of specialized software applications. For the student, plagiarism should seem like a very risky business indeed. With minimal effort it can be found out, and no matter how clever the potential plagiarist might be, Google is usually able to do better with sheer force of computing power and some very sophisticated search algorithms. As is often the case, this sort of cheating does not even pass a rudimentary cost-benefit analysis[e]. It is another inverse to Pascal’s wager (the idea that it is better to believe in God than not, just because the eternity of paradise always beats any temporary secular position) inasmuch as the actual gain attending ripping off some (necessarily) obscure article found online is virtually nil, whereas exposure as a fraud for doing this will contaminate everything you have ever done, or will ever do, in science. How can this ever be worth even considering?

Plagiarism is not as simple as copying the work of others; providing incomplete references in a paper could be construed as a form of plagiarism, insofar as one is not providing the appropriate background, thereby perhaps implying more originality than really exists. References are a very important part of scientific publishing, and not just because it is important to give due respect to the work that you might be building upon. A well referenced paper not only shows that you have a good understanding and knowledge of your field, but will also make it possible for the reader to properly follow the research trail, which puts everything in context and helps enormously in understanding the work[f].

Stealing the ideas of others, if not their words, is also, of course, a form of plagiarism. This is harder to prove, which may be why it is something most often discussed in the context of grant proposals. This forms a rather unfortunate set of circumstances in which a researcher finds himself having to submit his very best ideas to some agency (in the hope of obtaining funding), which are promptly sent directly to his most immediate competitors for evaluation! After all, it is your competition who are best placed to evaluate your work. In most cases, if an obvious conflict exists, it is possible to specify individuals who should not be consulted as referees, although this is more common in the publication of papers than it is in proposal reviews. For researchers in niche fields, in which there may not be as much rivalry, this is not going to be a common problem. For researchers in direct competition, however, things might not be so nice. There have in fact been some well-publicized examples of this sort of plagiarism, but it is probably not very common because grant proposals don’t often contain ideas that have not been discussed, to some degree, in the literature, and in those rare cases when this isn’t so, it is probably going to be obvious if such an idea is purloined. Also, let us not forget, most scientists are really not so unscrupulous. Thus, for this to happen you’d need an unlikely confluence of an unusually good idea sent to an abnormally unprincipled referee who happened to be in just the right position to make use of it.

 

References & Further Reading  

Misconduct in science is a vast area of study, and the brief synopsis we have given here is just the tip of the iceberg. The use of Babbage’s taxonomy is commonplace, and in-depth discussions of every aspect of this can be found in many books. The following is a small selection. The book by Judson is particularly recommended as it is fairly recent and goes into just the right amount of detail in some important case studies.

https://retractionwatch.com/

Judson H. F. (2004). The Great Betrayal: Fraud in Science, Houghton Mifflin Harcourt.

Alfredo K and Hart H, “The University and the Responsible Conduct of Research: Who is Responsible for What?” Science and Engineering Ethics, (2010).

Sterken, C., “Writing a Scientific Paper III. Ethical Aspects”, EAS Publications Series 50, 173 (2011)

Reich, E. S. (2009). Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, Palgrave Macmillan.

Broad, W. and Wade, N. (1983). Betrayers of the Truth: Fraud and deceit in the halls of science, Simon & Schuster.

Babbage C. (1830) Reflections on the Decline of Science in England, August M. Kelley, New York (1970).

Holton G. (1978) Subelectrons, presuppositions and the Millikan-Ehrenhaft dispute, In: Holton G., ed., The Scientific Imagination, Cambridge University Press, Cambridge, UK.

Langmuir I. (1989) Pathological science, Physics Today 42: 36–48. Reprinted from the original in General Electric Research and Development Center Report 86-C-035, April 1968

Ross, Andrew, ed. (1996). Science Wars. Duke University Press.

Segerstråle, U. (1995). Good to the last drop? Millikan stories as “canned” pedagogy, Science and Engineering Ethics 1, 197.

Sokal, A., D. and Bricmont, J. (1998). Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science. Picador USA: New York.  

Westfall, R., S. (1994). The Life of Isaac Newton. Cambridge University Press.

Seife, C. (2009). Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking,  Penguin Books.

Gai, M. et al., (1989). Upper limits on neutron and gamma ray emission from cold fusion, Nature 340, 29.

 

Footnotes

[a] What is this reasonable manner? We just mean that if one cannot trust scientific reports to be honest representations of real work undertaken in good faith, then standing on the shoulders of our colleagues (who may or may not be giants) becomes pointless, everything has to be independently verified, and the whole scientific endeavor becomes exponentially more difficult.

[b] Reich’s book does not name the professor in question, but clearly he or she had a good grasp on how much can be done, even by a highly motivated genius.

[c] This is a particularly ludicrous claim since to a scientist experimental data is a highly valuable commodity, whereas disc space is trivially available: the idea that one would delete primary data just to make space on a hard drive is like burning down one’s house in order to have a bigger garden.

[d] Another way to increase your chances of getting away with forgery is to fake data that nobody cares about. However, this obviously has an even less attractive cost-benefit ratio.

[e] We obviously do not mean to suggest that some clever form of plagiarism that can escape detection does pass a cost-benefit analysis, and is therefore a good idea: the uncounted cost in any fraud, even a trivial one, is the absolute destruction of one’s scientific integrity. We just mean that this sort of thing, which is never worthwhile, is even more foolish if there isn’t some actual (albeit temporary) advantage.

[f] On a more mundane level, a referee who has not been cited, but should have been, is less likely to look favorably upon your paper than he or she might otherwise have done. This also applies to grant proposals, so it is in your own interests to make sure your references are correct and proper.

Rydberg Ps electrostatically guided in curved quadrupole

The latest efforts of our research at UCL have been focused on manipulating positronium (Ps) atoms in states of high principal quantum number n (Rydberg states) [PRL 114, 173001]. In one of our latest works we showed how we can exploit the large electric dipole moment of low-field-seeking Rydberg states (those states which have a positive Stark shift) to confine them in a quadrupole “guide” [PRL 117, 073202].
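As a crude sketch of why a low-field-seeking state is confined by a quadrupole field (illustrative numbers only: the effective dipole moment d_eff and the field gradient are made-up values, and the real guide is of course three-dimensional):

    import numpy as np

    M_PS = 2 * 9.109e-31   # positronium mass: two electron masses (kg)

    def transverse_motion(d_eff=1e-26, grad=1e6, x0=1e-3, v0=0.0,
                          dt=1e-9, steps=5000):
        # In an idealised 2D quadrupole the field magnitude grows linearly
        # with distance from the axis, |E| = grad * |x|, so a low-field-seeking
        # state with Stark energy U = d_eff * |E| feels a constant-magnitude
        # force pushing it back towards the axis, where |E| is smallest.
        x, v = x0, v0
        positions = []
        for _ in range(steps):
            force = -d_eff * grad * np.sign(x)
            v += force / M_PS * dt
            x += v * dt
            positions.append(x)
        return np.array(positions)   # oscillates about x = 0: the atom is guided

High-field-seeking states feel the opposite force and are pushed out of the guide, which is what makes the transmission state-selective.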

As a direct follow-up to that study, we devised a modified version of the quadrupole guide with a 45° bend, which allows us to perform velocity selection on the guided atoms by tuning the efficiency with which Rydberg Ps atoms are transmitted through the bend. In addition, in our previous set-up we experienced technical difficulties because the detection scheme was in line with our positron beam, so having a curved guide is also beneficial for that reason.

Figure: Schematic of the curved quadrupole guide chamber.

The schematic figure above depicts our current experimental setup, which we have used to guide Rydberg Ps atoms around a 45° bend into a region off-axis from our positron beam. We have not yet implemented velocity selection, but we have clear evidence that we can efficiently guide Ps atoms in this configuration.

Figure: Time-of-flight distributions recorded by the “LYSO C” and “NaI” detectors at the end of the curved guide (left panel) and the background-subtracted trigger rate (right panel).

The left panel in the figure above shows the time-of-flight (TOF) distribution of n = 14 atoms excited to high-field-seeking states (as measured by the detectors at the end of the curved guide, i.e. “LYSO C” and “NaI”), together with that for a background wavelength which is off-resonant with any transition, essentially acting as a “laser on”/“laser off” measurement. The right panel shows the background-subtracted trigger rate for this measurement (“laser on” – “laser off”), which shows clear evidence of atoms with a TOF arrival time of ~8 \mu \mathrm{s}.

In addition to being a stepping stone towards demonstrating velocity selection via the acceptance of the curved section of the guide, this set-up may eventually be developed into a ring-like Stark decelerator and other Ps atom optics.

Production and time-of-flight measurements of high Rydberg states of Positronium

One of our recent studies focused on measuring the lifetimes of Rydberg states of positronium (Ps) [PRA 93, 062513]. However, one of the limitations that prevented us from measuring the lifetimes of states with higher principal quantum number (n) is the fact that such states can easily be ionised by the electric fields generated by the electrodes in our laser-excitation region (these electrodes are normally required to achieve an excitation electric field of nominally ~0 V/cm).

We have recently implemented a simple scheme to overcome this complication, whereby we use a high-voltage switch to discharge the electrodes in the interaction region after the laser excitation has taken place.

Figure: Background-subtracted SSPALS spectra for n = 18 and n = 19, with the high-voltage switch off and on.

The figure above shows the background-subtracted spectra (the SSPALS detector trace is recorded with a background and a resonant wavelength; the two traces are then normalised and subtracted from each other) for n = 18 and n = 19. It is clear from the “Switch Off” curves that when the high-voltage switch is not used (and the voltages on all electrodes are always on), most of the annihilations happen at early times, especially around ~100 ns; this is the time it takes for the atoms to travel out of the low-field region and become field-ionised by the DC voltage on the electrodes.

On the other hand, the “Switch On” curves show that both n = 18 and 19 have many more delayed events (after ~ 400 ns) due to Rydberg Ps being able to travel for much longer distances before annihilating when the switch is used to discharge the electrode biases.

Figure: Single-gamma-ray time-of-flight distributions for n = 18 and n = 19 with the switch on and off.

The figure above shows data taken with a detector set up for single-gamma-ray detection, approximately 12 cm away from the Ps production target, in the same experiment as described for the previous figure. It is clear from these data that the time of flight (TOF) to this detector is ~2 \mu \mathrm{s}. However, in this case it is clear that only the n = 19 state benefited from having the switch on, indicating that n = 19 is the smallest-n state for which this scheme is necessary in our current electric-field configuration.
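As a rough consistency check (simple arithmetic using the approximate numbers quoted above, not a fitted result), the ~2 \mu \mathrm{s} arrival time over the ~12 cm flight path corresponds to a mean Ps speed of order 10^4–10^5 m/s:

    distance = 0.12   # m, approximate target-to-detector distance quoted above
    tof = 2e-6        # s, approximate time of flight read from the figure
    print(distance / tof)   # ~6e4 m/s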

Comparing the SSPALS and TOF figures, it can be seen that even though the n = 18 SSPALS signal changed drastically, the n = 18 TOF distribution remained the same. This is a clear example of how changes in the SSPALS spectrum discussed in the first figure are indicative of changes in the atom distributions close to the Ps production region, but are not necessarily correlated with TOF distributions measured at different positions along the Ps flight paths.

These methods will eventually lead to more accurate measurements of the lifetimes of higher-n states of Ps, and to the possibility of using those states, with their higher electric dipole moments, in future atom-optics experiments such as Ps electrostatic lenses and Stark decelerators.

Efficient production of n = 2 Positronium in S states

We routinely excite Positronium (Ps) into its first excited state (n = 2) via 1-photon resonant excitation [NJP. 17 043059], and even though most of the time this is an intermediate step for subsequent excitation to Rydberg (high n) states [PRL. 114, 173001], there is plenty of interesting physics to be explored in n = 2 alone, as we discussed in one of our recent studies [PRL. 115, 183401 and  PRA. 93, 012506].

In this study we showed that the polarisation of the excitation laser, as well as the electric field that the atoms are subjected to, have a drastic effect on the effective lifetime of the excited states and when Ps annihilates.

Figure: The signal parameter S(%) as a function of electric field for two laser polarisations.

The figure above shows the data for two laser polarisations, plotting the signal parameter S(%) as a function of electric field. This is essentially a measure of how likely Ps is to annihilate compared to ground-state (n = 1) Ps: if S(%) is positive then n = 2 Ps in that configuration annihilates with a shorter lifetime than n = 1 Ps (142 ns), whereas if S(%) is negative then n = 2 Ps annihilates with a longer lifetime than 142 ns. These longer lifetimes are present for the parallel polarisation (panel a).

Using this polarisation, and applying a large negative or positive electric field (around 3 kV/cm), provides such long lifetimes because the excited state contains a significant amount of triplet S character (2S), a substate of n = 2 with spin S = 1 and \ell = 0. If the Ps atoms are then allowed to travel (adiabatically) to a region of nominally zero electric field (our experimental set-up [RSI 86, 103101] guarantees such transport), then they will be made up almost entirely of this long-lived triplet S character, and will thus annihilate at much later times than the background n = 1 atoms. These delayed annihilations can easily be detected by looking at the gamma-ray spectrum recorded by our LYSO detectors [NIMA 828, 163] when the laser is on resonance (“Signal”) and subtracting it from the spectrum recorded when the laser is off resonance (“Background”).

The figure above shows such spectra taken with the parallel laser polarisation, at a field where there should be minimal 2S production (a), and at a field where the triplet S character is maximised (b). It is obvious that in the second case there are far more annihilations at later times, indicated by the positive values of the data at times up to 800 ns. This is clear evidence that we have efficiently produced n = 2 triplet S states of Ps using single-photon excitation. Previous studies of 2S Ps produced such states either by collisional methods [PRL 34, 1541], which are much less efficient than single-photon excitation, or by two-photon excitation, which is also less efficient, requires much more laser power and is limited by photo-ionisation [PRL 52, 1689].

This observation is the initial step before we begin a new set of experiments in which we will attempt to measure the n = 2 hyperfine structure of Ps using microwaves!

P.A.M. Dirac

Yesterday marked the 114th anniversary of the birth of Paul Adrien Maurice Dirac, one of the world’s greatest ever theoretical physicists. Born on the 8th of August 1902 in Bristol (UK), Dirac studied for his PhD at St John’s College, Cambridge, where he would subsequently discover the equation that now bears his name,

i\gamma^{\mu}\partial_{\mu}\psi = m\psi.

The Dirac equation is a solution to the problem of describing an electron in a way that is consistent with both quantum mechanics and Einstein’s theory of relativity. His solution was unique in its natural inclusion of the electron “spin”, which had to otherwise be invoked to account for fine structure in atomic spectra. His brilliant contemporary, Wolfgang Pauli, described Dirac’s thinking as acrobatic. And several of Dirac’s theories are regarded as among the most beautiful and elegant of modern physics.

An important prediction of the Dirac equation is the existence of the anti-electron (also known as the positron). This particle is equal in mass to the more familiar electron, but has the opposite electric charge. Dirac published his theory of the anti-electron in 1931 – two years before “the positive electron” was discovered by Carl Anderson. Dirac accurately mused that the anti-proton might also exist, and most physicists now believe that all particles possess an antimatter counterpart. But antimatter is apparently – and as yet inexplicably – much scarcer than matter.

In 1933 Dirac shared the Nobel prize in physics with Erwin Schrödinger “for the discovery of new productive forms of atomic theory”. Dirac died aged 82 in 1984. He’s commemorated in Westminster Abbey by an inscription in the Nave, not far from Newton’s monument. Separated in life by more than two centuries, Paul Dirac and Sir Isaac Newton are arguably the fathers of antimatter and gravity.

http://www.westminster-abbey.org/our-history/people/paul-dirac

The Strangest Man by Graham Farmelo is a fascinating account of Dirac’s life and work.

Rydberg Positronium Special Report, ICPEAC 2015

One of the conferences that we attended during the summer (ICPEAC 2015) had the necessary set-up to film one of our talks about our recent Rydberg paper, which is summarised in a published IOP abstract.

You can watch our talk, along with the rest of the lectures, on ICPEAC’s YouTube channel: https://www.youtube.com/watch?v=Cytjc2Er2Co.

ANTIMATTER: who ordered that?

The existence of antimatter became known following Dirac’s formulation of relativistic quantum mechanics, but this incredible development was not anticipated. These days conjuring up a new particle or field (or perhaps even new dimensions) to explain unknown observations is pretty much standard operating procedure, but it was not always so. The famous “who ordered that” statement of I. I. Rabi was made in reference to the discovery of the muon, a heavy electron whose existence seemed a bit unnecessary at the time; in fact it was the harbinger of a subatomic zoo.

The story of Dirac’s relativistic reformulation of the Schrödinger wave equation, and the subsequent prediction of antiparticles, is particularly appealing; the story is nicely explained in a recent biography of Dirac (Farmelo 2009). As with Einstein’s theory of relativity, Dirac’s relativistic quantum mechanics seemed to spring into existence without any experimental imperative. That is to say, nobody ordered it! The reality, of course, is a good deal more complicated and nuanced, but it would not be inaccurate to suggest that Dirac was driven more by mathematical aesthetics than experimental anomalies when he developed his theory.

The motivation for any modification of the Schrödinger equation is that it does not describe the energy of a free particle in a way that is consistent with the special theory of relativity. At first sight it might seem like a trivial matter to simply re-write the equation to include the energy in the necessary form, but things are not so simple. In order to illustrate why this is so it is instructive to briefly consider the Dirac equation, and how it was developed. For explicit mathematical details of the formulation and solution of the Dirac equation see, for example, Griffiths 2008.

The basic form of the Schrödinger wave equation (SWE) is

(-\frac{\hbar^2}{2m}\nabla^2+V)\psi = i\hbar \frac{\partial}{\partial t}\psi.                                                    (1)

The fundamental departure from classical physics embodied in eq (1) is the quantity \psi , which represents not a particle but a wavefunction. That is, the SWE describes how this wavefunction (whatever it may be) will behave. This is not the same thing at all as describing, for example, the trajectory of a particle. Exactly what a wavefunction is remains to this day rather mysterious. For many years it was thought that the wavefunction might be simply a handy mathematical tool for describing atoms and molecules, standing in for some more complete underlying theory (e.g., Bohm 1952). This idea, originally suggested by de Broglie in his “pilot wave” description, has been ruled out, at least in its local “hidden variable” forms, by numerous ingenious experiments (e.g., Aspect et al., 1982). It now seems unavoidable to conclude that wavefunctions represent actual descriptions of reality, that the “weirdness” of the quantum world is an intrinsic part of that reality, and that the concept of a “particle” is only an approximation, appropriate to a coarse-grained view of the world. Nevertheless, by following the rules that have been developed for applying the SWE, and quantum physics in general, it is possible to describe experimental observations with great accuracy. This is the primary reason why many physicists have, for over 80 years, eschewed the philosophical difficulties associated with wavefunctions and the like, and embraced the sheer predictive power of the theory.

We will not discuss quantum mechanics in any detail here; there are many excellent books on the subject at all levels (e.g., Dirac 1934, Shankar 1994, Schiff 1968). In classical terms the total energy of a particle E can be described simply as the sum of the kinetic energy (KE) and the potential energy (PE) as

KE+PE=\frac{p^2}{2m}+V=E                                                 (2)

where p = mv represents the momentum of a particle of mass m and velocity v. In quantum theory such quantities are described not by simple formulae, but rather by operators that act on the wavefunction. We describe momentum via the operator -i \hbar\nabla and energy by i\hbar \partial / \partial t , and so on. The operator acting on \psi on the left-hand side of eq (1) represents the total energy of the system, and is known as the Hamiltonian, H. Thus, the SWE may be written as

H\psi=i\hbar\frac{\partial\psi}{\partial t}=E\psi                                                              (3)

The reason why eq (3) is non-relativistic is that the energy-momentum relation in the Hamiltonian is described in the well-known non-relativistic form. As we know from Einstein, however, the total energy of a free particle does not reside only in its kinetic energy; there is also the rest mass energy, embodied in what may be the most famous equation in all of physics:

E=mc^2.                                                                    (4)

This equation tells us that a particle of mass m has an equivalent energy E, with c² being a rather large number, illustrating that even a small amount of mass (m) can, in principle, be converted into a very large amount of energy (E). Despite being so famous as to qualify as a cultural icon, the equation E = mc² is, at best, incomplete. In fact the total energy of a free particle (i.e., V = 0) as prescribed by the theory of relativity is given by

E^2=m^2c^4 +p^2c^2.                                                        (5)

Clearly this will reduce to E = mc² for a particle at rest (i.e., p = 0): or will it? Actually, we shall have E = ±mc², and in some sense one might say that the negative solutions to this energy equation represent antimatter, although, as we shall see, the situation is not so clear cut. In order to make the SWE relativistic then, one need only replace the classical kinetic energy E = p²/2m with the relativistic energy E = [m²c⁴ + p²c²]^(1/2). This sounds simple enough, but the square root sign leads to quite a lot of trouble! This is largely because when we make the “quantum substitution” p \rightarrow -i\hbar\nabla  we find we have to deal with the square root of an operator, which, as it turns out, requires some mathematical sophistication. Moreover, in quantum physics we must deal with operators that act upon complex wavefunctions, so that negative square roots may in fact correspond to a physically meaningful aspect of the system, and cannot simply be discarded as might be the case in a classical system.
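One way to see where the trouble comes from is to make the quantum substitution inside the square root and expand it formally (a sketch of the standard argument):

\sqrt{m^2c^4 - \hbar^2 c^2 \nabla^2} = mc^2\left(1 - \frac{\hbar^2\nabla^2}{2m^2c^2} - \frac{\hbar^4\nabla^4}{8m^4c^4} - \dots\right),

an infinite series containing spatial derivatives of arbitrarily high order alongside only a first-order time derivative. Such an operator is hard to interpret as a local wave equation and treats space and time very differently, contrary to the spirit of relativity.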

To avoid these problems we can instead start with eq (5) interpreted via the operators for momentum and energy so that eq (3) becomes

(- \frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2)\psi=\frac{m^2 c^2}{\hbar^2}\psi.                                                (6)

This equation is known as the Klein-Gordon equation (KGE), although it was first obtained by Schrödinger in his original development of the SWE. He abandoned it, however, when he found that it did not properly describe the energy levels of the hydrogen atom. It subsequently became clear that when applied to electrons this equation implied two things that were considered to be unacceptable: negative energy solutions and, even worse, negative probabilities. We now know that the KGE is not appropriate for electrons, but it does describe some massive spin-zero particles when interpreted in the framework of quantum field theory (QFT); neither such particles (mesons) nor QFT were known when the KGE was formulated.
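To make the step from eq (5) to eq (6) explicit (a short sketch, using the same operator correspondences as before, E \rightarrow i\hbar\partial/\partial t and p \rightarrow -i\hbar\nabla):

E^2\psi = (m^2c^4 + p^2c^2)\psi \;\;\Rightarrow\;\; -\hbar^2\frac{\partial^2\psi}{\partial t^2} = m^2c^4\psi - \hbar^2 c^2\nabla^2\psi,

which, after dividing through by -\hbar^2 c^2 and rearranging, is exactly eq (6).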

Some of the problems with the KGE arise from the second order time derivative, which is itself a direct result of squaring everything to avoid the intractable mathematical form of the square root of an operator. The fundamental connection between time and space at the heart of relativity leads to a similar connection between energy and momentum, a connection that is overlooked in the KGE. Dirac was thus motivated by the principles of relativity to keep a first order time derivative, which meant that he had to confront the difficulties associated with using the relativistic energy head on. We will not discuss the details of its derivation but will simply consider the form of the resulting Dirac equation:

(c\,\boldsymbol{\alpha} \cdot \mathbf{p}+\beta mc^2)\psi=i\hbar \frac{\partial\psi}{\partial t}.                                                     (7)

This equation has the general form of the SWE, but with some significant differences. Perhaps the most important of these is that the Hamiltonian now includes both the kinetic energy and the electron rest mass energy, and the coefficients \alpha_i and \beta have to be 4×4 matrices in order to satisfy the equation (the algebraic conditions that force this are sketched below, after eq (8)). That is, the Dirac equation is really a matrix equation, and the wavefunction it describes must be a four-component wavefunction. Although there are no problems with negative probabilities, the negative energy solutions seen in the KGE remain. These initially seemed to be a fatal flaw in Dirac’s work, but they were tolerated because in every other respect the equation was spectacularly successful. It reproduced the hydrogen atomic spectrum perfectly (at least, as perfectly as it was known at the time) and even included small relativistic effects, as a proper relativistic wave equation should. For example, when the electromagnetic interaction is included the Dirac equation predicts an electron magnetic moment:

\mu_e = \frac{\hbar e}{2m} = \mu_B                                                                   (8)

where \mu_B is known as the Bohr magneton. This expression is also in agreement with experiment, or very nearly so: it was later discovered that the magnetic moment of the electron differs from the value predicted by eq (8) by about 0.1% (Kusch and Foley, 1948). The fact that Dirac’s theory was able to predict these quantities was considered a triumph, despite the troublesome negative energy solutions.
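Returning to the coefficients in eq (7): the algebraic conditions that force \alpha_i and \beta to be 4×4 matrices follow from a standard argument (see, e.g., Griffiths 2008). Requiring that acting twice with the Hamiltonian of eq (7) reproduces the energy-momentum relation of eq (5) demands

\alpha_i^2 = \beta^2 = 1, \qquad \alpha_i\alpha_j + \alpha_j\alpha_i = 0 \;\;(i \neq j), \qquad \alpha_i\beta + \beta\alpha_i = 0,

where 1 is the identity. No ordinary numbers, and no 2×2 or 3×3 matrices, can satisfy all of these anticommutation relations simultaneously; the smallest objects that do are 4×4 matrices, which is why the wavefunction must have four components.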

Another intriguing aspect of the Dirac equation was noticed by Schrödinger in 1930. He realised that interference between positive and negative energy terms would lead to oscillations of the wavepacket of an electron (or positron) about some central point at the speed of light. This fast motion was given the name zitterbewegung (German for “trembling motion”). The underlying physical mechanism that gives rise to the zitterbewegung effect may be interpreted in several different ways, but one way to look at it is as an interaction of the electron with the zero-point energy of the (quantised) electromagnetic field. Such electronic oscillations have not been directly observed as they occur at a very high frequency (~10²¹ Hz), but since zitterbewegung also applies to electrons bound to atoms, this motion can affect atomic energy levels in an observable way. In a hydrogen atom the zitterbewegung acts to “smear out” the electron charge over a larger area, lowering the strength of its interaction with the proton charge. Since S states have a non-zero probability density at the origin, the effect is larger for these than it is for P states. The splitting between the hydrogen 2S₁/₂ and 2P₁/₂ states, which are degenerate in the Dirac theory, is known as the Lamb shift (Lamb, 1947). This shift, which amounts to ~1 GHz, was observed in an experiment by Willis Lamb and his student Robert Retherford (not to be confused with Ernest Rutherford!). The need to explain this shift, which requires a proper description of the electron interacting with the electromagnetic field, gave birth to the theory of quantum electrodynamics, pioneered by Bethe, Tomonaga, Schwinger and Feynman.
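As a rough check of the scales quoted above, the Bohr magneton of eq (8), the ~0.1% anomaly, and the characteristic zitterbewegung frequency can all be estimated from standard constants. The short script below is only an illustrative sketch, using CODATA values from scipy.constants:

# Illustrative estimates of the magnitudes quoted above (sketch only).
from math import pi
from scipy.constants import hbar, elementary_charge, electron_mass, fine_structure, c

mu_B = elementary_charge * hbar / (2 * electron_mass)   # Bohr magneton, eq (8)
a_e = fine_structure / (2 * pi)                          # leading correction to the electron moment
omega_zb = 2 * electron_mass * c**2 / hbar               # zitterbewegung angular frequency

print(f"mu_B      = {mu_B:.4e} J/T")       # approx 9.274e-24 J/T
print(f"alpha/2pi = {a_e:.5f}")            # approx 0.00116, i.e. about 0.1%
print(f"omega_zb  = {omega_zb:.2e} rad/s") # approx 1.6e21 rad/s, matching the ~10^21 scale quoted

The zitterbewegung value simply reflects the energy gap 2m_ec² between the positive and negative energy branches, i.e. \omega \approx 2m_ec^2/\hbar \approx 1.6\times10^{21}\ \mathrm{s^{-1}}.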

The solutions to the SWE for free particles (i.e., neglecting the potential V) are of the form

\psi = A \mathrm{exp}(-iEt / \hbar).                                                       (9)

Here A is some function that depends only on the spatial properties of the wavefunction (i.e., not on t). Note that this wavefunction represents two electron states, corresponding to the two separate spin states. The corresponding solutions to the Dirac equation may be represented as

                                                            \psi_1 = A_1 \mathrm{exp}(-iEt / \hbar),

\psi_2 = A_2 \mathrm{exp}(+iEt / \hbar).                                                   (10)

Here \psi_2 represents the negative energy solutions that have caused so much trouble. The existence of these states is central to the theory; they cannot simply be labelled as “unphysical” and discarded. The complete set of solutions is required in quantum mechanics, in which everything is somewhat “unphysical”. More properly, since the wavefunction is essentially a complex probability amplitude that yields a real probability density when its absolute value is squared, the negative energy solutions are no less physical than the positive energy solutions; it is in fact simply a matter of convention as to which states are positive and which are negative. However you set things up, you will always have some “wrong” energy states that you can’t get rid of. Thus, Dirac was able to eliminate the negative probabilities and produce a wave equation that was consistent with special relativity, but the negative energy states turned out to be a fundamental part of the theory and could not be eliminated, despite many attempts to get rid of them.
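This point can be made explicit with eq (10): the time-dependent phase factors cancel when the modulus is squared,

|\psi_1|^2 = |A_1|^2 \quad \mathrm{and} \quad |\psi_2|^2 = |A_2|^2,

so nothing in the probability density itself distinguishes the “negative energy” solutions as any less physical than the positive ones.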

After his first paper in 1928 (The quantum theory of the electron) Dirac had established that his equation was a viable relativistic wave equation, but the negative energy aspects remained controversial. He worried about this for some time, and tried to develop a “hole” theory to explain their seemingly undeniable existence. A serious problem with negative energy solutions is that one would expect all electrons to decay into the lowest energy state available, which would be the negative energy states. Since this would not be consistent with observations there must, so Dirac reasoned, be some mechanism to prevent it. He suggested that the states were already filled with an infinite “sea” of electrons, and therefore the Pauli Exclusion Principle would prevent such decay, just as it prevents more than two electrons from occupying the lowest energy level in an atom. (Note that this scheme does not work for Bosons, which do not obey the exclusion principle). Such an infinite electron sea would have no observable properties, as long as the underlying vacuum has a positive “bare” charge to cancel out the negative electron charge. Since only changes in the energy density of this sea would be apparent, we would not normally notice its presence. Moreover, Dirac suggested that if a particle were missing from the sea the resulting hole would be indistinguishable from a positively charged particle, which he speculated was a proton, protons being the only positively charged subatomic particles known at the time.

This idea was presented in a paper in 1930 (A Theory of Electrons and Protons, Dirac 1930). The theory was less than successful, however, and the deficiencies served only to undermine confidence in the entire Dirac theory. Attempts to identify holes as protons only made matters worse; it was shown independently by Heisenberg, Oppenheimer and Pauli that the holes must have the electron mass, but of course protons are almost 2000 times heavier. Moreover, the instability between electrons and holes completely ruled out stable atomic states made from these entities (bad news for hydrogen, and all other atoms). Eventually Dirac was forced to conclude that the negative energy solutions must correspond to real particles with the same mass as the electron and a positive charge. He called these anti-electrons (Quantised Singularities in the Electromagnetic Field, Dirac 1931).

This almost reluctant conclusion was not based on a full understanding of what the negative energy states were, but rather the fact that the entire theory, which was so beautiful in other ways that it was hard to resist, depended on them. It turns out that to properly understand the negative energy solutions requires the formalism of quantum field theory (QFT). In this description particles (and antiparticles) can be created or destroyed, so it is no longer necessarily appropriate to consider these particles to be the fundamental elements of the theory. If the total number of particles in a system is not conserved then one might prefer to describe that system in terms of the entities that give rise to the particles rather than the particles themselves. These are the quantum fields, and the standard model of particle physics is at its heart a QFT. By describing particles as oscillations in a quantum field not only do we have an immediate mechanism by which they may be created or destroyed, but the problem of negative energies is also removed, as this simply becomes a different kind of variation in the underlying quantum field. Dirac didn’t explicitly know this at the time, although it would be fair to say that he essentially invented QFT, when he produced a quantum theory that included quantized electromagnetic fields (Dirac, 1927, The Quantum Theory of the Emission and Absorption of Radiation). This led, eventually, to what would be known as quantum electrodynamics. Dirac would undoubtedly have been able to make much more use of his creation if he had not been so appalled by the notion of renormalization. Unfortunately this procedure, which in some ways can be thought of as subtracting infinite quantities from each other to leave a finite quantity, was incompatible with his sense of mathematical aesthetics.

So, although Dirac initially struggled with the interpretation of his theory, there can be no question that he did explicitly predict the existence of the positron before it was experimentally observed. This observation came almost immediately, in cloud chamber experiments conducted by Carl Anderson in California (C. D. Anderson, The apparent existence of easily deflectable positives, Science 76, 238, 1932). Curiously, however, Anderson was not aware of the prediction, and the timing of his discovery was apparently coincidental. We will discuss this remarkable observation in a later post.

*This post is adapted from an as-yet unpublished book chapter by D. B. Cassidy and A. P. Mills, Jr.

 

References:

Griffiths, D. (2008). Introduction to Elementary Particles Wiley-VCH; 2nd edition.

Farmelo, “The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom” Basic Books, New York, (2011).

Dirac, P.A.M. (1927). The Quantum Theory of the Emission and Absorption of Radiation, Proceedings of the Royal Society of London, Series A, Vol. 114, p. 243.

P. A. M. Dirac, Proc. R. Soc. Lond. A 117, 610 (1928).

P. A. M. Dirac, Proc. R. Soc. Lond. A 126, 360 (1930).

P. A. M. Dirac, Proc. R. Soc. Lond. A 133, 60 (1931).

Anderson, C. D. (1932). The apparent existence of easily deflectable positives, Science 76, 238.

A. Aspect, J. Dalibard, and G. Roger (1982). Experimental Test of Bell’s Inequalities Using Time-Varying Analyzers, Phys. Rev. Lett. 49, 1804.

P. Kusch and H. M. Foley “The Magnetic Moment of the Electron”, Phys. Rev. 74, 250 (1948).

System modification for Rydberg Ps imaging

A key milestone along the road to Ps gravity measurements is control of the motion of long-lived states of positronium. Using methods previously developed for atoms and molecules, we aim to manipulate low-field-seeking Stark states within the Rydberg-Stark manifold (see below) with inhomogeneous electric fields [1, 2].

[Figure: Rydberg-Stark energy-level manifold for n = 11]

The force exerted on Rydberg atoms by an inhomogeneous electric field arises from their electric dipole moment, and depends on the principal quantum number n, the parabolic quantum number k (which ranges from -(n-1-|m|) to n-1-|m| in steps of 2), and the gradient of the electric field strength F [3, 4]. The figure above shows an example of the Rydberg-Stark manifold for n = 11.
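As a sketch of the expected expression (the standard first-order Rydberg-Stark result quoted in refs. [3, 4] for hydrogenic atoms), the force is approximately

f = -\nabla E_{n,k} \approx -\frac{3}{2}\, n\, k\, e\, a\, \nabla F,

where a is the relevant Bohr radius (a_0 for hydrogen, and 2a_0 for Ps because of its smaller reduced mass). Low-field-seeking states, whose energy increases with F, are therefore pushed towards regions of weak electric field, which is what makes guiding and deceleration with shaped electrodes possible.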

We have recently modified our experimental system to accommodate an MCP for imaging Ps atoms. This involved extending our beamline with another multi-port vacuum chamber, within which we should be able to reproduce laser excitation of Ps to Rydberg states. These will be formed at the centre of the chamber and directed along a 45° path towards the MCP. If imaging Ps* proves successful, we will then use electrodes to create the inhomogeneous electric fields needed to manipulate their flight path.

The addition of the new vacuum chamber to our beamline is shown below.

[Figure: the new vacuum chamber added to the beamline]

Refs.

[1] S. D. Hogan and F. Merkt (2008). Demonstration of Three-Dimensional Electrostatic Trapping of State-Selected Rydberg Atoms. Physical Review Letters, 100:043001. http://dx.doi.org/10.1103/PhysRevLett.100.043001.

[2] E. Vliegen, P. A. Limacher and F. Merkt (2006). Measurement of the three-dimensional velocity distribution of Stark-decelerated Rydberg atoms. European Physical Journal D, 40:73-80. http://dx.doi.org/10.1140/epjd/e2006-00095-1.

[3] E. Vliegen and F. Merkt (2006). Normal-Incidence Electrostatic Rydberg Atom Mirror. Physical Review Letters, 97:033002. http://dx.doi.org/10.1103/PhysRevLett.97.033002.

[4] S. D. Hogan (2012). Cold atoms and molecules by Zeeman deceleration and Rydberg-Stark deceleration, Habilitation Thesis. Laboratory of Physical Chemistry, ETH Zurich. https://www.ucl.ac.uk/phys/amopp/people/stephen_hogan/publications.