AMOPP Seminar: Towards enhanced interferometry using quantum states of light

On Wednesday 17th of October we had the pleasure of welcoming Dr. Chris Wade from Oxford University to give a seminar to the AMOPP group about progress towards interferometry with exotic quantum states of light, more specifically Holland-Burnett states. This was a very interesting talk with a great mix of theory and experimental results. The abstract can be seen below.

Towards enhanced interferometry using quantum states of light

Quantum metrology is concerned with the enhanced measurement precision that may be gained by exploiting quantum mechanical correlations. In the scenario presented by optical interferometry, several successful implementations have already been demonstrated, including gravitational wave detectors [1] and lab-scale experiments [2,3]. However, there are still open problems to be solved, including loss tolerance and scalability. In this seminar I will present progress in implementing loss-tolerant Holland-Burnett states [4], and work searching for other practical states to implement [5].

[1] Schnabel et al., Nat. Commun. 1, 121 (2010)
[2] Slussarenko et al., Nat. Photon. 11, 700 (2017)
[3] Yonezawa et al., Science 337, 1514 (2012)
[4] Holland and Burnett, Phys. Rev. Lett. 71, 1355 (1993)
[5] Knott et al., Phys. Rev. A 93, 033859 (2016)

AMOPP Seminar: The Measurement Postulates of Quantum Mechanics are Redundant

On Wednesday 10th October we had Dr. Luis Masanes from within the UCL AMOPP group give a very interesting seminar. His talk focused on the fundamental questions posed by the measurement postulates of quantum mechanics, and on how they are redundant given the other postulates that form the basis of the theory. Dr. Masanes was kind enough to provide a copy of his slides here, and the abstract can be seen below.

The Measurement Postulates of Quantum Mechanics are Redundant

Understanding the core content of quantum mechanics requires us to disentangle the hidden logical relationships between the postulates of this theory. The theorem presented in this work shows that the mathematical structure of quantum measurements, the formula for assigning outcome probabilities (Born’s rule) and the post-measurement state-update rule, can be deduced from the other quantum postulates, often referred to as “unitary quantum mechanics”. This result unveils a deep connection between the dynamical and probabilistic parts of quantum mechanics, and it brings us one step closer to understanding what this theory is telling us about the inner workings of Nature.

State-selective field ionization of Rydberg positronium

All atomic systems, including positronium (Ps), can be excited to states with high principal quantum number n using lasers; these are called Rydberg states. Atoms in such states exhibit interesting features that can be exploited in a variety of ways. For example, Rydberg states have very long radiative lifetimes (on the order of 10 µs for our experiments). This is a particularly useful feature in Ps because when it is excited to large-n states, the overlap between the electron and positron wavefunctions is suppressed. The self-annihilation lifetime then becomes so long compared to the fluorescence lifetime that the effective lifetime of Ps in a Rydberg state is simply the radiative lifetime of that state; most Rydberg Ps atoms decay back to the ground state before self-annihilating [Phys. Rev. A 93, 062513 (2016)]. The large distance between the positron and electron centers of charge in certain Rydberg states also means that they exhibit large static electric dipole moments, and thus their motion can be manipulated by applying forces with inhomogeneous electric fields [Phys. Rev. Lett. 117, 073202 (2016), Phys. Rev. A 95, 053409 (2017)].
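To see roughly why the radiative lifetime dominates, here is a minimal back-of-the-envelope sketch (our illustration, not analysis code from this work). It assumes the textbook scaling that the annihilation rate of a triplet nS state falls as 1/n^3 from the 142 ns ground-state ortho-Ps value, and takes a fixed ~10 µs radiative lifetime as representative of the Rydberg states discussed here; states with l > 0 annihilate even more slowly.

```python
# Rough illustration of why Rydberg Ps lives for its radiative lifetime.
# Assumptions (illustrative, not measured values from this work): the annihilation
# rate of a triplet nS state scales as 1/n^3 from the 142 ns ground-state
# ortho-Ps lifetime, and the radiative lifetime is of order 10 microseconds.

TAU_OPS_GROUND_NS = 142.0   # ground-state ortho-Ps annihilation lifetime (ns)
TAU_RADIATIVE_US = 10.0     # assumed radiative lifetime of the Rydberg state (us)

def lifetimes_us(n):
    """Return (annihilation lifetime, effective lifetime) in microseconds."""
    tau_ann_us = TAU_OPS_GROUND_NS * n**3 * 1e-3          # 1/n^3 rate -> n^3 lifetime
    tau_eff_us = 1.0 / (1.0 / tau_ann_us + 1.0 / TAU_RADIATIVE_US)
    return tau_ann_us, tau_eff_us

for n in (15, 20, 25):
    tau_ann, tau_eff = lifetimes_us(n)
    print(f"n = {n}: annihilation ~ {tau_ann:7.0f} us, effective ~ {tau_eff:.2f} us")
```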

In addition to these properties, Rydberg atoms have high tunnel ionization rates at relatively low electric fields. This property forms the basis of state-selective detection by electric field ionization. In a recent series of experiments, we have demonstrated state-selective field ionization of positronium atoms in Rydberg states (n = 18–25) in both static and time-varying (pulsed) electric fields.
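For a feel for how strongly the required field depends on n, the sketch below evaluates the simple classical over-the-barrier estimate, assuming F ≈ E_b²/4 in atomic units with the Ps binding energy E_b = 1/(4n²) a.u. (half the hydrogen value), i.e. F ≈ 1/(64 n⁴). Real thresholds depend on the Stark sub-state and on how the field is applied, so these numbers are only an order-of-magnitude guide, not our measured values.

```python
# Classical over-the-barrier estimate of the ionization field for Rydberg Ps.
# Assumption: F = E_b^2 / 4 (atomic units) with E_b = 1/(4 n^2) a.u., giving
# F ~ 1/(64 n^4). This neglects Stark shifts and ramp-rate effects, so it is an
# order-of-magnitude guide only, not the measured thresholds.

ATOMIC_FIELD_KV_PER_CM = 5.142e6   # one atomic unit of electric field, in kV/cm

def classical_threshold_kv_per_cm(n):
    return ATOMIC_FIELD_KV_PER_CM / (64.0 * n**4)

for n in range(18, 26):
    print(f"n = {n}: F_ionize ~ {1e3 * classical_threshold_kv_per_cm(n):4.0f} V/cm")
```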

The set-up for this experiment is shown below. The target (T) holds a SiO2 film that produces Ps when positrons are implanted onto it. The first grid (G1) allows us to control the electric field in the laser excitation region, and a second grid (G2), held at a variable voltage, provides a well-defined ionization region. The electric field is produced either by applying a constant voltage to Grid 2 (the static field configuration) or by ramping the potential on Grid 2 (the pulsed field configuration).

Figure 1: Experimental arrangement showing separated laser excitation and field ionization regions.

In this experiment we detect the annihilation gamma rays from:

  • the direct annihilation of positronium

  • annihilations that occur when positronium crashes into the grids and chamber walls

  • annihilations that occur after the positron, released via the tunnel ionization process, crashes into the grids or chamber walls

We subtract the time-dependent gamma-ray signal recorded when ground-state Ps traverses the apparatus from the signal recorded for Rydberg atoms with an electric field applied in the ionizing region. This forms a background-subtracted signal that tells us where in time there is an excess or deficit of annihilation radiation compared to background (this SSPALS method is described further in NIM A 828, 163 (2016) and here).
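A minimal sketch of how such a background-subtracted spectrum can be put together is shown below. This is our illustration of the idea only: the arrays, binning and integral normalisation are placeholders, not the analysis used in the publications.

```python
import numpy as np

# Illustration of forming a background-subtracted gamma-ray time spectrum.
# 'rydberg' and 'ground' are assumed to be detector counts in common time bins,
# taken with Rydberg excitation (field applied) and with ground-state Ps.
# Normalising each spectrum to its integral before subtracting is one simple
# choice; the published analysis may normalise differently.

def background_subtracted(rydberg, ground):
    ryd = np.asarray(rydberg, dtype=float)
    gnd = np.asarray(ground, dtype=float)
    return ryd / ryd.sum() - gnd / gnd.sum()   # >0: excess annihilation; <0: deficit

# toy spectra: a prompt ground-state-like decay and a delayed Rydberg excess
t = np.arange(0, 600, 10)                                  # time bins (ns)
ground = 1000.0 * np.exp(-t / 142.0) + 5.0
rydberg = 0.3 * ground + 40.0 * np.exp(-((t - 250.0) / 50.0) ** 2)
print(background_subtracted(rydberg, ground)[:5])
```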

 

Static Electric Field Configuration

In this version of the experiment, we let the excited positronium atoms fly into the ionization region, where they experience a constant electric field. When only a small electric field (~0 V/cm) is applied in the ionizing region, the excited atoms fly unimpeded through the chamber, as shown in the animation below. Consequently, the background-subtracted spectrum is identical to what we expect for a typical Rydberg signal (see the figure below for n = 20). There is a lack of ionization events early on (between 0 and 160 ns) compared to the background (ground-state) signal, which manifests itself as a sharp negative peak. This is because the lifetime of Rydberg Ps is orders of magnitude longer than the ground-state lifetime.

Later on at ~ 200 ns, we observe a bump that arises from an excess of Rydberg atoms crashing into Grid 2. Finally, we see a long positive tail due to long-lived Rydberg atoms crashing into the chamber walls.

 

Figure 2: Trajectory simulation of Rydberg Ps atoms travelling through the ~0 V/cm electric field region (left panel) and the measured background-subtracted gamma-ray flux, where the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).

On the other hand, when the applied electric field is large enough, all atoms are quickly ionized as they enter the ionizing region. Correspondingly, the ionization signal in this case is large and positive early on (again between 0 and 160 ns). Furthermore, instead of a long positive tail, we now have a long negative tail due to the lack of annihilations later in the experiment (since most, if not all, atoms have already been ionized). Importantly, since in this case field ionization occurs almost instantaneously as the atoms enter the ionization region, the shape of the initial ionization peak is a function of the velocity distribution of the atoms in the direction of propagation of the beam.


Figure 3: Trajectory simulation of Rydberg Ps atoms travelling through the ~2.6 kV/cm electric field region (left panel) and the measured background-subtracted gamma-ray flux, where the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).

We measure these annihilation signal profiles over a range of fields and calculate the signal parameter S. A positive value of S implies that there is an excess of ionization occurring within the ionization region, whereas a negative S means that there is a deficit of ionization within the region with respect to the background. Therefore, if S is approximately equal to 0%, only about half of the Ps atoms are being ionized. A plot of the experimental S parameter for different applied fields and different n states is shown below.

Figure 4: Electric field scans for a range of n states from 18 to 25, showing that at low electric fields none of the states ionize (hence the negative values of S) and that, as the electric field is increased, the different n states exhibit different ionization thresholds.

It is clear that different n states can be distinguished using these characteristic S curves. However, the main drawback of this method is that both the background-subtracted profiles and the S curves are convolved with the velocity profile of the beam of Rydberg Ps atoms. This drawback can be eliminated by performing pulsed field ionization.
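Purely to make the S parameter above concrete, here is one plausible (hedged) version of such a quantity: integrate the background-subtracted spectrum over the window in which the atoms are inside the ionization region and quote it as a percentage. The exact definition and time window used in the analysis are given in the paper; this is only our illustration.

```python
import numpy as np

# Toy S-like parameter (not the exact published definition): the integral of the
# background-subtracted spectrum over the time window in which the atoms are in
# the ionization region, quoted as a percentage. S > 0 indicates an excess of
# annihilations (ionization) in that window, S < 0 a deficit.

def s_parameter(subtracted, t_ns, window=(0.0, 160.0)):
    subtracted = np.asarray(subtracted, dtype=float)
    t_ns = np.asarray(t_ns, dtype=float)
    in_window = (t_ns >= window[0]) & (t_ns <= window[1])
    return 100.0 * subtracted[in_window].sum()

# e.g., using the toy arrays from the sketch above:
# S = s_parameter(background_subtracted(rydberg, ground), t)
```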

Pulsed Electric Field Configuration

We have also demonstrated the possibility of distinguishing different Rydberg states of positronium by ionization in a ramped electric field. The set-up is the same as in the static field scenario, but now, instead of fixing the potential on Grid 2, the potential on this grid is ramped from 3 kV down to 0 V, thereby increasing the field from ~0 V/cm to ~1.8 kV/cm (the initial 3 kV is necessary to help cool down the Ps [New J. Phys. 17, 043059 (2015)]).

The advantage of performing state-selective field ionization this way is that we can allow most of the atoms to enter the ionization region before pulsing the field. This removes the dependence of the signal on the velocity distribution of the atoms, so the signal depends only on the ionization rate of that Rydberg state in the increasing electric field.
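The sketch below illustrates the idea with a hypothetical linear ramp: each n state crosses its (classically estimated) ionization threshold at a different time, so the arrival time of the ionization peak labels the state. The ramp duration, peak field and threshold formula are illustrative assumptions, not our experimental parameters.

```python
# Illustration: with a linearly ramped field, different n states ionize at
# different times. The ramp duration, peak field and the classical threshold
# F ~ 1/(64 n^4) a.u. are illustrative assumptions, not experimental values.

ATOMIC_FIELD_KV_PER_CM = 5.142e6
RAMP_TIME_NS = 200.0       # assumed ramp duration
F_MAX_KV_PER_CM = 1.8      # assumed field at the end of the ramp

def threshold_kv_per_cm(n):
    return ATOMIC_FIELD_KV_PER_CM / (64.0 * n**4)

def ionization_time_ns(n):
    """Time into the ramp at which the classical threshold is crossed."""
    return RAMP_TIME_NS * threshold_kv_per_cm(n) / F_MAX_KV_PER_CM

for n in (18, 20, 22, 25):
    print(f"n = {n}: threshold crossed ~ {ionization_time_ns(n):5.1f} ns into the ramp")
```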

Below is a plot of our results with a comparison to simulations (dashed lines). We see broad agreement between simulation and experiment, and we are able to distinguish between different Rydberg states according to where in time the ionization peak occurs. This means that we should be able to detect a change in an initially prepared Rydberg population due to some process such as microwave-induced transitions.

Figure 5: Pulsed-field ionization signal as a function of electric field for a range of n states.

The development of state-selective ionization techniques for Rydberg Ps opens the door to measuring the effect of blackbody transitions on an initially prepared Rydberg population, and provides a methodology for detecting transitions between nearby Rydberg levels in Ps, which could also be used in electric field cancellation schemes to generate circular Rydberg states of Ps.

Ethics in research: what a student should know.

Ethics in research is something everyone is expected to understand and respect, from beginning students to established professors. However, very rarely are such things discussed formally. In effect, one is expected to simply know all about these matters by appealing to “common sense” and to the examples set by mentors. This is not an ideal situation, as is evidenced by numerous high-profile cases of scientific misconduct. The purpose of this post is to briefly mention some important aspects of ethical research practices, mostly to encourage further thought in students before they have a chance to be influenced by whatever environment in which they are working.

In some cases “misconduct” is easy to spot, and is then (usually) universally condemned. For example, making up or manipulating data is something we all know is wrong and for which there can never be a legitimate defense. At the same time, some students might see nothing wrong in leaving off a few outlier data points that clearly do not fit with others, or in failing to refer to some earlier paper. It is not always necessarily obvious what the right (that is to say, ethical) thing to do might be. What harm is there, after all, in putting an esteemed professor as a co-author on a paper, even though they have provided no real contribution? (Incidentally, merely providing lab space, equipment or even funding doesn’t, or shouldn’t, count as such). Well, there is harm in gift authorship; it devalues the work put in by the real authors, and makes it possible for a certain level of authority to guarantee future (perhaps unwarranted) success. And does it really matter if an extreme outlier is left off of an otherwise nice looking graph? The answer is yes, absolutely.

At the heart of most unethical behavior in science, or misconduct, is the nature of truth, and the much revered “scientific method”. Never mind that there is no such thing, or that the way science is actually done is not even remotely similar to the romantic notions so often associated with this very human activity. Nevertheless, there is a very strong element of trust implicit in the scientific endeavor. When we read scientific papers we have no choice but to assume that the experiments were carried out as reported, and that the data presented is what was actually observed. The entire scientific enterprise depends on this kind of trust; if we had to try and sort through potentially fraudulent reports we would never get anywhere. For this reason, trust in science is extremely important, and one could say that, regardless of any mechanisms of self-correction that might exist in science, and the concomitant inevitability of discovery of fraud, we are almost compelled to trust scientists if we wish science to be done in a reasonable manner[a]. By the same token, those scientists who choose to betray this trust pollute not only their own research (and personal character), but all science. The enormity of this offense should not be underestimated. In particular, students should be aware of the fact that being a dishonest scientist is oxymoronic.

What is misconduct? A formal definition of scientific misconduct of the sort that would be necessary in legal action is an extremely difficult thing to produce. There have been many attempts, most of which are centered on the ideas of FFP: that is, fabrication, falsification and plagiarism. These are things that we can easily understand and, most of the time, identify when they arise. However, things can (and invariably do) get complicated in real-world situations, especially when people try to gauge intention, or when the available information is incomplete.

A definition of scientific misconduct, as it pertains to conducting experiments or observations to generate data, was given by Charles Babbage (Babbage 1830) that still has a great deal of relevance today. Babbage is perhaps best known for his “difference engine”, a mechanical computer he designed to take the drudgery out of performing certain calculations. He was a well-respected scientist who worked in many areas, Lucasian Professor of Mathematics at Cambridge, and a Fellow of the Royal Society. His litany of offences, which is frequently cited in discussions of scientific misconduct, was categorized into four classes.

Babbage’s taxonomy of fraud:

1.) Hoaxing is not fraud in the usual sense, in that it is generally intended as a joke, or to embarrass or attack individuals (or institutions). A hoax is more like a prank than an attempt to benefit the hoaxer in the scientific realm, and it is not often that a hoax remains undisclosed for a long time. Hoaxes are not generally intended to promote the perpetrator, who will prefer to remain anonymous (at least until the jig is up). A hoax is far more likely to involve someone dressing up as Bigfoot, or lowering a Frisbee outside a window on a fishing line, than fabricating I-V curves for electronic devices that have never been built.

A famous and amusing example is the so-called Sokal hoax (Sokal 1996). In 1996 a physicist from New York University submitted a paper to the journal “Social Text”. This journal publishes work in the field of “postmodernism” or cultural studies, an area in which there had been many critiques of science based on unscientific criteria: for example, the idea that scientific objectivity was a myth, and that certain cultural elements (e.g., “male privilege” or western societal norms) are determining factors in the underlying nature of scientific discovery. This attitude vexed many physicists, leading to the so-called “science wars” (Ross 1996), and was the impetus for Sokal’s hoax. The paper he submitted to Social Text was entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. This article was deliberately written in impenetrable language (as was often the case in articles in Social Text and other similar publications) but was in fact pure nonsense (as was also often the case in articles in Social Text and other similar publications). The paper was designed with the intention of impressing the editors, not only with the stature of the author (Sokal was then a respected physicist, and seems, at the time of writing some 16 years later, still to be one) but also by appealing to their ideological positions. Sokal revealed the hoax immediately after his paper was published; he never had any intention of allowing it to stand, but rather wanted to critique the journal. As one might expect, a long series of arguments arose from this hoax, although the main effect was to tempt more actual scientists into commenting on the “science wars” that were already occurring. In this sense Sokal may have done more harm than good with his hoax, as the inclusion of scientists in the “debate” served only to legitimize what had before been largely the province of people with no training in, or, frequently, understanding of, science.

 2.) Forging is just what it sounds like, simply inventing data and passing it off as real. Of course, one might do the same in a hoax, but the forger seeks not to fool his audience for a while, later to reveal the hoax for his or her own purposes, but to permanently perpetrate the deception. The forger will imitate data and make it look as real as possible, and then steadfastly stick to their claim that it is indeed genuine. Often such claims can stand up to scrutiny even once the forging has been exposed, since if it is done well the bogus data is in no way obviously so. This unfortunate fact means that we cannot know for sure how much forging is really going on. However, the forger does not have any new information and so cannot genuinely advance science. They may be able to appear to be doing so, by confirming theoretical predictions, for example, but such predictions are not always correct. Nevertheless, the forger must have a good knowledge of their field, and enough understanding of what is expected to be able to produce convincing data out of thin air. If one were to do this then, on rare occasions, it is even possible that one might guess at some reality, later proved, and really get away with it. However, the fleeting glory of a major discovery seems to be somewhat addictive to those so inclined, and some famous forgers have started out small, only to allow themselves to be tempted into such outrageous claims that from the outside it seems as though they wanted to get caught.

A well-known case in which exactly this happened is the “Schön affair”, which has been covered in detail in the excellent book “Plastic Fantastic” (Reich 2009). Hendrik Schön was a young and apparently brilliant physicist employed at Bell Labs in the early 2000s. His area of expertise was organic semiconductors, which held (and continue to hold) great promise for the production of cheap and small electronic devices. Schön published a large number of papers in the most high-profile journals (including many in Science and Nature) that purported to show organic crystals exhibiting remarkable electronic properties, such as superconductivity, the fractional quantum Hall effect, lasing, single-molecule switches and so on. At one point he was publishing almost one paper per week, which is impressive just from the point of view of being able to write at such a rate; most scientists will probably spend one or two months just writing a paper, never mind the time taken to actually get the data or design and build an experiment. Before the fraud was exposed, one professor at Princeton University[b], who had been asked to recommend Schön for a faculty position, refused to do so, and stated that he should be investigated for misconduct based solely on his extreme publication rate (Reich 2009).

Schön’s incredible success seemed to be predicated on his unique ability to grow high-quality crystals (he was described on one occasion as having “magic hands”). This skill eluded other researchers, and nobody was able to reproduce any of Schön’s results. As he piled spectacular discovery upon spectacular discovery, suspicion grew. Eventually it was noticed that in some of his published data Schön had used the same curves, right down to the noise, to represent completely distinct phenomena in completely different samples (which, in fact, had never really existed). After trying to claim that this was simply an accident, that he had just mislabeled some graphs and used the wrong data, more anomalies were revealed. The management at Bell Labs started an investigation and Schön continued to make excuses, stating (amazingly, considering the quality of the real scientists who worked at this world-leading institution) that he had not kept records in a notebook, and that he had erased his primary data to make space on his computer’s hard drive[c]!

None of these excuses were convincing, and indeed such poor research techniques should themselves probably be grounds for dismissal, even in the absence of fraud. Fraud was not absent, however; it was prevalent, and on a truly astounding scale. Schön had invented all of his data, wholesale, had never grown any high-quality crystals, and had lied to many of his colleagues, at Bell Labs and elsewhere. He was fired, and his PhD was revoked by the University of Konstanz in Germany for “dishonorable conduct”. There was no suggestion that the rather pedestrian research Schön had completed for his doctorate was fraudulent, but his actual fraud was so egregious that his university felt that he had dishonored not just himself and their institution, but science itself, and as such did not deserve his doctorate.

It seems obvious in retrospect that Schön could never have expected to get away with his forging indefinitely. Even if he had been a more careful forger, and had made up distinct data sets for all of his papers, the failure of others to reproduce his work would have eventually revealed his conduct. Even before he was exposed, pressure was mounting on Schön to explain to other scientists how he made his crystals, or to provide them with samples. He had numerous, and very well respected, co-authors who themselves were becoming nervous about the increasingly frustrated complaints from the other groups unable to replicate his experiments. It was only a matter of time before it all came out, regardless of the manifestly fake published data.

This unfortunate affair raises some interesting questions about misconduct in science. For example, Schön’s many co-authors seem to have failed in their responsibilities. Although they were found innocent of any actual complicity in the fraud, as co-authors on so many, and on such astounding, papers, it seems clear that they should have paid more attention. Some of the co-authors were highly accomplished scientists, while Schön himself was relatively junior, which only compounds the apparent lack of oversight. Indeed, it is likely that the pressure of working for such distinguished people, and at such a well-respected institution as Bell Labs, was what first tempted Schön into his shameful acts.

No doubt winning numerous prizes, being offered tenured positions at an Ivy League school (Princeton; not bad if you can get an offer) and generally being perceived as the new boy-wonder physicist played a role in the seemingly insane need to keep claiming ever more incredible results. It is hard to escape the conclusion that some sort of mental breakdown contributed to this folly. However, another pathological element was also likely at play, which is that Schön was sure that his speculations would eventually be borne out by other researchers.

If this had actually happened, and if he had not been a poor forger (or, perhaps, an overworked forger, with barely enough time to invent the required data), he might have enjoyed many years of continued prosperity. As it happens, some of the fake results published by Schön have in fact been shown to be roughly correct; sometimes the theorists guess right. Ultimately though, forging will usually be found out, and the more that is forged the faster this will happen. The only way to forge and get away with it is to know in advance what nature will do, and if you know that, the forging is redundant[d].

3.) Trimming is the practice of throwing away some data points that seem to be at odds with the rest of the measurements, or at least are not as close to the “right” answer as expected. In this case the misconduct is primarily designed to make the experimenter seem more competent, although a poor knowledge of statistics might mean that it has the opposite effect; outliers sometimes seem out of place, but, to those who have a good knowledge of statistics, their absence may be quite suspicious. Because it can seem so innocuous, trimming is probably much more common than forging. When he invents data from nothing, the forger has already committed himself to what he must surely know is an immoral and heinous action; whatever justification he might employ towards this action cannot shield him from such knowledge. The trimmer, on the other hand, might feel that his scientific integrity is unaffected, and that all he is doing is making his true data a little more presentable, so as to better explain his truth. A trimmer who believes this is mistaken; he may well be less offensive than the forger, but his actions are still unconscionable, still constitute a fraud, and may equally well pervert the scientific record, even when the essential facts of the measurement remain. For example, by trimming a data set to look “nicer” you might significantly alter the statistical significance of some slight effect. Indeed, in this way you can create a non-existent effect, or destroy evidence for a real one; without the truth of nature’s reference there is no way to know the difference.

The trimmer has a great advantage over the forger, insofar as he is not trying to second guess nature, but rather obtains the answer from nature in the form of his primary measurements, and then attempts to make his methodology in so doing appear to be “better” than it really was. The trimmed data, therefore, will likely not give a very different answer than the untrimmed set, but will seem to have been obtained with greater skill and precision.
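As a concrete, entirely synthetic illustration of that temptation, the numbers below are made up: ten “measurements” of a quantity whose expected value is 1.0, two of which sit awkwardly low. Quietly dropping the awkward points makes the result look far more precise and far closer to expectation, which is exactly the information the trimmer destroys; whether those points were instrumental glitches or the first hint of a real effect can no longer be asked.

```python
import numpy as np

# Synthetic example of the trimmer's temptation. Ten made-up measurements of a
# quantity "expected" to equal 1.0; two of them sit uncomfortably low.
data = np.array([0.8, 1.1, 0.9, 1.2, 1.0, 0.7, 1.3, -0.2, 1.05, -0.5])
trimmed = data[data > 0.0]                 # quietly discard the two awkward points

def summarise(x):
    """Return the mean and its standard error."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

for label, x in (("full data", data), ("trimmed", trimmed)):
    mean, sem = summarise(x)
    print(f"{label:10s}: mean = {mean:6.3f} +/- {sem:.3f}   (N = {len(x)})")
```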

There are no well-known examples of trimming because, by its very nature, it is hard to detect. Who will know if some few data points are omitted from an otherwise legitimate data set? What indicators might allow us to distinguish a careful experimenter from a carefree trimmer? Even Newton has been accused of trimming some of his observations of lunar orbits to obtain better agreement with his theory of gravitation (Westfall 1994). Objectively, even for the amoral, trimming is quite pointless when one considers that, if found out, the trimmer will suffer substantial damage to his reputation, whereas even if it is not found out, the advantage to that reputation is slight.

4.) Cooking refers to selecting data that agree with the hypothesis you are trying to prove and rejecting other data, which may be equally valid, but which don’t. In other words, cherry picking. This is similar to trimming inasmuch as it involves selecting from real data (i.e., not forged), but with no real justification for doing so, other than one’s own presupposition about what the “right” answer is. It differs from trimming in that the cook will take lots of data and then select those that serve his purpose, while the trimmer will simply make the best out of whatever data he has. This is lying, just as forging is, but with the limitation that nature herself has at least suggested which lies to tell.

Cooking also carries with it an ill-defined aspect of experimental science, that of the “art” of the experimentalist. It is certainly true to say that experimental physics (and no doubt other branches of science too) contains some element of artistry. Setting up a complicated apparatus and making it work is an underappreciated skill, but a crucial one. When taking data with a temperamental system it is routine to throw some of it out, but only if one can justify doing so in terms of the operation of the device. Almost all experiments include some tuning-up time in which data are neglected because of various experimental problems. However, once all of these problems have been ironed out and the data collection begins “for real”, one has to be very careful about excluding any measurements. If some obvious problems attend (a power supply failed, a necessary cable was unplugged, batboy was seen chewing on the beamline, whatever it may be) then there is no moral difficulty associated with ditching the resulting data. When this sort of thing happens (and it will), the best thing to do is throw out all of the data and start again.

The situation gets more complicated if an intermittent problem arises during an experiment that affects only some of the data. In that case you might want to select the “good” data and get rid of the “bad”. Unfortunately, all of the resulting data will then be “ugly”. This is not a good idea, because it borders on cooking. It is much better to throw away all potentially suspect data and start again. In data collection, as in life, even the appearance of impropriety can be a problem, regardless of whether propriety has in fact been maintained. It may be the case that there is an easy way to identify data points that are problematic (for example, the detector output voltage might be zero sometimes and normal at others), but there will remain the possibility that the “good” data are also affected by the underlying experimental problem leading to the “bad” data, but in a less obvious manner. The best thing to do in this situation will depend on many factors. It is not always possible to repeat experiments; you might have high confidence in some data because of the exact nature of your apparatus, and so on. Even so, “when in doubt, throw it out” is the safest policy.

Cooking is a little like forging, but with an insurance policy. That is, since the data is not completely made up there is a better chance that it does really correspond to nature, and therefore you will not be found out later because you are (sort of) measuring something real. This is not necessarily the case, however, and will depend on the skill of the cook. By selecting data from a large array it is usually possible to produce “support” for any conclusion one wants, and as a result the cook who seeks to prove an incorrect hypothesis loses his edge over the forger.

A well-known case illustrates that it is not so simple to decouple experimental methodology from cookery. Robert Millikan is well known for his oil drop experiment, in which he was able to obtain a very accurate value for the charge on the electron, work for which he was awarded the Nobel Prize. This work has been the subject of some controversy, however, since the historian of science Gerald Holton [Holton 1978] went through Millikan’s notebooks and revealed that not all of the available data had been used, even though Millikan specifically said that it had. Since then there have been numerous arguments about whether the selection of data was simply the shrewd action of an expert experimentalist, or one of dishonest cooking. As discussed by numerous authors [e.g., Segerstråle, 1995], it is neither one nor the other, and the situation is more nuanced. The reality is that his answer was remarkably accurate, as we now know. The question then is: did Millikan know what to expect? If he did not, then accusations of cooking seem unfounded, since he would have had to know what he was trying to “demonstrate”. If he got the right answer without knowing it in advance, he must have been doing good science, right? WRONG! If Millikan was cooking (or even doing a little trimming) and he somehow got the right answer, this does not in any way mitigate the action. Since we cannot say whether he really did do any cooking or not, one might believe that getting the right answer implies that he was being honest; as it turns out, he would have obtained an even more accurate answer if he had used all of his data (Judson 2004).

Furthermore, the question of whether Millikan already knew the “right” answer is meaningless. The right answer is the one we get from an experiment, and we only know it to the extent that we can trust the experiment. An army of theoreticians can be (have been, perhaps now are) wrong. Dirac predicted that the electron g factor would be exactly 2, and it almost is. Knowing only this, the cook or trimmer might be sorely tempted to nudge the data to be exactly 2.0000 and not 2.0023, but that would be a terrible mistake, one that would miss out on a fantastic discovery. One can only hope that somewhere in the world there is at least one cook who allowed a Nobel Prize (or some major discovery) to slip through his dishonest fingers, and cannot even tell anyone how close he came. Did Millikan knowingly select data to make his measurements look more accurate? There is no way to know for sure.

Pathological science is a term coined by Irving Langmuir, the well-known physicist. He defined this as science in which the observed effects are barely detectable, and the signals cannot be increased, but are nevertheless claimed to be measured with great accuracy; pathological science also frequently involves unusual theories that seem to contradict what is known (Langmuir 1968). However, pathological science is not usually an act of deliberate fraud, but rather one of self-delusion. This is why the effect in question is always at the very limit of what can be detected, for in this case all kinds of mechanisms can be used (even subconsciously) to convince oneself that the effect really is real. In this context, then, it is worth asking: does the cook necessarily know what he is doing? That is, when one wishes to believe something so very strongly, perhaps it becomes possible to fool one’s own brain! This kind of self-delusion is more common than you might think, and happens in everyday life all the time. Although we cannot directly choose what we believe is true, when we don’t know one way or the other, our psychology makes it easy for us to accept the explanation that causes the least internal conflict (the discomfort of such conflict being known as cognitive dissonance).

When a researcher is so deluded as to engage in pathological science, it is difficult to categorize this activity as one of misconduct. The forger has to know what he is doing. The trimmer or cook generally tries to make his data look the way he wants, but if he does so without real justification then it is wrong. By making justifications that seem reasonable but are not, one could conceivably fool oneself into thinking that there was nothing improper about cooking or trimming. Certainly, this will sometimes really be the case, which only makes it easier to come up with such justifications.

In many regards the pathological scientist is not so different from one who is simply wrong. The most famous example of this might be the cold fusion debacle (Seife 2009). Pons and Fleischmann claimed to see evidence for room temperature fusion reactions and amazed the world with their press conferences (not so much with their data). Later their fusion claims turned out to be false, as proved by Kelvin Lynn, Mario Gai and others (Gai et al 1989). However, the effect they said they saw was also observed by some other researchers, and as a result many reasons were promulgated as to why one experiment might see it and another might not. Then, when it was pointed out that there ought to be neutron emission if there was fusion, and no neutrons were observed, it became necessary to modify the theory so as to make the neutrons disappear. This was classic pathological science, hitting just about every symptom laid out by Langmuir.

Plagiarism was not explicitly discussed by Babbage, but it does feature prominently in modern definitions of misconduct. Directly copying other people’s work is relatively rare for professional scientists, perhaps because it is so likely to be discovered. There have been some cases in which people have taken papers published in relatively obscure journals and submitted them, almost verbatim, to other, even more obscure, journals. Since only relatively unknown work is susceptible to this sort of plagiarism it does little damage to the scientific record, but it is no less of an egregious act for that. Certainly the psychology of the “scientist” who would engage in such actions is no less damaged than that of a Schön.

In one case a plagiarist, Andrzej Jendryczko, tried to pass off the work of others as his own by translating it (from English, mostly) into Polish, and then publishing direct copies in specialized Polish medical journals under his own name (e.g., Judson 2004). This may have seemed like a safe strategy, but a well-read Polish-American physician had no trouble tracking down all the offending papers using the internet. Indeed, wholesale plagiarism is now very easy to uncover via online databases and a multitude of specialized software applications. For the student, plagiarism should seem like a very risky business indeed. With minimal effort it can be found out, and no matter how clever the potential plagiarist might be, Google is usually able to do better with sheer force of computing power and some very sophisticated search algorithms. As is often the case, this sort of cheating does not even pass a rudimentary cost-benefit analysis[e]. It is another inverse of Pascal’s wager (the idea that it is better to believe in God than not, just because the eternity of paradise always beats any temporary secular position), inasmuch as the actual gain attending ripping off some (necessarily) obscure article found online is virtually nil, whereas exposure as a fraud for doing this will contaminate everything you have ever done, or will ever do, in science. How can this ever be worth even considering?
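To give a flavour of why verbatim copying is so easy to catch, here is a toy version of the kind of overlap measure (word n-gram Jaccard similarity) that sits at the bottom of many detection tools; real systems and search engines are, of course, far more sophisticated, and the example texts below are invented.

```python
import re

# Toy plagiarism check: Jaccard similarity between the sets of word 5-grams in
# two texts. Verbatim or lightly edited copying gives an overlap close to 1,
# whereas independently written passages on the same topic score near 0.

def word_ngrams(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=5):
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    return len(ga & gb) / max(1, len(ga | gb))

original = ("We report a measurement of the ground-state hyperfine interval "
            "of positronium using saturated absorption spectroscopy.")
suspect = ("In this work we report a measurement of the ground-state hyperfine "
           "interval of positronium using saturated absorption spectroscopy.")
print(f"n-gram overlap: {overlap(original, suspect):.2f}")   # high overlap flags the copy
```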

Plagiarism is not as simple as copying the work of others; providing incomplete references in a paper could be construed as a form of plagiarism, insofar as one is not providing the appropriate background, thereby perhaps implying more originality than really exists. References are a very important part of scientific publishing, and not just because it is important to give due respect to the work that you might be building upon. A well referenced paper not only shows that you have a good understanding and knowledge of your field, but will also make it possible for the reader to properly follow the research trail, which puts everything in context and helps enormously in understanding the work[f].

Stealing the ideas of others, if not their words, is also, of course, a form of plagiarism. This is harder to prove, which may be why it is something most often discussed in the context of grant proposals. This creates a rather unfortunate set of circumstances in which a researcher finds himself having to submit his very best ideas to some agency (in the hope of obtaining funding), which are promptly sent directly to his most immediate competitors for evaluation! After all, it is your competition who are best placed to evaluate your work. In most cases, if an obvious conflict exists, it is possible to specify individuals who should not be consulted as referees, although this is more common in the publication of papers than it is in proposal reviews. For researchers in niche fields, in which there may not be as much rivalry, this is not going to be a common problem. For researchers in direct competition, however, things might not be so nice. There have in fact been some well-publicized examples of this sort of plagiarism, but it is probably not very common, because grant proposals don’t often contain ideas that have not been discussed, to some degree, in the literature, and in those rare cases when this isn’t so, it is probably going to be obvious if such an idea is purloined. Also, let us not forget, most scientists are really not so unscrupulous. Thus, for this to happen you’d need an unlikely confluence of an unusually good idea sent to an abnormally unprincipled referee who happened to be in just the right position to make use of it.

 

References & Further Reading  

Misconduct in science is a vast area of study, and the brief synopsis we have given here is just the tip of the iceberg. The use of Babbage’s taxonomy is commonplace, and in-depth discussions of every aspect of it can be found in many books. The following is a small selection. The book by Judson is particularly recommended, as it is fairly recent and goes into just the right amount of detail in some important case studies.

https://retractionwatch.com/

Judson H. F. (2004). The Great Betrayal: Fraud in Science, Houghton Mifflin Harcourt.

Alfredo K and Hart H, “The University and the Responsible Conduct of Research: Who is Responsible for What?” Science and Engineering Ethics, (2010).

Sterken, C., “Writing a Scientific Paper III. Ethical Aspects”, EAS Publications Series 50, 173 (2011)

Reich, E. S. (2009). Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, Palgrave Macmillan.

Broad, W. and Wade, N. (1983). Betrayers of the Truth: Fraud and deceit in the halls of science, Simon & Schuster.

Babbage C. (1830) Reflections on the Decline of Science in England, Augustus M. Kelley, New York (1970).

Holton G. (1978) Subelectrons, presuppositions and the Millikan-Ehrenhaft dispute, In: Holton G., ed., The Scientific Imagination, Cambridge University Press, Cambridge, UK.

Langmuir I. (1989) Pathological science, Physics Today 42: 36–48. Reprinted from the original in General Electric Research and Development Center Report 86-C-035, April 1968

Ross, Andrew, ed. (1996). Science Wars. Duke University Press.

Segerstråle, U. (1995). Good to the last drop? Millikan stories as “canned” pedagogy, Science and Engineering Ethics 1, 197.

Sokal, A. D. and Bricmont, J. (1998). Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science. Picador USA: New York.

Westfall, R. S. (1994). The Life of Isaac Newton. Cambridge University Press.

Seife, C. (2009). Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking,  Penguin Books.

Gai, M. et al., (1989). Upper limits on neutron and gamma ray emission from cold fusion, Nature 340, 29.

 

Footnotes

[a] What is this reasonable manner? We just mean that if one cannot trust scientific reports to be honest representations of real work undertaken in good faith, then standing on the shoulders of our colleagues (who may or may not be giants) becomes pointless, everything has to be independently verified, and the whole scientific endeavor becomes exponentially more difficult.

[b] Reich’s book does not name the professor in question, but clearly he or she had a good grasp on how much can be done, even by a highly motivated genius.

[c] This is a particularly ludicrous claim since to a scientist experimental data is a highly valuable commodity, whereas disc space is trivially available: the idea that one would delete primary data just to make space on a hard drive is like burning down one’s house in order to have a bigger garden.

[d] Another way to increase your chances of getting away with forgery is to fake data that nobody cares about. However, this obviously has an even less attractive cost-benefit ratio.

[e] We obviously do not mean to suggest that some clever form of plagiarism that can escape detection does pass a cost-benefit analysis, and is therefore a good idea: the uncounted cost in any fraud, even a trivial one, is the absolute destruction of one’s scientific integrity. We just mean that this sort of thing, which is never worthwhile, is even more foolish if there isn’t some actual (albeit temporary) advantage.

[f] On a more mundane level, a referee who has not been cited, but should have been, is less likely to look favorably upon your paper than he or she might otherwise have done. This also applies to grant proposals, so it is in your own interests to make sure your references are correct and proper.

AMOPP Seminar: Levitated Quantum Nanophotonics

On Wednesday 1st November we had Professor Lukas Novotny from the Photonics Laboratory at ETH Zürich give an insightful AMOPP seminar. His expertise spans many areas, including optical antennas, near-field optics, nonlinear plasmonics and more. In this talk, however, he focused on nanoparticle trapping and cooling. The abstract for his talk can be found below, and he agreed to provide a copy of his slides, which you can download here.

Abstract:

Levitated Quantum Nanophotonics
Vijay Jain [a], Martin Frimmer [a], Erik Hebestreit [a], Jan Gieseler[a], Romain Quidant [b], Christoph Dellago [c], and Lukas Novotny [a]
a) ETH Zurich, Photonics Laboratory, 8093 Zurich, Switzerland.
b) ICFO, Mediterranean Technology Park, 08860 Castelldefels, Spain.
c) University of Vienna, Faculty of Physics, 1090 Vienna, Austria.

I discuss our experiments with optically levitated nanoparticles in ultrahigh vacuum. Using active parametric feedback we cool the particle’s center-of-mass temperature to T = 100 μK and reach mean quantum occupation numbers of n = 15. I show that mechanical quality factors of Q = 10^9 can be reached and that damping is dominated by photon recoil heating. The vacuum-trapped nanoparticle forms an ideal model system for studying non-equilibrium processes, nonlinear interactions, and ultrasmall forces.

AMOPP Seminar: Towards endoscopic magnetic field sensors for biomedical applications

On Wednesday 25th of October, we had our weekly AMOPP seminar given by Dr. Arne Wickenbrock from the Budker group at the Helmholtz Institute of Johannes Gutenberg-University in Mainz, Germany. His research spans a wide range of fields, including dark matter and dark energy constituents (GNOME, CASPEr), zero- and ultralow-field nuclear magnetic resonance (ZULF-NMR) and many more. This time, however, his talk focused on using nitrogen-vacancy (NV) centers in diamond as a means of detecting small magnetic fields, in the hope of using this as a medical diagnostic technique in the near future. The abstract for his talk can be found below.

Abstract:

Towards endoscopic magnetic field sensors for biomedical applications

Arne Wickenbrock [1,2], Georgios Chatzidrosos [1], Huijie Zheng [1], Lykourgos Bougas [1], Dmitry Budker [1,2,3]

1) Johannes Gutenberg-University, Mainz, Germany
2) Helmholtz Institut Mainz, Mainz, Germany
3) Department of Physics, University of California, Berkeley, CA 94720-7300, USA
 
We propose and report on the progress towards a miniaturized endoscopic magnetic field sensor based on color center ensembles in diamond. The unique design of the sensor enables spatially resolved in-vivo measurements of static and oscillating magnetic fields with a broad bandwidth and high sensitivity. An endoscopic magnetometer could boost the size of magnetic signals of the heart, the brain or other organs due to the reduced distance to the underlying current densities. The high bandwidth of the device enables spatially resolved methods for tissue discrimination, such as nuclear magnetic resonance or eddy-current detection, in vivo.
An endoscopic sensor motivates two simultaneous approaches. Firstly, we present a highly sensitive magnetometer that measures magnetic fields by monitoring cavity-enhanced absorption on the singlet transition of the negatively charged nitrogen-vacancy (NV) center in diamond under radio-frequency irradiation and optical pumping with a green laser. We achieve shot-noise-limited performance with sensitivities better than 30 pT/√Hz [1].
Secondly, the rapidly changing environment in the human body, as well as exposure limits for electromagnetic radiation, motivate the use of a microwave-free magnetometer. We demonstrated such a device based on a narrow magnetic feature due to the ground-state level anticrossing (GSLAC) of the NV center at a background field of 102 mT, which allows magnetic fields to be measured without microwaves [2]. Additionally, we plan to combine the NV-center magnetometer with a much more sensitive alkali vapor cell magnetometer to build a novel brain-machine interface operating at room temperature and in an unshielded environment.
 
[1] G. Chatzidrosos, A. Wickenbrock, L. Bougas, N. Leefer, T. Wu, K. Jensen, Y. Dumeige, and D. Budker, Miniature cavity-enhanced diamond magnetometer, in preparation, 2017.
[2] A. Wickenbrock, H. Zheng, L. Bougas, N. Leefer, S. Afach, A. Jarmola, V. M. Acosta, and D. Budker, Microwave-free magnetometry with nitrogen-vacancy centers in diamond, Applied Physics Letters 109, 053505 (2016)
[3] H. Zheng, G. Chatzidrosos, A. Wickenbrock, L. Bougas, R. Lazda, A. Berzins, F. H.Gahbauer, M. Auzinsh, R. Ferber, and D. Budker, Level anticrossing magnetometry with color centers in diamond, Proc. of SPIE Vol. 10119 101190X-1, 2017.

AMOPP Seminar: Taming polar molecules for quantum experiments

On Wednesday 11th of October our weekly AMOPP seminar was given by Dr. Martin Zeppenfeld from the Rempe group at the Max Planck Institute for Quantum Optics in Garching, Germany. His talk focused on the experiments on the manipulation of cold polar molecules that Dr. Zeppenfeld leads. You can visit their website to find out more about their research, and the abstract for his talk is given below.

Abstract:

Polar molecules offer fascinating opportunities for quantum experiments at cold and ultracold temperatures. For example, chemistry at low temperatures features new possibilities such as controlling chemical reactions via electric and magnetic fields or observing reactions based on tunneling through a reaction barrier. Precision measurements on molecules provide insight into fundamental physics, allowing investigation of physics beyond the standard model. Attaining sufficient control over molecules provides opportunities for quantum simulations and quantum information processing.

In my talk I will present two aspects of our work on polar molecules. First, I will present our toolbox of techniques to produce molecule ensembles at very low temperatures. This includes centrifuge deceleration of cryogenic-buffer-gas cooled molecular beams as well as optoelectrical Sisyphus cooling of formaldehyde to sub-millikelvin temperatures. Second, I will present our progress towards quantum experiments coupling polar molecules to Rydberg atoms. As a first step, we have investigated electric field controlled collisions between polar molecules.

AMOPP Seminar: Optical spectroscopy for nuclear and atomic science at JYFL, Finland

On Wednesday 4th of October, we had the privilege of having Professor Iain D. Moore from the University of Jyväskylä as an invited speaker for one of our AMOPP seminars. The seminar focused mainly on the work they have been doing at the various accelerator facilities at JYFL and had a good mix of nuclear and atomic physics content.

Professor Iain D. Moore kindly agreed to provide a copy of his presentation slides which you can download here.

Abstract:

Optical spectroscopy for nuclear and atomic science at JYFL, Finland

Iain D. Moore, University of Jyväskylä

 

High-resolution optical measurements of the atomic level structure readily yield fundamental and model-independent data on nuclear ground and isomeric states, namely changes in the size and shape of the nucleus, as well as the nuclear spin and electromagnetic moments [1]. Laser spectroscopy combined with on-line isotope separators and novel ion manipulation techniques provides the only mechanism for such studies in exotic nuclear systems.

Internationally, there are a myriad of tools in use; however, these are traditionally variants of the two main workhorses in the field: collinear laser spectroscopy and resonance ionization spectroscopy. Following a short overview of the Accelerator Laboratory at the University of Jyväskylä, I will briefly present both techniques and their use in accessing the heavy-element region of the nuclear landscape, for which optical studies have so far yielded rather scarce information. This reflects a combination of the difficulty of producing such elements (low production cross sections) and the lack of stable isotopes (and thus the few optical transitions available in the literature). Indeed, this past year has seen a number of exciting developments, including optical studies of exotic atoms produced at the level of one atom at a time [2], and high-resolution spectroscopy in supersonic gas expansions [3].

Recently, we have initiated a new program on the actinide elements in collaboration with the University of Mainz. I will summarize the current status of the work which includes collinear laser spectroscopy on Pu, the heaviest element attempted with this particular technique [4]. Our focus has recently turned to the study of the lowest-lying isomeric state in the nuclear chart, 229Th.  Almost 40 years of research has been invested into efforts to observe the isomeric transition which, if found, may be directly accessed by lasers. In 2016, the community was given a tremendous boost with the unambiguous identification of the state by a group in Munich, providing a stepping stone towards a future realization of a “nuclear clock” [5].

[1] P. Campbell, I.D. Moore and M.R. Pearson, Progress in Part. and Nucl. Phys. 86 (2016) 127.

[2] M. Laatiaoui et al., Nature 538 (2016) 495.

[3] R. Ferrer et al., Nature Communications 8 (2017) 14520.

[4] A. Voss et al., Phys. Rev. A 95 (2017) 032506.

[5] L. von der Wense et al., Nature 533 (2016) 47.

12th International Workshop on Positron and Positronium Chemistry

Our research group was recently represented by Dr. David Cassidy at the 12th International Workshop on Positron and Positronium Chemistry (PPC12), which took place between the 28th of August and the 1st of September in Lublin, Poland. The main focus of this meeting was the interaction of positrons and positronium with materials and atoms, including polymers, soft matter, surface states and more.

David presented our recent advances in producing a beam of Rydberg positronium atoms (PRL 117, 073202 & PRA 95, 053409) and the prospects of using such techniques to form the yet-unobserved positron-atom bound states (PRA 93, 052712).

You can have a look at the abstracts on the conference website. We are grateful to the organizers for this opportunity and for their hard work.

Eberhard Widmann, In-beam hyperfine spectroscopy of (anti)hydrogen


On Wednesday 22nd of February, we had the pleasure of hosting Professor Eberhard Widmann as one of our weekly invited speakers for the AMOPP seminars. His research in the ASACUSA (Atomic Spectroscopy And Collisions Using Slow Antiprotons) collaboration is closely related to the kind of antimatter experiments that we do at UCL, except that they deal with antihydrogen atoms, which they produce using positrons and antiprotons from the Antiproton Decelerator (AD) at CERN. The main focus of his talk was the prospect of measuring the hyperfine structure of antihydrogen.

He kindly agreed to provide a copy of his slides, which you can download here.

Abstract:

In-beam hyperfine spectroscopy of (anti)hydrogen

Prof. Dr. Eberhard Widmann, Director, Stefan Meyer Institute for Subatomic Physics, Austrian Academy of Sciences Boltzmanngasse 3, A-1090 Wien

The ground-state hyperfine structure (GS-HFS) of hydrogen is known from the hydrogen maser to a relative precision of 10^-12. It is of great interest to measure the same quantity for its antimatter counterpart, antihydrogen, to test the fundamental CPT symmetry, which states that all particles and antiparticles have exactly equal or exactly opposite properties. Since CPT is strictly conserved in the Standard Model of particle physics, a violation, if found, would point directly to theories beyond this framework. The application of the maser technique requires the confinement of the atoms in a matter box for 1000 seconds and is currently not applicable to antihydrogen. Therefore, the ASACUSA collaboration at the Antiproton Decelerator of CERN has built a Rabi-type beam spectroscopy setup for a measurement of the GS-HFS.

With the initial aim of characterizing the setup devised to measure the GS-HFS and to evaluate its potential, a beam of cold, polarized, monoatomic hydrogen was built and used together with the microwave cavity and sextupole magnet designed for the antihydrogen experiment. The (F,M)=(1,0) to (0,0) transition was measured to a precision of several ppb [1], more than a factor 10 better than in the previous measurement using a hydrogen beam. This result shows that the apparatus developed is capable of making a precise measurement of the GS-HFS of antihydrogen provided a beam of similar characteristics (velocity, polarization, quantum state) becomes available.

In a recent publication on the non-minimal Standard Model Extension (SME), describing possible violations of Lorentz and CPT invariance, Kostelecky and Vargas [2] conclude that the in-beam hyperfine measurements of hydrogen alone can be used to constrain certain coefficients of their model, which have never been measured before. The status and prospects of in-beam measurements of hydrogen and antihydrogen will be presented.

[1] M. Diermaier et al., arXiv:1610.06392

[2] V.A. Kostelecky and A.J. Vargas, Physical Review D 92, 056002 (2015).