The microwave spectroscopy of the Ps n=2 fine structure experiment is nearly over. Can you guess which transition the graph below corresponds to?
— PsSpectroscopyUCL (@UclPs) July 12, 2019
Atoms and molecules in high Rydberg states can possess large electric dipole moments, which can be exploited to control their motion using external electric fields. Ever since the demonstration of the deflection of Rydberg Kr atoms using electric fields (https://doi.org/10.1088/0953-4075/34/3/319), Rydberg atom optics has been widely investigated. Techniques such as decelerators [EPJ Techniques and Instrumentation (2016) 3:2], mirrors [Phys. Rev. Lett. 97, 033002 (2006)], and traps [Phys. Rev. Lett. 100, 043001 (2008)] have been developed; however, applications to positronium (Ps) are relatively recent. Only a handful of Ps atom optics experiments have been performed to date [Phys. Rev. Lett. 117, 073202 (2016), Phys. Rev. A 95, 053409 (2017), Phys. Rev. Lett. 119, 053201 (2017)]. Several Ps experiments could benefit from a cold Ps source but, owing to the low mass of Ps, speeds are often on the order of 100 km/s when Ps is produced in targets like mesoporous silica. Moreover, the angular distribution of Ps emitted from mesoporous silica is broad, so if a well-collimated beam is required, a significant loss in Ps number is unavoidable. Time-varying electric fields generated by a multi-ring structure can offer a solution to these limitations by capturing more of the emitted Ps and allowing manipulation via their electric dipole moments. Recently, we have developed such a multi-ring structure and used it to guide Rydberg Ps atoms with inhomogeneous electric fields; in principle, it can also be used to decelerate and trap Ps.
The experimental set-up of the target and electrostatic guide, including the excitation lasers and the gamma-ray detectors, is shown in Figure 1. The Ps-producing target is labelled T and the 11 electrodes of the guide are labelled 1, 2, 3, …, 11. Ground-state Ps atoms, produced from the silica target, are excited to n=13 Rydberg levels in a two-step process by the UV (1S → 2P) and IR (2P → nD/nS) lasers in the region between T and electrode 1 (E1). Once excited to Rydberg states, Ps atoms have lifetimes on the order of microseconds, significantly longer than the triplet ground-state lifetime of 142 ns. A 500 V/cm electric field, generated by biasing T and E1, Stark-splits the Rydberg spectrum [Phys. Rev. Lett. 114, 173001 (2015)], enabling us to select specific states in the Stark manifold by tuning the IR laser wavelength. States selected with a positive Stark shift (shorter wavelengths) are called low-field seekers (LFS), because they move toward regions of low field. States selected with a negative Stark shift (longer wavelengths) are called high-field seekers (HFS), because they move toward regions of high field. Any atoms that traverse the guide, LFS or HFS, can then be detected by detectors D2, D3, D4, and D5.
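As a rough illustration of the state selection, the linear Stark shift of an |n, k⟩ state can be estimated as ΔE ≈ (3/2)·n·k·e·a_Ps·F, where a_Ps = 2a₀ is the Ps analogue of the Bohr radius (the reduced mass is mₑ/2). A minimal sketch, using standard constants and the usual hydrogenic formula rescaled for Ps:

```python
# Sketch: linear Stark shifts of Ps Rydberg states (hydrogenic formula
# rescaled for Ps, whose "Bohr radius" is 2*a0 because the reduced
# mass is m_e/2). dE = (3/2) * n * k * e * a_Ps * F.
E_CHARGE = 1.602176634e-19    # C
A0 = 5.29177210903e-11        # m, Bohr radius
H_PLANCK = 6.62607015e-34     # J s
A_PS = 2 * A0                 # Ps "Bohr radius"

def stark_shift_hz(n, k, field_v_per_m):
    """Linear Stark shift (Hz) of the Ps |n, k> Stark state."""
    return 1.5 * n * k * E_CHARGE * A_PS * field_v_per_m / H_PLANCK

# Outermost states of the n = 13 manifold in a 500 V/cm field:
field = 500 * 100  # V/cm -> V/m
for k in (-12, 12):
    print(f"k = {k:+d}: {stark_shift_hz(13, k, field) / 1e9:+.0f} GHz")
```

By this estimate the outermost n = 13 states sit roughly ±300 GHz from the unshifted line at 500 V/cm, which is why tuning the IR laser wavelength picks out LFS or HFS states.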
According to our simulations (see Figure 2), when the guide is not in operation (all electrodes grounded at 0 V), the diverging cloud of Ps atoms from the target experiences no external forces, as expected; most atoms annihilate when they collide with the guide electrodes, and only a few forward-collimated atoms reach the end of the guide. Applying a voltage to alternate electrodes (3, 5, 7, etc.) turns the guide on. In this case the HFS states are deflected toward regions of high electric field (i.e. away from the axis of the guide), while the LFS states are confined about the axis and transported along the length of the guide. These LFS atoms diverge after exiting the guide and some annihilate in collisions with the surrounding vacuum chamber. Detectors D2, D3, D4, and D5, as shown in Figure 1, record these annihilation events as time-of-flight (TOF) spectra, the results of which are shown in Figure 3 below.
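The guiding mechanism can be caricatured with a one-dimensional toy model (not the group's actual trajectory simulation): assume the field magnitude grows linearly off-axis, so an LFS feels a constant-magnitude force toward the axis while an HFS is pushed outward onto the electrodes. The dipole scale, field gradient, and guide radius below are illustrative assumptions only:

```python
import math

# Toy 1D model (not the group's trajectory simulation): the guide field
# magnitude is assumed to rise linearly off-axis, |F(r)| = G * r, so a
# low-field seeker (sign = +1) feels a force toward the axis and a
# high-field seeker (sign = -1) is pushed onto the electrodes.
M_PS = 2 * 9.1093837015e-31   # kg, Ps mass
MU = 4e-27                    # J/(V/m): assumed dipole scale (~outer n = 13 Stark state)
G = 2e5 / 5e-3                # (V/m)/m: assumed ~2 kV/cm at an assumed 5 mm radius

def stays_in_guide(sign, r0=1e-3, dt=1e-9, steps=4000, r_wall=5e-3):
    """Semi-implicit Euler integration of the radial motion; returns
    True if the atom is still inside the guide radius at the end."""
    r, v = r0, 0.0
    for _ in range(steps):
        a = -sign * math.copysign(MU * G / M_PS, r)  # toward/away from axis
        v += a * dt
        r += v * dt
        if abs(r) >= r_wall:
            return False      # lost on an electrode
    return True

print("LFS guided:", stays_in_guide(+1))   # oscillates about the axis
print("HFS guided:", stays_in_guide(-1))   # expelled toward high field
```

The same qualitative behaviour, LFS confined and HFS rejected, is what the detectors see at the end of the real guide.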
Neither the LFS nor the HFS states are detected at the end of the guide when the guide is off, but an increased count rate is seen when the guide is on and the IR laser is tuned to excite atoms to LFS states.
To implement this device as a trap, we only have to reconfigure the voltages applied to the electrodes. For guiding, the odd-numbered electrodes have a voltage applied (-4 kV) while the even-numbered electrodes are kept at ground potential (0 V). If a positive voltage of order +1 kV is applied to E2 and E4 while E3 is at a potential of -4 kV, a region of electric-field minimum is created at the centre of E3 (the voltages applied to the electrodes depend on the choice of n because of the field-ionisation limit). Once the atoms have entered the trap, the LFS atoms should be confined in this region until the voltage applied to E4 is lowered, opening the trap gate so the atoms can be guided towards the detectors. If atoms are trapped and guided out, then a TOF spectrum like that in Figure 3(a) is expected, with the peak in events delayed by a time consistent with the trapping duration.
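The reconfiguration amounts to changing a handful of electrode voltages. A schematic sketch of the two configurations, with the voltages quoted above (the dictionary representation and gate function are purely illustrative):

```python
# Illustrative voltage maps (volts) for the 11 guide electrodes; the
# exact values depend on the chosen n via the field-ionisation limit.
guide = {e: (-4000 if e % 2 else 0) for e in range(1, 12)}   # odd at -4 kV, even grounded

trap = dict(guide)
trap.update({2: +1000, 4: +1000})   # E3 stays at -4 kV: field minimum at its centre

def open_gate(config, gate_electrode=4, voltage=0):
    """Lower the gate electrode to release trapped LFS atoms."""
    released = dict(config)
    released[gate_electrode] = voltage
    return released

print(trap[3], trap[4], open_gate(trap)[4])   # -4000 1000 0
```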
Experiments involving positronium (Ps), such as laser excitation for Rydberg state production, precision spectroscopy, and positronium chemistry, can benefit from a slow Ps source. The technical word for slow is cold. Because the mass of positronium is only twice the mass of an electron, even room-temperature Ps has speeds on the order of 100 km/s. In comparison, atomic beams produced from supersonic jets have speeds of roughly 2 km/s. Atom optics techniques, such as decelerating the Ps with electric fields, may be possible, but at the cost of a loss in intensity. An ideal alternative would be to fabricate a target that intrinsically produces slow positronium, and does so efficiently.
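The quoted speeds follow directly from the small Ps mass. A quick check using only the kinetic-energy relation v = √(2E/m) with m = 2mₑ:

```python
import math

M_PS = 2 * 9.1093837015e-31   # kg, positronium mass
K_B = 1.380649e-23            # J/K
EV = 1.602176634e-19          # J per eV

def speed_km_s(ke_ev):
    """Speed of a Ps atom with kinetic energy ke_ev (in eV)."""
    return math.sqrt(2 * ke_ev * EV / M_PS) / 1e3

ke_room = 1.5 * K_B * 300 / EV          # mean thermal KE at 300 K, ~39 meV
print(f"300 K Ps:  {speed_km_s(ke_room):.0f} km/s")   # order 100 km/s
print(f"50 meV Ps: {speed_km_s(0.05):.0f} km/s")
```

Even 50 meV Ps still moves at nearly 100 km/s, which is why "cold" for positronium means something very different than for ordinary atomic beams.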
In most of our experiments, Ps is produced from a mesoporous silica target grown on a silicon substrate. Ps, initially with an energy of about 1 eV, is emitted from the same side as the positrons enter the converter (“reflection geometry” production), and one can generally obtain Ps with a final energy of approximately 50 meV. Once produced inside granulated powders, such as silica or magnesium oxide (MgO), Ps loses energy through collisions with the surrounding internal surfaces before being emitted into vacuum. Generally, more collisions mean greater energy loss, and therefore colder Ps. A silica target was fabricated so that the Ps traverses the whole thickness of the target and is emitted from the opposite side after making the maximum number of collisions with the internal surfaces (transmission geometry). However, the internal spacing of this target was too large to efficiently cool Ps below 200 meV (~200 km/s), which is still hotter than Ps emitted from mesoporous silica. Ps can also be produced from MgO, albeit with a slightly higher initial energy of around 4 eV. Previous studies have indicated that this 4 eV is reduced to around 300 meV in a 6 μm thick MgO layer. So, that gave us an idea: if we make a thicker MgO layer to increase collisional cooling, Ps atoms could be produced with energies lower than 50 meV (Positronium emission from MgO smoke nanocrystals).
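The cooling intuition can be captured by a toy model in which each internal collision removes a fixed fraction of the Ps kinetic energy until a thermal floor is reached. The loss fraction and collision counts below are purely illustrative guesses, chosen so that 4 eV Ps lands near the ~300 meV reported for the 6 μm layer:

```python
# Toy collisional-cooling model: each wall collision removes an assumed
# fixed fraction of the Ps kinetic energy, down to a thermal floor.
# loss_frac, floor_ev and the collision counts are illustrative only.
def cooled_energy_ev(e0_ev, n_collisions, loss_frac=0.02, floor_ev=0.039):
    energy = e0_ev
    for _ in range(n_collisions):
        energy = max(floor_ev, energy * (1 - loss_frac))
    return energy

# 4 eV Ps after ~130 such collisions is near 300 meV; many more
# collisions would be needed to approach the thermal floor.
print(round(cooled_energy_ev(4.0, 130), 2))   # 0.29
```

The hope behind a thicker layer is simply to push the collision count far up this exponential-decay curve.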
To make the MgO target, we set fire to a strip of magnesium ribbon and collected the smoke on a suitable substrate (tinted goggles and a fume cupboard are necessary, as you might imagine). This procedure produces perfect cubic crystals, in a 30 μm MgO layer, with a wide size distribution, as is evident from the SEM image above. The substrate of choice was a 50 nm silicon nitride film, as it allows us to implant the positrons into the SiN side, producing positronium at the SiN–MgO boundary, which is then emitted into vacuum from the opposite side in transmission geometry. In this configuration, Ps atoms travel through the full 30 μm of MgO, making the maximum number of collisions. For comparison, we can rotate the target by 180° so the positrons hit the MgO side (“reflection geometry”) and Ps makes the fewest collisions, since the positrons are implanted to a relatively shallow depth of only 100 nm. We expect colder Ps in the former case than in the latter, owing to the distance travelled by the Ps (30 μm vs 100 nm). These two orientations are shown below, including the positron pulse and the excitation lasers. VT and VG refer to the voltages applied to the target and grid electrode to control the positron energy and electric field.
Once positronium atoms are emitted in either of the two set-ups, they are excited with the UV and IR lasers to measure the Doppler profile, which gives an indication of their kinetic energy, KE. In the “reflection geometry” configuration, VT controls how deep into the MgO layer the positrons are implanted. Higher voltages result in deeper implantation, hence more collisions made by the Ps on the way out, and colder Ps. In the “transmission” set-up, however, a voltage of around 2 kV on VT is enough to make the positrons pass through the SiN and form Ps in the MgO. We found that over the 2–5 kV range we measured, positron penetration depths into the MgO layer in the transmission set-up were essentially the same, meaning that Ps always travelled through 30 μm of MgO in this configuration. The resulting KE values obtained from the Doppler profiles are shown below.
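For reference, converting a measured Doppler width into a kinetic energy is straightforward. A sketch assuming a Gaussian profile on the 243 nm 1S→2P transition; the FWHM value in the usage line is hypothetical, not a measured one:

```python
import math

M_PS = 2 * 9.1093837015e-31   # kg, Ps mass
C = 2.99792458e8              # m/s
EV = 1.602176634e-19          # J per eV

def ke_from_doppler_ev(fwhm_ghz, wavelength_nm=243.0):
    """KE (eV) corresponding to the 1D rms velocity implied by a
    Gaussian Doppler FWHM (GHz) on the 243 nm 1S->2P line, using
    sigma_v = c * sigma_nu / nu0."""
    nu0 = C / (wavelength_nm * 1e-9)
    sigma_nu = fwhm_ghz * 1e9 / (2 * math.sqrt(2 * math.log(2)))
    sigma_v = C * sigma_nu / nu0
    return 0.5 * M_PS * sigma_v**2 / EV

# Hypothetical 500 GHz FWHM (not a measured value):
print(f"{ke_from_doppler_ev(500) * 1e3:.0f} meV")
```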
Surprisingly, Ps appears to be emitted with the same energy regardless of the number of collisions it makes with the smoked MgO surfaces. This is inconsistent with the idea that Ps is formed in MgO with 4 eV of energy which is then reduced by collisions. If that were the case, we would expect the KE in the reflection set-up, with VT at 1 kV, to be very close to 4 eV. However, the Ps kinetic energy is always around 400 meV. This is because Ps in smoked MgO is intrinsically produced with around 300 meV of energy, with a wide distribution due to the grain sizes, and because of the large open volumes between the MgO crystals, cooling is rather inefficient.
This rules out smoked MgO as a candidate for cold Ps production, but there are many experiments in which the Ps production and interaction region needs to be separated from the positron beam line. In such experiments, a simple and easily produced MgO target like the one discussed here, in the “transmission geometry” configuration, can be employed; for example, a scattering gas cell could be installed without interfering with the incoming positrons.
We have also made some interesting observations when the lasers are fired into the MgO rather than travelling just in front of the MgO surface. More on that in the next blog post.
On Wednesday 17th of October we had the pleasure to welcome Dr. Chris Wade from Oxford University to give a seminar to the AMOPP group about progress towards interferometry with exotic quantum states of light, more specifically Holland-Burnett states. This was a very interesting talk with a great mix of theory and experimental results. The abstract can be seen below.
Towards enhanced interferometry using quantum states of light
Quantum metrology is concerned with the enhanced measurement precision that may be gained by exploiting quantum mechanical correlations. In the scenario presented by optical interferometry, several successful implementations have already been demonstrated, including gravitational wave detectors [1] and lab-scale experiments [2,3]. However, there are still open problems to be solved, including loss tolerance and scalability. In this seminar I will present progress implementing loss-tolerant Holland-Burnett states [4], and work searching for other practical states to implement [5].
[1] Schnabel et al., Nat. Comm. 1, 121 (2010)
[2] Slussarenko et al., Nat. Phot. 11, 700 (2017)
[3] Yonezawa et al., Science 337, 1514 (2012)
[4] Holland and Burnett, PRL 71, 1355 (1993)
[5] Knott et al., PRA 93, 033859 (2016)
On Wednesday 10th October we had Dr. Luis Masanes from within the UCL AMOPP group give a very interesting seminar. His talk was focused on the fundamental questions posed by the measurement postulates of quantum mechanics, and how they are redundant given the other postulates that form the basis of quantum mechanics. Dr. Masanes was kind enough to provide a copy of his slides here and the abstract can be seen below.
The Measurement Postulates of Quantum Mechanics are Redundant
Understanding the core content of quantum mechanics requires us to disentangle the hidden logical relationships between the postulates of this theory. The theorem presented in this work shows that the mathematical structure of quantum measurements, the formula for assigning outcome probabilities (Born’s rule) and the post-measurement state-update rule, can be deduced from the other quantum postulates, often referred to as “unitary quantum mechanics”. This result unveils a deep connection between the dynamical and probabilistic parts of quantum mechanics, and it brings us one step closer to understanding what this theory is telling us about the inner workings of Nature.
All atomic systems, including positronium (Ps), can be excited to states with high principal quantum number n using lasers; these are called Rydberg states. Atoms in such states exhibit interesting features that can be exploited in a variety of ways. For example, Rydberg states have very long radiative lifetimes (on the order of 10 µs for our experiments). This is a particularly useful feature in Ps because when it is excited to large-n states, the overlap between the electron and positron wavefunctions is suppressed. The self-annihilation lifetime therefore becomes so long compared to the fluorescence lifetime that the effective lifetime of Rydberg Ps is simply the radiative lifetime of the Rydberg state; most Rydberg Ps atoms decay back to the ground state before self-annihilating [Phys. Rev. A 93, 062513 (2016)]. The large distance between the positron and electron centres of charge in certain Rydberg states also means that they exhibit large static electric dipole moments, and thus their motion can be manipulated by applying forces with inhomogeneous electric fields [Phys. Rev. Lett. 117, 073202 (2016), Phys. Rev. A 95, 053409 (2017)].
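The lifetime argument can be made semi-quantitative with the usual n³ scaling of the S-state electron–positron contact density, which suggests a triplet self-annihilation lifetime of roughly 142 ns × n³ (non-S states annihilate even more slowly, so this is a conservative sketch):

```python
# Sketch: assuming the triplet self-annihilation lifetime scales as n^3
# from the 142 ns ground-state value (the e+/e- contact density of S
# states falls as 1/n^3; non-S states annihilate more slowly still).
TAU_GROUND = 142e-9   # s, triplet (ortho-Ps) ground-state lifetime

def annihilation_lifetime_us(n):
    """Rough triplet nS self-annihilation lifetime in microseconds."""
    return TAU_GROUND * n**3 * 1e6

# For n = 13 this gives ~300 us, far longer than the ~10 us radiative
# lifetime, so fluorescence dominates the effective lifetime.
print(f"{annihilation_lifetime_us(13):.0f} us")
```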
In addition to these properties, Rydberg atoms have high tunnel-ionization rates at relatively low electric fields. This property forms the basis for state-selective detection by electric-field ionization. In a recent series of experiments, we have demonstrated state-selective field ionization of positronium atoms in Rydberg states (n = 18–25) in both static and time-varying (pulsed) electric fields.
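An order-of-magnitude feel for the fields involved comes from the classical over-barrier picture: for hydrogen the ionization field is roughly 3.2×10⁸/n⁴ V/cm, and since Ps binding energies are half hydrogen's, the saddle-point field is about four times smaller. Real thresholds depend on the particular Stark state and on how the field is applied (diabatic thresholds are higher), so the sketch below is only a rough guide:

```python
# Order-of-magnitude sketch: classical over-barrier ionization field.
F_HYDROGEN = 3.2e8   # V/cm: rough hydrogen threshold is F_HYDROGEN / n^4

def ps_ionization_field_v_cm(n):
    """Rough classical over-barrier ionization field (V/cm) for Ps in
    state n; Ps binds half as strongly as H, so the field is ~4x lower."""
    return F_HYDROGEN / 4 / n**4

for n in (18, 25):
    print(f"n = {n}: ~{ps_ionization_field_v_cm(n):.0f} V/cm")
```

This already shows why the n = 18–25 range is convenient: the thresholds fall in the hundreds of V/cm, comfortably reachable with modest electrode voltages.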
The set-up for this experiment is shown below. The target (T) holds a SiO2 film that produces Ps when positrons are implanted into it. The first grid (G1) allows us to control the electric field in the laser excitation region, and a second grid (G2) with a variable voltage provides a well-defined ionization region. The field is applied either by holding G2 at a constant voltage (the static-field configuration) or by ramping the potential on G2 (the pulsed-field configuration).
Figure 1: Experimental arrangement showing separated laser excitation and field ionization regions.
In this experiment we detect the annihilation gamma rays from:
the direct annihilation of positronium
annihilations that occur when positronium crashes into the grids and chamber walls
annihilations that occur after the positron, released via the tunnel ionization process, crashes into the grids or chamber walls
We subtract the time-dependent gamma-ray signal recorded when ground-state Ps traverses the apparatus from the signal recorded when Rydberg atoms do so with an electric field applied in the ionizing region. This background-subtracted signal tells us where in time there is an excess or deficit of annihilation radiation compared to background (this SSPALS method is described further in NIM. A 828, 163 (2016) and here).
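In code, the subtraction might look like the following sketch, where each waveform is normalised to its total counts before subtracting (one plausible normalisation choice; the experiment's exact procedure may differ):

```python
import numpy as np

def background_subtracted(rydberg, background):
    """Subtract the ground-state (background) waveform from the Rydberg
    one after normalising each to its total counts, leaving the excess
    (positive) or deficit (negative) of annihilations versus time."""
    r = np.asarray(rydberg, dtype=float)
    b = np.asarray(background, dtype=float)
    return r / r.sum() - b / b.sum()

# Toy waveforms (arbitrary counts per time bin):
diff = background_subtracted([5, 80, 40, 30], [10, 100, 30, 10])
print(np.round(diff, 3))   # early deficit, late excess; sums to zero
```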
Static Electric Field Configuration
In this version of the experiment, we let the excited positronium atoms fly into the ionization region, where they experience a constant electric field. When a negligible electric field (~0 V/cm) is applied in the ionizing region, the excited atoms fly unimpeded through the chamber, as shown in the animation below. Consequently, the background-subtracted spectrum is identical to what we expect for a typical Rydberg signal (see the figure below for n = 20). There is a lack of annihilation events early on (between 0 and 160 ns) compared to the background (ground-state) signal, which manifests itself as a sharp negative peak. This is because the lifetime of Rydberg Ps is orders of magnitude longer than the ground-state lifetime.

Later, at ~200 ns, we observe a bump that arises from an excess of Rydberg atoms crashing into Grid 2. Finally, we see a long positive tail due to long-lived Rydberg atoms crashing into the chamber walls.
Figure 2: Trajectory simulation of Rydberg Ps atoms travelling through the ~0 V/cm electric-field region (left panel) and the measured background-subtracted gamma-ray flux; the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).
On the other hand, when the applied electric field is large enough, all atoms are quickly ionized as they enter the ionizing region. Correspondingly, the ionization signal in this case is large and positive early on (again between 0 and 160 ns). Furthermore, instead of a long positive tail, we now have a long negative tail due to the lack of annihilations later in the experiment (since most, if not all, atoms have already been ionized). Importantly, since in this case field ionization occurs almost instantaneously as the atoms enter the ionization region, the shape of the initial ionization peak is a function of the velocity distribution of the atoms in the direction of propagation of the beam.
Figure 3: Trajectory simulation of Rydberg Ps atoms travelling through the ~2.6 kV/cm electric-field region (left panel) and the measured background-subtracted gamma-ray flux; the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).
We measure these annihilation signal profiles over a range of fields and calculate the signal parameter Sᵧ. A positive value of Sᵧ implies that there is an excess of ionization occurring within the ionization region, whereas a negative Sᵧ means that there is a lack of ionization within the region with respect to background. Accordingly, when Sᵧ is approximately equal to 0%, about half of the Ps atoms are being ionized. A plot of the experimental Sᵧ parameter for different applied fields and different n states is shown below.

Figure 4: Electric-field scans for states ranging from n = 18 to 25, showing that at low electric fields none of the states ionize (hence the negative values of Sᵧ) and that, as the electric field is increased, different n states exhibit different ionizing-field thresholds.
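One plausible way to compute such a parameter (an assumed definition for illustration, not necessarily the paper's exact formula) is to integrate the background-subtracted waveform over the time window in which atoms cross the ionization region:

```python
import numpy as np

def s_gamma_percent(diff, t_ns, t_start_ns, t_stop_ns):
    """Integrate a normalised background-subtracted waveform over the
    window in which atoms cross the ionization region, as a percentage.
    Positive = excess ionization, negative = deficit."""
    diff = np.asarray(diff, dtype=float)
    t = np.asarray(t_ns, dtype=float)
    window = (t >= t_start_ns) & (t < t_stop_ns)
    return 100.0 * diff[window].sum()

# Toy example: excess events inside a 0-160 ns window.
t = np.arange(0, 400, 50)                               # ns bin edges
diff = np.array([-0.2, -0.1, 0.3, 0.2, 0.1, 0.0, -0.1, -0.2])
print(f"S_gamma = {s_gamma_percent(diff, t, 0, 160):.0f}%")
```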
It is clear that different n states can be distinguished using these characteristic Sᵧ curves. However, the main drawback of this method is that both the background-subtracted profiles and the Sᵧ curves are convolved with the velocity profile of the beam of Rydberg Ps atoms. This drawback can be eliminated by performing pulsed field ionization.
Pulsed Electric Field Configuration
We have also demonstrated that different Rydberg states of positronium can be distinguished by ionization in a ramped electric field. The set-up is the same as in the static-field scenario, but now, instead of fixing the potential on Grid 2, the potential on this grid is decreased from 3 kV to 0 kV, increasing the field from ~0 V/cm to ~1.8 kV/cm (the initial 3 kV is necessary to help cool down Ps [New J. Phys. 17, 043059 (2015)]).
The advantage of performing state selective field ionization this way is that we can allow most of the atoms to enter the ionization region before pulsing the field. This eliminates the dependence of the signal on the velocity distribution of the atoms and thus the signal is only dependent on the ionization rates of that Rydberg state in the increasing electric field.
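Schematically, with a linear ramp F(t) = F_max·t/T, an atom in state n ionizes roughly when the ramp crosses its threshold field, so the arrival time of the ionization peak labels n. The threshold scaling (a rough over-barrier estimate) and the ramp parameters below are illustrative assumptions, not the experimental values:

```python
def ionization_time_us(n, f_max_v_cm=1800.0, ramp_us=10.0):
    """Time (us) at which an assumed linear ramp to f_max_v_cm crosses
    a rough over-barrier threshold ~8e7/n^4 V/cm for Ps. Ramp amplitude
    and duration are hypothetical placeholders."""
    f_threshold = 8e7 / n**4
    return ramp_us * f_threshold / f_max_v_cm

# Higher n states ionize at lower fields, hence earlier in the ramp:
for n in (18, 20, 25):
    print(f"n = {n}: peak at ~{ionization_time_us(n):.1f} us")
```

The monotonic ordering of the peak times is what makes the ramp a state label, independent of the beam's velocity distribution.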
Below is a plot of our results with a comparison to simulations (dashed lines). We see broad agreement between simulation and experiment, and we are able to distinguish between different Rydberg states according to where in time the ionization peak occurs. This means that we should be able to detect a change in an initially prepared Rydberg population due to some process such as microwave-induced transitions.
The development of state-selective ionization techniques for Rydberg Ps opens the door to measuring the effect of blackbody transitions on an initially prepared Rydberg population and provides a methodology for detecting transitions between nearby Rydberg levels in Ps, which could also be used in electric-field cancellation methods to generate circular Rydberg states of Ps.
Ethics in research is something everyone is expected to understand and respect, from beginning students to established professors. However, such things are very rarely discussed formally. In effect, one is expected simply to know all about these matters by appealing to “common sense” and to the examples set by mentors. This is not an ideal situation, as is evidenced by numerous high-profile cases of scientific misconduct. The purpose of this post is to briefly mention some important aspects of ethical research practice, mostly to encourage further thought in students before they have a chance to be influenced by whatever environment they find themselves working in.
In some cases “misconduct” is easy to spot, and is then (usually) universally condemned. For example, making up or manipulating data is something we all know is wrong and for which there can never be a legitimate defense. At the same time, some students might see nothing wrong in leaving off a few outlier data points that clearly do not fit with others, or in failing to refer to some earlier paper. It is not always necessarily obvious what the right (that is to say, ethical) thing to do might be. What harm is there, after all, in putting an esteemed professor as a co-author on a paper, even though they have provided no real contribution? (Incidentally, merely providing lab space, equipment or even funding doesn’t, or shouldn’t, count as such). Well, there is harm in gift authorship; it devalues the work put in by the real authors, and makes it possible for a certain level of authority to guarantee future (perhaps unwarranted) success. And does it really matter if an extreme outlier is left off of an otherwise nice looking graph? The answer is yes, absolutely.
At the heart of most unethical behavior in science, or misconduct, is the nature of truth, and the much revered “scientific method”. Never mind that there is no such thing, or that the way science is actually done is not even remotely similar to the romantic notions so often associated with this very human activity. Nevertheless, there is a very strong element of trust implicit in the scientific endeavor. When we read scientific papers we have no choice but to assume that the experiments were carried out as reported, and that the data presented are what was actually observed. The entire scientific enterprise depends on this kind of trust; if we had to sort through potentially fraudulent reports we would never get anywhere. For this reason, trust in science is extremely important, and one could say that, regardless of any mechanisms of self-correction that might exist in science, and the concomitant inevitability of the discovery of fraud, we are almost compelled to trust scientists if we wish science to be done in a reasonable manner[a]. By the same token, those scientists who choose to betray this trust pollute not only their own research (and personal character), but all of science. The enormity of this offense should not be underestimated. In particular, students should be aware that being a dishonest scientist is oxymoronic.
What is misconduct? A formal definition of scientific misconduct of the sort that would be necessary in legal action is an extremely difficult thing to produce. There have been many attempts, most of which are centered on the ideas of FFP: that is, fraud, falsification and plagiarism. These are things that we can easily understand and, most of the time, identify when they arise. However, things can (and invariably do) get complicated in real world situations, especially when people try to gauge intention, or when the available information is incomplete.
A definition of scientific misconduct, as it pertains to conducting experiments or observations to generate data, was given by Charles Babbage (Babbage 1830) that still has a great deal of relevance today. Babbage is perhaps best known for his “difference engine”, a mechanical computer he designed to take the drudgery out of performing certain calculations. He was a well-respected scientist who worked in many areas, Lucasian Professor of Mathematics at Cambridge University, and a Fellow of the Royal Society. His litany of offences, which is frequently cited in discussions of scientific misconduct, was categorized into four classes.
Babbage’s taxonomy of fraud:
1.) Hoaxing is not fraud in the usual sense, in that it is generally intended as a joke, or to embarrass or attack individuals (or institutions). A hoax is more like a prank than an attempt to benefit the hoaxer in the scientific realm; it is not often that a hoax remains undisclosed for a long time. Hoaxes are not generally intended to promote the perpetrator, who will usually prefer to remain anonymous, at least until the jig is up. A hoax is far more likely to involve someone dressing up as Bigfoot, or lowering a Frisbee outside a window on a fishing line, than fabricating IV curves for electronic devices that have never been built.
A famous and amusing example is the so-called Sokal hoax (Sokal 1996). In 1996 a physicist from New York University submitted a paper to the journal “Social Text”. This journal publishes work in the field of “postmodernism” or cultural studies, an area in which there had been many critiques of science based on unscientific criteria. For example, the idea that scientific objectivity is a myth, and that certain cultural elements (e.g., “male privilege” or western societal norms) are determining factors in the underlying nature of scientific discovery. This attitude vexed many physicists, leading to the so-called “science wars” (Ross 1996), and was the impetus for Sokal’s hoax. The paper he submitted to Social Text was entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. This article was deliberately written in impenetrable language (as was often the case for articles in Social Text and other similar publications) but was in fact pure nonsense (as was also often the case for articles in Social Text and other similar publications). The paper was designed with the intention of impressing the editors, not only with the stature of the author (Sokal was then a respected physicist, and seems, at the time of writing some 16 years later, still to be one) but also by appealing to their ideological positions. Sokal revealed the hoax immediately after his paper was published; he never had any intention of allowing it to stand, but rather wanted to critique the journal. As one might expect, a long series of arguments arose from this hoax, although the main effect was to tempt more actual scientists into commenting on the “science wars” that were already occurring. In this sense Sokal may have done more harm than good with his hoax, as the inclusion of scientists in the “debate” served only to legitimize what had before been largely the province of people with no training in, or, frequently, understanding of, science.
2.) Forging is just what it sounds like, simply inventing data and passing it off as real. Of course, one might do the same in a hoax, but the forger seeks not to fool his audience for a while, later to reveal the hoax for his or her own purposes, but to permanently perpetrate the deception. The forger will imitate data and make it look as real as possible, and then steadfastly stick to their claim that it is indeed genuine. Often such claims can stand up to scrutiny even once the forging has been exposed, since if it is done well the bogus data is in no way obviously so. This unfortunate fact means that we cannot know for sure how much forging is really going on. However, the forger does not have any new information and so cannot genuinely advance science. They may be able to appear to be doing so, by confirming theoretical predictions, for example, but such predictions are not always correct. Nevertheless, the forger must have a good knowledge of their field, and enough understanding of what is expected to be able to produce convincing data out of thin air. If one were to do this then, on rare occasions, it is even possible that one might guess at some reality, later proved, and really get away with it. However, the fleeting glory of a major discovery seems to be somewhat addictive to those so inclined, and some famous forgers have started out small, only to allow themselves to be tempted into such outrageous claims that from the outside it seems as though they wanted to get caught.
A well-known case in which exactly this happened is the “Schön affair”, which has been covered in detail in the excellent book “Plastic Fantastic” (Reich 2009). Hendrik Schön was a young and, apparently, brilliant physicist employed at Bell Labs in the early 2000s. His area of expertise was organic semiconductors, which held (and continue to hold) great promise for the production of cheap and small electronic devices. Schön published a large number of papers in the most high-profile journals (including many in Science and Nature) that purported to show organic crystals exhibiting remarkable electronic properties, such as superconductivity, the fractional quantum Hall effect, lasing, single-molecule switches and so on. At one point he was publishing almost one paper per week, which is impressive just from the point of view of being able to write at such a rate; most scientists will probably spend one or two months just writing a paper, never mind the time taken to actually get the data or design and build an experiment. Before the fraud was exposed, one professor at Princeton University[b], who had been asked to recommend Schön for a faculty position, refused to do so, and stated that he should be investigated for misconduct based solely on his extreme publication rate (Reich 2006).
Schön’s incredible success seemed to be predicated on his unique ability to grow high-quality crystals (he was described on one occasion as having “magic hands”). This skill eluded other researchers, and nobody was able to reproduce any of Schön’s results. As he piled spectacular discovery upon spectacular discovery, suspicion grew. Eventually it was noticed that in some of his published data Schön had used the same curves, right down to the noise, to represent completely distinct phenomena in completely different samples (which, in fact, had never really existed). After he tried to claim that this was simply an accident, that he had just mislabeled some graphs and used the wrong data, more anomalies were revealed. The management at Bell Labs started an investigation, and Schön continued to make excuses, stating (amazingly, considering the quality of the real scientists who worked at this world-leading institution) that he had not kept records in a notebook, and that he had erased his primary data to make space on his computer’s hard drive[c]!
None of these excuses were convincing, and indeed such poor research practices should themselves probably be grounds for dismissal, even in the absence of fraud. Fraud was not absent, however; it was prevalent, and on a truly astounding scale. Schön had invented all of his data, wholesale, had never grown any high-quality crystals, and had lied to many of his colleagues, at Bell Labs and elsewhere. He was fired, and his PhD was revoked by the University of Konstanz in Germany for “dishonorable conduct”. There was no suggestion that the rather pedestrian research Schön had completed for his doctorate was fraudulent, but his actual fraud was so egregious that his university felt he had dishonored not just himself and their institution, but science itself, and as such did not deserve his doctorate.
It seems obvious in retrospect that Schön could never have expected to get away with his forging indefinitely. Even if he had been a more careful forger, and had made up distinct data sets for all of his papers, the failure of others to reproduce his work would have eventually revealed his conduct. Even before he was exposed, pressure was mounting on Schön to explain to other scientists how he made his crystals, or to provide them with samples. He had numerous, and very well respected, co-authors who themselves were becoming nervous about the increasingly frustrated complaints from the other groups unable to replicate his experiments. It was only a matter of time before it all came out, regardless of the manifestly fake published data.
This unfortunate affair raises some interesting questions about misconduct in science. For example, Schön’s many co-authors seem to have failed in their responsibilities. Although they were found innocent of any actual complicity in the fraud, as co-authors on so many, and on such astounding, papers, it seems clear that they should have paid more attention. Some of the co-authors were highly accomplished scientists, while Schön himself was relatively junior, which only compounds the apparent lack of oversight. Indeed, it is likely that the pressure of working for such distinguished people, and at such a well-respected institution as Bell Labs, was what first tempted Schön into his shameful acts.
No doubt winning numerous prizes, being offered tenured positions at an Ivy League school (Princeton; not bad if you can get an offer) and generally being perceived as the new boy-wonder physicist played a role in the seemingly insane need to keep claiming ever more incredible results. It is hard to escape the conclusion that some sort of mental breakdown contributed to this folly. However, another pathological element was also likely at play, which is that Schön was sure that his speculations would eventually be borne out by other researchers.
If this had actually happened, and if he had not been a poor forger (or, perhaps, an overworked forger, with barely enough time to invent the required data), he might have enjoyed many years of continued prosperity. As it happens, some of the fake results published by Schön have in fact been shown to be roughly correct; sometimes the theorists guess right. Ultimately, though, forging will usually be found out, and the more that is forged the faster this will happen. The only way to forge and get away with it is to know in advance what nature will do; and if you know that, the forging is redundant[d].
3.) Trimming is the practice of throwing away some data points that seem to be at odds with the rest of the measurements, or at least are not as close to the “right” answer as expected. In this case the misconduct is primarily designed to make the experimenter seem more competent, although a poor knowledge of statistics might mean that it has the opposite effect; outliers sometimes seem out of place, but to those who have a good knowledge of statistics their absence may be quite suspicious. Because it can seem so innocuous, trimming is probably much more common than forging. When he invents data from nothing, the forger has already committed himself to what he must surely know is an immoral and heinous action; whatever justification he might employ cannot shield him from that knowledge. The trimmer, on the other hand, might feel that his scientific integrity is unaffected, and that all he is doing is making his true data a little more presentable, so as to better explain his truth. A trimmer who believes this is mistaken: he may well be less offensive than the forger, but his actions are still unconscionable, still constitute a fraud, and may equally well pervert the scientific record, even when the essential facts of the measurement remain. For example, by trimming a data set to look “nicer” you might significantly alter the statistical significance of some slight effect. Indeed, in this way you can create a non-existent effect, or destroy evidence for a real one; without the truth of nature’s reference there is no way to know the difference.
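The statistical point is easy to demonstrate. Here is a minimal sketch (in Python, using entirely simulated data, not any real experiment): we draw 100 measurements whose true underlying effect is exactly zero, then “trim” the 15 points that disagree most with a hoped-for positive effect, and recompute a simple z statistic for the mean in each case.

```python
# Sketch: how trimming can manufacture significance from pure noise.
# The "experiment" is simulated; the true effect is exactly zero.
import math
import random
import statistics

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(100)]  # null data: no real effect

def z_score(sample):
    """z statistic for 'sample mean differs from zero' (normal approximation)."""
    n = len(sample)
    return statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))

# The trimmer quietly discards the 15 most "inconvenient" (most negative) points.
trimmed = sorted(data)[15:]

z_full, z_trim = z_score(data), z_score(trimmed)
print(f"full data    (n={len(data)}):  z = {z_full:+.2f}")
print(f"trimmed data (n={len(trimmed)}):  z = {z_trim:+.2f}")
```

Discarding only the points that pull against the desired conclusion simultaneously shifts the sample mean away from zero and shrinks the scatter, so the trimmed set invariably looks more significant than the honest one, even though there is nothing there at all.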
The trimmer has a great advantage over the forger, insofar as he is not trying to second-guess nature, but rather obtains the answer from nature in the form of his primary measurements, and then attempts to make his methodology appear “better” than it really was. The trimmed data, therefore, will likely not give a very different answer than the untrimmed set, but will seem to have been obtained with greater skill and precision.
There are no well-known examples of trimming because, by its very nature, it is hard to detect. Who will know if a few data points are omitted from an otherwise legitimate data set? What indicators might allow us to distinguish a careful experimenter from a carefree trimmer? Even Newton has been accused of trimming some of his observations of lunar orbits to obtain better agreement with his theory of gravitation (Westfall 1994). Objectively, even for the amoral, trimming is quite pointless when one considers that, if found out, the trimmer will suffer substantial damage to his reputation, whereas even if it is not found out, the advantage to that reputation is slight.
4.) Cooking refers to selecting data that agrees with the hypothesis you are trying to prove and rejecting other data, which may be equally valid, but which doesn’t. In other words, cherry picking. This is similar to trimming inasmuch as it involves selecting from real data (i.e., not forged), but with no real justification for doing so other than one’s own presupposition about what the “right” answer is. It differs from trimming in that the cook will take lots of data and then select those that serve his purpose, while the trimmer will simply make the best out of whatever data he has. This is lying, just as forging is, but with the limitation that nature herself has at least suggested which lies to tell.
Cooking also carries with it an ill-defined aspect of experimental science: the “art” of the experimentalist. It is certainly true that experimental physics (and no doubt other branches of science too) contains some element of artistry. Setting up a complicated apparatus and making it work is an underappreciated skill, but a crucial one. When taking data with a temperamental system it is routine to throw some of it out, but only if one can justify doing so in terms of the operation of the device. Almost all experiments include some tuning-up time in which data are neglected because of various experimental problems. However, once all of these problems have been ironed out and the data collection begins “for real”, one has to be very careful about excluding any measurements. If some obvious problem attends (a power supply failed, a necessary cable was unplugged, batboy was seen chewing on the beamline, whatever it may be) then there is no moral difficulty associated with ditching the resulting data. When this sort of thing happens (and it will), the best thing to do is throw out all of the data and start again.
The situation gets more complicated if an intermittent problem arises during an experiment that affects only some of the data. In that case you might want to select the “good” data and get rid of the “bad”. Unfortunately, all of the resulting data will then be “ugly”. This is not a good idea, because it borders on cooking. It is much better to throw away all potentially suspect data and start again. In data collection, as in life, even the appearance of impropriety can be a problem, regardless of whether propriety has in fact been maintained. It may be the case that there is an easy way to identify data points that are problematic (for example, the detector output voltage might be zero sometimes and normal at others), but there will remain the possibility that the “good” data are also affected by the underlying experimental problem, just in a less obvious manner. The best thing to do in this situation will depend on many factors; it is not always possible to repeat experiments, and you might have high confidence in some data because of the exact nature of your apparatus, and so on. Even so, “when in doubt, throw it out” is the safest policy.
Cooking is a little like forging, but with an insurance policy. That is, since the data is not completely made up there is a better chance that it does really correspond to nature, and therefore you will not be found out later because you are (sort of) measuring something real. This is not necessarily the case, however, and will depend on the skill of the cook. By selecting data from a large array it is usually possible to produce “support” for any conclusion one wants, and as a result the cook who seeks to prove an incorrect hypothesis loses his edge over the forger.
A well-known case illustrates that it is not so simple to decouple experimental methodology from cookery. Robert Millikan is famous for his oil drop experiment, in which he was able to obtain a very accurate value for the charge on the electron, work for which he was awarded the Nobel Prize. This work has been the subject of some controversy, however, since the historian of science Gerald Holton [Holton 1978] went through Millikan’s notebooks and revealed that not all of the available data had been used, even though Millikan specifically said that it had. Since then there have been numerous arguments about whether the selection of data was simply the shrewd action of an expert experimentalist, or a case of dishonest cooking. As discussed by numerous authors [e.g., Segerstråle 1995], it is neither one nor the other, and the situation is more nuanced. The reality is that his answer was remarkably accurate, as we now know. The question then is: did Millikan know what to expect? If he did not, then accusations of cooking seem unfounded, since he would have had to know what he was trying to “demonstrate”. If he got the right answer without knowing it in advance, he must have been doing good science, right? WRONG! If Millikan was cooking (or even doing a little trimming) and he somehow got the right answer, this does not in any way mitigate the action. And while we cannot say whether he really did do any cooking, getting the right answer does not imply that he was being honest; as it turns out, he would have obtained an even more accurate answer if he had used all of his data (Judson 2004).
Furthermore, the question of whether Millikan already knew the “right” answer is meaningless. The right answer is the one we get from an experiment, and we only know it to the extent that we can trust the experiment. An army of theoreticians can be (have been, perhaps now are) wrong. Dirac predicted that the electron g factor would be exactly 2, and it almost is. Knowing only this, the cook or trimmer might be sorely tempted to nudge the data to be exactly 2.0000 and not 2.0023, but that would be a terrible mistake, one that would miss out on a fantastic discovery. One can only hope that somewhere in the world there is at least one cook who allowed a Nobel Prize (or some major discovery) to slip through his dishonest fingers, and cannot even tell anyone how close he came. Did Millikan knowingly select data to make his measurements look more accurate? There is no way to know for sure.
Pathological science is a term coined by Irving Langmuir, the well-known physicist. He defined this as science in which the observed effects are barely detectable, and the signals cannot be increased, but are nevertheless claimed to be measured with great accuracy; pathological science also frequently involves unusual theories that seem to contradict what is known (Langmuir 1989). However, pathological science is not usually an act of deliberate fraud, but rather one of self-delusion. This is why the effect in question is always at the very limit of what can be detected, for in this case all kinds of mechanisms can be used (even subconsciously) to convince oneself that the effect really is real. In this context, then, it is worth asking: does the cook necessarily know what he is doing? That is, when one wishes to believe something so very strongly, perhaps it becomes possible to fool one’s own brain! This kind of self-delusion is more common than you might think, and happens in everyday life all the time. Although we cannot directly choose what we believe is true, when we don’t know one way or the other, our psychology makes it easy for us to accept the explanation that causes the least internal conflict (also known as cognitive dissonance).
When a researcher is so deluded as to engage in pathological science, it is difficult to categorize this activity as one of misconduct. The forger has to know what he is doing. The trimmer or cook generally tries to make his data look the way he wants, but if he does so without real justification then it is wrong. By making justifications that seem reasonable but are not, one could conceivably fool oneself into thinking that there was nothing improper about cooking or trimming. Certainly, this will sometimes really be the case, which only makes it easier to come up with such justifications.
In many regards the pathological scientist is not so different from one who is simply wrong. The most famous example of this might be the cold fusion debacle (Seife 2009). Pons and Fleischmann claimed to see evidence for room temperature fusion reactions and amazed the world with their press conferences (not so much with their data). Their fusion claims later turned out to be false, as demonstrated by Kelvin Lynn, Mario Gai and others (Gai et al. 1989). However, the effect they said they saw was also observed by some other researchers, and as a result many reasons were promulgated as to why one experiment might see it and another might not. Then, when it was pointed out that there ought to be neutron emission if there was fusion, and no neutrons were observed, it became necessary to modify the theory so as to make the neutrons disappear. This was classic pathological science, hitting just about every symptom laid out by Langmuir.
Plagiarism was not explicitly discussed by Babbage but does feature prominently in modern definitions of misconduct. Directly copying other people’s work is relatively rare for professional scientists, perhaps because it is so likely to be discovered. There have been some cases in which people have taken papers published in relatively obscure journals and submitted them, almost verbatim, to other, even more obscure, journals. Since only relatively unknown work is susceptible to this sort of plagiarism it does little damage to the scientific record, but it is no less of an egregious act for that. Certainly the psychology of the “scientist” who would engage in such actions is no less damaged than that of a Schön.
In one case a plagiarist, Andrzej Jendryczko, tried to pass off the work of others as his own by translating it (from English, mostly) into Polish, and then publishing direct copies in specialized Polish medical journals under his own name (e.g., Judson 2004). This may have seemed like a safe strategy, but a well-read Polish-American physician had no trouble tracking down all the offending papers using the internet. Indeed, wholesale plagiarism is now very easy to uncover via online databases and a multitude of specialized software applications. For the student, plagiarism should seem like a very risky business indeed. With minimal effort it can be found out, and no matter how clever the potential plagiarist might be, Google is usually able to do better with sheer force of computing power and some very sophisticated search algorithms. As is often the case, this sort of cheating does not even pass a rudimentary cost-benefit analysis[e]. It is another inverse of Pascal’s wager (the idea that it is better to believe in God than not, just because the eternity of paradise always beats any temporary secular position), inasmuch as the actual gain from ripping off some (necessarily) obscure article found online is virtually nil, whereas exposure as a fraud for doing so will contaminate everything you have ever done, or will ever do, in science. How can this ever be worth even considering?
Plagiarism is not as simple as copying the work of others; providing incomplete references in a paper could be construed as a form of plagiarism, insofar as one is not providing the appropriate background, thereby perhaps implying more originality than really exists. References are a very important part of scientific publishing, and not just because it is important to give due respect to the work that you might be building upon. A well referenced paper not only shows that you have a good understanding and knowledge of your field, but will also make it possible for the reader to properly follow the research trail, which puts everything in context and helps enormously in understanding the work[f].
Stealing the ideas of others, if not their words, is also, of course, a form of plagiarism. This is harder to prove, which may be why it is most often discussed in the context of grant proposals. This creates a rather unfortunate set of circumstances in which a researcher finds himself having to submit his very best ideas to some agency (in the hope of obtaining funding), which promptly sends them directly to his most immediate competitors for evaluation! After all, it is your competition who are best placed to evaluate your work. In most cases, if an obvious conflict exists, it is possible to specify individuals who should not be consulted as referees, although this is more common in the publication of papers than it is in proposal reviews. For researchers in niche fields, in which there may not be as much rivalry, this is not going to be a common problem. For researchers in direct competition, however, things might not be so nice. There have in fact been some well-publicized examples of this sort of plagiarism, but it is probably not very common, because grant proposals don’t often contain ideas that have not been discussed, to some degree, in the literature, and in those rare cases when this isn’t so, it is probably going to be obvious if such an idea is purloined. Also, let us not forget, most scientists are really not so unscrupulous. Thus, for this to happen you’d need an unlikely confluence of an unusually good idea sent to an abnormally unprincipled referee who happened to be in just the right position to make use of it.
References & Further Reading
Misconduct in science is a vast area of study, and the brief synopsis we have given here is just the tip of the iceberg. The use of Babbage’s taxonomy is commonplace, and in-depth discussions of every aspect of it can be found in many books. The following is a small selection. The book by Judson is particularly recommended, as it is fairly recent and goes into just the right amount of detail in some important case studies.
Judson H. F. (2004). The Great Betrayal: Fraud in Science, Houghton Mifflin Harcourt.
Alfredo, K. and Hart, H. (2010). “The University and the Responsible Conduct of Research: Who is Responsible for What?”, Science and Engineering Ethics.
Sterken, C. (2011). “Writing a Scientific Paper III. Ethical Aspects”, EAS Publications Series 50, 173.
Reich, E. S. (2009). Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, Palgrave Macmillan.
Broad, W. and Wade, N. (1983). Betrayers of the Truth: Fraud and deceit in the halls of science, Simon & Schuster.
Babbage C. (1830) Reflections on the Decline of Science in England, August M. Kelley, New York (1970).
Holton G. (1978) Subelectrons, presuppositions and the Millikan-Ehrenhaft dispute, In: Holton G., ed., The Scientific Imagination, Cambridge University Press, Cambridge, UK.
Langmuir I. (1989) Pathological science, Physics Today 42: 36–48. Reprinted from the original in General Electric Research and Development Center Report 86-C-035, April 1968
Ross, Andrew, ed. (1996). Science Wars. Duke University Press.
Segerstråle, U. (1995). Good to the last drop? Millikan stories as “canned” pedagogy, Science and Engineering Ethics 1, 197.
Sokal, A. D. and Bricmont, J. (1998). Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science. Picador USA: New York.
Westfall, R. S. (1994). The Life of Isaac Newton. Cambridge University Press.
Seife, C. (2009). Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking, Penguin Books.
Gai, M. et al., (1989). Upper limits on neutron and gamma ray emission from cold fusion, Nature 340, 29.
[a] What is this reasonable manner? We just mean that if one cannot trust scientific reports to be honest representations of real work undertaken in good faith, then standing on the shoulders of our colleagues (who may or may not be giants) becomes pointless, everything has to be independently verified, and the whole scientific endeavor becomes exponentially more difficult.
[b] Reich’s book does not name the professor in question, but clearly he or she had a good grasp on how much can be done, even by a highly motivated genius.
[c] This is a particularly ludicrous claim since to a scientist experimental data is a highly valuable commodity, whereas disc space is trivially available: the idea that one would delete primary data just to make space on a hard drive is like burning down one’s house in order to have a bigger garden.
[d] Another way to increase your chances of getting away with forgery is to fake data that nobody cares about. However, this obviously has an even less attractive cost-benefit ratio.
[e] We obviously do not mean to suggest that some clever form of plagiarism that can escape detection does pass a cost-benefit analysis, and is therefore a good idea: the uncounted cost in any fraud, even a trivial one, is the absolute destruction of one’s scientific integrity. We just mean that this sort of thing, which is never worthwhile, is even more foolish if there isn’t some actual (albeit temporary) advantage.
[f] On a more mundane level, a referee who has not been cited, but should have been, is less likely to look favorably upon your paper than he or she might otherwise have done. This also applies to grant proposals, so it is in your own interests to make sure your references are correct and proper.