International Conference on Precision Physics of Simple Atomic Systems

Back in May, three of the Ps Spectroscopy team had the pleasure of attending PSAS 2022, which took place at the University of Warsaw, Poland. The conference focuses on precision measurements of simple atomic and molecular systems, including the development of new experimental methods and refinement of the theoretical calculations and models.

Group photo of the PSAS’22 Participants

We heard talks from groups around the world on hydrogen, QED theory, exotic atoms and more. David’s talk covered our recent microwave spectroscopy measurements of the Ps n=2 fine structure. These experiments were the most precise to date, but they disagreed significantly with theory and produced asymmetric lineshapes (published here and here). We later found that the experimental vacuum chamber was causing reflections of the microwave fields, which appear to be the cause of the observed asymmetric lineshapes and shifts (published here). Recent experiments with a smaller vacuum chamber appear to have reduced the reflections, leading to symmetric lineshapes; more info to come. All of the talks are available on YouTube and are well worth a watch.

David giving his talk

Tamara and Sam both presented posters on upcoming experiments: Ramsey interferometry of Ps and THz spectroscopy of He, respectively. Tamara’s poster detailed our new DC Ps beamline (now in development!), in which an energy-tunable 2S Ps beam will be created via collisions with Xe gas. With this we’re aiming to perform Ramsey interferometry using two waveguides instead of one, which we anticipate will greatly improve our microwave spectroscopy measurements. Sam’s poster showed some preliminary data on the THz spectroscopy of Rydberg He atoms. We’re planning more measurements like this in well-defined electric fields, to perform spectroscopy between Stark manifolds; more on all of these experiments is to come.

Sam presenting his poster

We’d like to thank the Candela Foundation and the Faculty of Physics at the University of Warsaw for organizing the conference and hosting us. We’re looking forward to the next PSAS in Wuhan!

A note on unexpectedly useful transferable skills for experimental physicists

Careers officers at universities often talk about transferable skills when discussing job-hunting options with undergraduate physicists. It’s true that many of the skills taught at degree level are invaluable across a broad range of professions. Nowadays, you’d be hard pressed to find a job that doesn’t value basic programming, data analysis or problem-solving skills, to mention a few. However, graduates who choose to stay in academia and embark on careers in experimental research may be surprised to learn of all the skills not taught in their degrees that will likely prove essential in the coming years. Here, I would like to discuss a few of the skills that I wouldn’t have attributed to a physics PhD until they came in very handy during my first year.

First of all, patience. This may seem obvious, but experimental research is not to be rushed, for a multitude of reasons, not least because it can be extremely dangerous to do so. Pragmatically, it simply isn’t efficient to try to do things as quickly as possible; speedy experimentation will likely lead to mistakes being made, false or useless data being taken and, inevitably, many hours being spent re-doing all your work. Of course, this is far easier said than done. Experiments in undergraduate lab modules are often laid out for the students, with instructions ranging from minimal to comprehensive, and rarely take longer than 6 to 12 hours to perform. At PhD level, however, experiments can take weeks, months, even years if you’re continuing the work of a long line of past students (or your lab is cursed). Many hours will be spent watching vacuum chambers pump down, only to realise there’s a leak, or that the thing you’re supposed to be testing in there is still sitting on the bench where you left it. Long days can be spent aligning beams, measuring fields or taking data to no avail, either because you’ve made an honest mistake or because the universe just doesn’t want to play fair. Days like this can be frustrating, and the temptation to rush things will be strong, but rushing will only beget more waiting in the end. In my experience, the trick to patience is positivity and productivity. When you’ve found a bolt missing, a leak in your system or a blocked laser beam, try to substitute “That could have been so much worse if I hadn’t just spotted it” for “Oh God, I have to start all over again”. See these things as small wins as opposed to massive losses, and not only will you have the motivation to correct things and crack on with your experiment, but you will feel better too. Finally, when you have lots of time because you’re waiting for something, fill it!
If your data set takes hours to record, catch up on some reading, try that simulation you were supposed to do last week or write a blog post… If your mind is occupied with producing work, you shan’t have the time to feel frustrated while you’re waiting for something. Patience is a virtue, yes, but it’s also a skill, and arguably the most useful one in research.

Second, there will undoubtedly be a day when you walk into the lab alone for the first time and something has gone very wrong. Perhaps some equipment that should be firmly attached to the experiment is instead not so firmly attached to the ground. Perhaps a wire has shorted and all your magnets have stopped working, or the air conditioning has malfunctioned and everything is far too hot. Whatever the case may be, you’ll be on your own, you’ll be unsure of yourself, and without crisis management skills things are only going to get worse. Experiments at undergraduate level are very unlikely to fail in a manner more dire than a student having to retake some data, but in research labs equipment is bigger, more specialised and, in many cases, more dangerous. Being able to identify the problem, assess whether or not you are capable of fixing it yourself, and act on these assessments is vital for the sake of the experiment and, in rare cases, for your safety. These are not always intuitive skills, and they often aren’t covered at undergraduate level, meaning PhD students may find themselves underprepared for such situations. The reality of experimental labs is that equipment will go wrong at some point, sometimes for good reasons and sometimes just to spite you. Power supplies will trip, vacuum pumps will give up the ghost, and anything that’s water cooled is most definitely going to flood the lab. The trick is not to panic, to remember that your supervisor chose you for good reason, to notify the right people, and to roll your sleeves up and get mopping.

Finally, by the time you get to PhD level, the experiments you’ll be working on will be cutting edge and may even be at the forefront of your respective field. With specialisation like this comes the need for custom-built equipment, and as a budding independent researcher you might be expected to design and build it yourself. Being able to drill and tap holes, cut and file metal, and assemble complex systems from mismatched components are all skills you will likely need in the lab. Undergraduate physics courses certainly provide some good experience here; students are expected to take some initiative in designing and assembling their experiments. Wiring basic circuits, aligning interferometers and constructing pendulums are common in undergrad labs, and some courses offer complete experimental freedom by the end of third year. These courses in particular are excellent training for research at higher levels, but are usually too limited by time, funding and course learning objectives to go into workshop skills in any detail. Unless you took design at school, or are well versed in DIY, you’ll be at a disadvantage in a research lab. Thankfully, these skills can be picked up elsewhere and are fairly intuitive; a summer internship, for instance, is an excellent opportunity to learn them in a laboratory setting.

What’s the take-home message? Being an experimental physicist isn’t just about knowing your equations, being able to code or remembering whether the cat in the box is alive or dead. Doing research well requires skills from all walks of life, and the most unexpected of things may be worth knowing. So, if you’re considering a PhD in experimental physics, next time your mate buys a flat-pack wardrobe, offer to help them assemble it. Next time all the lights go out in the house, find the fuse box and see if you can track down the dodgy component. And next time someone in your halls forgets to close the door on the washing machine and floods the kitchen, grab a mop and bucket and get stuck in. These are the makings of good housemates and good physicists, and you never know when you might need one of these skills in a pinch.

– Sam (1st year PhD student)

The inspiration for this post.

Precision microwave spectroscopy of Ps

Positronium is an atom which is half matter and half antimatter. Its energy structure is very well described by the theory of quantum electrodynamics (QED) [1]. QED essentially describes how photons (light particles) and matter interact. If you imagine an electron in the vicinity of another electron, the classical picture says that the electric field of one electron exerts a repulsive force on the other, and vice versa. In the QED picture, the two electrons interact by exchanging photons. Beyond electrons, the protons and neutrons in a nucleus are held together by the strong force via the exchange of another type of particle, the gluon.

The aim of precision spectroscopy is to carry out new measurements that can be compared to theoretical calculations. Measuring these energy structures provides a way of verifying the predictions made by theory. Atomic systems like hydrogen and helium are widely studied for testing QED, but the presence of hadrons like the proton introduces complications. One does not need to worry about such complications in positronium, as there are no hadrons involved.

Within positronium there are many energy intervals one could choose to measure. The separation of the triplet and singlet states of the n=1 level (i.e., the hyperfine structure interval), the 1S-2S interval, and the 2S-2P intervals are all excellent candidates [2]. The theoretical values of these three intervals have been calculated very precisely, and all have been measured before. The last measurement of the fine structure intervals [3], however, is now over 25 years old and is much less precise than theory. Because Ps is so well described by QED theory, any disagreement between calculations and measurements could be an indication of new physics. To be sensitive to new physics, the experiments have to be done with precision comparable to that of the calculations.

Figure 1: The Ps n=2 fine structure.

Recently we measured the 2S-2P fine structure intervals of positronium [4]. There are three transitions within this branch, and in this post we’ll talk about the ν0 transition (23S1 – 23P0), which is resonant around 18 GHz. This transition, along with the other two, is illustrated in figure 1. Initially, the Ps atoms (which are formed in the 1S state) have to be excited to the 2S state. This can be done in several ways (the direct 1S-2S transition with one photon is not allowed), and we will cover our method in detail in another blog post soon. For now, let’s assume that the atoms are already in the 2S state. These atoms then fly into a waveguide, where the microwaves drive them to the 2P state (via stimulated emission) as shown in figure 2. The atom then emits a 243 nm photon and drops down to the 1S state, where it will annihilate into gamma-rays after 142 ns (remember the lifetime of Ps in the ground state?). If nothing happens in the waveguide, the 23S1 atom will annihilate after about 1 μs.

Figure 2: (a) Target, laser, and waveguide schematic. (b) Placement of detectors, D1-D4, around the chamber.

We placed gamma-ray detectors (D1-D4) around the target chamber, as shown in figure 2, to monitor the annihilation signal. The detector signal was then used to quantify the microwave-radiation-induced signal, Sγ. We scanned over a frequency range to generate a lineshape that describes the transition; the centre gives the resonance frequency and the width is set by the lifetime of the excited state. A Lorentzian function was fitted to extract this information; for the example shown in figure 3, the centroid and line width are 18500.65 MHz and 60 MHz respectively. The centroid is slightly offset from theory because the lineshape was measured in a magnetic field, which introduces a Zeeman shift. The measured width of 60 MHz is slightly wider than the expected 50 MHz; the extra broadening is due to the finite time the atoms take to travel through the waveguide.

Figure 3: Measured 23S1 – 23P0 transition lineshape with theoretical resonance frequency of 18498.25 MHz.
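As a rough illustration of this kind of analysis (not our actual fitting code), the sketch below generates an ideal Lorentzian with the centroid and width quoted above, recovers them with simple peak and half-maximum estimates in place of a real fit, and checks the expected ~50 MHz natural linewidth against the Ps 2P radiative lifetime (~3.19 ns); all numbers are illustrative.

```python
import math

def lorentzian(f, f0, fwhm, amp):
    """Lorentzian lineshape with full width at half maximum `fwhm` (MHz)."""
    return amp * (fwhm / 2) ** 2 / ((f - f0) ** 2 + (fwhm / 2) ** 2)

# Illustrative numbers from the text: centroid 18500.65 MHz, width 60 MHz
F0, FWHM = 18500.65, 60.0
freqs = [18300.0 + 2.0 * i for i in range(201)]    # 18.3-18.7 GHz scan, 2 MHz steps
signal = [lorentzian(f, F0, FWHM, 1.0) for f in freqs]

# Crude stand-ins for a least-squares fit: the peak sample gives the
# centroid, the half-maximum crossings give the width.
peak = max(signal)
centroid = freqs[signal.index(peak)]
in_band = [f for f, s in zip(freqs, signal) if s >= peak / 2]
width = in_band[-1] - in_band[0]

# Natural linewidth expected from the 2P radiative lifetime (~3.19 ns for Ps):
tau_2p = 3.19e-9                                    # seconds
natural_width_mhz = 1 / (2 * math.pi * tau_2p) / 1e6  # ~50 MHz
```

The 1/(2πτ) check is the reason a ~50 MHz width is "expected" for this transition; anything much wider points to additional broadening such as the transit-time effect.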

Similar lineshapes were measured over a range of magnetic fields in order to account for the Zeeman shift. These data are shown in figure 4. Extrapolating to zero field with a quadratic function allows us to obtain the field-free resonance frequency, free of Zeeman shifts. However, all of the measured points, including the extrapolated value, are offset from the theoretical calculation (dashed curve) by about 3 MHz. There are a few systematic effects to consider; the largest of them is the Doppler shift arising from laser and waveguide misalignment, which amounts to 215 kHz. Our result, compared with theory and previous measurements, is shown in figure 5 and disagrees with theory by 2.77 MHz (4.5 standard deviations). While the precision has improved by over a factor of 6, the disagreement with theory is significant.
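Because a model of the form ν(B) = ν0 + cB² is linear in its parameters, the zero-field extrapolation can be done with a closed-form least-squares fit. A minimal sketch with made-up (B, ν) points (the numbers are illustrative, not our data):

```python
# Hypothetical (B [T], frequency [MHz]) points following a quadratic Zeeman shift
NU0, C = 18501.02, 25.0   # illustrative zero-field frequency and curvature
data = [(0.05 * i, NU0 + C * (0.05 * i) ** 2) for i in range(1, 6)]

# Least squares for nu = nu0 + c*x with x = B^2 (linear in nu0 and c),
# solved directly from the normal equations.
n = len(data)
sx = sum(b ** 2 for b, _ in data)
sxx = sum(b ** 4 for b, _ in data)
sy = sum(nu for _, nu in data)
sxy = sum(b ** 2 * nu for b, nu in data)
det = n * sxx - sx * sx
c_fit = (n * sxy - sx * sy) / det
nu0_fit = (sy * sxx - sx * sxy) / det   # field-free resonance frequency
```

In practice the fit would also propagate the per-point uncertainties into an error bar on the extrapolated frequency, but the structure is the same.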

Precision measurements can be vulnerable to interference effects, and there are two main types that can cause lineshape distortion and/or shifts in the line centre. Whenever the radiation emitted from the excited state (the 2P state in our case) is monitored to generate the signal, the emitted radiation can interfere with the incident/driving radiation (microwaves in our case) [5]. This leads to a shift in the resonance frequency, but we are not sensitive to this kind of effect because we monitor the gamma-rays instead of the 243 nm emitted radiation (figure 1). Another type of interference arises from the presence of neighbouring resonant states [6], such as the two other 2P states in the Ps fine structure. The further apart the states are, the smaller the interference effect; we expect a shift of 200 kHz in our lineshape. This is, however, over 10 times smaller than the observed shift and therefore cannot be the reason for the disagreement.

We have also measured the two other transitions in the fine structure, and they reveal interesting new features that were not previously seen. These additional data will provide a broader picture that will help us explain the shift we see in this transition. We’ll discuss those results in the next blog post.

[1] Karshenboim, S.G., Precision Study of Positronium: Testing Bound State QED Theory. Int. J. Mod. Phys. A, 19 (2004)

[2] Rich, A., Recent Experimental Advances in Positronium Research. Rev. Mod. Phys., 53 (1981)

[3] Hagena, D., Ley, R., Weil, D., Werth, G., Arnold, W. and Schneider, H., Precise Measurement of n=2 Positronium Fine-Structure Intervals. Phys. Rev. Lett., 71 (1993)

[4] Gurung, L., Babij, T. J., Hogan, S. D. and Cassidy, D. B., Precision Microwave Spectroscopy of the n=2 Positronium Fine Structure. Phys. Rev. Lett., 125 (2020)

[5] Beyer, A., Maisenbacher, L., Matveev, A., Pohl, R., Khabarova, K., Grinin, A., Lamour, T., Yost, D.C., Hänsch, T.W., Kolachevsky, N. and Udem, T., The Rydberg Constant and Proton Size From Atomic Hydrogen. Science, 358 (2017)

[6] Horbatsch, M. and Hessels, E.A., Shifts From a Distant Neighboring Resonance. Phys. Rev. A, 82 (2010)

Shifts of positronium energy levels in MgO

The observation of a positronium (Ps) Bose-Einstein condensate (BEC) has been a long-sought-after achievement in Ps physics. A BEC forms when an ensemble of identical particles collects into the lowest energy state, i.e. as the temperature approaches 0 K. A Ps BEC could be a source of a highly directional, monoenergetic positronium beam, with applications in precision spectroscopy and gravitational interferometry. While BECs of ordinary atoms have been produced through evaporative or laser cooling, the propensity of Ps to annihilate complicates this endeavour. One way to circumvent this problem was proposed by Platzman and Mills, in which Ps atoms are made and trapped inside a cavity [1]. However, a smaller cavity, which has a higher cooling rate, also has a higher annihilation rate. One may use larger cavities at the expense of slower cooling, which can then perhaps be compensated by laser cooling.

Fig. 1: Excitation of Ps in vacuum (a) and inside MgO (b).

Following the discussion about producing Ps in an MgO target, we decided to examine the effects of the cavities on the positronium with laser spectroscopy [2]. Ps atoms produced by MgO were excited either after being emitted into vacuum [Fig. 1(a)] or while they were still inside the powder [Fig. 1(b)]. The former tells us the energy levels in vacuum, which are well known, while the latter can tell us about the Ps-cavity interaction. The 1S->2P transition for both excitation cases is shown below in Fig. 2. Excitation in vacuum gives a single curve centred around 243 nm as expected, with a Doppler width that implies a kinetic energy of 350 meV. The reason for such a high energy and the absence of cooling was discussed in a previous blog post, which you may read here. But when the Ps is probed inside the MgO, multiple peaks are visible. The redshifted peak (peak on the right, red dotted line) is due to Ps that are already in vacuum and moving towards the lasers when excited. Another peak, for atoms in vacuum moving away from the lasers (blue dotted), is also present. The third peak (blue dashed) arises from the excitation of Ps inside the MgO, and is shifted away from the vacuum resonance by 0.2 nm, or 1000 GHz. This 1000 GHz shift is too large to be a confinement effect, as the MgO cavities are too large. A similar measurement with silica was done previously by the Riverside Ps group [3], but not for Rydberg states as presented here.

Fig. 2: 1S-2P transition in Ps in vacuum (a) and inside MgO (b).
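The conversion from a measured Doppler width to a kinetic energy, as quoted above for the vacuum peak, can be sketched as follows. This assumes a Gaussian profile whose rms frequency width σ maps to the 1D rms velocity via v = cσ/f0, and an isotropic velocity distribution; it is a simplification of the real analysis.

```python
C = 2.99792458e8              # speed of light, m/s
M_PS = 2 * 9.1093837015e-31   # Ps mass = two electron masses, kg
E_CHARGE = 1.602176634e-19    # J per eV

def kinetic_energy_mev(sigma_hz, wavelength_m=243e-9):
    """Mean kinetic energy (meV) from the rms Doppler width of the
    1S-2P line, assuming an isotropic velocity distribution."""
    f0 = C / wavelength_m
    v_rms_1d = C * sigma_hz / f0           # rms speed along the laser axis
    ke_joule = 1.5 * M_PS * v_rms_1d ** 2  # <KE> = (3/2) m v_rms,1d^2
    return ke_joule / E_CHARGE * 1e3

# An rms width of roughly 590 GHz corresponds to the ~350 meV quoted above
ke = kinetic_energy_mev(590e9)
```

Because Ps is so light, even modest kinetic energies translate into Doppler widths of hundreds of GHz on this transition.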

Similar to the data in Fig. 2, excitation to Rydberg states (2P -> 11S/11D) was also measured, and is shown below in Fig. 3. Again, the vacuum excitation results in only one peak at the expected resonance [Fig. 3(a)], but excitation inside MgO [Fig. 3(b)] yields both a blueshifted and a redshifted peak. Two observations are apparent from the data: the Rydberg atoms are able to leave the target even after several collisions, and, more surprisingly, the Rydberg states (for n = 10-17) are all shifted by the same amount, as shown in Fig. 4.

Fig. 3: 2P-11S/11D transition in Ps in vacuum (a) and inside MgO (b).


Fig. 4: (a) Shift of Rydberg transitions and (b) 2P- nS/nD transition for n= 10-17 for vacuum and MgO excitation.

Rydberg atoms, being more sensitive to interactions, should show some n dependence, contrary to what was observed. This appears to be because MgO has photoluminescence (PL) absorption bands in the 240 nm range [4], which overlap with the 1S-2P transition wavelength in Ps and couple to it resonantly. There are no PL bands that overlap with the Rydberg energies; thus, those states are unaffected. The 1S and 2P energy levels in MgO are therefore shifted, with the higher states unshifted, as shown in Fig. 5. A detailed dipole-dipole treatment of the interaction of Ps and MgO crystals is outlined in the paper. These PL bands, which are also present in silica, may have caused the shift seen previously, but without further Rydberg data we cannot tell whether individual levels are shifted.

Fig. 5: Representation of Ps energy levels in (a) vacuum and (b) inside MgO powder inferred from the data.

Thus, PL absorption bands in other materials considered for Ps confinement can also give rise to energy shifts and can affect the control of the atoms, which is necessary for high-density Ps experiments. It may be possible to carefully engineer a material which can confine Ps but does not exhibit these characteristics.

[1] P. M. Platzman and A. P. Mills, Jr. PRB, 49, 454 (1994)

[2] L. Gurung, B. S. Cooper, S. D. Hogan, and D. B. Cassidy, PRA, 101, 012701 (2020)

[3] D. B. Cassidy, M. W. J. Bromley, L. C. Cota, T. H. Hisakado, H. W. K. Tom and A. P. Mills, Jr. PRL, 106, 023401 (2011)

[4] C. Chizallet, G. Costentin, H. Lauron-Pernot, J-M. Krafft, M. Che, F. Delbecq, and P. Sautet, J. Phys. Chem. C, 112, 19710 (2008)


Towards trapping of Rydberg Ps

Atoms and molecules in high Rydberg states can possess large electric dipole moments, which can be exploited to control their motion using external electric fields. Ever since the demonstration of the deflection of Rydberg Kr atoms using electric fields, the field of atom optics has been widely investigated. Different techniques such as decelerators [EPJ Techniques and Instrumentation (2016) 3:2], mirrors [Phys. Rev. Lett. 97, 033002 (2006)], and traps [Phys. Rev. Lett. 100, 043001 (2008)] have been developed; however, applications to positronium (Ps) are relatively recent, and only a handful of Ps atom optics experiments have been performed to date [Phys. Rev. Lett. 117, 073202 (2016), Phys. Rev. A 95, 053409 (2017), Phys. Rev. Lett. 119, 053201 (2017)]. Several Ps experiments could benefit from a cold Ps source but, due to the low mass of Ps, speeds are often on the order of 100 km/s when Ps is produced in targets like mesoporous silica. The angular distribution of Ps emitted from mesoporous silica is also broad, so if a well-collimated beam is required, a significant loss in Ps number is unavoidable. The use of time-varying electric fields generated by a multi-ring structure can offer a solution to these limitations by capturing more of the emitted Ps and allowing manipulation via their electric dipole moments. Recently, we developed such a multi-ring structure and used it to guide Rydberg Ps atoms with inhomogeneous electric fields; in principle, it can also be used to decelerate and trap Ps.

The experimental set-up of the target and electrostatic guide, including the excitation lasers and the gamma-ray detectors, is shown in Figure 1. The Ps-producing target is labelled T and the 11 electrodes of the guide are labelled 1, 2, 3, …, 11. Ground-state Ps atoms, produced from the silica target, are excited to the n=13 Rydberg level in a two-step process by the UV (1S –> 2P) and IR (2P –> nD/nS) lasers in the region between T and electrode 1 (E1). Once excited to Rydberg states, Ps atoms have lifetimes on the order of μs, significantly longer than the triplet ground-state lifetime of 142 ns. A 500 V/cm electric field, generated by biasing T and E1, Stark splits the Rydberg spectrum [Phys. Rev. Lett. 114, 173001 (2015)], enabling us to select specific states in the Stark manifold by tuning the IR laser wavelength. States selected with a positive Stark shift (shorter wavelengths) are called low-field seekers (LFS), because they move toward regions of low field. States selected with a negative Stark shift (longer wavelengths) are called high-field seekers (HFS), because they move toward regions of high field. Any atoms that are able to traverse the guide, LFS or HFS, can then be detected by detectors D2, D3, D4, and D5.
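For a sense of scale, the linear Stark shift of a hydrogen-like state is ΔE = (3/2) n k e a F, where k is the manifold index and a is the Bohr radius, doubled for Ps because its reduced mass is half the electron mass. The sketch below is an order-of-magnitude estimate only, ignoring fine structure and higher-order Stark terms:

```python
E = 1.602176634e-19     # elementary charge, C
A0 = 5.29177210903e-11  # Bohr radius, m
H = 6.62607015e-34      # Planck constant, J s
A_PS = 2 * A0           # Ps "Bohr radius" (reduced mass m_e/2)

def stark_shift_ghz(n, k, field_v_per_m):
    """Linear Stark shift (GHz) of the |n, k> state in a field F.

    k runs from -(n-1) to n-1; positive k gives low-field-seeking states,
    negative k gives high-field seekers."""
    return 1.5 * n * k * E * A_PS * field_v_per_m / H / 1e9

# Outermost LFS state of the n = 13 manifold in the 500 V/cm excitation field
shift = stark_shift_ghz(13, 12, 5e4)   # a few hundred GHz
```

Shifts of this size are what make it possible to address individual Stark states simply by tuning the IR laser wavelength.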

Figure 1: Schematic of the Ps production and excitation chamber. Inset shows the target electrode, T, and the guide assembly comprising 11 electrodes.

Figure 2: Simulated trajectories of atoms in HFS states (red solid lines) and LFS states (white solid lines) of the n=13 Stark manifold, with the guide “off” [(a) and (b)] and “on” [(c) and (d)].

According to our simulations (see Figure 2), when the guide is not in operation (all electrodes grounded at 0 V), the diverging cloud of Ps atoms from the target does not experience any external forces, as expected, and the atoms therefore annihilate after colliding with the guide electrodes, with only a few forward-collimated atoms reaching the end of the guide. Applying a voltage to alternate electrodes (3, 5, 7, etc.) turns the guide on. In this case the HFS states are deflected toward regions of high electric field (i.e. away from the axis of the guide) while the LFS states are confined around the axis and transported along the length of the guide. These LFS atoms diverge after exiting the guide and some annihilate in collisions with the surrounding vacuum chamber. Detectors D2, D3, D4, and D5, as shown in Figure 1, record these annihilation events as time-of-flight (TOF) spectra, the results of which are shown in figure 3 below.

Figure 3: TOF spectra/annihilation events seen by detectors D2, D3, D4, and D5 for Ps atoms in (a) LFS and (b) HFS states, with the guide on and off.

Neither the LFS nor the HFS states are detected at the end of the guide when the guide is off, but an increased count rate is seen when the guide is on and the IR laser is tuned to excite atoms to the LFS states.

To implement this device as a trap, we only have to reconfigure the voltages applied to the electrodes. For guiding, the odd-numbered electrodes of the guide have a voltage applied (-4 kV) while the even-numbered electrodes are kept at ground potential (0 V). If a positive voltage of order +1 kV is applied to E2 and E4 while E3 is at -4 kV, a region of electric field minimum is created at the centre of E3 (the voltages applied to the electrodes depend on the choice of n, due to the field ionisation limit). Once the atoms have entered the trap, the LFS atoms should be confined in this region until the voltage applied to E4 is lowered, opening the trap gate for the atoms to be guided towards the detectors. If atoms are trapped and then guided out, the TOF spectrum, like that in Figure 3(a), is expected to show a peak in events delayed by a time consistent with the trapping duration.

In pursuit of cold Ps

Experiments involving positronium (Ps), such as laser excitation for Rydberg state production, precision spectroscopy and positronium chemistry, can benefit from a slow Ps source. The technical word for slow is cold. As the mass of positronium is twice the mass of an electron, even room-temperature Ps has speeds on the order of 100 km/s. In comparison, atomic beams produced from supersonic jets have speeds of roughly 2 km/s. Atom optics techniques, such as decelerating the Ps with electric fields, may be able to slow the atoms down, but at the cost of a loss in intensity. An ideal alternative would be to fabricate a target that intrinsically produces slow positronium, and efficiently too.

In most of our experiments, Ps is produced from a mesoporous silica target grown on a silicon substrate. Ps, initially with an energy of 1 eV, is emitted from the same side as the positrons enter the converter (“reflection geometry” production), and one can generally obtain Ps with a final energy of approximately 50 meV. Once produced inside granulated powders, such as silica or magnesium oxide (MgO), Ps will lose some of its energy in collisions with the surrounding internal surfaces before being emitted into vacuum. Generally, more collisions mean greater energy loss, and therefore colder Ps. A silica target was fabricated so that the Ps traverses the whole thickness of the target and is emitted from the opposite side after making the maximum number of collisions with the internal surfaces (transmission geometry). However, the internal spacing of this target was too large to efficiently cool Ps below 200 meV (~200 km/s), which is still hotter than Ps emitted from mesoporous silica. Ps can also be produced from MgO, albeit with a slightly higher initial energy of around 4 eV. Previous studies have indicated that this 4 eV is reduced to around 300 meV in a 6 μm thick MgO layer. So, that gave us an idea: if we make a thicker MgO layer to increase collisional cooling, Ps atoms could be produced with energies lower than 50 meV (Positronium emission from MgO smoke nanocrystals).

A scanning electron microscope (SEM) image of smoked MgO.

To make the MgO target, we set fire to a strip of magnesium ribbon and collected the smoke on a suitable substrate (tinted goggles and a fume cupboard are necessary, as you might imagine). This procedure produces perfect cubic crystals, in a 30 μm MgO layer, with a wide size distribution, as evident from the SEM image above. The substrate of our choice was a 50 nm silicon nitride film, as it allows us to implant the positrons into the SiN side, form positronium at the SiN-MgO boundary, and have it emitted into vacuum from the opposite side in transmission geometry. In this configuration, Ps atoms travel through the full 30 μm of MgO, making the maximum number of collisions. For comparison, we can rotate the target by 180° so the positrons hit the MgO side (“reflection geometry”) and the Ps makes the smallest number of collisions, the positrons being implanted to a relatively shallow depth of only 100 nm. We expect colder Ps to be produced in the former case than in the latter, due to the distance travelled by the Ps (30 μm vs 100 nm). These two orientations are shown below, including the positron pulse and the excitation lasers. VT and VG refer to the voltages applied to the target and grid electrode to control the positron energy and the electric field.

Experimental setup showing the target utilised in reflection and transmission geometry.

Once positronium atoms are emitted in either of the two set-ups, they are excited with UV and IR lasers to measure the Doppler profile, which gives an indication of their kinetic energy, KE. In the “reflection geometry” configuration, VT controls how deep into the MgO layer the positrons are implanted. Higher voltages result in deeper implantation, hence a larger number of collisions made by the Ps on its way out, and colder Ps. In the “transmission” set-up, however, around 2 kV on VT is enough to make the positrons pass through the SiN and form Ps in the MgO. We found that over the 2-5 kV range we measured, the positron penetration depths into the MgO layer in the transmission set-up were essentially the same, meaning that the Ps always travelled through 30 μm of MgO in this configuration. The kinetic energies obtained from the Doppler profiles are shown below.

Left: An example of a Doppler profile. Right: Kinetic energies obtained from the Doppler profiles.

Surprisingly, Ps appears to be emitted with the same energy regardless of the number of collisions it makes with the smoked MgO surfaces. This is inconsistent with the idea that Ps is formed in MgO with 4 eV of energy and loses it through collisions. If that were true, we would expect the KE in the reflection set-up, with VT at 1 kV, to be very close to 4 eV. However, the Ps kinetic energy is always around 400 meV. This is because Ps in smoked MgO is intrinsically produced with around 300 meV of energy, with a wide distribution due to the grain sizes, and because of the large open volumes between the MgO crystals, cooling is rather inefficient.

This rules out smoked MgO as a candidate for cold Ps production, but there are many experiments where the Ps production and interaction region needs to be separated from the positron beamline. In such experiments, a simple and easily produced MgO target like the one discussed here, in the “transmission geometry” configuration, can be employed; for example, a scattering gas cell could be installed without interfering with the incoming positrons.

We have also noticed some interesting effects when the lasers are fired into the MgO rather than travelling just in front of its surface. More on that in the next blog post.

State-selective field ionization of Rydberg positronium

All atomic systems, including positronium (Ps), can be excited to states with high principal quantum number n using lasers; these are called Rydberg states. Atoms in such states exhibit interesting features that can be exploited in a variety of ways. For example, Rydberg states have very long radiative lifetimes (on the order of 10 µs for our experiments). This is a particularly useful feature in Ps because when it is excited to large-n states, the overlap between the electron and positron wavefunctions is suppressed. The self-annihilation lifetime then becomes so long in comparison to the fluorescence lifetime that the effective lifetime of Ps in a Rydberg state is simply the radiative lifetime of that state: most Rydberg Ps atoms will decay back to the ground state before self-annihilating [Phys. Rev. A 93, 062513 (2016)]. The large distance between the positron and electron centers of charge in certain Rydberg states also means that they exhibit large static electric dipole moments, and thus their motion can be manipulated by applying forces with inhomogeneous electric fields [Phys. Rev. Lett. 117, 073202 (2016); Phys. Rev. A 95, 053409 (2017)].
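As a rough plausibility check on those lifetime claims, here is a sketch assuming the standard 1/n³ scaling of the electron-positron contact density for triplet nS states; the exact numbers depend on the angular momentum and are not from our data:

```python
# Why Rydberg Ps effectively lives for its radiative lifetime: the
# triplet (ortho-Ps) annihilation lifetime of 142 ns grows roughly as
# n^3 for nS states, quickly dwarfing a ~10 us radiative lifetime.
TAU_OPS = 142e-9   # ground-state ortho-Ps annihilation lifetime, s

def annihilation_lifetime(n):
    """Approximate triplet nS self-annihilation lifetime, ~142 ns * n^3."""
    return TAU_OPS * n**3

tau_ann = annihilation_lifetime(20)   # ~1 ms
tau_rad = 10e-6                       # radiative lifetime scale quoted above
print(tau_ann / tau_rad)              # annihilation is ~100x slower
```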

In addition to these properties, Rydberg atoms have high tunnel ionization rates at relatively low electric fields. This property forms the basis of state-selective detection by electric field ionization. In a recent series of experiments, we have demonstrated state-selective field ionization of positronium atoms in Rydberg states (n = 18–25) in both static and time-varying (pulsed) electric fields.
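For orientation, the fields involved can be estimated from the classical over-the-barrier (saddle-point) model. This is only an order-of-magnitude sketch (real tunnel ionization thresholds depend on the substate and on how the field is switched), with the Ps reduced mass of one half put in by hand:

```python
# Classical saddle-point ionization field: F = mu^2 / (16 n^4) in atomic
# units, where mu is the reduced mass in electron masses (1/2 for Ps).
F_ATOMIC = 5.142e9   # atomic unit of electric field, V/cm

def ionization_field(n, mu=0.5):
    """Rough classical ionization field (V/cm) for Rydberg state n."""
    return F_ATOMIC * mu**2 / (16 * n**4)

for n in (18, 20, 25):
    print(n, round(ionization_field(n)))   # a few hundred V/cm
```

On this estimate the n = 18–25 states used here ionize at a few hundred V/cm, so a field of a couple of kV/cm is comfortably above threshold for all of them.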

The set-up for this experiment is shown below. The target (T) holds a SiO2 film that produces Ps when positrons are implanted into it. The first grid (G1) allows us to control the electric field in the laser excitation region, and a second grid (G2) provides a well-defined ionization region. The ionizing field is produced either by applying a constant voltage to Grid 2 (the static field configuration) or by ramping the potential on Grid 2 (the pulsed field configuration).

Figure 1: Experimental arrangement showing separated laser excitation and field ionization regions.

In this experiment we detect the annihilation gamma rays from:

  • the direct annihilation of positronium

  • annihilations that occur when positronium crashes into the grids and chamber walls

  • annihilations that occur after the positron, released via the tunnel ionization process, crashes into the grids or chamber walls

We subtract the time-dependent gamma-ray signal measured when ground-state Ps traverses the apparatus from the signal detected from Rydberg atoms when an electric field is applied in the ionizing region. This background-subtracted signal tells us where in time there is an excess or deficit of annihilation radiation compared to background (this SSPALS method is described further in NIM A 828, 163 (2016) and here).


Static Electric Field Configuration

In this version of the experiment, we let the excited positronium atoms fly into the ionization region, where they experience a constant electric field. When only a very small electric field (~0 V/cm) is applied in the ionizing region, the excited atoms fly unimpeded through the chamber, as shown in the animation below. Consequently, the background-subtracted spectrum is identical to what we expect for a typical Rydberg signal (see the figure below for n = 20). There is a lack of annihilation events early on (between 0 and 160 ns) compared to the background (ground-state) signal, which manifests itself as a sharp negative peak. This is because the lifetime of Rydberg Ps is orders of magnitude longer than the ground-state lifetime.

Later on, at ~200 ns, we observe a bump that arises from an excess of Rydberg atoms crashing into Grid 2. Finally, we see a long positive tail due to long-lived Rydberg atoms crashing into the chamber walls.


Figure 2: Trajectory simulation of Rydberg Ps atoms travelling through the ~0 V/cm electric field region (left panel) and the measured background-subtracted gamma-ray flux; the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).

On the other hand, when the applied electric field is large enough, all atoms are quickly ionized as they enter the ionizing region. Correspondingly, the ionization signal in this case is large and positive early on (again between 0 and 160 ns). Furthermore, instead of a long positive tail, we now have a long negative tail due to the lack of annihilations later in the experiment (since most, if not all, atoms have already been ionized). Importantly, since in this case field ionization occurs almost instantaneously as the atoms enter the ionization region, the shape of the initial ionization peak is a function of the velocity distribution of the atoms in the direction of propagation of the beam.



Figure 3: Trajectory simulation of Rydberg Ps atoms travelling through the ~2.6 kV/cm electric field region (left panel) and the measured background-subtracted gamma-ray flux; the shaded region indicates the average time during which Ps atoms travel from the target to Grid 2 (right panel).

We measure these annihilation signal profiles over a range of fields and calculate the signal parameter S. A positive value of S implies that there is an excess of ionization occurring within the ionization region, whereas a negative S means that there is a deficit of ionization within the region with respect to background. Therefore, if S is approximately equal to 0%, only half of the Ps atoms are being ionized. A plot of the experimental S parameter for different applied fields and for different n states is shown below.

Figure 4: Electric field scans for a range of n states from 18 to 25, showing that at low electric fields none of the states ionize (hence the negative values of S), and that as the electric field is increased, different n states can be observed to have different ionizing electric field thresholds.

It is clear that different n states can be distinguished using these characteristic S curves. However, the main drawback of this method is that both the background-subtracted profiles and the S curves are convolved with the velocity profile of the beam of Rydberg Ps atoms. This drawback can be eliminated by performing pulsed field ionization.
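The extraction of an S-like parameter from two time-binned spectra can be sketched as follows. The window edges, normalization, and toy spectra are all invented for illustration; the definition used in the actual analysis may differ in detail:

```python
# Hedged sketch of an S-like parameter: integrate the background-
# subtracted annihilation counts over the window when atoms are inside
# the ionization region, normalized to the total background counts.
def s_parameter(rydberg, ground, times, t_lo, t_hi):
    """Percent excess (S > 0) or deficit (S < 0) of annihilation counts
    in the window [t_lo, t_hi], relative to total background counts."""
    diff_in_window = sum(r - g for r, g, t in zip(rydberg, ground, times)
                         if t_lo <= t <= t_hi)
    return 100.0 * diff_in_window / sum(ground)

# Toy spectra: an excess of counts inside the window gives S > 0.
times = list(range(10))
ground = [10.0] * 10
rydberg = [10.0] * 10
rydberg[3] += 5.0   # extra annihilations inside the window
print(s_parameter(rydberg, ground, times, 2, 5))   # -> 5.0
```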

Pulsed Electric Field Configuration

We have also demonstrated the possibility of distinguishing different Rydberg states of positronium by ionization in a ramped electric field. The set-up is the same as in the static field scenario, but now, instead of fixing the potential on Grid 2, it is decreased from 3 kV to 0 kV, increasing the field from 0 V/cm to ~1.8 kV/cm (the initial 3 kV is necessary to help cool down the Ps [New J. Phys. 17, 043059 (2015)]).

The advantage of performing state selective field ionization this way is that we can allow most of the atoms to enter the ionization region before pulsing the field. This eliminates the dependence of the signal on the velocity distribution of the atoms and thus the signal is only dependent on the ionization rates of that Rydberg state in the increasing electric field.
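A toy model of why the ramp separates states in time (the classical threshold formula and the ramp rate are both assumptions for illustration):

```python
# Each n-state ionizes roughly when the ramped field crosses its
# threshold, so higher n (lower threshold) peaks earlier in the ramp.
F_ATOMIC = 5.142e9   # atomic unit of electric field, V/cm

def threshold(n, mu=0.5):
    """Classical saddle-point ionization field (V/cm); mu = 1/2 for Ps."""
    return F_ATOMIC * mu**2 / (16 * n**4)

def ionization_time(n, ramp_rate=500.0):
    """Time (us) at which state n ionizes for a linear ramp starting at
    zero field, F(t) = ramp_rate * t, with ramp_rate in (V/cm)/us."""
    return threshold(n) / ramp_rate

for n in (18, 20, 25):
    print(n, ionization_time(n))   # higher n ionizes earlier
```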

Below is a plot of our results with a comparison to simulations (dashed lines). We see broad agreement between simulation and experiment, and we are able to distinguish between different Rydberg states according to where in time the ionization peak occurs. This means that we should be able to detect a change in an initially prepared Rydberg population due to some process such as microwave-induced transitions.

Figure 5: Pulsed-field ionization signal as a function of electric field for a range of n states.

The development of state-selective ionization techniques for Rydberg Ps opens the door to measuring the effect of blackbody transitions on an initially prepared Rydberg population, and provides a methodology for detecting transitions between nearby Rydberg levels in Ps, which could also be used in electric field cancellation methods to generate circular Rydberg states of Ps.

Ethics in research: what a student should know.

Ethics in research is something everyone is expected to understand and respect, from beginning students to established professors. However, very rarely are such things discussed formally; in effect, one is expected simply to know all about these matters by appealing to “common sense” and to the examples set by mentors. This is not an ideal situation, as evidenced by numerous high-profile cases of scientific misconduct. The purpose of this post is to briefly mention some important aspects of ethical research practice, mostly to encourage further thought in students before they have a chance to be influenced by whatever environment they happen to be working in.

In some cases “misconduct” is easy to spot, and is then (usually) universally condemned. For example, making up or manipulating data is something we all know is wrong and for which there can never be a legitimate defense. At the same time, some students might see nothing wrong in leaving off a few outlier data points that clearly do not fit with the others, or in failing to refer to some earlier paper. It is not always obvious what the right (that is to say, ethical) thing to do might be. What harm is there, after all, in putting an esteemed professor as a co-author on a paper, even though they have provided no real contribution? (Incidentally, merely providing lab space, equipment or even funding doesn’t, or shouldn’t, count as such.) Well, there is harm in gift authorship; it devalues the work put in by the real authors, and makes it possible for a certain level of authority to guarantee future (perhaps unwarranted) success. And does it really matter if an extreme outlier is left off of an otherwise nice looking graph? The answer is yes, absolutely.

At the heart of most unethical behavior in science, or misconduct, is the nature of truth, and the much revered “scientific method”. Never mind that there is no such thing, or that the way science is actually done is not even remotely similar to the romantic notions so often associated with this very human activity. Nevertheless, there is a very strong element of trust implicit in the scientific endeavor. When we read scientific papers we have no choice but to assume that the experiments were carried out as reported, and that the data presented is what was actually observed. The entire scientific enterprise depends on this kind of trust; if we had to try and sort through potentially fraudulent reports we would never get anywhere. For this reason, trust in science is extremely important, and one could say that, regardless of any mechanisms of self-correction that might exist in science, and the concomitant inevitability of discovery of fraud, we are almost compelled to trust scientists if we wish science to be done in a reasonable manner. By the same token, those scientists who choose to betray this trust pollute not only their own research (and personal character), but all science. The enormity of this offense should not be underestimated. In particular, students should be aware of the fact that being a dishonest scientist is oxymoronic.

What is misconduct? A formal definition of scientific misconduct of the sort that would be necessary in legal action is an extremely difficult thing to produce. There have been many attempts, most of which are centered on the ideas of FFP: that is, fabrication, falsification and plagiarism. These are things that we can easily understand and, most of the time, identify when they arise. However, things can (and invariably do) get complicated in real-world situations, especially when people try to gauge intention, or when the available information is incomplete.

A definition of scientific misconduct, as it pertains to conducting experiments or observations to generate data, was given by Charles Babbage (Babbage 1830) that still has a great deal of relevance today. Babbage is perhaps best known for his “difference engine”, a mechanical computer he designed to take the drudgery out of performing certain calculations. He was a well-respected scientist who worked in many areas, Lucasian Professor of Mathematics at Cambridge University and a Fellow of the Royal Society. His litany of offences, which is frequently cited in discussions of scientific misconduct, was categorized into four classes.

Babbage’s taxonomy of fraud:

1.) Hoaxing is not fraud in the usual sense, in that it is generally intended as a joke, or to embarrass or attack individuals (or institutions). A hoax is more like a prank than an attempt to benefit the hoaxer in the scientific realm, and it is not often that a hoax remains undisclosed for long. Hoaxes are not generally intended to promote the perpetrator, who will usually prefer to remain anonymous, at least until the jig is up. A hoax is far more likely to involve someone dressing up as Bigfoot, or lowering a Frisbee outside a window on a fishing line, than fabricating I–V curves for electronic devices that have never been built.

A famous and amusing example is the so-called Sokal hoax (Sokal 1996). In 1996 Alan Sokal, a physicist at New York University, submitted a paper to the journal “Social Text”. This journal publishes work in the field of “postmodernism” or cultural studies, an area in which there had been many critiques of science based on unscientific criteria. For example, the idea that scientific objectivity is a myth, and that certain cultural elements (e.g., “male privilege” or western societal norms) are determining factors in the underlying nature of scientific discovery. This attitude vexed many physicists, leading to the so-called “science wars” (Ross 1996), and was the impetus for Sokal’s hoax. The paper he submitted to Social Text was entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. This article was deliberately written in impenetrable language (as was often the case for articles in Social Text and other similar publications) but was in fact pure nonsense (as was also often the case for articles in Social Text and other similar publications). The paper was designed with the intention of impressing the editors, not only with the stature of the author (Sokal was then a respected physicist, and seems, at the time of writing some 16 years later, still to be one) but also by appealing to their ideological positions. Sokal revealed the hoax immediately after his paper was published; he never had any intention of allowing it to stand, but rather wanted to critique the journal. As one might expect, a long series of arguments arose from this hoax, although the main effect was to tempt more actual scientists into commenting on the “science wars” that were already occurring. In this sense Sokal may have done more harm than good with his hoax, as the inclusion of scientists in the “debate” served only to legitimize what had before been largely the province of people with no training in, or, frequently, understanding of, science.

2.) Forging is just what it sounds like: simply inventing data and passing it off as real. Of course, one might do the same in a hoax, but the forger seeks not to fool his audience for a while, later to reveal the hoax for his or her own purposes, but to perpetrate the deception permanently. The forger will imitate data and make it look as real as possible, and then steadfastly stick to the claim that it is genuine. Often such claims can stand up to scrutiny, since if the forging is done well the bogus data is in no way obviously so. This unfortunate fact means that we cannot know for sure how much forging is really going on. However, the forger does not have any new information and so cannot genuinely advance science. They may appear to do so, by confirming theoretical predictions, for example, but such predictions are not always correct. Nevertheless, the forger must have a good knowledge of their field, and enough understanding of what is expected, to be able to produce convincing data out of thin air. On rare occasions it is even possible that a forger might guess at some reality, later proved, and really get away with it. However, the fleeting glory of a major discovery seems to be somewhat addictive to those so inclined, and some famous forgers have started out small, only to allow themselves to be tempted into such outrageous claims that from the outside it seems as though they wanted to get caught.

A well-known case in which exactly this happened is the “Schön affair”, which has been covered in detail in the excellent book “Plastic Fantastic” (Reich 2009). Jan Hendrik Schön was a young and apparently brilliant physicist employed at Bell Labs in the early 2000s. His area of expertise was organic semiconductors, which held (and continue to hold) great promise for the production of cheap and small electronic devices. Schön published a large number of papers in the most high-profile journals (including many in Science and Nature) that purported to show organic crystals exhibiting remarkable electronic properties, such as superconductivity, the fractional quantum Hall effect, lasing, single-molecule switching and so on. At one point he was publishing almost one paper per week, which is impressive just from the point of view of being able to write at such a rate; most scientists will probably spend one or two months just writing a paper, never mind the time taken to actually get the data or design and build an experiment. Before the fraud was exposed, one professor at Princeton University, who had been asked to recommend Schön for a faculty position, refused to do so, and stated that he should be investigated for misconduct based solely on his extreme publication rate (Reich 2006).

Schön’s incredible success seemed to be predicated on his unique ability to grow high-quality crystals (he was described on one occasion as having “magic hands”). This skill eluded other researchers, and nobody was able to reproduce any of Schön’s results. As he piled spectacular discovery upon spectacular discovery, suspicion grew. Eventually it was noticed that in some of his published data Schön had used the same curves, right down to the noise, to represent completely distinct phenomena in completely different samples (which, in fact, had never really existed). After he tried to claim that this was simply an accident, that he had merely mislabeled some graphs and used the wrong data, more anomalies were revealed. The management at Bell Labs started an investigation, and Schön continued to make excuses, stating (amazingly, considering the quality of the real scientists who worked at this world-leading institution) that he had not kept records in a notebook, and that he had erased his primary data to make space on his computer’s hard drive!

None of these excuses were convincing, and indeed such poor research practices should themselves probably be grounds for dismissal, even in the absence of fraud. Fraud was not absent, however; it was prevalent, and on a truly astounding scale. Schön had invented all of his data wholesale, had never grown any high-quality crystals, and had lied to many of his colleagues, at Bell Labs and elsewhere. He was fired, and his PhD was revoked by the University of Konstanz in Germany for “dishonorable conduct”. There was no suggestion that the rather pedestrian research Schön had completed for his doctorate was fraudulent, but his actual fraud was so egregious that his university felt he had dishonored not just himself and their institution, but science itself, and as such did not deserve his doctorate.

It seems obvious in retrospect that Schön could never have expected to get away with his forging indefinitely. Even if he had been a more careful forger, and had made up distinct data sets for all of his papers, the failure of others to reproduce his work would have eventually revealed his conduct. Even before he was exposed, pressure was mounting on Schön to explain to other scientists how he made his crystals, or to provide them with samples. He had numerous, and very well respected, co-authors who themselves were becoming nervous about the increasingly frustrated complaints from the other groups unable to replicate his experiments. It was only a matter of time before it all came out, regardless of the manifestly fake published data.

This unfortunate affair raises some interesting questions about misconduct in science. For example, Schön’s many co-authors seem to have failed in their responsibilities. Although they were found innocent of any actual complicity in the fraud, as co-authors on so many, and on such astounding, papers, it seems clear that they should have paid more attention. Some of the co-authors were highly accomplished scientists, while Schön himself was relatively junior, which only compounds the apparent lack of oversight. Indeed, it is likely that the pressure of working for such distinguished people, and at such a well-respected institution as Bell Labs, was what first tempted Schön into his shameful acts.

No doubt winning numerous prizes, being offered tenured positions at an Ivy League school (Princeton; not bad if you can get an offer) and generally being perceived as the new boy-wonder physicist played a role in the seemingly insane need to keep claiming ever more incredible results. It is hard to escape the conclusion that some sort of mental breakdown contributed to this folly. However, another pathological element was also likely at play, which is that Schön was sure that his speculations would eventually be borne out by other researchers.

If this had actually happened, and if he had not been a poor forger (or, perhaps, an overworked forger, with barely enough time to invent the required data), he might have enjoyed many years of continued prosperity. As it happens, some of the fake results published by Schön have in fact been shown to be roughly correct; sometimes the theorists guess right. Ultimately, though, forging will usually be found out, and the more that is forged the faster this will happen. The only way to forge and get away with it is to know in advance what nature will do, and if you know that, the forging is redundant.

3.) Trimming is the practice of throwing away some data points that seem to be at odds with the rest of the measurements, or at least are not as close to the “right” answer as expected. In this case the misconduct is primarily designed to make the experimenter seem more competent, although a poor knowledge of statistics might mean that it has the opposite effect; outliers sometimes seem out of place, but to those with a good knowledge of statistics their absence may be quite suspicious. Because it can seem so innocuous, trimming is probably much more common than forging. When he invents data from nothing, the forger has already committed himself to what he must surely know is an immoral and heinous action; whatever justification he might employ cannot shield him from that knowledge. The trimmer, on the other hand, might feel that his scientific integrity is unaffected, and that all he is doing is making his true data a little more presentable, so as to better explain his truth. A trimmer who believes this is mistaken: he may well be less offensive than the forger, but his actions are still unconscionable, still constitute fraud, and may equally well pervert the scientific record, even when the essential facts of the measurement remain. For example, by trimming a data set to look “nicer” you might significantly alter the statistical significance of some slight effect. Indeed, in this way you can create a non-existent effect, or destroy evidence for a real one; without the truth of nature’s reference there is no way to know the difference.

The trimmer has a great advantage over the forger, insofar as he is not trying to second guess nature, but rather obtains the answer from nature in the form of his primary measurements, and then attempts to make his methodology in so doing appear to be “better” than it really was. The trimmed data, therefore, will likely not give a very different answer than the untrimmed set, but will seem to have been obtained with greater skill and precision.

There are no well-known examples of trimming because, by its very nature, it is hard to detect. Who will know if a few data points are omitted from an otherwise legitimate data set? What indicators might allow us to distinguish a careful experimenter from a carefree trimmer? Even Newton has been accused of trimming some of his observations of lunar orbits to obtain better agreement with his theory of gravitation (Westfall 1996). Objectively, even for the amoral, trimming is quite pointless when one considers that, if found out, the trimmer will suffer substantial damage to his reputation, whereas if it is not found out, the advantage to that reputation is slight.

4.) Cooking refers to selecting data that agrees with the hypothesis you are trying to prove and rejecting other data, which may be equally valid, but doesn’t. In other words, cherry picking. This is similar to trimming inasmuch as it involves selecting from real data (i.e., not forged), but with no real justification for doing so other than one’s own presupposition about what the “right” answer is. It differs from trimming in that the cook will take lots of data and then select those that serve his purpose, while the trimmer will simply make the best out of whatever data he has. This is lying, just as forging is, but with the limitation that nature herself has at least suggested which lies to tell.

Cooking also carries with it an ill-defined aspect of experimental science: the “art” of the experimentalist. It is certainly true that experimental physics (and no doubt other branches of science too) contains some element of artistry. Setting up a complicated apparatus and making it work is an underappreciated but crucial skill. When taking data with a temperamental system it is routine to throw some of it out, but only if one can justify doing so in terms of the operation of the device. Almost all experiments include some tuning-up time in which data are neglected because of various experimental problems. However, once all of these problems have been ironed out and the data collection begins “for real”, one has to be very careful about excluding any measurements. If some obvious problem attends (a power supply failed, a necessary cable was unplugged, batboy was seen chewing on the beamline, whatever it may be) then there is no moral difficulty associated with ditching the resulting data. When this sort of thing happens (and it will), the best thing to do is throw out all of the data and start again.

The situation gets more complicated if an intermittent problem arises during an experiment that affects only some of the data. In that case you might want to select the “good” data and get rid of the “bad”. Unfortunately, all of the resulting data will then be “ugly”. This is not a good idea, because it borders on cooking. It is much better to throw away all potentially suspect data and start again. In data collection, as in life, even the appearance of impropriety can be a problem, regardless of whether propriety has in fact been maintained. It may be the case that there is an easy way to identify data points that are problematic (for example, the detector output voltage might be zero sometimes and normal at others), but there remains the possibility that the “good” data is also affected by the underlying experimental problem leading to the “bad” data, only in a less obvious manner. The best thing to do in this situation will depend on many factors. It is not always possible to repeat experiments, and you might have high confidence in some data because of the exact nature of your apparatus, and so on. Even so, “when in doubt, throw it out” is the safest policy.

Cooking is a little like forging, but with an insurance policy. That is, since the data is not completely made up there is a better chance that it does really correspond to nature, and therefore you will not be found out later because you are (sort of) measuring something real. This is not necessarily the case, however, and will depend on the skill of the cook. By selecting data from a large array it is usually possible to produce “support” for any conclusion one wants, and as a result the cook who seeks to prove an incorrect hypothesis loses his edge over the forger.

A well-known case illustrates that it is not so simple to decouple experimental methodology from cookery. Robert Millikan is famous for his oil drop experiment, in which he was able to obtain a very accurate value for the charge of the electron, work for which he was awarded the Nobel Prize. This work has been the subject of some controversy, however, since the historian of science Gerald Holton (Holton 1978) went through Millikan’s notebooks and revealed that not all of the available data had been used, even though Millikan specifically said that it had. Since then there have been numerous arguments about whether the selection of data was simply the shrewd action of an expert experimentalist, or an act of dishonest cooking. As discussed by numerous authors (e.g., Segerstråle 1995), it is neither one nor the other, and the situation is more nuanced. The reality is that his answer was remarkably accurate, as we now know. The question, then, is: did Millikan know what to expect? If he did not, then accusations of cooking seem unfounded, since he would have had to know what he was trying to “demonstrate”. If he got the right answer without knowing it in advance, he must have been doing good science, right? WRONG! If Millikan was cooking (or even doing a little trimming) and somehow got the right answer, this does not in any way mitigate the action. Since we cannot say whether he really did do any cooking, one might believe that getting the right answer implies that he was being honest; as it turns out, he would have obtained an even more accurate answer if he had used all of his data (Judson 2004).

Furthermore, the question of whether Millikan already knew the “right” answer is meaningless. The right answer is the one we get from an experiment, and we only know it to the extent that we can trust the experiment. An army of theoreticians can be (have been, perhaps now are) wrong. Dirac predicted that the electron g factor would be exactly 2, and it almost is. Knowing only this, the cook or trimmer might be sorely tempted to nudge the data to be exactly 2.0000 and not 2.0023, but that would be a terrible mistake, one that would miss out on a fantastic discovery. One can only hope that somewhere in the world there is at least one cook who allowed a Nobel Prize (or some major discovery) to slip through his dishonest fingers, and cannot even tell anyone how close he came. Did Millikan knowingly select data to make his measurements look more accurate? There is no way to know for sure.

Pathological science is a term coined by Irving Langmuir, the well-known physicist. He defined it as science in which the observed effects are barely detectable, and the signals cannot be increased, but are nevertheless claimed to be measured with great accuracy; pathological science also frequently involves unusual theories that seem to contradict what is known (Langmuir 1968). However, pathological science is not usually an act of deliberate fraud, but rather one of self-delusion. This is why the effect in question is always at the very limit of what can be detected, for in this case all kinds of mechanisms can be used (even subconsciously) to convince oneself that the effect really is real. In this context, then, it is worth asking: does the cook necessarily know what he is doing? That is, when one wishes to believe something very strongly, perhaps it becomes possible to fool one’s own brain. This kind of self-delusion is more common than you might think, and happens in everyday life all the time. Although we cannot directly choose what we believe is true, when we don’t know one way or the other, our psychology makes it easy for us to accept the explanation that causes the least internal conflict (a conflict known as cognitive dissonance).

When a researcher is so deluded as to engage in pathological science, it is difficult to categorize this activity as one of misconduct. The forger has to know what he is doing. The trimmer or cook generally tries to make his data look the way he wants, but if he does so without real justification then it is wrong. By making justifications that seem reasonable but are not, one could conceivably fool oneself into thinking that there was nothing improper about cooking or trimming. Certainly, this will sometimes really be the case, which only makes it easier to come up with such justifications.

In many regards the pathological scientist is not so different from one who is simply wrong. The most famous example of this might be the cold fusion debacle (Seife 2009). Pons and Fleischmann claimed to see evidence for room-temperature fusion reactions and amazed the world with their press conferences (not so much with their data). Their fusion claims later turned out to be false, as shown by Kelvin Lynn, Mario Gai, and others (Gai et al. 1989). However, the effect they claimed to see was also reported by some other researchers, and as a result many reasons were promulgated as to why one experiment might see it and another might not. Then, when it was pointed out that fusion ought to produce neutron emission, and no neutrons were observed, it became necessary to modify the theory so as to make the neutrons disappear. This was classic pathological science, exhibiting just about every symptom laid out by Langmuir.

Plagiarism was not explicitly discussed by Babbage but does feature prominently in modern definitions of misconduct. Directly copying other people's work is relatively rare among professional scientists, perhaps because it is so likely to be discovered. There have been some cases in which people have taken papers published in relatively obscure journals and submitted them, almost verbatim, to other, even more obscure, journals. Since only relatively unknown work is susceptible to this sort of plagiarism it does little damage to the scientific record, but it is no less egregious an act for that. Certainly the psychology of the “scientist” who would engage in such actions is no less damaged than that of a Schön.

In one case a plagiarist, Andrzej Jendryczko, tried to pass off the work of others as his own by translating it (mostly from English) into Polish and publishing direct copies in specialized Polish medical journals under his own name (e.g., Judson 2004). This may have seemed like a safe strategy, but a well-read Polish-American physician had no trouble tracking down all the offending papers using the internet. Indeed, wholesale plagiarism is now very easy to uncover via online databases and a multitude of specialized software applications. For the student, plagiarism should seem like a very risky business indeed. With minimal effort it can be found out, and no matter how clever the potential plagiarist might be, Google is usually able to do better with sheer force of computing power and some very sophisticated search algorithms. As is often the case, this sort of cheating does not even pass a rudimentary cost-benefit analysis[e]. It is another inverse of Pascal's wager (the idea that it is better to believe in God than not, since the eternity of paradise always beats any temporary secular position) inasmuch as the actual gain from ripping off some (necessarily) obscure article found online is virtually nil, whereas exposure as a fraud for doing so will contaminate everything you have ever done, or will ever do, in science. How can this ever be worth even considering?
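The detection software mentioned above typically works by comparing overlapping word sequences (shingles) between documents and flagging pairs with high similarity. A minimal sketch of the idea, with invented example texts (real tools add normalization, indexing, and web-scale search on top of this):

```python
def shingles(text, n=5):
    """Return the set of overlapping n-word sequences (shingles) in text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets: |A & B| / |A | B|."""
    sa, sb = shingles(a), shingles(b)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Invented texts for illustration only.
original = "the electron g factor was predicted by dirac to be exactly two"
copied = "the electron g factor was predicted by dirac to be exactly two"
unrelated = "pathological science involves effects at the limit of detectability"

print(jaccard(original, copied))     # identical texts -> 1.0
print(jaccard(original, unrelated))  # no shared 5-word phrases -> 0.0
```

Even a translated copy, once rendered back into English by machine translation, tends to retain enough distinctive phrasing to score far above chance, which is how cases like Jendryczko's are uncovered so easily.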

Plagiarism is not as simple as copying the work of others; even providing incomplete references in a paper could be construed as a form of plagiarism, insofar as one is not providing the appropriate background, thereby perhaps implying more originality than really exists. References are a very important part of scientific publishing, and not just because it is important to give due credit to the work you might be building upon. A well-referenced paper not only shows that you have a good understanding and knowledge of your field, but also makes it possible for the reader to properly follow the research trail, which puts everything in context and helps enormously in understanding the work[f].

Stealing the ideas of others, if not their words, is of course also a form of plagiarism. This is harder to prove, which may be why it is most often discussed in the context of grant proposals. This creates a rather unfortunate set of circumstances in which researchers must submit their very best ideas to some agency (in the hope of obtaining funding), which promptly sends them directly to their most immediate competitors for evaluation! After all, it is your competitors who are best placed to evaluate your work. In most cases, if an obvious conflict exists, it is possible to specify individuals who should not be consulted as referees, although this is more common in the publication of papers than in proposal reviews. For researchers in niche fields, where there may not be much rivalry, this is not going to be a common problem. For researchers in direct competition, however, things might not be so benign. There have in fact been some well-publicized examples of this sort of plagiarism, but it is probably not very common, because grant proposals rarely contain ideas that have not been discussed, to some degree, in the literature, and in those rare cases where this isn't so, it will probably be obvious if such an idea is purloined. Also, let us not forget, most scientists are really not so unscrupulous. Thus, for this to happen you would need an unlikely confluence: an unusually good idea sent to an abnormally unprincipled referee who happened to be in just the right position to make use of it.


References & Further Reading  

Misconduct in science is a vast area of study, and the brief synopsis we have given here is just the tip of the iceberg. The use of Babbage's taxonomy is commonplace, and in-depth discussions of every aspect of it can be found in many books. The following is a small selection. The book by Judson is particularly recommended: it is fairly recent and goes into just the right amount of detail in some important case studies.

Judson, H. F. (2004). The Great Betrayal: Fraud in Science. Houghton Mifflin Harcourt.

Alfredo, K. and Hart, H. (2010). "The University and the Responsible Conduct of Research: Who is Responsible for What?" Science and Engineering Ethics.

Sterken, C. (2011). "Writing a Scientific Paper III. Ethical Aspects." EAS Publications Series 50, 173.

Reich, E. S. (2009). Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World. Palgrave Macmillan.

Broad, W. and Wade, N. (1983). Betrayers of the Truth: Fraud and Deceit in the Halls of Science. Simon & Schuster.

Babbage, C. (1830). Reflections on the Decline of Science in England. Reprinted by August M. Kelley, New York (1970).

Holton, G. (1978). "Subelectrons, Presuppositions and the Millikan–Ehrenhaft Dispute." In: Holton, G., ed., The Scientific Imagination. Cambridge University Press, Cambridge, UK.

Langmuir, I. (1989). "Pathological Science." Physics Today 42: 36–48. Reprinted from the original in General Electric Research and Development Center Report 86-C-035, April 1968.

Ross, A., ed. (1996). Science Wars. Duke University Press.

Segerstråle, U. (1995). "Good to the Last Drop? Millikan Stories as 'Canned' Pedagogy." Science and Engineering Ethics 1, 197.

Sokal, A. D. and Bricmont, J. (1998). Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science. Picador USA, New York.

Westfall, R. S. (1994). The Life of Isaac Newton. Cambridge University Press.

Seife, C. (2009). Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking. Penguin Books.

Gai, M. et al. (1989). "Upper Limits on Neutron and Gamma-Ray Emission from Cold Fusion." Nature 340, 29.



[a] What is this reasonable manner? We just mean that if one cannot trust scientific reports to be honest representations of real work undertaken in good faith, then standing on the shoulders of our colleagues (who may or may not be giants) becomes pointless: everything has to be independently verified, and the whole scientific endeavor becomes exponentially more difficult.

[b] Reich’s book does not name the professor in question, but clearly he or she had a good grasp on how much can be done, even by a highly motivated genius.

[c] This is a particularly ludicrous claim since to a scientist experimental data is a highly valuable commodity, whereas disc space is trivially available: the idea that one would delete primary data just to make space on a hard drive is like burning down one’s house in order to have a bigger garden.

[d] Another way to increase your chances of getting away with forgery is to fake data that nobody cares about. However, this obviously has an even less attractive cost-benefit ratio.

[e] We obviously do not mean to suggest that some clever form of plagiarism that can escape detection does pass a cost-benefit analysis, and is therefore a good idea: the uncounted cost in any fraud, even a trivial one, is the absolute destruction of one’s scientific integrity. We just mean that this sort of thing, which is never worthwhile, is even more foolish if there isn’t some actual (albeit temporary) advantage.

[f] On a more mundane level, a referee who has not been cited, but should have been, is less likely to look favorably upon your paper than he or she might otherwise have done. This also applies to grant proposals, so it is in your own interests to make sure your references are correct and proper.

12th International Workshop on Positron and Positronium Chemistry

Our research group was recently represented by Dr David Cassidy at the 12th International Workshop on Positron and Positronium Chemistry (PPC12), which took place between the 28th of August and the 1st of September in Lublin, Poland. The main focus of this meeting was the interaction of positrons and positronium with various materials and atoms, including polymers, soft matter, surface states and more.

David presented our recent advances in producing a beam of Rydberg positronium atoms (PRL 117, 073202 & PRA 95, 053409) and the prospects of using such techniques to form the yet-unobserved positron-atom bound states (PRA 93, 052712).

You can have a look at the abstracts on the conference website. We are grateful to the organizers for this opportunity and their hard work.

Rydberg Positronium-electron scattering

In our last experiment, we showed how a curved quadrupole guide can be used to transport Rydberg positronium (Ps*) around a 45-degree bend (blue path in the picture below) into a region away from the positron beamline (yellow path below), which can then be used as a scattering region (PRA 95, 053409). In addition, this new off-axis setup eliminated detection difficulties that were encountered in our previous straight-guide experiments.

The scattering chamber setup is shown in the picture below on the right-hand side. Guided Ps* atoms emerging from the end of the quadrupole are introduced into a scattering region where they collide with electrons thermionically emitted (orange path) from a molybdenum filament. This region is surrounded by gamma-ray detectors coupled to photomultiplier tubes (PMTs), which record annihilation events inside the chamber. However, these PMTs are not by themselves sufficient to determine what kind of interaction has taken place: we can only conclude that a Ps atom annihilated if an event is detected, but not whether there were intermediate steps (such as charge exchange between the free electrons and the valence electron) before annihilation. To overcome this, we used a micro-channel plate (MCP), which has the benefit of being able to detect only certain charged particles (i.e., differentiating between electrons and positrons) depending on the electric fields applied in the detection region.

When Ps* collides with free electrons a few different things can happen: the electron can break the Ps* up into an electron and a positron (ionisation: e^{-} + Ps^{*} \rightarrow e^{-} + e^{+} + e^{-}), or the positron from the Ps* can be stolen away by the incoming free electron, resulting in a new short-lived Ps atom that annihilates before hitting the MCP, thus giving a diminished signal (charge exchange: e^{-} + Ps^{*} \rightarrow Ps + e^{-}). Additionally, positronium and an electron can form the negative positronium ion (Ps¯), the bound state of a Ps atom and an orbiting electron; this ion has been shown to be quite unstable, having a lifetime of <1 ns.

Above you can see some experimental time-of-flight (TOF) data suggesting that we may have detected a charge exchange process. The blue curve in the left panel is the TOF signal of guided positronium atoms hitting the MCP detector; the Ps* atoms have a mean flight time of around 10 \mus. When we let the electron beam into the scattering region we observe a suppression of the TOF spectrum (green curve). This is due to the electric fields in front of the MCP detector, which should only allow positrons to be detected. The PMT signals for both cases (shown in the right panel) are the same, indicating that this drop in MCP signal is in fact electron related and not a result of less positronium being guided. All that remains now is to determine the nature of this signal, but nonetheless it is intriguing how readily Ps* interacts with electrons (and perhaps other particles too), especially considering that a direct method of producing antihydrogen is via charge exchange collisions between Ps* and antiprotons (Ps^{*} + \overline{p} \rightarrow \overline{H} + e^{-}).