A team of physicists from the UW Precision Muon Physics Group has been part of a larger international effort to probe the boundaries of quantum physics. The first results of the Muon g-2 experiment, released earlier this month, revealed a discrepancy between the way a muon should behave in theory and how it behaves in real life.
A muon is a fundamental subatomic particle (like an electron), meaning it cannot be broken down into smaller fragments of matter. While sharing many properties with the electron, the muon exists for only about two millionths of a second before decaying, and it is roughly 200 times more massive than the electron.
This mass difference is key, because a particle’s sensitivity to certain external influences scales with the square of its mass. Since the muon is about 200 times more massive than an electron, it is roughly 40,000 times more sensitive to any possible new effects. Thus, the more accurately we can measure its properties, the more we can learn about quantum mechanics.
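Written out, the scaling in the paragraph above is simple arithmetic (the precise muon-to-electron mass ratio is closer to 207; the article rounds it to 200):

```latex
\left(\frac{m_\mu}{m_e}\right)^{2} \approx 200^{2} = 40{,}000
```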
“When we make this measurement of the muon, it's actually a direct probe [because] it's actually interacting with all of the particles and forces that we might not even know about,” Brynn MacCoy, a physics Ph.D. student involved with the research group, said. “So the muons might know about something we don’t.”
The ongoing experiment aims to precisely measure a property of the muon called the g factor, which describes how a muon’s internal magnet “wobbles.” The experiment’s name refers to “g minus 2”: in the simplest theory, g would be exactly 2, and the interesting physics lives in the tiny deviation from that value.
“You can think of it as sort of like a spinning top,” Joshua Labounty, another physics Ph.D. candidate involved in the experiment, said. “You spin a top, [and] after a while it starts to wobble around, spin and precess. Muons are doing that same sort of motion, just at a super super subatomic scale.”
However, the theoretically calculated g factor of a muon does not align with the experimentally determined g factor. The g-2 experiment aims to measure the muon’s g factor as precisely as possible in order to determine if this discrepancy is a statistical error or if it is evidence of as-yet undiscovered physics.
The theoretical value of the muon’s g factor can be calculated from the Standard Model of particle physics, a theory describing all known particles in the universe and three of the four known fundamental forces. The discrepancy between theory and measurement could be due to unknown particles or forces not yet included in the Standard Model.
“There's a lot of things that are missing from the Standard Model,” MacCoy said. “It's not surprising that there could be physics beyond [it]. We actually expect there to be physics beyond the Standard Model.”
The Standard Model does not include gravity, and it only accounts for about 5% of the matter and energy in the universe, the rest being unknown substances we call dark matter and dark energy. For years, these questions have led optimistic scientists to search for a “theory of everything”: a single framework that unites quantum physics with Einstein’s theory of relativity.
“It's accurate to say that at the scale that we can measure, our current model of gravity is correct,” Hannah Binney, a physics Ph.D. student involved in the experiment, said.
“And for the most part, at the scale we can measure, our current model of particle physics is correct. It's just that our brain, our physics brain, says ‘Surely we should be able to connect them.’”
A wide range of theories attempts to explain the g factor anomaly, but the goal of the experiment isn’t to prove or disprove any particular one. Rather, the more precise the measurement the physicists can make, the more theories they are able to rule out.
“There's a big parameter space that you start off with, and every new experiment sort of erases a little bit of that parameter space that you could still have a particle in until you finally can maybe shrink down to one area that’s still left and say, ‘OK, there should be a particle here,’” Labounty said.
To confirm the existence of a new particle, the data would have to show a significance of five standard deviations, the typical benchmark scientists use to accept a new discovery. At that level, there is only about a one in 3.5 million chance that the result is a statistical fluke, reflecting extremely high confidence in its accuracy.
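As a rough sketch (a back-of-the-envelope check using the standard normal distribution and Python’s standard library, not the collaboration’s own analysis), the fluke probability behind the five-sigma threshold can be reproduced:

```python
import math

def tail_probability(sigma: float) -> float:
    """One-sided probability that a standard normal fluctuation exceeds `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Five standard deviations: the conventional discovery threshold.
p5 = tail_probability(5.0)
print(f"5 sigma: about 1 in {1 / p5:,.0f}")  # roughly 1 in 3.5 million
```

The one in 3.5 million figure quoted above is just this tail probability of a five-standard-deviation fluctuation.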
The newly released data corresponded to a one in 40,000 chance that the results are a fluke. However, this first release represents only about 6% of the data planned to be collected over the course of the experiment. It also comes from the team’s first run in 2018; they are currently on their fourth.
In order to reduce their statistical uncertainty, the researchers plan to continue taking more measurements and analyzing the data from their subsequent experimental runs.
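Assuming purely statistical, uncorrelated uncertainties (a simplification; real experiments also carry systematic uncertainties), the statistical error of an averaged measurement shrinks as one over the square root of the amount of data, so the full dataset should sharpen the measurement considerably:

```python
import math

first_release_fraction = 0.06  # the first result used ~6% of the planned data
# Statistical uncertainty scales as 1 / sqrt(N), so collecting all of the
# planned data shrinks the error by sqrt(1 / fraction) relative to this release.
improvement = math.sqrt(1 / first_release_fraction)
print(f"Full dataset would shrink the statistical error by ~{improvement:.1f}x")
```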
“It's a very unique experience to actually have the prospect of pushing the boundaries of physics and possibly finding something new,” Binney said.
Reach reporter Sarah Kahle at email@example.com. Twitter: @karahsahle