symmetry

Shaking the dark matter paradigm

A theory about gravity challenges our understanding of the universe.

Illustration: alternatives for explaining gravity, with dark matter on the left and no dark matter on the right.

For millennia, humans held a beautiful belief. Our planet, Earth, was at the center of a vast universe, and all of the planets and stars and celestial bodies revolved around us. This geocentric model, though it had floated around since the 6th century BCE, was written in its most elegant form by Claudius Ptolemy in 140 AD.

When this model encountered problems, such as the retrograde motions of planets, scientists reworked the model to fit the data, introducing constructs such as epicycles, mini orbits superimposed on a planet’s main orbit.

It wasn’t until 1543, 1400 years later, that Nicolaus Copernicus set in motion a paradigm shift that would open the way to centuries of new discoveries. According to Copernicus’ radical theory, Earth was not the center of the universe but simply one of several planets orbiting the sun.

But even as evidence that we lived in a heliocentric system piled up and scientists such as Galileo Galilei perfected the model, society held onto the belief that the entire universe orbited around Earth until the early 19th century.

To Erik Verlinde, a theoretical physicist at the University of Amsterdam, the idea of dark matter is the geocentric model of the 21st century. 

“What people are doing now is allowing themselves free parameters to sort of fit the data,” Verlinde says. “You end up with a theory that has so many free parameters it's hard to disprove.”

Dark matter, an as-yet-undetected form of matter that scientists believe makes up more than a quarter of the mass and energy of the universe, was first theorized when scientists noticed that stars at the outer edges of galaxies and galaxy clusters were moving much faster than Newton’s theory of gravity said they should. Ever since, the prevailing explanation has been that those galaxies must contain missing mass, in the form of dark matter, holding the fast-moving stars in place.
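The mismatch is easy to state in Newtonian terms. A star orbiting at radius $r$, outside most of a galaxy’s visible mass $M$, should circle at roughly

$$v(r) \approx \sqrt{\frac{GM}{r}},$$

so orbital speeds should fall off as $1/\sqrt{r}$ far from the galactic center. Instead, measured rotation curves stay nearly flat at large radii, as if unseen mass were supplying extra pull.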

But Verlinde has come up with a set of equations that explains these galactic rotation curves by viewing gravity as an emergent force — a result of the quantum structure of space.

The idea is related to dark energy, which scientists think is the cause for the accelerating expansion of our universe. Verlinde thinks that what we see as dark matter is actually just interactions between galaxies and the sea of dark energy in which they’re embedded.

“Before I started working on this I never had any doubts about dark matter,” Verlinde says. “But then I started thinking about this link with quantum information and I had the idea that dark energy is carrying more of the dynamics of reality than we realize.”

Verlinde is not the first theorist to come up with an alternative to dark matter. Many feel that his theory echoes the sentiment of physicist Mordehai Milgrom’s equations of “modified Newtonian dynamics,” or MOND. Just as Einstein modified Newton’s laws of gravity to fit the scale of planets and solar systems, MOND modifies Newton’s laws to fit the extremely small accelerations characteristic of galaxies and galaxy clusters.
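In Milgrom’s original formulation, the modification kicks in below a tiny characteristic acceleration $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m/s^2}$: the effective acceleration matches the Newtonian value $a_N$ when $a_N \gg a_0$, but approaches

$$a \approx \sqrt{a_N\, a_0} \quad (a_N \ll a_0).$$

Setting this equal to the centripetal acceleration $v^2/r$ of a circular orbit gives $v^4 = GM a_0$, a speed independent of radius, which is exactly the flat rotation-curve behavior described above.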

Verlinde, however, makes the distinction that he’s not deriving the equations of MOND; rather, he’s deriving what he calls a “scaling relation,” a volume effect of space-time that only becomes important at large distances.

Stacy McGaugh, an astrophysicist at Case Western Reserve University, says that while MOND is primarily the notion that the effective force of gravity changes with acceleration, Verlinde’s ideas are more of a ground-up theoretical work.

“He's trying to look at the structure of space-time and see if what we call gravity is a property that emerges from that quantum structure, hence the name emergent gravity,” McGaugh says. “In principle, it's a very different approach that doesn't necessarily know about MOND or have anything to do with it.”

One of the appealing things about Verlinde’s theory, McGaugh says, is that it naturally gives rise to MOND-like behavior in a way that “just happens.”

“That's the sort of thing that one looks for,” McGaugh says. “There needs to be some basis of why MOND happens, and this theory might provide it.”

Verlinde’s ideas have been greeted with a fair amount of skepticism in the scientific community, in part because, according to Kathryn Zurek, a theoretical physicist at the US Department of Energy’s Lawrence Berkeley National Laboratory, his theory leaves a lot unexplained. 

“Theories of modified gravity only attempt to explain galactic rotation curves [those fast-moving stars],” Zurek says. “As evidence for dark matter, that's only one very small part of the puzzle. Dark matter explains a whole host of observations from the time of the cosmic microwave background when the universe was just a few hundred thousand years old through structure formation all the way until today.”

 

Illustration by Ana Kova

Zurek says that in order for scientists to start lending weight to his claims, Verlinde needs to build the case around his theory and show that it accommodates a wider range of observations. But, she says, this doesn’t mean that his ideas should be written off.

“One should always poke at the paradigm,” Zurek says, “even though the cold dark matter paradigm has been hugely successful, you always want to check your assumptions and make sure that you're not missing something that could be the tip of the iceberg.”

McGaugh had a similar crisis of faith in dark matter when he was working on an experiment wherein MOND’s predictions were the only ones that came true in his data. He had been making observations of low-surface-brightness galaxies, wherein stars are spread more thinly than in galaxies such as the Milky Way, where the stars are crowded relatively close together.

McGaugh says his results did not make sense to him in the standard dark matter context, and it turned out that the properties that were confusing to him had already been predicted by Milgrom’s MOND equations in 1983, before people had even begun to take seriously the idea of low-surface-brightness galaxies.

Although McGaugh’s experience caused him to question the existence of dark matter and instead argue for MOND, others have not been so quick to join the cause.

“We subscribe to a particular paradigm and most of our thinking is constrained within the boundaries of that paradigm, and so if we encounter a situation in which there is a need for a paradigm shift, it's really hard to think outside that box,” McGaugh says. “Even though we have rules for the game as to when you're supposed to change your mind and we all in principle try to follow that, in practice there are some changes of mind that are so big that we just can't overcome our human nature.”

McGaugh says that many of his colleagues believe that there’s so much evidence for dark matter that it’s a waste of time to consider any alternatives. But he believes that all of the evidence for dark matter might instead be an indication that there is something wrong with our theories of gravity. 

“I kind of worry that we are headed into another thousand years of dark epicycles,” McGaugh says.

But according to Zurek, if MOND came up with anywhere near the evidence that has been amassed for the dark matter paradigm, people would be flocking to it. The problem, she says, is that at the moment MOND just does not come anywhere near to passing the number of tests that cold dark matter has. She adds that there are some physicists who argue that the cold dark matter paradigm can, in fact, explain those observations about low-surface-brightness galaxies.

Recently, Case Western held a workshop that gathered representatives from different communities, including those working on dark matter models, to discuss dwarf galaxies and the external field effect, which is the notion that very low-density objects will be affected by what’s around them. MOND predicts that the dynamics of a small satellite galaxy will depend on its proximity to its giant host in a way that doesn't happen with dark matter.

McGaugh says that in attendance at the workshop were a group of more philosophically inclined people who use a set of rules to judge theories, which they’ve put together by looking back at how theories have developed in the past. 

“One of the interesting things that came out of that was that MOND is doing better on that score card,” he says. “It’s more progressive in the sense that it's making successful predictions for new phenomena whereas in the case of dark matter we've had to repeatedly invoke ad hoc fixes to patch things up.”

Verlinde’s ideas, however, didn’t come up much within the workshop. While McGaugh says that the two theories are closely enough related that he would hope the same people pursuing MOND would be interested in Verlinde’s theory, he added that not everyone shares that attitude. Many are waiting for more theoretical development and further observational tests.

“The theory needs to make a clear prediction so that we can then devise a program to go out and test it,” he says. “It needs to be further worked out to get beyond where we are now.”

Verlinde says he realizes that he still needs to develop his ideas further and extend them to explain things such as the formation of galaxies and galaxy clusters. Although he has mostly been working on this theory on his own, he recognizes the importance of building a community around his ideas.

Over the past few months, he has been giving presentations at different universities, including Princeton, Harvard, Berkeley, Stanford, and Caltech. There is currently a large community of people working on ideas of quantum information and gravity, he says, and his main goal is to get more people, in particular string theorists, to start thinking about his ideas to help him improve them.

“I think that when we understand gravity better and we use those equations to describe the evolution of the universe, we may be able to answer questions more precisely about how the universe started,” Verlinde says. “I really think that the current description is only part of the story and there's a much deeper way of understanding it—maybe an even more beautiful way.”

 


SLAC accelerator plans appear in Smithsonian art exhibit

The late artist June Schwarcz found inspiration in some unusual wrapping paper her husband brought home from the lab.

 

Photograph of June Schwarcz at home

Leroy Schwarcz, one of the first engineers hired to build SLAC National Accelerator Laboratory’s original 2-mile-long linear accelerator, thought his wife might like to use old mechanical drawings of the project as wrapping paper. So, he brought them home.

His wife, acclaimed enamelist June Schwarcz, had other ideas.

Today, works called SLAC Drawing III, VII and VIII, created in 1974 and 1975 from electroplated copper and enamel, form a unique part of a retrospective at the Smithsonian’s Renwick Gallery in Washington, D.C.

Among the richly formed and boldly textured and colored vessels that make up the majority of June’s oeuvre, the SLAC-inspired panels stand out for their fidelity to the mechanical design of their inspiration. 

The description next to the display at the gallery describes the “SLAC Blueprints” as resembling “ancient pictographs drawn on walls of a cave or glyphs carved in stone.” The designs appear to depict accelerator components, such as electromagnets and radio frequency structures.

According to Harold B. Nelson, who curated the exhibit with Bernard N. Jazzar, “The panels are quite unusual in the subtle color palette she chose; in her use of predominantly opaque enamels; in her reliance on a rectilinear, geometric format for her compositions; and in her reference in the work to machines, plans, numbers, and mechanical parts. 

“We included them because they are extremely beautiful and visually powerful. Together they form an important group within her body of work.”

Making history

June and Leroy Schwarcz met in the late 1930s and were married in 1943. Two years later they moved to Chicago where Leroy would become chief mechanical engineer for the University of Chicago’s synchrocyclotron, which was at the time the highest-energy proton accelerator in the world.

Having studied art and design at the Pratt Institute in Brooklyn several years earlier, June found her way into a circle of notable artists in Chicago, including Bauhaus legend László Moholy-Nagy, founder of Chicago’s Institute of Design.

Around 1954, June was introduced to enameling and shortly thereafter began to exhibit her art. She and her husband had two children and relocated several times during the 1950s for Leroy’s work. In 1958 they settled in Sausalito, California, where June set up her studio in the lower level of their hillside home. 

In 1961, Leroy became the first mechanical engineer hired by Stanford University to work on “Project M,” which would become the famous 2-mile-long linear accelerator at SLAC. He oversaw the engineers during early design and construction of the linac, which eventually enabled Nobel-winning particle physics research.

June and Leroy’s daughter, Kim Schwarcz, who made a living as a glass blower and textile artist until the mid-1980s and occasionally exhibited with her mother, remembers those early days at the future lab.

“Before SLAC was built, the offices were in Quonset huts, and my father used to bring me down, and I would bicycle all over the campus,” she recalled. “Pief was a family friend and so was Bob Mozley. Mom introduced Bob to his future wife…It was a small community and a really nice community.” 

W.K.H. “Pief” Panofsky was the first director of SLAC; he and Mozley were renowned SLAC physicists and national arms control experts.

June Schwarcz, SLAC Drawing III, 1974, electroplated copper and enamel. (Photo by Cate Hurst)
June Schwarcz, SLAC Drawing VII, 1975, electroplated copper and enamel. (Photo by Cate Hurst)
June Schwarcz, SLAC Drawing VIII, 1975, electroplated copper and enamel. (Photo by Cate Hurst)
June Schwarcz, SLAC Design Box, 1989, electroplated copper and enamel, mounted in a cherry box. (Photo by M. Lee Fatherree)
June Schwarcz, Vessel, electroplated copper foil and enamel, sandblasted. (Photo by Cate Hurst)
June Schwarcz, Bowl, 1980, electroplated copper foil and enamel, iron plated. (Photo by Gene Young)

Finding beauty

Kim was not surprised that her mother made art based on the SLAC drawings. She remembers June photographing the foggy view outside their home and getting inspiration from nature, ethnic art and Japanese clothing.

“She would take anything and make something out of it,” Kim said. “She did an enamel of an olive oil can once and a series called Adam’s Pants that were based on the droopy pants my son wore as a teen.”

But the fifteen SLAC-inspired compositions were unique and a family favorite; Kim and her brother Carl both own some of them, and others are at museums.

In a 2001 oral history interview with the Smithsonian Institution's Archives of American Art, June explained the detailed work involved in creating the SLAC drawings by varnishing, scribing, electroplating and enameling a copper sheet: “I'm primarily interested in having things that are beautiful, and of course, beauty is a complicated thing to devise, to find.”

Engineering art

Besides providing inspiration in the form of technical drawings, Leroy was influential in June’s career in other ways.

Around 1962 he introduced her to Jimmy Pope at the SLAC machine shop, who showed June how to do electroplating, a signature technique of her work. Electroplating involves using an electric current to deposit a coating of metal onto another material. She used it to create raised surfaces and to transform thin sheets of copper—which she stitched together using copper wire—into substantial, free-standing vessel-like forms. She then embellished these sculptures with colored enamel.

Leroy built a 30-gallon plating bath and other tools for June’s art-making at their shared workshop. 

“Mom was tiny, 5 feet tall, and she had these wobbly pieces on the end of a fork that she would put into a hot kiln. It was really heavy. Dad made a stand so she could rest her arm and slide the piece in,” Kim recalls.

“He was very inventive in that way, and very creative himself,” she said. “He did macramé in the 1960s, made wooden spoons and did scrimshaw carvings on bone that were really good.”

Kim remembers the lower-level workshop as a chaotic and inventive space. “For the longest time, there was a wooden beam in the middle of the workshop we would trip over. It was meant for a boat dad wanted to build—and eventually did build after he retired,” she said.

At SLAC Leroy’s work was driven by his “amazingly good intuition,” according to a tribute written by Mozley upon his colleague’s death in 1993. Even when he favored crude drawings over exact math, “his intuitive designs were almost invariably right,” he wrote.

After the accelerator was built, Leroy turned his attention to the design, construction and installation of a streamer chamber scientists at SLAC used as a particle detector. In 1971 he took a leave of absence from the California lab to go back to Chicago and move the synchrocyclotron’s 2000-ton magnet from the university to Fermi National Accelerator Laboratory. 

“[Leroy] was the only person who could have done this because, although drawings existed, knowledge of the assembly procedures existed only in the minds of Leroy and those who had helped him put the cyclotron together,” Mozley wrote.

Beauty on display

June continued making art at her Sausalito home studio up until two weeks before her death in 2015 at the age of 97. A 2007 video shows the artist at work there 10 years prior to her passing. 

After Leroy died, her own art collection expanded on the shelves and walls of her home.

“As a kid, the art was just what mom did, and it never changed,” Kim remembers. “She couldn’t wait for us to go to school so she could get to work, and she worked through health challenges in later years.”

The Smithsonian exhibit is a unique collection of June’s celebrated work, with its traces of a shared history with SLAC and one of the lab’s first mechanical engineers.

“June had an exceptionally inquisitive mind, and we think you get a sense of the rich breadth of her vision in this wonderful body of work,” says curator Jazzar.

June Schwarcz: Invention and Variation is the first retrospective of the artist’s work in 15 years and includes almost 60 works. The exhibit runs through August 27 at the Smithsonian American Art Museum Renwick Gallery. 

Editor's note: Some of the information from this article was derived from an essay written by Jazzar and Nelson that appears in a book based on the exhibition with the same title.


A new model for standards

In an upcoming refresh, particle physics will define units of measurement such as the meter, the kilogram and the second.

Illustration of a yellow ruler with moon and Planck graphics

While America remains obstinate about using Imperial units such as miles, pounds and degrees Fahrenheit, most of the world has agreed that using units that are actually divisible by 10 is a better idea. The metric system, also known as the International System of Units (SI), is the most comprehensive and precise system for measuring the universe that humans have developed. 

In 2018, the 26th General Conference on Weights and Measures will convene and likely adopt revised definitions for the seven base metric system units for measuring: length, mass, time, temperature, electric current, luminosity and quantity.

The modern metric system owes its precision to particle physics, which has the tools to investigate the universe more precisely than any microscope. Measurements made by particle physicists can be used to refine the definitions of metric units. In May, a team of German physicists at the Physikalisch-Technische Bundesanstalt made the most precise measurements yet of the Boltzmann constant, which will be used to define units of temperature.

Since the metric system was established in the 1790s, scientists have attempted to give increasingly precise definitions to these units. The next update will define every base unit using fundamental constants of the universe that have been derived by particle physics.

meter (distance): 

Starting in 1799, the meter was defined by a prototype meter bar, which was just a platinum bar. Physicists eventually realized that distance could be defined by the speed of light, which has been measured with an accuracy to one part in a billion using an interferometer (interestingly, the same type of detector the LIGO collaboration used to discover gravitational waves). The meter is currently defined as the distance traveled by light (in a vacuum) for 1/299,792,458 of a second, and will remain effectively unchanged in 2018.

kilogram (mass):

For over a century, the standard kilogram has been a small platinum-iridium cylinder housed at the International Bureau of Weights and Measures in France. But even its precise mass fluctuates due to factors such as accumulation of microscopic dust. Scientists hope to redefine the kilogram in 2018 by setting the value of Planck’s constant to exactly 6.626070040×10^-34 kilogram meters squared per second. Planck’s constant, represented by the letter h, is the fundamental quantum of action; it relates a photon’s energy to its frequency and is integral to calculating energies in particle physics.
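Dimensionally, fixing h pins down the kilogram once the meter and second are defined. As a back-of-the-envelope rearrangement (not the actual measurement procedure, which relies on devices such as the Kibble balance):

$$1\,\mathrm{kg} = \frac{h}{6.626070040\times10^{-34}\,\mathrm{m^2\,s^{-1}}},$$

where the meter and second are already fixed by the speed of light and the cesium transition frequency.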

second (time):

The earliest seconds were defined as divisions of time between full moons. Later, seconds were defined by solar days, and eventually the time it took Earth to revolve around the sun. Today, seconds are defined by atomic time, which is precise to 1 part in 10 billion. Atomic time is calculated from the periods of radiation emitted by atoms, a measurement that relies heavily on particle physics techniques. One second is currently defined as 9,192,631,770 periods of the radiation from a transition between two hyperfine energy levels of a cesium-133 atom and will remain effectively unchanged.

kelvin (temperature):

Kelvin is the temperature scale that starts at the coldest possible state of matter. Currently, a kelvin is defined by the triple point of water—where water can exist as a solid, liquid and gas. The triple point is 273.16 kelvin, so a single kelvin is 1/273.16 of the triple point. But because water can never be completely pure, impurities can influence the triple point. In 2018 scientists hope to redefine kelvin by setting the value of Boltzmann’s constant to exactly 1.38064852×10^-23 joules per kelvin. Boltzmann’s constant links the movement of particles in a gas (the average kinetic energy) to the temperature of the gas. Denoted by the symbol k, the Boltzmann constant is ubiquitous throughout physics calculations that involve temperature and entropy.

ampere (electric current):

André-Marie Ampère, who is often considered the father of electrodynamics, has the honor of having the basic unit of electric current named after him. Right now, the ampere is defined by the amount of current required to produce a force of 2×10^-7 newtons for each meter between two parallel conductors of infinite length. Naturally, it’s a bit hard to come by things of infinite length, so the proposed definition is instead to define amperes by the fundamental charge of a particle. This new definition would rely on the charge of the electron, which will be set to exactly 1.6021766208×10^-19 ampere-seconds (coulombs).
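Under that definition, one ampere becomes a counting statement about electrons. Since each elementary charge carries 1.6021766208×10^-19 coulombs, a current of one ampere (one coulomb per second) corresponds to

$$\frac{1}{1.6021766208\times10^{-19}} \approx 6.24\times10^{18}$$

elementary charges flowing past a point each second.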

candela (luminosity):

The last of the base SI units to be established, the candela measures luminosity—what we typically refer to as brightness. Early standards for the candela used a phenomenon from quantum mechanics called “black body radiation.” This is the light that all objects radiate as a function of their heat. Currently, the candela is defined more fundamentally as 1/683 watt per steradian at a frequency of 540×10^12 hertz, a definition which will remain effectively unchanged. Hard to picture? A candle, conveniently, emits about one candela of luminous intensity.

mole (quantity):

Different from all the other base units, the mole measures quantity alone. Over hundreds of years, scientists starting with Amedeo Avogadro worked to better understand how the number of atoms was related to mass, leading to the current definition of the mole: the number of atoms in 12 grams of carbon-12. This number, known as Avogadro’s constant and used in many calculations of mass in particle physics, is about 6.022×10^23. To make the mole more precise, the new definition would set Avogadro’s constant to exactly 6.022140857×10^23 per mole, decoupling it from the kilogram.
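To see how the new scheme hangs together, here is a minimal sketch that treats the proposed constants as exact numbers and derives a few sanity checks from them. The constant values are the proposed figures quoted above; the variable names are mine.

```python
# Proposed exact values of the defining constants (2018 SI redefinition).
c = 299_792_458          # speed of light, m/s (fixes the meter)
dv_cs = 9_192_631_770    # cesium-133 hyperfine frequency, Hz (fixes the second)
h = 6.626070040e-34      # Planck constant, kg m^2/s (fixes the kilogram)
e = 1.6021766208e-19     # elementary charge, C (fixes the ampere)
k = 1.38064852e-23       # Boltzmann constant, J/K (fixes the kelvin)
N_A = 6.022140857e23     # Avogadro constant, 1/mol (fixes the mole)

# A few consequences of the definitions:
print(c / dv_cs)    # distance light travels in one cesium period: ~0.0326 m
print(k * 300)      # typical thermal energy at room temperature: ~4.1e-21 J
print(0.012 / N_A)  # mass of one carbon-12 atom (old mole picture): ~2.0e-26 kg
```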


Quirks of the arXiv

Sometimes, physics papers turn funny.


Since it went up in 1991, the arXiv (pronounced like the word “archive”) has been a hub for scientific papers in quantitative fields such as physics, math and computer science. Many of its million-plus papers are serious products of intense academic work that are later published in peer-reviewed journals. Still, some manage to have a little more character than the rest. For your consideration, we’ve gathered seven of the quirkiest physics papers on the arXiv.

Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?

M V Berry, N Brunner, S Popescu and P Shukla

In 2011, an experiment appeared to find particles traveling faster than the speed of light. To spare readers uninterested in lengthy calculations demonstrating the unlikeliness of this probably impossible phenomenon, the abstract for this analysis cut to the chase.


Quantum Tokens for Digital Signatures

Shalev Ben-David and Or Sattath

Sometimes the best way to explain something is to think about how you might explain it to a child—for example, as a fairy tale.


A dialog on quantum gravity

Carlo Rovelli

Unless you’re intimately familiar with string theory and loop quantum gravity, this Socratic dialogue is like Plato’s Republic: It’s all Greek to you.


The Proof of Innocence

Dmitri Krioukov

Pulled over after he was apparently observed failing to halt at a stop sign, the author of this paper, Dmitri Krioukov, was determined to prove his innocence—as only a scientist would.

Using math, he demonstrated that, to a police officer measuring the angular speed of Krioukov’s car, a brief obstruction from view could cause an illusion that the car did not stop. Krioukov submitted his proof to the arXiv; the judge ruled in his favor.


Quantum weak coin flipping with arbitrarily small bias

Carlos Mochon

Not many papers in the arXiv illustrate their point with a tale involving human sacrifice. There’s something about quantum informatics that brings out the weird side of physicists.


10 = 6 + 4

Frank D. (Tony) Smith, Jr.

A theorist calculated an alternative decomposition of 10 dimensions into 6 spacetime dimensions with local Conformal symmetry and 4-dimensional compact Internal Symmetry Space. For the title of his paper, he decided to go with something a little simpler.


Would Bohr be born if Bohm were born before Born?

Hrvoje Nikolic

This tricky tongue-twisting treatise theorizes a tangential timeline to testify that taking up quantum theories turns on timeliness.


When was the Higgs actually discovered?

The announcement on July 4 was just one part of the story. Take a peek behind the scenes of the discovery of the Higgs boson.

Photo from the back of a crowded conference room on the day of the Higgs announcement

Joe Incandela sat in a conference room at CERN and watched with his arms folded as his colleagues presented the latest results on the hunt for the Higgs boson. It was December 2011, and they had begun to see the very thing they were looking for—an unexplained bump emerging from the data.

“I was far from convinced,” says Incandela, a professor at the University of California, Santa Barbara and the former spokesperson of the CMS experiment at the Large Hadron Collider.

For decades, scientists had searched for the elusive Higgs boson: the holy grail of modern physics and the only piece of the robust and time-tested Standard Model that had yet to be found.

The construction of the LHC was motivated in large part by the absence of this fundamental component from our picture of the universe. Without it, physicists couldn’t explain the origin of mass or the divergent strengths of the fundamental forces.

“Without the Higgs boson, the Standard Model falls apart,” says Matthew McCullough, a theorist at CERN. “The Standard Model was fitting the experimental data so well that most of the theory community was convinced that something playing the role of Higgs boson would be discovered by the LHC.”

The Standard Model predicted the existence of the Higgs but did not predict what the particle’s mass would be. Over the years, scientists had searched for it across a wide range of possible masses. By 2011, there was only a tiny region left to search; everything else had been excluded by previous generations of experimentation. If the predicted Higgs boson were anywhere, it had to be there, right where the LHC scientists were looking.

But Incandela says he was skeptical about these preliminary results. He knew that the Higgs could manifest itself in many different forms, and this particular channel was extremely delicate.

“A tiny mistake or an unfortunate distribution of the background events could make it look like a new particle is emerging from the data when in reality, it’s nothing,” Incandela says.

A common mantra in science is that extraordinary claims require extraordinary evidence. The challenge isn’t just collecting the data and performing the analysis; it’s deciding if every part of the analysis is trustworthy. If the analysis is bulletproof, the next question is whether the evidence is substantial enough to claim a discovery. And if a discovery can be claimed, the final question is what, exactly, has been discovered? Scientists can have complete confidence in their results but remain uncertain about how to interpret them.

In physics, it’s easy to say what something is not but nearly impossible to say what it is. A single piece of corroborated, contradictory evidence can discredit an entire theory and destroy an organization’s credibility.

“We’ll never be able to definitively say if something is exactly what we think it is, because there’s always something we don’t know and cannot test or measure,” Incandela says. “There could always be a very subtle new property or characteristic found in a high-precision experiment that revolutionizes our understanding.”

With all of that in mind, Incandela and his team made a decision: From that point on, everyone would refine their scientific analyses using special data samples and a patch of fake data generated by computer simulations covering the interesting areas of their analyses. Then, when they were sure about their methodology and had enough data to make a significant observation, they would remove the patch and use their algorithms on all the real data in a process called unblinding.

“This is a nice way of providing an unbiased view of the data and helps us build confidence in any unexpected signals that may be appearing, particularly if the same unexpected signal is seen in different types of analyses,” Incandela says.
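As a rough illustration of the idea (not CMS’s actual software), an analysis can hide the sensitive region of the data behind simulated events while the selection is developed, then swap the real data back in only at unblinding time. Everything in this sketch, from the function names to the mass window, is hypothetical:

```python
import numpy as np

def blind(data, sim, lo=110.0, hi=140.0):
    """Hide the sensitive mass window [lo, hi] (GeV) by swapping in
    simulated events, so selection cuts are tuned without peeking
    at the region where a signal could appear."""
    keep = data[(data < lo) | (data > hi)]  # real events outside the window
    patch = sim[(sim >= lo) & (sim <= hi)]  # simulated events inside it
    return np.concatenate([keep, patch])

# Hypothetical usage: stand-ins for reconstructed masses in GeV.
rng = np.random.default_rng(0)
real_data = rng.uniform(100, 160, size=10_000)
simulation = rng.uniform(100, 160, size=10_000)
development_sample = blind(real_data, simulation)
# Develop and freeze the analysis on development_sample, then run it
# once on the untouched real_data: that final step is the unblinding.
```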

A few weeks before July 4, all the different analysis groups met with Incandela to present a first look at their unblinded results. This time the bump was very significant and showing up at the same mass in two independent channels.

“At that point, I knew we had something,” Incandela says. “That afternoon we presented the results to the rest of the collaboration. The next few weeks were among the most intense I have ever experienced.”

Meanwhile, the other general-purpose experiment at the LHC, ATLAS, was hot on the trail of the same mysterious bump.

Andrew Hard was a graduate student at The University of Wisconsin, Madison working on the ATLAS Higgs analysis with his PhD thesis advisor Sau Lan Wu.

“Originally, my plan had been to return home to Tennessee and visit my parents over the winter holidays,” Hard says. “Instead, I came to CERN every day for five months—even on Christmas. There were a few days when I didn't see anyone else at CERN. One time I thought some colleagues had come into the office, but it turned out to be two stray cats fighting in the corridor.”

Hard was responsible for writing the code that selected and calibrated the particles of light the ATLAS detector recorded during the LHC’s high-energy collisions. According to predictions from the Standard Model, the Higgs can transform into two of these particles when it decays, so scientists on both experiments knew that this project would be key to the discovery process.

“We all worked harder than we thought we could,” Hard says. “People collaborated well and everyone was excited about what would come next. All in all, it was the most exciting time in my career. I think the best qualities of the community came out during the discovery.”

At the end of June, Hard and his colleagues synthesized all of their work into a single analysis to see what it revealed. And there it was again—that same bump, this time surpassing the statistical threshold the particle physics community generally requires to claim a discovery.
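That conventional threshold is five standard deviations, or “five sigma”: the probability that background alone fluctuates up by at least that much has to be vanishingly small. A minimal check of just how small, using SciPy:

```python
from scipy.stats import norm

# One-sided tail probability of a 5-sigma upward fluctuation.
p_value = norm.sf(5)  # survival function: 1 - CDF
print(p_value)        # ~2.9e-7, about 1 chance in 3.5 million
```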

“Soon everyone in the group started running into the office to see the number for the first time,” Hard says. “The Wisconsin group took a bunch of photos with the discovery plot.”

Hard had no idea whether CMS scientists were looking at the same thing. At this point, the experiments were keeping their latest results secret—with the exception of Incandela, Fabiola Gianotti (then ATLAS spokesperson) and a handful of CERN’s senior management, who regularly met to discuss their progress and results.

“I told the collaboration that the most important thing was for each experiment to work independently and not worry about what the other experiment was seeing,” Incandela says. “I did not tell anyone what I knew about ATLAS. It was not relevant to the tasks at hand.”

Still, rumors were circulating around theoretical physics groups both at CERN and abroad. McCullough, then a postdoc at the Massachusetts Institute of Technology, was avidly following the progress of the two experiments.

“We had an update in December 2011 and then another one a few months later in March, so we knew that both experiments were seeing something,” he says. “When this big excess showed up in July 2012, we were all convinced that it was the guy responsible for curing the ails of the Standard Model, but not necessarily precisely that guy predicted by the Standard Model. It could have properties mostly consistent with the Higgs boson but still be not absolutely identical.”

The week before announcing what they’d found, Hard’s analysis group had daily meetings to discuss their results. He says they were excited but also nervous and stressed: Extraordinary claims require extraordinary confidence.

“One of our meetings lasted over 10 hours, not including the dinner break halfway through,” Hard says. “I remember getting in a heated exchange with a colleague who accused me of having a bug in my code.”

After both groups had independently and intensely scrutinized their Higgs-like bump through a series of checks, cross-checks and internal reviews, Incandela and Gianotti decided it was time to tell the world.

“Some people asked me if I was sure we should say something,” Incandela says. “I remember saying that this train has left the station. This is what we’ve been working for, and we need to stand behind our results.”

On July 4, 2012, Incandela and Gianotti stood before an expectant crowd and, one at a time, announced that decades of searching and generations of experiments had finally culminated in the discovery of a particle “compatible with the Higgs boson.”

Science journalists rejoiced and rushed to publish their stories. But was this new particle the long-awaited Higgs boson? Or not?

Discoveries in science rarely happen all at once; rather, they build slowly over time. And even when the evidence overwhelmingly points in a clear direction, scientists will rarely speak with superlatives or make definitive claims.

“There is always a risk of overlooking the details,” Incandela says, “and major revolutions in science are often born in the details.”

Immediately after the July 4 announcement, theorists from around the world issued a flurry of theoretical papers presenting alternative explanations and possible tests to see if this excess really was the Higgs boson predicted by the Standard Model or just something similar.

“A lot of theory papers explored exotic ideas,” McCullough says. “It’s all part of the exercise. These papers act as a straw man so that we can see just how well we understand the particle and what additional tests need to be run.”

For the next several months, scientists continued to examine the particle and its properties. The more data they collected and the more tests they ran, the more the discovery looked like the long-awaited Higgs boson. By March, both experiments had twice as much data and twice as much evidence.

“Amongst ourselves, we called it the Higgs,” Incandela says, “but to the public, we were more careful.”

It was increasingly difficult to keep qualifying their statements about it, though. “It was just getting too complicated,” Incandela says. “We didn’t want to always be in this position where we had to talk about this particle like we didn’t know what it was.”

On March 14, 2013—nine months and 10 days after the original announcement—CERN issued a press release quoting Incandela as saying, “to me, it is clear that we are dealing with a Higgs boson, though we still have a long way to go to know what kind of Higgs boson it is.”​

To this day, scientists are open to the possibility that the Higgs they found is not exactly the Higgs they expected.

“We are definitely, 100 percent sure that this is a Standard-Model-like Higgs boson,” Incandela says. “But we’re hoping that there’s a chink in that armor somewhere. The Higgs is a signpost, and we’re hoping for a slight discrepancy which will point us in the direction of new physics.”


What’s really happening during an LHC collision?

It’s less of a collision and more of a symphony.

Illustration of a particle collision inside the Large Hadron Collider

The Large Hadron Collider is definitely large. With a 17-mile circumference, it is the biggest collider on the planet. But the second half of its name is a little misleading. That’s because what collides in the LHC are the tiny pieces inside the hadrons, not the hadrons themselves.

Hadrons are composite particles made up of quarks and gluons. The gluons carry the strong force, which enables the quarks to stick together and binds them into a single particle. The main fodder for the LHC is hadrons called protons. Protons are made up of three quarks and an indeterminate number of gluons. (Protons in turn make up atoms, which are the building blocks of everything around us.)

If a proton were enlarged to the size of a basketball, it would look empty. Just like atoms, protons are mostly empty space. The individual quarks and gluons inside are known to be extremely small, less than 1/10,000th the size of the entire proton.

“The inside of a proton would look like the atmosphere around you,” says Richard Ruiz, a theorist at Durham University. “It’s a mixture of empty space and microscopic particles that, for all intents and purposes, have no physical volume.

“But if you put those particles inside a balloon, you’ll see the balloon expand. Even though the internal particles are microscopic, they interact with each other and exert a force on their surroundings, inevitably producing something which does have an observable volume.”

So how do you collide two objects that are effectively empty space? You can’t. But luckily, you don’t need a classical collision to unleash a particle’s full potential.

In particle physics, the term “collide” can mean that two protons glide through each other, and their fundamental components pass so close together that they can talk to each other. If their voices are loud enough and resonate in just the right way, they can pluck deep hidden fields that will sing their own tune in response—by producing new particles.

“It’s a lot like music,” Ruiz says. “The entire universe is a symphony of complex harmonies which call and respond to each other. We can easily produce the mid-range tones, which would be like photons and muons, but some of these notes are so high that they require a huge amount of energy and very precise conditions to resonate.”

Space is permeated with dormant fields that can briefly pop a particle into existence when vibrated with the right amount of energy. These fields play important roles but almost always work behind the scenes. The Higgs field, for instance, is always interacting with other particles to help them gain mass. But a Higgs particle will only appear if the field is plucked with the right resonance.

When protons meet during an LHC collision, they break apart and the quarks and gluons come spilling out. They interact and pull more quarks and gluons out of space, eventually forming a shower of fast-moving hadrons.

This subatomic symbiosis is facilitated by the LHC and recorded by the experiment, but it’s not restricted to the laboratory environment; particles are also accelerated by cosmic sources such as supernova remnants. “This happens everywhere in the universe,” Ruiz says. “The LHC and its experiments are not special in that sense. They’re more like a big concert hall that provides the energy to pop open and record the symphony inside each proton.”


The rise of LIGO’s space-studying super-team

The era of multi-messenger astronomy promises rich rewards—and a steep learning curve.

Two women dancing in space

Sometimes you need more than one perspective to get the full story.

Scientists including astronomers working with the Fermi Large Area Telescope have recorded brief bursts of high-energy photons called gamma rays coming from distant reaches of space. They suspect such eruptions result from the merging of two neutron stars—the collapsed cores of dying stars—or from the collision of a neutron star and a black hole. 

But gamma rays alone can’t tell them that. The story of the dense, crashing cores would be more convincing if astronomers saw a second signal coming from the same event—for example, the release of ripples in space-time called gravitational waves.

“The Fermi Large Area Telescope detects a few short gamma ray bursts per year already, but detecting one in correspondence to a gravitational-wave event would be the first direct confirmation of this scenario,” says postdoctoral researcher Giacomo Vianello of the Kavli Institute for Particle Astrophysics and Cosmology, a joint institution of SLAC National Accelerator Laboratory and Stanford University.

Scientists discovered gravitational waves in 2015 (announced in 2016). Using the Laser Interferometer Gravitational-Wave Observatory, or LIGO, they detected the coalescence of two massive black holes.

LIGO scientists are now sharing their data with a network of fellow space watchers to see if any of their signals match up. Combining multiple signals to create a more complete picture of astronomical events is called multi-messenger astronomy.​

Looking for a match

“We had this dream of finding astronomical events to match up with our gravitational wave triggers,” says LIGO scientist Peter Shawhan of the University of Maryland. ​

But LIGO can only narrow down the source of its signals to a region large enough to contain roughly 100,000 galaxies. 

Searching for contemporaneous signals within that gigantic volume of space is extremely challenging, especially since most telescopes only view a small part of the sky at a time. So Shawhan and his colleagues developed a plan to send out an automatic alert to other observatories whenever LIGO detected an interesting signal of its own. The alert would contain preliminary calculations and the estimated location of the source of the potential gravitational waves.

“Our early efforts were pretty crude and only involved a small number of partners with telescopes, but it kind of got this idea started,” Shawhan says. The LIGO Collaboration and the Virgo Collaboration, its European partner, revamped and expanded the program while upgrading their detectors. Since 2014, 92 groups have signed up to receive alerts from LIGO, and the number is growing. 
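As a toy picture of what such an alert might carry (the field names here are entirely hypothetical; real alerts travel over established astronomy alert networks):

```python
from dataclasses import dataclass

@dataclass
class GravitationalWaveAlert:
    """Toy stand-in for the contents of a gravitational-wave alert."""
    event_id: str            # identifier for the candidate trigger
    gps_time: float          # time of the candidate signal, GPS seconds
    false_alarm_rate: float  # how often noise alone mimics this, per year
    skymap_url: str          # probability map of the source's sky location

# Hypothetical example alert sent to partner observatories.
alert = GravitationalWaveAlert(
    event_id="example-trigger-001",
    gps_time=1.0e9,
    false_alarm_rate=1e-7,
    skymap_url="https://example.org/skymaps/example-trigger-001.fits",
)
```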

LIGO is not alone in latching onto the promise of multi-messenger astronomy. The Supernova Early Warning System (SNEWS) also unites multiple experiments to look at the same event in different ways. Neutral, rarely interacting particles called neutrinos escape more quickly from collapsing stars than optical light, so a network of neutrino experiments is prepared to alert optical observatories as soon as they get the first warning of a nearby supernova in the form of a burst of neutrinos. 

National Science Foundation Director France Córdova has lauded multi-messenger astronomy, calling it in 2016 a bold research idea that would lead to transformative discoveries.​

The learning curve

Catching gamma ray bursts alongside gravitational waves is no simple feat. 

The Fermi Large Area Telescope orbits Earth as the primary instrument on the Fermi Gamma-ray Space Telescope. The telescope is constantly in motion and has a large field of view that surveys the entire sky multiple times per day.

But a gamma-ray burst lasts just a few seconds, and it takes about three hours for LAT to complete its sweep. So even if an event that releases gravitational waves also produces a gamma-ray burst, LAT might not be looking in the right direction at the right time. It would need to catch the afterglow of the event. 

Fermi LAT scientist Nicola Omodei of Stanford University acknowledges another challenge: The window to see the burst alongside gravitational waves might not line up with the theoretical predictions. It’s never been done before, so the signal could look different or come at a different time than expected. 

That doesn’t stop him and his colleagues from trying, though. “We want to cover all bases, and we adopt different strategies,” he says. “To make sure we are not missing any preceding or delayed signal, we also look on much longer time scales, analyzing the days before and after the trigger.”

Scientists using the second instrument on the Fermi Gamma-ray Space Telescope have already found an unconfirmed signal that aligned with the first gravitational waves LIGO detected, says scientist Valerie Connaughton of the Universities Space Research Association, who works on the Gamma-Ray Burst Monitor. “We were surprised to find a transient event 0.4 seconds after the first GW seen by LIGO.”

While the event is theoretically unlikely to be connected to the gravitational wave, she says the timing and location “are enough for us to be interested and to challenge the theorists to explain how something that was not expected to produce gamma rays might have done so.”

From the ground up

It’s not just space-based experiments looking for signals that align with LIGO alerts. A working group called DESgw, made up of members of the Dark Energy Survey and independent collaborators, has found a way to use the Dark Energy Camera, a 570-megapixel digital camera mounted on a telescope in the Chilean Andes, to follow up on gravitational wave detections.

“We have developed a rapid response system to interrupt the planned observations when a trigger occurs,” says DES scientist Marcelle Soares-Santos of Fermi National Accelerator Laboratory. “The DES is a cosmological survey; following up gravitational wave sources was not originally part of the DES scientific program.” 

Once they receive a signal, the DESgw collaborators meet to evaluate the alert and weigh the cost of changing the planned telescope observations against what scientific data they could expect to see—most often how much of the LIGO source location could be covered by DECam observations.

“We could, in principle, put the telescope onto the sky for every event as soon as night falls,” says DES scientist Jim Annis, also of Fermilab. “In practice, our telescope is large and the demand for its time is high, so we wait for the right events in the right part of the sky before we open up and start imaging.”

At an even lower elevation, scientists at the IceCube neutrino experiment—made up of detectors drilled down into Antarctic ice—are following LIGO’s exploits as well.

“The neutrinos IceCube is looking for originate from the most extreme environment in the cosmos,” says IceCube scientist Imre Bartos of Columbia University. “We don't know what these environments are for sure, but we strongly suspect that they are related to black holes.”

LIGO and IceCube are natural partners. Both gravitational waves and neutrinos travel for the most part unimpeded through space. Thus, they carry pure information about where they originate, and the two signals can be monitored together nearly in real time to help refine the calculated location of the source.

The ability to do this is new, Bartos says. Neither gravitational waves nor high-energy neutrinos had been detected from the cosmos when he started working on IceCube in 2008. “During the past few years, both of them were discovered, putting the field on a whole new footing.”

Shawhan and the LIGO collaboration are similarly optimistic about the future of their program and multi-messenger astronomy. More gravitational wave detectors are planned or under construction, including an upgrade to the European detector Virgo, the KAGRA detector in Japan, and a third LIGO detector in India, and that means scientists will home in closer and closer on their targets.​


World’s biggest neutrino experiment moves one step closer

The startup of a 25-ton test detector at CERN advances technology for the Deep Underground Neutrino Experiment.

People in hard hats install the 311 detector

In a lab at CERN sits a very important box. It covers about three parking spaces and is more than a story tall. Sitting inside is a metal device that tracks energetic cosmic particles.

This is a prototype detector, a stepping-stone on the way to the future Deep Underground Neutrino Experiment (DUNE). On June 21, it recorded its first particle tracks.

So begins the largest ever test of an extremely precise method for measuring elusive particles called neutrinos, which may hold the key to why our universe looks the way it does and how it came into being.

A two-phase detector

The prototype detector is named WA105 3x1x1 (its dimensions in meters) and holds five active tons—3000 liters—of liquid argon. Argon is well suited to interacting with neutrinos and then transmitting the resulting light and electrons for collection. Previous liquid argon neutrino detectors, such as ICARUS and MicroBooNE, detected signals from neutrinos using wires in the liquid argon. But crucially, this new test detector also holds a small amount of gaseous argon, earning it the special status of a two-phase detector.

As particles pass through the detector, they interact with the argon atoms inside. Electrons are stripped off of atoms and drift through the liquid toward an “extraction grid,” which kicks them into the gas. There, large electron multipliers create a cascade of electrons, leading to a stronger signal that scientists can use to reconstruct the particle track in 3D. Previous tests of this method were conducted in small detectors using about 250 active liters of liquid argon.
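The 3D reconstruction works because a time projection chamber gives two coordinates from the readout plane and the third from timing: the later a cloud of electrons arrives, the deeper in the liquid it was produced. A minimal sketch of that conversion (the drift speed and names below are illustrative, not the experiment’s actual parameters):

```python
DRIFT_SPEED_MM_PER_US = 1.6  # illustrative electron drift speed in liquid argon

def hit_position(channel_x_mm, channel_y_mm, arrival_time_us, t0_us):
    """Convert a readout hit to a 3D point: (x, y) come from which
    charge-collection channels fired; depth z comes from how long
    the ionization electrons drifted before reaching the gas phase."""
    z_mm = DRIFT_SPEED_MM_PER_US * (arrival_time_us - t0_us)
    return (channel_x_mm, channel_y_mm, z_mm)

# Illustrative usage: a hit read out 200 microseconds after the trigger.
print(hit_position(150.0, 420.0, arrival_time_us=250.0, t0_us=50.0))
# -> (150.0, 420.0, 320.0): that point on the track sat ~32 cm deep
```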

“This is the first time anyone will demonstrate this technology at this scale,” says Sebastien Murphy, who led the construction of the detector at CERN.

The 3x1x1 test detector represents a big jump in size compared to previous experiments, but it’s small compared to the end goal of DUNE, which will hold 40,000 active tons of liquid argon. Scientists say they will take what they learn and apply it (and some of the actual electronic components) to next-generation single- and dual-phase prototypes, called ProtoDUNE.

The technology used for both types of detectors is a time projection chamber, or TPC. DUNE will stack many large modules snugly together like LEGO blocks to create enormous DUNE detectors, which will catch neutrinos a mile underground at Sanford Underground Research Facility in South Dakota. Overall development for liquid argon TPCs has been going on for close to 40 years, and research and development for the dual-phase for more than a decade. The idea for this particular dual-phase test detector came in 2013.

“The main goal [with WA105 3x1x1] is to demonstrate that we can amplify charges in liquid argon detectors on the same large scale as we do in standard gaseous TPCs,” Murphy says.

By studying neutrinos and antineutrinos that travel 800 miles through the Earth from the US Department of Energy’s Fermi National Accelerator Laboratory to the DUNE detectors, scientists aim to discover differences in the behavior of matter and antimatter. This could point the way toward explaining the abundance of matter over antimatter in the universe. The supersensitive detectors will also be able to capture neutrinos from exploding stars (supernovae), unveiling the formation of neutron stars and black holes. In addition, they allow scientists to hunt for a rare phenomenon called proton decay.

“All the R&D we did for so many years and now want to do with ProtoDUNE is the homework we have to do,” says André Rubbia, the spokesperson for the WA105 3x1x1 experiment and former co-spokesperson for DUNE. “Ultimately, we are all extremely excited by the discovery potential of DUNE itself.”

One of the first tracks in the prototype detector, caused by a cosmic ray. (Image: André Rubbia)

Testing, testing, 3-1-1, check, check

Making sure a dual-phase detector and its electronics work at cryogenic temperatures of minus 184 degrees Celsius (minus 300 degrees Fahrenheit) on a large scale is the primary duty of the prototype detector—but certainly not its only one. The membrane that surrounds the liquid argon and keeps it from spilling out will also undergo a rigorous test. Special cryogenic cameras look for any hot spots where the liquid argon is predisposed to boiling away and might cause voltage breakdowns near electronics.

After many months of hard work, the cryogenic team and those working on the CERN neutrino platform have already successfully corrected issues with the cryostat, resulting in a stable level of incredibly pure liquid argon. The liquid argon has to be pristine and its level just below the large electron multipliers so that the electrons from the liquid will make it into the gaseous argon.

“Adding components to a detector is never trivial, because you’re adding impurities such as water molecules and even dust,” says Laura Manenti, a research associate at University College London in the UK. “That is why the liquid argon in the 311—and the soon-to-come ProtoDUNEs—has to be recirculated and purified constantly.”

While ultimately the full-scale DUNE detectors will sit in the most intense neutrino beam in the world, scientists are testing the WA105 3x1x1 components using muons from cosmic rays, high-energy particles arriving from space. These efforts are supported by many groups, including the Department of Energy’s Office of Science.

The plan is now to run the experiment, gather as much data as possible, and then move on to even bigger territory.

“The prospect of starting DUNE is very exciting, and we have to deliver the best possible detector,” Rubbia says. “One step at a time, we’re climbing a large mountain. We’re not at the top of Everest yet, but we’re reaching the first chalet.”
