Deep learning takes on physics
Can the same type of technology Facebook uses to recognize faces also recognize particles?

When you upload a photo of one of your friends to Facebook, you set into motion a complex behind-the-scenes process. An algorithm whirs away, analyzing the pixels in the photo until it spits out your friend’s name. This same cutting-edge technique enables self-driving cars to distinguish pedestrians and other vehicles from the scenery around them.
Can this technology also be used to tell a muon from an electron? Many physicists believe so. Researchers in the field are beginning to adapt it to analyze particle physics data.
Proponents hope that using deep learning will save experiments time, money and manpower, freeing physicists to do other, less tedious work. Others hope it will improve the experiments’ performance, making them better able to identify particles and analyze data than any algorithm used before. And while physicists don’t expect deep learning to be a cure-all, some think it could be key to warding off an impending data-processing crisis.
Neural networks
Up until now, computer scientists have often coded algorithms by hand, a task that requires countless hours of work with complex computer languages. “We still do great science,” says Gabe Perdue, a scientist at Fermi National Accelerator Laboratory. “But I think we could do better science.”
Deep learning, on the other hand, requires a different kind of human input.
One way to conduct deep learning is to use a convolutional neural network, or CNN. CNNs are modeled after human visual perception. Humans process images using a network of neurons in the brain; CNNs process images through layers of computational units called nodes. People train CNNs by feeding them pre-processed images. Using these inputs, an algorithm continuously tweaks the weight it places on each node and learns to identify patterns and points of interest. As the algorithm refines these weights, it becomes more and more accurate, often outperforming humans.
Convolutional neural networks streamline this processing by tying multiple weights together: the same small set of weights is reused across an entire image, so far fewer elements of the algorithm have to be adjusted.
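To make the picture concrete, here is a minimal sketch of such a network in Python, assuming the PyTorch library. The layer sizes, the 64-pixel image and the two-class output (say, muon versus electron) are illustrative choices for this example, not the setup of any particular experiment.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Convolutional layers reuse one small set of weights across the
        # whole image, so far fewer parameters need to be adjusted.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # After two poolings a 64x64 image becomes a 16x16 grid of 32 features.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)        # pick out local patterns
        x = x.flatten(start_dim=1)  # one feature vector per image
        return self.classifier(x)   # a score for each particle class

# Training nudges the weights to reduce the classification error.
# Random tensors stand in for a batch of eight 64x64 "detector images".
model = TinyCNN()
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()  # gradients an optimizer would use to adjust the shared weights
```

Training then consists of showing the network many labeled images and repeatedly adjusting that shared set of weights until the classification error stops improving.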
CNNs have been around since the late ’90s. But in recent years, breakthroughs have led to more affordable hardware for processing graphics, bigger data sets for training and innovations in the design of the CNNs themselves. As a result, more and more researchers are starting to use them.
The development of CNNs has led to advances in speech recognition and translation, as well as in other tasks traditionally completed by humans. A London-based company owned by Google used a CNN to create AlphaGo, a computer program that in March beat the second-ranked international player of Go, a strategy board game far more complex than chess.
CNNs have made it much more feasible to handle previously prohibitively large amounts of image-based data—the kind often seen in high-energy physics.
Reaching the field of physics
CNNs became practical around the year 2006 with the emergence of big data and graphics processing units, which have the necessary computing power to process large amounts of information. “There was a big jump in accuracy, and people have been innovating like wild on top of that ever since,” Perdue says.
Around a year ago, researchers at various high-energy experiments began to consider the possibility of applying CNNs to their experiments. “We’ve turned a physics problem into, ‘Can we tell a car from a bicycle?’” says SLAC National Accelerator Laboratory researcher Michael Kagan. “We’re just figuring out how to recast problems in the right way.”
For the most part, CNNs will be used for particle identification and classification and particle-track reconstruction. A couple of experiments are already using CNNs to analyze particle interactions, with high levels of accuracy. Researchers at the NOvA neutrino experiment, for example, have applied a CNN to their data.
“This thing was really designed for identifying pictures of dogs and cats and people, but it’s also pretty good at identifying these physics events,” says Fermilab scientist Alex Himmel. “The performance was very good—equivalent to 30 percent more data in our detector.”
Scientists on experiments at the Large Hadron Collider hope to use deep learning to make their experiments more autonomous, says CERN physicist Maurizio Pierini. “We’re trying to replace humans on a few tasks. It’s much more costly to have a person watching things than a computer.”
CNNs promise to be useful outside of detector physics as well. On the astrophysics side, some scientists are working on developing CNNs that can discover new gravitational lenses, massive celestial objects such as galaxy clusters that can distort light from distant galaxies behind them. The process of scanning the telescope data for signs of lenses is highly time-consuming, and normal pattern-recognizing programs have a hard time distinguishing their features.
“It’s fair to say we’ve only begun to scratch the surface when it comes to using these tools,” says Alex Radovic, a postdoctoral fellow at the College of William and Mary who works on the NOvA experiment at Fermilab.
The upcoming data flood
Some believe neural networks could help avert what they see as an upcoming data processing crisis.
An upgraded version of the Large Hadron Collider planned for 2025 will produce roughly 10 times as much data. The Dark Energy Spectroscopic Instrument will collect data from about 35 million cosmic objects, and the Large Synoptic Survey Telescope will capture high-resolution video of nearly 40 billion galaxies.
Data streams promise to grow, but previously exponential growth in the power of computer chips is predicted to falter. That means greater amounts of data will become increasingly expensive to process.
“You may need 100 times more capability for 10 times more collisions,” Pierini says. “We are going toward a dead end for the traditional way of doing things.”
Not all experiments are equally fit for the technology, however.
“I think this'll be the right tool sometimes, but it won’t be all the time,” Himmel says. “The more dissimilar your data is from natural images, the less useful the networks are going to be.”
Most physicists would agree that CNNs are not appropriate for data analysis at experiments that are just starting up, for example—neural networks are not very transparent about how they do their calculations. “It would be hard to convince people that they have discovered things,” Pierini says. “I still think there’s value to doing things with paper and pen.”
In some cases, the challenges of running a CNN will outweigh the benefits. For one, the data need to be converted to image form if they aren’t already. And the networks require huge amounts of data for the training—sometimes millions of images taken from simulations. Even then, simulations aren’t as good as real data. So the networks have to be tested with real data and other cross-checks.
“There’s a high standard for physicists to accept anything new,” says Amir Farbin, an associate professor of physics at The University of Texas, Arlington. “There’s a lot of hoops to jump through to convince everybody this is right.”
Looking to the future
For those who are already convinced, CNNs spawn big dreams for faster physics and the possibility of something unexpected.
Some look forward to using neural networks for detecting anomalies in the data—which could indicate a flaw in a detector or possibly a hint of a new discovery. Rather than trying to find specific signs of something new, researchers looking for new discoveries could simply direct a CNN to work through the data and try to find what stands out. “You don’t have to specify which new physics you’re searching for,” Pierini says. “It’s a much more open-minded way of taking data.”
Someday, researchers might even begin to tackle physics data with unsupervised learning. In unsupervised learning, as the name suggests, an algorithm would train with vast amounts of data without human guidance. Scientists would be able to give algorithms data, and the algorithms would figure out for themselves what conclusions to draw from it.
“If you had something smart enough, you could use it to do all types of things,” Perdue says. “If it could infer a new law of nature or something, that would be amazing.”
“But,” he adds, “I would also have to go look for new employment.”
Viewing our turbulent universe
Construction has begun for the CTA, a discovery machine that will study the highest energy objects and events across the entire sky.

Billions of light-years away, a supermassive black hole is spewing high-energy radiation, launching it far outside of the confines of its galaxy. Some of the gamma rays released by that turbulent neighborhood travel unimpeded across the universe, untouched by the magnetic fields threading the cosmos, toward our small, rocky, blue planet.
We have space-based devices, such as the Fermi Gamma-ray Space Telescope, that can detect those messengers, allowing us to see into the black hole’s extreme environment or search for evidence of dark matter. But Earth’s atmosphere blocks gamma rays. When they meet the atmosphere, sequences of interactions with gas molecules break them into a shower of fast-moving secondary particles. Some of those generated particles—which could be, for example, fast-moving electrons and their antiparticles, positrons—travel through the atmosphere faster than light travels in air, generating a faint flash of blue light called Cherenkov radiation.
A special type of telescope—large mirrors fitted with small reflective cones to funnel the faint light—can detect this blue flash in the atmosphere. Three observatories equipped with Cherenkov telescopes look at the sky during moonless hours of the night: VERITAS in Arizona has an array of four; MAGIC in La Palma, Spain, has two; and HESS in Namibia, Africa, has an array of five. All three observatories have operated for at least 10 years, revealing a gamma-ray sky to astrophysicists.
“Those telescopes really have helped to open the window, if you like, on this particular region of the electromagnetic spectrum,” says Paula Chadwick, a gamma-ray astronomer at Durham University in the United Kingdom. But that new window has also hinted at how much more there is to learn.
“It became pretty clear that what we needed was a much bigger instrument to give us much better sensitivity,” she says. And so gamma-ray scientists have been working since 2005 to develop the next-generation Cherenkov observatory: “a discovery machine,” as Stefan Funk of Germany’s Erlangen Centre for Astroparticle Physics calls it, that will reveal the highest energy objects and events across the entire sky. This is the Cherenkov Telescope Array (CTA), and construction has begun.
Ironing out the details
As of now, nearly 1400 researchers and engineers from 32 countries are members of the CTA collaboration, and membership continues to grow. “If we look at the number of CTA members as a function of time, it’s essentially a linear increase,” says CTA spokesperson Werner Hofmann.
Technology is being developed in laboratories spread across the globe: in Germany, Italy, the United Kingdom, Japan, the United States (supported by the NSF—given the primarily astrophysics science mission of the CTA, it is not a part of the Department of Energy High Energy Physics program), and others. Those nearly 1400 researchers are working together to gain a better understanding of how our universe works. “It’s the science that’s got everybody together, got everybody excited, and devoting so much of their time and energy to this,” Chadwick says.
The CTA will be split between two locations, with one array in the Northern Hemisphere and a larger one in the Southern Hemisphere. The dual location enables a view of the entire sky.
CTA’s northern site will host four large telescopes (23 meters wide) and 15 medium telescopes (12 meters wide). The southern site will also host four large telescopes, plus 25 medium and 70 small telescopes (4 meters) that will use three different designs. The small telescopes are equipped to capture the highest energy gamma rays, which emanate, for example, from the center of our galaxy. That high-energy source is visible only from the Southern Hemisphere.
In July 2015, the CTA Observatory (CTAO) council—the official governing body that acts on behalf of the observatory—chose their top locations in each hemisphere. In 2016, the council worked to make those preferences official. On September 19 the council and the Instituto de Astrofísica de Canarias signed an agreement stating that the Roque de los Muchachos Observatory on the Canary Island of La Palma would host the northern array and its 19 constituent telescopes. This same site hosts the current-generation Cherenkov array MAGIC.
Construction of the foundation is progressing at the La Palma site to prepare for a prototype of the large telescope. The telescope itself is expected to be complete in late 2017.
“It’s an incredibly aggressive schedule,” Hofmann says. “With a bit of luck we’ll have the first of these big telescopes operational at La Palma a year from now.”
While the large telescope prototype is being built on the La Palma site, the medium and small prototype telescopes are being built in laboratories across the globe and installed at observatories similarly scattered. The prototypes’ optical designs and camera technologies need to be tested in a variety of environments. For example, the team working on one of the small telescope designs has a prototype on the slope of Mount Etna in Sicily. There, volcanic ash sometimes batters the mirrors and attached camera, providing a test to ensure CTA telescopes and instruments can withstand the environment. Unlike optical telescopes, which sit in protective domes, Cherenkov telescopes are exposed to the open air.
The CTAO council expects to complete negotiations with the European Southern Observatory before the end of 2016 to finalize plans for the southern array. The current plan is to build 99 telescopes in Chile.
This year, the council also chose the location of the CTA Science Management Center, which will be the central point of data processing, software updates and science coordination. This building, which will be located at Deutsches Elektronen-Synchrotron (also known as DESY) outside of Berlin, has not yet been built, but Hofmann says that should happen in 2018.
The observatory is on track for the first trial observations (essentially, testing) in 2021 and the first regular observations beginning in 2022. How close the project’s construction stays to this outlined schedule depends on funding from nations across the globe. But if the finances remain on track, then in 2024, the full observatory should be complete, and its 118 telescopes will then look for bright flashes of Cherenkov light signaling a violent event or object in the universe.
Hacking for humanity
THE Port humanitarian hackathon at CERN brings people from multiple industries together to make the world a better place.

In October, scientists, humanitarian workers and people across various industries came together at CERN for an annual hackathon to develop solutions to some of today’s pressing humanitarian issues. The challenges included creating emergency housing networks, reducing counterfeit drugs and finding clean and safe ways to dispose of explosives.
The hackathon is run by an organization called THE Port. THE Port was born out of casual conversations at CERN’s social events, which tend to draw people from myriad industries—scientists and engineers, people working for international companies like IBM or Procter & Gamble, and humanitarians and human rights workers.
“During these parties and barbecues, we had a lot of discussions, especially with people from the Red Cross and United Nations, and we often found instances where the technology we have here at CERN could help them do better work,” says Daniel Dobos, a physicist at CERN and a founding member of THE Port. The problem, however, was that after these events, the great ideas they generated would usually get lost in the rush of busy work lives.
In May 2014, 13 people decided to come together to try to make these ideas a reality. The best way to do this, they believed, was to host an event where people from various industries around the world could meet and tackle the challenges faced by humanitarian organizations. Later that year, they hosted their first hackathon at CERN’s IdeaSquare, a collective space for innovation-related events. Since then, the core team has grown to about 35 people and has hosted five more hackathons over the last two years, including a diplomatic hackathon with United Nations ambassadors as participants and a biotech hackathon aimed at solving health-related challenges.
These hackathons are typically hosted over three days, but the organizers carefully plan for many months before the event. Unlike typical hackathons that invite people to pitch their ideas from scratch, THE Port creates challenges based on the problems international organizations, nongovernmental organizations and individuals need solved.
“We wanted to make a hackathon that is much less spontaneous than normal hackathons,” Dobos says. “We spend half a year shaping a challenge that seems appealing for outsiders, broad enough to leave room for creativity and narrow enough so that it really helps fulfill the need in a foreseeable timespan.”
THE Port aims for diversity, so the demographics of these hackathons mirror those of CERN’s parties—a third from the scientific community, a third from the humanitarian sector and a third from everywhere else—including architects, artists and science communicators. Only about a sixth of the participants are particle physicists.
“I really loved having interactions with people who were not doing my job—normally I'm around people who more or less have a similar mindset. I think this has changed my life somehow,” says Virginia Azzolini, a particle physicist from the Massachusetts Institute of Technology currently working at CERN and a participant in THE Port’s latest hackathon. “I think what a particle physicist can bring to the hackathon is a free mind [because] we are always searching for something that is not there and we don’t have a book that tells us how to find it.”
This year, Azzolini and her teammates had the task of improving emergency housing networks that help distribute goods or services to refugees displaced by war or natural disasters. To address this challenge, they created a web application where people affected by a catastrophe could post their needs—blankets or shelter, for example—so that individuals in nearby communities could help meet those requests.
Another of this year’s challenges, posed by Handicap International—an NGO devoted to supporting people with disabilities and vulnerable populations in conflict and disaster zones—was to develop a clean and mobile solution for disposing explosives.
“The issue is that if clean elimination of explosives is much more expensive than open burning or open detonating, chances are that donors will not invest money in that approach,” says Paul Vermeulen, the Project Manager of Strategic Innovation at Handicap International.
THE Port’s team was able to think of a safe, simple and cheap solution: dissolve the explosive in a solvent then burn it with waste oil and use the steam and heat generated for electricity.
“The motivation, the energy and the good spirit of all the participants is really the strong element of this hackathon,” Vermeulen says.
For many participants, the work doesn’t end when the hackathon does. Both Azzolini and her teammates and the group who worked on Handicap International’s challenge continue to meet to refine their solutions in order to put them to use in the real world.
In the future, THE Port hopes to bring humanitarian hackathons to locations beyond CERN, including Berkeley Lab and Fermilab.
“The future goal is to bring this humanitarian style of hackathon around the world,” Dobos says.
Q&A: What more can we learn about the Higgs?
Four physicists discuss Higgs boson research since the discovery.

More than two decades before the discovery of the Higgs boson, four theoretical physicists wrote a comprehensive handbook called The Higgs Hunter’s Guide. The authors— Sally Dawson of the Department of Energy's Brookhaven National Laboratory; John F. Gunion from the University of California, Davis; Howard E. Haber from the University of California, Santa Cruz; and Gordon Kane from the University of Michigan—were recently recognized for “instrumental contributions to the theory of the properties, reactions and signatures of the Higgs boson” as recipients of the American Physical Society’s 2017 JJ Sakurai Prize for Theoretical Physics.
They are still investigating the particle that completed the Standard Model, and some are hunting different Higgs bosons that could take particle physics beyond that model.
Gunion, Haber and Dawson recently attended the Higgs Couplings 2016 workshop at SLAC National Accelerator Laboratory, where physicists gathered to talk about the present and future of Higgs research. Symmetry interviewed all four to find out what's on the horizon.
What is meant by "Higgs couplings"?
The Higgs is an unstable particle that lasts a very short time in the detector before it decays into pairs of things like bottom quarks, gluons, and photons. The rates and relative importance of these decays are determined by the couplings of the Higgs boson to these different particles. And that's what the workshop is all about, trying to determine whether or not the couplings predicted in the Standard Model agree with the couplings that are measured experimentally.
Right, we can absolutely say how much of the time we expect the Higgs to decay to the known particles, so a comparison of our predictions with the experimental measurements tells us whether there's any possible deviation from our Standard Model.
For us what would be really exciting is if we did see deviations. However, that probably requires more precision than we currently have experimentally.
But we don’t all agree on that, in the sense that I would prefer that it almost exactly agree with the Standard Model predictions because of a theory that I like that says it should. But most of the people in the world would prefer what John and Sally said.
How many people are working in Higgs research now worldwide?
I did a search for “Higgs” in the title of scientific papers after 2011 on arXiv.org and came up with 5211 hits; there are several authors per paper, of course, and some have written multiple papers, so we can only estimate.
There are roughly 5000 people on each experiment, ATLAS and CMS, and some fraction of those work on Higgs research, but it’s really too hard to calculate. They all contribute in different ways. Let’s just say many thousands of experimentalists and theorists worldwide.
What are Higgs researchers hoping to accomplish?
There are basically two different avenues. One is called the precision Higgs program designed to improve precision in the current data. The other direction addresses a really simple question: Is the Higgs boson a solo act or not? If additional Higgs-like particles exist, will they be discovered in future LHC experiments?
I think everybody would like to see more Higgs bosons. We don’t know if there are more, but everybody is hoping.
If you were Gordy [Kane] who only believes in one Higgs boson, you would be working to confirm with greater and greater precision that the Higgs boson you see has precisely the properties predicted in the Standard Model. This will take more and more luminosity and maybe some future colliders like a high luminosity LHC or an e+e- collider.
The precision Higgs program is a long-term effort because the high luminosity LHC is set to come online in the mid 2020s and is imagined to continue for another 10 years. There are a lot of people trying to predict what precision you could ultimately achieve in the various measurements of Higgs boson properties that will be made by the mid 2030s. Right now we have a set of measurements with statistical and systematic errors of about 20 percent. By the end of the high luminosity LHC, we anticipate that the size of the measurement errors can be reduced to around 10 percent and maybe in some cases to 5 percent.
How has research on the topic changed since the Higgs discovery?
People no longer build theoretical models that don’t have a Higgs in them. You have to make sure that your model is consistent with what we know experimentally. You can’t just build a crazy model; it has to be a model with a Higgs with roughly the properties we’ve observed, and that is actually pretty restrictive.
Many theoretical models have either been eliminated or considerably constrained. For example, the supersymmetric models that are theoretically attractive kind of expect a Higgs boson of this mass, but only after pushing parameters to a bit of an extreme. There’s also an issue called naturalness: In the Standard Model alone there is no reason why the Higgs boson should have such a light mass as we see, whereas in some of these theories it is natural to see the Higgs boson at this mass. So that’s a very important topic of research—looking for those models that are in a certain sense naturally predicting what we see and finding additional experimental signals associated with such models.
For example, the supersymmetric theories predict that there will be five Higgs bosons with different masses. The extent to which the electroweak symmetry is broken by each of the five depends on their couplings, but there should be five discovered eventually if the others exist.
There’s also a slightly different attitude to the research today. Before the Higgs boson was discovered it was known that the Standard Model was theoretically inconsistent without the Higgs boson. It had to be there in some form. It wasn’t going to be that we ran the LHC and saw nothing—no Higgs boson and nothing else. This is called a no-lose theorem. Now, having discovered the Higgs boson, you cannot guarantee that additional new phenomena exist that must be discovered at the LHC. In other words, the Standard Model itself, with the Higgs boson, is a theoretically consistent theory. Nevertheless, not all fundamental phenomena can be explained by Standard Model physics (such as neutrino masses, dark matter and the gravitational force), so we know that new phenomena beyond the Standard Model must be present at some very high-energy scale. However, there is no longer a no-lose theorem that states that these new phenomena must appear at the energy scale that is probed at the LHC.
How have the new capabilities of the LHC changed the game?
We have way more Higgs bosons; that’s really how it’s changed. Since the energy is higher we can potentially make heavier new particles.
There were about a million Higgs bosons produced in the first run of the LHC, and there will be more than twice that in the second run, but they can only find a small fraction of those in the detector because of background noise and some other things. It’s very hard. It takes clever experimenters. To find a couple of hundred Higgs you need to produce a million.
Most of the time the Higgs decays into something we can’t see in our detector. But as the measurements get better and better, experimentalists who have been extracting the couplings are quantifying more properties of the Higgs decays. So instead of just counting how many Higgs bosons decay to two Z bosons, they will look at where the two Z bosons are in the detector or the energy of the Z bosons.
Are there milestones you are looking forward to?
Confirming the Standard Model Higgs with even more precision. The decay the Higgs boson was discovered in—two photons—could happen for other kinds of particles as well. But the decay to W boson pairs is the one that you need for it to break the electroweak symmetry [the symmetry relating the electromagnetic and weak forces], which is what it should do according to the Standard Model.
So, one of the things we will see a lot of in the next year or two is better measurements of the Higgs decay into the bottom quarks. Within a few years, we should learn whether or not there are more Higgs bosons. Measuring the couplings to the desired precision will take 20 years or more.
There’s another thing people are thinking about, which is how the Higgs can be connected to the important topic of dark matter. We are working on models that establish such a connection, but most of these models, of course, have extra Higgs bosons. It’s even possible that one of those extra Higgs bosons might be invisible dark matter. So the question is whether the Higgs we can see tells us something about dark matter Higgs bosons or other dark matter particles, such as the invisible particles that are present in supersymmetry.
Are there other things still to learn?
There is one more interconnection, and that has to do with inflation, the early rapid expansion of the universe. Many models of this employ a Higgs boson, not the Standard Model Higgs boson, but something called an inflaton. There are many different things like this where a Higgs boson, in a generic sense, might be an important ingredient.
What to do with the data?
Physicists and scientific computing experts prepare for an onslaught of petabytes.

Rapid advances in computing constantly translate into new technologies in our everyday lives. The same is true for high-energy physics. The field has always been an early adopter of new technologies, applying them in ever more complex experiments that study fine details of nature’s most fundamental processes. However, these sophisticated experiments produce floods of complex data that become increasingly challenging to handle and analyze.
Researchers estimate that a decade from now, computing resources may have a hard time keeping up with the slew of data produced by state-of-the-art discovery machines. CERN’s Large Hadron Collider, for example, already generates tens of petabytes (millions of gigabytes) of data per year, and it will produce ten times more after a future high-luminosity upgrade.
Big data challenges like these are not limited to high-energy physics. When the Large Synoptic Survey Telescope begins observing the entire southern sky in never-before-seen detail, it will create a stream of 10 million time-dependent events every night and a catalog of 37 billion astronomical objects over 10 years. Another example is the future LCLS-II X-ray laser at DOE’s SLAC National Accelerator Laboratory, which will fire up to a million X-ray pulses per second at materials to provide unprecedented views of atoms in motion. It will also generate tons of scientific data.
To make things more challenging, all big data applications will have to compete for available computing resources, for example when shuttling information around the globe via shared networks.
What are the tools researchers will need to handle future data piles, sift through them and identify interesting science? How will they be able to do it as fast as possible? How will they move and store tremendous data volumes efficiently and reliably? And how can they possibly accomplish all of this while facing budgets that are expected to stay flat?
“Clearly, we’re at a point where we need to discuss in what direction scientific computing should be going in order to address increasing computational demands and expected shortfalls,” says Richard Mount, head of computing for SLAC’s Elementary Particle Physics Division.
The researcher co-chaired the 22nd International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2016), held Oct. 10-14 in San Francisco, where more than 500 physicists and computing experts brainstormed possible solutions.
Here are some of their ideas.
Exascale supercomputers
Scientific computing has greatly benefited from what is known as Moore’s law—the observation that the performance of computer chips has doubled every 18 months or so for the past decades. This trend has allowed scientists to handle data from increasingly sophisticated machines and perform ever more complex calculations in reasonable amounts of time.
Moore’s law, based on the fact that hardware engineers were able to squeeze more and more transistors into computer chips, has recently reached its limits because transistor densities have begun to cause problems with heat.
Instead, modern hardware architectures involve multiple processor cores that run in parallel to speed up performance. Today’s fastest supercomputers, which are used for demanding calculations such as climate modeling and cosmological simulations, have millions of cores and can perform tens of millions of billions of computing operations per second.
“In the US, we have a presidential mandate to further push the limits of this technology,” says Debbie Bard, a big-data architect at the National Energy Research Scientific Computing Center. “The goal is to develop computing systems within the next 10 years that will allow calculations on the exascale, corresponding to at least a billion billion operations per second.”
Software reengineering
Running more data analyses on supercomputers could help address some of the foreseeable computing shortfalls in high-energy physics, but the approach comes with its very own challenges.
“Existing analysis codes have to be reengineered,” Bard says. “This is a monumental task, considering that many have been developed over several decades.”
Maria Girone, chief technology officer at CERN openlab, a collaboration of public and private partners developing IT solutions for the global LHC community and other scientific research, says, “Computer chip manufacturers keep telling us that our software only uses a small percentage of today’s processor capabilities. To catch up with the technology, we need to rewrite software in a way that it can be adapted to future hardware developments.”
Part of this effort will be educating members of the high-energy physics community to write more efficient software.
“This was much easier in the past when the hardware was less complicated,” says Makoto Asai, who leads SLAC’s team for the development of Geant4, a widely used simulation toolkit for high-energy physics and many other applications. “We must learn the new architectures and make them more understandable for physicists, who will have to write software for our experiments.”
Smarter networks and cloud computing
Today, LHC computing is accomplished with the Worldwide LHC Computing Grid, or WLCG, a network of more than 170 linked computer centers in 42 countries that provides the necessary resources to store, distribute and analyze the tens of petabytes of data produced by LHC experiments annually.
“The WLCG is working very successfully, but it doesn’t always operate in the most cost-efficient way,” says Ian Fisk, deputy director for computing at the Simons Foundation and former computing coordinator of the CMS experiment at the LHC.
“We need to move large amounts of data and store many copies so that they can be analyzed in various locations. In fact, two-thirds of the computing-related costs are due to storage, and we need to ask ourselves if computing can evolve so that we don’t have to distribute LHC data so widely.”
More use of cloud services that offer internet-based, on-demand computing could be a viable solution for remote data processing and analysis without reproducing data.
Commercial clouds have the capacity and capability to take on big data: Google receives billions of photos per day and hundreds of hours of video every minute, posing technical challenges that have led to the development of powerful computing, storage and networking solutions.
Deep machine learning for data analysis
While conventional computer algorithms perform only operations that they are explicitly programmed to perform, machine learning uses algorithms that learn from the data and successively become better at analyzing them.
In the case of deep learning, data are processed in several computational layers that form a network of algorithms inspired by neural networks. Deep learning methods are particularly good at finding patterns in data. Search engines, text and speech recognition, and computer vision are all examples.
“There are many areas where we can learn from technology developments outside the high-energy physics realm,” says Craig Tull, who co-chaired CHEP 2016 and is head of the Science Software Systems Group at Lawrence Berkeley National Laboratory. “Machine learning is a very good example. It could help us find interesting patterns in our data and detect anomalies that could potentially hint at new science.”
At present, machine learning in high-energy physics is in its infancy, but researchers have begun implementing it in the analysis of data from a number of experiments, including ATLAS at the LHC and the Daya Bay neutrino experiment in China.
Quantum computing
The most futuristic approach to scientific computing is quantum computing, an idea that goes back to the 1980s when it was first brought up by Richard Feynman and other researchers.
Unlike conventional computers, which encode information as a series of bits that can have only one of two values, quantum computers use a series of quantum bits, or qubits, that can exist in several states at once. This multitude of states at any given time exponentially increases the computing power.
A simple one-qubit system could be an atom that can be in its ground state, excited state or a superposition of both, all at the same time.
“A quantum computer with 300 qubits will have more states than there are atoms in the universe,” said Professor John Martinis from the University of California, Santa Barbara, during his presentation at CHEP 2016. “We’re at a point where these qubit systems work quite well and can perform simple calculations.”
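To put that claim in numbers (a standard textbook statement rather than anything specific to Martinis's machine): a single qubit lives in a superposition of its two basis states,

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

and describing an $n$-qubit register takes $2^n$ such amplitudes. For $n = 300$ that is $2^{300} \approx 2\times10^{90}$ numbers, comfortably more than the roughly $10^{80}$ atoms estimated to be in the observable universe.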
Martinis has teamed up with Google to build a quantum computer. In a year or so, he says, they will have built the first 50-qubit system. Then, it will take days or weeks for the largest supercomputers to validate the calculations done within a second on the quantum computer.
We might soon find out in what directions scientific computing in high-energy physics will develop: The community will give the next update at CHEP 2018 in Bulgaria.
The origins of dark matter
Theorists think dark matter was forged in the hot aftermath of the Big Bang.

Transitions are everywhere we look. Water freezes, melts, or boils; chemical bonds break and form to make new substances out of different arrangements of atoms.
The universe itself went through major transitions in early times. New particles were created and destroyed continually until things cooled enough to let them survive.
Those particles include ones we know about, such as the Higgs boson or the top quark. But they could also include dark matter, invisible particles which we presently know only because of their gravitational effects.
In cosmic terms, dark matter particles could be a “thermal relic,” forged in the hot early universe and then left behind during the transitions to more moderate later eras. One of these transitions, known as “freeze-out,” changed the nature of the whole universe.
The hot cosmic freezer
On average, today's universe is a pretty boring place. If you pick a random spot in the cosmos, it’s far more likely to be in intergalactic space than, say, the heart of a star or even inside an alien solar system. That spot is probably cold, dark and quiet.
The same wasn’t true for a random spot shortly after the Big Bang.
“The universe was so hot that particles were being produced from photons smashing into other photons, photons hitting electrons, and electrons hitting positrons and producing these very heavy particles,” says Matthew Buckley of Rutgers University.
The entire cosmos was a particle-smashing party, but parties aren’t meant to last. This one lasted only a trillionth of a second. After that came the cosmic freeze-out.
During the freeze-out, the universe expanded and cooled enough for particles to collide far less frequently and catastrophically.
“One of these massive particles floating through the universe is finding fewer and fewer antimatter versions of itself to collide with and annihilate,” Buckley says.
“Eventually the universe would get large enough and cold enough that the rate of production and the rate of annihilation basically goes to zero, and you just get a relic abundance, these few particles that are floating out there lonely in space.”
Many physicists think dark matter is a thermal relic, created in huge numbers before the cosmos was a half-second old and lingering today because it barely interacts with any other particle.
A WIMPy miracle
One reason to think of dark matter as a thermal relic is an interesting coincidence known as the “WIMP miracle.”
WIMP stands for “weakly-interacting massive particle,” and WIMPs are the most widely accepted candidates for dark matter. Theory says WIMPs are likely heavier than protons and interact via the weak force, or at least via interactions related to the weak force.
The last bit is important, because freeze-out for a specific particle depends on what forces affect it and the mass of the particle. Thermal relics made by the weak force were born early in the universe’s history because particles need to be jammed in tight for the weak force, which only works across short distances, to be a factor.
“If dark matter is a thermal relic, you can calculate how big the interaction [between dark matter particles] needs to be,” Buckley says.
Both the primordial light known as the cosmic microwave background and the behavior of galaxies tell us that most dark matter must be slow-moving (“cold” in the language of physics). That means interactions between dark matter particles must be low in strength.
"Through what is perhaps a very deep fact about the universe,” Buckley says, “that interaction turns out to be the strength of what we know as the weak nuclear force."
That's the WIMP miracle: The numbers are perfect to make just the right amount of WIMPy matter.
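The arithmetic behind that statement is a standard back-of-the-envelope result from thermal freeze-out cosmology (quoted here from textbook treatments, not from Buckley directly): the relic abundance left over after freeze-out scales inversely with the annihilation cross section,

\[
\Omega_\chi h^2 \;\approx\; \frac{3\times10^{-27}\ \mathrm{cm^3\,s^{-1}}}{\langle\sigma v\rangle},
\]

so the measured dark matter density, $\Omega_\chi h^2 \approx 0.12$, corresponds to $\langle\sigma v\rangle$ of a few $\times\,10^{-26}\ \mathrm{cm^3\,s^{-1}}$, roughly the annihilation rate expected for a particle with weak-force couplings and a mass near the weak scale.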
The big catch, though, is that experiments haven't found any WIMPs yet. It's too soon to say WIMPs don't exist, but it does rule out some of the simpler theoretical predictions about them.
Ultimately, the WIMP miracle could just be a coincidence. Instead of the weak force, dark matter could involve a new force of nature that doesn't affect ordinary matter strongly enough to detect. In that scenario, says Jessie Shelton of the University of Illinois at Urbana-Champaign, "you could have thermal freeze-out, but the freeze-out is of dark matter to some other dark field instead of [something in] the Standard Model."
In that scenario, dark matter would still be a thermal relic but not a WIMP.
For Shelton, Buckley, and many other physicists, the dark matter search is still full of possibilities.
"We have really compelling reasons to look for thermal WIMPs," Shelton says. "It's worth remembering that this is only one tiny corner of a much broader space of possibilities."
Is there a dark energy particle?
A theoretical particle that adapts to its surroundings could explain the accelerating expansion of our universe.

Our universe grows a little bigger every day. Empty space is expanding, sweeping galaxies further and further apart. Even starlight traversing this swelling nothingness is stretched like a rubber band.
The astronomical evidence for the accelerating expansion of the universe is overwhelming. But what is pushing the universe apart?
Particle physicists endeavor to answer cosmic-sized questions like this using the most fundamental laws of nature. But this particular query has them in a pickle because it is unlike anything else.
“If we understand gravity correctly, then there is some other substance in the universe that makes up about two-thirds of the total energy density and that behaves totally differently from normal matter,” says theorist Amol Upadhye, a postdoc at the University of Wisconsin, Madison. “So the big mystery is, what is this stuff?”
This stuff is dark energy, but besides its ostensible pushing effect in the cosmos, scientists know little else about it. However, theorists like Upadhye suspect that if there really is something causing empty space to expand, there is a good chance that it produces a particle. But to mesh with cosmological observations, a dark energy particle would require a series of perplexing properties. For one, it would need to behave like a chameleon—that is, it would need to alter its properties based on its surroundings.

Cosmic chameleon: You come and go
In the depths of empty space, a chameleon particle might be almost massless, minimizing its gravitational attraction to other particles. But here on Earth (and in other densely populated regions of space), the chameleon would need to swell to a much larger mass. This would limit its ability to interact with ordinary matter and make it nearly invisible to most detectors.
“If matter were music, then ordinary matter would be like the keys on a piano,” Upadhye says. “Each particle has a discrete mass, just like each piano key plays a single note. But chameleon particles would be like the slide on a trombone and able to change their pitches based on the amount of background noise.”
In addition to a sliding mass, the chameleons would need to exert a negative pressure. Classically, pressure is the force particles exert on their container. When the container is made of matter (like the rubber of a balloon), it expands as the internal pressure increases, and relaxes back to normal when the pressure diminishes. But when the container is made of nothing—that is, the container is spacetime itself—the reverse effect happens. For instance, when a birthday balloon fills with air, the surrounding empty space contracts slightly. But as the balloon releases air and the pressure diminishes, space relaxes back to normal.
All known particles contract space as their pressure increases and relax space as their pressure approaches zero. But to actually expand space, a particle would need to exert a negative pressure—an idea which is totally alien in our macroscopic physical world but not impossible on a subatomic scale.
“This was actually Einstein’s idea,” Upadhye says. “If you put in a substance with a negative pressure into the equations of general relativity you get this accelerating expansion of universe.”
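The relevant equation from general relativity is the standard cosmological acceleration equation (written here in textbook form, not taken from Upadhye's own work):

\[
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),
\]

where $a$ is the scale factor of the universe, $\rho$ is the mass density and $p$ is the pressure. The expansion accelerates ($\ddot{a} > 0$) only if $p < -\rho c^2/3$; a cosmological constant, the simplest candidate for dark energy, sits at the extreme value $p = -\rho c^2$.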
A mass-shifting, space-expanding particle would be unlike anything else in physics. But physicists are hopeful that if such a particle exists, it would be abundant both in the depths of space and here in our own solar system.
Several experiments have searched indirectly for chameleon particles by closely monitoring the properties of ordinary matter and looking for any chameleon-like effects. But the CERN Axion Solar Telescope, or CAST experiment, is hoping to catch chameleons directly as they radiate from the sun.
“The sun is our biggest source of particles,” says Konstantin Zioutas, the spokesperson for the CAST experiment. “If chameleons exist, then they could copiously be produced in the sun.”
The CAST experiment is a specialized telescope that looks for rare and exotic particles emanating from the sun and the early universe. Zioutas and his colleagues recently installed a special magnifying glass inside CAST which collects and focuses particles onto a highly sensitive membrane suspended in a resonant electromagnetic cavity. Their hope is that if chameleon particles exist and are produced by the sun, they’ll see the very tiny pressure this particle flux should exert as it is reflected off the membrane when the sun is in view.
So far they haven’t seen anything unexpected, but new upgrades this winter will make their experiment even more sensitive to both solar chameleons and other exotic cosmic-sprung phenomena.
“The dark energy mystery is the biggest challenge in physics, and nothing we currently understand can explain it,” Zioutas says. “We need to look at the exotic of the exotica for possible solutions.”
#AskSymmetry Twitter chat with Leonardo Senatore
See theorist Leonardo Senatore’s answers to readers’ questions about parallel universes.
