The Bullet Cluster made dark matter apparent: Has it stood the test of time?
Dark matter was first proposed to explain the speed at which stars orbit the center of their galaxies. Ever since, the search for other lines of evidence for dark matter has been an interesting one.
One of the biggest successes appeared to be a collision of galaxy clusters called the Bullet Cluster, which provided one of the most spectacular and intuitive indications that dark matter was real. Our own report on the first Bullet Cluster evidence, written more than a decade ago, struck a pretty excited tone. And in the stories that followed about the existence of dark matter, we’ve tended to treat the Bullet Cluster as a gold standard: if your theory can’t explain the Bullet Cluster, it’s probably not much use.
The image above shows the remnant of two galaxy clusters that have collided, with a smaller “bullet” that has passed through the larger cluster. The energy of the collision is such that regular matter has been heated to very high temperatures, causing it to glow like crazy in the X-ray regime (which is shown in red). So, an X-ray telescope can produce a clear image of the matter distribution of both the bullet and the larger cluster. Even better, this collision appears to be almost side-on to us, so we have the best seat in the house to observe it.
In addition, both clusters have significant mass and act like gravitational lenses. By imaging objects that are behind the clusters and understanding how the images are distorted by the intervening lens, we can map out the Bullet Cluster’s mass. This is shown in blue.
Overlaying the two images shows that the mass is not where the matter is—hence, dark matter. This is only one of several collisions between clusters that show similar features—gravity without apparent matter—but the Bullet Cluster is, without doubt, the cleanest example of them all.
However, the Bullet Cluster shows something that is, arguably, more important: science works. Although the initial publication was touted as evidence for dark matter, it was quickly realized that the story may be more complicated than that. In fact, the story even started to shade toward the Bullet Cluster being evidence against dark matter. Theoretical physicists let their imaginations loose, bringing dark energy and modified theories of gravity to the table. But eventually, as the dust settled, thinking came back around to the original interpretation being correct.
Looking back at the Bullet Cluster today—how we got from here to there and back again—highlights how science works in that same clean manner. Data is king, but theory is the kingdom; you need both, and neither is set in stone.
Explaining the data raises questions
Shortly after the Bullet Cluster analysis was published back in 2006, scientists began to take a closer look at the data. Initially, it all seemed a bit puzzling. Attempts to model the collision didn’t seem to work.
One of the cottage industries in astrophysics is modeling galaxies and clusters of galaxies. You can, in your computer, create two clusters that approximately match the mass distribution of some observations, then ram them together at any speed you like. You can also produce a model that has lots of different clusters and look at the statistics of the collisions to see what the average cluster crash looks like.
This two-step process tells us different things. One model tells us, given the observational data, how big the clusters were and how fast they were approaching each other when they collided. The second model tells us, given our Universe, what size of galaxy clusters we should expect and how fast they typically collide.
For the collisional model, it is not enough to match the distribution of visible matter and gravitational lensing that was observed. There is a whole raft of features that the models need to reproduce. As we mentioned above, the normal matter is so hot that it produces lots of X-rays. But it’s not enough for a model to just produce X-rays; it should produce the same spectrum of X-rays—that is, we should be able to predict the relative brightness of each color of X-ray. Other constraints have to do with the material in the clusters. During the collision, matter (ordinary matter, that is) is transferred between clusters. Our observations provide an estimate of how much is transferred, and the models should predict the transfer.
The second model is all about probabilities. When you map the results of the first model onto models of many galaxy clusters randomly colliding with each other, you should find that the predicted collision is not too extraordinary. Yes, it is possible that we hit the equivalent of a winning lotto ticket. But if the models predict that the cluster collision requires pretty exceptional conditions, we should probably assume that we’ve made a mistake somewhere. Or, more precisely, for every collision that requires extreme conditions, we should have observed lots that are within the normal range. Since we don’t have lots of other collisions, the Bullet Cluster should be within that normal range.
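That kind of reasoning can be illustrated with a toy calculation: given a catalog of simulated collision speeds, how often does a collision at least as fast as the observed one occur? (All numbers below are made up for illustration; a real analysis would draw the speeds from an N-body simulation catalog.)

```python
import random

random.seed(0)

# Hypothetical infall speeds (km/s) for simulated cluster pairs.
simulated_speeds = [random.gauss(1100, 350) for _ in range(10_000)]

def tail_fraction(speeds, observed):
    """Fraction of simulated collisions at least as fast as the observed one."""
    return sum(s >= observed for s in speeds) / len(speeds)

p = tail_fraction(simulated_speeds, observed=3000)
print(f"Fraction of simulated collisions >= 3000 km/s: {p:.4f}")
```

If that fraction comes out vanishingly small, either we really did win a cosmic lottery, or one of the model assumptions is off.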
But the first papers published after the Bullet Cluster analysis showed that, maybe, just maybe, all is not well. Is the Bullet Cluster special?
The first indication that something could be amiss came from models that collided two clusters.
To collide two galaxy clusters, you have to decide what physics to include. In the first attempts, the models were relatively simple. Each cluster consisted of a number of ordinary matter and dark matter particles. These passed through each other, colliding in the case of ordinary matter (the dark matter ignores everything in its path). The increase in pressure from the collision drives up the temperature, causing the particles to emit X-rays. At the same time, the collision generates a shock wave that also drives the pressure up and produces an even hotter gas that emits more X-rays.
Although computationally intensive due to the number of particles, the model only contains the minimal physics of a fairly simple fluid. And the analysis was equally simple: does our model reproduce the major features in our observations? The researchers focused on the observed shock-front, mass distribution, and X-ray emissions. Their attempt to reproduce those features involved trying different combinations of collision speeds, densities, and total masses of the two clusters.
For a given set of initial conditions, any particular observational property could be reproduced. However, reproducing all the features at once required that the two clusters have a pretty restricted set of densities, mass ratios, and, most importantly, collision speeds.
As is typical of exciting new results, others were trying to do the same thing, all using slightly different models. But they all came to similar conclusions. The range of collision speeds also seemed wrong—it ranged from 2,700km/s through to a massive 4,050km/s. The entire range seemed high, considering that the predominant dark matter theory is titled “cold dark matter,” where cold is another way of saying slow-moving.
But we have no idea whether galaxy clusters obey any sort of speed limit (other than “slower than light”). To have more than an intuitive guess about whether these results were high, researchers needed to turn to a different type of model, one that models the motion of galaxy clusters. The first step to building this type of model is to decide what your universe is made of.
Because we can see it, we already know about how much ordinary matter is around, and we know the kind of speed that it is moving at. Dark matter is a different story, though. If you assume that dark matter exists, then you have to decide on how it is distributed and how fast it is moving.
The speed of dark (matter)
This is not an entirely free choice. The Big Bang and the fact that galaxies managed to form after that event both put limits on the speed and distribution of dark matter. The motion of the galaxies within a cluster also tells you about the distribution of dark matter. So, all of that observational data goes in as a starting point, which puts some limits on the model’s flexibility. After all, reality rules. If the starting point wouldn’t result in galaxies, for instance, then it will be rejected.
To find out if the collisional speed was exceptional, the next step was to examine models of cluster collisions. To do this, researchers created a large box (more than 4 gigaparsecs on a side) and filled it with dark matter—ordinary matter is a minor component, and most, but not all, models neglect it. The researchers let the model run to evolve the Universe. At different points in time, they would freeze the model and examine it. The researchers were searching for large clusters that had trapped a small cluster in their gravitational wells. Under these conditions, the small cluster would be doomed to collide with its bigger neighbor.
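The criterion for "trapped in the gravitational well" can be sketched with the standard escape-velocity test: a small cluster whose relative speed is below the escape velocity at its current separation is gravitationally bound and will eventually fall in. A toy check, with illustrative numbers only (this is not the actual selection code used in the simulations):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def is_bound(m_large_kg, separation_m, relative_speed_ms):
    """A small cluster is bound to the large one (and doomed to fall in)
    if its relative speed is below the escape velocity at its current
    separation: v < sqrt(2 G M / r)."""
    v_escape = math.sqrt(2 * G * m_large_kg / separation_m)
    return relative_speed_ms < v_escape

# Illustrative numbers: a ~1e15 solar-mass cluster, 2 Mpc away,
# with the pair closing at 1,000 km/s.
m_sun = 1.989e30  # kg
mpc = 3.086e22    # m
print(is_bound(1e15 * m_sun, 2 * mpc, 1.0e6))
```

The same one-liner applied to Earth recovers the familiar ~11.2 km/s escape velocity, which is a handy sanity check on the formula.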
To make the comparison with the Bullet Cluster fair, researchers restricted themselves to clusters with a mass ratio similar to that of the Bullet Cluster collision. In addition, they removed glancing blows, since the Bullet Cluster looks like it is close to a head-on collision.
Collisions seemed to happen on a fairly regular basis: the researchers found just under 80 examples of collisions that looked like the Bullet Cluster. Yet, none of them reproduced the details of the Bullet Cluster collision. Only one collision had an infall speed greater than 2,000km/s, which was still too slow—remember, all of the collision models had suggested a much higher speed.
Even more worryingly, Bullet Cluster collisions didn’t happen in the past. The model showed that all the collisions with the right mass ratio (i.e., a little cluster diving into a big cluster) happen in the present day. Out there in the Universe right now there are small clusters being sucked helplessly into the maw of large ones. In the past, though, the researchers didn’t find any of these pairings. Because we observe the Bullet Cluster today, we know that it happened in the past. We even know how long ago it happened.
So to fit the observable data, we should find small clusters colliding with large clusters in the past. Yet, our models showed nothing of the sort.
Instead, the past is dominated by similarly sized clusters that hurtle into each other. That process may be what creates the disparity in cluster sizes that, eventually, allows Bullet-Cluster-like collisions. But that takes time—according to these models, a long time.
The big issue turned out to be that, in every computer model, researchers have a number of choices to make: what physics to include, what to exclude, and what to approximate. Beyond that, there are also technical choices to be made: what is the size of the Universe you plan to simulate? What is the smallest feature that your model will deal with? These two are coupled choices that are limited by the amount of computational power available. And they really matter.
It turns out that the size of the model box and the resolution matter. Or, more precisely, the bigger the box and the more particles there are in the box, the further you can reach into the extremes of the speed distribution. For the type of model used in the initial analysis, high-speed collisions are expected to be rare. Later work suggested that the box needed to have a volume about eight times greater than any that had been tried so far if you wanted to see a single collision that matched the speeds predicted by the collision models.
But we didn’t know that at the time. The consensus then seemed to be that something was wrong—not necessarily with dark matter, and certainly not with the observations. The expectation was that either the models that simulate galaxy cluster formation and dynamics were missing something, or the model that collided clusters was missing something. But which one was off, and what exactly was it missing?
At this point, theoretical physicists start to get a bit excited—Results That Aren’t Explained™ means New Physics™. Maybe dark energy could speed the cluster up? And, if not dark energy, could we try Modified Newtonian Dynamics (MOND), an idea that replaces dark matter with a modified theory of gravity? In both cases, you could get greater collisional speeds. But they came at a cost: using a physical model that had some pretty sparse evidence supporting it.
In this case, all of these ideas turned out to be wrong, but considering them was an essential part of the process. Not considering them would suggest that we refused to reevaluate the correctness of fundamental physics. There are always ideas that should be up for discussion when experimental evidence and current theory fail to agree. They will almost always be wrong, but the “almost” aspect is rather critical.
In this case, even though there were differences between theory and observations, the story didn’t end with a new theory. Instead, researchers figured out how to resolve the differences. The process started by revisiting the model for the cluster collision. The original work had looked at only a few gross features: where was the center of mass for each cluster, what was the shape of the shock front, etc.
The model assumed that the clusters were, prior to colliding, spherically symmetric. That’s pretty unrealistic, and the huge discrepancy meant it was time to get serious. The clusters were turned into ellipsoids, and the effect of magnetic fields was added to the fluid-like physics. The latter is important because magnetic fields confine charged (ordinary) matter to move around field lines. This can increase pressures and temperatures.
Even without these additions, the old model already fit the gross features of the Bullet Cluster. Now it was also time to try to explain the details. Most astronomical data comes in the form of images and not necessarily visible light images. X-rays, radio telescope data, and many other parts of the spectrum are common. Some of this data is used to provide pixel-by-pixel estimates of the more interesting physical properties, like temperature.
This is where the researchers headed: comparing the models to the data at its full resolution. This involved a pixel-by-pixel comparison between the experimental data and the model predictions.
Doing that requires a bit of finesse. There is only one Bullet Cluster, and thus only one complete data set. The model has some unknowns that have to be set based on the experimental data, too. So how do you use the data to set up your model and still compare the results to that same data? In the end, a team used the gravitational lensing data and the low-energy part of the X-ray emission spectrum to fix the parameters in their model. They then compared the model’s output to all the rest of the data.
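This fit-on-one-subset, test-on-the-rest approach is essentially a holdout comparison. Here is a minimal sketch of the idea, using purely hypothetical stand-in arrays, a toy one-parameter "simulation," and a per-pixel chi-square as the goodness-of-fit measure (none of these names or numbers come from the actual analysis):

```python
import numpy as np

def pixel_chi2(model_img, data_img, sigma_img):
    """Pixel-by-pixel chi-square between a model image and observed data."""
    return float(np.sum(((model_img - data_img) / sigma_img) ** 2))

rng = np.random.default_rng(1)

# Stand-ins for the real maps: one set is used to fit parameters
# (cf. lensing + low-energy X-rays), the rest is held out for testing.
data_fit = rng.normal(1.0, 0.1, size=(64, 64))   # e.g., lensing map
data_test = rng.normal(1.0, 0.1, size=(64, 64))  # e.g., temperature map

def run_model(params):
    # Placeholder for an expensive cluster-collision simulation.
    return np.full((64, 64), params["amplitude"])

# "Fit" on one data set (here, a trivial grid search over one parameter)...
best = min(
    ({"amplitude": a} for a in np.linspace(0.5, 1.5, 21)),
    key=lambda p: pixel_chi2(run_model(p), data_fit, 0.1),
)
# ...then judge the model against the held-out data.
print(best, pixel_chi2(run_model(best), data_test, 0.1))
```

The payoff of the split is that the held-out comparison is an honest test: the model never got to tune itself against that part of the data.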
In the end, the model that incorporated the magnetic fields reproduced the observational data pretty well. Not perfectly, and in some ways poorly, but it was better than previous work. Even so, it wasn’t obvious that this got us anywhere, as the collision speed it predicted still seemed rather high (around 2,800km/s to 2,900km/s). There was, however, an important difference: to reproduce the collision, the main cluster had to be larger than predicted by the previous model.
With the physics of the collision apparently reproduced, the researchers returned to the collision speed. In their model, the collision speed was still a massive 2,800km/s, which is not that different from the values obtained by earlier researchers. Yet they claimed that this speed is OK. What is the difference?
The difference is in the mass of the larger cluster. The new model predicted a mass that is three times greater than previously thought. That gives an additional gravitational attraction as it draws in the small one, speeding the impact. Re-running models of clusters using a much bigger model universe and with many more galaxy clusters, researchers were able to see that clusters of this mass were not so uncommon, and there were plenty of collisions that looked Bullet-Cluster-like.
Most importantly, for the larger clusters, the collisional speeds were greater. The Bullet Cluster is still a bit above the average. What does that mean? It means that the Bullet Cluster collision is still exceptional, but only in the one-in-a-hundred sense and not in the one-in-100-million sense indicated by the earliest research.
Is the story entirely resolved? Probably not. I’m sure that the revised model will still need more scrutiny, but the Bullet Cluster—and science in general—is a slow-moving story. The original Bullet Cluster observations were announced about ten years ago; the revised model is only two years old. And this simply reflects the nature of science. For the most part, it’s about sweating the small stuff, because that is the only way to understand the big stuff. It’s a self-correcting process. It’s generating models that you know to be wrong and putting them out there to see how wrong they really are.
Science is, in short, playing with failure and loving it.