There is a problem with the Universe.
Or possibly there is a problem with how we observe it. Either way, something is fishy.
In short, the Universe is expanding. There are a lot of different ways to measure this expansion. The good news is that all these methods get roughly the same number for it. The bad news is that they don’t get exactly the same number. One group of methods gets one number, and another group gets a different one.
This discrepancy has been around for some time, and it isn’t getting better. In fact, it’s getting worse (as astronomers put it, there is a growing tension between the methods). The big difference between the two groups is that one set of methods looks at things relatively nearby in the Universe, and the other at things much farther away. Either we are doing something wrong, or the Universe is doing something different far away than it is here.
A new paper just published uses a clever method to measure the expansion by looking at nearby galaxies, and what it finds matches the other “nearby object” methods. Which may or may not help.
Okay, backing up … we’ve known for a century that the Universe is expanding. We see that galaxies are all moving away from us, and the farther away a galaxy is, the faster it appears to recede. From what we can tell, there is a tight relationship between a galaxy’s distance and how fast it seems to move away. So, say, a galaxy 1 megaparsec* (abbreviated Mpc) away may be moving away from us at 70 kilometers per second, and one twice as far (2 Mpc) moves away twice as fast (140 km/sec).
This proportionality seems to hold out to great distances, so we call it the Hubble constant, or H0 (pronounced “H naught”), after Edwin Hubble, who was one of the first to propose the idea. It is measured in the odd units of kilometers per second per megaparsec (in other words, speed per distance; something moves away faster the farther away it is).
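If you like seeing that proportionality written out as actual arithmetic, here’s a minimal sketch in Python. The 70 km/sec/Mpc value is just the illustrative number from above, and the function name is mine, not anything from the paper:

```python
# Minimal sketch of Hubble's law: recession velocity is proportional to distance.
# The H0 value here is just the illustrative ~70 km/s/Mpc from the example above.

H0 = 70.0  # Hubble constant in km/s per Mpc (illustrative value)

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

for d in (1.0, 2.0, 10.0):
    print(f"{d:4.1f} Mpc -> {recession_velocity(d):6.1f} km/s")
# 1 Mpc -> 70 km/s, 2 Mpc -> 140 km/s, 10 Mpc -> 700 km/s
```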
Methods that look at closer objects, such as stars in nearby galaxies, exploding stars, and the like, get an H0 of about 73 km/sec/Mpc. But methods that use more distant things, like the cosmic microwave background and baryon acoustic oscillations, get a smaller number, more like 68 km/sec/Mpc.
They are close, but they are not the same. And given that the methods in both groups all seem internally consistent, that’s a problem. What’s going on?
The new paper uses a clever method called surface brightness fluctuations. It’s a fancy name, but the idea behind it is actually pretty intuitive.
Imagine you are standing at the edge of a forest, right in front of a tree. Because you are so close, you see only that one tree in your field of view. Back up a bit and you’ll be able to see more trees. Back up farther and you’ll see even more.
Same with galaxies. Look at a nearby one with a telescope. In a given pixel of the camera, you may see ten stars, all blurred together into that single pixel. Just due to statistics, another pixel may see 15 (making it 50% brighter than the first pixel), and another 5 (half as bright as the first).
Now look at a galaxy that is the same in every way, but twice as far away. In one pixel you may see 20 stars, and in others you may see 27 and 13 (a difference of ~35%). At ten times the distance you might see 120, 105, and 90 (a difference of roughly 10%). Note that I am simplifying things hugely here and just making up numbers as an example. The point is, the farther away a galaxy is, the smoother its brightness distribution looks (the difference between pixels is smaller compared to the total in each pixel). Not only that, it is smoother in a way that can be measured and assigned a number.
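If you want to play with this idea yourself, here’s a toy simulation (just Poisson counting statistics with made-up star counts, not anything from the actual paper). The fractional pixel-to-pixel scatter shrinks as the average number of stars per pixel grows, roughly as one over the square root of that average:

```python
# Toy demonstration of why a more distant galaxy looks smoother:
# more stars land in each pixel, so the *fractional* pixel-to-pixel fluctuation
# shrinks (Poisson statistics: scatter ~ sqrt(N), so scatter/N ~ 1/sqrt(N)).
import numpy as np

rng = np.random.default_rng(42)

for mean_stars_per_pixel in (10, 40, 1000):
    pixels = rng.poisson(mean_stars_per_pixel, size=100_000)
    fractional_scatter = pixels.std() / pixels.mean()
    print(f"mean {mean_stars_per_pixel:5d} stars/pixel -> "
          f"fractional scatter {fractional_scatter:.1%}")
# The scatter drops as the mean count rises; that smoothness is what the
# surface brightness fluctuation method turns into a distance.
```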
It’s actually more complicated than that. If a galaxy is busily making stars in one section, that throws off the numbers, so it’s best to look at elliptical galaxies, which haven’t made new stars in billions of years. The galaxy also has to be close enough to get good statistics, which limits the method to galaxies about 300 million light-years away or closer. You also have to account for dust, background galaxies in the images, star clusters, the fact that galaxies have more stars toward their centers, and … and … and …
But all of these effects are well understood and fairly easy to correct for.
When they did all this, the number they got for H0 was (drum roll …) 73.3 km/sec/Mpc (with an uncertainty of roughly ±2 km/sec/Mpc), in line with the other nearby methods and quite different from the group using distant methods.
In a way that’s expected, but again, it lends credibility to the idea that we are missing something important here.
All the methods have their issues, but their uncertainties are quite small. Either we are seriously underestimating those uncertainties (always possible, but looking a bit unlikely at this point), or the Universe is behaving in a way we did not expect.
If I had to bet, I would go with the latter.
Why? Because this has happened before. The Universe is complicated. We have known since the 1990s that the expansion has deviated from being constant. Astronomers saw that very distant exploding stars were consistently farther away than a simple measurement indicated, which led them to realize that the Universe is expanding faster than it used to, which in turn led to the discovery of dark energy, the mysterious entity accelerating the Universal expansion.
When we look at very distant objects, we see them as they were in the past, when the Universe was younger. If the expansion rate of the Universe was different back then (say, 12 to 13.8 billion years ago) than it is now (within the last billion years or so), we would get two different values for H0. Or maybe different parts of the Universe are expanding at different rates.
If the rate of expansion has changed, that has profound implications. It means the Universe is not the age we think it is (we use the expansion rate to run the clock backward), which means it has a different size, which means the time it takes for things to happen is different. It means the physical processes that took place in the early Universe happened at different times, and may involve other processes that affect the expansion rate.
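To get a feel for how the age estimate depends on H0, here’s a rough back-of-the-envelope sketch. The quantity 1/H0, the “Hubble time,” only sets the age scale for a simple coasting universe (the real calculation folds in matter and dark energy), but it shows which way the number moves:

```python
# Back-of-the-envelope: the "Hubble time" 1/H0 sets the rough age scale of the
# Universe, so a different H0 means a different inferred age.
# (The real age calculation also depends on the mix of matter and dark energy.)

KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """1/H0 converted to billions of years."""
    return (KM_PER_MPC / h0_km_s_mpc) / SECONDS_PER_GYR

for h0 in (68.0, 73.3):
    print(f"H0 = {h0:5.1f} km/s/Mpc -> Hubble time ~ {hubble_time_gyr(h0):.1f} billion years")
# ~14.4 billion years for 68 versus ~13.3 for 73.3: not a huge gap, but enough to matter.
```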
So yes, it’s a mess. Either we don’t understand well enough how the Universe behaves, or we’re not measuring it correctly. Either way, it’s a huge pain. And we just don’t know which it is.
This new paper makes it seem even more likely that the discrepancy is real and that the Universe itself is to blame. But it is not conclusive. We have to keep at it, keep whittling down the uncertainties, keep trying new methods, and hopefully at some point we’ll have enough data to point at something and say, “AHA!”
It will be an interesting day. Our understanding of the cosmos will take a big leap when that happens, and then cosmologists will have to find something else to argue about. They will. It’s a great place, this Universe, and there’s a lot to enjoy.
* A parsec is a unit of length equal to 3.26 light-years (or 1/12 of a Kessel). I know, it’s a weird unit, but it has a lot of historical significance and is tied to many ways of measuring distance. Astronomers observing galaxies like to use the megaparsec as their unit of distance, where 1 Mpc is 3.26 million light-years. That’s a little longer than the distance between us and the Andromeda galaxy.