There's no speed limit in a superfluid universe. Now we know why.

Aug 22, 2020
What if the spacetime of our whole universe is a certain kind of superfluid?

This is a topic where I really think I can provide useful guidance/direction to any interested theoretical physicists:

The problem is how GR emerges from QM!
Which is actually how spacetime emerges from the quantum vacuum!

First, realize that GR is fully compatible w/ spacetime being a (super)fluid:

Then realize that the problem actually is how the spacetime superfluid (at the macro-scale) is created by the GAS-like dynamics of the virtual particles of the quantum vacuum (at the micro-scale)!
(As a (much simpler) analogy, think about how air (a gas) creates the atmosphere/weather (a fluid)!)

(Of course, the main goal would be to express these ideas mathematically, to create a new theory of quantum-relativity (which is beyond me)!)
 
The clickbait title in no way refers to our universe. The simplest general relativistic model has both the observed universal speed limit and an ideal "fluid", which in a classic model is an interaction-free ideal "gas". These are different things, hence the observation that particle isolation at the walls is needed in order to exceed the Landau velocity (the speed at which fluid excitations would attain negative energy).

So it is a curiosity, but it does not break the physics of superfluids or of the universe.
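The Landau criterion mentioned here can be sketched numerically. This is a minimal illustration, assuming a Bogoliubov-type excitation spectrum with made-up, order-of-magnitude parameters (not real helium data):

```python
import math

# Landau criterion: superfluid flow becomes unstable above
#   v_c = min over p of epsilon(p)/p,
# where epsilon(p) is the excitation spectrum. Above v_c, an excitation
# can carry negative energy in the moving frame, so friction appears.

# Bogoliubov-type spectrum of a weakly interacting condensate
# (illustrative parameters only):
#   epsilon(p) = sqrt((c*p)^2 + (p^2 / (2*m))^2)
c = 240.0    # sound speed, m/s (rough order of magnitude for He-4)
m = 6.6e-27  # particle mass, kg (roughly one helium-4 atom)

def epsilon(p):
    return math.sqrt((c * p) ** 2 + (p ** 2 / (2 * m)) ** 2)

# Scan epsilon(p)/p over a momentum grid (avoiding p = 0).
grid = [1e-28 + i * 1e-29 for i in range(100_000)]
v_c = min(epsilon(p) / p for p in grid)

# For this phonon-like spectrum the minimum sits at p -> 0, so the
# Landau critical velocity equals the sound speed c.
print(f"Landau critical velocity: {v_c:.1f} m/s")
```

For real helium-4 the critical velocity is set by the roton minimum instead, and the wall experiment in the article is about evading this bound; the sketch only shows how the minimization works.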
 
What if the spacetime of our whole universe is a certain kind of superfluid?

Well, as I note in my comment on the article, it already is (as you later note), but the superfluid physics here is not relevant to our universe ...

The problem is how GR emerges from QM! Which is actually how spacetime emerges from the quantum vacuum!

... so this is a side topic.

And again, these things are separate problems. It may look like spacetime "emerges" from general relativity, since classical general relativity describes spacetime "curvature" in the form of geodesics that massless particles follow. But those may simply be convenience tools without real physical instantiation, analogous to how other classical field theories describe field lines without real physical instantiation. If quantum field theory describes general relativity, particles will follow none of those geodesics; instead the wavefunction will describe volume-filling Feynman path distributions in a path integral formulation - the particle may visit any of those paths.

And in the simplest theory gravity is a field that we get from linearizing Einstein's equations, a field that in preliminary tests describes the full non-linear general relativistic model outcomes [ http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory ].

This plays nice with modern inflationary hot big bang cosmology, where we observe space to be on average flat over sufficiently large volumes, and both the latest inflation field characterization and the integrated cosmology results point to an eternal slow-roll inflation multiverse [the Planck 2018 cosmic background summary cosmology parameter paper suggests [an eternal] slow-roll inflation field; the BOSS 2020 galaxy survey summary cosmology paper short-lists Weinberg's multiverse as the explanation for the observed low vacuum energy].

In a flat universe the system can be approximated as a classical (non-relativistic) adiabatic free expansion, not doing any pressure work (since inflation sees to it that both fields are at a 0 K temperature and 0 J/K entropy vacuum, consistent with the 0 J/m^3 energy density of flat space*). That seems to suggest, like the naturally eternal inflation process, that spacetime has always been flat while fulfilling a Lorentz condition which preserves the laws of gravity and all other quantum fields within it. Gravity is the weakest force and so covers all energy densities up to the Planck scale of quantum particle fields, so it must always play nice with the Lorentz condition. But it, like space and time, may always have been. For no particular [sic!] reason.

* It is a true vacuum in that sense, despite wanting to roll down to local vacuum-energy universes, but it expands faster than the fluctuation-mediated local roll-downs, which expand to unmeasurable volumes on the inflation expansion-rate scale. It looks to me like a frustrated vacuum state, analogous to the magnetized state that solid-state magnets can take.

And of course a quantum field vacuum can only roughly be at 0 K; the quantum fluctuations set a smallest possible temperature. And so on for other measures.
 
Sep 20, 2020
Apparently helium-3 and helium-4 cannot freeze even at absolute zero - if this is true, then the universe will always have motion and cannot go back to a static, motionless state from now on.
When the Universe started to fall:
The Gravitational Instability Cosmological Theory on the Formation of the Universe.
The Theory:
(1) The expansion of the universe is a result of the "heat" contained therein;
(2) The source of the "heat" is the cosmic microwave background radiation at 3 kelvin, wherein;
(3) The microwave electromagnetic-nuclear energy was formed as a result of the interaction of two different static gravitational vacuum fields, causing gravitational instability and motion, void of matter, at this time, wherein; static gravitational field (1) began to go into "motion".

Therefore, the interaction of (2) motionless / static gravity vacuum fields could eventually create dust particles in the Universe that later form into stars, galaxies, planets, moons and other objects in or about their current locations.

Q: When did this motion start?
A: If a neutral particle is able to resist the universal motion, in theory that particle would go back in time. Going back in time, the neutral particle would then enter into (1) of the (2) motionless / static gravity vacuum fields void of motion and cause an imbalance and gravitational instability, and this interaction would create motion and energy particles.
Therefore, the interaction of (2) motionless / static gravity vacuum fields ( now, thanks to Mr. Peebles, one Baryonic gravity vacuum field and one Non-Baryonic vacuum field ) could eventually create dust particles in the Universe that later form into stars, galaxies, planets, moons and other objects in or about their current locations.

Q: What causes a gravitational static vacuum field in the first place ?
A: Pressure force is used to create a vacuum on Earth; perhaps an exotic something 100,000 times weaker than the force of gravity decays, causing a static, motionless gravity vacuum field.

Q: What created the motionless gravity vacuum fields in the first place ?

A: Vacuums are created by pressure, so the only answer I can think of is a created gravity vacuum pressure from the future that goes back in time to start motion in the past.
( theory needs improvement - help yourself )


They tell the expanding balloon story often, but if the expanding balloon can be measured, then it has a center. As for the rest of the theory: sending a man-made particle back in time ( the decay of a Higgs muon ? ), maybe at CERN, is pseudoscience, and even if it were possible, how could a tiny particle cause gravitational instability of 2 vacuum gravity fields at absolute zero 13.8-billion-plus years ago, which then interact and generate a microwave background field, which then creates dust particles that clump to form stars and galaxies, and cause one field to fall? ... I'm not happy with the theory. But to say I should accept a perpetually HOT blob that eventually expands and creates all the galaxies and has no place of occurrence other than everywhere is an even worse theory. ( an always hot blob ? )


"But to say I should accept a perpetually HOT blob that eventually expands and creates all the galaxies and has no place of occurrence other than everywhere is an even worse theory"
The universe is not hot now, so the perpetually hot blob is a wrong idea. But where did you get this from? Why is this a candidate idea for the start of the Universe?

That is a summary of the Big Bang Theory. Actually, the universe is hot at 3 kelvin; for the billion-ly-plus size of it, 3 kelvin is impressive. The CERN collider operates at 2 kelvin; my question is, when was the last time the universe was at 2 kelvin ? ( Some Big Bang theorists would say never. )



see the Neutroid Steady State Theory or
The Steady State Galaxy Theory

at

RUFUS'S GALAXY WEB PAGE

...

The Steady State Galaxy Theory
An Alternative To
The Big Bang Theory

--------------------------------------------------

Here is a completely different cosmological theory - it too has problems, such as failing to explain cosmic inflation, the redshift, and the microwave background, and it claims galaxies do not decay. Other than this it is a thoughtful theory, and I agree, based on helium-4: as helium-3 and helium-4 cannot freeze ( decay ), the universe is, I say, 4x older than 13.8 billion years. And having a Neutroid instead of a black hole in the center of some galaxies makes sense too. I've seen Hubble galaxy photos with both a hole ( funnel ) and a neutroid ( ball of light ) in galaxy centers, so it is up for debate.
 
Apparently helium-3 and helium-4 cannot freeze even at absolute zero - if this is true, then the universe will always have motion and cannot go back to a static, motionless state from now on.

Again, the paper and the article are not about our universe, despite the clickbait title - you likely did not read either and are not interested in the presented science.

A quantum particle field, which all the universe's forces and matter seem to be constituted of, always has fluctuations, even in a perfect vacuum [ https://en.wikipedia.org/wiki/Vacuum_state ].

When the Universe started to fall:
The Gravitational Instability Cosmological Theory on the Formation of the Universe. The Theory: (1) The expansion of the universe is a result of the "heat" contained therein;
(2) The source of the "heat" is the cosmic microwave background radiation at 3 kelvin, wherein;
(3) The microwave electromagnetic-nuclear energy was formed as a result of the interaction of two different static gravitational vacuum fields, causing gravitational instability ...

I'm not sure what "started to fall" means, though it is true that a classical Newtonian gravity model of the expansion derives it as analogous to a thrown mass (for the matter-dominated era), cf. cosmologist Susskind's cosmology lectures in Stanford's open web courses.

That leads up to how the expansion scale factor develops as a function of time depending on the inner energy state of the universe [ https://en.wikipedia.org/wiki/Scale_factor_(cosmology) ; note that the matter-dominated-era scale factor a(t) ∝ t^(2/3) describes precisely the zero-energy ("parabolic") trajectory of a thrown mass].

The radiation-dominated era, between the hot big bang start and about 50 kyr, until dilution made the universe enter the matter-dominated era for some billions of years (we are now in the dark energy dominated era), could be said to be driven by the universe's "heat", at temperatures ranging from close to the Planck temperature down towards the ~ 3,000 K at which the cosmic background radiation was released at ~ 400 kyr. (Which now, with space expanded by a factor of ~ 1,000, has stretched the cosmic background photons to a radiation temperature of ~ 3 K.)
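The round numbers in this paragraph can be checked in a few lines; a minimal sketch, using only the values quoted here:

```python
# The radiation temperature scales as T ∝ 1/a, and the matter-era scale
# factor grows as a(t) ∝ t^(2/3). Values are the round numbers quoted
# in the text, not precision cosmology.

T_release = 3000.0   # K, temperature when the CMB was released (~400 kyr)
expansion = 1000.0   # space has expanded ~1000-fold since then

T_today = T_release / expansion  # T ∝ 1/a
print(f"CMB temperature today: ~{T_today:.0f} K")

# Matter-dominated growth a(t) ∝ t^(2/3): doubling the age multiplies
# the scale factor by 2^(2/3) ≈ 1.59.
growth = 2 ** (2 / 3)
print(f"Scale-factor growth per doubling of time: x{growth:.2f}")
```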

That is far from your quantitative description, and any correspondence with reality stops there. E.g. there is only one type of gravity field.

That is a summary of the Big Bang Theory. Actually, the universe is hot at 3 kelvin; for the billion-ly-plus size of it, 3 kelvin is impressive. The CERN collider operates at 2 kelvin; my question is, when was the last time the universe was at 2 kelvin ? ( Some Big Bang theorists would say never. )

No, it obviously isn't [ https://en.wikipedia.org/wiki/Big_Bang ].

And it is better described today as inflationary hot big bang cosmology [
View: https://www.youtube.com/watch?v=P1Q8tS-9hYo
; the manuscript source is the popular astrophysicist Katie Mack]. One of the reasons for that is how the inflation that precedes the hot big bang period explains both the flatness and the homogeneity and isotropy of space, which an expansion alone cannot [see the video].

The universe has never before been at 2.7 K, obviously.

Here is a completely different cosmological theory

That link, if not the description, takes me to a pseudoscience site, with superstition thrown in for good measure.

So no, it isn't "a theory" at a guess. Even if there were some reasonable numbers in a meaningless attempt at quantification of the absurd, it couldn't be published (hence the site).
 
Sep 20, 2020
RUFUS'S GALAXY WEB PAGE
The Steady State Galaxy Theory
An Alternative To
The Big Bang Theory


" The Steady State Galaxy Theory overcomes the problem Hoyle had with his steady state theory while at the same time does not have the problems which the Big Bang Theory has. "
 
It's too bad our world is not usually at cryogenic temperatures. If it were, we could use all sorts of effects like superconductivity and superfluidity all the time. That would change our lives immensely. Maybe at some point physicists will find out how to make these fascinating states of matter occur at temperatures closer to our everyday room temperature. If that ever happens, we might have many additional tools to deal with climate change.
 
Sep 20, 2020
Bose–Einstein condensate from Wikipedia:

Bosons, a group of particles that includes the photon as well as atoms such as helium-4 (⁴He), are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter.

In 1938, Fritz London proposed the BEC as a mechanism for superfluidity in ⁴He and superconductivity.

On 5 June 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST-JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvins (nK).[9] Shortly thereafter, Wolfgang Ketterle at MIT realized a BEC in a gas of sodium atoms. For their achievements Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics.[10] These early studies founded the field of ultracold atoms, and hundreds of research groups around the world now routinely produce BECs of dilute atomic vapors in their labs.

Since 1995, many other atomic species have been condensed, and BECs have also been realized using molecules, quasi-particles, and photons.




Experimental observation
Superfluid helium-4

In 1938, Pyotr Kapitsa, John Allen and Don Misener discovered that helium-4 became a new kind of fluid, now known as a superfluid, at temperatures less than 2.17 K (the lambda point). Superfluid helium has many unusual properties, including zero viscosity (the ability to flow without dissipating energy) and the existence of quantized vortices. It was quickly believed that the superfluidity was due to partial Bose–Einstein condensation of the liquid. In fact, many properties of superfluid helium also appear in gaseous condensates created by Cornell, Wieman and Ketterle (see below). Superfluid helium-4 is a liquid rather than a gas, which means that the interactions between the atoms are relatively strong; the original theory of Bose–Einstein condensation must be heavily modified in order to describe it. Bose–Einstein condensation remains, however, fundamental to the superfluid properties of helium-4. Note that helium-3, a fermion, also enters a superfluid phase (at a much lower temperature) which can be explained by the formation of bosonic Cooper pairs of two atoms (see also fermionic condensate).
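The nanokelvin scale quoted in the passage above follows from the ideal-gas BEC formula. A hedged back-of-the-envelope sketch, assuming an illustrative peak density that is not a figure from the text:

```python
import math

# Ideal-gas BEC transition temperature:
#   T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3)
# evaluated for rubidium-87 at an assumed density, to show why the 1995
# condensates required temperatures of order 100 nK.

hbar = 1.0546e-34      # J*s
k_B = 1.3807e-23       # J/K
zeta_3_2 = 2.612       # Riemann zeta(3/2)
m_rb87 = 1.443e-25     # kg, mass of a rubidium-87 atom

n = 2.5e19             # m^-3, assumed peak density (illustrative)

T_c = (2 * math.pi * hbar**2 / (m_rb87 * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"Ideal-gas BEC transition temperature: {T_c * 1e9:.0f} nK")
```

For strongly interacting liquid helium-4 this ideal-gas formula only gives the right order of magnitude (a few kelvin versus the measured 2.17 K lambda point), which is the "heavily modified" caveat in the quoted text.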
 
Sep 20, 2020
When the universe started to fall; fall in this case means the following:

quote : " Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. "


 
[Reposting pseudoscience + superstition site text using crackpot font.]

[Reposting a BEC reference from Wikipedia.]

When the universe started to fall; fall in this case means the following:
quote : " Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. "

When you refer to specific quotes, you may want to use the full reference ("fall into the lowest accessible quantum state") and add the source. But our universe is obviously not a condensate, nor at a lowest accessible state, as evidenced by us writing this.

None of this has anything to do with a cosmological claim, as I pointed out at the start. The article has some science that can be discussed, but none of it is cosmology. If you are interested in current cosmology, the video I posted a link to is a pithy summary (with no mechanisms, but I gave references where those can be found in case that is your interest).

And please stop linking to pseudoscience, superstition or other known erroneous ideas if you are on a science site - they don't belong here.
 
Jan 22, 2021
Now what if superfluids are the key to faster-than-light travel? As in, what if you created a bubble of a superfluid substance around a spacecraft, or possibly even almost anything, if a proper form of protection could be found for what is inside the bubble of the substance? Anyways, as long as the bubble is maintained, could it allow what's inside it to move faster than light? You never know.....
 
I know the thread has long comments, but in my first I noted that this model is not applicable to our universe.

 
I know the thread has long comments, but in my first I noted that this model is not applicable to our universe.


I must agree with the comments made by Mr. Larsson.

The Steady State Galaxy Theory by Rufus Young, from 2005 which Mr. dizzo refers to, posits that a "neutroid" lies at the center of every galaxy and shoots beams of matter. ("The Central Core consists of a neutroid at the center and an obscuring mass of material trapped in the Neutroid's magnetic field. The areas from 1 to 2 are gigantic jets of gas which are being ejected by the Neutroid and are contained within its magnetic field. Star formation occurs in these areas. At point 2 the magnetic field of the Neutroid weakens to the extent that it no longer constrains the material within it and as the material continues to move outward it will now trace a spiral arc as per the previous illustrations in Figs. 1 & 2. At point 3 the hydrogen fuel has been consumed and although the remains of the burned out stars are still there they become invisible dark matter as they continue to travel to the top of their projectory and then fall back to the Neutroid.")

First of all, the universe is not experiencing a steady state.

In Edwin Hubble’s discovery of the cosmic expansion in the 1920s, he used entire galaxies as standard candles. But galaxies, coming in many shapes and sizes, are difficult to match against a standard brightness. They can grow fainter with time, or brighter - by merging with other galaxies. In the 1970s, it was suggested that the brightest member of a galaxy cluster might serve as a reliable standard candle. But in the end, all proposed distant galactic candidates were too susceptible to evolutionary change.

As early as 1938, Walter Baade, working closely with Fritz Zwicky, pointed out that supernovae were extremely promising candidates for measuring the cosmic expansion. Their peak brightness seemed to be quite uniform, and they were bright enough to be seen at extremely large distances. In fact, a supernova can, for a few weeks, be as bright as an entire galaxy. Over the years, however, as more and more supernovae were measured, it became clear that they were a rather heterogeneous group with a wide range of intrinsic peak brightnesses.

In the early 1980s, a new subclassification of supernovae emerged. Supernovae with no hydrogen features in their spectra had previously all been classified simply as type I. Now this class was subdivided into types Ia and Ib, depending on the presence or absence of a silicon absorption feature at 6150 Å in the supernova’s spectrum. With that minor improvement in typology, an amazing consistency among the type Ia supernovae became evident. Their spectra matched feature-by-feature, as did their “light curves” - the plots of waxing and waning brightness. The uniformity is all the more striking when their spectra are studied in detail as they brighten and then fade.

First, the outermost parts of the exploding star emit a spectrum that’s the same for all typical type Ia supernovae, indicating the same elemental densities, excitation states, velocities, etc. Then, as the exploding ball of gas expands, the outermost layers thin out and become transparent, letting us see the spectral signatures of conditions further inside. Eventually, if we watch the entire time series of spectra, we get to see indicators that probe almost the entire explosive event. It is impressive that the type Ia supernovae exhibit so much uniformity down to this level of detail. Such a “supernova CAT-scan” can be difficult to interpret, but it unfolds over just the weeks following a supernova explosion.

The best fit to the 1998 supernova data implies that, in the present epoch, the vacuum energy density ρ_Λ is larger than the energy density attributable to mass (ρ_m c²). Therefore, the cosmic expansion is now accelerating. If the universe has no large-scale curvature, as the recent measurements of the cosmic microwave background strongly indicate, we can say quantitatively that about 70% of the total energy density is vacuum energy and 30% is mass. In units of the critical density ρ_c, one usually writes this result as Ω_Λ ≡ ρ_Λ/ρ_c ≈ 0.7 and Ω_m ≡ ρ_m/ρ_c ≈ 0.3.
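The Ω values are ratios against the critical density ρ_c = 3H₀²/(8πG). A quick sketch, assuming a round H₀ = 70 km/s/Mpc (a value not stated in the text):

```python
import math

# Critical density of a flat universe: rho_c = 3 * H0^2 / (8 * pi * G).
# H0 = 70 km/s/Mpc is an assumed round value; the 0.7 / 0.3 split is
# the one quoted in the surrounding text.

G = 6.674e-11          # m^3 kg^-1 s^-2
Mpc = 3.086e22         # m
H0 = 70e3 / Mpc        # Hubble constant in s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)   # ~9e-27 kg/m^3
rho_lambda = 0.7 * rho_c                # vacuum (dark) energy density
rho_m = 0.3 * rho_c                     # matter density

print(f"critical density ~ {rho_c:.2e} kg/m^3")
print(f"vacuum energy    ~ {rho_lambda:.2e} kg/m^3")
```

That works out to only a few hydrogen atoms per cubic metre, which is why the particle theorists' 10^120 overestimate discussed next is so jarring.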

Why not a cosmological constant?

The story might stop right here with a happy ending - a complete physics model of the cosmic expansion - were it not for a chorus of complaints from the particle theorists. The standard model of particle physics has no natural place for a vacuum energy density of the modest magnitude required by the astrophysical data. The simplest estimates would predict a vacuum energy 10^120 times greater. (In supersymmetric models, it’s “only” 10^55 times greater.) So enormous a Λ would have engendered an acceleration so rapid that stars and galaxies could never have formed. Therefore it has long been assumed that there must be some underlying symmetry that precisely cancels the vacuum energy. Now, however, the supernova data appear to require that such a cancellation would have to leave a remainder of about one part in 10^120.

In the cosmic expansion, mass density becomes ever more dilute. Since the end of inflation, it has fallen by very many orders of magnitude. But the vacuum energy density ρ_Λ, a property of empty space itself, stays constant. It seems a remarkable and implausible coincidence that the mass density, just in the present epoch, is within a factor of 2 of the vacuum energy density.

Given these two fine-tuning coincidences, it seems likely that the standard model is missing some fundamental physics. Perhaps we need some new kind of accelerating energy - a “dark energy” that, unlike Λ, is not constant. Borrowing from the example of the putative “inflaton” field that is thought to have triggered inflation, theorists are proposing dynamical scalar-field models and other even more exotic alternatives to a cosmological constant.

By confirming the flat geometry of the cosmos, the recent measurements of the cosmic microwave background have also contributed to the confidence in the accelerating-universe results. Without the extra degree of freedom provided by possible spatial curvature, one would have to invoke improbably large systematic error to negate the supernova results. And if we include the low ρ_m estimates based on inventory studies of galaxy clusters, the Ω_m–Ω_Λ parameter plane shows a reassuring overlap for the three independent kinds of cosmological observations.

On to Super Massive Black Holes

Today, we know that the centers of galaxies contain supermassive black holes, and the Milky Way Galaxy harbors one called Sagittarius A*, or Sgr A*, not a "neutroid."

Beginning in the 1990s, Andrea Ghez, UCLA, and Reinhard Genzel, UC Berkeley, each led teams that used telescopes to peer at the center of the Milky Way, measuring the orbits of stars that zip around the galaxy’s heart. Those stars move so fast, both teams found, that only an incredibly compact, massive object such as a giant black hole could explain their trajectories (SN: 10/5/96). That work, which has continued in the decades since, helped solidify the existence of black holes, and helped confirm the predictions of general relativity (SN: 10/4/12).

The Milky Way’s central black hole, named Sagittarius A*, is a behemoth at 4 million times the mass of the sun. Scientists now think that such a supermassive black hole sits at the center of most large galaxies.
Astronomers have detected stars orbiting Sgr A* at speeds much greater than those of any other stars in the Milky Way. One of these stars, designated S2, was observed swinging around Sgr A* at speeds of over 5,000 km/s when it made its closest approach to the object. The radio source Sagittarius A* has a measured diameter of about 44 million kilometres, roughly equalling the distance from Mercury to the Sun (46 million km); the Schwarzschild radius of a 4-million-solar-mass black hole is about 12 million kilometers.

Sgr A* emits a large amount of IR, gamma-rays and X-rays. It appears motionless, but there are clouds of dust and gas orbiting it, which provides a clue to the nature of the object. Astronomers calculated its mass using Kepler’s laws and measuring the period and semi-major axis of the orbit of a star that came within 17 light hours of the object. They arrived at approximately 4 million solar masses. The only kind of object that can be that massive and have a radius of about 100 astronomical units is a black hole. The object was discovered on February 13 and 15, 1974 by astronomers Robert Brown and Bruce Balick at the National Radio Astronomy Observatory.

Using the highest resolution IR cameras available, astronomers have repeatedly observed the stars orbiting around Sgr A*. They have measured the orbit of a star that comes within 17 light-hours of the object in the core of our Galaxy, which is a distance that is only a few times larger than the orbit of Pluto around the Sun. Using Kepler's laws, if we measure the period and semi-major axis of this star's orbit around Sgr A*, we can calculate the mass of this object. The mass that results from the study of this star and other nearby stars is 4 million solar masses! The only type of object that astronomers believe can have a mass of approximately 4 million Suns, but a radius of about 100 AU, is a black hole. Clearly the supernova explosion of one star could never produce a single black hole with a mass so large, so this object must have formed in a different manner. Sgr A* is one example of a class of objects called Super-Massive Black Holes, or SMBHs.
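The Kepler's-third-law estimate described above can be sketched directly. The orbital parameters below are assumed round published values for the star S2 (period ~16 years, semi-major axis ~970 AU), not figures taken from the text:

```python
import math

# Kepler's third law solved for the enclosed central mass:
#   M = 4 * pi^2 * a^3 / (G * T^2)
# with a = semi-major axis and T = orbital period of a star orbiting
# Sgr A*. S2's parameters here are assumed approximate values.

G = 6.674e-11        # m^3 kg^-1 s^-2
AU = 1.496e11        # m
year = 3.156e7       # s
M_sun = 1.989e30     # kg

a = 970 * AU         # semi-major axis of S2's orbit (assumed)
T = 16 * year        # orbital period of S2 (assumed)

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Enclosed mass: ~{M / M_sun / 1e6:.1f} million solar masses")
```

The result lands at roughly 4 million solar masses, matching the figure quoted for Sgr A*.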

The black hole in the compact galaxy hosting Swift J1644+57 may be twice the mass of the four-million-solar-mass black hole lurking at the center of our own Milky Way galaxy. As a star falls toward a black hole, it is ripped apart by intense tidal forces. The gas is corralled into an accretion disk that swirls around the black hole and becomes rapidly heated to temperatures of millions of degrees.

The innermost gas in the disk spirals toward the black hole, where rapid orbital motion magnifies its magnetic field and creates dual, oppositely directed "funnels" through which some particles may escape. These particle jets driving matter at velocities greater than 90 percent the speed of light form along the black hole's spin axis. In the case of Swift J1644+57, one of these jets happened to point straight at Earth.

"The radio emission occurs when the outgoing jet, traveling at relativistic speeds, slams into the interstellar environment", explains Ashley Zauderer, lead author of the radio study. "By contrast, the X-rays arise much closer to the black hole, likely near the base of the jet." There the infalling material is heated to millions of degrees.

Theoretical studies of tidally disrupted stars suggested that they would appear as flares at optical and ultraviolet energies. Thanks to relativity, the brightness and energy of a black hole's jet is greatly enhanced when viewed head-on. The phenomenon, called relativistic beaming, in which particles and photons move at near light speed, explains why Swift J1644+57 was seen at X-ray energies and appeared so strikingly luminous.

When first detected with NASA's Swift satellite on March 28, the flares were initially assumed to signal a gamma-ray burst, one of the nearly daily short blasts of high-energy radiation often associated with the death of a massive star and the birth of a black hole in the distant universe. But as the emission continued to brighten and flare, astronomers realized that the most plausible explanation was the tidal disruption of a sun-like star seen as beamed emission.

Two days later, on March 30, EVLA observations by Zauderer's team showed a brightening radio source centered on a faint galaxy, at a redshift of z = 0.35, near Swift's position for the X-ray flares. These data provided the first conclusive evidence that the galaxy, the radio source and the Swift event were linked.

The observations show that the radio-emitting region is still expanding at more than half the speed of light. Tracking this expansion backward in time could confirm that the outflow formed at the same time as the Swift X-ray source.

According to relativity, looking "down the barrel" of a particle jet also distorts time, making the jet's evolution appear to unfold many times more slowly than it actually does. "We expect that within two years the jet should be about 12 light-years across", says Andreas Brunthaler from the Max-Planck-Institut für Radioastronomie in Bonn, co-author of the radio paper. "Despite the galaxy's enormous distance of 3.8 billion light-years, this is large enough that the jet will be resolvable using the VLBI technique." Very Long Baseline Interferometry (VLBI) combines data from widely separated radio telescopes to emulate a single telescope nearly the size of Earth. For the observations of Swift J1644+57, the VLBA network in the U.S. and the 100 m Effelsberg radio telescope in Germany are jointly used as a virtual radio telescope spanning the Atlantic Ocean.
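A rough consistency check on that resolvability claim. This sketch uses the light-travel distance as a crude proxy for the angular-diameter distance (an approximation; at this redshift the two differ by tens of percent) and assumes a ~22 GHz observing band on a ~8000 km transatlantic baseline:

```python
import math

# Is a ~12 light-year jet at ~3.8 billion light-years resolvable by
# transatlantic VLBI? Small-angle approximation: theta = size / distance.
# Assumptions: light-travel distance stands in for angular-diameter
# distance; 22 GHz band and an 8000 km VLBA-Effelsberg baseline.

ARCSEC_PER_RAD = 206265.0

jet_size_ly = 12.0
distance_ly = 3.8e9
theta_mas = (jet_size_ly / distance_ly) * ARCSEC_PER_RAD * 1e3  # milliarcsec
print(f"Jet angular size ~ {theta_mas:.2f} mas")

# Diffraction limit of a baseline B at wavelength lambda: theta ~ lambda / B.
wavelength_m = 0.013  # ~22 GHz (assumed observing band)
baseline_m = 8.0e6    # ~8000 km baseline
resolution_mas = (wavelength_m / baseline_m) * ARCSEC_PER_RAD * 1e3
print(f"VLBI resolution ~ {resolution_mas:.2f} mas")
```

The jet's expected angular size (several tenths of a milliarcsecond) comes out larger than the baseline's diffraction limit, consistent with Brunthaler's statement.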

"Incredibly, this source is still producing X-rays and may remain bright enough for Swift to observe into next year," said David Burrows, a professor of astronomy at Penn State University, lead scientist for the mission's X-Ray Telescope (XRT) instrument team. "It behaves unlike anything we've seen before."

The origin of the photon emission is still unclear. There are clearly at least two prominent peaks in the spectral energy distribution: one in the far IR and one in the hard X-ray band. They can be modelled as direct synchrotron emission (single-component model) from radio to X-rays, with strong dust extinction in the optical/UV band. Alternatively, the radio/IR peak is the direct synchrotron emission and the X-ray peak is due to inverse Compton scattering of external photons, most likely disc photons (two-component blazar model). A third possibility is that the X-ray emission is due to inverse Compton emission at the base of the jet, while the radio/IR synchrotron emission comes from the forward shock at the interface between the head of the jet and the interstellar medium.

Two studies appearing in the Aug. 25 issue of the journal Nature provide new insights into a cosmic accident that has been streaming X-rays toward Earth since late March. That's when NASA's Swift satellite first alerted astronomers to intense and unusual high-energy flares from a new source in the constellation Draco.



See: http://www.astro.ucla.edu/~ghezgroup/gc/animations.html

See: https://arxiv.org/pdf/1710.04659.pdf

See: http://www.aprim2014.org/download/APRIM_2014_Proceedings_File/107_S4-475.pdf

See: https://www.space.com/universe-expanding-fast-new-physics.html
 
I must agree with the comments made by Mr. Larsson.

Thank you, and thank you for an interesting, detailed and thoughtful comment!

I haven't followed the history of black holes much until the LIGO/Virgo detections of stellar-mass black hole merger events and what multimessenger observations teach us, so I'll start with my pithy response there.

On the history of super massive black holes, and especially Sgr A*, one can note that Ghez and Genzel (as well as theorist Roger Penrose) were awarded the Nobel Prize in Physics 2020 for their discovery [ https://www.nobelprize.org/prizes/physics/2020/press-release/ ].

Besides their role in regulating galaxy star formation rates [ https://www.nature.com/articles/nature24999 ], super massive black holes are useful as laboratories for studying black hole characteristics.

The circularity of the M87* super massive black hole shadow suggests that black hole event horizons have "no hair" (are featureless) [ https://www.sciencemag.org/news/202...ts-are-finally-seeing-black-holes-or-are-they ].

And timing observations of the OJ 287 system, in which one super massive black hole orbits through the accretion disk of another, concur [ https://www.universetoday.com/14593...ng-through-its-accretion-disk-every-12-years/ ].

I can be more verbose on the universe expansion rate measurements, the vacuum energy and its finetuning. In fact, since I was preparing a quantitative response elsewhere on the latter I may or may not come back later and edit or append the part here.

Once again, my historical chops on supernova observations are meager, since I picked up after the two independent 1998 surveys that detected the vacuum energy. I tend to divide the observations into two clusters whose results mostly stand in a few percent tension, i.e. the low-z local measurements that are often ladder based, such as the supernova results, and the high-z integrative measurements [ https://sci.esa.int/web/planck/-/60504-measurements-of-the-hubble-constant ].

The tension between these two clusters is the remaining one in the modern ΛCDM cosmology and can have many causes.
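To put that "few percent tension" in statistical terms, a minimal sketch comparing the eBOSS inverse-distance-ladder value quoted later in this thread (68.20 ± 0.81 km/s/Mpc) with a representative local distance-ladder value (73.2 ± 1.3 km/s/Mpc, roughly the Cepheid-calibrated supernova result; that figure is an assumption here, not taken from this thread):

```python
import math

# Significance of the Hubble tension in standard deviations, assuming
# independent Gaussian errors:
#   sigma = |H0_local - H0_highz| / sqrt(err_local^2 + err_highz^2).
# The high-z value is the eBOSS result quoted in this thread; the local
# value is an assumed representative ladder measurement.

h0_high_z, err_high_z = 68.20, 0.81  # km/s/Mpc (eBOSS)
h0_local, err_local = 73.2, 1.3      # km/s/Mpc (assumed local ladder value)

tension_sigma = abs(h0_local - h0_high_z) / math.hypot(err_high_z, err_local)
print(f"Tension ~ {tension_sigma:.1f} sigma")
```

A roughly 3-sigma discrepancy: large enough to take seriously, small enough that systematics in either cluster of measurements could still close it.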

If we take the supernova data first, it is, as noted, ladder dependent; but in my probably historically naive picture it is also data sparse and contains two populations.

But he thinks cosmologists will run into trouble as they put their theories to more rigorous tests that require more precise standard candles. “Supernovae could be less useful for precision cosmology,” he says.

Astronomers already knew the peak brightness of type Ia supernovae isn’t perfectly consistent. To cope, they have worked out an empirical formula, known as the Phillips relation, that links peak brightness to the rate at which the light fades: Flashes that decay slowly are overall brighter than those that fade quickly. But more than 30% of type Ia supernovae stray far from the Phillips relation. Perhaps low-mass D6 explosions can explain these oddballs, Shen says. For now, those who wield the cosmic yardstick will need to “throw away anything that looks weird,” Gaensicke says, and hope for the best.

[ https://www.sciencemag.org/news/202...o-nuclear-unexpected-trigger-pairs-dead-stars ]

That leads to uncertainties in the determination of cosmological parameters when different spectral techniques give discordant results.

[ https://iopscience.iop.org/article/10.3847/1538-4357/abb140 ]

There is also a group of intermediate results. One of the more interesting is based on massive-galaxy statistics, despite the historical problems. Looking at nearly 12,000 galaxies with z < 1.3, different methods of estimating distances yielded consistent values for all of them, resulting in H_0 = 70 km s^-1 Mpc^-1.

[ https://iopscience.iop.org/article/10.3847/1538-3881/abafba ]

To the high-z integrative surveys we can now add the eBOSS galaxy survey 20-year results as well as the recent ACT cosmic microwave background survey, both of which agree with the earlier WMAP and Planck results.

The inverse distance ladder measurement under this model yields H_0 = 68.20 ± 0.81 km s^-1 Mpc^-1, remaining in tension with several direct determination methods; the BAO data allow Hubble constant estimates that are robust against the assumption of the cosmological model. In addition, the BAO data allow estimates of H_0 that are independent of the CMB data, with similar central values and precision under a ΛCDM model.

[ https://arxiv.org/abs/2007.08991 ]

“Now we’ve come up with an answer where Planck and ACT agree,” said Simone Aiola, a researcher at the Flatiron Institute’s Center for Computational Astrophysics and first author of one of two papers. “It speaks to the fact that these difficult measurements are reliable.”

[ https://www.sciencedaily.com/releases/2021/01/210104131925.htm ]

ΛCDM is a good fit. The best-fit model has a reduced chi^2 of 1.07 (PTE=0.07) with H_0=67.9±1.5 km/s/Mpc.

[ https://arxiv.org/pdf/2007.07289.pdf ]

Perhaps the most fascinating reconciliation would be, besides the cosmic-ladder results sampling two supernova populations, if magnetic fields from before recombination were to shift the CMB results towards the median 70 km s^-1 Mpc^-1, which is also the result of the fit to nearby galaxies:

Here we show that accounting for the enhanced recombination rate due to additional small-scale inhomogeneities in the baryon density may solve both the H_0 and the S_8 − Ω_m tensions. The additional baryon inhomogeneities can be induced by primordial magnetic fields present in the plasma prior to recombination. The required field strength to solve the Hubble tension is just what is needed to explain the existence of galactic, cluster, and extragalactic magnetic fields without relying on dynamo amplification.

Allowing for clumping using Model 1 makes the decisive difference, moving the best fit to H_0 = 71.03 ± 0.74 km s^-1 Mpc^-1 … This means that Planck+H3 M1 is essentially as good a fit to CMB as the Planck ΛCDM.

[ https://arxiv.org/abs/2004.09487 ]

If one or both of these promising leads comes to fruition, the current simple 6-parameter ΛCDM theory survives unscathed, without any need for extraordinary new physics or extraordinary new observations to support it. That is a much better position than the one we started with last year.

Moving on to the vacuum energy density and its observed finetuning: the failure to find natural-range supersymmetric WIMPs at the LHC, at ACME (an electron sphericity experiment), and in Fermi-LAT observations of the Milky Way core, as well as the finding of a 3D-flat universe, bodes ill for string theory, and in the latter case for its theoretical stability. So it seems we have a ~10^-100 part finetuning to contend with.
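To put a number on that finetuning, a minimal sketch comparing the observed vacuum energy density (using the H_0 ≈ 67.9 km/s/Mpc quoted in this thread and an assumed Ω_Λ ≈ 0.69) with the naive quantum-field-theory estimate of one Planck energy per Planck volume. The Planck-cutoff ratio comes out near 10^-123; lower cutoff scales give less extreme but still tiny ratios, which is the sense of the order-of-magnitude figure above:

```python
import math

# Ratio of the observed vacuum energy density to the naive Planck-cutoff
# estimate. Constants are standard CODATA-level values; Omega_Lambda is
# an assumed dark-energy fraction.

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s
Mpc = 3.086e22    # metres per megaparsec

H0 = 67.9e3 / Mpc    # s^-1, from the eBOSS/ACT fits quoted in this thread
omega_lambda = 0.69  # assumed dark-energy fraction of the critical density

# Observed: rho_vac = Omega_Lambda * rho_crit * c^2, rho_crit = 3 H0^2 / (8 pi G).
rho_vac = omega_lambda * 3 * H0**2 / (8 * math.pi * G) * c**2  # J/m^3

# Naive QFT estimate: one Planck energy per Planck volume.
l_planck = math.sqrt(hbar * G / c**3)  # Planck length, m
e_planck = math.sqrt(hbar * c**5 / G)  # Planck energy, J
rho_planck = e_planck / l_planck**3    # J/m^3

ratio = rho_vac / rho_planck  # ~ 10^-123
print(f"observed / Planck-cutoff vacuum energy density ~ 10^{math.log10(ratio):.0f}")
```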

I'll end for now with a quote from the mentioned eBOSS survey that bears on both the vacuum energy density and its finetuning.

Nevertheless, the observed consistency with flat ΛCDM at the higher precision of this work points increasingly towards a pure cosmological constant solution, for example, as would be produced by a vacuum energy finetuned to have a small value. This fine-tuning represents a theoretical difficulty without any agreed-upon resolution and one that may not be resolvable through fundamental physics considerations alone (Weinberg 1989; Brax & Valageas 2019). This difficulty has been substantially sharpened by the observations presented here.

[ https://arxiv.org/pdf/2007.08991.pdf ]

The Weinberg reference is of course one where he lays out his "anthropic multiverse" hypothesis, which was later tested by the vacuum energy density observations.

An interesting consequence of this argument is that Λ should not be zero, but only small enough for life to exist. Weinberg (1987, 1989) argues that this bound is very close to the observational limits. The possibility that Λ is small for anthropic reasons is therefore of interest to astronomers, since they should be able to detect a nonvanishing value.

[ http://articles.adsabs.harvard.edu/full/1992ARA&A..30..499C ; Carroll, S. M., Press, W. H., & Turner, E. L., "The cosmological constant", Annual Review of Astronomy and Astrophysics, vol. 30, 1992 ]
 