Astronomers Reimagine the Making of the Planets

Newborn star systems imaged by the ALMA telescope, featuring protoplanetary disks with rings, arcs, filaments and spirals, are among the observations changing the theory of how planets are made.

By Rebecca Boyle
6 June 2022

Start at the center, with the sun. Our middle-aged star may be more placid than most, but it is otherwise unremarkable. Its planets, however, are another story.

First, Mercury: More charred innards than fully fledged planet, it probably lost its outer layers in a traumatic collision long ago. Next come Venus and Earth, twins in some respects, though oddly only one is fertile. Then there’s Mars, another wee world, one that, unlike Mercury, never lost layers; it just stopped growing. Following Mars, we have a wide ring of leftover rocks, and then things shift. Suddenly there is Jupiter, so big it’s practically a half-baked sun, containing the vast majority of the material left over from our star’s creation. Past that are three more enormous worlds — Saturn, Uranus, and Neptune — forged of gas and ice. The four gas giants have almost nothing in common with the four rocky planets, despite forming at roughly the same time, from the same stuff, around the same star. The solar system’s eight planets present a puzzle: Why these?

Now look out past the sun, way beyond. Most stars harbor planets of their own. Astronomers have spotted thousands of these distant star-and-planet systems. But strangely, they have so far found none that remotely resemble ours. So the puzzle has grown harder: Why these, and why those?

The swelling catalog of extrasolar planets, along with observations of distant, dusty planet nurseries and even new data from our own solar system, no longer matches classic theories about how planets are made. Planetary scientists, forced to abandon decades-old models, now realize there may not be a grand unified theory of world-making — no single story that explains every planet around every star, or even the wildly divergent orbs orbiting our sun. “The laws of physics are the same everywhere, but the process of building planets is sufficiently complicated that the system becomes chaotic,” said Alessandro Morbidelli, a leading figure in planetary formation and migration theories and an astronomer at the Côte d’Azur Observatory in Nice, France.

Still, the findings are animating new research. Amid the chaos of world-building, patterns have emerged, leading astronomers toward powerful new ideas. Teams of researchers are working out the rules of dust and pebble assembly and how planets move once they coalesce. Fierce debate rages over the timing of each step, and over which factors determine a budding planet’s destiny. At the nexus of these debates are some of the oldest questions humans have asked ourselves: How did we get here? Is there anywhere else like here?

A Star and Its Acolytes Are Born
Astronomers have understood the basic outlines of the solar system’s origins for more than two and a half centuries. The German philosopher Immanuel Kant, who like many Enlightenment thinkers dabbled in astronomy, published a theory in 1755 that remains pretty much correct. “All the matter making up the spheres belonging to our solar system, all the planets and comets, at the origin of all things was broken down into its elementary basic material,” he wrote.

Indeed, we come from a diffuse cloud of gas and dust. Four and a half billion years ago, probably nudged by a passing star or by the shock wave of a supernova, the cloud collapsed under its own gravity to form a new star. It’s how things went down afterward that we don’t really understand.

Once the sun ignited, surplus gas swirled around it. Eventually, the planets formed there. The classical model that explained this, known as the minimum-mass solar nebula, envisioned a basic “protoplanetary disk” filled with just enough hydrogen, helium and heavier elements to make the observed planets and asteroid belts. The model, which dates to 1977, assumed planets formed where we see them today, beginning as small “planetesimals,” then incorporating all the material in their area like locusts consuming every leaf in a field.

“The model was just somehow making this assumption that the solar disk was filled with planetesimals,” said Joanna Drążkowska, an astrophysicist at the Ludwig Maximilian University of Munich and author of a recent review chapter on the field. “People were not considering any smaller objects — no dust, no pebbles.”

Joanna Drążkowska, an astrophysicist at Ludwig Maximilian University of Munich, uses computer simulations to explore the formation of planetesimals and planets out of dust grains swirling around young stars. Photo by Wieńczysław Bykowski

Astronomers vaguely reasoned that planetesimals arose because dust grains pushed around by the gas would have drifted into piles, the way wind sculpts sand dunes. The classical model had planetesimals randomly strewn throughout the solar nebula, with a statistical distribution of sizes following what physicists call a power law, meaning there are more small ones than big ones. “Just a few years ago, everybody was assuming the planetesimals were distributed in a power law throughout the nebula,” said Morbidelli, “but now we know it is not the case.”

The change came courtesy of a handful of silver parabolas in Chile’s Atacama Desert. The Atacama Large Millimeter/submillimeter Array (ALMA) is designed to detect light from cool, millimeter-size objects, such as dust grains around newborn stars. Starting in 2013, ALMA captured stunning images of neatly sculpted infant star systems, with putative planets embedded in the hazy disks around the new stars.

Astronomers previously imagined these disks as smooth halos that grew more diffuse as they extended outward, away from the star. But ALMA showed disks with deep, dark gaps, like the rings of Saturn; others with arcs and filaments; and some containing spirals, like miniature galaxies. “ALMA changed the field completely,” said David Nesvorny, an astronomer at the Southwest Research Institute in Boulder, Colorado.

The Atacama Large Millimeter/submillimeter Array (ALMA) in Chile’s Atacama Desert observes distant, dusty planet nurseries. Sergio Otarola (ESO/NAOJ/NRAO)

ALMA disproved the classical model of planetary formation. “We have to now reject it and start thinking about completely different models,” Drążkowska said. The observations showed that, rather than being smoothly dispersed through the disk, dust collects in particular places, as dust likes to do, and that is where the earliest planet embryos are made. Some dust, for instance, probably clumps together at the “snow line,” the distance from the star where water freezes. Recently, Morbidelli and Konstantin Batygin, an astronomer at the California Institute of Technology, argued that dust also clumps at a condensation line where silicates form droplets instead of vapor. These condensation lines probably cause traffic jams, curbing the rate at which dust falls toward the star and allowing it to pile up.

“It’s a new paradigm,” Morbidelli said.

Even before ALMA showed where dust likes to accrue, astronomers were struggling to understand how it could pile up quickly enough to form a planet — especially a giant one. The gas surrounding the infant sun would have dissipated within about 10 million years, so Jupiter would have had to collect most of its gas within that window. Dust must therefore have formed Jupiter’s core very soon after the sun ignited. The Juno mission to Jupiter showed that the giant planet probably has a fluffy core, suggesting it formed fast. But how?

The problem, apparent to astronomers since about the year 2000, is that turbulence, gas pressure, heat, magnetic fields and other factors would prevent dust from orbiting the sun in neat paths, or from drifting into big piles. Moreover, any big clumps would likely be drawn into the sun by gravity.

In 2005, Andrew Youdin and Jeremy Goodman, then of Princeton University, published a new theory for dust clumps that went part of the way toward a solution. A few years after the sun ignited, they argued, gas flowing around the star formed headwinds that forced dust to gather in clumps, and kept the clumps from falling into the star. As the primordial dust bunnies grew bigger and denser, eventually they collapsed under their own gravity into compact objects. This idea, called streaming instability, is now a widely accepted model for how millimeter-size dust grains can quickly turn into large rocks. The mechanism can form planetesimals about 100 kilometers across, which then merge with one another in collisions.

But astronomers still struggled to explain the creation of much bigger worlds like Jupiter.

In 2012, Anders Johansen and Michiel Lambrechts, both at Lund University in Sweden, proposed a variation on planet growth dubbed pebble accretion. According to their idea, planet embryos the size of the dwarf planet Ceres that arise through the streaming instability quickly grow much bigger. Gravity and drag in the circumstellar disk would cause dust grains and pebbles to spiral onto these objects, which would grow apace, like a snowball rolling downhill.

Merrill Sherman/Quanta Magazine

Pebble accretion is now a favored theory for how gas giant cores are made, and many astronomers argue it may be taking place in those ALMA images, allowing giant planets to form in the first few million years after a star is born. But the theory’s relevance to the small, terrestrial planets near the sun is controversial. Johansen, Lambrechts and five coauthors published research last year showing how inward-drifting pebbles could have fed the growth of Venus, Earth, Mars and Theia — a since-obliterated world that collided with Earth, ultimately creating the moon. But problems remain. Pebble accretion does not say much about giant impacts like the Earth-Theia crash, which were vital processes in shaping the terrestrial planets, said Miki Nakajima, an astronomer at the University of Rochester. “Even though pebble accretion is very efficient and is a great way to avoid issues with the classical model, it doesn’t seem to be the only way” to make planets, she said.

Morbidelli rejects the idea of pebbles forming rocky worlds, in part because geochemical samples suggest that Earth formed over a long period, and because meteorites come from rocks of widely varying ages. “It’s a matter of location,” he said. “Processes are different depending on the environment. Why not, right? I think that makes qualitative sense.”

Research papers appear nearly every week about the early stages of planet growth, with astronomers arguing about the precise condensation points in the solar nebula; whether planetesimals start out with rings that fall onto the planets; when the streaming instability kicks in; and when pebble accretion does, and where. People can’t agree on how Earth was built, let alone terrestrial planets around distant stars.

The five wanderers of the night sky — Mercury, Venus, Mars, Jupiter and Saturn — were the only known worlds besides this one for most of human history. Twenty-six years after Kant published his nebular hypothesis, William Herschel found another, fainter wanderer and named it Uranus. Then Johann Gottfried Galle spotted Neptune in 1846. Then, a century and a half later, the number of known planets suddenly shot up.

It started in 1995, when Didier Queloz and Michel Mayor of the University of Geneva pointed a telescope at a sunlike star called 51 Pegasi and noticed it wobbling. They inferred that it was being tugged at by a giant planet orbiting closer to it than Mercury does to our sun. Soon, more of these shocking “hot Jupiters” were seen orbiting other stars.
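The size of that wobble can be sketched with a back-of-the-envelope calculation. The snippet below is a minimal illustration, using roughly the published textbook values for 51 Pegasi b (orbital period about 4.23 days, minimum mass about 0.47 Jupiter masses, host star about 1.06 solar masses) — these numbers are assumptions drawn from standard estimates, not from this article — and the standard two-body formula for the radial-velocity semi-amplitude.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
M_JUP = 1.898e27     # Jupiter mass, kg
DAY = 86400.0        # seconds per day

def rv_semi_amplitude(period_days, m_planet_mjup, m_star_msun, e=0.0, sin_i=1.0):
    """Radial-velocity semi-amplitude K (m/s) a planet induces on its star.

    Standard two-body result:
    K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (M_star + m_p)^(2/3) / sqrt(1 - e^2)
    """
    P = period_days * DAY
    m_p = m_planet_mjup * M_JUP
    m_s = m_star_msun * M_SUN
    return ((2 * math.pi * G / P) ** (1 / 3)
            * m_p * sin_i / (m_s + m_p) ** (2 / 3)
            / math.sqrt(1 - e ** 2))

# 51 Pegasi b-like values (rough published estimates, assumed here)
K = rv_semi_amplitude(period_days=4.23, m_planet_mjup=0.47, m_star_msun=1.06)
print(f"stellar wobble ~ {K:.0f} m/s")  # tens of m/s, easily detectable
```

A Jupiter-mass planet in a days-long orbit swings its star by tens of meters per second, which is a large part of why hot Jupiters were the first exoplanets found; Jupiter itself, out at 5 AU, moves the sun by only about 12 m/s over a 12-year cycle.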

The exoplanet hunt took off after the Kepler space telescope opened its lens in 2009. We now know the cosmos is peppered with planets; nearly every star has at least one, and probably more. Most seem to have planets we lack, however: hot Jupiters, for instance, as well as a class of midsize worlds that are bigger than Earth but smaller than Neptune, uncreatively nicknamed “super-Earths” or “sub-Neptunes.” No star systems have been found that resemble ours, with its four little rocky planets near the sun and four gas giants orbiting far away. “That does seem to be something that is unique to our solar system that is unusual,” said Seth Jacobson, an astronomer at Michigan State University.

Enter the Nice model, an idea that may be able to unify the radically different planetary architectures. In the 1970s, geochemical analysis of the rocks collected by Apollo astronauts suggested that the moon was battered by asteroids 3.9 billion years ago — a putative event known as the Late Heavy Bombardment. In 2005, inspired by this evidence, Morbidelli and colleagues in Nice argued that Jupiter, Saturn, Uranus and Neptune did not form in their present locations, as the earliest solar nebula model held, but instead moved around 3.9 billion years ago. In the Nice model (as the theory became known), the giant planets changed their orbits wildly at that time, which sent an asteroid deluge toward the inner planets.

Merrill Sherman/Quanta Magazine

The evidence for the Late Heavy Bombardment is no longer considered convincing, but the Nice model has stuck. Morbidelli, Nesvorny and others now conclude that the giants probably migrated even earlier in their history, and that — in an orbital pattern dubbed the Grand Tack — Saturn’s gravity probably stopped Jupiter from moving all the way in toward the sun, where hot Jupiters are often found.

In other words, we might have gotten lucky in our solar system, with multiple giant planets keeping each other in check, so that none swung sunward and destroyed the rocky planets.

“Unless there is something to arrest that process, we would end up with giant planets mostly close to their host stars,” said Jonathan Lunine, an astronomer at Cornell University. “Is inward migration really a necessary outcome of the growth of an isolated giant planet? What are the combinations of multiple giant planets that could arrest that migration? It’s a great problem.”

There is also, according to Morbidelli, “a fierce debate about the timing” of the giant-planet migration — and a possibility that it actually helped grow the rocky planets rather than threatening to destroy them after they grew. Morbidelli just launched a five-year project to study whether an unstable orbital configuration soon after the sun’s formation might have helped stir up rocky remains, coaxing the terrestrial worlds into being.

The upshot is that many researchers now think giant planets and their migrations might dramatically affect the fates of their rocky brethren, in this solar system and others. Jupiter-size worlds might help move asteroids around, or they could limit the number of terrestrial worlds that form. This is a leading hypothesis for explaining the small stature of Mars: It would have grown bigger, maybe to Earth size, but Jupiter’s gravitational influence cut off the supply of material. Many stars studied by the Kepler telescope harbor super-Earths in close orbits, and scientists are split on whether those are likelier to be accompanied by giant planets farther out. Teams have convincingly shown both correlations and anti-correlations between the two exoplanet types, said Rachel Fernandes, a graduate student at the University of Arizona; this indicates that there’s not enough data yet to be sure. “That’s one of those things that is really fun at conferences,” she said. “You’re like, ‘Yeah, yell at each other, but which science is better?’ You don’t know.”

Recently, Jacobson came up with a new model that radically changes the timing of the Nice model migration. In a paper published in April in Nature, he, Beibei Liu of Zhejiang University in China and Sean Raymond of the University of Bordeaux in France argued that gas flow dynamics may have caused the giant planets to migrate only a few million years after they formed — 100 times earlier than in the original Nice model and probably before Earth itself arose.

Seth Jacobson, a planetary scientist at Michigan State University, and collaborators recently identified a rebound mechanism by which giant planets that have moved close to their stars might then move back out.
Credit: Derrick Turner, University Communications, Michigan State University


In the new model, the planets “rebounded,” moving in and then back out as the sun warmed up the gas in the disk and blew it off into oblivion. This rebound would have happened because, when a baby giant planet is bathed in a warm disk of gas, it feels an inward pull toward dense gas closer to the star and an outward pull from gas farther out. The inward pull is greater, so the baby planet gradually moves closer to its star. But after the gas begins to evaporate, a few million years after the star’s birth, the balance changes. More gas remains on the far side of the planet relative to the star, so the planet is dragged back out.
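That tug-of-war can be caricatured in a few lines of code. The following is only a toy model — the decay timescales, pull weights and migration strength below are invented for illustration and are not taken from the Liu, Raymond and Jacobson paper. It captures one qualitative point: if the inner-disk gas, which pulls the planet inward, evaporates faster than the outer-disk gas, the net drift first carries the planet in and then drags it back out.

```python
import math

# Toy migration model: net drift = (outward pull from outer gas)
#                                - (inward pull from inner gas).
# All numbers below are illustrative assumptions, not fitted values.
A0 = 5.0                 # starting orbital distance, AU
K_MIG = 0.5              # migration strength, AU per Myr
W_IN = 1.5               # inward pull is initially the stronger one
T_IN, T_OUT = 0.5, 2.0   # inner gas evaporates faster (e-folding times, Myr)
DT = 0.001               # Euler time step, Myr

def orbit_history(t_end=5.0):
    """Integrate the planet's orbital distance as the disk gas evaporates."""
    a, t = A0, 0.0
    history = []
    while t < t_end:
        g_in = math.exp(-t / T_IN)    # remaining inner-disk gas fraction
        g_out = math.exp(-t / T_OUT)  # remaining outer-disk gas fraction
        a += K_MIG * (g_out - W_IN * g_in) * DT  # simple Euler step
        history.append(a)
        t += DT
    return history

hist = orbit_history()
print(f"start {A0} AU, dip to {min(hist):.2f} AU, end at {hist[-1]:.2f} AU")
```

Running it shows the orbit shrinking while inner gas dominates, then rebounding outward once that gas is gone, which is the qualitative shape of the "in then back out" story, not a quantitative prediction.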

The rebound “is a pretty significant shock to the system. It can destabilize a very nice arrangement,” Jacobson said. “But this does a great job of explaining [features] of the giant planets in terms of their inclination and eccentricity.” It also tracks with evidence that hot Jupiters seen in other star systems are on unstable orbits — perhaps bound for a rebound.

Between condensation lines, pebbles, migrations and rebounds, a complex story is taking shape. Still, for now, some answers may be in hiding. Most of the planet-finding observatories use search methods that turn up planets that orbit close to their host stars. Lunine said he would like to see planet hunters use astrometry, or the measurement of stars’ movements through space, which could reveal distantly orbiting worlds. But he and others are most excited for the Nancy Grace Roman Space Telescope, set to launch in 2027. Roman will use microlensing, measuring how the light from a background star is warped by the gravity of a foreground star and its planets. That will let the telescope capture planets with orbital distances between Earth’s and Saturn’s — a “sweet spot,” Lunine said.

Nesvorny said modelers will continue tinkering with code and trying to understand the finer points of particle distributions, ice lines, condensation points and other chemistry that may play a role in where planetesimals coalesce. “It will take the next few decades to understand that in detail,” he said.

Time is the essence of the problem. Human curiosity may be unbounded, but our lives are short, and the birth of planets lasts eons. Instead of watching the process unfold, we have only snapshots from different points.

Batygin, the Caltech astronomer, compared the painstaking effort to reverse-engineer planets to trying to model an animal, even a simple one. “An ant is way more complicated than a star,” Batygin said. “You can perfectly well imagine writing a code that captures a star in pretty good detail,” whereas “you could never model the physics and chemistry of an ant and hope to capture the whole thing. … In planet formation, we are somewhere between an ant and a star.”

See: https://www.quantamagazine.org/how-are-planets-made-new-theories-are-taking-shape-20220609/

I have always enjoyed studying stars and planets. And the great mystery, in our solar system, has always been how our planets formed. The Nancy Grace Roman Space Telescope is set to launch in 2027; Roman will use microlensing, which measures how the light from a background star is warped by the gravity of a foreground star and its planets. Very cool, indeed.
Hartmann352
 
Very cool indeed.
But it does not affect the concept of self-organization and self-formation of regular patterns in accordance with mathematical "guiding principles", does it?
 
write4u -

When it comes to the "guiding principles of mathematics" you mention: describing the changing planetary alignment over time, and the physical processes at work throughout the formation of our solar system, requires a tremendous amount of mathematics to capture those processes and their effects on the two planet types, rocky and gaseous.

Planetary formation in our solar system has been a sticky proposition and difficult to quantify, seemingly forever.

Gas flow dynamics may have caused the giant planets to migrate only a few million years after they formed — 100 times earlier than in the original Nice model and probably before Earth itself arose. This appears to explain the current locations of Jupiter and Saturn.

In the Nice model (see above), the giant planets changed their orbits wildly at one time, which sent a tremendous asteroid deluge toward the inner planets, the putative Late Heavy Bombardment, as evidenced by the myriad craters of all sizes visible on the Moon, Mercury and Mars. The thick atmosphere and roughly 900°F surface heat of Venus have served to obscure its own history of cratering.

As the discovery and examination of planets around other stars grows in importance, a large number of super-Jupiter-sized planets have been found orbiting very close to their parent stars, as have some super-Earth-like planets.
Hartmann352
 
Determinism posits that all events are fully determined by previously existing causes.

In mathematics
The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions. This sensitivity to initial conditions can be measured with Lyapunov exponents.
https://en.wikipedia.org/wiki/Deterministic_system
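That sensitivity can be made concrete with the classic logistic map, x → r·x(1−x). For r = 4 the map is fully chaotic, and its Lyapunov exponent — the average exponential rate at which nearby trajectories separate — is known analytically to be ln 2 ≈ 0.693. The sketch below estimates it numerically by averaging the log of the map's local stretching factor |r(1 − 2x)| along an orbit.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of log|f'(x)| = log|r*(1-2x)|
    along the orbit; a positive value signals chaos.
    """
    x = x0
    for _ in range(n_transient):          # discard transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(f"lambda(r=4) ~ {lyapunov_logistic(4.0):.3f}")  # close to ln 2 ~ 0.693
```

A positive exponent is exactly the "strong dependence on initial conditions" above: two starting points differing by 10⁻¹² diverge to completely different orbits within about 40 iterations (10⁻¹² · e^(0.693·40) ≈ 1), which is the sense in which a perfectly deterministic system still defeats long-range prediction.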

It's interesting that Max Tegmark's mathematical universe can in theory be described by some 32 relational values and a handful of equations.

The problem lies in the number of relational values in the universe.
Only a computer the size of the universe itself would be able to handle all the calculations of the relational interactions of every particle in the universe.
 
write4u -

I don't think we'd need to track every particle in the universe. We know that many particles congregate into planets, stars and galaxies, and by looking farther out from our telescopic vantage points on Earth and, currently, in orbit (in radio waves, visible light, infrared and X-rays), we can observe other stars and their planets as they existed in the past, then apply that knowledge to our current conditions.

When it comes to his MUH (mathematical universe hypothesis), Tegmark states that he is in a minority of scientists who postulate it. It took a while before he got his ideas published in a scientific journal, and he was warned that his MUH would damage his reputation and career. Despite this, some support can be found for Tegmark’s controversial hypothesis.

The physicist Eugene Wigner wrote an essay called The Unreasonable Effectiveness of Mathematics in the Natural Sciences*. Wigner asked why nature is so accurately described by mathematics; Tegmark claims that this implies mathematics is at the very foundation of reality. More ancient thinkers, such as Pythagoras, also believed that the universe was built on mathematics, while Galileo said that nature is a “grand book” written in “the language of mathematics.”

Tegmark argues that there are two ways to view reality; from inside the mathematical structure and from the outside. We view it from within and so see a physical reality which exists in time. From the outside point of view, however, Tegmark thinks that there is only a mathematical structure which exists outside of time. Some might respond to this by saying that the idea of “outside of time” and “timelessness” is verging on the mystical and religious.

It is also worth highlighting that there are those who think that mathematics is purely a human invention, albeit one which is extremely useful. In their book Where Mathematics Comes From, George Lakoff and Rafael Nunez maintain that mathematics arises from our brains, our everyday experiences, and the needs of human societies. They say it is the result of normal cognitive abilities, especially the capacity for understanding one idea in terms of another.

Mathematics is effective because it is the result of evolution, not because it has its basis in an objective reality. However, these authors do praise the invention of mathematics as one of the greatest and most ingenious inventions ever made. There is also the idea of mathematical fictionalism originally put forward by Hartry Field in his book, Science Without Numbers*. He said that mathematics does not correspond to anything real. He believes that mathematics is a kind of “useful fiction” – statements such as “2+2=4” are just as fictional as statements such as “Harry Potter lives in Hogwarts.”

Tegmark assumes that "all aspects" of reality are isomorphic to a mathematical structure. He explains:
Let us now digest the idea that physical world (specifically, the Level III multiverse) is a mathematical structure. Although traditionally taken for granted by many theoretical physicists, this is a deep and far-reaching notion. It means that mathematical equations describe not merely some limited aspects of the physical world, but all aspects of it. It means that there is some mathematical structure that is what mathematicians call isomorphic (and hence equivalent) to our physical world, with each physical entity having a unique counterpart in the mathematical structure and vice versa. (page 10)
We are part of reality and so we are isomorphic to a part of this mathematical structure. That makes us a substructure of it. Since we are self-aware, we are a self-aware substructure (SAS) of the mathematical universe:
Given a mathematical structure, we will say that it has physical existence if any self-aware substructure (SAS) within it subjectively, from its frog perspective, perceives itself as living in a physically real world. What would, mathematically, such an SAS be like? In the classical physics example above, an SAS such as you would be a tube through spacetime, a thick version of what Einstein referred to as a world-line. The location of the tube would specify your position in space at different times. Within the tube, the fields would exhibit certain complex behavior, corresponding to storing and processing information about the field-values in the surroundings, and at each position along the tube, these processes would give rise to the familiar but mysterious sensation of self-awareness. From its frog perspective, the SAS would perceive this one-dimensional string of perceptions along the tube as passage of time. (page 11)
The multiverse idea allows him to use the anthropic principle to claim that there must be at least one of those universes able to allow for human life since we exist.
...although many if not most mathematical structures are likely to be dead and devoid of SASs, failing to provide the complexity, stability and predictability that SASs require, we of course expect to find with 100% probability that we inhabit a mathematical structure capable of supporting life. Because of this selection effect, the answer to the question “what is it that breathes fire into the equations and makes a universe for them to describe?” (Hawking 1993) would then be “you, the SAS”. (page 13)
What Tegmark means by a self-aware substructure is a self-aware part of reality, like us. Since he believes all aspects of reality are a mathematical structure, this requires him to think of us as substructures of that mathematical structure.

Is he using "structure" to describe physical reality the same way he would to describe, say, a non-Abelian group? Yes. Does it make sense? To some extent. Imagine being lost in your head and thinking that all of your thoughts are information, and that all of the things your thoughts reflect can also be thought of as information (particles are described as mathematical waves, for instance, and atoms can be described with bits). It might then seem easy to declare the universe itself mathematical, instead of separating it into an external, concrete reality made of matter and an internal, abstract structure made of logical, arithmetic and set-theoretic relations.

Of course, those of us who are realists of various flavors think this is silly. Rocks may be made up of atoms, but our thoughts, which follow from the computational functionality of the neurons in our heads, are not the same as the rocks we think about. This is a theme with Platonic roots, in which numbers are instantiations of Forms. A famous example of this thinking is the essay mentioned above, The Unreasonable Effectiveness of Mathematics, by Eugene Wigner.

There are plenty of arguments against a Platonic view of the universe, but alas, you didn't ask about them. But do consider this: all messages require a medium. If you deny this, identify a message without a physical medium. Now, if all messages, or all information, require a physical embodiment, be it electrons in a flip-flop or clay or paper and ink, then the question stands: what is the mathematical structure of the universe expressed in? What is the medium that encodes the information that represents physical reality? If you are unable to provide scientific proof that this medium exists, then you must reject the notion that the universe is made of information until you can meet empirical requirements. If you reject the need for empirical requirements, then you are engaging in pure rationality and Metaphysics with a capital M, which scientific thinkers generally label as meaningless. So, you find yourself having to choose between science and the position that the universe is made of information.

Now ask yourself: are you willing to bet against science? If you are, then you might consider argument by defenestration, and throw yourself out a window to prove us realists wrong. I would advise against it, of course.

In response to the fair critique that the argument against Tegmark's claim is circular, first, remember that Tegmark is the original claimant, and as such, the burden of proof is his to show that the universe is a mathematical structure (presumably represented in some sort of math machine). Obviously, his actual argument isn't present here, so I've outlined a sketch of an objection.

Is the argument circular, as suggested? As a skeptic and empiricist, in a way, yes: I have assumed that the universe isn't a Platonic realm, and hold that the burden of proof is on anyone claiming that it is. Perhaps Tegmark has an explanation of how everything being a mathematical structure allows me to have sensory input to discriminate the material from the non-material, presumably as some sort of simulation.

So, my manner of justification presumes some elements of my conclusion, but that's ultimately because rationality has limits, and it is those limits that show that a purely rational answer cannot be the solution, and that we must appeal to our physical embodiment as a method of discriminating what is, and what is not. See my response in the post regarding the Muenchhausen Trilemma***.

See: https://www.samwoolfe.com/2013/06/max-tegmark-universe-is-made-of-mathematics.html

See: https://philosophy.stackexchange.com/questions/65524/max-tegmarks-mathematical-universe

* The Unreasonable Effectiveness of Mathematics in the Natural Sciences: the purpose is to point out that the laws of nature are all conditional statements and they relate only to a very small part of our knowledge of the world. Thus, classical mechanics, which is the best known prototype of a physical theory, gives the second derivatives of the positional coordinates of all bodies, on the basis of the knowledge of the positions, etc., of these bodies. It gives no information on the existence, the present positions, or velocities of these bodies.

We discovered some years ago that even the conditional statements cannot be entirely precise: that the conditional statements are probability laws which enable us only to place intelligent bets on future properties of the inanimate world, based on the knowledge of the present state. They do not allow us to make categorical statements, not even categorical statements conditional on the present state of the world. The probabilistic nature of the “laws of nature” manifests itself in the case of machines also, and can be verified, at least in the case of nuclear reactors, if one runs them at very low power. However, the additional limitation of the scope of the laws of nature which follows from their probabilistic nature will play no role in the rest of the discussion...

See: http://www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableEffectiveness.pdf

** Hartry Field, Science Without Numbers: The book caused a stir in philosophy on its original publication in 1980, with a bold nominalist approach to the ontology of mathematics and science. Hartry Field argues that we can explain the utility of mathematics without assuming it true. Part of the argument is that good mathematics has a special feature ("conservativeness") that allows it to be applied to "nominalistic" claims (roughly, those neutral to the existence of mathematical entities) in a way that generates nominalistic consequences more easily without generating any new ones. Field goes on to argue that we can axiomatize physical theories using nominalistic claims only, and that in fact this has advantages over the usual axiomatizations that are independent of nominalism. There has been much debate about the book since it first appeared. It is now reissued in a revised edition that contains a substantial new preface giving the author's current views on the original book and the issues that were raised in the subsequent discussion of it.

The book argues that there is no reason to believe in the existence of mathematical entities, or the literal truth of mathematics, and in particular that physical theory does not require this. The explanation of the utility of mathematics in describing the physical world does not require that the mathematics be true, but only little more than it be consistent. Physical theories are best presented in an intrinsic manner that does not require any reference to mathematical entities. This volume is a reprinting of a book from 1980 with an extensive new Preface discussing issues that have arisen since the original publication.

See: https://www.cambridge.org/core/jour...-xiii-130-pp/A8B566A5C7FE2AEBA4B78EC571F60E2E

See: https://oxford.universitypressschol...o/9780198777915.001.0001/acprof-9780198777915

*** Münchhausen trilemma: The Münchhausen trilemma (after Baron Münchhausen, who allegedly pulled himself out of a swamp by his own hair), also called Agrippa's trilemma (after Agrippa the Skeptic), is a philosophical term coined to stress the purported impossibility of proving any truth, even in the fields of logic and mathematics. It is the name of an argument in the theory of knowledge going back to the German philosopher Hans Albert and, more traditionally, Agrippa.

See: https://psychology.fandom.com/wiki/Münchhausen_Trilemma

Work by Max Tegmark argues that the properties we associate with reality, such as mass, time, space, and so on, are mere mathematical structures. He came to this idea based on the mathematical patterns we see appear in nature, like the Golden Ratio or the Fibonacci sequence, but also in more commonplace things like conics. Math describes natural phenomena extremely well, but Tegmark says this isn't enough. We sometimes mistake the notation of mathematics for reality itself. Instead, think of the notation as a convention we adopted to describe the true mathematical reality around us. We ourselves are substructures made of math (otherwise known as self-aware substructures), uncovering the landscape all around us by formalizing it in equations and theorems. This is a new philosophical position known as mathematical monism, implying the singular source of our reality.
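As an aside, the Fibonacci/golden-ratio connection mentioned above is easy to check numerically. Here is a minimal Python sketch (my illustration, not part of Tegmark's argument):

```python
# Consecutive Fibonacci ratios F(n+1)/F(n) converge to the golden
# ratio phi = (1 + sqrt(5)) / 2 ~= 1.6180339887.
from math import sqrt

def fib(n):
    """Return the first n Fibonacci numbers."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

phi = (1 + sqrt(5)) / 2
seq = fib(30)
ratios = [b / a for a, b in zip(seq, seq[1:])]
print(ratios[-1] - phi)  # already smaller than 1e-11 by n = 30
```

The convergence is quadratic in the Fibonacci numbers themselves, which is why the pattern shows up so quickly in sunflower heads and pine cones.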

If everything could be reduced to a single mathematical structure, then all of our sciences are currently limited based on the very vocabulary we use to describe them.

We can view the science of a system from one of two standpoints: as a participant or as an omniscient observer. Relativity and quantum mechanics depend on this distinction, and it would be hard to gain an understanding of the two without it. Math has done such a good job of describing and predicting outcomes in science that Tegmark's hypothesis gains solid footing if we can clear that hurdle in our communication about it.

Challenges to the hypothesis are many. How it can be reconciled with theories that contradict this mathematical precision (like the incompatibility of quantum mechanics and general relativity, or Gödel's incompleteness theorems and the lack of an underlying formulation of gravity at the subatomic scale) remains to be seen.
Hartmann352
 
write4u -
Tegmark argues that there are two ways to view reality; from inside the mathematical structure and from the outside. We view it from within and so see a physical reality which exists in time. From the outside point of view, however, Tegmark thinks that there is only a mathematical structure which exists outside of time. Some might respond to this by saying that the idea of “outside of time” and “timelessness” is verging on the mystical and religious.
Yes, but I believe that argument is always made from the perspective of inside the universe, and the universe as a whole of course has a timeline associated with the duration of its existence.

But the universe is expanding, which suggests that outside the universe there must be a permittive condition of nothingness that has no geometrical dimensions or mathematical properties. It is merely permittive.

And that can be described as a timeless, infinite nothingness. It has no existence of any kind, and that would also agree with the notion that universal time (duration) began to emerge along with the duration of existence of this universe, since the "beginning" directly after the "inflationary epoch".

This concept solves several problems associated with an initial FTL chaotic expansion of the singularity. A permittive nothingness has no physical restrictive properties like a Higgs field. But inside the expanding fledgling universe, mathematical laws began to be expressed with the cooling of the chaotic plasma, and mathematical patterns began to emerge from the logical physical interactions of the first fundamental generic relational "values" via fundamental logical mathematical functions.

IMO, this chronology would be consistent with Chaos Theory.

So I can really identify with Tegmark's scenario of a timeless condition outside the universe.
It is also worth highlighting that there are those who think that mathematics is purely a human invention, albeit one which is extremely useful. In their book Where Mathematics Comes From, George Lakoff and Rafael Nunez maintain that mathematics arises from our brains, our everyday experiences, and the needs of human societies. They say it is the result of normal cognitive abilities, especially the capacity for understanding one idea in terms of another.
Yes, but that is merely a subjective anthropomorphization of generic logical mathematical properties of spacetime.

AFAIK, there is not a single persuasive argument against a mathematical essence to spacetime.

OTOH, there is a relatively new hypothesis that proposes a mathematically fractal aspect to spacetime fabric.

Causal Dynamical Triangulation
Causal dynamical triangulation (abbreviated as CDT) theorized by Renate Loll, Jan Ambjørn and Jerzy Jurkiewicz, is an approach to quantum gravity that, like loop quantum gravity, is background independent.
This means that it does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves.
There is evidence [1] that at large scales CDT approximates the familiar 4-dimensional spacetime, but shows spacetime to be 2-dimensional near the Planck scale, and reveals a fractal structure on slices of constant time. These interesting results agree with the findings of Lauscher and Reuter, who use an approach called Quantum Einstein Gravity, and with other recent theoretical work.
Introduction
Near the Planck scale, the structure of spacetime itself is supposed to be constantly changing due to quantum fluctuations and topological fluctuations. CDT theory uses a triangulation process which varies dynamically and follows deterministic rules, to map out how this can evolve into dimensional spaces similar to that of our universe.

And fractals are mathematical objects! Hence the "geometry of spacetime"
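Since the point above leans on fractals being mathematical objects, here is a tiny illustration of mine (not taken from the CDT papers): the similarity dimension of a self-similar fractal, which for the Koch curve comes out as a non-integer value between a line and a plane.

```python
from math import log

def similarity_dimension(copies, scale):
    """Dimension d of a self-similar set made of `copies` pieces,
    each scaled down by `scale`: d = log(copies) / log(1/scale)."""
    return log(copies) / log(1 / scale)

# Koch curve: each segment is replaced by 4 copies at 1/3 the length.
d_koch = similarity_dimension(4, 1 / 3)
print(round(d_koch, 4))  # 1.2619 -- strictly between a line (1) and a plane (2)
```

The CDT result is of course far subtler (an effective spectral dimension that runs with scale), but the same idea of non-integer dimension is what "fractal structure" refers to.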

continued........
 
continued......

and the paper by Renate Loll et al:

Quantum gravity from causal dynamical triangulations: a review
R. Loll
Classical and Quantum Gravity, Topical Review
Published 6 December 2019 • © 2019 IOP Publishing Ltd

Abstract
This topical review gives a comprehensive overview and assessment of recent results in causal dynamical triangulations, a modern formulation of lattice gravity, whose aim is to obtain a theory of quantum gravity non-perturbatively from a scaling limit of the lattice-regularized theory.
In this manifestly diffeomorphism-invariant approach one has direct, computational access to a Planckian spacetime regime, which is explored with the help of invariant quantum observables. During the last few years, there have been numerous new and important developments and insights concerning the theory's phase structure, the roles of time, causality, diffeomorphisms and global topology, the application of renormalization group methods and new observables. We will focus on these new results, primarily in four spacetime dimensions, and discuss some of their geometric and physical implications.
Mathematics is effective because it is the result of evolution, not because it has its basis in an objective reality. However, these authors do praise the invention of mathematics as one of the greatest and most ingenious inventions ever made. There is also the idea of mathematical fictionalism, originally put forward by Hartry Field in his book Science Without Numbers**. He said that mathematics does not correspond to anything real. He believes that mathematics is a kind of "useful fiction": statements such as "2+2=4" are just as fictional as statements such as "Harry Potter lives in Hogwarts."
Tegmark assumes that "all aspects" of reality are isomorphic to a mathematical structure, he explains:
"We are part of reality and so we are isomorphic to a part of this mathematical structure. That makes us a substructure of it. Since we are self-aware, we are a self-aware substructure (SAS) of the mathematical universe:"
The multiverse idea allows him to use the anthropic principle to claim that there must be at least one of those universes able to allow for human life since we exist.
What Tegmark means by a self-aware substructure is a self-aware part of reality, like us. Since he believes all aspects of reality are a mathematical structure, this requires him to think of us as substructures of that mathematical structure.
Is he using structure to describe physical reality the same way he is to describe, say, a non-Abelian group? Yes. Does it make sense? To some extent, yes.
As I understand it in his description of reality he uses the term "patterns of various values and densities".
Imagine being lost in your head and thinking that all of your thoughts are information, and that all of the things your thoughts reflect can be thought of as information (for instance, particles are described as mathematical waves, and atoms can be described with bits). Then it might be easy to just declare the universe itself mathematical, instead of separating the universe into an external, concrete reality made of matter and an internal abstract structure made of logical, arithmetic, and set-theoretic structure.
Of course, those of us who are realists of various flavors think this is silly. Rocks may be made up of atoms, but our thoughts, which follow from the computational functionality of the neurons in our heads, are not the same as the rocks we think about.
This is a theme with Platonic roots, where numbers are instantiations of Forms.
Yes, I see that as generic relational values.
A famous example of this line of thinking is Eugene Wigner's essay The Unreasonable Effectiveness of Mathematics*.
It appears reasonable because mathematics is an expression of logical functions.
There are plenty of arguments against a Platonic view of the universe, but alas, you didn't ask about them. But do consider this: all messages require a medium. If you deny this, identify a message without a physical medium. Now, if all messages, and all information, require a physical embodiment, be it electrons in a flip-flop, clay, or paper and ink, then the question stands: what is the mathematical structure of the universe expressed in?
But does it? A kinetic force does not require a medium; it is as effective in a vacuum as in water, no?
Sets of values arranged in patterns of various densities do not need a medium; they are the medium.
What is the medium that encodes the information that represents physical reality? If you are unable to provide scientific proof that this medium exists, then you must reject the notion that the universe is made of information until you can meet empirical requirements.
I believe David Bohm would suggest that there are two states: the enfolded (implicate) order and the unfolded (explicate) order.
If you reject the need for empirical requirements, then you are engaged in pure rationality and Metaphysics with a capital M, which scientific thinkers generally label as meaningless. So you find yourself having to choose between science and the position that the universe is made of information.
But does science not recognize forms of "potential" energy, i.e. unrealized (enfolded) information?
I am thinking of a mountain lake that has an inherent potential (unrealized) gravitational energy. Put a turbine/generator halfway down the mountain and you get lots of expressed energy in the form of electricity.
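The lake example can be put in numbers. A back-of-the-envelope sketch (all figures below are assumptions of mine, not from the post):

```python
# Gravitational potential energy E = m * g * h, partly "unfolded" into
# electricity by a turbine/generator of assumed efficiency.
g = 9.81              # m/s^2, standard gravity
mass = 1.0e6          # kg of water released (assumed)
height = 500.0        # m of head above the turbine (assumed)
efficiency = 0.9      # combined turbine/generator efficiency (assumed)

potential_joules = mass * g * height             # the "enfolded" potential
electric_joules = efficiency * potential_joules  # expressed as electricity
print(potential_joules, electric_joules)
```

About 4.9 GJ of potential, of which roughly 4.4 GJ would come out as electricity under these made-up numbers.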
Now ask yourself: are you willing to bet against science? If you are, then you might consider argument by defenestration, and throw yourself out a window to prove us realists wrong. I would advise against it, of course.
I trust the science, I do not always trust the interpretation of the science.
In response to the fair critique that the argument against Tegmark's claim is circular, first, remember that Tegmark is the original claimant, and as such, the burden of proof is his to show that the universe is a mathematical structure (presumably represented in some sort of math machine). Obviously, his actual argument isn't present here, so I've outlined a sketch of an objection.
Is the argument circular, as suggested? As a skeptic and empiricist, in a way, yes: I have assumed that the universe isn't a Platonic realm, and hold that the burden of proof is on anyone claiming that it is. Perhaps Tegmark has an explanation of how everything being a mathematical structure allows me to have sensory input to discriminate the material from the non-material, presumably as some sort of simulation.
I read that the Universe functions via "differential equations" and that is what produces the relativistic behaviors. I believe the generic mathematical relational values and functions are axiomatic. I have yet to see a scientific equation describing a natural phenomenon that is not mathematical in essence.
So, my manner of justification presumes some elements of my conclusion, but that's ultimately because rationality has limits, and it is those limits that show that a purely rational answer cannot be the solution, and that we must appeal to our physical embodiment as a method of discriminating what is, and what is not.
I have an intuitive problem with that assessment.
Why must the Universe necessarily have irrational properties? The dynamic nature of spacetime itself accounts for all stochastic relational expressions.

Humans do not create universal mathematics. We observe them and make practical codified use of them. We explicated the Higgs boson via applied mathematics. Does that not prove something?

Everybody always demands proof of the mathematical nature of the spacetime geometry, but from my perspective every single physical interaction is an expression of the generic mathematical logical process of:
(value) Input --> (mathematical) Function --> (value) Output.
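The "Input --> Function --> Output" schema can be rendered as a toy program. This is my sketch of the poster's schema, not anything from Tegmark:

```python
def interaction(function, value):
    """Model a physical interaction as a pure mathematical function
    mapping an input value to an output value."""
    return function(value)

# Example: kinetic energy E = 1/2 * m * v^2 for a 2 kg mass at 3 m/s.
kinetic_energy = lambda v: 0.5 * 2.0 * v ** 2
print(interaction(kinetic_energy, 3.0))  # 9.0
```

The point of the sketch is only that the schema is well-defined: fix the function and the input value, and the output value follows deterministically.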

I believe that process is contained in the logical expression of "Necessity and Sufficiency".
Imho there is no need for magic. I agree with Tegmark's suggestion that just as the human brain has all the necessary inherent (enfolded) potentials for the emergence of consciousness, so has the Universe all the inherent (enfolded) potentials for the unfolding (emergence) of reality itself.

I believe that if we assign generic relational values to physical objects, that simplifies all other considerations of patterns, structures, potentials, and observable expressions, and we can do away with a lot of the fundamentally unnecessary questions you identified.

We assert a deterministic Universe and then proceed to sow doubt as to the "quantity" and "quality" of deterministic processes.

I am always trying to keep in mind Ockham's razor, which suggests there is no "irreducible complexity", and Tegmark's hypothesis does indeed simplify the universal dynamics to about 32 generic relational values and a handful of mathematical equations.

See: https://www.samwoolfe.com/2013/06/max-tegmark-universe-is-made-of-mathematics.html

See: https://philosophy.stackexchange.com/questions/65524/max-tegmarks-mathematical-universe

Add Chaos Theory and you get a Universe that is generically mathematical in essence.

continued......
 
* The Unreasonable Effectiveness of Mathematics in the Natural Sciences: the purpose is to point out that the laws of nature are all conditional statements and they relate only to a very small part of our knowledge of the world. Thus, classical mechanics, which is the best known prototype of a physical theory, gives the second derivatives of the positional coordinates of all bodies, on the basis of the knowledge of the positions, etc., of these bodies. It gives no information on the existence, the present positions, or velocities of these bodies.
That is only of use to us. To the universe, the relational aspects themselves are sufficient.
We discovered some years ago that even the conditional statements cannot be entirely precise: that the conditional statements are probability laws which enable us only to place intelligent bets on future properties of the inanimate world, based on the knowledge of the present state. They do not allow us to make categorical statements, not even categorical statements conditional on the present state of the world. The probabilistic nature of the “laws of nature” manifests itself in the case of machines also, and can be verified, at least in the case of nuclear reactors, if one runs them at very low power. However, the additional limitation of the scope of the laws of nature which follows from their probabilistic nature will play no role in the rest of the discussion...
What does the universe care about whether we can make sense of it? The Universe is limited only by generic logical mathematical guiding principles.

Humans are limited in their ability to observe the mathematical guiding principles.
Many cosmologists say that every time they make a new mathematically sound observation, they feel it was there all along and that we are only discovering new applications of mathematical principles.

See: http://www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableEffectiveness.pdf
** Hartry Field, Science Without Numbers: The book caused a stir in philosophy on its original publication in 1980, with a bold nominalist approach to the ontology of mathematics and science. Hartry Field argues that we can explain the utility of mathematics without assuming it true. Part of the argument is that good mathematics has a special feature ("conservativeness") that allows it to be applied to "nominalistic" claims (roughly, those neutral to the existence of mathematical entities) in a way that generates nominalistic consequences more easily without generating any new ones. Field goes on to argue that we can axiomatize physical theories using nominalistic claims only, and that in fact this has advantages over the usual axiomatizations that are independent of nominalism. There has been much debate about the book since it first appeared. It is now reissued in a revised edition that contains a substantial new preface giving the author's current views on the original book and the issues that were raised in the subsequent discussion of it.
Yes, I believe that as our observational abilities keep expanding in scope and accuracy, the universe keeps serving up mathematical processes.
The book argues that there is no reason to believe in the existence of mathematical entities, or the literal truth of mathematics, and in particular that physical theory does not require this. The explanation of the utility of mathematics in describing the physical world does not require that the mathematics be true, but only little more than it be consistent. Physical theories are best presented in an intrinsic manner that does not require any reference to mathematical entities. This volume is a reprinting of a book from 1980 with an extensive new Preface discussing issues that have arisen since the original publication.
And that is why I like to use the term "generic mathematics" that only describes the inherent guiding principles, that can be translated and codified by several mathematical "dialects".

I like Ricky Gervais' analogy that if we destroyed all religious scripture and a thousand years from now tried to rewrite it, the new texts would be completely different from today's.
However, if we took all the mathematical books and destroyed them and were to rewrite them a thousand years from now, they'd be exactly the same as today.
View: https://www.youtube.com/watch?v=P5ZOwNK6n9U


See: https://www.cambridge.org/core/jour...-xiii-130-pp/A8B566A5C7FE2AEBA4B78EC571F60E2E

See: https://oxford.universitypressschol...o/9780198777915.001.0001/acprof-9780198777915
Work by Max Tegmark argues that the properties we associate with reality, such as mass, time, space, and so on, are mere mathematical structures. He came to this idea based on the mathematical patterns we see appear in nature, like the Golden Ratio or the Fibonacci sequence, but also in more commonplace things like conics.
And the observation of self-forming patterns in Chaos Theory.
Math describes natural phenomena extremely well, but Tegmark says this isn't enough. We sometimes mistake the notation of mathematics for reality itself. Instead, think of the notation as a convention we adopted to describe the true mathematical reality around us.
Again, that problem is solved by the use of "generic mathematics"
We ourselves are substructures made of math (otherwise known as self-aware substructures), uncovering the landscape all around us by formalizing it in equations and theorems. This is a new philosophical position known as mathematical monism, implying the singular source of our reality.
If everything could be reduced to a single mathematical structure, then all of our sciences are currently limited based on the very vocabulary we use to describe them.
!
We can view the science of a system from one of two standpoints: as a participant or as an omniscient observer. Relativity and quantum mechanics depend on this distinction, and it would be hard to gain an understanding of the two without it. Math has done such a good job of describing and predicting outcomes in science that Tegmark's hypothesis gains solid footing if we can clear that hurdle in our communication about it.
If the existing mathematical data were applied to any other individual hypothesis, it would be considered settled science. But because we know it does not describe ALL of the universe, we consider it merely "made by humanity" (which, IMO, is hubris in and of itself).
Challenges to the hypothesis are many. How it can be reconciled with theories that contradict this mathematical precision (like the incompatibility of quantum mechanics and general relativity, or Gödel's incompleteness theorems and the lack of an underlying formulation of gravity at the subatomic scale) remains to be seen.
The contradiction is not a limitation of Universal processes; it is a result of human limitations in knowledge of Universal processes.
Gödel's incompleteness addresses human limitations in knowledge, not Universal limitations.
Hartmann352
Thanks for your comprehensive missive.
I hope that some of my interested layman's interpretations add to the discussion.
If not, I'd like to know where and how my perspectives fall short.
 
write4u -

Mathematical predictions of the universe are always incredibly exciting to discuss.

However, I try to shy away from Wikipedia as a source. Wikipedia can be changed by anyone with the proper passwords, memberships and abilities, while the theories and papers offered by researchers, whether you agree with them or not, cannot be so easily changed.

While I am not a fan of immutability, I tend to be suspicious of areas so easily changeable.

Hartmann352