Magnetic pull driving UFOs

Jan 27, 2023
Hello,
My name is Andy and I have a question to throw out here. Understandably, people might rip this theory to shreds, which I completely get and welcome, but here goes anyway:

Are alien races using magnetic fields to cross galaxies?
Obviously they are extremely advanced species and have found a way of tapping into the magnetic pull of planets to move their craft from A to B.
All they would require is a method of controlling the strength of the magnetic pull and manipulating it to their advantage.
This would explain how they can enter oceans and disappear into space with ease.
You would ask what could withstand such a strong magnetic pull, and my thought would be nature.
Reeds, seaweed, rocks and so on live in sunlight and water: all natural materials.
Could ancient civilisations have found a way to use natural organic materials and crafted methods of transport enabling them to cross worlds, galaxies and so on?
 
AJShepard -

Recall that Einstein showed that nothing with mass can reach the speed of light, because its mass, as it approaches the speed of light, effectively becomes infinite.

Next, remember how small we are. Jupiter has 318 times the mass of the Earth. The Earth's magnetic field is believed to be generated by its internal dynamo: the churning of electrically conductive fluids in the core. Jupiter, however, is made mostly of hydrogen and helium, which are not very conductive. This has led to theories suggesting that the great pressure inside the planet forms liquid metallic hydrogen, which, as its name implies, conducts much like a metal.

In the superionic crystalline phase, as seemingly in evidence on Uranus and Neptune, water loses its molecular identity (H₂O): negative oxygen ions (O²⁻) crystallize into an extensive lattice, and protons in the form of positive hydrogen ions (H⁺) form a liquid that floats around freely within the oxygen lattice.

"The situation can be compared to a metal conductor such as copper, with the big difference that positive ions form the crystal lattice in the metal, and electrons bearing a negative charge are free to wander around the lattice," said Maurice de Koning, a professor at the State University of Campinas's Gleb Wataghin Physics Institute (IFGW-UNICAMP) in São Paulo state, Brazil.

De Koning led the study that resulted in an article published in Proceedings of the National Academy of Sciences (PNAS) and featured on the cover of its November 8, 2022 issue.

Superionic ice forms at extremely high temperatures, in the range of 5,000 kelvin (4,700 °C), and pressures of around 340 gigapascals, over 3.3 million times Earth's standard atmospheric pressure, he explained. It is therefore impossible for stable superionic ice to exist on our planet. It can exist on Neptune and Uranus, however. In fact, scientists are confident that large amounts of ice XVIII lurk deep in their mantles, thanks to the enormous pressures produced by these giants' huge gravity, according to models of their interiors.

"The electricity conducted by the protons through the oxygen lattice relates closely to the question of why the axis of the magnetic field doesn't coincide with the rotation axis in these planets. They're significantly misaligned, in fact," De Koning said.

Measurements made by the space probe Voyager 2, which flew by these distant planets on its journey to the edge of the Solar System and beyond, show that the axes of Neptune's and Uranus's magnetic fields form angles of 47 degrees and 59 degrees with their respective rotation axes.

To fully grasp the action of even the planets' strongest magnetic fields, you must then realise how far apart they are, and that their fields weaken rapidly with distance. Gravity follows the inverse square law, where doubling the distance from the source reduces its strength to one quarter; a planet's dipole magnetic field falls off even faster, as the inverse cube of distance, so doubling the distance cuts it to one eighth.
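
To put numbers on that falloff, here is a minimal Python sketch (my own illustration, not from any of the cited sources) comparing an inverse-square field with the steeper inverse-cube falloff of a dipole field:

Code:
# Relative field strength versus distance, normalized to 1.0 at r = 1.
# Gravity falls off as 1/r^2; a dipole magnetic field falls off as 1/r^3.
for r in [1, 2, 4, 8, 16]:
    inverse_square = 1.0 / r**2   # doubling r leaves 1/4
    inverse_cube = 1.0 / r**3     # doubling r leaves 1/8
    print(f"r = {r:2d}: gravity ~ {inverse_square:.6f}, dipole field ~ {inverse_cube:.6f}")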

The distance between the Earth and Mercury is 0.61 AU. That’s around 91,691,000 kilometers, or 56,974,146 miles.

Likewise, the distances from Earth to Venus and Mars are 0.28 AU and 0.52 AU, respectively. 0.28 AU is approximately 41,400,000 km or 25,724,767 miles, and 0.52 AU is around 78,340,000 km or 48,678,219 miles. In addition, the distance from Earth to Jupiter is about 4.2 AU, which is approximately 628,730,000 kilometers or 390,674,710 miles.

Furthermore, the distance from Saturn to Earth is 8.52 AU, which is about 1,275,000,000 kilometers or about 792,248,270 miles. Similarly, Earth’s distance to Uranus is 18.21 AU. That is about 2,723,950,000 km or 1,692,662,530 miles. Finally, the distance between Earth and Neptune is 29.09 AU. That’s around 4,351,400,000 kilometers or 2,703,959,960 miles.

As a result, you can visualise how each planet's magnetic field dwindles over these vast distances. Then examine the average distance between any two stars in our Milky Way galaxy. That number turns out to be about 5 light years, which is close to the 4 light year distance between our Sun and Alpha Centauri, our closest stellar neighbour. Stellar magnetic fields obey the same steep falloff with distance. Five light years equates to some 29,393,000,000,000 miles, almost 30 trillion miles.
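
These conversions are plain arithmetic. A short Python sketch (my own, assuming the standard values 1 AU = 149,597,871 km and 1 light-year ≈ 9.4607 × 10¹² km) gives figures in the same ballpark as those quoted above:

Code:
AU_KM = 149_597_871        # kilometers per astronomical unit
LY_KM = 9.4607e12          # kilometers per light-year
KM_PER_MILE = 1.609344

def au_to_km(au):
    return au * AU_KM

def km_to_miles(km):
    return km / KM_PER_MILE

# Average Earth-to-planet distances quoted above, in AU
distances_au = {"Mercury": 0.61, "Venus": 0.28, "Mars": 0.52,
                "Jupiter": 4.2, "Saturn": 8.52, "Uranus": 18.21,
                "Neptune": 29.09}

for planet, au in distances_au.items():
    km = au_to_km(au)
    print(f"{planet:8s}: {au:5.2f} AU = {km:15,.0f} km = {km_to_miles(km):15,.0f} mi")

# The average interstellar separation of about 5 light-years:
print(f"5 light-years = {km_to_miles(5 * LY_KM):,.0f} miles")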

Therefore, planning to propel yourself from star to star by utilising each star's inherent magnetic field, where stars lie an average of some 30 trillion miles apart, may not be a very effective means of galactic propulsion. And even if the speed garnered from each star or planet via its magnetic field were sufficient when flying close enough to harness it, there are other problems: radiation and heat.

Jupiter is surrounded by an enormous magnetic field called the magnetosphere, which has a million times the volume of Earth's magnetosphere. Charged particles are trapped in the magnetosphere and form intense radiation belts. These belts are similar to the Earth's Van Allen belts, but are many millions of times more intense.

Jupiter has the most complex and energetic radiation belts in our Solar System and one of the most challenging space environments to measure and characterize in depth. Their hazardous environment is also a reason why so many spacecraft avoid flying directly through their most intense regions, which explains how Jupiter’s radiation belts have kept many of their secrets so well hidden, despite having been studied for decades. As a recent review (see the Springer link below) argues, these secrets are worth unveiling: Jupiter’s radiation belts and the vast magnetosphere that encloses them present us with an unprecedented physical laboratory, while voyages through their uninviting environment present many challenges in mission design, science planning, instrumentation, and technology.

Measurements in the radiation belts of Uranus and Neptune, sampled only once by the Voyager 2 spacecraft, should definitely be part of any future attempt to explore the two planets. Saturn’s radiation belts were surveyed in depth thanks to the 13-year Cassini mission at the Kronian system. In comparison, Jupiter’s radiation belts, while visited by numerous missions and monitored for decades through their synchrotron emission, still hold on to many of their secrets. No single mission, payload, or observation campaign was ever designed to capture or cope with their full scale, complexity, dynamics, and energetics.

[Image: Jupiter’s magnetospheric region hosting the inner radiation belts (center). The moons Io, Europa, Ganymede, and Callisto are drawn, while the Io plasma torus and associated plasma disc are shown in red (Image Credit: John Spencer). Information on the inner electron and ion radiation belts is shown on each side. Color radiation belt maps are from models used with permission from Quentin Nénon; they cover the distances inward of Europa.]

The above gives you an idea of the size, intensity and makeup of Jupiter's radiation belts. Recall that Earth, too, has regions of high radiation constrained by its magnetic field: the Van Allen belts.

So, one must be mindful of the deadly radiation confined within planetary or stellar magnetic lines of force.

To wrap this first part up: first, recall that accelerating a mass to the speed of light would require infinite energy.

Second, the distances involved between stars and planets effectively rule out using their magnetic lines of force to propel an interstellar vehicle with any notable success. Moreover, whatever push is available would vary immensely over time, dropping off steeply with the distance from each selected magnetosphere.

Third, the magnetic lines of force, or the magnetosphere, harbor dangerous levels of radiation, necessitating layers of shielding that add mass to the conveyance.

To travel the distances you have in mind, one must examine other more exotic propulsive systems.

The underlying issue is that our closest star system is 4.25 light-years away from the Sun. To reach this destination promptly without running out of fuel and victuals, space propulsion must be rethought, because current chemical rocket technology is inadequate for long-distance spaceflight. There are three promising propulsion schemes for efficient near-term interstellar flight currently in development: ion thrusters, fusion-driven rockets, and the laser-pushed light sail.

For long-distance spaceflight, the Tsiolkovsky rocket equation, which governs the motion of all rockets, becomes a major concern.

Δv = ve ln(m0 / mf)

This equation relates the velocity gain of the vehicle, Δv, to the exhaust velocity of the reaction mass, ve, and the ratio of the initial mass of the rocket, m0, to its final mass, mf. The plot of this equation below, listed as Figure 2, shows that as the desired velocity change Δv increases, the required amount of fuel increases exponentially.

[Figure 2: plot of the rocket equation, required mass ratio versus Δv]
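
To see the exponential blow-up concretely, here is a small Python sketch (my own illustration; the 4.5 km/s exhaust velocity is an assumed figure typical of high-performance chemical engines):

Code:
import math

def mass_ratio(delta_v, v_e):
    # Tsiolkovsky rocket equation solved for m0/mf:
    # delta_v = v_e * ln(m0/mf)  =>  m0/mf = exp(delta_v / v_e)
    return math.exp(delta_v / v_e)

V_E = 4.5  # km/s, assumed chemical-rocket exhaust velocity
for dv in [5, 10, 20, 45]:  # desired delta-v in km/s
    r = mass_ratio(dv, V_E)
    print(f"delta-v = {dv:2d} km/s -> m0/mf = {r:9.1f} (propellant fraction {1 - 1/r:.4f})")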


The trading of velocity gain for lower fuel consumption results in longer flight durations. This dilemma between fuel consumption and flight time puts interstellar travel in a problematic situation. The propulsion solutions presented below seek to overcome this obstacle by either pushing the practical limits of the rocket equation or simply circumventing it.

An ion thruster is a form of electric propulsion that relies on injecting charged particles into an electric field to accelerate them. The resulting force propels the spacecraft using Newton’s third law. This is still a rocket concept, but instead of ejecting high-temperature combustion products, ions are discharged. This has a significant impact on the fuel consumption of this propulsion system, making it more efficient than combustion engines (2). The anatomy of the ion engine is illustrated in Figure 3.

[Figure 3: Electrostatic ion thruster diagram (Source: NASA)]
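
For an electrostatic thruster like the one diagrammed above, the exhaust velocity follows from energy conservation, qV = ½mv². A minimal sketch (my own, using xenon, the usual propellant, and assumed grid voltages) shows why ion exhaust is roughly ten times faster than combustion exhaust:

Code:
import math

E_CHARGE = 1.602e-19    # elementary charge, coulombs
XENON_MASS = 2.18e-25   # mass of one xenon ion, kg

def exhaust_velocity(grid_voltage):
    # Energy conservation for a singly charged ion: q*V = 0.5*m*v**2
    return math.sqrt(2 * E_CHARGE * grid_voltage / XENON_MASS)

for volts in (1000, 2000, 4000):   # assumed accelerating-grid voltages
    v = exhaust_velocity(volts)
    print(f"{volts:4d} V -> v_e = {v / 1000:.1f} km/s")
# Chemical engines top out near 4.5 km/s, so the ions leave ~10x faster.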

The fusion-driven rocket scheme attempts to exploit the tremendous amount of nuclear energy released by fusing atomic nuclei, either to directly expel hot plasma or to heat and accelerate a propellant. The physics of fusion is governed by Einstein’s mass-energy equivalence equation.

E = mc²

When two atoms collide and fuse, the reaction produces a new atom, and the mass difference, Δm, between the reactants and the products is converted to energy, E, as shown in Figure 4. This equation states that the conversion factor between mass and energy is the square of the speed of light, c, which is about 300,000 km/s. The large value of c is the reason for the enormous amount of energy released by these collisions (3).

[Figure 4: a fusion reaction converting the mass difference between reactants and products into energy]
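
As a back-of-the-envelope illustration (my own, using published atomic masses for the deuterium-tritium reaction, the easiest fusion fuel), the released energy follows directly from E = Δm c²:

Code:
C = 2.998e8         # speed of light, m/s
U_KG = 1.66054e-27  # kilograms per atomic mass unit

# D + T -> He-4 + neutron, masses in atomic mass units
m_reactants = 2.0141 + 3.0160   # deuterium + tritium
m_products = 4.0026 + 1.0087    # helium-4 + neutron

delta_m = m_reactants - m_products      # mass defect in u
energy_joules = delta_m * U_KG * C**2   # E = delta_m * c^2
print(f"mass defect     = {delta_m:.4f} u")
print(f"energy released = {energy_joules:.2e} J = {energy_joules / 1.602e-13:.1f} MeV")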


Unlike electric and nuclear propulsion, the laser-pushed light sail is a propellantless scheme that relies on the principle of direct momentum transfer. The energy source is a stationary laser that sends a large light beam from Earth to a thin sheet of material moving in space, called a light sail, which carries the probe (4). Although the momentum equation (3) from classical physics suggests that massless objects like photons can’t carry momentum, the laws of quantum mechanics and special relativity allow any particle that carries energy to have momentum, regardless of whether it has mass, as shown by the general form of the relativistic equation (4). In the quantum world, all wave-like particles have energy, since they have a non-zero frequency, as shown by Planck’s equation.

p = mv (classical momentum)
p = √(E² − (mc²)²) / c (relativistic momentum)
E = hf (Planck’s equation)

Hence, as illustrated below, the momentum carried by the photons can be transferred to the sail throughout the interstellar journey. This simple solution allows for high-velocity missions without the limitation of the rocket equation.

[Figure: Schematic of the laser propulsion concept]
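
Since each photon carries momentum p = E/c, a perfectly reflective sail feels a force of 2P/c from a beam of power P. A rough sketch (entirely my own numbers, loosely in the range discussed for gram-scale laser-sail concepts) gives a feel for the accelerations involved:

Code:
C = 2.998e8   # speed of light, m/s

def sail_force(beam_power_w, reflectivity=1.0):
    # A photon reflected straight back transfers twice its momentum,
    # hence the (1 + reflectivity) factor.
    return (1 + reflectivity) * beam_power_w / C

POWER = 100e9   # watts: an assumed 100 GW ground laser
MASS = 1e-3     # kilograms: an assumed 1-gram sail-plus-probe

force = sail_force(POWER)
accel = force / MASS
print(f"force = {force:.0f} N, acceleration = {accel:.2e} m/s^2 (~{accel / 9.81:,.0f} g)")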

As for warp drives, the bending of space, the use of wormholes and black holes for travel, and putting colonists and crew to sleep for 50 or 100 years: all remain in the realm of science fiction at present, while presenting a host of technical and biological problems to overcome.

See: https://www.calculateme.com/astronomy/light-years/to-miles/5

See: https://public.nrao.edu/ask/what-is-the-average-distance-between-stars-in-our-galaxy/

See: https://www.thenakedscientists.com/forum/index.php?topic=30636.0

See: https://link.springer.com/article/10.1007/s10686-021-09801-0

See: https://phys.org/news/2023-01-superionic-ice-contributes-magnetic-anomalies.html

See: https://stemfellowship.org/rethinking-space-propulsion-for-interstellar-travel/

Hartmann352
 
I do not believe that what Einstein said is absolute truth.
asdcx
 
asdcx -

Many have tried to dispute and disprove Einstein's theories.

However, physicists have continually proven him correct, from his initial publications to today.

For instance, here on Live Science you may find "Light from behind a black hole spotted for the 1st time, proving Einstein right" by Ben Turner, July 30, 2021.

You might enjoy reading: Proving Einstein Right: The Daring Expeditions that Changed How We Look at the Universe, by S. James Gates Jr and Cathie Pelletier.

Defining the basic currency of truthful knowledge is surprisingly difficult. To know something you must first believe it, but that’s not enough: to make factual knowledge, that belief must also be true.

“True belief” is insufficient, though. A belief can be true just by chance, or we can arrive at a right answer via a wrong route. So epistemologists – those philosophers who ponder the theory of knowledge – have traditionally added another condition for a true belief to count as knowledge: it must also be justified in some way.

See: https://www.newscientist.com/definition/scientific-truth/

It’s important to remember that to be interested in scientific truth, one doesn’t have to reject other sources of meaning. Many scientists, spiritual leaders, and everyday folks alike see profound meaning in both their spiritual and scientific beliefs. Some religious organizations, like the Vatican — which hosts scientific conferences for astronomers and even runs its own observatory — have emphasized the lack of conflict between science and religion. Science and spiritual matters can peacefully and productively coexist.

But scientific truth must be established by accepted scientific methods, which are the basis of our understanding of physics, astrophysics, chemistry and biology.
Hartmann352
 
Sometimes people can predict the future without scientific methods. You may not believe it, but it is possible. God gives them knowledge. Maybe physicists do not know everything, and they can draw wrong conclusions.
asdcx
 
There have been many skewed theories, and many are just plain wrong.

Einstein was wrong about at least two things: there are, in fact, "spooky actions at a distance," now proven by researchers at the National Institute of Standards and Technology (NIST), and his other slip-up was his handling of the "cosmological constant."

Einstein used that term to refer to quantum mechanics, which describes the curious behavior of the smallest particles of matter and light. He was referring, specifically, to quantum entanglement, the idea that two physically separated particles can have correlated properties, with values that are uncertain until they are measured. Einstein was dubious, and until now, researchers have been unable to support it with near-total confidence.

As described in a paper posted online and published in Physical Review Letters (PRL),* researchers from NIST and several other institutions created pairs of identical light particles, or photons, and sent them to two different locations to be measured. Researchers showed the measured results not only were correlated, but also—by eliminating all other known options—that these correlations cannot be caused by the locally controlled, "realistic" universe Einstein thought we lived in. This implies a different explanation for such an entanglement.

* We present a loophole-free violation of local realism using entangled photon pairs. We ensure that all relevant events in our Bell test are spacelike separated by placing the parties far enough apart and by using fast random number generators and high-speed polarization measurements. A high-quality polarization-entangled source of photons, combined with high-efficiency, low-noise, single-photon detectors, allows us to make measurements without requiring any fair-sampling assumptions. Using a hypothesis test, we compute p values as small as 5.9×10⁻⁹ for our Bell violation while maintaining the spacelike separation of our events. We estimate the degree to which a local realistic system could predict our measurement choices. Accounting for this predictability, our smallest adjusted p value is 2.3×10⁻⁷. We therefore reject the hypothesis that local realism governs our experiment.

The NIST experiments are called Bell tests, so named because in 1964 Irish physicist John Bell showed there are limits to measurement correlations that can be ascribed to local, pre-existing (i.e. realistic) conditions. Additional correlations beyond those limits would require either sending signals faster than the speed of light, which scientists consider impossible, or another mechanism, such as quantum entanglement.

The research team achieved this feat by simultaneously closing all three major "loopholes" that have plagued previous Bell tests. Closing the loopholes was made possible by recent technical advances, including NIST's ultrafast single-photon detectors, which can accurately detect at least 90 percent of very weak signals, and new tools for randomly picking detector settings.

See: https://www.nist.gov/news-events/news/2015/11/nist-team-proves-spooky-action-distance-really-real

In 1917 Einstein published a paper in which he applied his general theory of relativity to all matter in space. His theory led to the conclusion that all the mass in the universe would bend space so much that it should have long ago contracted into a single dense blob. Given that the universe seems pretty well spread out, however, and does not seem to be contracting, Einstein decided to add a "fudge factor" that acts like "anti-gravity" and prevents the universe from collapsing. He called this idea, represented as an additional term in the mathematical equation of his theory of gravity, the cosmological constant. In other words, Einstein supposed the universe to be static and unchanging, because that is the way it looked to astronomers in 1917.

It would be more than a decade before evidence would begin to emerge that the universe was expanding. And it would be scores of years before astronomers would find that the expansion was also accelerating. Then again, it would be many years before Einstein realised his actual mistake.

While he popularly considered his addition of the constant to be an affront to his own work, it may not have been as bad as he thought. The cosmological principle states, or rather assumes, that the universe at the largest scales has the same properties everywhere. This is a spatial definition. An extension called the ‘perfect’ cosmological principle states that the universe at the largest scales has the same properties everywhere and at every time, implying that it has always remained the way it is today and will continue to be this way forever. This is also called the steady-state theory, an alternative to the Big Bang theory that has been widely discredited – and which Einstein himself pursued for a while in 1931.

Anyway, in defence of Einstein, theoretical astrophysicist Peter Coles writes on his blog, “General relativity, when combined with the cosmological principle, but without the cosmological constant, requires the universe to be dynamical rather than static. If anything, therefore, you could argue that Einstein’s biggest blunder was to have failed to predict the expansion of the Universe!” Indeed, if Einstein had not decided to fudge his own monumental equations, he may have been onto something.

As Coles goes on to explain, the field equations, like all equations, have two sides: on the left are the effects of gravity due to the bending of spacetime; on the right are the effects of massive bodies and their motion through spacetime. Einstein added the cosmological constant to the left side, indicating that he thought it was gravity’s side of things that needed to be modified to imply a static universe.

Modern cosmologists, however, use the term on the right side, to indicate that it is matter and its behaviour through spacetime that need a modification. As a result, the cosmological constant is often related to the energy of spacetime itself, also known as dark energy or vacuum energy.

In 1998, two astronomy teams found that the universe’s expansion was accelerating. They were able to come to the same conclusion independently by observing stars around the universe that were dying as Type Ia supernovae. When a mid- to low-mass star runs out of hydrogen to fuse, it will attempt to fuse carbon. However, if it can’t become sufficiently hot to ignite carbon fusion, the star will accumulate carbon and oxygen in its core. Over time, the outer layers will dissipate to leave behind a carbon-oxygen lump called a white dwarf. If it has a binary companion – an object, such as another star, orbiting it – then the white dwarf can siphon material from the object’s surface, but only up to a point.

In 1930, a 19-year-old Indian astrophysicist named Subrahmanyan Chandrasekhar found that a white dwarf should blow up as soon as it becomes more than 1.44 times as heavy as our Sun. This explosion is called a Type Ia supernova. Because it is moderated by the Chandrasekhar limit, its characteristics, such as its brightness, are always the same no matter what happened before the limit was breached. The two astronomy teams were able to use this information to determine how far away various white dwarfs were and, using the Doppler effect, how quickly they seemed to be moving away.

They found that the cosmological constant was positive. More specifically, assuming that the energy density of our universe was close to its critical density, they calculated the value of a number called omega-lambda to be about 0.7. The universe’s critical density determines how information behaves within the universe. For example, if the universe’s density is higher than its critical density, then astrophysicists predict it will be shaped like a sphere, and that particles that originate on one point of it will eventually circle back to it if they persisted for long enough – and the information will reset and ‘restart’. Our universe is assumed to be flat, “like a sheet of paper”, because its density is lower than the critical density. And in a flat universe, omega-lambda is the fraction of dark energy. This is how we say, “The universe is 70% dark energy”, etc. – because omega-lambda hovers around 0.7.

See: https://thewire.in/history/cosmological-constant-einstein-mach
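
For concreteness, the critical density mentioned above follows from the Hubble constant via ρ_c = 3H²/(8πG). A quick sketch (my own, assuming H₀ ≈ 67.7 km/s/Mpc) puts numbers behind "the universe is 70% dark energy":

Code:
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.0857e22    # meters per megaparsec
H0 = 67.7e3 / MPC_M  # Hubble constant converted to 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # critical density, kg/m^3
print(f"critical density          = {rho_crit:.2e} kg/m^3")
print(f"dark-energy density (70%) = {0.7 * rho_crit:.2e} kg/m^3")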

Spooky action at a distance, as Einstein quipped, is real and has been proven real, and his insertion of a filler to negate the universe's expansion, his "cosmological constant," was debunked in 1998 by Saul Perlmutter's team and their observations of Type Ia supernovae.
Hartmann352
 
I have seen different types of UFOs above my head.