1. The Rosetta mission lands on a comet
The Rosetta mission and its landing of the Philae probe on comet 67P/Churyumov-Gerasimenko was one of the biggest science success stories of 2014—and our Breakthrough of the Year. This image of the comet's surface, taken by Philae when it was just moments away from landing, is one of the mission's most iconic.
2. The origin of the penis
It's not a question a lot of scientists ponder out loud, but it's key to much of life on Earth: Where does the penis come from? This image of a snake embryo shows tiny buds where legs would be if snakes had legs—but they are actually the beginnings of the snake's paired penises. After studying how the organ gets its start in snakes, lizards, mice, and chickens, researchers said they've finally figured out where the penis comes from.
3. The Z machine
In October, researchers using the awesomely named Z machine at Sandia National Laboratories in New Mexico reported a significant advance in the race toward nuclear fusion. They detected significant numbers of neutrons—byproducts of fusion reactions—coming from the experiment. This not only demonstrates the viability of their approach, but also brings them closer to their ultimate goal of producing more energy than the fusion device takes in.
4. Two giant blue stars melding in space
Astronomers found further evidence of how phenomenally cool space is—and how little we know about it—when they discovered that the brightest object in a nearby star cluster isn't a single star, but two massive blue stars in the process of merging. We don't know what will happen when the merging is complete: Some models predict the explosive release of a massive amount of energy, but others hint at a less violent outcome.
JAVIER LORENZO/UNIVERSIDAD DE ALICANTE
5. Adrift on an Arctic ice floe
Home alone for the holidays? It could be worse. Somewhere in the Arctic Ocean, two Norwegian scientists are adrift on an ice floe, equipped with a year's worth of food and fuel—and one research hovercraft named SABVABAA (pictured). Right now, they're drifting northward along the submarine Lomonosov Ridge, taking sediment cores to learn about the polar environment more than 60 million years ago.
YNGVE KRISTOFFERSEN
6. An octopus supermom
This octopus died in 2011, but scientists didn't tell her amazing story until this year. She was spotted in the same place, holding her eggs in her arms, for a whopping 4.5 years—smashing the previous record for egg brooding. In 53 months, she was never seen eating, and over time she turned from pale purple to ghostly white. Like most female octopuses, she died after her watch ended—but her eggs hatched successfully.
7. Rocks made out of plastic
In June, researchers reported finding a new type of rock made out of plastic on the shores of Hawaii. Called a plastiglomerate, the rock is cobbled together from melted plastic and natural material such as sand and coral. The discovery suggests humanity's heavy hand in natural processes may be changing the world more than we realize.
Patricia Corcoran
8. An uncontacted tribe makes contact
This year, members of a previously isolated Amazonian tribe took a momentous step and made contact with the outside world. In this picture, a young man from the tribe clutches a bundle of used clothing, which some worry could have been a source of disease transmission, during initial contact with local villagers in July. Officials suspect that the tribe fled illegal logging and drug trafficking in their traditional homelands in Peru.
9. Spinosaurus, the swimming dinosaur
Meet Spinosaurus, the world's biggest carnivorous dinosaur—and the only known swimmer. In September, analysis of 97-million-year-old fossils revealed that the 15-meter-long Spinosaurus was not only the largest carnivorous dinosaur ever to exist, but also the only dinosaur known to have made its home in the water.
10. Comet dust found on Earth
This is a single particle of comet dust, found preserved in the ice and snow of Antarctica—the first time it's ever been found on Earth's surface. Comet dust is the oldest astronomical particle we can study and provides clues about how our solar system first formed, so scientists are excited to get their hands on this potential new source.
A new distance record has been set in the strange world of quantum teleportation. In a recent experiment, the quantum state (the direction it was spinning) of a particle of light was teleported 15.5 miles (25 kilometers) across an optical fiber, the farthest successful quantum teleportation feat yet. Advances in quantum teleportation could lead to better Internet and communication security, and get scientists closer to developing quantum computers.
About five years ago, researchers could only teleport quantum
information, such as which direction a particle is spinning, across a
few meters. Now, they can beam that information across several miles.
Quantum teleportation doesn't mean it's possible for a person to instantly pop from New York to London, or be instantly beamed aboard a spacecraft like in television's "Star Trek." Physicists can't transport matter this way, but they can transfer a particle's quantum state to a distant partner through quantum teleportation; because a classical signal is still needed to complete the transfer, no usable information travels faster than light. The trick works thanks to a bizarre quantum mechanics property called entanglement.
Quantum entanglement happens when two subatomic particles stay connected no matter how far apart they are. When one particle is disturbed, it instantly affects its entangled partner. It's impossible to tell the state of either particle until one is directly measured, but measuring one particle instantly determines the state of its partner.
In the new, record-breaking experiment, researchers from the University
of Geneva, NASA's Jet Propulsion Laboratory and the National Institute
of Standards and Technology used a superfast laser to pump out photons.
Every once in a while, two photons would become entangled.
Once the researchers had an entangled pair, they sent one down the
optical fiber and stored the other in a crystal at the end of the cable.
Then, the researchers shot a third particle of light at the photon
traveling down the cable. When the two collided, they obliterated each
other.
Though both photons vanished, the quantum information from the
collision appeared in the crystal that held the second entangled photon.
Going the distance
Quantum information has already been transferred dozens of miles, but
this is the farthest it's been transported using an optical fiber, and
then recorded and stored at the other end. Other quantum teleportation
experiments that beamed photons farther used lasers instead of optical
fibers to send the information. But unlike the laser method, the
optical-fiber method could eventually be used to develop technology like
quantum computers that are capable of extremely fast computing, or quantum cryptography that could make secure communication possible.
Physicists think quantum teleportation will lead to secure wireless
communication — something that is extremely difficult but important in
an increasingly digital world. Advances in quantum teleportation could
also help make online banking more secure.
The research was published Sept. 21 in the journal Nature Photonics.
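For readers who want to see the logic behind the protocol described above, here is a minimal Python/NumPy sketch of textbook single-qubit teleportation. It is an idealized simulation only: the real experiment used photons, an optical fiber and a crystal memory, and the example state, labels and random seed below are arbitrary choices for illustration.

```python
import numpy as np

# Single-qubit basis states and Pauli gates
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*vs):
    out = np.array([1], dtype=complex)
    for v in vs:
        out = np.kron(out, v)
    return out

# Qubit 0: the (arbitrary, made-up) state to be teleported
psi = np.cos(0.3) * zero + np.exp(0.7j) * np.sin(0.3) * one

# Qubits 1 and 2: the entangled pair; qubit 1 travels down the "fiber",
# qubit 2 sits in the distant "crystal"
phi_plus = (kron(zero, zero) + kron(one, one)) / np.sqrt(2)
state = kron(psi, phi_plus).reshape(4, 2)   # rows: qubits 0 & 1, columns: qubit 2

# Joint (Bell-basis) measurement of qubits 0 and 1 -- the "collision" step
bell = {
    "Phi+": (kron(zero, zero) + kron(one, one)) / np.sqrt(2),
    "Phi-": (kron(zero, zero) - kron(one, one)) / np.sqrt(2),
    "Psi+": (kron(zero, one) + kron(one, zero)) / np.sqrt(2),
    "Psi-": (kron(zero, one) - kron(one, zero)) / np.sqrt(2),
}
correction = {"Phi+": I, "Phi-": Z, "Psi+": X, "Psi-": Z @ X}

probs = np.array([np.linalg.norm(b.conj() @ state) ** 2 for b in bell.values()])
outcome = np.random.default_rng(0).choice(list(bell), p=probs / probs.sum())

# Collapse onto the outcome; what remains is the (unnormalized) state of qubit 2
remote = bell[outcome].conj() @ state
remote /= np.linalg.norm(remote)

# The classical channel: the two-bit outcome tells the far end which gate to apply
recovered = correction[outcome] @ remote
print(outcome, "fidelity =", round(abs(np.vdot(psi, recovered)) ** 2, 6))  # -> 1.0
```

What the sketch makes explicit is that the distant qubit becomes a perfect copy only after the two-bit measurement outcome, sent over an ordinary classical channel, tells the receiver which correction to apply; that classical step is why teleportation cannot carry usable information faster than light.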
The idea of traversable wormholes has been science fiction fodder ever since they first emerged as solutions of Einstein's general theory of relativity, but do wormholes even exist in nature? Actually, we have no idea whether they exist, but if they do, theoretical physicists have proposed that they could act as portals into the future and the past or connect two distant regions of space.
But before you grab your Grays Sports Almanac and get ready for some temporal mischief, there’s one huge caveat to this idea — only photons may travel… and even photons may be too much of a stretch for the hypothetical shortcut through spacetime.
In a paper published to the arXiv preprint service (and submitted to the journal Physical Review D), theoretical physicist Luke Butcher of the University of Cambridge has revisited wormhole theory and potentially found a way to stabilize these notoriously unstable entities. In the late 1980s, physicist Kip Thorne, of the California
Institute of Technology (Caltech), theorized that to make a wormhole
‘traversable’ — as in to actually make these spacetime shortcuts stable
enough to travel through — some form of negative energy would be
required. In the quantum world, this negative energy could come in the
form of Casimir energy.
It is well known that if two perfectly smooth conducting plates are held very close together in a vacuum, quantum effects between the plates will produce a net attractive (or, in certain other configurations, repulsive) force between the two. This is caused by longer-wavelength vacuum fluctuations being too large to fit between the plates, leaving a net negative energy in the gap when compared with the surrounding "normal" space. As realized by Thorne and his Caltech team, this Casimir energy could be applied to the neck of a wormhole, potentially holding it open long enough for something to pass through.
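To get a feel for the (tiny) numbers involved, here is a short Python sketch of the standard result for ideal parallel conducting plates. It illustrates the plate geometry described above, not Butcher's wormhole calculation, and the gap sizes are arbitrary illustrative values.

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_energy_per_area(d):
    """Casimir energy per unit area (J/m^2) for ideal parallel plates a distance d (m) apart."""
    return -np.pi**2 * HBAR * C / (720 * d**3)

def casimir_pressure(d):
    """Casimir pressure (Pa); the minus sign marks an energy deficit relative to empty space."""
    return -np.pi**2 * HBAR * C / (240 * d**4)

for d in (1e-6, 1e-7, 1e-8):  # gaps of 1000 nm, 100 nm, 10 nm
    print(f"gap {d*1e9:6.0f} nm: E/A = {casimir_energy_per_area(d):.2e} J/m^2, "
          f"pressure = {casimir_pressure(d):.2e} Pa")
```

Even at a 10-nanometre gap, where the attraction approaches atmospheric pressure, the negative energy on tap is minute, which is part of why the discussion above sticks to quantum-sized throats rather than DeLorean-sized ones.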
Alas, we are talking about quantum-sized wormhole throats, meaning
Marty McFly’s speeding DeLorean will be left revving in the 1985 parking
lot, unable to squeeze through. But even if some quantum-sized traveler
could pass through the wormhole’s neck, the wormhole would still likely collapse very quickly. On reevaluating this scenario, Butcher has identified some more
stable wormhole configurations and, in certain situations, the wormhole
collapse could be prevented for an “arbitrarily long time.” But for
this to happen, the wormhole needs to be very long and have a very
narrow throat. In this case it seems possible that photons could
traverse the wormhole. “(T)he negative Casimir energy does allow the wormhole to
collapse extremely slowly, its lifetime growing without bound as the
throat-length is increased,” writes Butcher. “We find that the throat
closes slowly enough that its central region can be safely traversed by a
pulse of light.” Butcher admits that although it’s not clear from his
calculations whether the light pulse will be able to complete its
journey from one end to the other, there is a tantalizing possibility
for sending signals faster than the speed of light or even back in time.
“These results tentatively suggest that a macroscopic traversable
wormhole might be sustained by its own Casimir energy, providing a
mechanism for faster-than-light communication and closed causal curves.” For the moment, this work is highly theoretical, but, as
pointed out by Matt Visser of Victoria University of Wellington, New
Zealand, in New Scientist on Tuesday, this research could renew interest in the study of wormholes and their potential spacetime-bridging capabilities. So if we were to look for physical evidence of wormholes, could
this research help us? Could we perhaps look for some kind of unique polarization of light that has traveled from another part of the Universe, or from some other time, appearing randomly in our local volume of spacetime? For answers to these questions, and as to whether this may spawn some kind of faster-than-light communications technology, we'll likely have to wait until the theoretical physicists have crunched more numbers.
Why Time Can't Go Backward
“Time is what keeps everything from happening at once,” wrote Ray Cummings in his 1922 science fiction novel “The Girl in the Golden Atom,” which sums up time’s function quite nicely. But how does time stop everything from happening at once? What mechanism drives time forward, but not backward?
In a recent study published in the journal Physical Review Letters, a group of theoretical physicists re-investigate the “Arrow of Time” — a concept that describes the relentless forward march of time — and highlight a different way of looking at how time manifests itself over universal scales.
Traditionally, time is described by the “past hypothesis” that assumes
that any given system begins in a low entropy state and then, driven by
thermodynamics, its entropy increases. In a nutshell: The past is low
entropy and the future is high entropy, a concept known as thermodynamic time asymmetry.
In our everyday experience, we can find many examples of increasing
entropy, such as a gas filling a room or an ice cube melting. In these
examples, an irreversible increase in entropy (and therefore disorder)
is observed.
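To make that increase concrete, here is a back-of-the-envelope Python calculation using two standard textbook formulas; the specific amounts (1 mole of gas doubling its volume, a 10-gram ice cube) are illustrative choices, not figures from the study.

```python
import math

R = 8.314         # gas constant, J/(mol*K)
L_FUSION = 334.0  # latent heat of fusion of water, J/g
T_MELT = 273.15   # melting temperature of ice, K

# A gas "filling a room": 1 mole of ideal gas freely expanding to twice its volume
delta_S_gas = 1.0 * R * math.log(2.0)    # dS = n*R*ln(V2/V1)

# An ice cube melting: 10 g of ice melting at 0 degrees Celsius
delta_S_ice = 10.0 * L_FUSION / T_MELT   # dS = m*L_f/T

print(f"gas expansion: dS = +{delta_S_gas:.2f} J/K")  # about +5.8 J/K
print(f"ice melting:   dS = +{delta_S_ice:.2f} J/K")  # about +12.2 J/K
```

Both changes come out positive; running either process in reverse in an isolated system would require the entropy to fall, which is exactly what thermodynamics forbids, and that one-way bookkeeping is the thermodynamic arrow the past hypothesis builds on.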
If this is applied on a universal scale, it is presumed that the Big
Bang spawned the Universe in a low entropy state — i.e. a state of
minimum entropy. Over the aeons, as the Universe expanded and cooled,
the entropy of this large-scale system has increased. Therefore, as the
hypothesis goes, time is intrinsically linked with the degree of
entropy, or disorder, in our Universe.
But there are several problems with this idea.
Several lines of observational evidence point to an early Universe, just after the Big Bang, that was a hot and extremely disordered mess of primordial particles. As the Universe matured and cooled, gravity took over and made the Universe more ordered and more complex — from the cooling clouds of gas, stars formed and planets evolved through gravitational collapse. Eventually, organic chemistry became possible, giving rise to life and to humans who philosophize about time and space.
On a Universal scale, therefore, "disorder" has effectively decreased, not increased as the "past hypothesis" presumes.
This, argues co-investigator Flavio Mercati of the Perimeter Institute (PI) for Theoretical Physics in Ontario, Canada, is an issue with how entropy is measured.
As entropy is a physical quantity with dimensions (like energy and temperature), there needs to be an external reference frame against which it can be measured. "This can be done for subsystems of the universe because the rest of the universe sets these references for them, but the whole universe has, by definition, nothing exterior to it with respect to which to define these things," Mercati wrote in an email to Discovery News.
So if not entropy, what could be driving universal time forward? The researchers' answer is complexity.
Complexity is a dimensionless quantity that, in its most basic form, describes how much structure a system contains. So, if one looks at our Universe, complexity is directly linked with time; as time progresses, the Universe becomes increasingly structured.
“The question we seek to answer in our paper is: what set these systems
in that very low-entropy state in the first place? Our answer is: gravity,
and its tendency to create order and structure (complexity) from chaos,”
said Mercati.
To test this idea, Mercati and his colleagues created basic computer models to simulate particles in a toy universe. They found that, no matter how the simulation was run, the toy universe's complexity always increased, and never decreased, with time.
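The paper's actual simulations are not reproduced here, but the flavour of such a toy universe can be sketched in a few dozen lines of Python: a handful of self-gravitating particles integrated forward in time, with a dimensionless complexity measure printed as structure forms. The measure below (roughly, the system's overall size divided by its typical inter-particle spacing) is in the spirit of the shape complexity used in this line of work, and the particle count, time step and softening are arbitrary illustrative choices.

```python
import numpy as np

def accelerations(pos, masses, soft=0.05):
    """Softened Newtonian gravity with G = 1."""
    diff = pos[None, :, :] - pos[:, None, :]             # diff[i, j] = r_j - r_i
    dist3 = (np.sum(diff**2, axis=-1) + soft**2) ** 1.5
    np.fill_diagonal(dist3, np.inf)                       # no self-force
    return np.sum(masses[None, :, None] * diff / dist3[:, :, None], axis=1)

def complexity(pos, masses):
    """Dimensionless proxy: rms separation divided by mean harmonic separation."""
    diff = pos[None, :, :] - pos[:, None, :]
    d = np.sqrt(np.sum(diff**2, axis=-1))
    i, j = np.triu_indices(len(pos), k=1)
    w, d = masses[i] * masses[j], d[i, j]
    l_rms = np.sqrt(np.sum(w * d**2) / np.sum(w))
    l_harm = np.sum(w) / np.sum(w / d)
    return l_rms / l_harm

rng = np.random.default_rng(1)
N = 50
masses = np.ones(N)
pos = rng.normal(scale=1.0, size=(N, 3))   # nearly featureless initial "soup"
vel = rng.normal(scale=0.1, size=(N, 3))
dt, steps = 5e-4, 4000

acc = accelerations(pos, masses)
for step in range(steps + 1):
    if step % 800 == 0:
        print(f"t = {step*dt:4.2f}  complexity = {complexity(pos, masses):.2f}")
    vel += 0.5 * dt * acc                  # leapfrog: kick
    pos += dt * vel                        #           drift
    acc = accelerations(pos, masses)
    vel += 0.5 * dt * acc                  #           kick
```

In runs like this the printed measure tends to climb as gravity turns the smooth initial cloud into clumps, which is the qualitative behaviour the researchers describe; it is only a rough illustration, not their model.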
From the Big Bang, the Universe started in its lowest-complexity state (the hot 'soup' of disordered particles and energy). Then, as the Universe cooled to the point that gravity began to take over, gases clumped together, stars formed and galaxies evolved. The Universe became inexorably more complex, with gravity as the driving force of this increase in complexity.
“Every solution of the gravitational toy model we studied has this
property of having somewhere in the middle a very homogeneous, chaotic
and unstructured state, which looks very much like the plasma soup that
constituted the universe at the time the Cosmic Microwave Background was
created,” said Mercati. “Then in both time directions from that state
gravity enhances the inhomogeneities and creates a lot of structure and
order, in an irreversible way.”
As the Universe matures, he added, the subsystems become isolated
enough so that other forces set up the conditions for the ‘classical’
arrow of time to dominate in low-entropy subsystems. In these
subsystems, such as daily life on Earth, entropy can take over, creating
a “thermodynamical arrow of time.”
Over Universal scales, our perception of time is driven by the
continuous growth of complexity, but in these subsystems, entropy
dominates.
“The universe is a structure whose complexity is growing,” said Mercati in a PI press release.
“The universe is made up of big galaxies separated by vast voids. In
the distant past, they were more clumped together. Our conjecture is
that our perception of time is the result of a law that determines an
irreversible growth of complexity.”
The next step in this research would be to look for observational
evidence, something Mercati and his team are working on. “…we don’t know
yet whether there is any (observational) support, but we know what kind
of experiments have a chance of testing our idea. These are
cosmological observations.”
For now, he hasn’t revealed what kinds of cosmological observations
will be investigated, only that they will detailed in an upcoming, and
likely fascinating, paper.
This grid was made by 4D printing. These images show how the grid could form a convex or concave surface
Using a new technique known as 4D printing, researchers can print out dynamic 3D structures capable of changing their shapes over time.
Such 4D-printed items could one day be used in everything from medical implants to home appliances, scientists added.
Today's 3D printing creates items from a wide variety of materials — plastic, ceramic, glass, metal, and even more unusual ingredients such as chocolate and living cells. The machines work by setting down layers of material just like ordinary printers lay down ink, except 3D printers can also deposit flat layers on top of each other to build 3D objects.
"Today, this technology can be found not just in industry, but [also] in households for less than $1,000," said lead study author Dan Raviv, a mathematician at MIT. "Knowing you can print almost anything, not just 2D paper, opens a window to unlimited opportunities, where toys, household appliances and tools can be ordered online and manufactured in our living rooms."
Now, in a further step, Raviv and his colleagues are developing 4D printing, which involves 3D printing items that are designed to change shape after they are printed. [The 10 Weirdest Things Created By 3D Printing]
"The most exciting part is the numerous applications that can emerge from this work," Raviv told Live Science. "This is not just a cool project or an interesting solution, but something that can change the lives of many."
In a report published online today (Dec. 18) in the journal Scientific Reports, the researchers explain how they printed 3D structures using two materials with different properties. One material was a stiff plastic, and stayed rigid, while the other was water absorbent, and could double in volume when submerged in water. The precise formula of this water-absorbent material, developed by 3D-printing company Stratasys in Eden Prairie, Minnesota, remains a secret.
The researchers printed up a square grid, measuring about 15 inches (38 centimeters) on each side. When they placed the grid in water, they found that the water-absorbent material could act like joints that stretch and fold, producing a broad range of shapes with complex geometries. For example, the researchers created a 3D-printed shape that resembled the initials "MIT" that could transform into another shape resembling the initials "SAL."
"In the future, we imagine a wide range of applications," Raviv said. These could include appliances that can adapt to heat and improve functionality or comfort, childcare products that can react to humidity or temperature, and clothing and footwear that will perform better by sensing the environment, he said.
In addition, 4D-printed objects could lead to novel medical implants. "Today, researchers are printing biocompatible parts to be implanted in our body," Raviv said. "We can now generate structures that will change shape and functionality without external intervention."
One key health-care application might be cardiac stents, tubes placed inside the heart to aid healing. "We want to print parts that can survive a lifetime inside the body if necessary," Raviv said.
The researchers now want to create both larger and smaller 4D-printed objects. "Currently, we've made items a few centimeters in size," Raviv said. "For things that go inside the body, we want to go 10 to 100 times smaller. For home appliances, we want to go 10 times larger."
Raviv cautioned that a great deal of research is needed to improve the materials used in 4D printing. For instance, although the 4D-printed objects the researchers developed can withstand a few cycles of wetting and drying, after several dozen cycles of folding and unfolding, the materials lose their ability to change shape. The scientists said they would also like to develop materials that respond to factors other than water, such as heat and light.
Tsunami file photo in Tamil Nadu, India
The Indian Ocean tsunami was one of the worst natural disasters in history. Enormous waves struck countries in South Asia and East Africa with little to no warning, killing more than 230,000 people. The destruction played out on television screens around the world, fed by shaky home videos. The outpouring of aid in response to the devastation in Indonesia, Sri Lanka, Thailand and elsewhere was unprecedented.
The disaster raised awareness of tsunamis and prompted nations to pump money into research and warning systems. On the 10th anniversary of the deadly tsunami, greatly expanded networks of seismic monitors and ocean buoys are on alert for the next killer wave in the Indian Ocean, the Pacific and the Caribbean. In fact, tsunami experts can now forecast how tsunamis will flood distant coastlines hours before the waves arrive.
But hurdles remain in saving the lives of everyone under threat from tsunamis. No amount of warning will help those who need to seek immediate shelter away from beaches, disaster experts said. "A lot of times, you're not going to get any warning near these zones where there are large earthquakes, so we have to prepare the public to interpret the signs and survive," said Mike Angove, head of the National Oceanic and Atmospheric Administration's (NOAA) tsunami program. In 2004, the tsunami waves approached coastal Indonesia just nine minutes after the massive magnitude-9.1 earthquake stopped shaking, Angove said.
On alert
Since 2004, geologists have uncovered evidence of several massive tsunamis in buried sand layers preserved in Sumatran caves. It turns out that the deadly waves aren't as rare in the Indian Ocean as once thought. "We had five fatal tsunamis off the coast of Sumatra prior to 2004," said Paula Dunbar, a scientist at NOAA's National Geophysical Data Center. Over the past 300 years, 69 tsunamis were seen in the Indian Ocean, she said.
Despite the risk, there was no oceanwide tsunami warning system in the region. Now, a $450 million early-alert network is fully operational, though it is plagued with equipment problems. Even the global monitoring network loses 10 percent of its buoys each year, according to NOAA. Essentially built from scratch, the Indian Ocean Tsunami Warning System includes more than 140 seismometers, about 100 sea-level gauges and several buoys that detect tsunamis. More buoys were installed, but they have been vandalized or accidentally destroyed. The buoys and gauges help detect whether an earthquake triggered a tsunami. The global network of Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys, which detects passing tsunami waves, has also expanded, from six buoys in 2004 to 60 buoys in 2014, Angove said.
Regional tsunami alert centers have been built in Australia, India and Indonesia. Scientists at the centers decide whether a tsunami is likely based on information from the network of sensors, estimate the probable size, then alert governments to get the warning out through sirens, TV, radio and text alerts.
Getting the warnings down to people living in remote coastal areas is one of the biggest hurdles for the new system. Not all warnings reach the local level. And not every tsunami earthquake is strong enough to scare people away from shorelines. In Sumatra's Mentawai Islands, a 2010 tsunami killed more than 400 people because residents failed to evacuate in the short time between the earthquake and the tsunami's arrival.
The shaking was simply not strong enough to trigger people's fear of tsunamis, even though islanders had self-evacuated after a 2007 earthquake, according to an investigation by the University of Southern California's Tsunami Research Center. There was also no clear-cut warning from the regional tsunami alert system. "Tsunami earthquakes remain a major challenge," Emile Okal, a seismologist at Northwestern University in Evanston, Illinois, said Dec. 15 at the American Geophysical Union's (AGU) annual meeting in San Francisco.
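The reason forecasters can give distant coasts hours of notice while nearby shores get only minutes comes down to simple wave physics. Below is a rough Python illustration using the standard shallow-water speed formula; the 4,000-metre depth and 1,600-kilometre distance are illustrative round numbers, not values from any particular forecast.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Shallow-water wave speed sqrt(g*h); valid because tsunami wavelengths far exceed ocean depth."""
    return math.sqrt(G * depth_m)

def travel_time_hours(distance_km, depth_m):
    return distance_km * 1000.0 / tsunami_speed(depth_m) / 3600.0

v = tsunami_speed(4000)  # open-ocean depth of about 4 km
print(f"speed over 4000 m of water: {v:.0f} m/s ({v*3.6:.0f} km/h)")
print(f"time to cross 1600 km:      {travel_time_hours(1600, 4000):.1f} hours")
# A coast a few tens of kilometres from the rupture, by contrast, has only minutes.
```

Real warning-centre forecasts fold in detailed bathymetry and live buoy readings, but this basic arithmetic is why coasts a couple of thousand kilometres from the 2004 rupture had on the order of two hours before the waves arrived, while nearby Sumatran shores had almost none.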
This graph shows the latitudinal distribution of humidity in Mars' atmosphere during the year, according to data collected by the SPICAM infrared instrument
Russian
scientists from the Space Research Institute of the Russian Academy of
Sciences and the Moscow Institute of Physics and Technology (MIPT),
together with their French and American colleagues, have created a 'map'
of the distribution of water vapour in Mars' atmosphere. Their research
includes observations of seasonal variations in atmospheric
concentrations using data collected over ten years by the Russian-French
SPICAM spectrometer aboard the Mars Express orbiter. This is the
longest period of observation and provides the largest volume of data
about water vapour on Mars.
The first SPICAM (Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars) instrument was built for the Russian Martian orbiter Mars 96, which was lost in a launch failure. The new, updated version of the instrument was built with the participation of the Space Research Institute as part of the agreement between Roscosmos and the French space agency CNES for the Mars Express orbiter. The apparatus was launched on June 2, 2003 from the Baikonur Cosmodrome on a Russian Soyuz launch vehicle with a Fregat upper stage. At the end of December 2003, Mars Express entered orbit around Mars and since then has been operating successfully,
collecting data on the planet and its surroundings. Staff of the Space Research Institute and MIPT, including Alexander
Trokhimovsky, Anna Fyodorova, Oleg Korablyov and Alexander Rodin,
together with their colleagues from the French laboratory LATMOS and
NASA's Goddard Center, have analysed a mass of data obtained by
observing water vapour in Mars' atmosphere using an infrared
spectrometer that is part of the SPICAM instrument over a period of five
Martian years (about 10 Earth years as a year on Mars is equal to 1.88
Earth years). Conditions on Mars -- low temperatures and low atmospheric pressure
-- do not allow water to exist in liquid form in open reservoirs as it
would on Earth. However, Mars has a thick layer of permafrost, with large reserves of frozen water concentrated at the polar caps. There is water vapour in the atmosphere, although at very low levels compared to the quantities found here on Earth. If the entire volume of water in the atmosphere were spread evenly over the surface of the planet, the thickness of the water layer would not exceed 10-20 microns, while on Earth such a layer would be thousands of times thicker.
Data from the SPICAM experiment has allowed scientists to create a picture of the annual cycle of variation in water vapour concentration in the atmosphere. Scientists have been observing the atmosphere during missions to Mars since the end of the 1970s in order to make the picture more precise, as well as to trace its variability.
The water vapour content of the atmosphere reaches a maximum of 60-70 precipitable microns in the northern regions during the summer season. The summer maximum in the southern hemisphere is significantly lower -- about 20 microns. The scientists have also established a significant reduction, of 5-10 microns, in the concentration of water vapour during global dust storms, which is probably connected to the removal of water vapour from the atmosphere through adsorption and condensation onto surfaces.
"This research, based on one of the longest periods of monitoring of
the Martian climate, has made an important contribution to the
understanding of the Martian hydrological cycle -- the most important of
the climate mechanisms which could potentially support the existence of
biological activity on the planet," said co-author of the research
Alexander Rodin, deputy head of the Infrared Spectroscopy of Planetary
Atmospheres Laboratory at MIPT and senior scientific researcher at the
Space Research Institute.
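The "thousands of times thicker" comparison above is easy to sanity-check with a commonly cited global-average figure of about 25 millimetres of precipitable water for Earth's atmosphere (an assumed round number, not one taken from the study).

```python
# Rough check of the column-abundance comparison (illustrative global averages)
mars_column_um = 15.0        # Mars: roughly 10-20 microns of precipitable water
earth_column_um = 25_000.0   # Earth: about 25 mm of precipitable water = 25,000 microns

ratio = earth_column_um / mars_column_um
print(f"Earth's water-vapour column is roughly {ratio:,.0f} times thicker than Mars'.")
# -> on the order of a couple of thousand, consistent with "thousands of times thicker"
```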
Alexander Trokhimovskiy, Anna Fedorova, Oleg Korablev, Franck Montmessin, Jean-Loup Bertaux, Alexander Rodin, Michael D. Smith. Mars’ water vapor mapping by the SPICAM IR spectrometer: Five martian years of observations. Icarus, 2014; DOI: 10.1016/j.icarus.2014.10.007
A dust devil reaching half a mile above the plain of Amazonis Planitia is twisted by the wind at different levels above the surface
Spinning up a
dust devil in the thin air of Mars requires a stronger updraft than is
needed to create a similar vortex on Earth, according to research at The
University of Alabama in Huntsville (UAH).
Early results from this research in UAH's Atmospheric Science
Department are scheduled for presentation today at the American
Geophysical Union's fall meeting in San Francisco. "To start a dust devil on Mars you need convection, a strong
updraft," said Bryce Williams, an atmospheric science graduate student
at UAH. "We looked at the ratio between convection and surface
turbulence to find the sweet spot where there is enough updraft to
overcome the low level wind and turbulence. And on Mars, where we think
the process that creates a vortex is more easily disrupted by frictional
dissipation -- turbulence and wind at the surface -- you need twice as
much convective updraft as you do on Earth." Williams and UAH's Dr. Udaysankar Nair looked for the dust devil
sweet spot by combining data from a study of Australian dust devils with
meteorological observations collected during the Viking Lander mission.
They used that data and a one-dimensional Mars planetary boundary layer
model to find thresholds of the ratio between convection and surface
friction velocities that identify conditions conducive to forming dust
devils. While dust devils on Earth are seldom more than meteorological
curiosities, on Mars they sometimes grow to the size of terrestrial
tornados, with a funnel more than 100 meters wide stretching as much as
12 miles above the Martian surface. Williams and Nair are looking at the effects dust devils have on
lifting dust into the Martian atmosphere. Dust in the Martian air and
its radiative forcing are important modulators of the planet's climate. "The Martian air is so thin, dust has a greater effect on energy
transfers in the atmosphere and on the surface than it does in Earth's
thick atmosphere," said Nair, an associate professor of atmospheric
science. Dust in the Martian air cools the surface during the day and
emits long-wave radiation that warms the surface at night.
In October 1984 I arrived at
Oxford University, trailing a large steamer trunk containing a couple
of changes of clothing and about five dozen textbooks. I had a freshly
minted bachelor’s degree in physics from Harvard, and I was raring to
launch into graduate study. But within a couple of weeks, the more
advanced students had sucked the wind from my sails. Change fields now
while you still can, many said. There’s nothing happening in fundamental
physics.
Then, just a couple of months later, the prestigious (if tamely titled) journal Physics Letters B published
an article that ignited the first superstring revolution, a sweeping
movement that inspired thousands of physicists worldwide to drop their
research in progress and chase Einstein’s long-sought dream of a unified
theory. The field was young, the terrain fertile and the atmosphere
electric. The only thing I needed to drop was a neophyte’s inhibition to
run with the world’s leading physicists. I did. What followed proved to
be the most exciting intellectual odyssey of my life.
That was 30 years ago this month, making the moment
ripe for taking stock: Is string theory revealing reality’s deep laws?
Or, as some detractors have claimed, is it a mathematical mirage that
has sidetracked a generation of physicists?
Unification has become synonymous
with Einstein, but the enterprise has been at the heart of modern
physics for centuries. Isaac Newton united the heavens and Earth,
revealing that the same laws governing the motion of the planets and the
Moon described the trajectory of a spinning wheel and a rolling rock.
About 200 years later, James Clerk Maxwell took the unification baton
for the next leg, showing that electricity and magnetism are two aspects
of a single force described by a single mathematical formalism.
The next two steps, big ones at that, were indeed
vintage Einstein. In 1905, Einstein linked space and time, showing that
motion through one affects passage through the other, the hallmark of
his special theory of relativity. Ten years later, Einstein extended
these insights with his general theory of relativity, providing the most
refined description of gravity, the force governing the likes of stars
and galaxies. With these achievements, Einstein envisioned that a grand
synthesis of all of nature’s forces was within reach.
But by 1930, the landscape
of physics had thoroughly shifted. Niels Bohr and a generation of
intrepid explorers ventured deep into the microrealm, where they
encountered quantum mechanics, an enigmatic theory formulated with
radically new physical concepts and mathematical rules. While
spectacularly successful at predicting the behavior of atoms and
subatomic particles, the quantum laws looked askance at Einstein’s
formulation of gravity. This set the stage for more than a half-century
of despair as physicists valiantly struggled, but repeatedly failed, to
meld general relativity and quantum mechanics, the laws of the large and
small, into a single all-encompassing description.
Such was the case until December 1984, when John
Schwarz, of the California Institute of Technology, and Michael Green,
then at Queen Mary College, published a once-in-a-generation paper
showing that string theory could overcome the mathematical antagonism
between general relativity and quantum mechanics, clearing a path that
seemed destined to reach the unified theory.
The idea underlying string unification is as simple as
it is seductive. Since the early 20th century, nature’s fundamental
constituents have been modeled as indivisible particles—the most
familiar being electrons, quarks and neutrinos—that can be pictured as
infinitesimal dots devoid of internal machinery. String theory
challenges this by proposing that at the heart of every particle is a
tiny, vibrating string-like filament. And, according to the theory, the
differences between one particle and another—their masses, electric
charges and, more esoterically, their spin and nuclear properties—all
arise from differences in how their internal strings vibrate.
Much as the sonorous tones of a cello arise from the
vibrations of the instrument’s strings, the collection of nature’s
particles would arise from the vibrations of the tiny filaments
described by string theory. The long list of disparate particles that
had been revealed over a century of experiments would be recast as
harmonious “notes” comprising nature’s score.
Most gratifying, the mathematics revealed that one of
these notes had properties precisely matching those of the “graviton,” a
hypothetical particle that, according to quantum physics, should carry
the force of gravity from one location to another. With this, the
worldwide community of theoretical physicists looked up from their
calculations. For the first time, gravity and quantum mechanics were
playing by the same rules. At least in theory.
I began learning
the mathematical underpinnings of string theory during an intense
period in the spring and summer of 1985. I wasn’t alone. Graduate
students and seasoned faculty alike got swept up in the potential of
string theory to be what some were calling the “final theory” or the
“theory of everything.” In crowded seminar rooms and flyby corridor
conversations, physicists anticipated the crowning of a new order.
But the simplest and most important question loomed
large. Is string theory right? Does the math explain our universe? The
description I’ve given suggests an experimental strategy. Examine
particles and if you see little vibrating strings, you’re done. It’s a
fine idea in principle, but string theory’s pioneers realized it was
useless in practice. The math set the size of strings to be about a
million billion times smaller than even the minute realms probed by the
world’s most powerful accelerators. Save for building a collider the
size of the galaxy, strings, if they’re real, would elude brute force
detection.
Making the situation seemingly more dire, researchers
had come upon a remarkable but puzzling mathematical fact. String
theory’s equations require that the universe has extra dimensions beyond
the three of everyday experience—left/right, back/forth and up/down.
Taking the math to heart, researchers realized that their backs were to
the wall. Make sense of extra dimensions—a prediction that’s grossly at
odds with what we perceive—or discard the theory.
String theorists pounced on an idea
first developed in the early years of the 20th century. Back then,
theorists realized that there might be two kinds of spatial dimensions:
those that are large and extended, which we directly experience, and
others that are tiny and tightly wound, too small for even our most
refined equipment to reveal. Much as the spatial extent of an enormous
carpet is manifest, but you have to get down on your hands and knees to
see the circular loops making up its pile, the universe might have three
big dimensions that we all navigate freely, but it might also have
additional dimensions so minuscule that they’re beyond our observational
reach.
In a paper submitted for publication a day after New
Year’s 1985, a quartet of physicists—Philip Candelas, Gary Horowitz,
Andrew Strominger and Edward Witten—pushed this proposal one step
further, turning vice to virtue. Positing that the extra dimensions were
minuscule, they argued, would not only explain why we haven’t seen
them, but could also provide the missing bridge to experimental
verification.
Strings are so small that when they
vibrate they undulate not just in the three large dimensions, but also
in the additional tiny ones. And much as the vibrational patterns of air
streaming through a French horn are determined by the twists and turns
of the instrument, the vibrational patterns of strings would be
determined by the shape of the extra dimensions. Since these vibrational
patterns determine particle properties like mass, electric charge and
so on—properties that can be detected experimentally—the quartet had
established that if you know the precise geometry of the extra
dimensions, you can make predictions about the results that certain
experiments would observe.
For me, deciphering the paper’s equations was one of
those rare mathematical forays bordering on spiritual enlightenment.
That the geometry of hidden spatial dimensions might be the universe’s
Rosetta stone, embodying the secret code of nature’s fundamental
constituents—well, it was one of the most beautiful ideas I’d ever
encountered. It also played to my strength. As a mathematically oriented
physics student, I’d already expended great effort studying topology
and differential geometry, the very tools needed to analyze the
mathematical form of extra-dimensional spaces.
And so, in the mid-1980s, with a small group of
researchers at Oxford, we set our sights on extracting string theory’s
predictions. The quartet’s paper had delineated the category of
extra-dimensional spaces allowed by the mathematics of string theory
and, remarkably, only a handful of candidate shapes were known. We
selected one that seemed most promising, and embarked on grueling days
and sleepless nights, filled with arduous calculations in higher
dimensional geometry and fueled by grandiose thoughts of revealing
nature’s deepest workings.
The final results that we
found successfully incorporated various established features of particle
physics and so were worthy of attention (and, for me, a doctoral
dissertation), but were far from providing evidence for string theory.
Naturally, our group and many others turned back to the list of allowed
shapes to consider other possibilities. But the list was no longer
short. Over the months and years, researchers had discovered ever larger
collections of shapes that passed mathematical muster, driving the
number of candidates into the thousands, millions, billions and then,
with insights spearheaded in the mid-1990s by Joe Polchinski, into
numbers so large that they’ve never been named.
Against this embarrassment of riches, string theory
offered no directive regarding which shape to pick. And as each shape
would affect string vibrations in different ways, each would yield
different observable consequences. The dream of extracting unique
predictions from string theory rapidly faded.
From a public relations standpoint, string theorists
had not prepared for this development. Like the Olympic athlete who
promises eight gold medals but wins “only” five, theorists had
consistently set the bar as high as it could go. That string theory
unites general relativity and quantum mechanics is a profound success.
That it does so in a framework with the capacity to embrace the known
particles and forces makes the success more than theoretically relevant.
Seeking to go even further and uniquely explain the detailed properties
of the particles and forces is surely a noble goal, but one that lies
well beyond the line dividing success from failure.
Nevertheless, critics who had bristled at string
theory’s meteoric rise to dominance used the opportunity to trumpet the
theory’s demise, blurring researchers’ honest disappointment of not
reaching hallowed ground with an unfounded assertion that the approach
had crashed. The cacophony grew louder still with a controversial turn
articulated most forcefully by one of the founding fathers of string
theory, the Stanford University theoretical physicist Leonard Susskind.
In August 2003,
I was sitting with Susskind at a conference in Sigtuna, Sweden,
discussing whether he really believed the new perspective he’d been
expounding or was just trying to shake things up. “I do like to stir the
pot,” he told me in hushed tones, feigning confidence, “but I do think
this is what string theory’s been telling us.”
Susskind was arguing that if the mathematics does not
identify one particular shape as the right one for the extra dimensions,
perhaps there isn’t a single right shape. That is, maybe all of the
shapes are right shapes in the sense that there are many universes, each
with a different shape for the extra dimensions.
Our universe would then be just one of a vast
collection, each with detailed features determined by the shape of their
extra dimensions. Why, then, are we in this universe instead of any
other? Because the shape of the hidden dimensions yields the spectrum of
physical features that allow us to exist. In another universe, for
example, the different shape might make the electron a little heavier or
the nuclear force a little weaker, shifts that would cause the quantum
processes that power stars, including our sun, to halt, interrupting the
relentless march toward life on Earth.
Radical though this proposal may be, it was supported
by parallel developments in cosmological thinking that suggested that
the Big Bang may not have been a unique event, but was instead one of
innumerable bangs spawning innumerable expanding universes, called the
multiverse. Susskind was suggesting that string theory augments this
grand cosmological unfolding by adorning each of the universes in the
multiverse with a different shape for the extra dimensions.
With or without string theory, the multiverse is a
highly controversial schema, and deservedly so. It not only recasts the
landscape of reality, but shifts the scientific goal posts. Questions
once deemed profoundly puzzling—why do nature’s numbers, from particle
masses to force strengths to the energy suffusing space, have the
particular values they do?—would be answered with a shrug. The detailed
features we observe would no longer be universal truths; instead, they’d
be local bylaws dictated by the particular shape of the extra
dimensions in our corner of the multiverse.
Most physicists, string theorists among them, agree
that the multiverse is an option of last resort. Yet, the history of
science has also convinced us to not dismiss ideas merely because they
run counter to expectation. If we had, our most successful theory,
quantum mechanics, which describes a reality governed by wholly peculiar
waves of probability, would be buried in the trash bin of physics. As
Nobel laureate Steven Weinberg has said, the universe doesn’t care about
what makes theoretical physicists happy.
This spring,
after nearly two years of upgrades, the Large Hadron Collider will
crackle back to life, smashing protons together with almost twice the
energy achieved in its previous runs. Sifting through the debris with
the most complex detectors ever built, researchers will be looking for
evidence of anything that doesn’t fit within the battle-tested “Standard
Model of particle physics,” whose final prediction, the Higgs boson,
was confirmed just before the machine went on hiatus. While it is likely
that the revamped machine is still far too weak to see strings
themselves, it could provide clues pointing in the direction of string
theory.
Many researchers have pinned their hopes on finding a
new class of so-called “supersymmetric” particles that emerge from
string theory’s highly ordered mathematical equations. Other collider
signals could show hints of extra-spatial dimensions, or even evidence
of microscopic black holes, a possibility that arises from string
theory’s exotic treatment of gravity on tiny distance scales.
While none of these predictions can properly be called
a smoking gun—various non-stringy theories have incorporated them too—a
positive identification would be on par with the discovery of the Higgs
particle, and would, to put it mildly, set the world of physics on
fire. The scales would tilt toward string theory.
But what happens in the event—likely, according to some—that the collider yields no remotely stringy signatures?
Experimental evidence is the final
arbiter of right and wrong, but a theory’s value is also assessed by the
depth of influence it has on allied fields. By this measure, string
theory is off the charts. Decades of analysis filling thousands of
articles have had a dramatic impact on a broad swath of research cutting
across physics and mathematics. Take black holes, for example. String
theory has resolved a vexing puzzle by identifying the microscopic
carriers of their internal disorder, a feature discovered in the 1970s
by Stephen Hawking.
Looking back, I’m gratified at how far we’ve come but
disappointed that a connection to experiment continues to elude us.
While my own research has migrated from highly mathematical forays into
extra-dimensional arcana to more applied studies of string theory’s
cosmological insights, I now hold only modest hope that the theory will
confront data during my lifetime.
Even so, string theory’s pull remains strong. Its
ability to seamlessly meld general relativity and quantum mechanics
remains a primary achievement, but the allure goes deeper still. Within
its majestic mathematical structure, a diligent researcher would find
all of the best ideas physicists have carefully developed over the past
few hundred years. It’s hard to believe such depth of insight is
accidental.
I like to think that Einstein would
look at string theory’s journey and smile, enjoying the theory’s
remarkable geometrical features while feeling kinship with fellow
travelers on the long and winding road toward unification. All the same,
science is powerfully self-correcting. Should decades drift by without
experimental support, I imagine that string theory will be absorbed by
other areas of science and mathematics, and slowly shed a unique
identity. In the interim, vigorous research and a large dose of patience
are surely warranted. If experimental confirmation of string theory is
in the offing, future generations will look back on our era as
transformative, a time when science had the fortitude to nurture a
remarkable and challenging theory, resulting in one of the most profound
steps toward understanding reality.
Use of a
light-emitting electronic device (LE-eBook) in the hours before bedtime
can adversely impact overall health, alertness, and the circadian clock
which synchronizes the daily rhythm of sleep to external environmental
time cues, according to researchers at Brigham and Women's Hospital (BWH), who compared the biological effects of reading an LE-eBook with those of reading a printed book. The findings of the study were published in the Proceedings of the National Academy of Sciences on December 22, 2014.
"We found the body's natural circadian rhythms were interrupted by
the short-wavelength enriched light, otherwise known as blue light, from
these electronic devices," said Anne-Marie Chang, PhD, corresponding
author, and associate neuroscientist in BWH's Division of Sleep and
Circadian Disorders. "Participants reading an LE-eBook took longer to
fall asleep and had reduced evening sleepiness, reduced melatonin
secretion, later timing of their circadian clock and reduced
next-morning alertness compared with when reading a printed book."
Previous research has shown that blue light suppresses melatonin, impacts the circadian clock and increases alertness, but little was known about the effects of this popular technology on sleep. The use of light-emitting devices immediately before bedtime is a concern because of the extremely powerful effect that light has on the body's natural sleep/wake pattern, and may thereby play a role in perpetuating sleep deficiency.
During the two-week inpatient study, twelve participants read
LE-e-Books on an iPad for four hours before bedtime each night for five
consecutive nights. This was repeated with printed books. The order was
randomized with some reading the iPad first and others reading the
printed book first. Participants reading on the iPad took longer to fall
asleep, were less sleepy in the evening, and spent less time in REM
sleep. The iPad readers had reduced secretion of melatonin, a hormone
which normally rises in the evening and plays a role in inducing
sleepiness. Additionally, iPad readers had a delayed circadian rhythm,
indicated by melatonin levels, of more than an hour. Participants who
read from the iPad were less sleepy before bedtime, but sleepier and
less alert the following morning after eight hours of sleep. Although iPads were used in this study, the BWH researchers also measured the light emitted by other eReaders, laptops, cell phones, LED monitors and other electronic devices, all of which emit blue light.
"In the past 50 years, there has been a decline in average sleep
duration and quality," stated Charles Czeisler, PhD, MD, FRCP, chief,
BWH Division of Sleep and Circadian Disorders. "Since more people are
choosing electronic devices for reading, communication and
entertainment, particularly children and adolescents who already
experience significant sleep loss, epidemiological research evaluating
the long-term consequences of these devices on health and safety is
urgently needed."
Researchers emphasize the importance of these findings, given recent
evidence linking chronic suppression of melatonin secretion by nocturnal
light exposure with the increased risk of breast cancer, colorectal
cancer and prostate cancer.
Anne-Marie Chang, Daniel Aeschbach, Jeanne F. Duffy, and Charles A. Czeisler. Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness. PNAS, December 22, 2014. DOI: 10.1073/pnas.1418490112
The same autopod-building genetic switches from gar are able to drive gene activity (purple) in the digits of transgenic mice, an activity that was absent in the other fish groups studied
Paleontologists
have documented the evolutionary adaptations necessary for ancient
lobe-finned fish to transform pectoral fins used underwater into strong,
bony structures, such as those of Tiktaalik roseae. This enabled these
emerging tetrapods, animals with limbs, to crawl in shallow water or on
land. But evolutionary biologists have wondered why the modern structure
called the autopod--comprising wrists and fingers or ankles and
toes--has no obvious morphological counterpart in the fins of living
fishes.
In the Dec. 22, 2014, issue of the Proceedings of the National Academy of Sciences,
researchers argue that previous efforts to connect fins and fingers fell
short because they focused on the wrong fish. Instead, they found the
rudimentary genetic machinery for mammalian autopod assembly in a
non-model fish, the spotted gar, whose genome was recently sequenced. "Fossils show that the wrist and digits clearly have an aquatic
origin," said Neil Shubin, PhD, the Robert R. Bensley Professor of
organismal biology and anatomy at the University of Chicago and a leader
of the team that discovered Tiktaalik in 2004. "But fins and limbs have
different purposes. They have evolved in different directions since
they diverged. We wanted to explore, and better understand, their
connections by adding genetic and molecular data to what we already know
from the fossil record." Initial attempts to confirm the link based on shape comparisons of
fin and limb bones were unsuccessful. The autopod differs from most
fins. The wrist is composed of a series of small nodular bones, followed
by longer thin bones that make up the digits. The bones of living fish
fins look much different, with a set of longer bones ending in small
circular bones called radials. The primary genes that shape the bones, known as the HoxD and HoxA
clusters, also differ. The researchers first tested the ability of
genetic "switches" that control HoxD and HoxA genes from teleosts--bony,
ray-finned fish--to shape the limbs of developing transgenic mice. The
fish control switches, however, did not trigger any activity in the
autopod. Teleost fish--a vast group that includes almost all of the world's
important sport and commercial fish--are widely studied. But the
researchers began to realize they were not the ideal comparison for
studies of how ancient genes were regulated. When they searched for
wrist and digit-building genetic switches, they found "a lack of
sequence conservation" in teleost species. They traced the problem to a radical change in the genetics of
teleost fish. More than 300 million years ago, after the fish-like
creatures that would become tetrapods split off from other bony fish, a
common ancestor of the teleost lineage went through a whole-genome
duplication (WGD)--a phenomenon that has occurred multiple times in
evolution. By doubling the entire genetic repertoire of teleost fish, this WGD
provided them with enormous diversification potential. This may have
helped teleosts to adapt, over time, to a variety of environments
worldwide. In the process, "the genetic switches that control
autopod-building genes were able to drift and shuffle, allowing them to
change some of their function, as well as making them harder to identify
in comparisons to other animals, such as mice," said Andrew Gehrke, a
graduate student in the Shubin lab and lead author of the study. Not all bony fishes went through the whole genome duplication,
however. The spotted gar, a primitive freshwater fish native to North
America, split off from teleost fishes before the WGD. When the research team compared Hox gene switches from the spotted
gar with tetrapods, they found "an unprecedented and previously
undescribed level of deep conservation of the vertebrate autopod
regulatory apparatus." This suggests, they note, a high degree of
similarity between "distal radials of bony fish and the autopod of
tetrapods." They tested this by inserting gar gene switches related to fin
development into developing mice. This evoked patterns of activity that
were "nearly indistinguishable," the authors note, from those driven by
the mouse genome.
“Overall,” the researchers conclude, “our results provide regulatory
support for an ancient origin of the 'late' phase of Hox expression that
is responsible for building the autopod.”
This study was supported by the Brinson Foundation; the National
Science Foundation; the Brazilian National Council for Scientific and
Technological Development grants; the National Institutes of Health; the
Volkswagen Foundation, Germany; the Alexander von Humboldt-Foundation,
the Spanish and Andalusian governments; and Proyecto de Excelencia.
Additional authors include Mayuri Chandran and Tetsuya Nakamura from
the University of Chicago; Igor Schneider from the Instituto de Ciências
Biológicas, Universidade Federal do Pará, Belém, Brazil; Elisa de la
Calle-Mustienes, Juan J. Tena, Carlos Gomez-Marin and José Luis
Gómez-Skarmeta from the Centro Andaluz de Biología del Desarrollo,
Sevilla, Spain; and Ingo Braasch and John H. Postlethwait from the
Institute of Neuroscience, University of Oregon.
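A note for readers curious what “sequence conservation” means in practice: a regulatory “switch” counts as conserved when its DNA sequence has stayed recognisably similar across species. The minimal Python sketch below is only an illustration of that idea, using invented placeholder sequences and a bare percent-identity score of our own devising; the study itself relied on genome-wide alignments and transgenic reporter assays, not this calculation.

    # Toy illustration of "sequence conservation" between two species.
    # The sequences are invented placeholders, not real gar or mouse enhancer DNA.

    def percent_identity(seq_a: str, seq_b: str) -> float:
        """Share of positions (over the shorter sequence) carrying the same base."""
        length = min(len(seq_a), len(seq_b))
        if length == 0:
            return 0.0
        matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
        return 100.0 * matches / length

    # Hypothetical enhancer ("gene switch") fragments for two species.
    gar_switch   = "ATGCGTACCATTGGACGTTAACGTGCAT"
    mouse_switch = "ATGCGTACCATAGGACGTTAACGAGCAT"

    print(f"Percent identity: {percent_identity(gar_switch, mouse_switch):.1f}%")

A high score across many such switches is the sort of signal the authors describe as deep conservation; after the teleost whole-genome duplication, the equivalent switches drifted and shuffled enough that such scores drop and the elements become hard to recognise at all.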
Andrew R. Gehrke, Igor Schneider, Elisa de la Calle-Mustienes, Juan J. Tena, Carlos Gomez-Marin, Mayuri Chandran, Tetsuya Nakamura, Ingo Braasch, John H. Postlethwait, José Luis Gómez-Skarmeta, and Neil H. Shubin. Deep conservation of wrist and digit enhancers in fish. PNAS, December 22, 2014. DOI: 10.1073/pnas.1420208112
A bold new theory suggests many worlds have existed, side-by-side, since the beginning of time. Cathal O'Connell investigates.
Parallel worlds are standard fare for sci-fi, but
the idea originates from quantum physics where the seemingly bizarre
notion of alternative universes is taken very seriously.
Now an Australian team has taken the weirdness a step further. Their new theory, published last month in Physical Review X,
is dubbed “Many interacting worlds”. It not only proposes that stable
parallel worlds exist but suggests it might be possible to test for
their existence.
“It’s kind of a radical idea, but it looks very interesting and
promising,” says Bill Poirier, a quantum dynamicist at Texas Tech
University, who proposed an early version of the theory in 2010.
Parallel worlds were first evoked in the 1950s to explain quantum
effects such as how a particle can appear to be in two places at once.
Alas, Hugh Everett, the American physicist who proposed them, suffered
such ridicule that he quit science. In his “many worlds” interpretation of
quantum theory, every quantum measurement causes the universe to “branch”
into a bunch of new universes. It was as if, at the flip of a quantum
coin, two universes would sprout into existence – one for heads, and one
for tails.
The new theory, bravely proposed by Howard Wiseman, Director of the
Centre for Quantum Dynamics at Griffith University, is different. No new
universes are ever created. Instead many worlds have existed,
side-by-side, since the beginning of time. Some follow the best sci-fi
plots, for instance “worlds where the dinosaur-killing asteroid never
hit,” says Wiseman. Others are almost identical to our own, inhabited by
versions of ourselves on alternative Earths, perhaps differing only in
the shape of a cirrus cloud above parallel-Melbourne on a bright spring
parallel-morning. It’s the interaction of these nearby worlds that
gives rise to quantum effects, the theory says.
So what’s led Wiseman to stir yet another seriously weird theory into the mix?
Physicists have been trying to come to terms with the experimental
findings in the quantum world for a century. “It’s still notoriously
unfathomable,” he says. “This motivated us to look for a better
description of what is really going on.”
One of the unfathomable experiments Wiseman is referring to is the
“two-slit” experiment for electrons. It’s a version of the famous
two-slit experiment first performed by Thomas Young in the 1800s with
light. He found that a light beam passing through two closely spaced
slits in a screen did not, as one might expect, produce an image of two
lines on the wall behind. Instead it formed an interference pattern
rather like the overlapping wakes of two boats, leading Young to deduce
that light is a type of wave.
In the early 20th century, Niels Bohr and others began to find
evidence that fundamental particles, like electrons, also had wave-like
properties. The proof came in 1961 when the two-slit experiment was
performed with electrons and a wave-like interference pattern, just like
that of light, was found.
It was strange to find that particles could act like waves, but then
things got stranger. Responding to a suggestion by Richard Feynman, in
1974 Giulio Pozzi and colleagues at the University of Bologna showed you
still get an interference pattern when you fire one electron at a time –
as if each individual electron passes through both slits and interferes
with itself. Feynman said that all of quantum mechanics could be
gleaned from carefully thinking about this one experiment. Physicists
have been doing that for decades, yet are still tormented by the lack of an
intuitive understanding. “The question,” says Wiseman, “is how do you
explain what’s going on?”
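The puzzle can be written down in one line of standard textbook quantum mechanics (the notation is ours, not the article’s): if ψ1 and ψ2 are the amplitudes for an electron to reach a point x on the screen via slit 1 or slit 2, the probability of detecting it there is

    \[
      \psi(x) = \psi_1(x) + \psi_2(x), \qquad
      P(x) = \lvert\psi(x)\rvert^2
           = \lvert\psi_1(x)\rvert^2 + \lvert\psi_2(x)\rvert^2
           + 2\,\operatorname{Re}\!\left[\psi_1^{*}(x)\,\psi_2(x)\right].
    \]

The last term is the interference: it can be negative as well as positive, carving out bright and dark fringes that two independent lumps of probability could never produce. Any alternative picture, including many interacting worlds, has to reproduce the effects of that term.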
Niels Bohr’s quantum theory managed to predict the behaviours using
some strange maths known as the “wave function”, which pictures the electron
neither as a particle nor a wave, but as a fuzzy “cloud of
probability”. But what, exactly, that means in physical terms has never
been well defined. “There has been a lot of debate and controversy for
100 years,” says Poirier.
Einstein abhorred how quantum theory turned the electron from a
particle into a cloud. He, like Newton, imagined a universe that ran
like clockwork. “God does not play dice,” said Einstein. “I think
Einstein should stop telling God what to do,” responded Bohr.
The new theory proposed by the Griffith team is a lot closer to
Einstein’s vision than Bohr’s. Gone are the probability clouds along
with the other conundrums of wave-particle duality. In the new picture
the electrons being fired at the slits are particles after all – tiny
little spheres just as Newton would have imagined them. In our world the
electron might pass through the bottom slit. But in a parallel world
the electron passes through the top slit. As the two ghostly twins
travel towards the detectors (one in our world, one in a parallel
world), their paths could overlap. But according to the theory, a newly
proposed repulsive force stops the electrons coming too close to one
another. In effect, the electron in our world “collides” with its
ghostly twin, like billiard balls knocking together as they roll across a
pool table.
According to Wiseman and his team, this interaction between parallel
worlds leads to just the type of interference patterns observed –
implying electrons are not waves after all. They have supported their
theory by running computer simulations of the two-slit experiment using
up to 41 interacting worlds. “It certainly captured the essential
features of peaks and troughs in the right places,” says Wiseman.
Though he says it is still in its early stages, Poirier is impressed with
the theory and predicts it will generate huge interest in the physics
community.
By restricting the worlds to be discrete or finite, Poirier adds, the
Griffith team has developed equations that are much easier for a
computer to solve. Quantum mechanical calculations that would usually
take minutes were completed “in a matter of seconds,” says Michael Hall,
lead author of the study. Hall hopes that eventually this will lead to
applications in predicting real-world chemical reactions.
And if the number of worlds is finite – as modelled in the team’s
computer simulations – rather than infinite, then the predictions made
by the new theory will deviate from standard quantum theory. Though the
deviations are likely to be only slight, they could be testable in the
lab using experiments similar to the double slit. Tantalisingly, as the
size of the deviations depends on the number of parallel worlds, these
experiments could provide an effective measure of how many worlds are
out there.
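For a sense of the curve those simulations have to reproduce, the short Python sketch below computes the standard two-slit intensity pattern from ordinary wave optics. It is emphatically not the Griffith team’s code (their method evolves classical particles across many interacting worlds), and the wavelength, slit separation, slit width and screen distance are arbitrary illustrative values chosen by us.

    # Standard two-slit (Fraunhofer) intensity pattern: the "peaks and troughs"
    # that any account of the experiment, orthodox or many-worlds, must reproduce.
    # All numbers are illustrative choices, not values from the study.
    import numpy as np

    wavelength  = 50e-12    # an illustrative electron de Broglie wavelength (m)
    slit_sep    = 1.0e-6    # distance between slit centres (m)
    slit_width  = 0.2e-6    # width of each slit (m)
    screen_dist = 1.0       # slits-to-screen distance (m)

    x = np.linspace(-200e-6, 200e-6, 2001)   # positions across the screen (m)
    theta = x / screen_dist                  # small-angle approximation

    two_slit  = np.cos(np.pi * slit_sep * theta / wavelength) ** 2
    envelope  = np.sinc(slit_width * theta / wavelength) ** 2   # np.sinc(u) = sin(pi*u)/(pi*u)
    intensity = two_slit * envelope          # relative intensity, central peak = 1

    # Sample the pattern at the predicted fringe positions,
    # x = m * wavelength * screen_dist / slit_sep (integer m bright, half-integer dark).
    for m, label in [(0.0, "central bright"), (0.5, "first dark"), (1.0, "first bright")]:
        x_m = m * wavelength * screen_dist / slit_sep
        idx = int(np.argmin(np.abs(x - x_m)))
        print(f"{label:14s} x = {x_m * 1e6:5.1f} um   relative intensity = {intensity[idx]:.3f}")

In the many-interacting-worlds picture, the same peaks and troughs are supposed to emerge from the statistics of particle trajectories nudged by the repulsive force between worlds, which is what the team’s simulation of up to 41 worlds set out to check.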