Wednesday 30 September 2015

NASA's Search for Extraterrestrial Life in the Universe --"The Next Ten Years"

SETI (Search for Extraterrestrial Intelligence) experiments seek to determine the distribution of advanced life in the universe by detecting the presence of technology, usually by searching for electromagnetic emission from communication technology, but also by searching for evidence of large-scale energy usage or interstellar propulsion. Technology is thus used as a proxy for intelligence - if an advanced technology exists, so too does the advanced life that created it.

Although natural astrophysical sources produce a diverse array of electromagnetic, gravitational and high-energy particle emission, there are particular types of emission that, as far as we know, could only be generated by an advanced technology. For example, technology constructed by human beings has been producing radio emission for more than 100 years that would be readily detectable at dozens of light years using receiving technology only moderately more advanced than our own. Some emission, including that produced by the planetary radars at Arecibo Observatory and the NASA Deep Space Network, would be detectable across our galaxy.

Technologies far more advanced than our own could potentially produce even more dramatic evidence of their presence. Large stellar-scale structures could cause apparent modulation in starlight as they orbited their host star, and massive energy usage by a super-advanced civilization might be revealed by its thermodynamic signature, even from millions of light-years away.

Although we know of only one example of life anywhere in the universe, we have reasons to be optimistic about the possibility of life beyond Earth. Earth-like planets, water and complex chemistry have now been found in abundance throughout our galaxy. Everything that we believe was necessary for life to begin on Earth is now known to be ubiquitous throughout our galaxy and beyond. Knowing that extraterrestrial life could exist, the race is on to discover whether or not it, in fact, does exist.

Yet more compelling is the possibility that extraterrestrial life may have followed a developmental process similar to that of life on Earth, giving rise to a life form possessing intelligence and a technological capability similar to, or perhaps far exceeding, our own. Conducting direct searches for advanced extraterrestrial life is the sole means of determining the prevalence of such life in the universe, and answering one of our most fundamental questions: Are we alone?

Radio and Optical SETI

The motivation for radio searches for extraterrestrial intelligence can be summarized as follows:

1. Coherent radio emission is commonly produced by advanced technology (judging by Earth's technological development);
2. electromagnetic radiation can convey information at the maximum velocity currently known to be possible;
3. radio photons are energetically cheap to produce;
4. certain types of coherent radio emission are easily distinguished from astrophysical background sources;
5. these emissions can transit vast regions of interstellar space relatively unaffected by gas, plasma and dust.

These arguments are unaffected by varying assumptions about the motivation of the transmitting intelligence, e.g. whether the signal transmitted is intentional or unintentional, and apply roughly equally to a variety of potential signal types or modulation schemes. Modern radio SETI experiments have been ongoing for the last 55 years, but for the most part they have searched only a small fraction of the radio spectrum accessible from the surface of the Earth and have probed only a few nearby stars at high sensitivity.

Large national radio telescopes in the United States, such as the Green Bank Telescope in West Virginia and the Arecibo Observatory in Puerto Rico, are superb facilities for a wide range of astronomy, including pulsar studies that could lead to the detection of low-frequency gravitational radiation, mapping the atomic and molecular content of nearby galaxies, and probing the earliest epochs of the universe. In addition, these facilities are among the world's best at searching for the faint whispers of distant technologies. Figure 1 illustrates the natural low-noise portion of the radio spectrum between 1-10 GHz, and the approximate locations in the radio spectrum where several different types of terrestrial transmitters produce emission. The Green Bank Telescope and Arecibo Observatory can conduct observations across this entire range of the radio spectrum.

Earlier this year, a group of astronomy, engineering and physics students and staff from our team at UC Berkeley, working with our colleagues from the National Radio Astronomy Observatory, installed a new instrument at the Green Bank Telescope that enables us to conduct SETI observations in parallel with other astronomers, a technique we call "piggy-back" observing. This project was funded by a combination of support from the NASA Astrobiology Program, the National Science Foundation and the John Templeton Foundation, but NASA no longer provides funding for SETI experiments.

In addition to the Green Bank Telescope, other radio SETI programs are underway in the United States at Arecibo Observatory and the private Allen Telescope Array in Northern California. Many international radio telescopes are also currently being used for radio SETI searches, including the Low Frequency Array (LOFAR) in Europe, the Murchison Widefield Array (MWA) in Australia and the Lovell Telescope at Jodrell Bank Observatory in the United Kingdom.

While the dominant paradigm in SETI research involves searching the radio portion of the electromagnetic spectrum, other wavelengths possess merit as well. Just as we consider radio emission from human technology as an example of a potentially detectable signal from extraterrestrial intelligences, we can conceive of ways in which human technologies operating at other wavelengths could produce signatures detectable at interstellar distances. For example, laser technology already developed on Earth could be used to produce a signal that could outshine the sun by many orders of magnitude at a distance of more than 1,000 light years.

These optical and infrared SETI experiments come in two varieties: searches for pulses of light of short duration, and searches for light emission at a single wavelength using a spectrometer. Relative to radio transmissions, optical signals could potentially convey much more information content and are more easily focused for directed signaling or communication.

At one of the world's premier optical telescopes, the Keck Observatory on Mauna Kea, Hawaii, students and faculty at UC Berkeley are pursuing a search for continuous artificial lasers, using optical spectra that are collected for the primary purpose of searching for and characterizing extrasolar planets. In an additional effort, a group led by students and faculty at UC San Diego are using the Lick Observatory, near San Jose, California, to conduct a search for pulsed lasers in the near-infrared part of the electromagnetic spectrum, wavelengths just a hair longer than optical light - the first SETI experiment ever to operate at this wavelength. Other optical SETI experiments are currently operating or under development at Harvard University and several research institutes in France and Italy.

Ensuring that facilities like the Green Bank Telescope, Arecibo and Keck Observatory continue to exist as world class astronomical observatories is critical to their continued availability for SETI experiments.

Data Mining SETI

The era of the "virtual observatory" has arrived. Many astronomers no longer need to travel thousands of miles to a telescope and command an instrument to perform observations.

Hundreds of terabytes of astronomical data, collected with billion dollar telescopes, are now freely available. In many cases, these data are made available to the entire astronomical community the instant they are collected. These data undoubtedly contain many astronomical discoveries that have yet to be uncovered. Perhaps, if looked at very closely in a novel way, some of these data contain evidence of an extraterrestrial intelligence.

The astronomical literature is rife with speculation that very advanced intelligences may produce signatures detectable by traditional astronomical observations. Massive "Dyson" structures, so named because they were first proposed by physicist Freeman Dyson, might be built to harvest the energy of hundreds of Suns and could be detected in the latest generation of infrared sky surveys. Figure 4 shows three hypothetical configurations of these Dyson structures orbiting a parent star. A very advanced civilization using a naturally bright astronomical source as a pseudo-artificial beacon, such as modulation of a naturally expanding and contracting star or interference with a pulsar, could be discovered through careful analysis of variable star or pulsar observations.

Researchers at Penn State University, led by Prof. Jason Wright, have recently undertaken a significant effort to search for evidence of massive energy usage by very advanced civilizations using data from NASA's WISE space telescope. Although so far this work has produced only negative results, it has placed significant constraints on the presence of extremely advanced galactic-scale civilizations. Additional work led by Dr. Erik Zackrisson in Sweden and Prof. Michael Garrett in the Netherlands has come to similar conclusions regarding the rarity of super-advanced civilizations, but this collected work represents a promising and growing new area of SETI research.
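To make the connection between waste heat and infrared surveys concrete, here is a minimal back-of-the-envelope sketch, not drawn from the article: the shell radii and the use of solar luminosity are illustrative assumptions. It estimates the equilibrium temperature of a Dyson-like shell and where its thermal emission would peak.

```python
# Illustrative sketch: equilibrium temperature of a hypothetical Dyson-like
# shell around a Sun-like star, and the wavelength at which its waste heat
# would peak. Shell radii and the assumed solar luminosity are illustrative,
# not taken from the article.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m
WIEN_B = 2.898e-3  # Wien displacement constant, m K

def shell_temperature(radius_m, luminosity_w=L_SUN):
    """Temperature of a shell that absorbs the star's entire output and
    re-radiates it as a blackbody from its outer surface."""
    return (luminosity_w / (4 * math.pi * radius_m ** 2 * SIGMA)) ** 0.25

for r_au in (1.0, 2.0):
    temp_k = shell_temperature(r_au * AU)
    peak_um = WIEN_B / temp_k * 1e6
    print(f"R = {r_au} AU: T ~ {temp_k:.0f} K, thermal peak ~ {peak_um:.0f} micron")
```

With these assumptions the shell sits at roughly 280-390 K and its emission peaks near 7-10 microns, which is why mid-infrared surveys such as WISE (with bands at 12 and 22 microns) are the natural place to look for such waste heat.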

The Next Ten Years

The next decade will undoubtedly be an incredibly exciting time for astrobiology. Data provided by missions like the Transiting Exoplanet Survey Satellite (TESS) and the James Webb Space Telescope (JWST) virtually guarantee dramatic new insights into exoplanet science, including identifying and characterizing some of the nearest exoplanets to the Earth. At the same time, we will continue to learn more about the development of life on Earth and the potential for life elsewhere in our own Solar System. If history is any guide, these discoveries will only heighten our imagination about the possibilities for advanced life elsewhere in the Universe. Two of the most exciting prospects for advances in SETI research are described below.

The Breakthrough Prize Foundation has recently announced the Breakthrough Listen and Breakthrough Message initiatives - two programs that will investigate the possibility of life beyond Earth and the relationship between humanity and life in the universe. Breakthrough Listen is a $100M, 10-year effort to conduct the most sensitive, comprehensive and intensive search for advanced intelligent life on other worlds ever performed.

Using the GBT and Parkes, Breakthrough Listen will conduct deep observations of 1,000,000 of the nearest stars to the Earth that will be at least 10 times more sensitive than any ever performed and will cover at least 5 times more of the radio spectrum. Breakthrough Listen will also conduct an unprecedented complete survey of the entire plane of the Milky Way Galaxy, as well as surveys of more than 100 other galaxies, including all galaxies in the Milky Way's Local Group. The sensitivity of the Green Bank Telescope, expressed as the luminosity (power) detectable as a function of transmitter distance, is shown in Figure 6. As shown, the Green Bank Telescope could detect an extraterrestrial radio transmitter with the same luminosity as our own most powerful radar (the Arecibo Planetary Radar), in 1 minute each, for more than 1 million nearby stars.
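As a rough illustration of that claim, the sketch below applies the standard radiometer equation to a 1-minute observation. The telescope and transmitter numbers (GBT system equivalent flux density, channel width, detection threshold, Arecibo radar EIRP) are assumed values for illustration, not figures taken from the text.

```python
# Rough sketch: how far away could a GBT-class telescope detect a narrowband
# transmitter as powerful as the Arecibo planetary radar in a 1-minute
# observation? All instrument and transmitter numbers below are assumptions.
import math

JY = 1e-26            # 1 jansky, W m^-2 Hz^-1
LY = 9.461e15         # light year, m

sefd_jy = 10.0        # assumed system equivalent flux density of the GBT, Jy
chan_bw_hz = 1.0      # assumed narrowband channel width, Hz
t_obs_s = 60.0        # 1-minute observation, as quoted in the text
n_pol = 2             # two polarizations
snr = 10.0            # assumed detection threshold
eirp_w = 2e13         # assumed EIRP of the Arecibo planetary radar, W

# Radiometer equation: minimum detectable flux density in one channel.
s_min = snr * sefd_jy * JY / math.sqrt(n_pol * chan_bw_hz * t_obs_s)

# A transmitter of EIRP P at distance d gives flux density P / (4 pi d^2 * bw).
d_max_m = math.sqrt(eirp_w / (4 * math.pi * s_min * chan_bw_hz))
print(f"Maximum detection distance ~ {d_max_m / LY:.0f} light years")
```

With these assumed numbers the answer comes out to a few hundred light years, a volume containing on the order of a million stars, consistent with the scale of the survey described above.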

Using the Automated Planet Finder (APF), a robotic optical telescope at Lick Observatory equipped with its "Levy Spectrometer," Breakthrough Listen will observe 1,000 nearby stars and 100 galaxies, searching for artificial laser emission. These observations will be performed over wavelengths from 374-950 nm, spanning the near ultraviolet, the entire visible, and the near-infrared portions of the electromagnetic spectrum. Breakthrough Listen will detect lasers with any power above 100 watts from the nearest stars and above 1,000 gigawatts from the nearest galaxies.

The Square Kilometre Array (SKA) project is an international partnership that seeks to construct the world's largest radio telescope operating at meter and centimeter wavelengths, eventually achieving a full square kilometer (1 km²) of collecting area for some components.

Such a telescope, as envisioned, would be the most powerful radio telescope in the world, surpassing all current facilities by an order of magnitude or more in sheer sensitivity. Phase 1 of the SKA is expected to be built in southern Africa and Western Australia, with the mid-frequency (centimeter-wave, SKA1-mid) and low-frequency (SKA1-low, meter-wave) components split between the two sites respectively. The SKA headquarters is located at the Jodrell Bank Observatory near Manchester, UK. There are 10 SKA member countries, and approximately 100 organizations from 20 countries are participating in its development. The United States is not currently an SKA member country.

The SKA will offer revolutionary new observational capabilities for SETI, allowing sensitive targeted SETI observations to be performed alongside other astronomical research. Rather than simply "piggy-backing," SETI observers will be able to independently point the telescope at targets of interest using high-speed digital beamforming. SKA construction is expected to be completed in two phases. SKA Phase 1 will include a low-frequency (SKA1-low, 50-350 MHz) array component made up of 130,000 dipole antennas sited in Western Australia (Figure 7a) and a 200-dish mid-frequency (SKA1-mid, 350-14,000 MHz) component sited in southern Africa (Figure 7b). SKA Phase 2 will complete telescope construction at both sites and could include augmentation with new mid-frequency aperture array technology that could dramatically expand the telescope's primary field of view.

The SKA may be the first telescope capable of detecting truly Earth-like leakage radiation from nearby stars, giving us our best chance of detecting another civilization with an artificial radio signature similar to our own.

The XMM-Newton X-ray image at the top of the page shows the Coma Cluster, a large cluster of galaxies that contains over 1,000 identified galaxies. Along with the Leo Cluster (Abell 1367), it is one of the two major clusters comprising the Coma Supercluster. ESA's X-ray space observatory XMM-Newton is the biggest scientific satellite ever built in Europe; its telescope mirrors are among the most powerful ever developed, and its sensitive cameras can see much more than any previous X-ray satellite.


Water on the Moon --"The Asteroid Delivery System"


At the beginning of the space age, during the days of the Apollo program, scientists believed the moon to be completely dry: with no atmosphere and under the influence of solar radiation, it was thought that any volatile substances would long since have evaporated into space. However, in the 1990s, scientists obtained data from the Lunar Prospector probe that shook their confidence: the neutron flux from the lunar surface indicated a higher fraction of hydrogen in the near-surface soil of some regions of the moon, which could be interpreted as a sign of the presence of water.

Today, water reserves found on the moon are the result of asteroids acting as "delivery vehicles" and not of falling comets as was previously thought. Using computer simulation, scientists from MIPT and the RAS Geosphere Dynamics Institute have discovered that a large asteroid can deliver more water to the lunar surface than the cumulative fall of comets over a billion year period. Their research is discussed in an article recently published in the journal Planetary and Space Science.

In order to explain how water could be kept on the moon's surface, scientists formulated a theory known as "cold traps." The axis of the moon's rotation is nearly vertical, which is why in the polar regions there are craters with floors that are never exposed to sunlight. When comets consisting mostly of water ice fall, evaporated water can gravitate into those "traps" and remain there indefinitely, as solar rays do not evaporate it.

In recent years, lunar missions (the Indian Chandrayaan-1 probe, the American LRO, and data from the Cassini and Deep Impact probes) have brought scientists two pieces of new information. The first is that there are indeed considerable quantities of water and hydroxyl groups in the near-surface soil of the moon. The LCROSS experiment, in which a probe was purposely crashed onto the moon, releasing a cloud of gas and dust that was later studied with a spectrometer, directly confirmed the existence of water and other volatile substances. The second piece of new information came when the Russian LEND instrument mounted on board LRO generated a map of water distribution on the moon's surface.

But this second piece only partly confirmed the theory: the map of "cold traps" did not correspond to the map of water deposits. The scientists had to refine the theory, and the idea of "lunar congelation" was proposed: water ice could "survive" in regions exposed to sunlight if it lies under a blanket of soil. It was also suggested that a substantial part of the "water" seen by the probes is implanted solar wind: hydrogen atoms from the solar wind react with oxygen atoms to form an unstable "dew" of water molecules and hydroxyl groups. Scientists also left open the possibility that water could exist in a bound state, i.e. in hydrated minerals.

There was still the matter of determining how water appeared on the moon and how much of it there could be. At the same time, another issue may prove to be of practical importance in the coming years: if manned stations are to be constructed on the moon in the near future, we should know what kind of resources we can count on, preferably before construction begins.

Vladimir Svettsov and Valery Shuvalov, who have been researching the fall of comets and asteroids, including computer simulations of the Tunguska catastrophe and the Chelyabinsk meteorite fall, set out to identify the most probable mechanism of water delivery to the moon and to estimate the approximate volume of this "supply." For this they used SOVA, an algorithm of their own design, to model the fall of cosmic bodies onto the surface of the moon, each body with its own velocity and angle of fall. In particular, the model output showed the distribution of maximum temperatures reached in the falling body's mass as it heated up during impact, as well as its dynamics.

The scientists first decided to check whether comets are able to fulfill the role of main "water suppliers." The typical velocity of an ice comet ranges from 20 to 50 km per second. The estimates suggested that such a high impact velocity causes 95 to 99.9 percent of the water to evaporate irretrievably into space. There is a family of short-period comets whose velocity of fall is much lower - 8-10 km per second. Such short-period comets account for about 1.5 percent of lunar craters. Nevertheless, the simulation has shown that even when these short-period comets fall, almost all the water evaporates and less than 1 percent of it remains at the impact point.

"We came to the conclusion that only a very small amount of water that arrives with a comet stays on the moon, and from this decided to explore the possibility of an asteroid origin of lunar water," Shuvalov says.

The scientists then took a closer look at asteroids and found that they consist of the initially non-differentiated building materials of the solar system and contain a rather considerable proportion of water. In particular, carbonaceous chondrites, the most common type of asteroid and meteorite, can contain up to 10 percent water.

However, water in chondrites is effectively protected: it is in a chemically bound state, "locked" in the crystal lattice of minerals. Water starts to be released only when the material is heated to 300-1,200 degrees centigrade, depending on the type of hydrous mineral. This means the water has the potential to remain in the crater together with the asteroid.

The simulation also revealed that when the velocity of fall is 14 km per second and the angle of fall is 45 degrees, about half of the asteroid's mass never even reaches the melting temperature and remains in a solid state. One-third of all asteroids that fall on the moon have a velocity of less than 14 km per second just before impact. When this happens, the major part of the fallen body remains in the crater: 30-40 percent is left after an oblique impact, and 60-70 percent after a vertical one.

"We've concluded that the fall of asteroids containing water could generate "deposits" of chemically bounded water inside some lunar craters," Shuvalov says. "The fall of one two-kilometer size asteroid with a rather high proportion of hydrated minerals could bring to the moon more water than all of the comets that have fallen over billions of years," he adds.

Calculations reveal that around 2 to 4.5 percent of lunar craters could contain considerable supplies of water in the form of hydrated minerals. They are stable enough to contain water even in areas exposed to the Sun.

"That is very important because the polar cold traps are not very convenient areas for the construction of lunar bases. There is a small amount of solar energy and it is difficult to organize radio communication and, lastly, there are dramatically low temperatures. The possibility of obtaining lunar water in regions exposed to the Sun could make the issue of satellite exploration much easier," concluded the scientist.

Moscow Institute of Physics and Technology


Tuesday 29 September 2015

Beyond the Higgs: Nature's Top Quark Hints the "Universe Could Suddenly Collapse"

In the post-Big Bang world, nature’s top quark — a key component of matter — is a highly sensitive probe that physicists use to evaluate competing theories about quantum interactions. Physicists at Southern Methodist University, Dallas, have achieved a new precise measurement of a key subatomic particle, opening the door to better understanding some of the deepest mysteries of our universe.

The researchers calculated the new measurement for a critical characteristic — mass — of the top quark. Quarks make up the protons and neutrons that comprise almost all visible matter. Physicists have known the top quark’s mass was large, but encountered great difficulty trying to clearly determine it.

The newly calculated measurement of the top quark will help guide physicists in formulating new theories, said Robert Kehoe, a professor in SMU’s Department of Physics. Kehoe leads the SMU group that performed the measurement.

Top quark’s mass matters ultimately because the particle is a highly sensitive probe and key tool to evaluate competing theories about the nature of matter and the fate of the universe. Physicists for two decades have worked to improve measurement of the top quark’s mass and narrow its value.

“Top” bears on the newest fundamental particle, the Higgs boson. The new value from SMU confirms the validity of recent measurements by other physicists, said Kehoe. But it also adds growing uncertainty about aspects of physics’ Standard Model.

The Standard Model is the collection of theories physicists have derived — and continually revise — to explain the universe and how the tiniest building blocks of our universe interact with one another. Problems with the Standard Model remain to be solved. For example, gravity has not yet been successfully integrated into the framework.

The Standard Model holds that the top quark — known familiarly as “top” — is central in two of the four fundamental forces in our universe — the electroweak force, by which particles gain mass, and the strong force, which governs how quarks interact. The electroweak force governs common phenomena like light, electricity and magnetism. The strong force governs atomic nuclei and their structure, in addition to the particles that quarks make up, such as the protons and neutrons in the nucleus.

The top plays a role with the newest fundamental particle in physics, the Higgs boson, in seeing if the electroweak theory holds water.

Some scientists think the top quark may be special because its mass can verify or jeopardize the electroweak theory. If jeopardized, that opens the door to what physicists refer to as “new physics” — theories about particles and our universe that go beyond the Standard Model.

Other scientists theorize the top quark might also be key to the unification of the electromagnetic and weak interactions of protons, neutrons and quarks. In addition, as the only quark that can be observed directly, the top quark tests the Standard Model’s strong force theory.

“So the top quark is really pushing both theories,” Kehoe said. “The top mass is particularly interesting because its measurement is getting to the point now where we are pushing even beyond the level that the theorists understand. Our experimental errors, or uncertainties, are so small, that it really forces theorists to try hard to understand the impact of the quark’s mass. We need to observe the Higgs interacting with the top directly and we need to measure both particles more precisely.”

The new measurement results were presented in August and September at the Third Annual Conference on Large Hadron Collider Physics, St. Petersburg, Russia, and at the 8th International Workshop on Top Quark Physics, Ischia, Italy.

“The public perception, with discovery of the Higgs, is ‘Ok, it’s done,’” Kehoe said. “But it’s not done. This is really just the beginning and the top quark is a key tool for figuring out the missing pieces of the puzzle.”

The results were made public by DZero, a collaborative experiment of more than 500 physicists from around the world. The measurement is described in “Precise measurement of the top quark mass in dilepton decays with optimized neutrino weighting” and is available online at http://bit.ly/1QJJzAe.

To narrow the top quark measurement, SMU doctoral researcher Huanzhao Liu took a standard methodology for measuring the top quark and improved the accuracy of some parameters. He also improved calibration of an analysis of top quark data.

“Liu achieved a surprising level of precision,” Kehoe said. “And his new method for optimizing analysis is also applicable to analyses of other particle data besides the top quark, making the methodology useful within the field of particle physics as a whole.”

The SMU optimization could be used to more precisely understand the Higgs boson, which explains why matter has mass, said Liu.

The Higgs was observed for the first time in 2012, and physicists keenly want to understand its nature.

“This methodology has its advantages — including understanding Higgs interactions with other particles — and we hope that others use it,” said Liu. “With it we achieved 20-percent improvement in the measurement. Here’s how I think of it myself — everybody likes a $199 iPhone with contract. If someday Apple tells us they will reduce the price by 20 percent, how would we all feel to get the lower price?”

Another optimization employed by Liu improved the calibration precision by four times, Kehoe said.

Top quarks, which rarely occur now, were much more common right after the Big Bang 13.8 billion years ago. However, top is the only quark, of six different kinds, that can be observed directly. For that reason, experimental physicists focus on the characteristics of top quarks to better understand the quarks in everyday matter.

To study the top, physicists generate them in particle accelerators, such as the Tevatron, a powerful U.S. Department of Energy particle accelerator operated by Fermi National Accelerator Laboratory (Fermilab) in Illinois, or the Large Hadron Collider in Switzerland, a project of the European Organization for Nuclear Research, CERN.

SMU’s measurement draws on top quark data gathered by DZero that was produced from proton-antiproton collisions at the Tevatron, which Fermilab shut down in 2011.

The new measurement is the most precise of its kind from the Tevatron, and is competitive with comparable measurements from the Large Hadron Collider. The top quark mass has been precisely measured more recently, but there is some divergence of the measurements. The SMU result favors the current world average value more than the current world record holder measurement, also from Fermilab. The apparent discrepancy must be addressed, Kehoe said.

“The ability to measure the top quark mass precisely is fortuitous because it, together with the Higgs boson mass, tells us whether the universe is stable or not,” Kehoe said. “That has emerged as one of today’s most important questions.”

“We want a theory — Standard Model or otherwise — that can predict physical processes at all energies,” Kehoe said. “But the measurements now are such that it looks like we may be over the border of a stable universe. We’re metastable, meaning there’s a gray area, that it’s stable in some energies, but not in others.”

Are we facing imminent doom? Will the universe collapse?
That disparity between theory and observation indicates the Standard Model theory has been outpaced by new measurements of the Higgs and top quark.

“It’s going to take some work for theorists to explain this,” Kehoe said, adding it’s a challenge physicists relish, as evidenced by their preoccupation with “new physics” and the possibilities the Higgs and Top quark create.

“I attended two conferences recently,” Kehoe said, “and there’s argument about exactly what it means, so that could be interesting.”

So are we in trouble? “Not immediately,” Kehoe said. “The energies at which metastability would kick in are so high that particle interactions in our universe almost never reach that level. In any case, a metastable universe would likely not change for many billions of years.”

As the only quark that can be observed, the top quark pops in and out of existence fleetingly in protons, making it possible for physicists to test and define its properties directly.

“To me it’s like fireworks,” Liu said. “They shoot into the sky and explode into smaller pieces, and those smaller pieces continue exploding. That sort of describes how the top quark decays into other particles.”

By measuring the particles to which the top quark decays, scientists capture a measure of the top quark, Liu explained.

But study of the top is still an exotic field, Kehoe said. “For years top quarks were treated as a construct and not a real thing. Now they are real and still fairly new — and it’s really important we understand their properties fully.” — Margaret Allen


Global Warming News: "Communicating Via Electrons Helps Deep Sea Microbes Gulp Methane"


Good communication is crucial to any relationship, especially when partners are separated by distance. This also holds true for microbes in the deep sea that need to work together to consume large amounts of methane released from vents on the ocean floor. Recent work at Caltech has shown that these microbial partners can still accomplish this task, even when not in direct contact with one another, by using electrons to share energy over long distances. This is the first time that direct interspecies electron transport—the movement of electrons from a cell, through the external environment, to another cell type—has been documented in microorganisms in nature.

"Our lab is interested in microbial communities in the environment and, specifically, the symbiosis—or mutually beneficial relationship—between microorganisms that allows them to catalyze reactions they wouldn't be able to do on their own," says Professor of Geobiology Victoria Orphan, who led the recent study. For the last two decades, Orphan's lab has focused on the relationship between a species of bacteria and a species of archaea that live in symbiotic aggregates, or consortia, within deep-sea methane seeps. The organisms work together in syntrophy (which means "feeding together") to consume up to 80 percent of methane emitted from the ocean floor—methane that might otherwise end up contributing to climate change as a greenhouse gas in our atmosphere.

Previously, Orphan and her colleagues contributed to the discovery of this microbial symbiosis, a cooperative partnership between methane-oxidizing archaea called anaerobic methanotrophs (or "methane eaters") and sulfate-reducing bacteria (organisms that can "breathe" sulfate instead of oxygen) that allows these organisms to consume methane using sulfate from seawater. However, it was unclear how these cells share energy and interact within the symbiosis to perform this task.

Because these microorganisms grow slowly (reproducing only four times per year) and live in close contact with each other, it has been difficult for researchers to isolate them from the environment to grow them in the lab. So, the Caltech team used a research submersible, called Alvin, to collect samples containing the methane-oxidizing microbial consortia from deep-ocean methane seep sediments and then brought them back to the laboratory for analysis.

The researchers used different fluorescent DNA stains to mark the two types of microbes and view their spatial orientation in consortia. In some consortia, Orphan and her colleagues found the bacterial and archaeal cells were well mixed, while in other consortia, cells of the same type were clustered into separate areas.

Orphan and her team wondered if the variation in the spatial organization of the bacteria and archaea within these consortia influenced their cellular activity and their ability to cooperatively consume methane. To find out, they applied a stable isotope "tracer" to evaluate the metabolic activity. The amount of the isotope taken up by individual archaeal and bacterial cells within their microbial "neighborhoods" in each consortium was then measured with a high-resolution instrument called nanoscale secondary ion mass spectrometry (nanoSIMS) at Caltech. This allowed the researchers to determine how active the archaeal and bacterial partners were relative to their distance to one another.

To their surprise, the researchers found that the spatial arrangement of the cells in consortia had no influence on their activity. "Since this is a syntrophic relationship, we would have thought the cells at the interface—where the bacteria are directly contacting the archaea—would be more active, but we don't really see an obvious trend. What is really notable is that there are cells that are many cell lengths away from their nearest partner that are still active," Orphan says.

To find out how the bacteria and archaea were partnering, co-first authors Grayson Chadwick (BS '11), a graduate student in geobiology at Caltech and a former undergraduate researcher in Orphan's lab, and Shawn McGlynn, a former postdoctoral scholar, employed spatial statistics to look for patterns in cellular activity for multiple consortia with different cell arrangements. They found that populations of syntrophic archaea and bacteria in consortia had similar levels of metabolic activity; when one population had high activity, the associated partner microorganisms were also equally active—consistent with a beneficial symbiosis. However, a close look at the spatial organization of the cells revealed that no particular arrangement of the two types of organisms—whether evenly dispersed or in separate groups—was correlated with a cell's activity.
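The kind of spatial statistic described here can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' code: it assumes per-cell measurements of position and isotope-uptake activity for the two cell types and asks whether activity correlates with distance to the nearest partner cell.

```python
# Hypothetical illustration of the spatial analysis described above: does a
# cell's activity depend on its distance to the nearest partner of the other
# type? The data values here are invented for demonstration.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import pearsonr

# Each row: x (um), y (um), activity (arbitrary units).
archaea = np.array([[0.0, 0.0, 1.1], [1.0, 0.5, 0.9],
                    [3.5, 2.0, 1.0], [5.0, 4.0, 1.2]])
bacteria = np.array([[0.5, 0.2, 1.0], [4.0, 4.5, 1.1], [6.0, 1.0, 0.8]])

def distance_to_nearest_partner(cells, partners):
    """Distance from each cell to its nearest partner cell of the other type."""
    tree = cKDTree(partners[:, :2])
    distances, _ = tree.query(cells[:, :2])
    return distances

dist = distance_to_nearest_partner(archaea, bacteria)
activity = archaea[:, 2]
r, p = pearsonr(dist, activity)
print(f"Correlation between partner distance and activity: r = {r:.2f} (p = {p:.2f})")
```

A correlation near zero, which is what the Caltech team observed across many consortia, argues against a strictly diffusion-limited exchange and is consistent with the longer-range electron transfer discussed below.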

To determine how these metabolic interactions were taking place even over relatively long distances, postdoctoral scholar and coauthor Chris Kempes, a visitor in computing and mathematical sciences, modeled the predicted relationship between cellular activity and the distance between syntrophic partners that depend on the molecular diffusion of a substrate. He found that conventional metabolites previously predicted to be involved in this syntrophic consumption of methane, such as hydrogen, were inconsistent with the spatial activity patterns observed in the data. However, revised models indicated that electrons could likely make the trip from cell to cell across greater distances.

"Chris came up with a generalized model for the methane-oxidizing syntrophy based on direct electron transfer, and these model results were a better match to our empirical data," Orphan says. "This pointed to the possibility that these archaea were directly transferring electrons derived from methane to the outside of the cell, and those electrons were being passed to the bacteria directly."

Guided by this information, Chadwick and McGlynn looked for independent evidence to support the possibility of direct interspecies electron transfer. Cultured bacteria, such as those from the genus Geobacter, are model organisms for the direct electron transfer process. These bacteria use large proteins, called multi-heme cytochromes, on their outer surface that act as conductive "wires" for the transport of electrons.

Using genome analysis—along with transmission electron microscopy and a stain that reacts with these multi-heme cytochromes—the researchers showed that these conductive proteins were also present on the outer surface of the archaea they were studying. And that finding, Orphan says, can explain why the spatial arrangement of the syntrophic partners does not seem to affect their relationship or activity.

"It's really one of the first examples of direct interspecies electron transfer occurring between uncultured microorganisms in the environment. Our hunch is that this is going to be more common than is currently recognized," she says.

Orphan notes that the information they have learned about this relationship will help to expand how researchers think about interspecies microbial interactions in nature. In addition, the microscale stable isotope approach used in the current study can be used to evaluate interspecies electron transport and other forms of microbial symbiosis occurring in the environment.

These results were published in a paper titled, "Single cell activity reveals direct electron transfer in methanotrophic consortia." The work was funded by the Department of Energy Division of Biological and Environmental Research and the Gordon and Betty Moore Foundation Marine Microbiology Initiative.

The results were published in the September 16 issue of the journal Nature.


Top Candidates for Alien Life: "Rocky Planets With Magnetic Fields Orbiting Small Stars"


A planet's magnetic field emanates from its core and is thought to deflect the charged particles of the stellar wind, protecting the atmosphere from being lost to space. Magnetic fields, born from the cooling of a planet's interior, could also protect life on the surface from harmful radiation, as the Earth's magnetic field protects us.

Earth-like planets orbiting close to small stars probably have magnetic fields that protect them from stellar radiation and help maintain surface conditions that could be conducive to life, according to research from astronomers at the University of Washington.

Low-mass stars are among the most common in the universe. Planets orbiting near such stars are easier for astronomers to target for study because when they transit, or pass in front of, their host star, they block a larger fraction of the light than if they transited a more massive star. But because such a star is small and dim, its habitable zone -- where an orbiting planet gets the heat necessary to maintain life-friendly liquid water on the surface -- also lies relatively close in.
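The advantage of small stars for transit studies is easy to quantify: the transit depth is simply the squared ratio of planet to star radius. The sketch below uses standard textbook radii (not values from the article) to compare an Earth-sized planet in front of a Sun-like star and a small red dwarf.

```python
# Minimal sketch with standard textbook radii: fraction of starlight blocked
# by an Earth-sized planet transiting a Sun-like star versus a small M dwarf.
R_SUN_M = 6.957e8    # solar radius, m
R_EARTH_M = 6.371e6  # Earth radius, m

def transit_depth(r_planet_m, r_star_m):
    """Fractional dip in starlight during a transit: (Rp / Rstar)**2."""
    return (r_planet_m / r_star_m) ** 2

for label, r_star in [("Sun-like star", R_SUN_M),
                      ("M dwarf, 0.2 R_sun", 0.2 * R_SUN_M)]:
    depth_ppm = transit_depth(R_EARTH_M, r_star) * 1e6
    print(f"{label}: transit depth ~ {depth_ppm:.0f} ppm")
```

The same planet produces a dip roughly 25 times deeper in front of the smaller star (about 2,100 ppm versus 84 ppm), which is why low-mass stars are such attractive transit targets.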

And a planet so close to its star is subject to the star's powerful gravitational pull, which could cause it to become tidally locked, with the same side forever facing its host star, as the moon is with the Earth. That same gravitational tug from the star also creates tidally generated heat inside the planet, or tidal heating. Tidal heating is responsible for driving the most volcanically active body in our solar system, Jupiter's moon Io.

In a paper published Sept. 22 in the journal Astrobiology, lead author Peter Driscoll sought to determine the fate of such worlds across time: "The question I wanted to ask is, around these small stars, where people are going to look for planets, are these planets going to be roasted by gravitational tides?" He was curious, too, about the effect of tidal heating on magnetic fields across long periods of time.

The research combined models of orbital interactions and heating by Rory Barnes, assistant professor of astronomy, with those of thermal evolution of planetary interiors done by Driscoll, who began this work as a UW postdoctoral fellow and is now a geophysicist at the Carnegie Institution for Science in Washington, D.C.

Their simulations ranged from one stellar mass -- stars the size of our sun -- down to about one-tenth of that size. By merging their models, they were able, Barnes said, "to produce a more realistic picture of what is happening inside these planets."

Barnes said there has been a general feeling in the astronomical community that tidally locked planets are unlikely to have protective magnetic fields "and therefore are completely at the mercy of their star." This research suggests that assumption is false.

Far from being harmful to a planet's magnetic field, tidal heating can actually help it along -- and in doing so also help the chance for habitability.

This is because of the somewhat counterintuitive fact that the more tidal heating a planetary mantle experiences, the better it is at dissipating its heat, thereby cooling the core, which in turn helps create the magnetic field.

Barnes said that in computer simulations they were able to generate magnetic fields for the lifetimes of these planets, in most cases. "I was excited to see that tidal heating can actually save a planet in the sense that it allows cooling of the core. That's the dominant way to form magnetic fields."

And since small or low mass stars are particularly active early in their lives -- for the first few billion years or so -- "magnetic fields can exist precisely when life needs them the most."

Driscoll and Barnes also found through orbital calculations that the tidal heating process is more extreme for planets in the habitable zone around very small stars, or those less than half the mass of the sun.

For planets in eccentric, or noncircular orbits around such low mass stars, they found that these orbits tend to become more circular during the time of extreme tidal heating. Once that circularization takes place, the planet stops experiencing any tidal heating at all.

The research was done through the Virtual Planetary Laboratory, a UW-based interdisciplinary research group funded through the NASA Astrobiology Institute.

"These preliminary results are promising, but we still don't know how they would change for a planet like Venus, where slow planetary cooling is already hindering magnetic field generation," Driscoll said. "In the future, exoplanetary magnetic fields could be observable, so we expect there to be a growing interest in this field going forward."


NASA Image of the Year --“Now We Know There is Liquid Water on the Surface of This Cold, Desert Planet"


“It took multiple spacecraft over several years to solve this mystery, and now we know there is liquid water on the surface of this cold, desert planet,” said Michael Meyer, lead scientist for NASA’s Mars Exploration Program at the agency’s headquarters in Washington. “It seems that the more we study Mars, the more we learn how life could be supported and where there are resources to support life in the future.”

The dark, narrow, 100-meter-long streaks shown above, called recurring slope lineae, flow downhill on Mars and are inferred to have been formed by contemporary flowing water. Recently, planetary scientists detected hydrated salts on these slopes at Hale crater, corroborating their original hypothesis that the streaks are indeed formed by liquid water. The blue color seen upslope of the dark streaks is thought not to be related to their formation, but instead to come from the presence of the mineral pyroxene.

New findings from NASA's Mars Reconnaissance Orbiter (MRO) provide the strongest evidence yet that liquid water flows intermittently on present-day Mars. Using an imaging spectrometer on MRO, researchers detected signatures of hydrated minerals on slopes where mysterious streaks are seen on the Red Planet. These darkish streaks appear to ebb and flow over time. They darken and appear to flow down steep slopes during warm seasons, and then fade in cooler seasons. They appear in several locations on Mars when temperatures are above minus 10 degrees Fahrenheit (minus 23 Celsius), and disappear at colder times.


“Our quest on Mars has been to ‘follow the water,’ in our search for life in the universe, and now we have convincing science that validates what we’ve long suspected,” said John Grunsfeld, astronaut and associate administrator of NASA’s Science Mission Directorate in Washington. “This is a significant development, as it appears to confirm that water -- albeit briny -- is flowing today on the surface of Mars.”

These downhill flows, known as recurring slope lineae (RSL), often have been described as possibly related to liquid water. The new findings of hydrated salts on the slopes point to what that relationship may be to these dark features. The hydrated salts would lower the freezing point of a liquid brine, just as salt on roads here on Earth causes ice and snow to melt more rapidly. Scientists say it’s likely a shallow subsurface flow, with enough water wicking to the surface to explain the darkening.

Dark narrow streaks called recurring slope lineae emanate from the walls of Garni crater on Mars. The dark streaks here are up to a few hundred meters in length. They are hypothesized to be formed by the flow of briny liquid water on Mars. The image above was produced by draping an orthorectified (RED) image (ESP_031059_1685) on a Digital Terrain Model (DTM) of the same site produced by the High Resolution Imaging Science Experiment (University of Arizona). Vertical exaggeration is 1.5.

"We found the hydrated salts only when the seasonal features were widest, which suggests that either the dark streaks themselves or a process that forms them is the source of the hydration. In either case, the detection of hydrated salts on these slopes means that water plays a vital role in the formation of these streaks," said Lujendra Ojha of the Georgia Institute of Technology (Georgia Tech) in Atlanta, lead author of a report on these findings published Sept. 28 by Nature Geoscience.

Ojha first noticed these puzzling features as a University of Arizona undergraduate student in 2010, using images from the MRO's High Resolution Imaging Science Experiment (HiRISE). HiRISE observations now have documented RSL at dozens of sites on Mars. The new study pairs HiRISE observations with mineral mapping by MRO’s Compact Reconnaissance Imaging Spectrometer for Mars (CRISM).

The spectrometer observations show signatures of hydrated salts at multiple RSL locations, but only when the dark features were relatively wide. When the researchers looked at the same locations and RSL weren't as extensive, they detected no hydrated salt.

Ojha and his co-authors interpret the spectral signatures as caused by hydrated minerals called perchlorates. The hydrated salts most consistent with the chemical signatures are likely a mixture of magnesium perchlorate, magnesium chlorate and sodium perchlorate. Some perchlorates have been shown to keep liquids from freezing even when conditions are as cold as minus 94 degrees Fahrenheit (minus 70 Celsius). On Earth, naturally produced perchlorates are concentrated in deserts, and some types of perchlorates can be used as rocket propellant.

Perchlorates have previously been seen on Mars. NASA's Phoenix lander and Curiosity rover both found them in the planet's soil, and some scientists believe that the Viking missions in the 1970s measured signatures of these salts. However, this study of RSL detected perchlorates, now in hydrated form, in different areas than those explored by the landers. This also is the first time perchlorates have been identified from orbit.

MRO has been examining Mars since 2006 with its six science instruments.

"The ability of MRO to observe for multiple Mars years with a payload able to see the fine detail of these features has enabled findings such as these: first identifying the puzzling seasonal streaks and now making a big step towards explaining what they are," said Rich Zurek, MRO project scientist at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California.

For Ojha, the new findings are more proof that the mysterious lines he first saw darkening Martian slopes five years ago are, indeed, present-day water.

"When most people talk about water on Mars, they're usually talking about ancient water or frozen water," he said. "Now we know there’s more to the story. This is the first spectral detection that unambiguously supports our liquid water-formation hypotheses for RSL."

The discovery is the latest of many breakthroughs by NASA’s Mars missions.


Monday 28 September 2015

NASA TV: "Water Found on Mars!" View Live

"Under certain circumstances, liquid water has been found on Mars" - Jim Green, NASA Planetary Science Director via Twitter



News Flash: NASA TV (View Below) to Live-Stream Major Mars Discovery MONDAY @ 11:30 am ET



NASA will broadcast news about Mars during a press conference Monday (Sept. 28) at 11:30 a.m. EDT (1530 GMT). During the press conference, researchers "will detail a major science finding from the agency’s ongoing exploration of Mars," according to NASA officials.

Participants in the event include:

Jim Green, director of planetary science at NASA Headquarters

Michael Meyer, lead scientist for the Mars Exploration Program at NASA Headquarters

Lujendra Ojha of the Georgia Institute of Technology in Atlanta

Mary Beth Wilhelm of NASA’s Ames Research Center in Moffett Field, California and the Georgia Institute of Technology

Alfred McEwen of the University of Arizona in Tucson, principal investigator for the High Resolution Imaging Science Experiment (HiRISE) aboard NASA's Mars Reconnaissance Orbiter (MRO) spacecraft


"Extreme Pulsar Companion Stars" --Confirmed by Harvard-Smithsonian Center for Astrophysics


CfA astronomers used ultraviolet images from Hubble to identify the companion stars to two millisecond pulsars located in the globular cluster 47 Tucanae. They were also able to confirm a previous but tentative identification, and to confirm two more. They report that each of these companions is a white dwarf star – an evolved star that can no longer sustain nuclear burning and which has shrunk to a fraction of its original radius.

When a star with a mass of roughly ten solar masses finishes its life, it does so in a spectacular explosion known as a supernova, leaving behind as remnant "ash" a neutron star. Neutron stars have masses of one-to-several Suns, but they are tiny in size, only tens of kilometers. Neutron stars spin rapidly, and when they have associated rotating magnetic fields to constrain charged particles, these particles emit electromagnetic radiation in a lighthouse-like beam that can sweep past the Earth with great regularity every few seconds or less. Such neutron stars are known as pulsars. Pulsars are dramatic and powerful probes of supernovae, their progenitor stars, and the properties of nuclear matter under the extreme conditions that exist in these stars.

Some pulsars, called millisecond pulsars, spin much more quickly, and astronomers have concluded that in order to rotate so rapidly these objects must be regularly accreting material from a nearby companion star in a binary orbit with it; the new material helps to spin up the neutron star, which would otherwise gradually slow down. There are more than 200 known millisecond pulsars. An understanding of these pulsars has been hampered, however, by the fact that only about a dozen of them have had their companion stars directly detected and studied.

Each of the pulsars studied spins more than 120 times per second, and the companions orbit quite closely, with periods ranging from only 0.43 days to 1.2 days - close enough to easily satisfy the requirements needed for this kind of cosmic cannibalism as the pulsars gradually feed on material from the white dwarfs. The new work significantly increases the number of identified and characterized millisecond pulsar companions.

"Discovery of Near-Ultraviolet Counterparts to Millisecond Pulsars in the Globular Cluster 47 Tucanae," L. E. Rivera-Sandoval, M. van den Berg, C. O. Heinke, H. N. Cohn, P. M. Lugger, P. Freire, J. Anderson, A. M. Serenelli, L. G. Althaus, A. M. Cool, J. E. Grindlay, P. D. Edmonds, R. Wijnands and N. Ivanova, MNRAS 453, 2707, 2015.

The image above is an optical photo of the globular cluster, 47 Tucanae. (South African Astronomical Observatory)


Saturday 26 September 2015

"Missing Gravitational Waves in the Fabric of the Universe" --Leads to Black Hole Rethink (Weekend Feature)


One hundred years since Einstein proposed gravitational waves as part of his general theory of relativity, an 11-year search performed with CSIRO’s Parkes telescope has shown that an expected background of waves is missing, casting doubt on our understanding of galaxies and black holes.

For scientists gravitational waves exert a powerful appeal, as it is believed they carry information allowing us to look back into the very beginnings of the Universe. Although there is strong circumstantial evidence for their existence, they have not yet been directly detected.

The work, led by Dr Ryan Shannon (of CSIRO and the International Centre for Radio Astronomy Research), is published today in the journal Science.

Using Parkes, the scientists expected to detect a background ‘rumble’ of the waves, coming from the merging galaxies throughout the Universe, but they weren’t there. This world-first research has caused scientists to think about the Universe in a different way.

“This is probably the most comprehensive, high precision science that’s ever been undertaken in this field of astronomy,” Dr Shannon said. “By pushing ourselves to the limits required for this sort of cosmic search we’re moving into new frontiers in all areas of physics, forcing ourselves to understand how galaxies and black holes work.”

The fact that gravitational waves weren’t detected goes against all theoretical calculations and throws our current understanding of black holes into question.

Galaxies grow by merging and every large one is thought to have a supermassive black hole at its heart. When two galaxies unite, the black holes are drawn together and form an orbiting pair. At this point, Einstein’s theory is expected to take hold, with the pair predicted to succumb to a death spiral, sending ripples known as gravitational waves through space-time, the very fabric of the Universe.

The image pictured up top is a composite of X-rays from NASA's orbiting Chandra X-ray observatory (blue) and optical data from the Hubble Space Telescope (gold). A close-up of the boxed portion of the image, featuring only X-ray data (pictured here), reveals two distinct black holes at the heart of galaxy NGC3393 that are around 1 million and 30 million times the mass of the Sun, and believed to be involved in what is known as a "minor merger," wherein a galaxy of relatively larger mass "eats" a smaller one.

Although Einstein’s general theory of relativity has withstood every test thrown at it by scientists, directly detecting gravitational waves remains the one missing piece of the puzzle.

To look for the waves, Dr Shannon’s team used the Parkes telescope to monitor a set of ‘millisecond pulsars’. These small stars produce highly regular trains of radio pulses and act like clocks in space. The scientists recorded the arrival times of the pulsar signals to an accuracy of ten billionths of a second.

A gravitational wave passing between Earth and a millisecond pulsar squeezes and stretches space, changing the distance between them by about 10 metres — a tiny fraction of the pulsar’s distance from Earth. This changes, very slightly, the time that the pulsar’s signals arrive on Earth.
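
Those two figures set the scale of the experiment: a 10-metre change in the light-travel path corresponds to a timing shift of only a few tens of nanoseconds, just a few times the 10-nanosecond precision quoted above. A quick back-of-the-envelope check in Python (illustrative arithmetic only, not the team's analysis):

C = 299_792_458.0            # speed of light, m/s

path_change_m = 10.0         # quoted change in the Earth-pulsar distance
timing_shift_s = path_change_m / C
print(f"timing shift ~ {timing_shift_s * 1e9:.0f} ns")                   # ~33 ns

timing_precision_s = 10e-9   # "ten billionths of a second"
print(f"shift / precision ~ {timing_shift_s / timing_precision_s:.1f}")  # ~3x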

The scientists studied their pulsars for 11 years, which should have been long enough to reveal gravitational waves.

So why haven’t they been found? There could be a few reasons, but the scientists suspect it’s because black holes merge very fast, spending little time spiralling together and generating gravitational waves.

“There could be gas surrounding the black holes that creates friction and carries away their energy, letting them come to the clinch quite quickly,” said team member Dr Paul Lasky, a postdoctoral research fellow at Monash University.

Whatever the explanation, it means that if astronomers want to detect gravitational waves by timing pulsars they’ll have to record them for many more years.

“There might also be an advantage in going to a higher frequency,” said Dr Lindley Lentati of the University of Cambridge, UK, a member of the research team who specialises in pulsar-timing techniques. Astronomers will also gain an advantage with the highly sensitive Square Kilometre Array telescope, set to start construction in 2018.

Not finding gravitational waves through pulsar timing has no implications for ground-based gravitational wave detectors such as Advanced LIGO (the Laser Interferometer Gravitational-Wave Observatory), which began its own observations of the Universe last week.

“Ground-based detectors are looking for higher-frequency gravitational waves generated by other sources, such as coalescing neutron stars,” said Dr Vikram Ravi, a member of the research team from Swinburne University (now at Caltech, in Pasadena, California).

The International Centre for Radio Astronomy Research (ICRAR) is a joint venture between Curtin University and The University of Western Australia with support and funding from the State Government of Western Australia.


CERN: "The Fundamental Symmetry of the Universe Confirmed" (Week's Most Popular)


Scientists working with ALICE (A Large Ion Collider Experiment), a heavy-ion detector on the Large Hadron Collider (LHC) ring, have made precise measurements of particle mass and electric charge that confirm the existence of a fundamental symmetry in nature. The investigators include Brazilian researchers affiliated with the University of São Paulo (USP) and the University of Campinas (UNICAMP).

"After the Big Bang, for every particle of matter an antiparticle was created. In particle physics, a very important question is whether all the laws of physics display a specific kind of symmetry known as CPT, and these measurements suggest that there is indeed a fundamental symmetry between nuclei and antinuclei," said Marcelo Gameiro Munhoz, a professor at USP's Physics Institute (IF) and a member of the Brazilian team working on ALICE.


The findings, reported in a paper published online in Nature Physics on August 17, led the researchers to confirm a fundamental symmetry between nuclei and their antinuclei in terms of charge, parity and time (CPT).

These measurements of particles produced in high-energy collisions of heavy ions in the LHC were made possible by the ALICE experiment's high-precision tracking and identification capabilities, as part of an investigation designed to detect subtle differences between the ways in which protons and neutrons join in nuclei while their antiparticles form antinuclei.

Munhoz is the principal investigator for the research project "High-energy nuclear physics at RHIC and LHC", supported by the São Paulo Research Foundation (FAPESP). The project--a collaboration between the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in the United States and ALICE at the LHC, operated by the European Organization for Nuclear Research (CERN) in Switzerland--consists of experimental activities relating to the study of relativistic heavy-ion collisions.

Among other objectives, the Brazilian researchers involved with ALICE seek to understand the production of heavy quarks (charm and bottom quarks) based on the measurement of electrons using an electromagnetic calorimeter and, more recently, Sampa, a microchip developed in Brazil to study rarer phenomena arising from heavy-ion collisions in the LHC.

According to Munhoz, the measurements of mass and charge performed in the symmetry experiment, combined with other studies, will help physicists to determine which of the many theories on the fundamental laws of the universe is most plausible.

"These laws describe the nature of all matter interactions," he said, "so it's important to know that physical interactions aren't changed by particle charge reversal, parity transformation, reflections of spatial coordinates and time inversion. The key question is whether the laws of physics remain the same under such conditions."

In particular, the researchers measured the mass-over-charge ratio differences for deuterons, consisting of a proton and a neutron, and antideuterons, as well as for nuclei of helium-3, comprising two protons and one neutron, and antihelium-3. Recent measurements at CERN compared the same properties of protons and antiprotons at high resolution.

The ALICE experiment records high-energy collisions of lead ions at the LHC, enabling the study of matter at extremely high temperatures and densities.

The lead-ion collisions provide an abundant source of particles and antiparticles, producing nuclei and the corresponding antinuclei at nearly equal rates. This allows ALICE to make a detailed comparison of the properties of the nuclei and antinuclei that are most copiously produced.

The experiment makes precise measurements of both the curvature of particle tracks in the detector's magnetic field and the particles' time of flight and uses this information to determine the mass-to-charge ratios for nuclei and antinuclei.

The time-of-flight detector determines the arrival time of particles and antiparticles with a resolution of 80 picoseconds; combined with the energy-loss measurement provided by the time-projection chamber, this high precision allows the scientists to measure a clear signal for the deuterons/antideuterons and helium-3/antihelium-3 nuclei studied in the symmetry experiment.
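
The kinematics behind that measurement can be sketched in a few lines of Python: the track's curvature in the magnetic field gives the momentum, the time of flight gives the velocity, and the two together fix the mass. The numbers below are invented for illustration and are not ALICE data.

import math

C = 299_792_458.0  # speed of light, m/s

def momentum_from_curvature(b_tesla, radius_m, charge_units=1):
    """Momentum in GeV/c from the radius of curvature of a charged track."""
    return 0.2998 * charge_units * b_tesla * radius_m

def mass_from_p_and_beta(p_gev, beta):
    """Mass in GeV/c^2 from momentum (GeV/c) and velocity (beta = v/c)."""
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# Hypothetical deuteron-like track: 0.5 T field, 13.3 m radius of curvature,
# timed over a 3.7 m flight path.
p = momentum_from_curvature(0.5, 13.3)       # ~2.0 GeV/c
beta = 3.7 / (C * 16.9e-9)                   # flight path / (c * time of flight)
m = mass_from_p_and_beta(p, beta)
print(f"p ~ {p:.2f} GeV/c, beta ~ {beta:.3f}, mass ~ {m:.2f} GeV/c^2")  # ~1.9, near the deuteron mass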

The image at the top of the page is an artist's conception that illustrates the history of the cosmos, from the Big Bang and the recombination epoch that created the microwave background, through the formation of galactic superclusters and galaxies themselves. The dramatic flaring at right emphasizes that the universe's expansion currently is speeding up.


Massive Dead Galaxies of the Early Universe Observed


An international team led by researchers at the Swiss Federal Institute of Technology in Zürich observed massive dead galaxies in the universe 4 billion years after the Big Bang with the Subaru Telescope's Multi-Object InfraRed Camera and Spectrograph (MOIRCS). They discovered that the stellar content of these galaxies is strikingly similar to that of massive elliptical galaxies seen locally. Furthermore, they identified progenitors of these dead galaxies when they were forming stars at an earlier cosmic epoch, unveiling the formation and evolution of massive galaxies across 11 billion years of cosmic time.

In the local universe, massive galaxies hosting more than about 100 billion stars are predominantly dead elliptical galaxies, that is, without any signs of star-formation activity. Many questions remain on when, how and for how long star formation occurred in such galaxies before the cessation of star formation, as well as what happened since to form the dead elliptical galaxies seen today.

In order to address these issues, the research team made use of fossil records imprinted by stars in the spectra of distant dead galaxies, which give important clues to their age, metal content, and element abundances. Local massive dead galaxies are about 10 billion years old and rich in heavy elements. Alpha-elements such as oxygen and magnesium, whose abundance relative to iron measures the duration of star formation, are also enhanced relative to iron compared with the Sun, indicating that these galaxies formed a large number of stars in a very short period. The team investigated the stellar content of galaxies in the distant universe 4 billion years after the Big Bang, in order to study galaxy evolution much closer to their formation epoch.

The team took advantage of MOIRCS's capability to observe multiple objects simultaneously, efficiently observing a sample of 24 faint galaxies. From these they created a composite spectrum; obtaining a single spectrum of comparable quality would have required 200 hours of Subaru Telescope time.

Analysis of the composite spectrum shows that the galaxies were already about 1 billion years old when observed 4 billion years after the Big Bang. They host 1.7 times the solar abundance of heavy elements relative to hydrogen, and their alpha-elements are enhanced relative to iron by twice the solar ratio. This is the first time the alpha-element abundance of stars has been measured in such distant dead galaxies, and it tells us that the duration of star formation in these galaxies was shorter than 1 billion years. These results indicate that these massive dead galaxies have evolved to the present day without further star formation.
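
Astronomers usually express such abundances in logarithmic 'bracket' notation relative to the Sun. Converting the two ratios quoted above (a quick sketch in Python, using only the numbers in the text):

import math

metals_vs_solar = 1.7         # heavy elements relative to hydrogen, vs. the Sun
alpha_over_fe_vs_solar = 2.0  # alpha-element enhancement over iron, vs. the Sun

# [Z/H] = log10( (Z/H)_galaxy / (Z/H)_sun ), and similarly for [alpha/Fe]
print(f"[Z/H]      ~ +{math.log10(metals_vs_solar):.2f} dex")         # ~ +0.23
print(f"[alpha/Fe] ~ +{math.log10(alpha_over_fe_vs_solar):.2f} dex")  # ~ +0.30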

What do massive dead galaxies look like while they are still forming stars? To answer this, the team searched for the progenitors of their sample based on the spectral analysis. The progenitors must be star-forming galaxies seen 1 billion years before the epoch observed for the dead galaxies. Indeed, the team finds similarly massive star-forming galaxies at the right epoch, with the star formation rates expected from the spectra. If these active galaxies were to keep forming stars at the same rate, they would quickly become more massive than any galaxies seen in the present universe. They must therefore cease star formation soon after the observed epoch and simply age.

Segue 1 shown at the top of the page, referred to as the "least chemically evolved galaxy known," could hint at the universe's very first stars. Among all known galaxies, Segue 1 has significantly fewer stars and less heavy elements, revealing an estimated end to its evolution 13 billion years ago. Heavy elements like iron are found only in trace amounts, leaving most of the composition as helium and hydrogen.

This research was published on 1st August 2015 in The Astrophysical Journal (Onodera et al. 2015 "The Ages, Metallicities, and Element Abundance Ratios of Massive Quenched Galaxies at z~1.6"). The preprint of the paper is available at http://bit.ly/1FzvVz8


New 'Stealth Theory' --"May Explain the Missing Matter of the Universe"


Lawrence Livermore scientists have come up with a new theory that may explain why dark matter has evaded direct detection in Earth-based experiments. A group of national particle physicists known as the Lattice Strong Dynamics Collaboration, led by a Lawrence Livermore National Laboratory team, has combined theoretical and computational physics techniques and used the Laboratory's massively parallel 2-petaflop Vulcan supercomputer to devise a new model of dark matter. The model identifies dark matter as naturally "stealthy" today, but it would have been easy to detect via interactions with ordinary matter in the extremely high-temperature plasma conditions that pervaded the early universe.

"These interactions in the early universe are important because ordinary and dark matter abundances today are strikingly similar in size, suggesting this occurred because of a balancing act performed between the two before the universe cooled," said Pavlos Vranas of LLNL, and one of the authors of the paper, "Direct Detection of Stealth Dark Matter through Electromagnetic Polarizability". The paper appears in an upcoming edition of the journal Physical Review Letters and is an "Editor's Choice."

Dark matter makes up 83 percent of all matter in the universe and does not interact directly with electromagnetic or strong and weak nuclear forces. Light does not bounce off of it, and ordinary matter goes through it with only the feeblest of interactions. Essentially invisible, it has been termed dark matter, yet its interactions with gravity produce striking effects on the movement of galaxies and galactic clusters, leaving little doubt of its existence.

The key to stealth dark matter's split personality is its compositeness and the miracle of confinement. Like quarks in a neutron, at high temperatures, these electrically charged constituents interact with nearly everything. But at lower temperatures they bind together to form an electrically neutral composite particle. Unlike a neutron, which is bound by the ordinary strong interaction of quantum chromodynamics (QCD), the stealthy neutron would have to be bound by a new and yet-unobserved strong interaction, a dark form of QCD.

"It is remarkable that a dark matter candidate just several hundred times heavier than the proton could be a composite of electrically charged constituents and yet have evaded direct detection so far," Vranas said.

Similar to protons, stealth dark matter is stable and does not decay over cosmic times. However, like QCD, it produces a large number of other nuclear particles that decay shortly after their creation. These particles can have net electric charge but would have decayed away a long time ago. In a particle collider with sufficiently high energy (such as the Large Hadron Collider in Switzerland), these particles can be produced again for the first time since the early universe. They could generate unique signatures in the particle detectors because they could be electrically charged.

"Underground direct detection experiments or experiments at the Large Hadron Collider may soon find evidence of (or rule out) this new stealth dark matter theory," Vranas said.

The LLNL lattice team authors are Evan Berkowitz, Michael Buchoff, Enrico Rinaldi, Christopher Schroeder and Pavlos Vranas, who is the lead of the team. The LLNL Laboratory Directed Research and Development and Grand Challenge computation programs supported this research. Other collaborators include researchers from Yale University, Boston University, Institute for Nuclear Theory, Argonne Leadership Computing Facility, University of California, Davis, University of Oregon, University of Colorado, Brookhaven National Laboratory and Syracuse University.


Thursday 24 September 2015

Newly Discovered Supermassive Black Hole --"Defies Theories of Galaxy-Size Limits"


The central supermassive black hole of a recently discovered galaxy is far larger than should be possible, according to current theories of galactic evolution. New work, carried out by astronomers at Keele University and the University of Central Lancashire, shows that the black hole is much more massive than it should be, compared to the mass of the galaxy around it. The scientists publish their results in a paper in Monthly Notices of the Royal Astronomical Society.

The galaxy, SAGE0536AGN, was initially discovered with NASA's Spitzer space telescope in infrared light. Thought to be at least 9 billion years old, it contains an active galactic nucleus (AGN), an incredibly bright object resulting from the accretion of gas by a central supermassive black hole. The gas is accelerated to high velocities due to the black hole's immense gravitational field, causing this gas to emit light.

The team has now also confirmed the presence of the black hole by measuring the speed of the gas moving around it. Using the Southern African Large Telescope, the scientists observed that an emission line of hydrogen in the galaxy spectrum (where light is dispersed into its different colours – a similar effect is seen using a prism) is broadened through the Doppler Effect, where the wavelength (colour) of light from objects is blue- or red-shifted depending on whether they are moving towards or away from us. The degree of broadening implies that the gas is moving around at high speed, a result of the strong gravitational field of the black hole.

These data have been used to calculate the black hole's mass: the more massive the black hole, the broader the emission line. The black hole in SAGE0536AGN was found to be 350 million times the mass of the Sun. But the mass of the galaxy itself, obtained through measurements of the movement of its stars, has been calculated to be 25 billion solar masses. This is seventy times larger than that of the black hole, but the black hole is still thirty times larger than expected for this size of galaxy.
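
The logic of the line-width method can be sketched with a simple virial estimate: gas orbiting at velocity v (inferred from the Doppler-broadened line) at a distance R from the black hole implies an enclosed mass of roughly M ~ f v^2 R / G. The velocity, radius and scaling factor in the Python sketch below are placeholders chosen only to reproduce the quoted order of magnitude; they are not the values used by the authors.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
LIGHT_DAY = 2.59e13    # metres

def virial_mass_msun(v_kms, radius_light_days, f=1.0):
    """Enclosed mass, in solar masses, for gas orbiting at v_kms at the given radius."""
    v = v_kms * 1e3
    r = radius_light_days * LIGHT_DAY
    return f * v**2 * r / G / M_SUN

# e.g. gas moving at ~4,000 km/s roughly 110 light-days from the black hole
print(f"{virial_mass_msun(4000, 112):.1e} solar masses")   # ~3.5e8, the order quoted above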

"Galaxies have a vast mass, and so do the black holes in their cores. This one though is really too big for its boots – it simply shouldn’t be possible for it to be so large", said Dr Jacco van Loon, an astrophysicist at Keele University and the lead author on the new paper.

In ordinary galaxies the black hole would grow at the same rate as the galaxy, but in SAGE0536AGN the black hole has grown much faster, or the galaxy stopped growing prematurely. Because this galaxy was found by accident, there may be more such objects waiting to be discovered. Time will tell whether SAGE0536AGN really is an oddball, or simply the first in a new class of galaxies.

The new work appears in "An evolutionary missing link? A modest-mass early-type galaxy hosting an oversized nuclear black hole", Jacco Th. van Loon and Anne E. Sansom, Monthly Notices of the Royal Astronomical Society, vol. 453 (3), pp. 2341-2348, Oxford University Press.


A "Just Right" Universe --New Insights from the Quantum World


The big mystery concerning the origin of the universe is how the star clusters, planetary systems, galaxies, and other objects that we now see managed to evolve out of nothing. There is a widespread belief within the scientific community that the birth of structure in the universe lies in the crossing of a quantum phase transition and that the faster the transition is crossed, the more structure it generates. Important new findings contradict that belief.

A new study has translated the Goldilocks dictum of "not too hot or too cold, just right" to the quantum world and the generation of quantum entanglement - the binding within and between matter and light - and suggests that the universe started "neither too fast nor too slow."

By studying a system that couples matter and light together, like the universe itself, researchers have now found that crossing a quantum phase transition at intermediate speeds generates the richest, most complex structure. Such structure resembles "defects" in an otherwise smooth and empty space. The findings are published in Physical Review A, the American Physical Society's main journal.

"Our findings suggest that the universe was 'cooked' at just the right speeds," said Neil Johnson, professor of physics in the University of Miami College of Arts & Sciences and one of the authors of the study. "Our paper provides a simple model that can be realized in a lab on a chip, to explore how such defect structure develops as the speed of cooking changes."

The study sheds new light on how to generate, control, and manipulate quantum entanglement, since the defects contain clusters of quantum entanglement of all sizes. The findings hold the key to a new generation of futuristic technologies--in particular, ultrafast quantum computing, ultrasafe quantum cryptography, high-precision quantum metrology, and even the quantum teleportation of information.

"Quantum entanglement is like the 'bitcoin' that funds the universe in terms of interactions and information," Johnson said. "It is the magic sauce that connects together all objects in the universe, including light and matter."

The image below shows 'just right' structure that emerges when you drive a system containing light and matter (like the universe), neither too fast nor too slow across a quantum phase transition. It illustrates the findings of the study titled "Enhanced dynamic light-matter entanglement from driving neither too fast nor too slow," published in the journal Physical Review A.


In the everyday world, a substance can undergo a phase transition at different temperatures; for example, water will turn to ice or steam when sufficiently cold or hot. But in the quantum world, the system can undergo a phase transition at absolute zero temperature, simply by changing the amount of interaction between the light and matter. This phase transition generates quantum entanglement in the process.
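
The article does not name a specific model, but a standard example of such a zero-temperature light-matter transition is the Dicke model, in which a collection of two-level atoms couples to a single mode of light and the system switches from a "normal" to a "superradiant" phase once the coupling exceeds a critical value. The short Python sketch below uses that model purely as an illustration; the frequencies and couplings are arbitrary.

import math

def critical_coupling(omega_field, omega_atom):
    """Critical light-matter coupling of the Dicke model (hbar = 1 units)."""
    return math.sqrt(omega_field * omega_atom) / 2.0

omega_field, omega_atom = 1.0, 1.0
lam_c = critical_coupling(omega_field, omega_atom)
for lam in (0.3, 0.7):
    phase = "superradiant" if lam > lam_c else "normal"
    print(f"coupling {lam:.1f}: {phase} phase (critical coupling {lam_c:.2f})")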

Johnson likes to compare the emergence of highly entangled light-matter structures, as the quantum phase transition is crossed, with the way lumps of porridge appear out of "nothing," when you heat up milk and oats.

"If you cross the transition at the right speed (cook at right speed), the structures (lumps) that appear are far more complex - more 'tasty' - than when crossing fast or slow," said Johnson. "Since it is a quantum phase transition that is being crossed, the structures that appear contain clumps of quantum entanglement."

The results of the study, titled "Enhanced dynamic light-matter entanglement from driving neither too fast nor too slow," are robust for a wide range of system sizes, and the effect is realizable using existing experimental setups under realistic conditions. O.L. Acevedo, from Universidad de los Andes, Colombia, is first author of the study. Other co-authors from Universidad de los Andes are L. Quiroga and F. J. Rodriguez.

"Understanding quantum entanglement in light-matter systems is arguably the fundamental problem in physics," Johnson said.

The current paper opens up a novel line of investigation in this area. In addition, it provides a unique opportunity to design and build new nanostructure systems that harness and manipulate quantum entanglement effects. The researchers are now looking at specifying the precise conditions that experimentalists will need in order to see the enhanced quantum entanglement effect that they predict.


Wednesday 23 September 2015

NASA: Radiation from the Milky Way --"Could Have Had a Profound Effect on Mutation Rates"


Radiation from sources in our galaxy could have had a profound effect on mutation rates throughout the history of life on Earth. Studying ancient life on Earth is important for astrobiologists who are interested in how speciation and evolutionary radiations occurred throughout the history of our planet. However, it’s not always easy to pinpoint these events in time. For instance, when looking back at the history of life, there is a disparity between fossil ages and molecular divergence dates for some groups of organisms.

Molecular divergence is based on the idea that certain molecules change over time as they are passed down from generation to generation. For example, by looking closely at the DNA of an organism, scientists can see how the sequence of nucleotides (the building blocks of DNA) has been altered by mutation, and determine the rate at which the changes occur. This calculation can indicate when in geologic history two species diverged. This technique is referred to as the 'molecular clock.’
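
In its simplest form, the molecular-clock calculation divides the observed sequence divergence between two species by twice the substitution rate (twice, because both lineages accumulate changes independently after splitting). The rate and divergence in the Python sketch below are invented for illustration.

def divergence_time_myr(fraction_of_sites_differing, rate_per_site_per_myr):
    """Rough divergence time, in millions of years, under a constant-rate clock."""
    return fraction_of_sites_differing / (2.0 * rate_per_site_per_myr)

# e.g. 2% sequence divergence at a rate of 1e-4 substitutions per site per Myr
print(divergence_time_myr(0.02, 1e-4))   # ~100 million years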

Molecular ages for the origin of species are usually higher than fossil ages, and the disparity between fossil and molecular ages has often been attributed to the fact that fossils only provide a lower boundary for ages. Gaps in the fossil record always make it tricky to determine when exactly speciation occurred.

A new study looks at the problem from the perspective of physics, examining a potential bias in molecular ages tied to the fact that molecules do not always change over time at a constant rate. The rate of evolution can be affected by many things, such as selection pressures or communities that become isolated from the rest of the population. The history of Earth is also full of large-scale events that could cause dramatic increases in mutation, such as exposure to radiation events from nearby supernovae or other sources like gamma-ray bursts.

The study, “A possible role for stochastic radiation events in the systematic disparity between molecular and fossil dates,” will be published in the upcoming book Earth and Life II. The work was supported by the Exobiology & Evolutionary Biology element of the NASA Astrobiology Program.

The image at the top of the page shows the remnant of Supernova 1987A seen in light of very different wavelengths. ALMA Observatory data (in red) shows newly formed dust in the center of the remnant. Hubble (in green) and Chandra (in blue) data show the expanding shock wave. 


Anthropocene Global-Warming Tipping Point --"Early Warning Signals Observed"


The indications of climate change are all around us today, but researchers have now revealed for the first time when and where the first clear signs of global warming appeared in the temperature record, and where those signals are likely to be clearly seen in extreme rainfall events in the near future.

The new research published in Environmental Research Letters gives an insight into the global impacts that have already been felt, even at this very early stage, and where those impacts are likely to intensify in the coming years. The image at the top of the page shows Hurricane Katrina's powerful eyewall. (Lieutenant Mike Silah/courtesy NOAA)

"We examined average and extreme temperatures because they were always projected to be the measure that is most sensitive to global warming," said lead author from the ARC Centre of Excellence for Climate System Science, Dr Andrew King.

"Remarkably our research shows that you could already see clear signs of global warming in the tropics by the 1960s but in parts of Australia, South East Asia and Africa it was visible as early as the 1940s."

The reason the first changes in average temperature and temperature extremes appeared in the tropics was that those regions generally experience a much narrower range of temperatures. This meant that smaller shifts in the temperature record due to global warming were more easily seen, as the sketch below illustrates.
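
A toy calculation makes that logic concrete: treat the warming signal as having "emerged" once the trend-driven shift exceeds the local year-to-year variability. The trend and variability figures in the Python sketch below are invented for illustration and are not taken from the study.

def year_of_emergence(trend_c_per_decade, natural_variability_c, start_year=1900):
    """Year when the accumulated warming trend first exceeds natural variability."""
    decades_needed = natural_variability_c / trend_c_per_decade
    return start_year + 10 * decades_needed

print(year_of_emergence(0.1, 0.3))   # tropics-like: low variability        -> ~1930
print(year_of_emergence(0.1, 0.8))   # mid-latitude-like: higher variability -> ~1980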

The first signal to appear in the tropics was the change in average temperatures; extreme temperature events showed a clear global warming signal somewhat later.

Closer to the poles, the climate change signal emerged in the temperature record later, but by the period 1980-2000 the temperature record in most regions of the world was showing clear global warming signals.

One of the few exceptions to this clear global warming signal was found in large parts of the continental United States, particularly on the East Coast and up through the central states. According to the models, these regions have yet to show obvious warming signals, but they are expected to appear in the next decade.

While temperature records generally showed pronounced indications of global warming, heavy rainfall events have yet to make their mark. The models showed a general increase in extreme rainfall but the global warming signal was not strong enough yet to rise above the expected natural variation.

"We expect the first heavy precipitation events with a clear global warming signal will appear during winters in Russia, Canada and northern Europe over the next 10-30 years," said co-author Dr Ed Hawkins from the National Centre for Atmospheric Science at the University of Reading, UK.

"This is likely to bring pronounced precipitation events on top of the already existing trend towards increasingly wet winters in these regions."

Importantly, the findings closely correspond to observational datasets used by the IPCC (Chapter 10 - Detection and Attribution of Climate Change) in its most recent report, which showed increasing temperatures caused by global warming.


Extreme Star Discovered --Brightest in the Universe With a Massive Magnetic Field


Observations using NASA's Chandra X-ray Observatory revealed that the unusually large magnetosphere around an O-type star called NGC 1624-2 contains a raging storm of extreme stellar winds and dense plasma that gobbles up X-rays before they can escape into space. The image above shows the open star cluster NGC 1624. The blue arrow pinpoints the star NGC 1624-2.

Findings from a team of researchers led by Florida Institute of Technology Assistant Professor VĂ©ronique Petit may help scientists better understand the lifecycle of certain massive stars, which are essential for creating metals needed for the formation of other stars and planets.

The findings will be published Sept. 23 in the journal Monthly Notices of the Royal Astronomical Society from Oxford University Press.

The massive O-type star - the hottest and brightest type of star in the universe - has the largest magnetosphere known in its class. Petit found that NGC 1624-2's magnetic field traps gas trying to escape from the star, and that trapped gas absorbs its own X-rays. The star's powerful stellar winds are three to five times faster and at least 100,000 times denser than our Sun's solar wind. Those winds grapple violently with the magnetic field, and the trapped particles create the star's huge aura of hot, very dense plasma.


"The magnetic field isn't letting its stellar wind get away from the star, so you get these big flows that are forced to collide head on at the magnetic equator, creating gas shock-heated to 10 million Kelvin and plenty of X-rays," said Petit, who was part of a team of scientists that discovered the star in 2012. "But the magnetosphere is so large that nearly 80 percent of these X-rays get absorbed before being able to escape into free space and reach the Chandra telescope."

The magnetic field at the surface of NGC 1624-2 is 20,000 times stronger than at the surface of our Sun. If NGC 1624-2 were at the center of our solar system, loops of dense, hot plasma would extend nearly to the orbit of Venus.
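
To put those figures in rough perspective, the Python sketch below takes the Sun's mean surface field to be about 1 gauss and Venus' orbit to be 0.72 AU; these reference values are typical textbook numbers, not taken from the study.

SOLAR_MEAN_SURFACE_FIELD_G = 1.0   # gauss, approximate mean solar surface field
SOLAR_RADIUS_M = 6.96e8
AU_M = 1.496e11

field_g = 20_000 * SOLAR_MEAN_SURFACE_FIELD_G
print(f"surface field ~ {field_g:,.0f} G ~ {field_g / 1e4:.0f} tesla")      # ~2 T

venus_orbit_in_solar_radii = 0.72 * AU_M / SOLAR_RADIUS_M
print(f"Venus' orbit ~ {venus_orbit_in_solar_radii:.0f} solar radii out")   # ~155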

Only one in 10 massive stars has a magnetic field. Unlike smaller stars such as our sun, which generate magnetism with an internal dynamo, massive stars carry "fossil" magnetic fields left over from some event in their early lives, perhaps a collision with another star.

Petit and her team, including Florida Tech graduate student Rebecca MacInnis, will know even more about NGC 1624-2 in October, after getting data back from the Hubble Space Telescope that will explore the dynamics of its trapped wind.

