Thursday, 28 January 2016

Evidence Of Tsunamis On Indian Ocean Shores Long Before 2004

Kruawun Jankaew led a team of geologists who unearthed evidence that tsunamis have repeatedly washed over a Thai island during the last 2,800 years.

A quarter-million people were killed when a tsunami inundated Indian Ocean coastlines the day after Christmas in 2004. Now scientists have found evidence that the event was not a first-time occurrence.
A team working on Phra Thong, a barrier island along the hard-hit west coast of Thailand, unearthed evidence of at least three previous major tsunamis in the preceding 2,800 years, the most recent from about 550 to 700 years ago. That team, led by Kruawun Jankaew of Chulalongkorn University in Thailand, included Brian Atwater, a University of Washington affiliate professor of Earth and space sciences and a U.S. Geological Survey geologist.
A second team found similar evidence of previous tsunamis during the last 1,200 years in Aceh, a province at the northern tip of the Indonesian island of Sumatra where more than half the deaths from the 2004 tsunami occurred.
Sparse knowledge of the region's tsunami history contributed to the loss of life in 2004, the scientists believe. Few people living along the coasts knew to heed the natural tsunami warnings, such as the strong shaking felt in Aceh and the rapid retreat of ocean water from the shoreline that was observed in Thailand.
But on an island just off the coast of Aceh most people safely fled to higher ground in 2004 because the island's oral history includes information about a devastating tsunami in 1907.
"A region's tsunami history can serve as a long-term warning system," Atwater said.
The research will reinforce the importance of tsunami education as an essential part of early warning, said Jankaew, the lead author.
"Many people in Southeast Asia, especially in Thailand, believe, or would like to believe, that it will never happen again," Jankaew said. "This will be a big step towards mitigating the losses from future tsunami events."
The team found evidence for previous tsunamis by digging pits and augering holes at more than 150 sites on an island about 75 miles north of Phuket, a Thai tourist resort area ravaged by the 2004 tsunami. That tsunami was generated 300 miles to the west when the seafloor was warped during a magnitude 9.2 earthquake.
At 20 sites in marshes, the researchers found layers of white sand about 4 inches thick alternating with layers of black peaty soil. Witnesses confirmed that the top sand layer, just below the surface, was laid down by the 2004 tsunami, which ran 20 to 30 feet deep across much of the island.
Radiocarbon dating of bark fragments in soil below the second sand layer led the scientists to estimate that the most recent predecessor to the 2004 tsunami probably occurred between A.D. 1300 and 1450. They also noted signs of two earlier tsunamis during the last 2,500 to 2,800 years.
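As background on how such estimates are made, here is a minimal sketch of the standard conversion from a sample's measured radiocarbon content to a conventional radiocarbon age; the 0.92 input is an invented illustration, and turning radiocarbon years into calendar ranges such as A.D. 1300 to 1450 additionally requires calibration against records like tree-ring curves.
```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional constant used for 14C ages

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years before 1950) from the measured
    14C activity relative to the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A bark fragment retaining ~92% of the modern 14C level (illustrative):
print(round(radiocarbon_age(0.92)))  # ~670 radiocarbon years BP
```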
There are no known written records describing an Indian Ocean tsunami between A.D. 1300 and 1450, including the accounts of noted Islamic traveler Ibn Battuta and records of the great Ming Dynasty armadas of China, both of which visited the area at different times during that period. Atwater hopes the new geologic evidence might prompt historians to check other Asian documents from that era.
"This research demonstrates that tsunami geology, both recent and past tsunamis, can help extend the tsunami catalogues far beyond historical records," Jankaew said.
The new findings also carry lessons for the northwest coast of North America, where scientists estimate that many centuries typically elapse between catastrophic tsunamis generated by the Cascadia subduction zone.
"Like Aceh, Cascadia has a history of tsunamis that are both infrequent and catastrophic, and that originate during earthquakes that provide a natural tsunami warning," Atwater said. "This history calls for sustained efforts in tsunami education."
Findings from both teams are published in the Oct. 30, 2008, edition of Nature.
Other co-authors of the Thai paper are Yuki Sawai of the Geological Survey of Japan, Montri Choowong and Thasinee Charoentitirat of Chulalongkorn University, Maria Martin of the UW and Amy Prendergast of Geoscience Australia.
The research was funded by the U.S. Agency for International Development, Thailand's Ministry of Natural Resources and Environment, the U.S. National Science Foundation, the Japan Society for the Promotion of Science and the Thailand Research Fund.

Story Source:
The above post is reprinted from materials provided by University of Washington. Note: Materials may be edited for content and length.

Historic Indian sword was masterfully crafted

A 75-centimeter-long shamsheer from the late 18th or early 19th century, made in India (Wallace Collection, London).

Italian, UK researchers use non-destructive techniques and show the secrets of forging methods.
The master craftsmanship behind Indian swords was highlighted when scientists and conservationists from Italy and the UK joined forces to study a curved single-edged sword called a shamsheer. The study, led by Eliza Barzagli of the Institute for Complex Systems and the University of Florence in Italy, is published in Springer's journal Applied Physics A -- Materials Science & Processing.
The 75-centimeter-long sword from the Wallace Collection in London was made in India in the late eighteenth or early nineteenth century. The design is of Persian origin; from Persia it spread across Asia, eventually giving rise to a family of similar curved weapons, called scimitars, forged in various Southeast Asian countries.
Two different approaches were used to examine the shamsheer: the classical one (metallography) and a non-destructive technique (neutron diffraction). This allowed the researchers to test the differences and complementarities of the two techniques.
The sword in question first underwent metallographic tests at the laboratories of the Wallace Collection to ascertain its composition. Samples to be viewed under the microscope were collected from already damaged sections of the weapon. The sword was then sent to the ISIS pulsed spallation neutron source at the Rutherford Appleton Laboratory in the UK, where two neutron diffraction techniques, which do not damage artefacts, were used to further shed light on the processes and materials behind its forging.
"Ancient objects are scarce, and the most interesting ones are usually in an excellent state of conservation. Because it is unthinkable to apply techniques with a destructive approach, neutron diffraction techniques provide an ideal solution to characterize archaeological specimens made from metal when we cannot or do not want to sample the object," said Barzagli, explaining why different methods were used.
It was established that the steel used is quite pure. Its high carbon content of at least one percent shows it is made of wootz steel. This type of crucible steel was historically used in India and Central Asia to make high-quality swords and other prestige objects. Its band-like pattern is caused when a mixture of iron and carbon crystallizes into cementite. This forms when craftsmen allow cast pieces of metal (called ingots) to cool down very slowly, before being forged carefully at low temperatures. Barzagli's team reckons that the craftsman of this particular sword allowed the blade to cool in the air, rather than plunging it into a liquid of some sort. The compositional results also led the researchers to presume that this particular sword was probably used in battle.
Craftsmen often enhanced the characteristic "watered silk" pattern of wootz steel by micro-etching the surface. Barzagli explains that, through overcleaning, some of these original 'watered' surfaces have since been obscured or removed entirely. "A non-destructive method able to identify which blades with shiny surfaces are actually made of wootz steel is very welcome from a conservation point of view," she added.

Story Source:
The above post is reprinted from materials provided by Springer Science+Business Media. Note: Materials may be edited for content and length.

Journal Reference:
  1. E. Barzagli, F. Grazzi, A. Williams, D. Edge, A. Scherillo, J. Kelleher, M. Zoppi. Characterization of an Indian sword: classic and noninvasive methods of investigation in comparison. Applied Physics A, 2015; DOI: 10.1007/s00339-014-8968-0

New discoveries concerning Ötzi's genetic history

The Iceman's hand is pictured.

A study was published last week on the DNA of Helicobacter pylori, the pathogen extracted from the stomach of Ötzi, the ice mummy who has provided valuable information on the life of Homo sapiens. New research at the European Academy of Bolzano/Bozen (EURAC) further clarifies the genetic history of the man who lived in the Eastern Alps over 5,300 years ago.
In 2012 a complete analysis of the Y chromosome (transmitted from fathers to their sons) showed that Ötzi's paternal genetic line is still present in modern-day populations. In contrast, studies of mitochondrial DNA (transmitted solely via the mother to her offspring) left many questions still open. To clarify whether the genetic maternal line of the Iceman, who lived in the eastern Alps over 5,300 years ago, has left its mark in current populations, researchers at the European Academy of Bolzano/Bozen (EURAC) have now compared his mitochondrial DNA with 1,077 modern samples. The study concluded that the Iceman's maternal line -- named K1f -- is now extinct. A second part of the study, a comparison of genetic data of the mummy with data from other European Neolithic samples, provided information regarding the origin of K1f: researchers postulate that the mitochondrial lineage of the Iceman originated locally in the Alps, in a population that did not grow demographically. The study, which also clarifies Ötzi's genetic history in the context of European demographic changes from Neolithic times onwards, was published in Scientific Reports, an open access journal of the Nature group.
"The mummy's mitochondrial DNA was the first to be analysed, in 1994." says Valentina Coia, a biologist at EURAC and first author of the study. "It was relatively easy to analyse and -- along with the Y chromosome -- allows us to go back in time, telling us about the genetic history of an individual. Despite this, the genetic relationship between the Iceman's maternal lineage and lineages found in modern populations was not yet clear."
The most recent study regarding the analysis of Ötzi's complete mitochondrial DNA, conducted in 2008 by other research teams, showed that the Iceman's maternal lineage -- named K1f -- was no longer traceable in modern populations. The study did not make clear, however, whether this was due to an insufficient number of comparison samples or whether K1f was indeed extinct. Valentina Coia explains further: "The first hypothesis could not be ruled out given that the study considered only 85 modern comparison samples from the K1 lineage -- the genetic lineage that also includes that of Ötzi -- which comprised few samples from Europe and especially none from the eastern Alps, which are home to populations that presumably have a genetic continuity with the Iceman.
"To test the two hypotheses, we needed to compare Ötzi's mitochondrial DNA with a larger number of modern samples." The EURAC research team, in collaboration with the Sapienza University of Rome and the University of Santiago de Compostela, thus compared the mitochondrial DNA of the Iceman with that from 1,077 individuals belonging to the K1 lineage, of which 42 samples originated from the eastern Alps and were analysed for the first time in this study. The new comparison showed that neither the Iceman's lineage nor any other evolutionarily close lineages are present in modern populations: the researchers therefore lean towards the hypothesis that Ötzi's maternal genetic branch has died out.
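As a highly simplified illustration of this kind of comparison, the toy sketch below assigns a query sequence to its nearest lineage by counting mismatches; the sequences and labels are invented, and the actual study worked with complete mitogenomes and phylogenetic methods rather than raw mismatch counts.
```python
def hamming(a, b):
    """Count mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

# Toy aligned mtDNA fragments; names and bases are hypothetical
query = "ACGTTAGCCA"  # stand-in for the Iceman's sequence
samples = {
    "K1a": "ACGTTAGCTA",
    "K1b": "ACGATAGCTA",
    "K1f-like": "ACGTTAGCCA",
}
nearest = min(samples, key=lambda name: hamming(query, samples[name]))
print(nearest, hamming(query, samples[nearest]))  # closest lineage, distance
```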
It remains to be explained why Ötzi's maternal lineage has disappeared, while his paternal lineage -- named G2a -- still exists in Europe. To clarify this point, researchers at EURAC compared Ötzi's mitochondrial DNA and Y chromosome with available data from numerous ancient samples found at 14 different archaeological sites throughout Europe. The results showed that the paternal lineage of Ötzi was very common in different regions in Europe during the Neolithic age, while his maternal lineage probably existed only in the Alps.
Putting together the genetic data from ancient and modern samples, both those already present in the literature and those analysed in this study, the researchers have proposed the following scenario to explain the Iceman's genetic history: Ötzi's paternal lineage, G2a, is part of an ancient genetic substrate that arrived in Europe from the Near East with the migrations of the first Neolithic peoples some 8,000 years ago. Additional migrations and other demographic events occurring after the Neolithic Age in Europe then partially replaced G2a with other lineages, except in geographically isolated areas such as Sardinia. In contrast, the Iceman's maternal branch originated locally in the eastern Alps at least 5,300 years ago. The same migrations that only partially replaced his paternal lineage caused the extinction of his maternal lineage, which was carried by a small, demographically stationary population. Indeed, groups in the eastern Alps increased significantly in size only from the Bronze Age onwards, as evidenced by archaeological studies conducted in the territory inhabited by the Iceman.

Story Source:
The above post is reprinted from materials provided by European Academy of Bozen/Bolzano (EURAC). Note: Materials may be edited for content and length.

Journal Reference:
  1. V. Coia, G. Cipollini, P. Anagnostou, F. Maixner, C. Battaggia, F. Brisighelli, A. Gómez-Carballa, G. Destro Bisol, A. Salas, A. Zink. Whole mitochondrial DNA sequencing in Alpine populations and the genetic history of the Neolithic Tyrolean Iceman. Scientific Reports, 2016; 6: 18932. DOI: 10.1038/srep18932

Harmful mutations have accumulated during early human migrations out of Africa

Harmful mutations have accumulated
during early human migrations out of Africa.



Modern humans (Homo sapiens) are thought to have first emerged in Africa about 150,000 years ago. Some 100,000 years later, a few of them left their homeland, travelling first to Asia and then further east, crossing the Bering Strait and colonizing the Americas. Laurent Excoffier and his colleagues developed theoretical models predicting that if modern humans migrated as small bands, then the populations that broke off from their original African family should progressively accumulate slightly harmful mutations -- a mutation load. The mutation load of a population should then reflect the distance it has covered since leaving Africa. In a nutshell: an individual from Mexico should carry more harmful genetic variants than an individual from Africa.

To test their hypothesis, the researchers used next-generation sequencing (NGS) technology to sequence the complete set of coding variants from the genomes of individuals from seven populations within and outside Africa, i.e. from the Democratic Republic of Congo, Namibia, Algeria, Pakistan, Cambodia, Siberia and Mexico. They then simulated the spatial distribution of harmful mutations predicted by their theory. The findings coincided: the number of slightly deleterious mutations per individual does indeed increase with distance from Southern Africa, which is consistent with an expansion of humans from that region.
The main reason for the higher load of harmful mutations in populations established further from Africa is that natural selection is not very powerful in small populations: deleterious mutations were purged less efficiently in small pioneer tribes than in larger populations. In addition, selection had less time to act in populations that broke away from their African homeland and settled distant regions later.
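A rough way to see why selection falters in small populations: once a mutation's disadvantage s is small compared with about 1/N, random drift dominates its fate. The sketch below, a minimal Wright-Fisher simulation with arbitrary parameter values (not the authors' model), makes the point numerically.
```python
import random

def fixation_probability(N, s, trials=2000):
    """Monte Carlo Wright-Fisher model: how often a single copy of a
    mildly deleterious mutant (relative fitness 1 - s) drifts all the
    way to fixation in a haploid population of size N."""
    fixed = 0
    for _ in range(trials):
        count = 1
        while 0 < count < N:
            # expected mutant frequency after selection, then resample N offspring
            p = count * (1 - s) / (count * (1 - s) + (N - count))
            count = sum(random.random() < p for _ in range(N))
        fixed += (count == N)
    return fixed / trials

print(fixation_probability(N=50, s=0.01))    # same order as the neutral 1/N = 0.02
print(fixation_probability(N=1000, s=0.01))  # ~0: the larger population purges it
```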
"We find that mildly deleterious mutations have evolved as if they were neutral during the out-of-Africa expansion, which lasted probably for more than a thousand generations. Contrastingly, very harmful mutations are found at similar frequencies in all individuals of the world, as if there was a maximum threshold any individual can stand," says Stephan Peischl, a SIB member from Bern, and one of the main authors of the study.
"It's quite amazing that 50 thousand year-old migrations still leave a mark on current human genetic diversity, but to be able to see this you need a huge amount of data in many populations from different continents. Only 5 years ago, this would not have been possible," concludes Laurent Excoffier.
These results were recently published in the Proceedings of the National Academy of Sciences.

Story Source:
The above post is reprinted from materials provided by Swiss Institute of Bioinformatics. Note: Materials may be edited for content and length.

Journal Reference:
  1. Brenna M. Henn, Laura R. Botigué, Stephan Peischl, Isabelle Dupanloup, Mikhail Lipatov, Brian K. Maples, Alicia R. Martin, Shaila Musharoff, Howard Cann, Michael P. Snyder, Laurent Excoffier, Jeffrey M. Kidd, Carlos D. Bustamante. Distance from sub-Saharan Africa predicts mutational load in diverse human genomes. Proceedings of the National Academy of Sciences, 2015; 201510805. DOI: 10.1073/pnas.1510805112

Mosquitoes capable of carrying Zika virus found in Washington, D.C.

Mosquito biting.

On Monday (Jan. 25), the World Health Organization announced that Zika virus, a mosquito-borne illness that in the past year has swept quickly throughout equatorial countries, is expected to spread across the Americas and into the United States.
The disease, which was discovered in 1947 but had since been seen only in small, short-lived outbreaks, causes symptoms including a rash, headache and mild fever. However, a May 2015 outbreak in Brazil led to nearly 3,500 reports of birth defects linked to the virus, even after its symptoms had passed, and an uptick in cases of Guillain-Barré syndrome, an immune disorder. The Centers for Disease Control and Prevention has issued a travel alert advising pregnant women to avoid traveling to countries where the disease has been recorded.
Zika virus is transmitted by the mosquito species Aedes aegypti, also a carrier of dengue fever and chikungunya, two other tropical diseases. Though Aedes aegypti is not native to North America, researchers at the University of Notre Dame who study the species have reported the discovery of a population of the mosquitoes in a Capitol Hill neighborhood of Washington, D.C. To make matters worse, the team found genetic evidence that these mosquitoes have overwintered there for at least the past four years, meaning they are adapting for persistence in a northern climate well outside their normal range.
While the Washington population is currently disease-free, Notre Dame Department of Biological Sciences professor David Severson, who led the team, noted that the ability of this species to survive in a northern climate is troublesome. This mosquito is typically restricted to tropical and subtropical regions of the world and not found farther north in the United States than Alabama, Mississippi, Georgia and South Carolina.
"What this means for the scientific world," said Severson, who led the team, "is some mosquito species are finding ways to survive in normally restrictive environments by taking advantage of underground refugia. Therefore, a real potential exists for active transmission of mosquito-borne tropical diseases in popular places like the National Mall. Hopefully, politicians will take notice of events like this in their own backyard and work to increase funding levels on mosquitoes and mosquito-borne diseases."
Severson's research focuses on mosquito genetics and genomics with a primary goal of understanding disease transmission. He has studied and tracked mosquitoes all over the world and most recently served as the director of the Eck Institute for Global Health at Notre Dame. His team, in coordination with the Disease Carrying Insects Program of Fairfax County Health Department in Fairfax, Virginia, recently published their findings in the American Journal of Tropical Medicine and Hygiene.
Notre Dame has a long history of mosquito research, studying both Aedes aegypti and Anopheles gambiae species, vector control and using mathematical models to better understand the dynamics of infectious disease transmission and control. Alex Perkins, Eck Family Assistant Professor of Biological Sciences, focuses on using mathematical, statistical and computational approaches to study mosquito-borne pathogens including dengue, chikungunya and Zika. Perkins uses the models to understand how to best control and prevent transmission of these diseases. He has previously worked with the CDC on making recommendations for chikungunya and dengue virus, and said he has discussed working with the CDC on Zika virus modeling.

Story Source:
The above post is reprinted from materials provided by University of Notre Dame. Note: Materials may be edited for content and length.

Journal Reference:
  1. A. Lima, D. D. Lovin, P. V. Hickner, D. W. Severson. Evidence for an Overwintering Population of Aedes aegypti in Capitol Hill Neighborhood, Washington, DC. American Journal of Tropical Medicine and Hygiene, 2015; 94 (1): 231. DOI: 10.4269/ajtmh.15-0351

Wednesday, 20 January 2016

It's a 3-D printer, but not as we know it

3D printing techniques have quickly become some of the most widely used tools to rapidly design and build new components. A team of engineers at the University of Bristol has developed a new type of 3D printing that can print composite materials, which are used in many high-performance products such as tennis rackets, golf clubs and aeroplanes. This technology will soon enable a much greater range of things to be 3D printed at home and at low cost.
The study, published in Smart Materials and Structures, demonstrates a novel method in which ultrasonic waves are used to carefully position millions of tiny reinforcement fibres as part of the 3D printing process. The fibres are formed into a microscopic reinforcement framework that gives the material strength. This microstructure is then set in place using a focused laser beam, which locally cures the epoxy resin and so prints the object.
To achieve this the research team mounted a switchable, focused laser module on the carriage of a standard three-axis 3D printing stage, above the new ultrasonic alignment apparatus.
Tom Llewellyn-Jones, a PhD student in advanced composites who developed the system, said: "We have demonstrated that our ultrasonic system can be added cheaply to an off-the-shelf 3D printer, which then turns it into a composite printer."
In the study, a print speed of 20 mm/s was achieved, which is similar to conventional additive layer techniques. The researchers have now shown the ability to assemble a plane of fibres into a reinforcement framework. The precise orientation of the fibres can be controlled by switching the ultrasonic standing wave pattern mid-print.
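In acoustic manipulation of this kind, particles gather at the pressure nodes of the standing wave, which sit half a wavelength apart, so the drive frequency sets the spacing of the fibre lines. The following sketch works through that relationship; the 2 MHz frequency and 2,500 m/s speed of sound are assumed round numbers, not values from the paper.
```python
def node_spacing_mm(frequency_hz, sound_speed_m_s):
    """Pressure nodes of a standing wave are half a wavelength apart."""
    wavelength_m = sound_speed_m_s / frequency_hz
    return wavelength_m / 2 * 1000  # convert m to mm

# Assumed values: 2 MHz transducer, ~2500 m/s speed of sound in the resin
print(f"{node_spacing_mm(2e6, 2500):.3f} mm between fibre lines")  # 0.625 mm
```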
This approach allows the realisation of complex fibrous architectures within a 3D printed object. The versatile nature of the ultrasonic manipulation technique also enables a wide-range of particle materials, shapes and sizes to be assembled, leading to the creation of a new generation of fibrous reinforced composites that can be 3D printed.
Bruce Drinkwater, Professor of Ultrasonics in the Department of Mechanical Engineering, said: "Our work has shown the first example of 3D printing with real-time control over the distribution of an internal microstructure and it demonstrates the potential to produce rapid prototypes with complex microstructural arrangements. This orientation control gives us the ability to produce printed parts with tailored material properties, all without compromising the printing."
Dr Richard Trask, Reader in Multifunctional Materials in the Department of Aerospace Engineering, added: "As well as offering reinforcement and improved strength, our method will be useful for a range of smart materials applications, such as printing resin-filled capsules for self-healing materials or piezoelectric particles for energy harvesting."

Story Source:
The above post is reprinted from materials provided by University of Bristol. Note: Materials may be edited for content and length.

Journal Reference:
  1. Thomas M Llewellyn-Jones, Bruce W Drinkwater, Richard S Trask. 3D printed components with ultrasonically arranged microscale structure. Smart Materials and Structures, 2016; 25 (2): 02LT01. DOI: 10.1088/0964-1726/25/2/02LT01

Watching electrons cool in 30 quadrillionths of a second

An illustration showing single layers of graphene with thin layers of insulating boron nitride that form a sandwich structure.

Two University of California, Riverside assistant professors of physics are among a team of researchers that have developed a new way of seeing electrons cool off in an extremely short time period.
The development could have applications in numerous places where heat management is important, including visual displays, next-generation solar cells and photodetectors for optical communications.
In visual displays, such as those used in cell phones and computer monitors, and in photodetectors, which have a wide variety of applications including solar energy harvesting and fiber optic telecommunications, much of the energy of the electrons is wasted by heating the material. Controlling this flow of heat, rather than letting it dissipate, could potentially increase the efficiency of such devices by converting excess energy into useful power.
The research is outlined in a paper, "Tuning ultrafast electron thermalization pathways in a van der Waals heterostructure," published online Monday (Jan. 18) in the journal Nature Physics. Nathaniel Gabor and Chun Hung Lui, assistant professors of physics at UC Riverside, are among the co-authors.
In electronic materials, such as those used in semiconductors, electrons can be rapidly heated by pulses of light. The time it takes for electrons to cool each other off is extremely short, typically less than 1 trillionth of a second.
To understand this behavior, researchers use highly specialized tools based on ultra-fast laser techniques. In the two-dimensional material graphene, the cooling of excited electrons occurs even faster, taking only 30 quadrillionths of a second (30 femtoseconds). Previous studies struggled to capture this remarkably fast behavior.
To solve that, the researchers used a completely different approach. They combined single layers of graphene with thin layers of insulating boron nitride to form a sandwich structure, known as a van der Waals heterostructure, which gives electrons two paths to choose from when cooling begins. Either the electrons stay in graphene and cool by bouncing off one another, or they get sucked out of graphene and move through the surrounding layer.
By tuning standard experimental knobs, such as voltage and optical pulse energy, the researchers found they can precisely control where the electrons travel and how long they take to cool off. The work provides new ways of seeing electrons cool off at extremely short time scales, and demonstrates novel devices for nanoscale optoelectronics.
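A simple mental model for this tunability treats the two paths as parallel relaxation channels whose rates add, with the faster channel capturing most of the electrons. The sketch below uses invented femtosecond time constants purely for illustration; the real channel rates are what the voltage and pulse-energy knobs adjust.
```python
def two_channel_cooling(tau_stay_fs, tau_extract_fs):
    """Parallel relaxation channels: the total rate is the sum of the
    individual rates, and each channel's share is its rate / total."""
    rate = 1 / tau_stay_fs + 1 / tau_extract_fs
    tau_effective = 1 / rate
    extracted_fraction = (1 / tau_extract_fs) / rate
    return tau_effective, extracted_fraction

# Assumed time scales: cooling within graphene vs extraction into boron nitride
tau_eff, frac = two_channel_cooling(tau_stay_fs=30, tau_extract_fs=100)
print(f"effective lifetime {tau_eff:.0f} fs; {frac:.0%} of electrons extracted")
```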
This structure is one of the first in a new class of devices that are synthesized by mechanically stacking atomically thin membranes. By carefully choosing the materials that make up the device, the researchers developed a new type of optoelectronic photodetector that is only 10 nanometers thick. Such devices address the technological drive for ultra-dense, low-power, and ultra-efficient devices for integrated circuits.
The research follows advances made in a 2011 Science article, in which the research team discovered the fundamental importance of hot electrons in the optoelectronic response of devices based on graphene.

Story Source:
The above post is reprinted from materials provided by University of California - Riverside. Note: Materials may be edited for content and length.

Journal Reference:
  1. Qiong Ma, Trond I. Andersen, Nityan L. Nair, Nathaniel M. Gabor, Mathieu Massicotte, Chun Hung Lui, Andrea F. Young, Wenjing Fang, Kenji Watanabe, Takashi Taniguchi, Jing Kong, Nuh Gedik, Frank H. L. Koppens, Pablo Jarillo-Herrero. Tuning ultrafast electron thermalization pathways in a van der Waals heterostructure. Nature Physics, 2016; DOI: 10.1038/nphys3620

Monday, 18 January 2016

Astronomers Detect Signs of an Invisible Black Hole at the Center of the Milky Way

Nobeyama Radio Telescope Detects Signs of an Invisible Black Hole


A team of astronomers led by Tomoharu Oka, a professor at Keio University in Japan, has found an enigmatic gas cloud, called CO-0.40-0.22, only 200 light years away from the center of the Milky Way. What makes CO-0.40-0.22 unusual is its surprisingly wide velocity dispersion: the cloud contains gas with a very wide range of speeds. The team found this mysterious feature with two radio telescopes, the Nobeyama 45-m Telescope in Japan and the ASTE Telescope in Chile, both operated by the National Astronomical Observatory of Japan.
To investigate the detailed structure, the team observed CO-0.40-0.22 with the Nobeyama 45-m Telescope again to obtain 21 emission lines from 18 molecules. The results show that the cloud has an elliptical shape and consists of two components: a compact but low density component with a very wide velocity dispersion of 100 km/s, and a dense component extending 10 light years with a narrow velocity dispersion.
What makes this velocity dispersion so wide? There are no holes inside the cloud. Also, X-ray and infrared observations did not find any compact objects. These features indicate that the velocity dispersion is not caused by a local energy input, such as supernova explosions.
Figure. (a) The center of the Milky Way seen in the 115 and 346 GHz emission lines of carbon monoxide (CO). The white regions show the condensation of dense, warm gas. (b) Close-up intensity map around CO-0.40-0.22 seen in the 355 GHz emission line of HCN molecules. The ellipses indicate shell structures in the gas near CO-0.40-0.22. (c) Velocity dispersion diagram taken along the dotted line shown above. The wide velocity dispersion of 100 km/s in CO-0.40-0.22 stands out.
The team performed a simple simulation of gas clouds flung by a strong gravity source. In the simulation, the gas clouds are first attracted by the source, and their speeds increase as they approach it, reaching a maximum at the closest point to the object. After that, the clouds continue past the object and their speeds decrease. The team found that a model using a gravity source with 100 thousand times the mass of the Sun inside an area with a radius of 0.3 light years provided the best fit to the observed data. "Considering the fact that no compact objects are seen in X-ray or infrared observations," explains Oka, the lead author of the paper that appeared in the Astrophysical Journal Letters, "as far as we know, the best candidate for the compact massive object is a black hole."
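As a rough consistency check, separate from the team's actual simulation, one can compute the circular-orbit speed around a point mass of 100 thousand solar masses at a radius of 0.3 light years; it comes out on the same order as the observed 100 km/s dispersion.
```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # metres

M = 1e5 * M_SUN        # inferred mass of the hidden gravity source
r = 0.3 * LIGHT_YEAR   # radius of the region containing it

v = math.sqrt(G * M / r)      # circular-orbit speed at radius r
print(f"{v / 1e3:.0f} km/s")  # ~70 km/s, comparable to the 100 km/s spread
```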
If that is the case, this is the first detection of an intermediate mass black hole. Astronomers already know about two sizes of black holes: stellar-mass black holes, formed after the gigantic explosions of very massive stars; and supermassive black holes (SMBH) often found at the centers of galaxies. The mass of SMBH ranges from several million to billions of times the mass of the Sun. A number of SMBHs have been found, but no one knows how the SMBHs are formed. One idea is that they are formed from mergers of many intermediate mass black holes. But this raises a problem because so far no firm observational evidence for intermediate mass black holes has been found. If the cloud CO-0.40-0.22, located only 200 light years away from Sgr A* (the 4 million solar mass SMBH at the center of the Milky Way), contains an intermediate mass black hole, it might support the intermediate mass black hole merger scenario of SMBH evolution.

Signs of an Invisible Black Hole
(Left Top) CO-0.40-0.22 seen in the 87 GHz emission line of SiO molecules. (Left Bottom) Position-velocity diagram of CO-0.40-0.22 along the magenta line in the top panel. (Right Top) Simulation results for two moving clouds affected by a strong compact gravity source. The diagram shows changes in the positions and shapes of the clouds over a period of 900 thousand years (starting from t=0) at intervals of 100 thousand years. The axes are in parsecs (1 parsec = 3.26 light years). (Right Bottom) Comparison of observational results (in gray) and the simulation (red, magenta, and orange) in terms of the shape and velocity structure. The shapes and velocities of the clouds at 700 thousand years in the simulation match the observational results well.
These results open a new way to search for black holes with radio telescopes. Recent observations have revealed that there are a number of wide-velocity-dispersion compact clouds similar to CO-0.40-0.22. The team proposes that some of those clouds might contain black holes. One study suggested that there are 100 million black holes in the Milky Way Galaxy, but X-ray observations have so far found only dozens. Most of the black holes may be "dark" and very difficult to see directly at any wavelength. "Investigations of gas motion with radio telescopes may provide a complementary way to search for dark black holes," said Oka. "The ongoing wide-area survey observations of the Milky Way with the Nobeyama 45-m Telescope and high-resolution observations of nearby galaxies using the Atacama Large Millimeter/submillimeter Array (ALMA) have the potential to increase the number of black hole candidates dramatically."
The observation results were published as Oka et al., "Signature of an Intermediate-Mass Black Hole in the Central Molecular Zone of Our Galaxy," in the Astrophysical Journal Letters issued on January 1, 2016. The research team members are Tomoharu Oka, Reiko Mizuno, Kodai Miura and Shunya Takekawa, all at Keio University.
This research is supported by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (C) No. 24540236.

Saturday, 9 January 2016

UAE desert sand can store solar energy up to 1000°C


Desert sand from the UAE can now be considered a possible thermal energy storage (TES) material, say researchers. Its thermal stability, specific heat capacity, and tendency to agglomerate have been studied at high temperatures.
The Masdar Institute of Science and Technology, an independent, research-driven graduate-level university focused on advanced energy and sustainable technologies, today announced that its researchers have successfully demonstrated that desert sand from the UAE could be used in concentrated solar power (CSP) facilities to store thermal energy up to 1000°C.
The research project called 'Sandstock' has been seeking to develop a sustainable and low-cost gravity-fed solar receiver and storage system, using sand particles as the heat collector, heat transfer and thermal energy storage media.
Dr. Behjat Al Yousuf, Interim Provost, Masdar Institute, said, "The research success of the Sandstock project illustrates the strength of our research and its local relevance. With the launch of the Masdar Institute Solar Platform (MISP) in November, we have further broadened the scope of our solar energy research and we believe more success will follow in the months ahead."
A research paper on the findings developed under the guidance of Dr. Nicolas Calvet, Assistant Professor, Department of Mechanical and Materials Engineering, was presented by PhD student Miguel Diago at the 21st Solar Power and Chemical Energy Systems (SolarPACES 2015) Conference in South Africa. The paper was co-authored by alumni Alberto Crespo Iniesta, Dr. Thomas Delclos, Dr. Tariq Shamim, Professor of Mechanical and Materials Engineering at Masdar Institute, and Dr. Audrey Soum-Glaude (French National Center for Scientific Research PROMES CNRS Laboratory).
Replacing the typical heat storage materials used in TES systems -- synthetic oil and molten salts -- with inexpensive sand can increase plant efficiency due to the increased working temperature of the storage material and therefore reduce costs. A TES system based on such a local and natural material like sand also represents a new sustainable energy approach that is relevant for the economic development of Abu Dhabi's future energy systems.
The analyses showed that it is possible to use desert sand as a TES material up to 800-1000°C. The sand's chemical composition was analyzed with the X-ray fluorescence (XRF) and X-ray diffraction (XRD) techniques, which revealed the dominance of quartz and carbonate materials. The sand's radiant energy reflectiveness was also measured before and after a thermal cycle, as it may be possible to use the desert sand not only as a TES material but also as a direct solar absorber under concentrated solar flux.
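For a sense of scale, the sensible-heat relation Q = m × cp × ΔT gives the energy a given mass of sand can hold; the specific heat of about 0.8 kJ/kg·K and the 800 K temperature swing below are assumed round numbers for quartz-rich sand, not figures reported by the team.
```python
def stored_heat_kwh(mass_kg, delta_t_k, cp_j_per_kg_k=800.0):
    """Sensible heat Q = m * cp * dT, returned in kilowatt-hours."""
    return mass_kg * cp_j_per_kg_k * delta_t_k / 3.6e6  # 3.6e6 J per kWh

# One tonne of sand cycled over an 800 K swing (assumed cp ~ 0.8 kJ/kg.K):
print(f"{stored_heat_kwh(1000, 800):.0f} kWh per tonne")  # ~178 kWh
```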
Dr. Nicolas Calvet said: "The availability of this material in desert environments such as the UAE allows for significant cost reductions in novel CSP plants, which may use it both as TES material and solar absorber. The success of the Sandstock project reflects the usability and practical benefits of the UAE desert sand."
In parallel to the sand characterization, a laboratory-scale prototype was tested with a small solar furnace at the PROMES-CNRS laboratory in Odeillo, France, home of a 1 MW solar furnace. Masdar Institute alumnus Alberto Crespo Iniesta was in charge of the design, construction, and experiment.
The next step of the project is to test an improved prototype at the pre-commercial scale at the MISP, using the beam-down concentrator, potentially in collaboration with an industrial partner.


Story Source:
The above post is reprinted from materials provided by Masdar Institute of Science and Technology. Note: Materials may be edited for content and length.

Single-chip laser delivers powerful result


A schematic of the new laser system.

From their use in telecommunication to detecting hazardous chemicals, lasers play a major role in our everyday lives. They keep us connected, keep us safe, and allow us to explore the dark corners of the universe.
Now a Northwestern University team has made this ever-important tool even simpler and more versatile by integrating a mid-infrared tunable laser with an on-chip amplifier. This breakthrough allows adjustable wavelength output, modulators, and amplifiers to be held inside a single package.
With this architecture, the laser has demonstrated an order-of-magnitude more output power than its predecessors, and the tuning range has been enhanced by more than a factor of two.
"We have always been leaders in high-power and high-efficiency lasers," said Manijeh Razeghi, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern's McCormick School of Engineering, who led the study. "Combining an electrically tunable wavelength with high power output was the next logical extension."
Supported by the Department of Homeland Security Science and Technology Directorate, National Science Foundation, Naval Air Systems Command, and NASA, the research is described in a paper published online on December 21, 2015 in the journal Applied Physics Letters.
With mid-infrared spectroscopy, a chemical can be identified through its unique absorption spectrum. This greatly interests government agencies that aim to detect hazardous chemicals or possible explosive threats. Because Razeghi's new system is highly directional, its high power can be used more efficiently, improving its ability to detect chemicals. It also allows for standoff applications, keeping personnel physically distant from potentially dangerous environments. The technology could also benefit free-space optical communications and aircraft protection.
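The measurement idea underneath is absorption spectroscopy: each chemical dims specific mid-infrared wavelengths by an amount governed by the Beer-Lambert law, and the pattern of dips across wavelengths acts as a fingerprint. The sketch below uses invented values for the absorption coefficient, path length and concentration.
```python
def transmitted_fraction(epsilon, path_cm, concentration):
    """Beer-Lambert law: I/I0 = 10^(-epsilon * l * c), with epsilon in
    L mol^-1 cm^-1, path length in cm and concentration in mol/L."""
    return 10 ** (-epsilon * path_cm * concentration)

# Illustrative standoff scenario: a strong mid-IR band (epsilon ~ 200),
# a 100 m open-air path (10,000 cm) and a trace gas at 4e-7 mol/L
print(f"{transmitted_fraction(200, 10_000, 4e-7):.2f}")  # ~0.16 transmitted
```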
This new research builds on Razeghi's many years of research with Northwestern's Center for Quantum Devices. In 2012, she developed a widely tunable, single chip, mid-infrared laser.
"We demonstrated the first continuously tunable, continuous operation, mid-infrared lasers with electrical tuning of the emission wavelength," Razeghi said. "This initial demonstration was very exciting, and continuing developing has led us to a number of new projects."

Story Source:
The above post is reprinted from materials provided by Northwestern University. The original item was written by Amanda Morris. Note: Materials may be edited for content and length.

Journal Reference:
  1. S. Slivken, S. Sengupta, M. Razeghi. High power continuous operation of a widely tunable quantum cascade laser with an integrated amplifier. Applied Physics Letters, 2015; 107 (25): 251101. DOI: 10.1063/1.4938005

Monday, 4 January 2016

NuSTAR finds cosmic clumpy doughnut around black hole

Galaxy NGC 1068 can be seen in close-up in this view from NASA's Hubble Space Telescope. NuSTAR's high-energy X-ray eyes were able to obtain the best view yet into the hidden lair of the galaxy's central, supermassive black hole.

The most massive black holes in the universe are often encircled by thick, doughnut-shaped disks of gas and dust. This deep-space doughnut material ultimately feeds and nourishes the growing black holes tucked inside.


Until recently, telescopes weren't able to penetrate some of these doughnuts, also known as tori.
"Originally, we thought that some black holes were hidden behind walls or screens of material that could not be seen through," said Andrea Marinucci of the Roma Tre University in Italy, lead author of a new Monthly Notices of the Royal Astronomical Society study describing results from NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, and the European Space Agency's XMM-Newton space observatory.
With its X-ray vision, NuSTAR recently peered inside one of the densest of these doughnuts known to surround a supermassive black hole. This black hole lies at the center of a well-studied spiral galaxy called NGC 1068, located 47 million light-years away in the Cetus constellation.
The observations revealed a clumpy, cosmic doughnut.
"The rotating material is not a simple, rounded doughnut as originally thought, but clumpy," said Marinucci.
Doughnut-shaped disks of gas and dust around supermassive black holes were first proposed in the mid-1980s to explain why some black holes are hidden behind gas and dust, while others are not. The idea is that the orientation of the doughnut relative to Earth affects the way we perceive a black hole and its intense radiation. If the doughnut is viewed edge-on, the black hole is blocked. If the doughnut is viewed face-on, the black hole and its surrounding, blazing materials can be detected. This idea is referred to as the unified model because it neatly joins together the different black hole types, based solely upon orientation.
In the past decade, astronomers have been finding hints that these doughnuts aren't as smoothly shaped as once thought. They are more like defective, lumpy doughnuts that a doughnut shop might throw away.
The new discovery is the first time this clumpiness has been observed in an ultra-thick doughnut, and supports the idea that this phenomenon may be common. The research is important for understanding the growth and evolution of massive black holes and their host galaxies.
"We don't fully understand why some supermassive black holes are so heavily obscured, or why the surrounding material is clumpy," said co-author Poshak Gandhi of the University of Southampton in the United Kingdom. "This is a subject of hot research."
Both NuSTAR and XMM-Newton observed the supermassive black hole in NGC 1068 simultaneously on two occasions in 2014 and 2015. On one of those occasions, in August 2014, NuSTAR observed a spike in brightness. NuSTAR observes X-rays in a higher-energy range than XMM-Newton, and those high-energy X-rays can uniquely pierce thick clouds around the black hole. The scientists say the spike in high-energy X-rays was due to a clearing in the thickness of the material entombing the supermassive black hole.
"It's like a cloudy day, when the clouds partially move away from the sun to let more light shine through," said Marinucci.
NGC 1068 is well known to astronomers as the first black hole to give birth to the unification idea. "But it is only with NuSTAR that we now have a direct glimpse of its black hole through such clouds, albeit fleeting, allowing a better test of the unification concept," said Marinucci.
The team says that future research will address the question of what causes the unevenness in doughnuts. The answer could come in many flavors. It's possible that a black hole generates turbulence as it chomps on nearby material. Or, the energy given off by young stars could stir up turbulence, which would then percolate outward through the doughnut. Another possibility is that the clumps may come from material falling onto the doughnut. As galaxies form, material migrates toward the center, where the density and gravity are greatest. The material tends to fall in clumps, almost like a falling stream of water condensing into droplets as it hits the ground.
"We'd like to figure out if the unevenness of the material is being generated from outside the doughnut, or within it," said Gandhi.
"These coordinated observations with NuSTAR and XMM-Newton show yet again the exciting science possible when these satellites work together," said Daniel Stern, NuSTAR project scientist at NASA's Jet Propulsion Laboratory in Pasadena, California.