Tuesday, 24 January 2017

Lightning may become one of weather's predictors

Kate Murphy, 17 January 2017, The New York Times

interconnected Now it is universally accepted that lightning flash rates are correlated with storm intensification and severity.

Lightning’s intricate, darting dance across the sky can be mesmerising or terrifying, elegant or explosive, divine or destructive, depending on how close it is. But meteorologists have historically viewed lightning as little more than a weather byproduct. Like a rainbow, only with teeth. That view may be changing as technology has evolved to study lightning and its potential to help predict, if not drive, atmospheric events.

In 2017, two new space-based lightning sensors are set to go live, joining a growing global architecture of ground-based detection networks. “We are now in the golden age of lightning measurement and research,” said Christopher Schultz, a meteorologist and lightning specialist at the Marshall Space Flight Centre in Huntsville, Alabama, USA.

Worldwide, thunderstorms hurl about four million lightning bolts to the ground each day. So it is not surprising that researchers have spent decades figuring out how to track and measure the phenomena, and trying to persuade funding sources that collecting data was worthwhile. “When I got started back in the 1980s, nobody cared,” said Hugh Christian, a research professor at the University of Alabama at Huntsville, USA and director of the team that developed a lightning sensor that will be attached to a truss of the International Space Station in 2017.

Locating lightning strikes

It will augment coverage provided by a geostationary satellite, known as GOES-16, that was successfully launched in November 2016. “It’s been a long road,” Christian said. “But now it’s universally accepted that lightning flash rates are correlated with storm intensification and severity.” This means that the more that is known about lightning activity, the more warning it is possible to give people about associated severe weather events like tornadoes and hail.

Experts hope to locate about 90% of lightning strikes in the Western Hemisphere, within clouds and on the ground, using detection instruments in space that work by measuring photons blinking in the clouds below and, terrestrially, by sensing radio wave disturbances.
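Ground-based networks of the kind described above typically locate a strike by comparing when the same radio pulse arrives at several stations. The sketch below is a minimal illustration of that time-difference-of-arrival idea, with made-up sensor positions and a simulated strike; real networks use many more stations and far more sophisticated solvers.

```python
import math

C = 3.0e5  # propagation speed of the radio pulse, km/s (speed of light)

# Hypothetical sensor positions (km) and a simulated strike location
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (80.0, 90.0)]
strike = (42.0, 57.0)

def arrival_times(src):
    """Time for the radio pulse from src to reach each sensor."""
    return [math.hypot(sx - src[0], sy - src[1]) / C for sx, sy in sensors]

observed = arrival_times(strike)

def residual(candidate):
    """Sum of squared mismatches in arrival-time *differences*;
    differencing against sensor 0 cancels the unknown emission time."""
    t = arrival_times(candidate)
    return sum(((t[i] - t[0]) - (observed[i] - observed[0])) ** 2
               for i in range(1, len(sensors)))

# Brute-force grid search on a 1 km grid over the coverage area
best = min(((x, y) for x in range(0, 101) for y in range(0, 101)),
           key=residual)
print(best)  # recovers the simulated strike at (42, 57)
```

With four stations the arrival-time differences pin down a unique two-dimensional location; the space-based optical sensors complement this by catching the in-cloud flashes that ground networks can miss.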

Researchers have also discovered different and strange iterations of lightning. For example, so-called dark lightning, powerful bursts of gamma rays, can smash through the upper regions of Earth’s atmosphere and into outer space. This has implications for the functioning and longevity of satellites used for communication, military reconnaissance and GPS.

Biggest riddle

Other surprising and mysterious sorts of lightning are elves and sprites — colourful ultrafast bursts of electricity that dance above clouds into the upper atmosphere. “We’re getting pretty good at saying what lightning does, but we’re still pretty bad at saying how it does it,” Joseph Dwyer, a professor of gamma ray astronomy at the University of New Hampshire, said.

The current thinking is that ice particles in different forms within thunderstorms bump up against one another during updraughts and transfer charges. The lighter particles get positively charged and migrate to the top of the cloud, while the negatively charged, heavier particles drop to the bottom. The negative buildup at the bottom of the cloud discharges to the positively charged ground below, or upward or sideways, depending on the easiest path for release.

The same thing can happen with the accumulated positive charges if a certain threshold is met for discharge. “The problem is people have been sending up balloons and airplanes into thunderstorms for decades, and the measured electric fields are nowhere near big enough to create a spark,” Joseph said. “So how does lightning get started inside thunderstorms? It’s one of the biggest mysteries in the atmospheric sciences.”


The manipulation of macrophages

Manohar G M, 17 January 2017, DHNS

protectors Macrophages scout for foreign pathogens, engulf them and provide us with innate immunity. REPRESENTATIVE IMAGE

In the movie Terminator 3: Rise of the Machines, the character Terminatrix (a humanoid robot) is able to manipulate cyborgs to work against humans for her own advantage. Now, scientists have discovered that some strains of bacteria can do the same to some of our cells.

Mycobacterium tuberculosis, the bacterium that causes tuberculosis, is one such. It manipulates macrophages, a type of white blood cell that hunts and engulfs invading pathogens, to act as bacterial reservoirs and provide a survival niche. This niche not only provides the bacteria with nutrients, but also helps them evade the normal immune response. In a recent study, a team of scientists from the Indian Institute of Science (IISc), Bengaluru, has explored the mechanism behind the manipulation of macrophages by this bacterium.

Macrophages scout for foreign pathogens, engulf them and provide us with innate immunity. However, when specific strains of Mycobacterium infect these macrophages, they reprogramme these cells to act as safe havens and to obtain nourishment from them. These modified macrophages are called foamy macrophages and are found in the granulomas of the lungs of infected individuals. “Unlike other bacteria, Mycobacterium tuberculosis is not explicitly pathogenic, but it can lie dormant. Even today, it is one of the most dreaded pathogens with 6.1 million individuals newly infected with TB and 1.4 million dead due to the disease in 2015 alone,” explains Kasturi Mahadik, a research scholar at the Department of Microbiology and Cell Biology, IISc.

Molecular mechanism
The researchers have worked out the molecular mechanism involved in the generation of foamy macrophages. The elucidation of this pathway could generate new and effective targets for drug development to cure tuberculosis. The study found that the bacterium interacts with a receptor found on the membrane of macrophages called TLR2 (Toll-like receptor 2).

These receptors recognise foreign substances and pass on appropriate signals to the cells of the immune system through a signalling pathway called the NOTCH1 pathway. “While viruses are legendary host modifiers, bacteria have been known to manipulate the host epigenetic machinery. Host genes involved in cell cycle progression, cell ageing, survival, inflammation and immunity are important targets for such epigenetic control,” remarks Kasturi.

Signalling pathways govern basic activities of cells and coordinate cell actions by allowing them to sense or perceive changes in their microenvironment and bring about necessary changes in activities like gene expression. The signal from outside the cells is transmitted via a chain of mediators, the end point of which may culminate in the expression of certain genes, whose products are required to respond to the changes in the microenvironment.

Foamy macrophages contain lipid bodies that provide nutrients and anti-inflammatory mediators to help the bacteria evade our normal immune response. Genes involved in lipid biosynthesis and lipid droplet synthesis are “regulated” or “turned on” for the purpose. Conventionally, gene regulation is brought about by a protein acting as an activator or a repressor of gene function.

Condensing or relaxing
The study found that either condensing or relaxing a chromosome regulates the genes required for the generation of foamy macrophages. Genes reside on chromosomes, and when chromosomes tightly coil up, they are not free to be transcribed and are therefore switched off. On the other hand, a relaxed and free chromosome can be bound by positive activators that assist in gene expression, and its genes can be switched on.

The DNA in the chromosome is wound around proteins called histones. The action of a methylase enzyme at certain places on histones leads to tightening of the chromosome and silencing of its genes, while the action of a demethylase enzyme can remove the methyl groups from the histones and relax the chromosome for gene expression.

Another important finding of the study is the role of a demethylase enzyme, namely JMJD3 (Jumonji Domain containing protein 3), which demethylates histones and thereby brings about the expression of the above-mentioned genes.

JMJD3 is repressed by a complex of proteins, and one of them, called MINT/spen, was not found to be produced in foamy macrophages. Its absence allows JMJD3 to switch on the genes involved in foamy macrophage development.

This research is a key insight that could drive the direction for drug development against TB. “Study of factors leading to reactivation from latent TB into active TB could be vital to completely eradicate the disease. Knowledge of mycobacteria governed epigenomics would also aid the understanding of pathologies associated with the disease progression. Further, mycobacterial infection results in a predisposition to several other infections and it would be fascinating to note the impact of epigenetic factors during co-infections,” signs off Kasturi.

(The author is with Gubbi Labs, a Bengaluru-based research collective)


Radio bursts traced to faraway galaxy

Dennis Overbye, 17 January 2017, The New York Times

long way out Radio waves arrive on Earth dispersed or spread out in time by wavelength. Representative image

Astronomers have traced a series of brief, enigmatic bursts of radio waves to a galaxy far, far away and indeed a long time ago — some three billion years or so. But as much as you might be hoping or dreading it to be true, this is probably not E.T.

“We’ve joked about spaceship battles and death stars blowing up, but we think we can explain it with ordinary physics,” said Shami Chatterjee, a Cornell University astronomer. Shami is the lead author of a paper published in Nature this month that details the search for the source of the radio waves known as “fast radio bursts” — intense pulses of radiation from the sky lasting only a few milliseconds.

These have been disappointing times for those yearning for some alien direction from Out There. Last summer, Russian astronomers reported that they had recorded a promising-sounding signal from a star in the Hercules constellation, but when it became public, they dismissed it as a freak bit of random radio noise, the astrophysical equivalent of a cosmic butt dial. More recently, searches for radio signals from a set of stars with anomalous spectral features, and from another star known as Tabby’s Star that has shown suspicious variations in its light, seem to have come up empty.

Unexpected discovery?
So at least for now, the skies appear to be bereft of intelligence. But the new results from the fast radio burster, known as 121102 — after November 2, 2012, the date it was first observed — need not discourage any aficionados of cosmic mystery. Most likely, Shami said in a telephone interview, the bursts could be caused by weird reactions between a neutron star — the dense spinning magnet left behind by a supernova explosion — and the debris from that explosion.

Or perhaps from some unexpected quirk of a supermassive black hole in the centre of the galaxy, a dwarf assemblage of stars some three billion light-years away in the constellation Auriga. There are problems with both explanations, however, he added.

Fast radio bursts have led astronomers on a merry chase ever since they were discovered in 2007 in data recorded earlier by the Parkes radio telescope in Australia. Because they are so short and until recently have never been seen to repeat, these phantoms have been hard for astronomers to study. Usually, astronomers notice them after the fact. Moreover, radio telescopes have poor angular resolution, making it impossible to determine exactly what star or distant galaxy they came from.

The radio emissions themselves, Shami said, resemble the blasts from pulsars — the spinning neutron stars that emit clocklike pulses of radiation and whose discovery in 1968 did indeed elicit speculation about little green men. But the radio waves arrive on Earth dispersed or spread out in time by wavelength, which implies that they have travelled from far outside our galaxy.
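The wavelength-dependent delay mentioned above is quantified by the burst's "dispersion measure", the column density of free electrons along the line of sight; a larger dispersion measure means more intervening plasma and hence a longer path. A rough sketch of the standard cold-plasma delay formula, with illustrative numbers (the frequencies and dispersion measures here are assumptions for demonstration, not values from the paper):

```python
def dispersion_delay_ms(dm, f_low_ghz, f_high_ghz):
    """Extra arrival delay (ms) of the lower frequency relative to the
    higher one, for a dispersion measure dm in pc cm^-3.
    Standard cold-plasma dispersion: delay ~ 4.149 ms * DM * f_GHz^-2."""
    return 4.149 * dm * (f_low_ghz ** -2 - f_high_ghz ** -2)

# Illustrative comparison across a 1.2-1.5 GHz observing band:
# a line of sight through the Milky Way alone might contribute a DM of a
# few tens; a burst with a DM of several hundred must be extragalactic.
print(dispersion_delay_ms(50.0, 1.2, 1.5))   # galactic-scale DM: tens of ms
print(dispersion_delay_ms(550.0, 1.2, 1.5))  # FRB-scale DM: hundreds of ms
```

A dispersion measure far exceeding anything the Milky Way could supply along that direction is exactly the evidence that the bursts travelled from far outside our galaxy.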

That great distance also implies that they are enormously more powerful than pulsars in our galaxy, adding to the mystery of what they are and raising the question of why they are not seen within our own galaxy, the Milky Way. In all, 18 of the fast bursters have been spotted since they were first recognised in 2007 — a small number. If extrapolated to the whole sky, that means 5,000 to 10,000 of these flashes should happen every day.

Where are they? Lacking much evidence to the contrary, astronomers theorised that the bursts resulted from apocalyptic events like collisions of neutron stars. At one point, Shami said, there were more theoretical models of the bursts than observed bursts. “Many things go bang,” he explained.

The big break came when the burst known as 121102 repeated itself. Subsequent observing campaigns with the Very Large Array of telescopes in New Mexico and the Arecibo radio telescope in Puerto Rico, USA, recorded a total of nine bursts over 83 hours of observing time and a terabyte of data during a six-month period in 2016. That meant that whatever was causing 121102, at least, was not destroyed in the process. “We definitely know, for this one case anyway, the radio burst is not cataclysmic,” Shami said.

In addition, papers also have been submitted to the Astrophysical Journal Letters by two other groups led by Shriharsh Tendulkar of McGill University, Canada and by Benito Marcote of the Joint Institute for VLBI in Europe, in Dwingeloo, the Netherlands.

Massive stars
Following up on the Very Large Array observations, Benito’s team on the European VLBI network was able to pinpoint the location of the burst to a faint dwarf galaxy in the Auriga constellation. Shriharsh and his colleagues then used the eight-metre Gemini North telescope on Mauna Kea in Hawaii to observe the galaxy and measure its distance. That distance, three billion light-years, confirmed the original supposition that the fast bursts come from far, far away. “The host galaxy is puny,” Shriharsh said during the news conference in Grapevine. That dwarf galaxy is only a hundredth of the mass of the Milky Way.

If this burster, 121102, is indeed typical of the bunch, the astronomers said, this might be a clue. Such galaxies are typically home to some of the most violent events in the universe, Shami said, things that go seriously boom in the night like certain kinds of gamma-ray bursts and superluminous supernova explosions that result in extremely magnetic pulsars known as magnetars.

These are the signatures of massive stars, of the deaths of massive stars, he said. But this only raises more questions. “The only one that repeats is from three billion light-years,” Shami mused. “Where are all the nearby ones?” he asked, noting that they should be even brighter, saturating our radio receivers. “It’s very curious,” he said.


Absolutely every bit of our galaxy

Nicholas Bakalar and Kenneth Chang, 24 January 2017, The New York Times

GALAXY'S MASS The methods of measurement of the mass of the Milky Way involve complex mathematical and statistical techniques.

Astronomers have arrived at what they believe to be the most accurate measure yet of the mass of the Milky Way: about 4.8 x 10^11 times the mass of the sun, or ‘solar masses’, to use a standard unit of mass in astronomy. This comes to about 9.5 x 10^41 kg — that is, 95 followed by 40 zeros. The number, of course, is inexact, as obviously no direct measure of all the billions of stars and other objects in the Milky Way could be taken.

But in a paper to be published in The Astrophysical Journal, scientists used methods of measurement that involve complex mathematical and statistical techniques called hierarchical Bayesian analysis, as well as direct measurements of the velocity of globular clusters, the tightly packed spherical groups of 10,000 to 1,00,000 old stars that move through the galaxy. Just as the mass of the sun can be calculated by measuring its gravitational pull on Earth, the mass of the Milky Way can be calculated by measuring its gravitational pull on the globular clusters.
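The underlying physics is ordinary Newtonian gravity: for a tracer such as a globular cluster on a roughly circular orbit, the mass enclosed within its orbital radius is M = v²r/G. A back-of-envelope sketch with illustrative numbers (the orbital speed and radius below are assumptions for demonstration, not the paper's data):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # one solar mass, kg
KPC = 3.086e19    # one kiloparsec, m

def enclosed_mass_solar(v_km_s, r_kpc):
    """Mass (solar masses) enclosed within radius r for a tracer on a
    circular orbit of speed v, from M = v^2 r / G."""
    v = v_km_s * 1e3  # km/s -> m/s
    return v * v * (r_kpc * KPC) / G / M_SUN

# Illustrative: a globular cluster orbiting at ~200 km/s, 50 kpc out,
# already implies an enclosed mass of a few times 10^11 solar masses.
print(f"{enclosed_mass_solar(200, 50):.2e}")

# Sanity check on the article's unit conversion:
print(f"{4.8e11 * M_SUN:.2e} kg")  # ~9.5e41 kg, as quoted
```

The Bayesian machinery in the paper essentially does this on many clusters at once while propagating the measurement uncertainties, rather than trusting any single orbit.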

The estimate includes everything within 125 kiloparsecs of the centre of the galaxy — that is, within 3.9 x 10^18 km. And ‘everything’ is not just stars: there are planets, moons, gases, dust and other objects, not to mention the immense amount of dark matter. Dark matter cannot be detected directly, but its mass can be inferred from its gravitational effect on other objects.

“The biggest thing is that we’re including measurement uncertainties that are carried through the analysis,” said the lead author, Gwendolyn M Eadie, a doctoral candidate at McMaster University in Hamilton, Ontario, Canada. “So we have a good handle on the uncertainty in our mass estimate. The low end is 4.0 x 10^11 solar masses, and the high end is 5.8 x 10^11.”

Gwendolyn said that the findings were important from an astronomer’s perspective. “The methods we’ve developed could be important in other studies that do other kinds of research,” she said. “These methods have been used in other fields, but they’re starting to become more useful in astronomy now that we have computers that can do these complex calculations.” What does it mean for the rest of us? “It just satisfies curiosity about the world we live in,” she said.

When Venus smiled for a few days

For a few days, Venus smiled — sideways. When Japan’s Akatsuki spacecraft pulled into orbit around Venus in December 2015 and turned on its instruments, it almost immediately discovered a bow-shape feature in the atmosphere stretching 6,000 miles, almost pole to pole — a sideways smile.

More remarkably, while Venus’ winds blow at speeds up to 250 mph and clouds whip around the planet every four days, this gargantuan sideways smile did not move, but remained fixed above the ground for four days. Because of Akatsuki’s large looping orbit, the spacecraft could not make more observations for a month.

When the spacecraft looked at the same region again, the smile had disappeared. Except for a few brief glimmers in April and May last year, the smile has not returned. In a recent paper published in the journal Nature Geoscience, scientists working on the mission describe their observations in detail and suggest it was a “gravity wave” — a disturbance in the winds caused by the underlying topography that propagated upward. The bow-shape arc appeared above Aphrodite Terra, a highland region about the size of Africa that rises up to three miles from the surface.

Scientists working on data from the European Space Agency’s Venus Express reported finding a similar disturbance in the atmosphere. The authors of the new paper said that numerical simulations provided preliminary support for the idea, but that they still could not explain how the gravity wave forms and propagates in the lower atmosphere. Or why the prominent smile was seen in December 2015 and not since. Scientists also cannot yet answer the big question Akatsuki was sent to investigate: Why do the winds blow so fast on Venus to begin with?



A cure for neglected tropical diseases

Siddharth Kankaria Jan 24 , 2017

SILENT KILLER Statistics indicate that around 0.9 to 1.3 million new cases of Leishmaniasis surface every year.

How many tropical diseases do you know of? Malaria, dengue and sleeping sickness immediately come to mind. Maybe leprosy, if you think hard enough. But many of us may not have heard of Cutaneous Leishmaniasis, a less dangerous but much more prevalent cousin of kala azar (black fever). Cutaneous Leishmaniasis (CL) is caused by protozoans of the genus Leishmania, which are transmitted by the bite of infected female sandflies.

Dr Shailza Singh and her team from the National Centre for Cell Science (NCCS), Pune, have been studying this disease extensively for years. In a recent collaborative study with Dr Sudipta Basu and his team from the Indian Institute of Science Education and Research (IISER), Pune, the researchers have discovered a new lead compound to help combat this neglected tropical disease. Often called white leprosy, CL is known to cause skin lesions and ulcers on exposed parts of the body, while often leaving behind life-long scars and severe disabilities. According to statistics released by the World Health Organisation, around 0.9 to 1.3 million new cases of Leishmaniasis surface every year, leading to up to 20,000 to 30,000 deaths annually. Conventional methods of treating CL include administration of antimony-based compounds or oral, topical or liposomal preparations of anti-fungal compounds. However, these approaches have increasingly been proven ineffective, insufficient or too expensive to use.

The problem of causative protozoans becoming resistant to most anti-leishmanial drugs further exacerbates the situation. Thus, it has become paramount to identify new compounds that could be used to treat leishmaniasis. Several natural compounds such as alkaloids, phenolic compounds, terpenes and saponins are known to have anti-leishmanial properties. Recent studies have found a class of anti-fungal compounds called ‘coumarins’ that possess anti-protozoal properties.

Testing therapeutic efficacy
In order to take this lead forward, researchers from NCCS comprehensively surveyed several coumarin derivatives for their therapeutic efficacy against leishmaniasis, and in this pursuit, also collaborated with the team at IISER Pune. “Our goal was to explore the possibility of using coumarin derivatives as anti-leishmanial agents. Towards this, we have designed a set of coumarin derivatives using computer aided drug designing, which could serve as probable drug candidates for the treatment of leishmaniasis,” explains Dr Shailza.

The research team’s initial screening of coumarin derivatives led to the identification of some 1000-odd compounds, which were ultimately narrowed down to five promising drug candidates, based on various judging criteria like their three-dimensional shape, size, and chemical properties. After this, the scientists wanted to test these five drug candidates for their biological activity and compatibility. They performed various microbiological experiments in this regard, and arrived at one of the compounds which showed the best anti-leishmanial properties, nicknamed ‘C2’ (compound 2). C2 was shown to cause a reduction in size and mobility of the parasitic protozoan cells, which are known to cause leishmaniasis. In addition, C2 was also shown to attenuate these protozoal cells’ ability to infect macrophages, a specialised infection-fighting cell found in the immune system of higher animals.

Further, in order to check the effect of C2 on lesions manifested during Leishmaniasis, the scientists took experimental mice infected with cutaneous leishmaniasis lesions, and then treated them with oral doses of C2. To their surprise, the leishmanial lesions in mice reduced in size by almost 50%. It was thus evident that C2 exhibited the highest anti-leishmanial properties amongst the shortlisted drug candidates both in vitro and in vivo. In order to further enhance the solubility of C2 inside the body, the scientists designed a tiny nanometre scale drug carrier made up of lipids. This lipid-based drug carrier ensures that the drug is released in a much more sustained and prolonged fashion inside the body.

Effective strategy
In further experiments with protozoan cell cultures of Leishmania, the scientists used these lipid-based drug carriers to deliver C2 inside the Leishmania cells. Their results successfully demonstrated that a sustained release of C2 within these Leishmania cells triggered them to commit cellular suicide. Such instances of cellular suicide are generally characterised by an observable decrease in the integrity of mitochondrial membranes and can be quantified using mitochondrial dyes.

This cellular suicide in Leishmania cells can be viewed as an instance of programmed cell death triggered by toxic external factors — the compound C2 in this case. Such a targeted killing of protozoal cells could be a very effective strategy of combating the spread of leishmaniasis infections within the host. Backed by these conclusive results, the researchers now feel that C2 could serve as a promising drug candidate, which, with further modification and development, can be converted into a potent anti-leishmanial compound.

(The author is with Gubbi Labs, a Bengaluru-based research collective)


Shedding light on the human genome

Natalie Angier Jan 24 , 2017 The New York Times

MAJOR ROLE TADs have profound implications for understanding a raft of heretofore mysterious diseases.

They said it was their family curse: a rare congenital deformity called syndactyly, in which the thumb and index finger are fused together on one or both hands. Ten members of the extended clan were affected, and with each new birth, they told Dr Stefan Mundlos of the Max Planck Institute for Molecular Genetics, Germany, the first question was always: “How are the baby’s hands? Are they normal?” The family, under promise of anonymity, is taking part in a study by Stefan and his colleagues of the origin and development of limb malformations. And while the researchers cannot yet offer a way to prevent syndactyly, or to entirely correct it through surgery, Stefan has sought to replace the notion of a family curse with “a rational answer for their condition,” he said.

The scientists have traced the family’s limb anomaly to a novel class of genetic defects unlike any seen before, a finding with profound implications for understanding a raft of heretofore mysterious diseases. The mutations affect a newly discovered design feature of the DNA molecule called topologically associating domains, or TADs. It turns out that the vast informational expanse of the genome is divvied up into a series of manageable, parochial and law-abiding neighbourhoods with strict nucleic partitions between them — each one a TAD.

Folding protocol
By studying TADs, researchers hope to better fathom the deep structure of the human genome, in real time and three dimensions, and to determine how a quivering, mucilaginous string of some three billion chemical subunits, which would measure more than six feet long if stretched out, can nonetheless be coiled and compressed down to four-10,000ths of an inch, the width of a cell nucleus.
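The scale of that compression can be checked with quick arithmetic from the article's own figures: six feet of DNA folded into a nucleus four ten-thousandths of an inch wide is a linear compaction of order a hundred thousand to one.

```python
# The article's figures: "six feet" of DNA packed into a nucleus
# "four-10,000ths of an inch" across.
dna_length_m = 6 * 0.3048            # six feet in metres (~1.83 m)
nucleus_width_m = 4e-4 * 0.0254      # 4/10,000 inch in metres (~10 micrometres)

print(nucleus_width_m)                 # ~1.0e-5 m, i.e. about 10 micrometres
print(dna_length_m / nucleus_width_m)  # linear compaction of order 10^5
```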

“DNA is a superlong molecule packed into a very small space, and it’s clear that it’s not packed randomly,” Stefan said. “It follows a very intricate and controlled packing mechanism, and TADs are a major part of the folding protocol.” For much of the past 50 years, genetic research has focused on DNA as a kind of computer code, a sequence of genetic “letters” that inscribe instructions for piecing together amino acids into proteins, which in turn do the work of keeping us alive.

Most of the genetic diseases deciphered to date have been linked to mishaps in one or another protein recipe. Scanning the DNA of patients with Duchenne muscular dystrophy, for example, scientists have identified telltale glitches in the gene that encodes dystrophin, a protein critical to muscle stability. The mutant product that results soon shatters into neurotoxic shards.

Yet, researchers soon realised there was much more to the genome than the protein codes it enfolded. “We were caught up in the idea of genetic information being linear and one-dimensional,” said Job Dekker, a biologist at the University of Massachusetts Medical School, USA. For one thing, as the sequencing of the complete human genome revealed, the portions devoted to specifying the components of hemoglobin, collagen, pepsin and other proteins account for just a tiny fraction of the whole, maybe three per cent of human DNA’s three billion chemical bases. And there was the restless physicality of the genome, the way it arranged itself during cell division into 23 spindly pairs of chromosomes that could be stained and studied under a microscope, and then somehow, when cell replication was through, merged back together into a baffling, ever-wriggling ball of chromatin — DNA wrapped in a protective packaging of histone proteins.

Through chromosome conformation studies and related research, scientists have discovered the genome is organised into about 2,000 jurisdictions. As with city neighbourhoods, TADs come in a range of sizes, from tiny walkable zones a few dozen DNA subunits long to TADs that sprawl over tens of thousands of bases, where you’re better off taking the subway. TAD borders serve as folding instructions for DNA.

Different domains
TAD boundaries also dictate the rules of genetic engagement. Scientists have long known that protein codes are controlled by an assortment of genetic switches and enhancers — noncoding sequences designed to flick protein production on, pump it into high gear and muzzle it back down again. The new research indicates that switches and enhancers act only on those genes, those protein codes, stationed within their own precincts. “Genes and regulatory elements are like people,” Job said. “They care about and communicate with those in their own domain, and they ignore everything else.”

What exactly do these boundaries consist of? Scientists are not entirely sure, but preliminary results indicate that the boundaries are DNA sequences that attract the attention of sticky, roughly circular proteins called cohesin and CTCF, which adhere thickly to the boundary sequences like insulating tape. Between those boundary points, those clusters of insulating proteins, the chromatin strand can loop up and over like the ribbon in a birthday bow, allowing genetic elements distributed along the ribbon to touch and interact with one another. But the insulating proteins constrain the movement of each chromatin ribbon, said Richard A Young of the Whitehead Institute for Biomedical Research, USA, and keep it from getting entangled with neighbouring loops — and the genes and regulatory elements located thereon.

The best evidence for the importance of TADs is to see what happens when they break down. Researchers have lately linked a number of disorders to a loss of boundaries between genomic domains, including cancers of the colon, esophagus, brain and blood. In such cases, scientists have failed to find mutations in any of the protein-coding sequences commonly associated with the malignancies, but instead identified DNA damage that appeared to shuffle around or eliminate TAD boundaries. As a result, enhancers from neighbouring estates suddenly had access to genes they were not meant to activate.

Reporting in the journal Science, Richard and his colleagues described a case of leukemia in which a binding site for insulator proteins had been altered not far from a gene called TAL1, which if improperly activated is known to cause leukemia. Now that researchers know what to look for, he said, TAD disruptions may prove to be a common cause of cancer. The same may be true of developmental disorders — like syndactyly.



Saturday, 21 January 2017

The spectre of superbugs

Sabrina Tavernise & Denise Grady June 07, 2016, The New York Times

Immunity The gene for resistance to colistin was first found in China. Representative image

Military researchers have identified the first patient in the United States to be infected with bacteria that are resistant to an antibiotic that was the last resort against drug-resistant germs. The patient is well now, but the case raises the spectre of superbugs that could cause untreatable infections, because the bacteria can easily transmit their resistance to other germs that are already resistant to additional antibiotics. The resistance can spread because it arises from loose genetic material that bacteria typically share with one another.

“Think of a puzzle,” said Dr Beth Bell, of the Centers for Disease Control and Prevention, USA. “You need lots of different pieces to get a result that is resistant to everything. This is the last piece of that puzzle, unfortunately, in the United States. We have that genetic element that would allow for bacteria that are resistant to every antibiotic.”

The bacteria are resistant to a drug called colistin, an old antibiotic that in the United States is held in reserve to treat especially dangerous infections that are resistant to a class of drugs called carbapenems. If carbapenem-resistant bacteria, called CRE, also pick up resistance to colistin, they will be unstoppable. “This is huge,” said Dr Lance Price, a researcher at George Washington University. “We are one step away from CRE strains that cannot be treated with antibiotics. We now have all the pieces in place for it to be untreatable.”

The gene for resistance to colistin was first found in China, where the drug is used in pig and poultry farming. Researchers reported its discovery there in November. It has also been found in the intestine of one pig in the United States. CRE is still relatively rare, causing just 600 deaths a year, but by 2013, researchers had identified it in healthcare facilities in 44 states. Dr Thomas R Frieden, director of the Centers for Disease Control and Prevention, often calls it the “nightmare superbug,” because it is resistant to all but one antibiotic — colistin.

“We risk being in a post-antibiotic world,” Dr Thomas said. “That wouldn’t just be urinary tract infections or pneumonia — that could be for the 600,000 patients a year who need cancer treatment.” He added, “The medicine cabinet is empty for some patients.”

The colistin resistance in the United States came to light when a 49-year-old woman, who Dr Beth said was “connected... to the military,” was treated for a urinary infection at a military clinic in Pennsylvania. Because her urine culture had unusual results, the sample was sent to the Walter Reed National Military Medical Center, which identified the drug resistance. The bacteria, though resistant to colistin and some other antibiotics, were not resistant to carbapenems. Doctors there published a report on the case in a medical journal.

Sample analysis
Patrick McGann, a scientist at the Walter Reed Army Institute of Research and lead author of the paper, said researchers had only started analysing samples a few weeks ago. They tested samples from 6 patients, and one of them was the woman’s. Dr Beth said researchers did not know how the patient contracted the resistant bacteria. The microbes have been found in people in Asia and Europe, but the patient had not traveled during the past 5 months. It is possible that she contracted the bacteria from food, or from contact with someone else who was infected, Dr Beth said. Public health workers will interview the woman and will probably test her family members and other close contacts for the bacteria, she said.

Infectious disease doctors have long warned that overuse of antibiotics in people and in animals puts human health at risk by reducing the power of the drugs, some of modern medicine’s most prized jewels. About 2 million Americans fall ill from antibiotic-resistant bacteria every year and at least 23,000 die from those infections.

The CRE germs usually strike people receiving medical care in hospitals or nursing homes, including patients on breathing machines or dependent on catheters. Healthy people are rarely, if ever, affected.

But the bugs attack broadly, and the infections they cause are not limited to people with severely compromised immune systems. CRE was believed to be the cause of infections from improperly cleaned medical scopes that led to the death of 2 people at Ronald Reagan UCLA Medical Center in California last year.

Detecting waves

C Sivaram, June 07, 2016

Hint: We have had strong indirect evidence for gravitational waves from binary neutron stars for the past few decades.

On February 11 this year, with much fanfare, the Laser Interferometer Gravitational Wave Observatory (LIGO) announced the first direct detection of gravitational waves, almost exactly a century after Einstein first predicted their existence. LIGO found the predicted signature of the merger of 2 black holes: a wiggly line that gradually increased in height and then dropped, in agreement with the theoretical solutions. The merger occurred a billion years ago, leading to the formation of a 60 solar-mass black hole.

Gravitational waves go through everything in their path and are not affected by the medium. This implies that the information carried by a gravitational wave is precisely the same as when the astronomical system sent it out. This kind of signal, which is like a perfect, faithful messenger, is rare in astronomy. Light and other electromagnetic radiation are often distorted and diminished by the various media they pass through. For instance, we cannot see light from certain regions of our own galaxy because of the dust that is in the way. However, the intensity of gravitational waves is very weak and they cannot be generated in the laboratory.

It’s only when we have binary systems of compact objects like neutron stars and black holes revolving close to each other (with short periods) that the gravitational wave emission becomes significant. When 2 neutron stars (stars with radii < 30 km) merge, the final orbit just before the collision can generate an intense pulse of gravitational waves at a frequency of around a kilohertz, lasting around a millisecond.
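As a rough check on that kilohertz figure, the orbital frequency just before contact follows from Kepler’s law, and quadrupole gravitational radiation is emitted at twice the orbital frequency. The sketch below uses illustrative numbers (two 1.4 solar-mass stars about 30 km apart), not LIGO data:

```python
import math

# Rough estimate of the gravitational wave frequency from two neutron
# stars just before merger. All input numbers are illustrative assumptions.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

m_total = 2 * 1.4 * M_SUN   # two typical 1.4 solar-mass neutron stars
r = 30e3                    # separation at contact, ~30 km (order of magnitude)

# Keplerian orbital angular frequency: omega = sqrt(G * M / r^3)
omega = math.sqrt(G * m_total / r**3)
f_orbital = omega / (2 * math.pi)

# Quadrupole gravitational radiation comes out at twice the orbital frequency
f_gw = 2 * f_orbital
print(f"estimated gravitational wave frequency: {f_gw:.0f} Hz")
```

With these assumptions the estimate comes out near 1.2 kHz, consistent with the kilohertz figure quoted above.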

Detectors like LIGO can be affected even if this kind of merger takes place in the Virgo cluster of galaxies, 50 million light years away. We have had strong indirect evidence for gravitational waves from binary neutron stars (the binary pulsar) for the past few decades. Their orbital periods shorten as they approach each other, in accordance with the theoretical formula for gravitational wave emission from such a system. When such objects merge, the signal would be much stronger. LIGO did not detect any signals from merging neutron stars.

Black holes, on the other hand, being more massive objects, emit more gravitational radiation when they merge, and this is the signal which LIGO saw, or rather heard, as the dominant frequency is several hundred hertz, and when the electrical signal is converted into sound, it is well within our hearing range!

Inference
The conclusion was that 2 massive black holes merged a billion light years away to form a single 60 solar-mass black hole. LIGO would ultimately be part of a network of 5 detectors in different countries and continents (including ones in India and Japan). Simultaneous detection would confirm that a signal is truly celestial.

However, LIGO detectors can pick up only a limited range of frequencies. Each arm of LIGO is 4 km long; laser beams bounced between mirrors can detect changes in path length to one part in 10^22 when a gravitational wave passes through. But picking up the frequencies richest in gravitational waves (less than a millihertz), caused by mergers of the supermassive black holes found in most galactic centres, requires arm lengths of hundreds of thousands of kilometres rather than the 4 km of LIGO. This is achieved more easily in spaceborne detectors.
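To get a feel for that sensitivity, a strain of one part in 10^22 across a 4 km arm corresponds to a length change far smaller than a proton. A minimal sketch of the arithmetic:

```python
# Displacement a gravitational wave of a given strain produces in a
# LIGO arm: delta_L = h * L.
h = 1e-22          # dimensionless strain, one part in 10^22
L = 4e3            # LIGO arm length in metres

delta_L = h * L
print(f"arm length change: {delta_L:.1e} m")

# For comparison, a proton is about 1.7e-15 m across, so this change
# is only a tiny fraction of a proton diameter.
proton_diameter = 1.7e-15
print(f"fraction of a proton diameter: {delta_L / proton_diameter:.1e}")
```

The arms shrink or stretch by about 4×10^-19 m, a few ten-thousandths of a proton diameter, which is why every terrestrial noise source matters so much.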

Space-based gravitational wave detectors can avoid fluctuations in the earth’s gravitational field, which can obscure signals. Moreover, LIGO’s sensitivity is limited by several noise sources (thermal noise from mirror suspensions, seismic activity at low frequencies, etc). As one expects a rich spectrum of gravitational waves at lower frequencies (unlike the kilohertz associated with LIGO), of a few millihertz down to a fraction of a millihertz, spaceborne detectors have been suggested.

The Laser Interferometer Space Antenna (LISA) mission was suggested by NASA and later the European Space Agency (ESA) some years ago. LISA will consist of 3 spacecraft positioned about 5 million km apart, flying in an equilateral triangle formation around the sun.

The system would trail about 20° behind the earth in its orbit. Laser beams bounced between the spacecraft will form 3 separate interferometers. It was first scheduled for 2015, but NASA later withdrew from it. The ESA is, however, going ahead. In the European E LISA, the spacecraft are separated by 2 million km; the project is expected to be launched in 2034. The distances between the spacecraft would be disturbed by passing gravitational waves, and this can be measured by methods such as Doppler ranging. It can detect the merger of supermassive black holes billions of light years away.

Meanwhile, the Chinese have proposed their own space-based detector called Taiji (which means supreme ultimate), which is more ambitious than E LISA. In the proposed Taiji scheme (or C LISA), the 3 spacecraft are separated by 3 million km, giving access to different frequencies. The proposed launch of the Chinese mission is in the year 2033, 1 year ahead of LISA. Another project, Tianqin, at half the cost of Taiji, proposes to put the 3 spacecraft much closer together, to specifically detect the gravitational waves emitted by the white dwarf binary HM Cancri.

In the meantime, the non-profit Simons Foundation (USA) would fund a new observatory to search for signs of stretching in the very early stages of the expanding universe. It would hunt for the so-called Big Bang gravitational waves. This hunt for primordial gravitational waves would get 40 million dollars from the Simons Foundation. Primordial gravitational waves are predicted by inflation models (of the early universe), which suggest that the universe ballooned exponentially, in a fraction of a second, to a large volume.

Curling patterns
Such a rapid, wrenching expansion would give rise to gravitational wave ripples in space-time that would have been imprinted in the Cosmic Microwave Background Radiation (CMBR). This would be in the form of so-called ‘B modes’, a curling pattern in the polarisation of the CMBR. There is a good chance of getting a definite answer as to whether such an inflation did occur, and in what way. About 2 years ago, there was a false alarm with the announcement of primordial gravitational waves by the Background Imaging of Cosmic Extragalactic Polarisation (BICEP) experiment at the South Pole.

They claimed that they saw strong evidence of ‘B modes’, but this later turned out to be contamination from dust in our galaxy. If the Simons observatory does discover the ‘B modes’, it would be so-called smoking-gun proof for inflation, but still would not convince everyone. Primordial gravitational waves have not been seen so far, so sceptics feel that this already rules out the most plausible versions of the inflation model. The Simons project will take off in a couple of years. The upgraded BICEP (at the South Pole) is underway.

In short, these future detectors are likely to open up the whole new area of gravitational wave astronomy, which has already seen the thrill of 2 black holes colliding and merging. Mergers of neutron stars, faraway supermassive black holes colliding and the gravitational waves generated in the first fraction of a second after the Big Bang are all waiting to be discovered.

The splendour and the nitty-gritty of genes

Abigail Zuger, June 07, 2016, The New York Times


Twins born minutes apart may be eerily similar or just as eerily different. Even if they are not identical, they share yards of genetic material, and yet one turns out large and one small, one strong and one weak, one a poet and the other a mumbler. We see these disparities in people all the time. And now we see them in a pair of books on the gene, published on the same day. Sharing yards of genetic material, both works aim to explain the power and mystery of the human genome, yet could not be more different.

A cancer physician at Columbia University, Dr Siddhartha Mukherjee dazzled readers with his Pulitzer-winning The Emperor of All Maladies in 2010. That achievement was evidently just a warm-up for his virtuoso performance in The Gene: An Intimate History, in which he braids science, history and memoir into an epic with all the range and biblical thunder of ‘Paradise Lost’.

Like all natural storytellers, Dr Siddhartha knows to begin at the beginning. To explain the gene, he heads back to the ancient Greeks who first struggled to explain how fathers and sons could be so similar and yet so different. To explain his own family, he must revisit partition-era India, where two of his father’s brothers succumbed to serious mental illness, a scourge that claimed one of his first cousins as well.

Those are some bad genes: They dog Dr Siddhartha’s heels as he writes, but like so many inheritances, they are complicated and disguised, muffled yet threatening. Slowly his story weaves past the usual landmarks: Mendel’s peas, Darwin’s finches, Watson and Crick’s rough cardboard models of component nucleotides. The alphabet of the gene creates the words and sentences of the genome. The eugenicists who glom on to fledgling science to purify the race contribute their own vicious story line, stretching from Oliver Wendell Holmes Jr to Adolf Hitler and beyond.

“Of all the sciences, biology is the most lawless,” he writes. Now we have become fluent enough in its language not only to understand what laws there are, but also to draft new ones. It is a familiar narrative with a big difference: Thanks to Dr Siddhartha’s remarkably clear and compelling prose, the reader has a fighting chance of arriving at the story of today’s genetic manipulations with an actual understanding of both the immensely complicated science and the even more complicated moral questions.

From the ground up
Looking to the future, Dr Siddhartha worries that “illness might progressively vanish but so might identity. Grief might be diminished but so might tenderness. Traumas might be erased but so might history.” All who contemplate such a world should understand its origins, and Dr Siddhartha’s readers will get it from the ground up.

Dr Steven Lipkin lacks Dr Siddhartha’s skills; even with a co-author, he struggles mightily in The Age of Genomes: Tales From the Front Lines of Genetic Medicine to explain the scientific concepts underlying his work. What Lipkin does have are patients. A clinical geneticist in New York City, he is among those charged with translating complicated principles into viable medical practice. In his work, Dr Steven meets patients like “Lydia,” who worries that because she looks like her mother she is fated to die young of the same ovarian cancer.

There is “Samantha,” who suffers from a dire genetic syndrome that precludes safe pregnancy. There is “Sean,” a surfer dude with a relatively mild genetic condition that is ruining his sex life. As in the rest of clinical medicine, though, nothing is entirely predictable for these patients. Lydia is relieved to hear she has little chance of developing her mother’s cancer, but expresses exactly zero interest in knowing more about her father’s early Alzheimer’s disease, a far more heritable condition.

Samantha would routinely be advised to avoid pregnancy at all costs, save for one small detail: She shows up for her first appointment with a healthy baby boy on one knee. Her one uneventful pregnancy complicates decision making about subsequent ones. As for Sean, a honed state-of-the-art treatment to prop up his mutant genes is available. Unfortunately, it is very expensive, and he is uninsured.

Insurance is a scientific concept that Dr Siddhartha does not include in his glorious tour of human genetics, but it governs the work of clinicians who must plead, bargain and appeal for indicated tests and treatments to be approved. The society that rewrites the genetic code, as Lipkin points out, is likely to become one in which the very expensive tools of genetic medicine forge yet one more barrier between rich and poor.

When Dr Steven sequences his own genome, he runs into another feature of the landscape Dr Siddhartha overlooks: the proliferation of cut-rate, poorly standardised services out there. Lipkin first springs for a cheap genotype and gets what he pays for in the form of a syndrome he knows perfectly well he doesn’t have. A better test corrects the error.

These stories from the trenches make it clear that clinical genetics will not be spared the misery-inducing features of the rest of medicine, in which the reigning slogan is “caveat emptor.” Let the poet sing his long, lovely epic; it is still the harried, inarticulate, much beleaguered guy in the white coat who will be cementing the transactions.


Hope for alternatives to painful biopsies

Andrew Pollack, Jun 14, 2016, The New York Times

Research: Aubrey Zapanta in the lab at Guardant Health, which performed a study of liquid biopsies. (Jim Wilson/NYT)

A blood test to detect cancer mutations produced results that generally agree with those of an invasive tumor biopsy, researchers reported, heralding a time when diagnosing cancer and monitoring its progression may become less painful and risky. The blood tests, known as liquid biopsies, represent one of the hottest trends in oncology. They take advantage of the fact that DNA fragments from tumours can be found in tiny amounts in the blood of patients with cancer.

Researchers hope that such tests can become alternatives to conventional tumour biopsies, in which a piece of the tumour is extracted by needle or by surgery — procedures that can have complications. The results of the study, the largest to date of a liquid biopsy test, give some reassurance that this might be possible. “I think this study really demonstrates the veracity of the liquid biopsy approach,” said Dr Philip Mack, director of molecular pharmacology at the University of California Davis Comprehensive Cancer Center.

The liquid biopsies are not used to diagnose cancer but rather to monitor disease progression or to detect genetic mutations in the tumour that could suggest which drug should be used to treat the disease. Just this week, the US Food and Drug Administration gave its first approval for such a test, one developed by Roche (pharmaceutical firm) to detect mutations in a particular gene.

Lung cancers with mutations in that gene are vulnerable to treatment with certain drugs, including Roche’s own Tarceva. Many liquid biopsy tests are being sold by other companies under rules that do not require FDA approval.

The study looked at the results of more than 15,000 liquid biopsies performed by Guardant Health, a Silicon Valley startup that is one of the leaders in the field. While many liquid biopsy tests now look for only a few mutations, Guardant’s test, which has a list price of $5,800, looks at mutations in 70 cancer-related genes.

The 15,000 samples came from the blood of people with various types of cancer, including lung, breast and colorectal. The researchers on the study, most of whom worked for Guardant, said the frequency and types of mutations found were similar to what is known from scientific literature.

For almost 400 patients, tumour biopsies were available, allowing for direct comparison to the blood test results from the same patient. For certain mutations that drive tumour growth, if a particular mutation was found in the blood, it was also found in the tumour 94-100% of the time.
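Concordance figures like these are essentially positive percent agreement: of the cases where the blood test found a given mutation, the fraction in which the tissue biopsy found it too. A minimal sketch with made-up counts (hypothetical, not the study’s data):

```python
def positive_percent_agreement(blood_positive: int, both_positive: int) -> float:
    """Share of blood-test-positive cases confirmed by the tissue biopsy."""
    if blood_positive == 0:
        raise ValueError("no blood-positive cases to compare")
    return 100.0 * both_positive / blood_positive

# Hypothetical counts for one driver mutation: the blood test flagged
# 50 patients, and the matched tumour biopsy confirmed 47 of them.
print(f"{positive_percent_agreement(50, 47):.0f}%")  # prints "94%"
```

The same arithmetic explains why resistance mutations score lower: if a mutation arose only after the tissue biopsy was taken, it counts against agreement even when the blood test is right.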

There was much less agreement for mutations that predict resistance to particular drugs. Those might have arisen only after the treatment started, so might not have been seen in the tumour biopsy, which is usually taken at the time of diagnosis.

One shortcoming of the liquid biopsy was that for about 15% of the patients overall, no tumour DNA was detected in the blood. “There are simply tumours that do not shed DNA into circulation at detectable levels, so we are bound to miss them,” said Mack, who has been a paid speaker for Guardant.

Dr Edward Kim, an expert on lung cancer mutations who was not involved in the study, said the results showed the liquid biopsy accuracy was “very good.” He said, however, that use of an actual tumour sample allows for a more thorough analysis, including more mutations than is possible with a blood sample.

“I’m not personally ready to give up tissue,” said Dr Edward, who is the chairman of solid tumour oncology at the Carolinas HealthCare System’s Levine Cancer Institute in Charlotte, North Carolina. “It’s still the gold standard.” Still, he said, there are times when a tissue biopsy cannot be obtained, and it is difficult to do second and third tissue biopsies on a patient. In those cases, he said, “I love the option of having the blood test available.”


Providing clues of functional roles

Savitha Sekhar, Jun 14, 2016
In Motion

Analysis: A tool called Normal Mode Analysis was used to study structural fluctuations in protein kinases.

It is said that our body houses over 25,000 different proteins. If we consider the body as a huge factory of sorts, then proteins are like the workers in it; they deal with security, communication, transportation, structural stability, maintenance, and every other role that one can envision. But unlike actual human workers, proteins are molecules, made up of units called amino acids, strung together like a long chain of beads of different materials, sizes and shapes. Protein sequences are just permutations and combinations of 20 different amino acids.

Prof N Srinivasan and his team at the Molecular Biophysics Unit, Indian Institute of Science (IISc), Bengaluru, have studied the interrelation between protein sequences, their structures and functions. It is the amino acid chains of proteins that fold upon themselves, to give a final 3D conformation that each protein molecule adopts. Except that these 3D conformations are not static. Protein molecules are in perpetual motion, which includes fluctuations in atomic positions, segmental motions and rigid body movement of compact sub-modules.

These dynamics are vital for the protein’s function at different levels. For instance, during these dynamics, when different proteins bump into each other, they talk. And when they talk, they function. Needless to say, protein dynamics and their structural fluctuations are of a much greater consequence than was realised, opening up a whole new dimension to understanding protein biology.

Protein kinase
In a recent Indo-French collaborative effort with Alexandre G de Brevern from the French Institute of Health and Medical Research, Paris, researchers at Prof Srinivasan’s lab compared structural fluctuations across a class of proteins called protein kinases. Protein kinases are a large super-family of enzymes involved in communication — sensing and responding to signals. Many diseases, including cancer, have been attributed to the erroneous functioning of specific kinase molecules. The researchers used a computational approach: instead of applying the traditional and expensive method of Molecular Dynamics Simulations, they used a tool called Normal Mode Analysis (NMA) to study structural fluctuations in protein kinases.

NMA calculates the patterns of motion in a variety of structures, including buildings and bridges and is used worldwide. Molecular biologists use NMA to study motions within various bio-molecules. In the case of proteins, this tool predicts the inherent mobility associated with the protein as a whole and also with each sub-part of the protein.

Kalaivani, a PhD student in the Molecular Biophysics Unit, IISc, the lead author of the study, ‘Conservation of structural fluctuations in homologous protein kinases and its implications on functional sites’ in the journal Protein, illustrates this point with an analogy: “For instance, when we look at a gear system, we can anticipate how a part would move, based on how it is linked to another. This is what NMA does — it looks at the structure of the protein, then it computationally determines, based on the connectivity between atoms, how each part of this molecule would move and how that affects the motion in its neighbouring parts, thus calculating how the protein molecule would move, as a whole.”
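The gear-system analogy can be made concrete with a toy version of NMA: model the molecule as beads joined by springs, write down the matrix of spring couplings (the Hessian), and the eigenvalues and eigenvectors of that matrix give the normal modes and their frequencies. The 3-bead chain below is a hypothetical illustration, not the kinase calculation in the paper; for this chain the eigenvalues are known to be 0, 1 and 3 in units of the spring constant, the zero mode being rigid-body translation:

```python
import math

# Toy normal mode analysis: a 1-D chain of 3 beads (unit mass) joined
# by unit springs. The stiffness (Hessian) matrix couples each bead to
# its neighbours.
K = [[ 1.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  1.0]]

def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def is_eigenvalue(lam, tol=1e-9):
    """lam is a normal-mode eigenvalue iff det(K - lam*I) = 0."""
    shifted = [[K[i][j] - (lam if i == j else 0.0) for j in range(3)]
               for i in range(3)]
    return abs(det3(shifted)) < tol

# Known spectrum of this chain: 0 (rigid translation), 1 and 3
# (internal vibrations); the mode frequency is omega = sqrt(lam / mass).
for lam in (0.0, 1.0, 3.0):
    assert is_eigenvalue(lam)
    print(f"eigenvalue {lam}: omega = {math.sqrt(lam):.3f}")
```

For a real protein, the same idea scales up: the Hessian is built from the 3D contacts between thousands of atoms (or residues), and the low-frequency eigenvectors describe the large collective motions NMA is used to compare across kinases.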

The researchers compared the structural fluctuations of a particular kinase with other related and unrelated kinases and showed that the inherent motions of these molecules are more similar across functionally similar and related protein kinases, than the not-so-closely-related kinases. The more closely related the 2 proteins, the more they tend to move in a similar manner.

Interestingly, the researchers could identify regions (specific sub-parts) of different related kinases that exhibited extremely similar structural fluctuations. These specific regions turned out to be the part of the proteins that were actually involved in the characteristic protein interactions and thus responsible for their functional activity.

“One of the best applications of this (study) is in drug design,” says Kalaivani. When trying to create a new drug, chemists usually try and design drugs that bind to aberrant protein kinases, thus curbing their deviant behaviour. They target regions with a sequence unique to the kinase-gone-wrong and design molecules that can bind only to that particular site. But, at times it is observed that the drug also interacts with unintended proteins, causing severe side effects. And then, the drug already having passed through many stages of research and development, finally fails during clinical trials.

How is it that a drug designed to bind to a specific, unique sequence of amino acids ends up interacting with a whole bunch of unintended molecules? The answer lies in 3-dimensional structures. Parts of 2 different protein molecules, despite not sharing the same sequence, can possibly have the same structure. And vice versa: the same sequence of amino acids in different proteins can sometimes form different 3D structures, depending on its neighbouring amino acids.

Apparently, completely different amino acid sequences within entirely different protein kinases can result in the formation of the same local 3D structure. So, even when we identify a unique sequence of amino acids that is found only on our protein of interest, it is possible that the drug, designed to specifically bind to one protein, may find itself binding to a number of unintended binding partners within the cell, causing an unexpected biological response, and resulting in unwanted side effects.

“It’s like a jigsaw puzzle,” explains Kalaivani, “although, a number of pieces can physically fit into a given spot, the picture is made perfect only when the correct piece is placed in its spot. Using NMA as a tool, we are adding another dimension in screening for drug binding sites. Over and above assessing the structural compatibility of the drug and its target protein, one can look for regions on the target protein showing a unique pattern of mobility and confidently begin with designing drugs that uniquely interact only with the desired target.”

(The author is with Gubbi Labs, Bengaluru)


No escape from black holes?

Dennis Overbye, Jun 14, 2016, The New York Times
EXIT POINT

Latest: New calculations by Stephen Hawking and other researchers suggest that essential properties of whatever falls into black holes may survive. (Brian Stauffer/NYT)

“A black hole has no hair.” That mysterious, koan-like statement by the theorist and legendary phrasemaker John Archibald Wheeler of Princeton has stood for half a century as one of the brute pillars of modern physics. It describes the ability of nature, according to classical gravitational equations, to obliterate most of the attributes and properties of anything that falls into a black hole, playing havoc with science’s ability to predict the future and tearing at our understanding of how the universe works.

Now it seems that the statement might be wrong. Recently Stephen Hawking wheeled across the stage in Harvard’s hoary, wood-paneled Sanders Theatre to do battle with the black hole. It is one of the most fearsome demons ever conjured by science, and one partly of his own making: a cosmic pit so deep and dense and endless that it was long thought that nothing — not even light, not even a thought — could ever escape.

But Hawking was there to tell us not to be so afraid. In a paper published recently in Physical Review Letters, Hawking and his colleagues Andrew Strominger of Harvard and Malcolm Perry of Cambridge University in England say they have found a clue pointing the way out of black holes.

‘Eternal prison’
“They are not the eternal prisons they were once thought,” Hawking said. “If you feel you are trapped in a black hole, don’t give up. There is a way out.” Black holes are the most ominous prediction of Einstein’s general theory of relativity: Too much matter or energy concentrated in one place would cause space to give way, swallowing everything inside like a magician’s cloak.

An eternal prison was the only metaphor scientists had for these monsters until 40 years ago, when Hawking turned black holes upside down — or perhaps inside out. His equations showed that black holes would not last forever. Over time, they would ‘leak’ and then explode in a fountain of radiation and particles. But his calculation violated a tenet of modern physics: that it is always possible in theory to reverse time, run the proverbial film backward and reconstruct what happened in, say, the collision of 2 cars or the collapse of a dead star into a black hole.

The universe, like a kind of supercomputer, is supposed to be able to keep track of whether one vehicle was a green pickup truck and the other was a red Porsche, or whether one was made of matter and the other antimatter. These things may be destroyed, but their ‘information’ — their essential physical attributes — should live forever. In fact, the information seemed to be lost in the black hole, according to Hawking, as if part of the universe’s memory chip had been erased. According to this theorem, only information about the mass, charge and angular momentum of what went in would survive.

Fortunately for historians, Hawking conceded defeat in the black hole information debate 10 years ago, admitting that advances in string theory, the so-called theory of everything, had left no room in the universe for information loss. At least in principle, then, he agreed, information is always preserved — even in the smoke and ashes when you, say, burn a book. With the right calculations, you should be able to reconstruct the patterns of ink, the text. Hawking paid off a bet with John Preskill, a Caltech physicist, with a baseball encyclopedia, from which information can be easily retrieved.

Not bald after all

But neither Hawking nor anybody else was able to come up with a convincing explanation for how that happens and how all this ‘information’ escapes from the deadly erasing clutches of a black hole. Indeed, a group of physicists 4 years ago tried to figure it out and suggested, controversially, that there might be a firewall of energy just inside a black hole that stops anything from getting out of, or even into, a black hole. The new results do not address that issue. But they do undermine the famous notion that black holes have ‘no hair’ — that they are shorn of the essential properties of the things they have consumed.

About four years ago, Strominger started noodling around with theoretical studies about gravity dating to the early 1960s. Interpreted in a modern light, the papers — published in 1962 by Hermann Bondi, MGJ van der Burg, AWK Metzner and Rainer Sachs, and in 1965 by Steven Weinberg, later a recipient of the Nobel Prize — suggested that gravity was not as ruthless as Wheeler had said. Looked at from the right vantage point, black holes might not be bald at all. The right vantage point is not from a great distance in space — the normal assumption in theoretical calculations — but from a far distance in time, the far future, technically known as ‘null infinity’.

“Null infinity is where light rays go if they are not trapped in a black hole,” Strominger tried to explain over coffee in Harvard Square recently. From this point of view, you can think of light rays on the surface of a black hole as a bundle of straws all pointing outward, trying to fly away at the speed of, of course, light. Because of the black hole’s immense gravity, they are stuck.

But the individual straws can slide inward or outward along their futile tracks, slightly advancing or falling back, under the influence of incoming material. When a particle falls into a black hole, it slides the straws of light back and forth, a process called a supertranslation. That leaves a telltale pattern on the horizon, the invisible boundary that is the point of no return of a black hole — a halo of “soft hair,” as Strominger and his colleagues put it. That pattern, like the pixels on your iPhone or the wavy grooves in a vinyl record, contains information about what has passed through the horizon and disappeared.

“One often hears that black holes have no hair,” Strominger and a postdoctoral researcher, Alexander Zhiboedov, wrote in a 2014 paper. Not true: “Black holes have a lush infinite head of supertranslation hair.” In January, Hawking, Strominger and Perry posted a paper online titled ‘Soft Hair on Black Holes’ laying out the basic principles of their idea. In the paper, they are at pains to admit that knocking the pins out from under the no-hair theorem is a far cry from solving the information paradox. But it is progress. Their work suggests that science has been missing something fundamental about how black holes evaporate, Strominger said. And now they can sharpen their questions. “I hope we have the tiger by the tail,” he said.

Whether or not soft hair is enough to resolve the information paradox, nobody really knows. Reaction from other physicists has been reserved. Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey, said of the new proposal, “Its significance for the black hole information problem remains to be seen. But it is probable that it plays some role.”


Bright but deadly

Nidhi Jamwal, 21 June 2016, DHNS

Lead in paints is one of the largest sources of exposure to lead. A 2007 study found that enamel paints had a lead concentration as high as 1,000 ppm. Yellow colour paints had maximum lead levels...

A recent international study, ‘Economic Costs of Childhood Lead Exposure in Low- and Middle-Income Countries’ by Teresa M Attina and Leonardo Trasande, has put a monetary cost on lead exposure in India, and it’s a whopping $236 billion annually.

This one-of-its-kind study by researchers of the Section of Environmental Pediatrics at the NYU School of Medicine indicates that lead exposure is a major contributor to children’s intellectual disability in low- and middle-income countries (LMICs), including India.

Researchers have created an interactive map that estimates the economic impact of lead exposure in LMICs, pegging the total at $977 billion. This is much higher than the economic impact of lead exposure in developed countries — Europe and the US, at $55 billion and $50.9 billion respectively. The landmark study was recently released at the United Nations Environment Assembly meeting held in Nairobi, Kenya.

Lead, a heavy metal, is toxic in all forms. According to the World Health Organisation (WHO), children around the world today are at risk of exposure to lead from multiple sources (see figure: Sources of children’s exposure to lead). Lead poisoning can affect virtually every organ system in the body — central and peripheral nervous system, the cardiovascular, gastrointestinal, renal, endocrine, immune and haematological systems.

Exposure to lead has a permanent negative impact on children’s developing brains, including their intelligence quotient (IQ). Lead exposure early in life can reprogram genes, which can lead to altered gene expression and an associated increased risk of disease later in life, warns WHO. Gastrointestinal absorption of lead is enhanced in childhood — up to 50% of ingested lead is absorbed by children, compared with about 10% in adults.

The NYU School of Medicine study focuses on the neurodevelopmental impacts of lead, assessed as decrements (reductions) in IQ points, and on how these translate into decreases in lifetime earning potential, assessed as lost lifetime economic productivity (LEP). By estimating the lost LEP of children affected by lead, the researchers arrived at the economic cost of lead exposure in LMICs.
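The study’s bottom-line arithmetic can be sketched in a few lines. Everything below is illustrative: the function name and all input values are invented for this example, not the study’s actual parameters, which vary by country and are drawn from published dose-response and earnings data.

```python
# Sketch of the cost logic: cost = cohort size x IQ points lost per child
# x fraction of lifetime earnings lost per IQ point x lifetime earnings.
# All values below are hypothetical, not the study's.

def lost_lep(cohort_size, iq_points_lost, loss_per_iq_point, lifetime_earnings):
    """Lost lifetime economic productivity (LEP) for one birth cohort,
    in the same currency as lifetime_earnings."""
    return cohort_size * iq_points_lost * loss_per_iq_point * lifetime_earnings

# Hypothetical inputs: 25 million children, 2 IQ points lost per child,
# 2% of earnings lost per IQ point, $100,000 expected lifetime earnings.
cost = lost_lep(25_000_000, 2.0, 0.02, 100_000)
print(f"${cost / 1e9:.0f} billion")  # prints "$100 billion" for these inputs
```

The aggregate country figures in the study come from summing such cohort-level estimates, with the IQ decrement itself derived from measured blood lead levels.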

Lead in paints
According to WHO, after lead in petrol, lead in paints is one of the largest sources of exposure to lead. Leaded paint can remain a source of exposure to lead and lead poisoning for many years after the paint has been applied to surfaces. As lead-based residential paint deteriorates with age or as homes undergo renovation, lead-containing dust is generated. “One of the most important things we can do to decrease children’s exposure to lead in LMICs is to ensure lead is no longer used in household paint and other paints to which children may be exposed (such as paints on playground equipment),” recommends the recent study.

Toxics Link, a New Delhi-based NGO and a partner in the Global Lead Paint Elimination Campaign of IPEN (a global network campaigning for a toxics-free future for all), conducted five studies on the lead content of paints sold in Indian markets between 2007 and 2015. Its 2007 study found that enamel (oil-based) paints had lead concentrations as high as 1,000 ppm (parts per million). Yellow paints had the maximum lead levels and white the least.

Four years later, Toxics Link released another study, ‘Double Standard: Investigating lead content in leading enamel paint brands in South Asia’, and reported that most countries in this region had either no standard for lead in paints or only a voluntary one. Paint manufacturers were using this legal loophole to push high lead content paints into countries with weak regulatory mechanisms.

Shocking numbers
Between November 2012 and February 2013, Toxics Link purchased 250 cans of oil-based house paints from various states of India and tested them. The highest lead concentration observed was 1,60,000 ppm. As many as 44% of the samples had lead levels over 10,000 ppm, while 83% exceeded 600 ppm and 90% exceeded 90 ppm.
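Figures like “44% over 10,000 ppm” come from tabulating raw measurements against thresholds. A minimal sketch of that tabulation, with made-up sample concentrations rather than the report’s data:

```python
# Fraction of paint samples whose lead content exceeds a given threshold.
# The sample concentrations below are invented for illustration only.

def share_above(samples_ppm, threshold_ppm):
    """Return the fraction of samples strictly above threshold_ppm."""
    return sum(1 for s in samples_ppm if s > threshold_ppm) / len(samples_ppm)

samples = [45, 80, 120, 600, 2_500, 15_000, 160_000]  # ppm, hypothetical
for limit in (90, 600, 10_000):
    print(f"over {limit} ppm: {share_above(samples, limit):.0%}")
```

Each reported percentage is simply such a share over the 250 tested cans, computed at the regulatory thresholds of interest (90, 600 and 10,000 ppm).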

In 2015, Toxics Link conducted another follow-up study, ‘National Report: Lead in Enamel Household Paints in India’, which tested 101 cans of new enamel decorative paints purchased from various Indian states and found very little change between the lead levels in paints analysed in 2013 and in 2015. The report strongly recommended that the Indian government “establish a national mandatory regulatory framework to control the manufacture, import, export, sale, and use of lead paints and products coated with lead paint immediately.”

So far, India does not have any mandatory standard for regulating lead levels in paints. The Bureau of Indian Standards (BIS) had a voluntary standard of 1,000 ppm, which was reduced to 90 ppm in 2013. On April 8 this year, the Union Ministry of Environment, Forests and Climate Change notified new draft rules called ‘Regulation on Lead contents in Household and Decorative Paints Rules, 2016’ for public comments. These rules, to be implemented by the BIS, propose to ban manufacture, trade, import and export of household and decorative paints with metallic lead exceeding 90 ppm.

“India is finally set to have a mandatory standard for lead in paints. Based on public comments, the environment ministry will soon notify the final rules. Though no level of lead exposure is good, the limit of 90 ppm is the global best,” says Dr Prashant Rajanakar, programme coordinator with Toxics Link. According to him, most big players in the Indian paint industry have already shifted to ‘lead safe’ paints, some with lead levels as low as 8-10 ppm.