Tracking Neural Stem Cells With MRI

From Science Daily: Researchers from Carnegie Mellon University have found a way to track neural stem cells using MRI; specifically, they tracked newly born neurons in the adult rat brain, which migrate a fair distance to their destination before incorporating themselves into the nervous system.

There’s a protein called ferritin that binds iron in cells. The researchers created a virus carrying the ferritin gene and used it to insert extra copies of the gene into the cells they were targeting (neural stem cells, or neuroblasts). With the extra copies, these cells produced extra ferritin and ended up accumulating extra iron. MRI machines rely on magnetic fields for imaging; the iron accumulated in the neuroblasts disturbed the magnetic field around them, making them stand out in the images.

This is apparently the first time researchers have been able to image neuroblasts in real time, since other common methods of visualizing cells don’t work for cells deep inside the brain, encased in the skull. 

As you may know, it was long thought that no new neurons were ever created in adult human brains. More recently, it was discovered that neuroblasts were in fact created in the brain throughout life, albeit in a very limited capacity, and much is still to be learned about them. This new technique for imaging neuroblast migration could prove very useful in understanding and controlling regeneration in the adult brain, a very tricky and important subject. 

Whoa, I didn’t quote the article at all. Here, courtesy of Science Daily:

The National Science Foundation and National Institutes of Health funded this research.

Crisis averted. Thanks SD!


“Scientists Release Most Accurate Simulation of the Universe to Date”

A new computer simulation out of New Mexico State University, running on NASA’s Pleiades supercomputer, has become the new hotness in universe simulation. Called the Bolshoi, it’s based on data from NASA’s Wilkinson Microwave Anisotropy Probe (WMAP), which measures variation in the cosmic microwave background radiation remaining from the Big Bang. The Bolshoi’s predecessor, the Millennium Run, used an older version of the WMAP data that has since been shown to be inaccurate.

From ScienceDaily:

The simulation traces the evolution of the large-scale structure of the universe, including the evolution and distribution of the dark matter halos in which galaxies coalesced and grew. Initial studies show good agreement between the simulation’s predictions and astronomers’ observations.

The standard explanation for how the universe evolved after the Big Bang is known as the Lambda Cold Dark Matter model, and it is the theoretical basis for the Bolshoi simulation. According to this model, gravity acted initially on slight density fluctuations present shortly after the Big Bang to pull together the first clumps of dark matter. These grew into larger and larger clumps through the hierarchical merging of smaller progenitors. Although the nature of dark matter remains a mystery, it accounts for about 82 percent of the matter in the universe. As a result, the evolution of structure in the universe has been driven by the gravitational interactions of dark matter. The ordinary matter that forms stars and planets has fallen into the “gravitational wells” created by clumps of dark matter, giving rise to galaxies in the centers of dark matter halos.

A principal purpose of the Bolshoi simulation is to compute and model the evolution of dark matter halos…

The Bolshoi simulation focused on a representative section of the universe, computing the evolution of a cubic volume measuring about one billion light-years on a side and following the interactions of 8.6 billion particles of dark matter. It took 6 million CPU-hours to run the full computation on the Pleiades supercomputer, recently ranked as the seventh fastest supercomputer in the world.

A note about that last sentence: keep in mind that supercomputers progress in leaps and bounds. The Pleiades is the seventh fastest supercomputer in the world, but it’s only about 1/8 as fast as the fastest one overall (Japan’s K computer, which is still being built), and less than half as fast as the fastest one that’s actually complete (China’s Tianhe-1A).
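To put rough numbers on that comparison — the figures below are approximate Rmax values as I recall them from the June 2011 Top500 list, so treat them as illustrative rather than authoritative:

```python
# Approximate Rmax values in petaflops from the June 2011 Top500 list.
# These are illustrative figures quoted from memory, not authoritative.
fastest_overall = 8.16    # Japan's K computer, #1 (still being expanded)
fastest_complete = 2.57   # China's Tianhe-1A, #2
pleiades = 1.09           # NASA's Pleiades, #7

print(pleiades / fastest_overall)   # roughly 1/8
print(pleiades / fastest_complete)  # a bit under 1/2
```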

About dark matter: it’s matter that does not interact with electromagnetic radiation (like light, radio waves or infrared), which means we haven’t yet been able to detect it directly. Scientists think it exists because it’s the most likely explanation for the gravitational effects they’ve observed at cosmological scales; the matter we can detect doesn’t account for all the gravity we detect.
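The Bolshoi followed 8.6 billion particles, but the core idea — gravity-only particles pulling on each other — can be sketched in a few lines. This toy direct-sum integrator is my own illustration, not the actual Bolshoi code (the real simulation uses far more sophisticated tree and particle-mesh methods, and physical units):

```python
import numpy as np

def step(pos, vel, mass, dt, G=1.0, soft=0.05):
    """One kick-drift step of a gravity-only N-body system (direct summation)."""
    # Pairwise separation vectors: r[i, j] = pos[j] - pos[i]
    r = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Softened |r|^3, so the i == j terms contribute zero instead of dividing by zero
    dist3 = (np.sum(r**2, axis=-1) + soft**2) ** 1.5
    # Acceleration on each particle: sum of G * m_j * r_ij / |r_ij|^3
    acc = G * np.sum(mass[np.newaxis, :, np.newaxis] * r / dist3[:, :, np.newaxis], axis=1)
    vel = vel + acc * dt   # kick
    pos = pos + vel * dt   # drift
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.standard_normal((64, 3))   # 64 particles, not 8.6 billion
vel = np.zeros((64, 3))
mass = np.ones(64) / 64
for _ in range(100):
    pos, vel = step(pos, vel, mass, dt=0.01)
```

Even this toy version shows why the real thing needed 6 million CPU-hours: direct summation costs O(N²) per step, which is exactly why production codes replace it with tree or particle-mesh approximations.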

Finally… I don’t know what’s up with the astronomy world, but one individual wrote the code for this simulation, which I find odd. Did he have some kind of monopoly on the data? It takes teams of programmers to make a video game world, but only one to simulate a universe, apparently. 

A New Source of Lithium For Batteries: Geothermal Power Plants

From Scientific American: In a double-dose of environmental goodness, a new technology allows lithium (the key component of lithium-ion batteries that power portable electronics and most electric cars) to be collected as a byproduct of geothermal power production.

Geothermal electricity is generated by pumping hot water up from deep underground and using its heat to produce gas that turns turbines and drives a generator. This can be done by a) pumping up steam directly and running it through the turbines, b) pumping up hot, high-pressure water and flashing it into steam by lowering its pressure, or c) pumping up hot water and transferring its heat to another fluid with a lower boiling point, like butane or pentane, which then vaporizes to drive the turbines. This last approach means the water doesn’t have to be quite as hot to get the job done.
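As a rough back-of-the-envelope for option (c): a binary-cycle plant’s electrical output scales with the brine flow rate and the temperature drop across the heat exchanger. Every number below is my own illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope binary-cycle geothermal output.
# All numbers here are illustrative assumptions, not from the article.
mass_flow = 100.0    # brine flow rate, kg/s
cp_water = 4186.0    # specific heat of water, J/(kg*K)
delta_t = 80.0       # temperature drop across the heat exchanger, K
efficiency = 0.10    # heat-to-electricity efficiency; low, since the brine isn't very hot

thermal_power = mass_flow * cp_water * delta_t   # watts of heat extracted from the brine
electrical_power = thermal_power * efficiency
print(f"{electrical_power / 1e6:.1f} MW")        # about 3.3 MW for these assumptions
```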

In any of these cases, the used water is then pumped back down to the source, maintaining the underground reservoir. The new development in question, from Lawrence Livermore National Laboratory, adds an extra step: before injecting the water back underground, they use a novel extraction process to remove its lithium content. This water (or brine, meaning it carries dissolved salts) has had a long time to dissolve the minerals around it underground, making it a rich source of seemingly everything:

The Salton Sea brine contains a host of other elements, and Simbol hopes to extend the extraction process to manganese and zinc—also used in batteries and metal alloys—as well as potassium, which is a vital nutrient and fertilizer, among other applications. “This brine has got half the periodic table in it and that’s a good news–bad news situation,” Erceg says, noting that cesium, rubidium and silver might also be produced the same way. The company is also exploring options for using the process’s waste silica—more commonly known as sand—in the cement industry.

This is clearly a step up from lithium mining: extracting lithium from a renewable, constant source that’s brought right to you. If this new technology simultaneously encourages the development of alternative energy and feasible electric cars, I’ll consider it a big win all-around.

A Single Molecule To Prevent Autoimmune Disorders

Researchers from McMaster University have discovered a molecule unfortunately named alphavbeta6 that’s secreted in mouse intestines to prevent an excessive immune response to digested food. From ScienceDaily:

Researchers then generated alphavbeta6 using cultured intestinal cells and found that both could be used to generate the immune tolerant cells needed to reduce or eliminate out-of-control immune reactions.

That’s basically the entirety of the information in the article; I wish it went into a bit more detail. Why is this discovery revolutionary? The impression given by the article is that 1) alphavbeta6 could have a general enough effect to treat a wide variety of autoimmune disorders, and 2) this taming of the immune response to a particular stimulus is permanent.

“Currently we do not have special methods to radically treat most immune diseases; all we can do is to temporarily inhibit the clinical symptoms for those diseases,” said Ping-Chang Yang, a researcher involved in the work from the Department of Pathology and Molecular Medicine at McMaster University in Ontario, Canada. “Our findings have the potential to repair the compromised immune tolerant system so as to lead the body immune system to ‘correct’ the ongoing pathological conditions by itself.”

A bit vague, but it seems like point 2) is what they’re trying to say. If we could permanently correct aberrant immune responses, that would be a pretty incredible advancement and would solve a staggering number of illnesses. Of course, this was just the first step; any solution based on alphavbeta6 is still a long way off from having clinical applications. 

Some brief background: “cultured intestinal cells”, mentioned in the first quote above, means cells that are grown on a glass plate, as opposed to in a live animal. Cells of all kinds are grown this way; each type requires very particular treatment, and lives in a medium of nutrients and pH appropriate to that cell type and what the researcher wants to do with it. Isolated cells outside of an animal are much easier to work with; the trade-off is that behaviour you observe in cultured cells (in vitro) won’t necessarily represent the characteristics of cells in a live animal (in vivo), because of the huge difference in environment. When learning about cells, in many cases it makes sense to try something in vitro first before moving on to more expensive and far more time-consuming in vivo models.

Autoimmune disorders are a pretty widespread phenomenon, where your immune system recognizes some subset of your own cells as invaders and attacks them. Some that you might’ve heard of include celiac disease, type 1 diabetes, lupus, multiple sclerosis, and rheumatoid arthritis.  

Gravitational Redshift On a Cosmological Scale Verifies General Theory of Relativity

For the first time, gravitational redshift has been measured outside of the solar system, on a cosmological scale. This effect of gravity on light is predicted by the general theory of relativity, and these measurements from the Dark Cosmology Centre at the Niels Bohr Institute, published in Nature, match the predictions. If you’re familiar with redshift, feel free to skip ahead to the second quote block.

Redshift is the result of the Doppler effect on light. Wikipedia has a pretty solid explanation of the Doppler effect, which is when wave frequencies increase (and thus wavelengths decrease) as the source of the wave moves towards the observer, and vice versa for a source moving away:

When the source of the waves is moving toward the observer, each successive wave crest is emitted from a position closer to the observer than the previous wave. Therefore each wave takes slightly less time to reach the observer than the previous wave. Therefore the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are traveling, the distance between successive wave fronts is reduced; so the waves “bunch together”. Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wave fronts is increased, so the waves “spread out”.

Light can be described as a wave, with different wavelengths for different colours. Blue has the shortest wavelength and highest frequency; red has the longest wavelength and lowest frequency. That means that, as described by the Doppler effect, when a source of light (or more generally, electromagnetic radiation) is moving away from an observer, its light will shift towards the red end of the spectrum. Since the universe is expanding, light from distant galaxies is redshifted, and the degree of redshift can tell you their distance: this is cosmological redshift, or Hubble’s Law.
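In numbers: redshift is usually quoted as z = (λ_observed − λ_emitted)/λ_emitted, and for nearby galaxies Hubble’s Law gives distance ≈ cz/H₀. A quick sketch, using the commonly quoted H₀ of about 70 km/s per megaparsec and a made-up observed wavelength for illustration:

```python
C = 299_792.458   # speed of light, km/s
H0 = 70.0         # Hubble constant, km/s per megaparsec (approximate)

def redshift(lambda_observed, lambda_emitted):
    """Fractional shift in wavelength, z."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

# Hydrogen-alpha line emitted at 656.3 nm, observed (hypothetically) at 662.9 nm:
z = redshift(662.9, 656.3)   # about 0.01
velocity = C * z             # recession velocity in km/s (good approximation for small z)
distance = velocity / H0     # distance in megaparsecs
print(f"z = {z:.4f}, ~{velocity:.0f} km/s, ~{distance:.0f} Mpc")
```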

Meanwhile there is gravitational redshift: light moving from a place of stronger to weaker gravity will be redshifted. To the best of my understanding, this is because time moves slower near stronger sources of gravity, which intuitively seems to make sense as an explanation for wavelength expanding as it moves away from the gravity source. Light also has to travel further when it goes near a large source of gravity, since gravity curves space as well as time.
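In the weak-field limit, general relativity predicts a gravitational redshift of roughly z ≈ GM/(rc²) for light climbing out from radius r around mass M. Plugging in rough galaxy-cluster numbers (the cluster mass and radius below are my own order-of-magnitude assumptions) shows why the effect is so tiny and hard to measure:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg
MPC = 3.086e22     # one megaparsec, m

# Rough order-of-magnitude values for a large galaxy cluster (illustrative assumptions):
cluster_mass = 1e15 * M_SUN
cluster_radius = 1.0 * MPC

# Weak-field gravitational redshift for light escaping from the cluster centre:
z_grav = G * cluster_mass / (cluster_radius * C**2)
print(f"z ~ {z_grav:.1e}")   # a few parts in 100,000
```

That’s orders of magnitude smaller than the cosmological redshift of the clusters themselves, which is why the researchers needed measurements from about 8,000 clusters to pull the signal out statistically.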

Gravitational redshift is what’s explained and predicted by general relativity, and what these researchers observed. From Science Daily:

Radek Wojtak, together with colleagues Steen Hansen and Jens Hjorth, has analysed measurements of light from galaxies in approximately 8,000 galaxy clusters. Galaxy clusters are accumulations of thousands of galaxies, held together by their own gravity. This gravity affects the light being sent out into space from the galaxies.

The researchers have studied the galaxies lying in the middle of the galaxy clusters and those lying on the periphery and measured the wavelengths of the light.

“We could measure small differences in the redshift of the galaxies and see that the light from galaxies in the middle of a cluster had to ‘crawl’ out through the gravitational field, while it was easier for the light from the outlying galaxies to emerge,” explains Radek Wojtak.

They also calculated the mass of the galaxy cluster and, from that, its gravitational potential; armed with this, they could reliably predict the redshift from different regions of the cluster. Einstein emerges victorious yet again – so you can see why everyone is very skeptical about the now-famous CERN neutrino experiment.

In trying to wrap my head around this I discovered this site explaining relativity, time dilation and quantum theory in a pretty digestible form; you should check it out if you have any interest.

“‘Teleportation’ of Rats Sheds Light On How the Memory Is Organized”

(Edit: It was published the day after, but I think this Wired article explains the experiment better than Science Daily)

Science Daily brings us an article about research conducted at the Norwegian University of Science and Technology and published in Nature this week, on how memory stays discrete in the face of classic disorientation.

From the article:

You’re rudely awakened by the phone. Your room is pitch black. It’s unsettling, because you’re a little uncertain about where you are — and then you remember. You’re in a hotel room.

… In an article published in this week’s edition of the journal Nature, researchers at the Norwegian University of Science and Technology’s Kavli Institute for Systems Neuroscience describe exactly how the brain reacts in situations like these, during the transition between one memory and the next. The study employed a method that allowed them to make measurements right down to the millisecond level…

Their findings show that memory is divided into discrete individual packets, analogous to the way that light is divvied up into individual bits called quanta. Each memory is just 125 milliseconds long — which means the brain can swap between different memories as often as eight times in one second…

They accomplished this by very carefully measuring the electrical activity in the brains of rats. They basically trained rats to identify certain rooms by features which were actually just a product of the lighting environment, then suddenly switched the lighting on them. The rats were confused by this “teleportation”, and their brain activity revealed how our brains react to this sort of situation.

Regarding the separate mental maps the rats constructed for each room:

“But the mind doesn’t actually mix up the maps,” she says. “It switches back and forth between the two maps that represent rooms A and B, but it is never in an intermediate position. The brain can ‘flip’ back and forth between the two different maps, but it is always either or, site A or site B.”

The punchline, I gather, is that when we get confused about our location, we don’t actually “mix up” our location memories; we just flip between them within 1/8 of a second.

I find the unqualified claim that “memory is divided into discrete individual packets… Each memory is just 125 milliseconds long” to be very, very strong; I hope the Nature article does a better job of justifying that conclusion, unless this ScienceDaily article is just over-asserting their findings. If it is true I imagine it’d be a pretty huge boon for cognitive neuroscientists of every stripe; how long a memory lasts is probably relevant to understanding any kind of human perception. 

Virtual Reality Training for Soldiers

Here’s an interesting article from PhysOrg about ExpeditionDI, a new virtual training environment for the U.S. military. You put on goggles and are immersed in a 3D environment, which changes as you move through it; you can even shoot AI opponents with a replica M4 rifle. Up to nine soldiers at a time can be in the simulation.

From the article:

The system can be programmed to reflect the physical characteristics and abilities of the soldiers using it, including their skill with a rifle and their walking speed. It also includes two other weapons – an M4 carbine equipped with a grenade launcher and an M249 heavy machine gun.

The training scenes are provided by the government and are based on actual towns.

Now imagine video games in 20 years… concerns about exposing teens to violence will get really, really real. 

Do you think there’s any revolutionary science necessary for this kind of simulation, or is it simply a matter of handling all the data at a volume and speed that wasn’t possible before? 
