Monday, 6 September 2010

Imaging the Brain in the Operating Room: Challenges and Promise

Based on a seminar by Frank M. Skidmore, director of the Movement Disorders Center at the VA Medical Center


The phrase "this isn't brain surgery" is used for good reason. Neurosurgery is definitely in my top 10 list of really hard things to do. Your brain is an extremely delicate mass of specialised tissue sealed in a tough, durable casing, and many functioning regions are buried so deeply that they are often impossible to reach without burrowing through other active regions. Even so, it has become apparent that affecting these deep regions, or nuclei, can be hugely beneficial when treating motor diseases like Parkinson's.

Frank M. Skidmore directs the Movement Disorders Center at the VA Medical Center, Florida. A neurologist by trade, he works as a clinician diagnosing neurological disease and referring patients to teams of surgeons. Many of his patients require deep brain stimulators (DBS), electrodes inserted into the brain that deliver electrical impulses, rather like a pacemaker. They act to interrupt misfiring nuclei in the hope of alleviating symptoms such as loss of motor control. Unfortunately, these nuclei are small and difficult to pinpoint, making accurate imaging of the brain, particularly while a patient is on the operating table, highly desirable.

What makes locating nuclei so difficult? Everyone's brain is different to a degree, so no universal structural map exists. Even if a perfect structural map did exist, certain nuclei are independent of structure and so must be found by determining a functional map. As a result, the boundaries between brain regions are extremely blurred. But it gets worse. Opening the braincase causes pressure changes: fluids start to drain and air rushes in, so the sponge-like mass of the brain begins to droop, and it does so continuously throughout an operation. Your small target is constantly moving. Even sneezing can cause your brain to shift!

The situation can be alleviated somewhat by using a combination of MRI scans prior to an operation and a stereotactic frame, a scaffold that bolts to the skull and provides a 3D co-ordinate system for navigation. But inaccuracies in placing a frame bring additional problems, and MRI scans can only provide a rough guide: when the patient is moved to an upright position the brain shifts yet again, on top of the changes that occur during surgery.

Surgeons do have one extra trick up their sleeves. Pulses of electrical activity can be recorded using an electrode and played back as sound during live surgery. As it happens, different parts of the brain produce distinct sounds, including the target nuclei, which in the case of Parkinson's sound like rain on a tin roof. This can indicate to the surgeon, with reasonable reliability, when the correct spot has been hit. The problem is that the electrode must be re-inserted until it is placed correctly. Each re-insertion causes damage in the form of scarring that can have major consequences for a patient's mental abilities. One patient, who suffered from hand spasms, lost his ability to form coherent sentences due to scarring of the speech centre of the brain, but surprisingly he didn't mind, as he could once again indulge in his hobby of model ship building.

The ultimate goal would be to do away with this trial-and-error approach and succeed with a quick, single-pass insertion. Pioneering surgeons are utilising mobile MRI scanners to provide live intra-operative imaging of the brain, but this poses many new problems. The machinery is sizable and complex, and special surgical equipment is required that won't be affected by the strong magnetic fields radiated by the scanner. Alternatively, a mobile CT (computerised tomography) scanner can be used, but software for tracking brain deformations during surgery does not yet exist, and a CT scan can only reveal structure, not function.

If the problem of imaging could be solved, it would need to be combined with robotic surgical hands that a computer could guide in precise co-ordination with live images. Advances in robotic surgery are promising, but the lack of technology for mapping brain function and structure during surgery limits its use to less complex parts of the body. Without detracting from the amazing feats that brain surgeons achieve today, there is still a long way to go if we are to be masters of our own minds.

Friday, 3 September 2010

The SixthSense Device: Projecting the digital world into reality

Designed and built by Pranav Mistry

A child develops a deep understanding of its physical world by interacting with everyday objects. We learn to associate objects with gestures, the movements we use to interact with an object, and how we can use an object to interact with other people. The use of gestures generally breaks down at the interface with computers. Data is typically accessed through a mouse, keyboard, or touch screen, which misses the intuitiveness of interacting with a physical object. SixthSense is a mobile device designed to bring physical objects alive with digital information.

Pranav Mistry wishes to bring the digital world back into reality in a cost-effective way that everyone can afford. His experiments began with a series of simple devices that allow the use of real-world gestures to interact with a computer. For example, by deconstructing an ordinary mouse, removing the rollers, and attaching a basic system of pulleys, he was able to create a device that senses hand movements and controls a virtual pointer.

A common theme in his experiments was to bring a part of the physical world into the digital world. But on realising that what is really desired is information, and that we don't care about the computer itself, why not reverse the process and extract data from the digital world and paste it into the physical world? This idea led to the SixthSense device.

The device itself is relatively simple. A portable projector hung around the neck does the job of pasting data onto any real-world object, such as a wall. To allow interaction with the projection, a small camera tracks the movement of a user's fingers (as long as the user wears coloured rings). Your fingers can then be used to perform any number of gestures, such as pinching to grab a graphic, pulling to zoom, or making the shape of a picture frame to take a snapshot. When combined with a mobile internet connection, the SixthSense device allows you to carry your entire digital world with you.
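The ring-tracking idea can be sketched in a few lines of code. This is not the actual SixthSense software, just a minimal illustration of the principle: find the pixels in a frame whose colour is close to a ring's known colour, and take their centroid as the fingertip position. The frame format, colours, and tolerance are all assumptions made for the example.

```python
# Illustrative sketch only (not the real SixthSense code): locate a coloured
# fingertip marker in a video frame by finding the centroid of all pixels
# within a per-channel tolerance of the ring's known colour.

def find_marker(frame, target, tolerance=40):
    """frame: 2D list of (r, g, b) tuples; target: (r, g, b) ring colour.
    Returns the (row, col) centroid of matching pixels, or None."""
    row_sum = col_sum = count = 0
    tr, tg, tb = target
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # Accept a pixel if every colour channel is within the tolerance.
            if (abs(r - tr) <= tolerance and abs(g - tg) <= tolerance
                    and abs(b - tb) <= tolerance):
                row_sum += y
                col_sum += x
                count += 1
    if count == 0:
        return None
    return (row_sum // count, col_sum // count)

# A tiny 3x3 "frame": black background with a red marker at row 1, column 2.
black, red = (0, 0, 0), (255, 0, 0)
frame = [[black, black, black],
         [black, black, red],
         [black, black, black]]
print(find_marker(frame, red))  # -> (1, 2)
```

In a real system each video frame would be processed this way many times per second, with one tracked colour per ring, and sequences of centroid positions interpreted as gestures.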

The device can also recognise objects, such as a book cover. Imagine you are browsing in a bookstore. On looking at a book, the device can recognise it and instantly project a review directly onto the cover, or maybe the best online price, or a rating from your favourite book review website. Information can also be inputted into the system via the camera. If reading a magazine article, why not grab it with a pinch of the fingers and drag it onto your projected display, where it can be recognised as text, ready for you to edit, email to friends, or blog?

The SixthSense device is a fantastic piece of kit that's within the budget of the masses, and all the software is freely available. It's a great way to pull yourself back out of the digital world and reconnect with the physical.

Thursday, 2 September 2010

UniView: A virtual atlas of the Universe

As demonstrated by Carter Emmart


What's our place in the universe? This is a deep philosophical question that's been pondered over for thousands of years, but modern astronomy can now answer the literal form of the question, to a degree. Humans evolved viewing the world as a flat horizon that seemed to stretch out to infinity in all directions. It wasn't until we got high above the Earth that we saw the horizon curve and realised we are confined to the prison of a finite sphere. Fortunately, if we gaze up past the horizon to the stars, our instincts are once again ignited by the vastness of space.

Carter Emmart's vision was to create a scientifically accurate, hugely detailed, virtual atlas of the universe that captures everything from the small-scale features of a planet's landscape all the way up to galaxy clusters. For 12 years he managed a team of research students, astronomers, and 3D artists to create UniView, a real-time visualisation of the universe.

UniView is based at the American Museum of Natural History and was supported by NASA as part of the rebuilding of the Hayden Planetarium. UniView now forms the basis for space shows in numerous domes across the world.

The software allows you to take a virtual tour through the universe. From a view of the Earth you can zoom outwards, further and further into the viewable universe, until you have left our solar system, our galaxy, and our galaxy cluster, all of which are rendered accurately and beautifully. The further out you travel, the further into the past you move, until you leave the visible universe itself and can view the universe from before time began. From here you can observe the WMAP image of the cosmic microwave background radiation, the residue of the big bang.

UniView doesn't just feature planets and stars, but also reams of information on satellites, spacecraft, and space missions. You can follow the Voyager spacecraft on their journey beyond our solar system, or take a look at the mission map of Cassini's visit to the planets. UniView is made more powerful by allowing a continuous influx of new data. For example, images of the Earth's surface taken just hours before can be fed directly onto the virtual Earth.

As a civilisation we are beginning to see ourselves in a much wider context, and doing so will help us understand where we are in the universe, and maybe why we are here.

For more information visit the UniView website.

Wednesday, 1 September 2010

Drilling into the frozen past


Review of a Lee Hotz article

Leave your comfy, warm living room and venture south to the bottom of the world, where you'll hit Antarctica, the highest, windiest, driest and coldest place on the planet. The entire continent has been drowned below sea level by the sheer mass of ice weighing it down. The ice mass represents a unique opportunity: to peer thousands of years into the past and observe incredibly detailed pictures of the Earth's ever-changing climate.


A team of 45 scientists and engineers from the University of Wisconsin are drilling deep into the ice at a location named WAIS Divide to extract clues as to the future of climate change. The ice contains a precise record of the rise and fall of greenhouse gases and temperature as far back as the last ice age.

Snow falls in huge quantities at WAIS Divide, with each snowfall forming a layer that is slowly compressed as it becomes buried by later snowstorms. But it's not just snow that's buried. Storms wash dust, soot and trace chemicals out of the atmosphere and deposit them on the fallen snow, year after year. We can observe calcium deposits from desert formation, soot from distant wildfires, and methane levels indicating the strength of Pacific monsoons, but most importantly each layer also traps the air itself. Roughly 10% of an ice layer is ancient air, a record of greenhouse gases, carbon dioxide, nitrous oxide, all unchanged since it was locked away thousands of years ago.

Drilling for Antarctic ice isn't easy. So much snow falls at WAIS Divide that the research team are practically living and operating an $8 million drill underneath the snow. They extract 10-foot-long ice cores 10 times a day, which equates to 360 years into the past per day. The ice cores are very susceptible to damage and contamination, and need to be kept at minus 20 degrees Celsius, so the team must work in a giant fridge located at the coldest place on Earth. The samples they extract are shipped out to 27 independent labs across the world, where they are analysed for 40 different trace elements at concentrations as low as 1 part in a quadrillion.
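The quoted drilling rates imply a rough conversion between depth and age, which a quick back-of-the-envelope calculation makes concrete. This is only an average figure derived from the numbers above; in reality layers thin with depth as they compress, so the years-per-foot rate is not constant.

```python
# Rough check of the drilling figures quoted above: 10-foot cores, extracted
# 10 times a day, covering about 360 years of climate record per day.
core_length_ft = 10    # length of one ice core, in feet
cores_per_day = 10     # cores extracted per day
years_per_day = 360    # years of record recovered per day (from the article)

ice_per_day_ft = core_length_ft * cores_per_day   # 100 ft of ice per day
years_per_foot = years_per_day / ice_per_day_ft   # ~3.6 years per foot, on average

print(ice_per_day_ft, years_per_foot)  # -> 100 3.6
```

So each foot of core spans, very roughly, three to four years of snowfall at this site, which is what makes annual-resolution records possible.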

Why continue such obviously difficult research? Don't we already know greenhouse gases are having adverse effects? Greenhouse gases are detrimental to climate, but what we don't know is precisely what effect human activity will have on natural climate patterns, such as winds, ocean currents, precipitation rates, and cloud formation. These factors will affect billions of lives.

Snowstorms have given Antarctic ice an unprecedented record of climate change; we just need to make sure we ask it lots of questions.

For more information on the WAIS Divide project, see their website.

Tuesday, 31 August 2010

The world's most ancient organisms



Research by Rachel Sussman

This Yareta plant just looks like a few boulders covered in moss, but it is in fact a shrub, with incredibly dense branches and clusters of green leaves at their ends. The big surprise is that this particular individual is over 3,000 years old, as are many Yareta plants, and it grows only one millimetre per year.

Rachel Sussman has spent the past 5 years travelling the world researching the oldest living organisms, those 2,000 years old or older. Her work began as both an artistic and scientific curiosity into global species longevity, an untouched area of research. Her inspiration came from the Jomon Sugi tree, a 2,180-year-old tree she encountered while visiting Japan. Its artistic and scientific beauty sparked an interest in organisms older than the year zero.

The oldest tortoise is a mere 175 years old. A recently discovered giant clam was 405 years old (until it died in the lab), but these are just adolescents compared with the ancient plants and coral found on Earth:

Brain Coral - This particular coral was found 18 metres down off the coast of the United States. It is 2,000 years old and was very lucky to escape the recent oil spill.


Armillaria (the humongous fungus) - This is a predatory fungus that hunts trees and is one of the world's largest organisms. It kills trees in a circular pattern, slowly strangling a tree and cutting off the flow of water and nutrients. This one was dated at 2,400 years old.


The Underground Forest - This is actually a tree, but one highly adapted to the dry, bush-like conditions that so often catch fire spontaneously. What you see is only the very top of the tree poking up through the soil; the rest is submerged below, so only the leaves become singed by fire. This tree is 13,800 years old.


Pando Tree, clonal colony of Quaking Aspen (the trembling giant) - The picture suggests a forest, but genetic tests have shown this to be a single organism (hence a clonal colony). Each stem has grown from one giant root system, which is 80,000 years old!

So, what's the oldest known living thing on Earth? The prize goes to Siberian Actinobacteria, bacteria that live within permafrost (soil below freezing). One sample is estimated to be between 400,000 and 600,000 years old. Actinobacteria can perform DNA repair at, or below, freezing temperatures, making them remarkable organisms, but also incredibly vulnerable to rising global temperatures.

Ancient organisms are a living record of the past and a wake-up call for action in the future. They have survived for millennia in deserts, in permafrost, and on mountaintops, through every natural disaster and human interference, but for how much longer? The need to preserve these wonders is undeniable.

For more information visit Rachel's website.

Monday, 30 August 2010

The game layer: the next step in social networking

Presented by Seth Priebatsch

The recent explosion in social networking is difficult to overlook, no matter how much an individual may want to. The network, almost single-handedly defined by Facebook, is complete in its construction and connects together thousands of people in what is referred to as an open graph. The next step will be the building of a 'game layer' that uses game dynamics to motivate and influence people's behaviour. The framework around which this is built will become increasingly important.

The game layer is already well under construction on a global scale. It exists in the form of credit card schemes, air miles, loyalty cards, Tesco club cards, etc. They are all designed to change our spending behaviour, using game dynamics to benefit the respective companies. But the game layer built so far is cluttered with badly designed frameworks.

Game dynamics, when used correctly, can be very powerful forces in people's lives. Let's look at some examples of game dynamics as they are used today:

1) Appointment dynamics - to succeed, a player must do a predefined action at a predefined place/time. By introducing such a dynamic you can control what people do and when they do it. For example, a happy hour at a bar makes people buy drinks at a predefined time. Or, to take a Facebook example, the game Farmville forces users to return at certain times to water crops, otherwise the crops wilt. This may seem innocent, but the thousands of people who play this game can be summoned at any interval the designers wish, a very powerful force in the online world.

2) Influence and status - the ability of one player to modify another's actions through social pressure. For example, if a friend has the latest iPhone it somehow makes them a better person, so I need one too. This is often used in computer gaming, whereby a more successful player will have a higher rank and a prettier badge to show for it.

3) Progression dynamic - success is granularly displayed and progress is made by completing short, simple, itemised tasks. Often, the desire to fill a progress bar is enough to drive a user to complete tasks. An extreme example is World of Warcraft, in which an average player will spend 6.5 hours a day gradually improving their character, and pay for the fun of doing so.
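The appointment dynamic described above has a very simple mechanical core, which can be sketched in code. This is a hypothetical illustration in the style of the Farmville example, not any real game's implementation; the class, timings, and rules are all invented for the sketch.

```python
# Minimal sketch of an appointment dynamic: a crop wilts unless the player
# returns to water it before a deadline. Times are in hours since planting;
# all names and numbers here are illustrative, not from any real game.

class Crop:
    def __init__(self, planted_at, watering_window_hours=24):
        self.deadline = planted_at + watering_window_hours
        self.watered = False

    def water(self, now):
        # Watering only counts if the player shows up before the deadline.
        if now <= self.deadline:
            self.watered = True
        return self.watered

    def is_wilted(self, now):
        # The appointment dynamic in one line: miss the window, lose the crop.
        return now > self.deadline and not self.watered

tended = Crop(planted_at=0)
tended.water(now=12)               # the player returns in time
print(tended.is_wilted(now=30))    # -> False

neglected = Crop(planted_at=0)
print(neglected.is_wilted(now=30)) # -> True (the player never came back)
```

The power of the mechanic lies in the `watering_window_hours` parameter: whoever sets it decides exactly when thousands of players must log back in.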

The last decade was the decade of social networking. The next decade will be the decade of games. Game dynamics are a powerful tool for influencing behaviour, and they will have much deeper effects on users than social networking alone. We should be conscious of the game layer and help build it well and responsibly.

Monday, 13 October 2008

FEBS Letters

See what these people have published

Managing Editor:

Felix Wieland, Heidelberg University, Heidelberg, Germany
Email: felix.wieland@bzh.uni-heidelberg.de


Beat Imhof, Centre Medical Universitaire, Geneva, Switzerland
Email: Beat.Imhof@medecine.unige.ch