About Me

Science communication is important in today's technologically advanced society. A good part of the adult community is not science savvy and lacks the background to make sense of rapidly changing technology. My blog attempts to help by publishing articles of general interest in an easy to read and understand format without using mathematics. You can contact me at ektalks@yahoo.co.uk

Saturday 27 February 2016

Optogenetics - Controlling Neurons with Light - a Way to Understand Brain Circuit Organisation?


What is Optogenetics:  Optogenetics is the combination of genetics and optics to control well-defined events within specific cells of living tissue. It also includes the discovery and insertion into cells of genes that confer light responsiveness.

How does Optogenetics work?: Certain algae respond directly to light: they detect a light source and move towards it. The question is - can the gene that gives algae this ability be used to control cells in more complex organisms?  We are able to ask this question because of advances in genetic technology that allow particular genes to be inserted into the DNA of a virus which, on infecting humans and other animals, transfers the algal gene to the DNA of the host.




Figure from:  http://neurobyn.blogspot.co.uk/2011/01/controlling-brain-with-lasers.html
Optogenetics gives us control over exciting or switching off neurons and provides an effective tool to study the effects of neuronal activity on the functioning of the brain.
 


About forty years ago Francis Crick, Nobel Laureate of the Double-Helix fame, had commented that the then existing methods used in neuroscience were either too crude, lacking precision (inserting electrodes in the brain) or too slow (drugs targeting particular cells).  Crick had mused that light might have the properties to serve as a control tool but at the time neuroscientists did not know how to make specific cells responsive to light.
In 1999, Crick stated that one of the major problems of biology is discovering how the brain works: "when we finally understand, at the cellular and molecular level, our perception, our thought, our emotions and our actions, it is more than likely that our views about ourselves and our place in the Universe will be totally transformed."

At present, the best tool to study the brain is fMRI - it can reveal what various regions of the brain are doing when people respond to a stimulus.  fMRI signals measure the increased levels of oxygenated blood (the BOLD signal) in particular parts of the brain.  Neuroscientists believe that these signals are caused by increased excitation of specific kinds of brain cells. (The two slides at the end of this post describe the fMRI method in more detail.  For MRI click here)
Karl Deisseroth pioneered the work in optogenetics at Stanford and showed that neural excitation indeed produces positive fMRI BOLD signals.

Optogenetics holds great promise for the study of brain circuitry and has been used in vivo to record neural activity patterns with millisecond precision and to create a wireless route for the brain. 
The 2014 Nobel Prize in Physiology or Medicine was awarded for the discovery of place and grid cells - the brain's positioning system - and optogenetics is now being used to map the function of such newly identified brain cells. http://www.nobelprize.org/nobel_prizes/medicine/laureates/2014/presentation-speech.html

Optogenetics can also be used to address cell behaviour in other parts of the body and for diagnostics and treatment of diseases.  I describe a proposed application of optogenetics to help sufferers of retinitis pigmentosa (RP): an incurable genetic disease that leads to blindness as it destroys rods and cones in the eye. 

A non-pathogenic virus is altered to carry the gene for a light-sensing protein (ChR2) from algae.  The virus is then injected into the ganglion cells in the eyes of RP patients.  The idea is that the genetically altered ganglion cells will become sensitive to light, giving back some vision to those afflicted by this progressive disease.



Because the eye is naturally exposed to light, it’s the perfect venue for a trial like this one, which seeks to switch the photoreceptive burden from the compromised rods and cones to ganglion cells in the retina.
The ganglion cells receive fewer photons and it is not clear exactly what visual granularity can be achieved here. If the experiment succeeds, the researchers expect that the experimental cohort will get monochromatic vision at very low resolution.   RetroSense CEO Sean Ainsworth said that he hopes the treatment will allow patients to “see tables and chairs” or even read large letters. Grainy, low-resolution monochromatic vision might not sound like much compared with what humans normally perceive, but these efforts are important steps on the road to long-term vision restoration. Rough shapes and gray-scale projections are a far better alternative to total blindness.  The four minute video is also of interest in this context:  http://shows.howstuffworks.com/fwthinking-show/3-ways-we-could-restore-sight-blind-video.htm

In concluding this post, it should be noted that light-based control of ion channels has been transformative for the neurosciences, but the optogenetic toolkit does not stop there. An expanding number of proteins and cellular functions have been shown to be controlled by light. The field is moving beyond proof of concept to answering real biological questions, such as how cell signalling is regulated in space and time, that were difficult or impossible to address with previous tools.
For a comprehensive discussion of optogenetics, please click here.  I quote the authors' abstract:
Fundamental questions that neuroscientists have previously approached with classical biochemical and electrophysiological techniques can now be addressed using optogenetics. The term optogenetics reflects the key program of this emerging field, namely, combining optical and genetic techniques. With the already impressively successful application of light-driven actuator proteins such as microbial opsins to interact with intact neural circuits, optogenetics rose to a key technology over the past few years. While spearheaded by tools to control membrane voltage, the more general concept of optogenetics includes the use of a variety of genetically encoded probes for physiological parameters ranging from membrane voltage and calcium concentration to metabolism. Here, we provide a comprehensive overview of the state of the art in this rapidly growing discipline and attempt to sketch some of its future prospects and challenges.
*********************************************************************************






Saturday 20 February 2016

A Promising Development - Eternal 5D Data Storage in Fused Quartz

This post involves a number of unit prefixes with which not everybody will be familiar.
These are explained in the slide at the end of the post.
Blog Contents - Who am I?

Scientists at the Optoelectronics Research Centre (ORC) at Southampton University have taken a significant step towards solving the problem of archiving large amounts of data.

What Have They Done? -   Developed recording and retrieval processes for digital data by femtosecond laser writing on fused silica.  Their portable memory storage has a data capacity of up to 360 Terabytes per disc, is thermally stable to 1,000°C and has a virtually unlimited lifetime at room temperature (13.8 billion years at 160°C).  
The technology could be used by organisations with big data storage requirements, such as national archives, museums and libraries, to preserve their information and records.
How is big data stored at present and why do we need bigger storage capacity? -  At the moment, the longest-lasting storage technology in the world is the M-Disc, which uses Blu-ray technology to store data for up to 1,000 years.  For personal data storage, flash storage lasts for a few years.
However, most data centers handling large amounts of data use hard-disc drives (HDDs), which are expensive to load with data.  HDDs are unsuitable for long-term storage and require the data to be transferred to fresh media about every two years. 

The total amount of data stored has been increasing by about 60% each year, and we shall need to manage 40,000 Exabytes of data by 2020.  HDD power consumption is of the order of 0.04 Watt per Gigabyte of data stored.  The power consumption of American data centers alone is expected to reach 140 billion kWh (one kWh is equal to one unit of electricity) per year, costing $14 billion at 10 cents per unit.
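As a rough cross-check, the cost figure follows directly from the quoted consumption and tariff. The minimal Python sketch below verifies it; the per-terabyte energy figure is derived here from the quoted 0.04 W/GB and is illustrative, not a number from the source.

```python
# Rough check of the data-center figures quoted above (illustrative only).
kwh_per_year = 140e9      # projected annual US data-center consumption, kWh
price_per_kwh = 0.10      # dollars per kWh ("unit" of electricity)
annual_cost = kwh_per_year * price_per_kwh
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")   # Annual cost: $14 billion

# At 0.04 W per Gigabyte, keeping one Terabyte stored for a year takes:
watts_per_gb = 0.04
kwh_per_tb_year = watts_per_gb * 1000 * 24 * 365 / 1000   # Wh -> kWh
print(f"{kwh_per_tb_year:.0f} kWh per TB per year")       # 350 kWh per TB per year
```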
What is needed is a less expensive way to store data which can stay secure for a very long time.  
This is exactly what scientists at ORC have achieved in their new storage device combining the best of laser and nano-technologies (NT). For an introduction to NT for non-specialists, you can look at my course notes available here.  Talk-5 deals with the digital revolution. 
What is the Technology? -   (I am much obliged to ORC for providing me a clear description of their technology).  Ultrafast laser induced nanogratings in fused quartz, the key of the eternal 5D memory storage system, were first discovered by Professor Peter Kazansky at ORC. These nanogratings exhibit some extraordinary properties such as extremely high thermal and chemical stability, as well as ability to manipulate transmitted light.
In conventional optical media, such as DVDs, data is stored by burning tiny pits into one or more layers of the plastic disc - using three spatial dimensions. When the data-recording ultrafast laser marks the glass, it does not just make a pit: it makes a pit containing self-assembled nanogratings, the smallest embedded structures ever produced by light. The orientation (4th dimension) and strength (5th dimension) of these nanogratings act as two additional parameters and increase the amount of digital data held per pit. During retrieval, these two extra dimensions also interact with the oncoming light, modulating the transmitted light, from which the information stored in the five dimensions can be derived. The estimated ultimate capacity achievable with the 5D data storage technology is 360 TB per disc.
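To see how two extra parameters per pit multiply the capacity, here is a minimal sketch. The level counts are purely illustrative assumptions of mine, not figures from the ORC work: each independently resolvable setting of orientation and strength adds bits on top of the single bit a plain pit carries.

```python
import math

# Illustrative only: the level counts below are assumptions, not ORC figures.
# A plain pit stores 1 bit (pit / no pit). If the nanograting's orientation
# and strength can each be set to one of several distinguishable levels,
# each pit carries extra bits on top of that:
orientation_levels = 4    # assumed number of resolvable orientations
strength_levels = 4       # assumed number of resolvable strengths
extra_bits = math.log2(orientation_levels * strength_levels)
print(f"{extra_bits:.0f} extra bits per pit on top of the 1-bit pit itself")
```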

The recording system uses an ultrafast laser to produce extremely short (femtosecond - a million-billionth of a second), intense pulses of light. The file is written in up to 18 layers of nano-structured dots separated by 5 micrometers in fused quartz.  The self-assembled nanostructures change the way light travels through the glass, modifying its polarization, which can then be read using an optical microscope and a polarizer similar to that found in Polaroid sunglasses.



What is the future? -  As with any new technology, it takes time to reach maturity, and practical demonstration is important.  ORC have already demonstrated the efficacy of 5D data storage by writing some important documents on their glass discs.
The next step is obviously to commercialize the technology for wider uptake.  
The place where I see 5D storage being most useful is data archiving.  I am not sure how expensive the retrieval system will be, given the state-of-the-art laser systems required.  

As I have been emphasizing in my publications, the new technologies are progressing rapidly and delivering marvelous new inventions and discoveries.  But it is in synergy that the real promise lies - where two or more new technologies come together and yield benefits far in excess of what any one of them could achieve individually.


Post Script:  When I first read about the research as 5D data storage, I tried to figure out what the five dimensions could be.  Physicists understand the three space dimensions and are happy to accept the fourth dimension of time.  Space-time forms the four dimensions as far as physics is concerned.  Technologists obviously do not follow the same nomenclature, and that causes confusion.  3D printing was fine, but now we have 4D printing too.  5D data storage uses three dimensions of space and two parameters of nanogratings in fused quartz.  I would have called the system 3D2P storage.  Just a thought - apologies if this does not go down well.  
Prefixes for units









Thursday 18 February 2016

Why Does Steam Cause More Severe Burns Than Hot Water? - Physics of Thermal Burns

(Click on a slide to view its bigger image)

A question I am often asked is why steam appears to be so much more effective than hot water in causing burns. The majority of thermal burns are caused by momentary contact with a hot agent - water, steam, a hot iron. Our body reacts very quickly to move away, so the contact time is of the order of our reaction time - let us say a tenth of a second, or 0.1 sec.  During this short time, heat energy is transferred to the local spot on our outer skin (epidermis) and raises its temperature enough to cause the burn.  Slides at the end of the blog provide information about the structure of human skin.  The reason steam causes more severe burns is that it carries much more energy than water - we shall return to this after some preliminaries.

The epidermis starts to get damaged at temperatures above 44 C.  If the temperature of the water is higher, more heat energy is transferred to the tissue and the damage happens more quickly and more severely. One talks of damage to the skin in terms of the degree of burn - a first-degree burn is the mildest, while a fourth-degree burn is severe and life-threatening.  Slides at the end of this blog describe the classification of burns.
For a third-degree burn to happen, the required time of contact is as follows:



1 second at 69 C water temperature
2 seconds at 65 C
5 seconds at 60 C
15 seconds at 56 C
Notice that the time to burn is not linear: it falls rapidly as the temperature of the water increases.
For boiling water at 100 C, a third-degree burn - which is very serious - takes much less than 0.01 sec; first- and second-degree burns happen at even shorter contact times.
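The non-linearity is easy to see by taking ratios of successive entries in the list above. This small Python sketch uses only the quoted data:

```python
# Scald data quoted above: (water temperature in C, seconds to a
# third-degree burn). Successive ratios show how fast the burn time grows
# as the water cools - roughly doubling or tripling every 4-5 degrees.
data = [(69, 1), (65, 2), (60, 5), (56, 15)]
for (t_hot, s_hot), (t_cool, s_cool) in zip(data, data[1:]):
    print(f"{t_hot} C -> {t_cool} C: burn time x{s_cool / s_hot:.1f} "
          f"over a {t_hot - t_cool} degree drop")
```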
If you splash boiling water droplets on your skin or touch a hot iron, a burn will happen.  The heat energy is rapidly conducted away to the surrounding tissue, and this limits the size of the burn.  If you splash a lot of boiling water, the area of contact is greater, heat conduction away from the central part is poorer, and the resulting burn is more serious in the centre.  Hot oil causes even more serious burns because oil tends to be hotter (greater than 100 C) and is sticky - it does not fly off the skin as rapidly as water does.

Before looking at scalding by steam, let us consider the threshold energy that can cause burns.  I was able to find data for arc-induced second-degree burns.  Since skin burns mainly depend on the temperature increase of the skin, the data can be used to get some idea of the amount of heat required to cause a second-degree burn by water as well.  The IEEE P1584 and NFPA 70E standards state that a second-degree burn is possible from exposure of unprotected skin to an electric arc flash above an incident energy level of 5 J per square cm. 

The figure shows the time to burn for different values of energy flux (the amount of energy delivered to one square cm of skin per second).  The higher the energy flux, the shorter the time to a second-degree burn.  

Now, we are ready to talk about steam and hot water burns.
The idea here is that both hot water and steam deposit energy on the skin but the rate of this energy deposition is greater for steam than it is for water because steam carries much more energy than hot water.  To explain this, we need to do some interesting physics.
Think what happens when you heat water in a container: 
Water temperature increases steadily until it reaches 100 C. The temperature then stays at 100 C while water is converted to steam - the conversion continues, with the temperature fixed at 100 C, until all of the water has changed into steam.  Where is the heat energy going?  It is being used to break the bonds between water molecules and set them free. This energy is called the latent heat of vaporisation of water - latent because the energy is hidden and has not resulted in a temperature change. This is shown in the slide. 



The interesting thing is that raising the temperature of 1g of water from 0 to 100 C requires 418 J of energy, but converting 1g of water at 100 C into steam requires 2260 J - more than five times as much.  


If water or steam touches the skin, its temperature drops very quickly to about 40 to 50 C (skin temperature is about 36 C), and 1g of water at 100 C gives up about 209 J of energy to the tissue.  Steam gives up the latent heat plus those 209 J - about 2469 J in all, almost 12 times more energy. But we must remember that steam is much lighter than water, so only a small amount of steam lands on the skin.
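The arithmetic above can be checked in a few lines, using the round figures quoted in the post:

```python
# Energy budget for 1 g of hot water vs. 1 g of steam cooling onto skin,
# using the round figures quoted in the post.
c_water = 4.18         # specific heat of water, J per g per C
latent_heat = 2260.0   # latent heat of vaporisation, J per g

heat_0_to_100 = c_water * 100     # 418 J to heat 1 g of water from 0 to 100 C
water_gives = c_water * 50        # ~209 J released cooling from 100 C to ~50 C
steam_gives = latent_heat + water_gives   # steam condenses first, then cools

print(f"water: {water_gives:.0f} J, steam: {steam_gives:.0f} J, "
      f"ratio: {steam_gives / water_gives:.1f}x")
```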

I think the actual situation is as follows:
Boiling water is water at 100 C mixed with a lot of steam.  Burns due to boiling water are exacerbated by the presence of steam.  Water at a lower temperature - say 70 or 80 C - will have no steam mixed with it, and a drop of it will not cause as serious a burn as a drop of boiling water.

Structure of the Skin


 Classification of Thermal Burns


UPDATE (February 2019):  A recent article in Medscape on thermal burns is a must-read for its detailed medical aspects, presented in a way that is easily understood by non-medics:

https://emedicine.medscape.com/article/1278244-overview#a1



  











  

Friday 12 February 2016

Gravitational Waves - Theory of General Relativity - Background and Historical Perspectives


Everybody is talking about the successful detection of gravitational waves (GW) at aLIGO (advanced Laser Interferometer Gravitational-Wave Observatory). 
Einstein's theory of general relativity has passed all tests.

Observations of the binary pulsar discovered in 1974 by Taylor and Hulse had confirmed the emission of GW in exact accordance with the predictions of general relativity.  It is the direct detection of GW by aLIGO that has now provided the final and most stringent of tests.

The discovery itself has been covered extensively in many places - I provide some of the links:
Abbott et al (LIGO Collaboration) - Original Research Paper
Physics World
Science Daily
Time.com

The excitement of this discovery is being felt throughout the world.  As happens with many scientific discoveries, the general public tends to have a short memory and the research is forgotten very quickly.  The theory of relativity is a very difficult concept to digest - even Einstein had great difficulty convincing his fellow scientists of his theory. (Einstein was awarded the Nobel Prize in 1921 not for his theory of relativity but for explaining the photoelectric effect - such was the difficulty scientists had in grasping what Einstein was telling them.)  It is important that some historical perspective is provided for the discovery.  The UK newspaper Independent has an excellent introduction to the background to the discovery of gravitational waves.  

I had published my PowerPoint slides relating to a community outreach course on Einstein's Theories of Special and General Relativity and this is an excellent source for non-specialists who wish to learn more about Einstein's biography and his theories.

In the following, I reproduce the slides from my course relating to the 1974 discovery of the binary pulsar by Taylor and Hulse, for which they were awarded the 1993 Nobel Prize. Their observations confirmed the emission of gravitational waves (GW) and provide an important landmark.  Some slides explaining the physics of black holes are also included.








In 1919, Eddington provided the first experimental test of general relativity when he observed the bending of light from distant stars by the gravitational field of the Sun - Newton's theory of gravitation predicts only half the observed deflection, while general relativity predicted the exact number.  

For the binary pulsar, the effect is 100,000 times greater than for Mercury's orbit, and Taylor and Hulse's observations were really wonderful in establishing the general theory on a firm footing.






 
Since Taylor and Hulse's discovery of the first binary pulsar system, other binary pulsars have been observed.  The next slide describes a pair that has a four times larger annual advance of periastron (the point on the orbit at which the distance between the stars is least).


















APS has free access to important papers on general relativity - for how long, I do not know.











Tuesday 9 February 2016

Future of Privacy - What is Privacy and Why it is Important? (Part 1)

Blogger Profile - Who am I?  (Send comments to ektalks@yahoo.co.uk)

Privacy is an emotive topic, and in these days of 'connected people' any mention of privacy draws strong reactions - and not without good reason.   Over the last three decades, accelerating technological advances have transformed the way we interact with fellow humans. Data flow, and hence information flow, has increased beyond expectations.  Humans have always adapted to new situations, but it takes time - maybe a decade or more - which is too slow to cope with the current rate of increase in information flow.  This has created strains between different generations.  Moral and ethical values are no longer a given; each individual struggles to find his or her own codes - many a time, not very well.  The confusion leads to behavioural problems and affects the cohesion of our societies. Naturally, with weaker societal benchmarks, emphasis has shifted to individualism - looking after oneself. This has created a paradoxical situation, because individualism conflicts with the erosion of privacy that accompanies the tide of information flow.

Before going further, I would like to understand what one means by Privacy.  
I like the punchy statement - Privacy is the right to be left alone. 
Oxford Dictionary (OD) - Privacy is freedom from intrusion or public attention; avoidance of publicity. 
UN Declaration of Human Rights says:  No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor or reputation.  Everyone has the right to the protection of the law against such interference or attacks.
All very 20th Century!  
I find the 'protection of the law' part ironic.  The law has traditionally protected only the rich and, as we shall see in my forthcoming publication about the 'Future of Law', the situation is only going to get much worse for the common man.  

The concept of Privacy has changed over time and is being continually redefined.  In older societies, when people lived in small communities, everybody in the village knew everything about everyone else.  Houses were small and combined families were common. The OD definition of 'freedom from intrusion or public attention' just does not hold for ancient societies. Cities and crowded places, paradoxically, gave people more scope to be on their own: everybody is too busy to worry about fellow beings, and some sort of privacy (OD style) became possible.  
In the modern context, Privacy sets a goal of what is desirable but seldom achievable.  The goal is laudable, as freedom from intrusion brings its own benefits - one can feel relaxed and happy, and could possibly be more original and creative.  Having said this, I think some of the greatest contributions in art, music, literature, science... were made by people who were unhappy and stressed.  Such are the complexities of the human mind!

What does the public think about the threat they feel from different organisations? - the slide shows the result of a recent survey: (Click on slide to see bigger image)
Something really interesting is apparent from the survey: less than 3% of the people surveyed trust their government to keep data about them secure. Private business is trusted far more.  This is totally understandable, and we shall have a lot more to say about it in Part 3.

Let us first look at the reasons why Privacy is considered important.  As we shall see later, rapid advances in technology will result in a wholesale redefinition of Privacy. Even now, some people, mostly government security czars, dismiss Privacy as unimportant for people who have nothing to hide.  
Daniel Solove has explained why Privacy matters.  Let us look at some of his reasons:
a.  Personal Data:  The more someone knows about us, the more power they can have over us. It can be used to affect our reputation, influence our decisions, shape our behaviour. It can be a tool to exercise control over us.
b.  Privacy enables people to manage their reputation: How we are judged by others affects our opportunities, friendships and overall well-being. Even after knowing the truth, people judge badly, they judge in haste, they judge out of context, they judge without hearing the whole story, and they judge with hypocrisy.  Privacy helps people protect themselves.
c.  Privacy helps people to establish appropriate social boundaries from others in society  
d.  Privacy ensures trust in relationships:  In professional relationships, e.g. with doctors, lawyers.., trust is key.  Trust broken in one relationship makes it more difficult to establish trust in new relationships.
e.  Privacy is key to freedom of thought - be it exploring ideas outside the mainstream, ideas that family and friends dislike, or political activity.
f.  Privacy nurtures the ability to change and have second chances without being shackled by past mistakes - it allows people to reinvent themselves.
g.  Privacy matters because one does not have to explain or justify oneself all the time - it can be a heavy burden if we constantly have to wonder how everything we do will be perceived by others who might lack complete knowledge and/or understanding.

Of course, absolute privacy is also undesirable - that would be isolation from society.  Humans are gregarious by nature; they love company and readily talk to others about personal matters.  Living in a society requires voluntarily surrendering some of your privacy, and most people are comfortable with this.  But, as the survey indicates, they are also concerned that our institutions - companies and governments - do not have the ability, means or willingness to keep their private data secure.  We shall look at this in more detail in Part 3. 

The reason Privacy is a 'hot' topic just now is the latent ways in which people's privacy is compromised while they appear to have no control over it.  New advances in digital and nano technologies allow effective collection and manipulation of personal information.  Private information given in good faith in one place can be combined with other information about you to generate a loss of privacy that you never agreed to.  In fact, it might be fair to say that with digital information flow, privacy may be impossible to defend.  We shall look at the question of physical and digital privacy in Part 2.

A development in technology that is hailed as the new paradigm is the arrival of the Internet of Things (IoT), which promises to deliver a connected world. The implications of the IoT for privacy are far-reaching and somewhat frightening - particularly in terms of weak security making people more vulnerable to cyber crimes. We shall look at this in Part 4.