The Mystery of General Anaesthesia

credits: drahnsupkim.blogspot.com

Anaesthesia (from Greek, ‘loss of sensation’) refers to a temporary, drug-induced state of diminished or completely eliminated sensory perception. Several forms exist, from the loss of sensation in a localised area of the body (regional anaesthesia) through to the total loss of conscious awareness under general anaesthesia – effectively a reversible coma.

General anaesthesia is undoubtedly one of the most profound feats of modern medicine. The technique enables the most gruelling surgical procedures to be performed without the slightest pain or memory.

However, despite doctors routinely using general anaesthesia for the last 150 years, exactly what transpires inside our brains during this curious phenomenon remains a mystery.

Understanding this peculiar enigma could help unveil the true nature of consciousness itself, a concept scientists and philosophers alike have grappled with for centuries.

A Brief History

Attempts at producing a state of unconsciousness equivalent to the modern general anaesthesia can be traced back deep into recorded history in ancient writings from great civilisations across the globe – the Sumerians, Babylonians, Romans, Egyptians and Chinese (to name a few).

Primitive anaesthetic drugs were herbal concoctions, usually consisting of extracts derived from the opium poppy, mandrake, jimsonweed, marijuana and alcohol. Such remedies did not, however, ‘turn out the lights’ of consciousness, only going as far as numbing the pain and inducing a somewhat sleepy state. Sometimes, the patient was even knocked unconscious with a targeted blow to the head. People would often opt for certain death rather than endure the intolerable pain of crude surgical procedures.

An illustration from the 13th century manuscript, ‘Surgery’, currently stored in Trinity College Library, Cambridge

The Renaissance era heralded a golden age for advancements in science. Revolutionary discoveries were made in human physiology – from the detailed anatomical sketches of the legendary polymath Leonardo da Vinci (1452-1519) to the pioneering discoveries in cardiology by the English physician William Harvey (1578-1657).

Despite this acceleration in anatomical knowledge, it was not until the 18th and 19th centuries that significant advancements in general anaesthesia were made. For example, the English scientist Joseph Priestley isolated nitrous oxide (‘laughing gas’) in the late 18th century. The chemist Humphry Davy, building on Priestley’s work, observed that when inhaled, the gas was “capable of destroying physical pain.”

Further discoveries in pharmacology, alongside the development and careful refinement of surgical practices, have led to general anaesthesia as we know it today. Although a diverse range of anaesthetic drugs is now available, many still contain derivatives of early substances such as morphine and nitrous oxide.

Looking back on how far general anaesthesia has come, we can be grateful we were born in an age where surgery is not accompanied by the tortured screams of intolerable pain.

Dimming the Lights

Consciousness is often thought of as ‘all or nothing’ – a state that can be activated by flipping on some metaphorical switch.

However, after administration of anaesthetic drugs, the patient drifts gradually into unconsciousness, making the process more akin to the slow dimming of lights. The journey into an unconscious state can be divided into four distinct stages: ‘induction’ (light-headedness while still maintaining conversation), ‘excitement’ (a burst of energetic and delirious behaviour accompanied by irregular heart and respiration rates), ‘unconsciousness’ (the eyes roll back, pain reflexes are diminished, and heart and respiration rates steady) and the final ‘overdose’ stage, where the patient has fallen into so deep a coma that they require both cardiovascular and respiratory support.

Anaesthetists can monitor brain activity using an electroencephalogram (EEG). This sophisticated tool allows measurement of neural activity as a patient transitions from a conscious to an unconscious state during general anaesthesia.

The galaxy of billions of neurones making up the active brain generates electrical impulses which can be detected by the electrodes of an EEG. The collected electrical signals are transmitted to a computer which translates them into oscillating patterns of peaks and troughs – brain waves.

EEG readings, credits: eegatlas-online.com

As an anaesthetised patient traverses into the world of unconsciousness, their normal high-frequency but low-intensity brain wave pattern changes into one with high-intensity bursts at lower frequencies. Curiously, these high-intensity bursts appear to manifest at regular intervals, as if brain processes are occurring in an organised fashion.
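To make the idea of a shifting brain-wave spectrum concrete, here is a minimal, purely illustrative sketch in Python (using NumPy): two synthetic traces stand in for the awake and deeply anaesthetised EEG, and a simple Fourier transform picks out the dominant frequency of each. The sampling rate, frequencies and amplitudes are invented for illustration, not clinical values.

```python
# A toy illustration (not clinical data): synthesise two EEG-like traces and
# compare their spectra. The 'awake' trace is low-amplitude, higher-frequency
# activity; the 'anaesthetised' trace is dominated by large, slow oscillations.
import numpy as np

fs = 250                      # sampling rate in Hz (an assumed, typical value)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of signal
rng = np.random.default_rng(0)

# Awake: low-amplitude, fast (beta/gamma-range) activity plus noise
awake = 5 * np.sin(2 * np.pi * 25 * t) + 2 * rng.standard_normal(t.size)

# Deep anaesthesia: high-amplitude, slow (delta-range) oscillations plus noise
anaesthetised = 40 * np.sin(2 * np.pi * 1.5 * t) + 2 * rng.standard_normal(t.size)

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) carrying the most power, ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

print("awake dominant frequency:        ", dominant_frequency(awake, fs), "Hz")
print("anaesthetised dominant frequency:", dominant_frequency(anaesthetised, fs), "Hz")
```

Run as written, the ‘awake’ trace peaks at around 25 Hz while the ‘anaesthetised’ trace peaks at around 1.5 Hz, mirroring the shift from fast, low-intensity activity to slow, high-intensity waves described above.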

How does it work?

So, general anaesthesia has transformed surgery from an agonising nightmare into a gentle slumber. But the mechanism underpinning this transformation broadly remains a mystery. Perhaps, however, this isn’t so surprising: if we can’t define consciousness, how can we begin to comprehend its disappearance?

It is widely known that drugs work by binding snugly into receptor proteins on or inside our cells, either bringing about a response or actively preventing one. This can be visualised as a ‘lock and key’ concept: the drug molecule is a carefully designed key which fits snugly into the receptor ‘lock’.

The ‘lock and key’ mechanism of drug action, credits: pharmacymagazine.blogspot.com. On binding, drug molecule A either acts as an ‘agonist’ by mimicking the cellular response generated by the naturally occurring ligand (key), or as an ‘antagonist’ by blocking the site to the natural ligand, thus preventing a normal response.
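The ‘lock and key’ picture can be made slightly more quantitative with the standard receptor-occupancy relationship, in which the fraction of receptors bound rises with drug concentration. Below is a minimal sketch; the dissociation constant used is a made-up, illustrative value and does not describe any particular anaesthetic.

```python
# A toy receptor-occupancy calculation for the 'lock and key' picture.
# Fractional occupancy follows the standard Langmuir relationship:
#     occupancy = [drug] / (Kd + [drug])
# Kd (the dissociation constant) is the drug concentration at which half the
# receptors are occupied. The value below is purely illustrative.
def fractional_occupancy(drug_conc_uM: float, kd_uM: float = 1.0) -> float:
    """Fraction of receptors bound at a given drug concentration (both in µM)."""
    return drug_conc_uM / (kd_uM + drug_conc_uM)

for conc in (0.1, 1.0, 10.0, 100.0):
    print(f"[drug] = {conc:6.1f} µM -> {fractional_occupancy(conc):.0%} of receptors occupied")
```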

However, to submerge a patient into such a deep abyss of mental and physical paralysis, anaesthetists rely not on one key, but an extensive list. Anaesthetic agents range from bulky, complex molecules like steroids to the inert gas xenon, which exists as single atoms. Surely the diverse contents of this curious collection do not occupy a universal lock?

For a long time, the general consensus was that instead of the ‘lock and key’ mechanism, anaesthetics work by physically disrupting the hydrophobic domain of the phospholipid bilayer of brain cells, i.e. the ‘fatty’ regions of brain cell membranes. This idea was dubbed the ‘lipid theory’ and was rooted in the observation that the potency of anaesthetics correlates markedly with their lipophilicity (i.e. ability to dissolve in oils).

In the 1980s, however, a shadow of doubt descended upon this picture of the anaesthetic mechanism when simple test-tube experiments revealed the ability of anaesthetics to bind to proteins in the absence of cell membranes, suggesting cell membranes may have little to do with general anaesthesia.

Further weakening the lipid hypothesis, scientists pointed out that cell membrane integrity can also be disturbed by even small deviations from our body’s ideal temperature range, yet this does not induce a state of deep unconsciousness.

The correlation between anaesthetic drug lipophilicity and potency is now thought to stem primarily from the greater ease with which lipophilic molecules can penetrate the blood-brain barrier and exert their effects on neurones inside our brains.

More recent studies have demonstrated that general anaesthetics may interact directly with hydrophobic sites of certain membrane-embedded proteins throughout the central nervous system (CNS). On binding, the drug causes a structural change in the membrane protein. And since protein shape is so intimately linked with function, this action makes communication between nerve cells across the CNS (i.e. the transmission of electrical impulses across synapses) more difficult.

Synaptic gap, credits: biology4alevel.blogspot.com. The gap between two neurones is the ‘synapse’, across which electrical signals are passed via neurotransmitters, which diffuse across the synapse and bind to receptors on the next neurone.

This hypothesis supports the idea that instead of completely shutting down brain activity, anaesthetic drugs meddle with the brain’s internal communication. Perhaps it is this internal communication within the brain, then, that underpins consciousness?

Unfortunately, the challenge of obtaining structural information about hydrophobic membrane proteins, and thus about their interactions with drug molecules, means it remains unclear how anaesthetics truly exert their effects at the molecular level. Further complicating matters, studies have demonstrated that inhaled anaesthetics do not act by the specific binding (lock and key) mechanism; instead, they loosely associate with membrane proteins, disrupting the dynamic modes of motion necessary for their function.

A jigsaw of consciousness

One of the difficulties in studying consciousness is the lack of a universally accepted definition for the phenomenon. After all, what is it like to see red? To taste chocolate? These questions seem to make sense but, delving deeper, you will realise an answer does not exist.

Perhaps a good place to start with defining consciousness is the statement: “I think, therefore I am” – basically, you cannot logically deny your mind (conscious awareness) exists without actively using your mind to do the denying – effectively, consciousness is the faculty that perceives that which exists. This deceptively simple proclamation was composed by the ‘Father of Modern Western Philosophy’ and golden age mathematician, René Descartes (1596-1650).

However, the mechanism by which this perceiving of ‘that which exists’ arises, the biochemical ‘spark’ that stimulates a kaleidoscope of colourful sensations – just how consciousness arises – is a mystery. This ancient conundrum has been dubbed the ‘hard problem.’

Many scientists believe that by systematically charting the breakdown of consciousness during general anaesthesia, some light may be shed on the answer to the ‘hard problem’ of consciousness.

As we discovered earlier, findings have revealed that anaesthetics work by acting on multiple different protein receptors to block the firing of neurones throughout our brains. It appears that the resulting disruption of the brain’s internal communications is a vital element in achieving general anaesthesia.

Based on the above observation, it seems that consciousness is not rooted within a discrete region or receptor of the brain – instead, it is a widely distributed phenomenon. EEG studies have supported this idea by indicating a breakdown in communication between the front and back of the brain during general anaesthesia.

Another intriguing observation comes from functional magnetic resonance imaging (fMRI) studies. fMRI is a powerful brain imaging tool capable of measuring neural activity by detecting changes in blood flow: when a specific area of the brain becomes more active it requires more oxygen, delivered via the blood, so blood flow to that area increases.

fMRI studies have shown that in anaesthetised patients, small ‘islands’ of neural tissue remain active in response to external stimuli such as light or sound. Despite the brain’s ability to detect these stimuli, the patient remains unconscious – somehow, the sensory information fails to be processed and integrated into an overall awareness.

fMRI scan, credits: pbs.org

This analysis strongly supports the ‘Global Workspace Theory’ (GWT) of consciousness as a very basic initial explanation of the hard problem. The theory proposes that sensory information is first unconsciously processed locally in individual brain regions. Each region then ‘broadcasts’ its signals to a network of branching neurones which begin firing in synchrony throughout the brain.

Therefore, according to GWT, it is the complex interactions between discrete brain regions that integrate an overall response to an external stimulus and produce an awareness of this stimulus – consciousness.

A visual representation of global workspace theory, credit: New Scientist magazine

Charting and understanding the loss of consciousness during general anaesthesia may not only illuminate the nature of the conscious mind but also deepen our currently patchy understanding of dampened or altered states of consciousness, for example in those suffering from depression or schizophrenia. Research in this field may also lead to the development of improved techniques for detecting the brain activity, and thus the emotions or needs, of people in a vegetative state – a conscious mind imprisoned in a paralysed body.

It is clear, then, that research into the mechanism underlying general anaesthesia is a worthy endeavour, both for tackling the age-old, elusive ‘hard problem’ of consciousness and, at the opposite end of the spectrum, for developing treatments that directly improve people’s lives.

The art of anaesthesia is truly remarkable. Every day across the globe, millions of people are guided to the brink of nothingness, without us really knowing how. And then led safely back home again.

Organoids

To the naked eye, the minuscule clump of cells appears insignificant. Under a microscope, however, the tiny mass reveals dazzling complexity: the delicate tubules of a kidney, the slimy mucous coated layer of intestinal lining or the exquisite folds of cerebral cortex.

early intestinal organoid, credits: http://www.stemcell.com

By far one of the most intriguing advancements in the field of stem cell research over recent years has been in the development of these ‘organoids’.

The past decade has seen an explosion in research using this revolutionary piece of biotechnology, illuminating discoveries in fields from human development to drug discovery.

An organoid is effectively a miniature organ, generated from stem cell cultures programmed to differentiate into multiple organ-specific cell types.

When cultured, the cells cleverly arrange themselves into three-dimensional systems capable of recapitulating basic functions of the corresponding organ, such as filtration, contraction or even neural activity.

How are they made?

Organoids can be grown from pluripotent stem cells or from cells extracted from primary tissues (i.e. from the specific organ).

A stem cell can switch certain genes on or off to specialise into another cell type. Stem cells are thus ‘undifferentiated’, as they are yet to commit to a developmental path leading to a specific function.

‘Pluripotent’ stem cells have the potential to become any of the more than 200 cell types comprising the human body, making them extremely valuable for research.

Such cells are derived directly from a human embryo, or from already-specialised adult cells which are genetically reprogrammed to an embryonic-like state (induced pluripotent stem cells, or iPSCs).

A stem cell being extracted from a blastocyst (an early-stage human embryo), credits: https://www.thermofisher.com/blog/biobanking/stem-cell-biobanking-ethics-control-and-justice/

Stem cells have the unique property of self-renewal. Unlike a red blood cell, liver cell or any other specialised cell, a stem cell can replicate itself, generating an identical stem cell.

An initially small stem cell population can therefore proliferate for many months in a laboratory, yielding millions of identical stem cells.

To grow organoids, stem cells are embedded into a three-dimensional medium, or ‘scaffold’, upon which their population can grow.

The 3D medium also contains biochemicals which trigger the stem cells to specialise into a particular cell type.

These biochemicals are known as ‘growth factors’ and effectively mimic the natural cues the body sends during different stages in cell development.

The proliferating stem cell population thereby matures into organ-specific cell types and, gradually, supported by the 3D medium, the cells self-organise into the intricate organoid system.

The zoo of organoids

Although it is still early days for this pioneering technique, progress is being made rapidly.

To date, a diverse array of organoids have been grown in vitro, a sample of which are summarised below.

Scientists have also succeeded in growing organoids of the pancreas, intestines and even the womb.

Bile-ducts

Bile ducts are a network of long, tubular structures extending from the gall bladder, a little pear-shaped green sac of concentrated bile nestled just below your liver. It is via these tubes that bile, a green-yellow fluid, flows into the upper part of the small intestine – the duodenum – where it aids the breakdown of lipids (fats) during digestion.

When these ducts fail, toxic bile gradually accumulates in the liver, with fatal consequences. The concentrated bile permanently scars the liver in a process known as cirrhosis.

Despite the drastic effects of bile duct malfunction, our present insight into bile duct disorders is limited. A barrier to research into this field is the lack of bile duct ‘models’ scientists can use to explore how these disorders develop on a deeper level and to effectively test out drug treatments. Ultimately, the sole option for victims of such diseases is liver transplantation.

However, new organoid technology offers a glimmer of hope.

Just last year, researchers at the University of Cambridge extracted cells from the bile ducts of healthy volunteers and grew them on a biodegradable support made of collagen protein.

After four weeks, the cells had multiplied, completely coating their scaffold. The flexible scaffolds were then carefully folded into tubes resembling the shape of bile ducts.

The result was extraordinary: the artificial tubes exhibited the key features of a normal bile duct.

Bile duct organoids

These intricate organoids were surgically transplanted into mice to replace their damaged bile ducts. The procedure was an exciting success with the mice surviving without further complications.

It is believed that such technology will lead to the generation of human-scale bile ducts, opening the door to an alternative treatment for biliary disorders, without reliance on scarce liver transplant supplies.

Furthermore, these miniature bile ducts are the perfect models for testing out new drugs for treating bile duct diseases.

Lungs

Numerous research groups have taken on the daunting challenge of growing organoids representing the branching complexity of the lungs.

Lung organoids are derived from induced pluripotent stem cells – body cells genetically reprogrammed to an embryonic state. These are clean slates for differentiation into a diverse array of specialised lung cells.

Once the cells are isolated, scientists effectively trigger them to undergo the process of lung development as it would occur in the womb, by initiating ‘gastrulation’ (an early phase of embryonic development) in vitro.

During gastrulation, cells organise into three distinct layers, including the ‘endoderm’ layer, from which the lungs sprout.

The resulting miniature lungs offer a unique window into understanding key aspects of the most severe lung diseases such as emphysema, cystic fibrosis and the many forms of lung cancer.

Bright-field microscopy image showing a lung organoid in the process of development, credits: Snoek Lab/Columbia University Medical Centre

Brains

The great Greek philosopher Aristotle believed the root of human consciousness lay in the heart, a belief echoed by the Egyptians, who even emphasised in the Book of the Dead how, during burial, great care should be taken to preserve the heart of the dead, while the insignificant slimy lump of grey matter, the brain, could be scooped out and thrown away.

Clearly, our knowledge of the brain has come on in quantum leaps since ancient times. However, the entire discipline of neuroscience only really began to flourish with the birth of fMRI in the early 1990s. Since the advent of this revolutionary technique, scientists have painstakingly begun mapping the brain’s activity, pinpointing regions responsible for specific thoughts, feelings or behaviours. Through this pioneering work, some light is being shed on the origins and depth of consciousness – ultimately, what it truly means to be human.

Neuroscientists across many generations have definitely endured a steep intellectual climb to arrive at our present understanding of the brain, demonstrating the sheer complexity of this magnificent organ – a universe of 80 to 90 billion neurones, organised across a labyrinthine network of neuroanatomical structures linked by trillions of connections.

neuron cells, credits: Nancy Kedersha/SPL

Recent developments in which researchers have grown mini-brains (‘cerebral organoids’) in vitro no doubt symbolise a new and exciting chapter in this epic scientific journey.

Cerebral organoids are grown from induced pluripotent stem cells, which are allowed to mature into an embryoid body containing a ‘neuroectoderm’ layer – the layer from which neural cells specialise and proliferate into the brain and spinal cord.

These cells are isolated and cultured within a ‘matrigel droplet’. The spherical droplet shape encourages the proliferating neural cells to migrate from the gel’s surface to its interior, eventually forming a three-dimensional cerebral organoid.

Furthermore, the cells organise themselves into distinct neuroanatomical regions: the cerebral cortex, retina and even the early hippocampus, which plays a significant role in short-term, long-term and spatial memory.

Unlike in natural brain development in the womb, lab-grown brains lack a constant blood supply. Nutrients and oxygen therefore cannot penetrate to the centre of the structure, limiting the resulting complexity and size – cerebral organoids tend to reach a maximum of about 4 mm after two months.

stained cross section of a cerebral organoid seen under a microscope, credits: Madeline A. Lancaster/IMBA

Not only can the cerebral organoid provide insight into brain development, but, crucially, it can aid researchers in unlocking the key mechanisms underlying the diverse range of human neurological conditions that remain largely shrouded in mystery.

For example, what causes autism or schizophrenia? And what are the mechanisms underpinning the development of devastating neurodegenerative diseases like Alzheimer’s?

It is intriguing to consider the complicated ethical implications neuroscientists may encounter as this technique is further refined, bringing a glob of cells closer to resembling the true complexity of the human brain.

Could scientists one day ‘grow’ a cerebral organoid with some level of conscious awareness? And if so, would it be morally wrong to use it as a (highly accurate) model for exploring the roots of neurological disease and even consciousness itself? Images of fully formed, disembodied brains pulsating in glass jars send a shiver down the spine.

Although we cannot ignore these ethical dilemmas, we must keep in mind that this radical technology is still in its infancy.

The End of Ageing?

For thousands of years, humans have fantasised about eternal life. Mentions of the mysterious ‘fountain of youth’ echo from as far back as the 5th century BC in the writings of Herodotus, and immortality, be it through reincarnation or ascension into a celestial kingdom, is an idea deep-rooted in religion.

Perhaps it is not only the discomfort or growing sluggishness of the mind promised by old age that drives the weaving of these myths, but also, our innately human fear of the uncertainty beyond the final frontier: death.

‘The Fountain of Youth’, painted by Lucas Cranach the Elder in 1546

But times have changed. With the dramatic acceleration in standards of healthcare, hygiene and nutrition over the past century, this final frontier edges ever further.

Although there are a few exceptions, life expectancy has been increasing by about 2.5 years every decade; that’s 25 years for every century. It will soon be the norm to surpass even the 100th year of life.

It is particularly striking to realise that the whole idea of ‘old age’ is largely a phenomenon of modern times: at the turn of the 20th century, average life expectancy in the US was barely 45 years – people simply didn’t survive long enough to encounter old age.

How do we age?

It might be surprising to discover that ageing, a process so intrinsic to life itself, remains a major mystery of biology. While some scientists claim our deterioration is coded into our genes, others believe the cumulative damage inflicted by metabolic processes is at the root of our demise.

Of course, to develop effective interventions for slowing or even halting the ageing process, scientists must first identify the mechanism(s) by which it occurs. Another critical aspect of the longevity debate is whether ageing occurs via multiple pathways or just one.

So far, five main hypotheses have been proposed:

The Error Hypothesis

Cumulative cell death results from errors that may occur during DNA replication, transcription or the translation of RNA into proteins.

The Cross-linkage Theory

Based on the curious observation that with age, protein molecules and DNA develop abnormal ‘cross-linkages’ with each other. These unnecessary bridges trap proteins into a state of reduced mobility. Proteins are thus increasingly prevented from catalysing vital cellular processes. A particular problem is that enzymes are increasingly less able to break down toxins or damaged/unneeded proteins which stick around and cause further issues.

The Brain Hypothesis

This theory focuses on the neuroendocrine system – a complex biochemical network of hormones secreted from the hypothalamus, an almond-sized area at the base of the brain. The hypothalamus is often referred to as the ‘master gland’ as it stimulates and inhibits the release of hormones from the pituitary gland and other key glands of the body (e.g. thyroid, ovaries, testes).

As we age, the control the hypothalamus has over the various glands weakens, and even individual receptors on target cells become less sensitive to hormones. Hormones are required to initiate intracellular responses to external changes, maintaining the body’s equilibrium (homeostasis).

Ultimately, the body is less able to maintain important parameters like blood glucose at ideal levels, with detrimental consequences.

The Autoimmune Theory

With age, the ability of our immune system to synthesise disease-fighting antibodies declines and we become increasingly vulnerable to a constant minefield of bacteria and viruses.

Also, crucially, the extent to which the immune system can distinguish between its own proteins and those of foreign invading species is reduced. As a result, the immune system essentially attacks and destroys the body’s own cells.

The Free Radical Theory

Oxidative free radicals are generated as toxic by-products of cell metabolism. Natural antioxidants in our cells ‘sop up’ free radicals by reducing them (donating electrons to them). However, free radicals that escape the cleanup process build up and the cumulative damage they inflict on DNA, proteins and mitochondria may be a major contributor to ageing.

Longevity research

As you can see, scientists are still undecided on an exact theory of ageing, and to further confuse matters, the ageing process is most likely a complex interplay between these mechanisms and a range of other metabolic processes.

Nevertheless, the past decade has seen a boom in the discussion surrounding life-extension research, with increasing funding poured into this area. It is even a hope among some scientific circles, particularly in Silicon Valley, that we can one day ‘end’ ageing altogether.

Researchers known as ‘biomedical gerontologists’ believe that by understanding the various biochemical mechanisms underpinning ageing, they can stop the ageing process, much like curing a disease.

In fact, in 2014, the Korean-American physician, hedge-fund manager and investor Joon Yun launched the ‘Palo Alto Longevity Prize’ – an incentive prize encouraging research teams from across the globe to compete in an effort to end ageing.

Below are summaries of just a sample of the discoveries in this buzzing field of research:

Mutant Worms

C. elegans is a species of roundworm. This 1 mm long, unsegmented, transparent worm is an ideal ‘model’ organism for scientific research – all 959 of its somatic cells are easily viewed under a microscope and, despite its primitive biology, many of its genes have functional counterparts in humans. This makes C. elegans the ideal compromise between complexity and tractability.

In 2013, scientists at the Buck Institute for Research on Ageing induced mutations in two genes of C. elegans. These mutations were known to inhibit key molecules involved in nutrient and insulin signalling – metabolic pathways that had previously been identified for their role in longevity.

The result was a remarkable five-fold increase in the 2-3 week life span of C. elegans. An equivalent five-fold increase in human life span would amplify our life expectancies to around 400 years.

c. elegans – credit: blog.ucdavis.edu/egghead/files/2014/04/worn.jpg

This was an astounding discovery. Not only did it illustrate the central role of genetics in ageing but also demonstrated that a combination of multiple gene mutations, not just one, could be key in increasing life span.

However, scientists are yet to replicate these results in more complex organisms, let alone humans.

Dietary Restriction

In the 1930s, dietary restriction (DR) was found to significantly increase life span in rats maintained on a strict caloric intake falling just short of malnutrition. The surprising effects of DR were later replicated in other animals, including C. elegans and, perhaps most controversially, laboratory rhesus monkeys.

The effects of DR have for many years been an established piece of knowledge in longevity research, and the health of ageing DR rodents has been monitored over the long term. The animals show an unexpected improvement in function compared with normally fed animals, and a delayed onset of age-related diseases.

These findings have generated widespread interest in the effects of DR in humans. However, trials testing caloric restriction in humans would be difficult, requiring extraordinary levels of self-discipline among participants and a lifestyle heavily burdened with control measures. We must also ask ourselves whether a life of DR is really worth living, despite its potential to offset ageing.

Carnosine

Carnosine is a compound occurring naturally in brain and muscle tissue. Since being discovered by the Russian chemist Vladimir Gulevich, the compound has been shown to be involved in free radical scavenging (i.e. to act as an antioxidant), and it has been suggested to be involved in the prevention of cross-linkages.

As we age, our levels of carnosine decline. Numerous studies proclaim the potential of carnosine supplementation as a means of life extension.

Carnosine molecule – credit: upload.wikimedia.org/wikipedia/commons/thumb/e/eb/Carnosene.svg/1200

Telomeres

Telomeres are regions of repeated DNA nucleotide base sequences at the ends of chromosomes.

These repetitive DNA sequences protect the ends of the chromosome from ‘fraying’ like thread over time and from the general wear-and-tear they are vulnerable to as they dance around the cell during mitosis (or meiosis).

chromosomes – credit: images.fineartamerica.com

 

For a number of years, many scientific circles have held that the gradual shortening of telomeres is behind human ageing. This concept treats telomere length as a kind of morbid ‘cellular clock’.

Studies have indeed shown that telomere shortness and age are closely correlated. However, it remains unknown whether telomere shortening is intimately linked to the ageing process itself or is simply a side effect of age.
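As a rough illustration of the ‘cellular clock’ idea, the sketch below counts how many divisions a cell could undergo before its telomeres erode to a critically short length. All of the numbers (starting length, loss per division, critical threshold) are assumptions chosen only to show the arithmetic, not measured values.

```python
# A back-of-the-envelope 'cellular clock': how many divisions before a telomere
# erodes to a critically short length? All numbers are illustrative assumptions,
# not measurements from any particular study.
def divisions_until_critical(start_bp: int = 10_000,
                             loss_per_division_bp: int = 75,
                             critical_bp: int = 5_000) -> int:
    """Count cell divisions until the telomere drops below the critical length."""
    length, divisions = start_bp, 0
    while length > critical_bp:
        length -= loss_per_division_bp
        divisions += 1
    return divisions

print(divisions_until_critical())   # 67 divisions with the assumed numbers
```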

At present, this area is a hot topic in longevity research, and scientists worldwide are exploring ways of lengthening telomeres, or at least interfering with their inevitable shortening. Telomerase is the enzyme responsible for rebuilding shortened or damaged telomeres, and various telomerase-activating drugs are currently under development.

What is CRISPR?

Credit: http://www.awesome-u.org

CRISPR is a unique technique for editing genetic code – the sequence of nucleotide bases encoded in the DNA of every organism, from viruses to humans, that dictates growth, development and function.

This novel technology, derived from an ancient defence mechanism in a diverse range of bacteria, carries the exciting promise of transforming the field of biomedical research and beyond.

You may be wondering why this gene editing technique is causing such hype – after all, scientists have been tinkering with DNA for decades already.

However, CRISPR presents the opportunity to manipulate genetic code with unprecedented accuracy and efficiency. This targeted precision offers the potential of curing genetic diseases, but also brings us closer to the possibility of ‘designer babies’, a topic heavily burdened with ethical implications.

How does it work?

From as far back as the 1980s, scientists noticed an interesting pattern in the DNA of some bacteria. One set of DNA nucleotide bases would be repeated again and again, separated by other, unique sequences. This curious phenomenon was dubbed ‘clustered regularly interspaced short palindromic repeats’, or CRISPR.

It later dawned upon scientists that these unique sequences matched up with the DNA of viruses that specifically prey upon bacteria. These CRISPR sequences are a gallery showcasing the various viral ‘enemies’ the bacterium has encountered. But why do bacteria store up this genetic code?

The answer lies in this ancient mechanism of bacterial immune defence. Keeping this viral DNA means the bacterium can rapidly recognise and initiate an attack on the virus on its next invasion.

The key players involved in this defence are known as Cas enzymes. The bacterium transcribes the stored viral DNA sequences into RNA molecules, and each Cas enzyme associates with one of these RNA molecules. If a Cas enzyme encounters a virus whose genetic material matches the sequence encoded on its CRISPR RNA molecule, it will chop the viral DNA in two, thereby preventing it from replicating further inside the cell. Thus, the CRISPR RNA acts as a kind of ‘weapon’, precisely guiding the Cas enzyme assassin to its victim.

There is a diverse range of Cas enzymes, but the most widely known is ‘Cas9’. The Cas9 enzyme and its associated CRISPR RNA sequence together make up the CRISPR-Cas9 system (often shortened to just ‘CRISPR’).

It may now be becoming apparent to you how specific sequences of DNA are targeted by CRISPR technology. The key lies in the CRISPR RNA molecule harnessed by the Cas9 enzyme.

By tailoring the base sequence of the CRISPR RNA to a code complementary to a target gene sequence, scientists can effectively guide their Cas9 enzyme to that unique gene. Cas9 is then directed with extraordinary precision to a specific location in the gene, where it cleaves both strands of the DNA. What happens next is a truly fascinating example of how humans have tapped into such a deep level of the molecular world.
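To make guide-directed targeting concrete, here is a toy sketch that searches a stretch of DNA for the 20-base target matching a guide sequence and marks it as the cut site. The sequences are invented, and real guide design involves further constraints (such as the adjacent PAM motif) that are deliberately ignored here.

```python
# A toy model of guide-directed targeting: find where a 20-base guide's target
# sequence sits in a stretch of DNA and mark that position as the cut site.
# The sequences below are invented for illustration; real guide design involves
# additional constraints (such as an adjacent PAM motif) that are ignored here.

def find_target(genome: str, guide_target: str) -> int:
    """Return the index at which the guide's target sequence begins, or -1."""
    # The guide RNA is complementary to one DNA strand, so it reads the same as
    # the other strand; for this toy model we simply search for the 20-base
    # target written in DNA letters.
    return genome.find(guide_target)

genome = ("TTACGGATCCATGCTAGCTAGGCTTACGATCGATCGTACGATCG"
          "GATTACAGATTACAGATTACAGGCGCGATATCGCGTAGCTAGCA")
guide_target = "GATTACAGATTACAGATTAC"   # 20 bases, hypothetical target

pos = find_target(genome, guide_target)
if pos >= 0:
    print(f"target found at position {pos}; double-strand break made within it")
else:
    print("no match: Cas9 leaves this DNA untouched")
```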

Recognising the cut (damaged) DNA, the cell initiates a repair mechanism. Scientists can exploit this repair mechanism to introduce mutations to that gene.

DNA is a (very, very long) string of the nucleotide bases A, T, C and G. Previously in molecular biology, restriction (cutting) enzymes cleaved DNA every time they came across, for instance, an ATGG sequence, recklessly dicing an entire genome – a specific four-base site is expected roughly once every 256 bases (4^4). With its ability to recognise lengthier DNA sequences – up to 20 bases long, expected only about once in roughly a trillion bases (4^20) – CRISPR is far more finely tuned towards a particular gene, and thus by far a ‘cleaner’ method of modification.

What can it be used for?

The first reports of the use of CRISPR to edit human cells were published back in 2013 by researchers from laboratories at MIT and Harvard. Since then, several studies have demonstrated the potential of CRISPR to revolutionise the treatment of genetic diseases. A 2016 review article in the journal of biotechnology hinted at the potential for CRISPR to correct the genetic defects underpinning severely life-limiting diseases such as cystic fibrosis and cataracts. It is clear this innovative technology will play a central role in the future of medicine.

But of course, the genetic code is universal, so CRISPR also has the power to manipulate any organism on the planet. Focussing on medicine alone is short-sighted – this technology unlocks the possibility of altering whole ecosystems.

An example of the role CRISPR has to play beyond the scope of human genetics is in agriculture. With the global human population forecast to exceed 9 billion by 2050, the pressure on the planet to sustain such vast numbers is increasingly urgent. Using CRISPR to target genes in crop plants that confer increased yield, drought tolerance and greater nutrient density therefore offers hope for future food security.

A further exciting application is the creation of ‘gene drives’ – genetic systems which increase the probability of a particular trait passing from one generation to the next. For example, using CRISPR to introduce a deleterious mutation into the mosquito population could slowly eliminate the vector for the malaria parasite, Plasmodium falciparum. Such gene drives could also be introduced into invasive plant and pest species.
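A toy calculation can show why a gene drive spreads so much faster than an ordinary allele: below, the frequency of a driven allele is tracked over generations under random mating, assuming heterozygotes pass it on with a biased probability. The transmission figures, starting frequency and number of generations are invented purely for illustration.

```python
# A toy model of how a gene drive spreads compared with ordinary Mendelian
# inheritance. Transmission probabilities and starting frequency are invented
# purely for illustration.
def allele_frequency_over_generations(transmission: float,
                                      start_freq: float = 0.01,
                                      generations: int = 12) -> list[float]:
    """Track the frequency of the driven allele, assuming random mating and
    that heterozygotes pass the allele on with probability `transmission`."""
    freqs = [start_freq]
    p = start_freq
    for _ in range(generations):
        q = 1 - p
        # Offspring inherit the allele from homozygote parents (p*p) with
        # certainty, and from heterozygote pairings (2*p*q) with the given
        # transmission probability.
        p = p * p + 2 * p * q * transmission
        freqs.append(p)
    return freqs

mendelian = allele_frequency_over_generations(transmission=0.5)
gene_drive = allele_frequency_over_generations(transmission=0.95)
print("Mendelian :", [f"{f:.2f}" for f in mendelian])
print("Gene drive:", [f"{f:.2f}" for f in gene_drive])
```

With ordinary 50% transmission the allele frequency stays flat, while the biased drive pushes the allele towards fixation within a dozen generations in this simplified model.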

Ethical implications?

Genetic modifications made to human embryos and reproductive cells (sperm and eggs) are changes that will be passed from that individual to future generations. This type of fundamental genome alteration is known as ‘germline editing’ and has raised a host of ethical issues.

Even if CRISPR were used to eliminate a gene in an embryo that would otherwise result in the development of a life-limiting disease in that individual, scientists remain unsure as to how such an edit could affect future generations. Perhaps the modification gives rise to alleles with unforeseen off-target side effects which only emerge, with catastrophic results, in generations to come. Like the universe, many aspects of the human genome are shrouded in mystery – for instance, the complex interplay between genes and the environment introduces another layer of unpredictability for gene editing.
And of course, what if CRISPR gene editing drifts from being a therapeutic tool into the realm of enhancing human characteristics? From this there might insidiously emerge a ‘two-tiered’ human population, in which the rich, able to afford the luxury of ‘upgrades’ to physical features, intelligence or even life expectancy, are lifted increasingly above everyone else.

However, it is important to realise that this technology is still in its early stages and such possibilities are, as yet, highly unrealistic. Researchers are still in the process of refining the technique to reduce the likelihood of off-target mutations and improve its specificity.

Cross section of winter jasmine leaf

 Jasminum nudiflorum

Just like our skin, a leaf has multiple layers, each serving a specific purpose. These photos show how the structure and shape of the cells vary throughout the leaf, marking out each distinct layer.

Waxy cuticle – The topmost and thinnest layer, not visible in these photos.

  • The waxy cuticle is a protective film synthesised by the epidermal cells below, consisting of lipid and hydrocarbon polymers impregnated with wax.
  • It is impermeable to water in both liquid and vapour form, thus acting to reduce evaporation of water from the plant to the atmosphere. In addition, the waxy cuticle acts as a defensive layer against harmful fungi, bacteria and viruses.

Upper Epidermis – The line of blue-dyed cells, the topmost visible layer in these photos.

  • This layer is just one cell thick but consists of a wide variety of specialised cells, making it multifunctional. These cells are tightly interlocked to provide strength against mechanical stress, and because the walls of each cell are flexible, the layer also gives the leaf ‘bendability’ to facilitate growth. The cell types in the epidermis include:
  • Basic epidermal cells (often called ‘pavement cells’ due to their flat, polygonal shape) make up the majority of cells in the epidermis and are the only ones visible in these photos due to their large size. They are the least specialised cells of the epidermis and do not activate the genes required for the development of green chloroplasts. This means they are transparent, allowing light to pass through them into the photosynthetic cells in the lower layers. The main function of a basic epidermal cell is to synthesise and secrete a waxy substance which rises to the surface of the leaf and readily polymerises to form the protective waxy cuticle.

(The following specialised cells of the epidermis are not easily identifiable in the photos).

  • Guard cells (seen clearly in the ‘celery epidermis’ photos) are responsible for controlling the exchange of water vapour, carbon dioxide and oxygen in and out of the plant. One pair of guard cells borders each stoma (plural: stomata) – a tiny pore in the epidermis through which gaseous exchange occurs. The stiff and rigid inner lining of the guard cells means that when they become turgid (fill with water), they flex and bend away from each other, opening the stoma. Conversely, when the guard cells are flaccid (lose water), they close the stoma by decreasing the curvature between them. The guard cells contain green chloroplasts which capture energy to fuel this process. To find out how turgidity and flaccidity of the guard cells is brought about on a molecular level, read the post titled ‘celery epidermis’. Due to their role in regulating the plant’s water content and respiration, the density of guard cells and their stomata in the epidermis is highly dependent on environmental conditions. For example, in a drought, excessive water loss by evaporation from the stomata is costly to the plant, so we would expect fewer cells in the epidermis to differentiate into guard cells and, consequently, a lower stomatal density to reduce water loss as much as possible.
  • Arranged around each guard cell are three to four subsidiary cells; these are distinctively smaller than guard cells and lack chloroplasts. Plant cells have a rigid cellulose wall, so the constant expanding and contracting of the guard cells to control gaseous exchange could disturb the shape of the surrounding plant cells by putting repeated mechanical stress on their cell walls. The subsidiary cells solve this problem by forming a cushion around each guard cell, protecting the surrounding cells from guard cell expansion and contraction.
  • Trichome cells, otherwise known as ‘leaf hairs’, are cells that protrude outwards from the epidermis, just visible to the naked eye as tiny hairs on the leaf surface. Trichomes can be made up of a single cell (unicellular) or multiple cells (multicellular), depending on the plant species. Trichomes can make the plant less appetising to small herbivores, as they can deposit sharp crystals of calcium oxalate on the leaf surface. They can also reduce water loss from the stomata caused by wind by breaking up the air flow across the leaf. In addition, the hairs can hold on to moisture, increasing the plant’s water availability – this would be particularly important for the winter jasmine plant, whose native habitat is in the humid Chinese mountains.

(The top right photo has the best view of the two following layers)

Palisade Mesophyll – The layer below the upper epidermis, identifiable by the vertically elongated oval-shaped cells coloured pink by the dye.

  • The palisade cells have the highest concentration of chlorophyll of all the plant cells, making them a central site for photosynthesis.
  • When the leaf absorbs light from the sun, the chlorophyll harnesses the light energy and uses it to fuel the reaction of carbon dioxide and water to produce glucose – chemical energy.
  • Below is the chemical equation for photosynthesis – the reaction on which all life on Earth ultimately depends:
    6CO₂ + 6H₂O + light energy → C₆H₁₂O₆ + 6O₂

Spongy Mesophyll layer – Below the pink, vertically elongated palisade cells, you will see irregularly shaped spongy mesophyll cells, coloured blue by the dye.

  • Cells in this layer contain fewer chloroplasts than those in the palisade layer, since they have less access to sunlight.
  • These cells are loosely packed, providing air spaces through which carbon dioxide can diffuse from stomata in the lower epidermis to chloroplasts in the palisade and mesophyll cells, for use in photosynthesis.
  • Air spaces also provide a high internal surface area to volume ratio, increasing the rate of photosynthesis as more chloroplasts are exposed. This rate is further increased by the moist surfaces of mesophyll cells, allowing rapid diffusion of oxygen and carbon dioxide out of and into the cells respectively.

Lower Epidermis and waxy cuticle – The very bottom layers.

  • This layer is similar to the upper epidermis – a single cell thick, containing a range of specialised cells and coated by a waxy cuticle. However, because the underside of the leaf is less exposed to sunlight, there is less risk of water evaporating from stomata on the lower epidermis than on the upper. Therefore, the majority of stomata are located in the lower epidermis to reduce water loss.