CRISPR

Jasmine Barnard

     The scientific community is abuzz with the discovery and new uses of CRISPR, a gene editing technique derived from an ancient prokaryotic anti-viral defense mechanism. CRISPR stands for ‘clustered regularly interspaced short palindromic repeats.’ Let’s unpack that a bit. In the 1980s, researchers studying bacterial genomes noticed short lengths of DNA that repeated themselves over and over again, with “junk” DNA separating each repeat. These sequences were concentrated in several clusters throughout the genome and seemed to have no purpose except to perplex scientists. As it turns out, these regularly interspaced repeats were not merely a fun byproduct of evolution run amok, but part of an anti-virus defense system.

      In bacteria, this system consists of CRISPR sequences paired with Cas (CRISPR-associated) proteins, a set of proteins that form different complexes within the cell, each performing a different function in the CRISPR/Cas defense system. Cas enzymes can cleave invading viruses, Cas transcription complexes help transcribe sections of CRISPR in the bacterium’s genome, and Cas nuclease complexes recognize previously encountered viruses and dismantle them with the aid of guiding bits of CRISPR sequence. The process begins when a new bacteriophage (a virus that infects bacteria) enters a cell and is met by a Cas I complex, which cleaves the virus, removes bits of its nucleotides, and inserts them into the bacterium’s genome as a “spacer”, the “junk” DNA that separates CRISPR repeats. The next time the CRISPR stretch of DNA is transcribed by a Cas II complex, the spacers are processed into crRNA bits, lengths of RNA that can recognize their anti-parallel sequence in the original virus (all DNA and some RNA is double stranded; the anti-parallel strand is the opposite strand whose nucleotides pair up perfectly with the first strand). These crRNA bits are incorporated into a Cas III complex and guide it to the corresponding bacteriophage, where the complex proceeds to chop the offending virus into harmless pieces. Essentially, the CRISPR sequences sit back and guide the Cas proteins in doing all the hard work. It may be helpful to think of CRISPR/Cas complexes as analogous to memory B cells in the human immune system: both recognize pathogens they have encountered before and work to destroy them before they can do any repeat damage. The Cas proteins are the hit men, while the CRISPR sequences in the DNA are the hit list.
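
     To make the idea of an anti-parallel match concrete, here is a minimal sketch in Python. It is purely illustrative: the sequences are invented and far shorter than real spacers, and the matching a Cas complex performs is biochemical, not string search. The sketch simply shows that a stored spacer “recognizes” a phage through the presence of its reverse-complement (anti-parallel) sequence.

# Illustrative only: invented toy sequences, not real CRISPR data.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the anti-parallel (reverse-complement) strand of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

spacer = "ATGGCCTA"                    # hypothetical spacer stored in the CRISPR array
phage_genome = "GGGTAGGCCATCCA"        # hypothetical stretch of invading phage DNA

# The spacer "matches" the phage if the phage carries the spacer's anti-parallel strand.
target = reverse_complement(spacer)
print(target, target in phage_genome)  # prints: TAGGCCAT True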

     The reason the discovery and understanding of this mechanism is so exciting (besides the fact that hitmen proteins floating around a cell is cool in itself) is that homologous systems can be set up and utilized in mouse and human cells. A simplified version of this system has already been used in various research projects to remove genes from target cells or insert new ones. This system consists of a human-engineered CRISPR/Cas9 complex. Cas9 is a specific Cas protein from the bacterium Streptococcus pyogenes, a species you will be familiar with if you have ever had strep throat. Cas9 is an endonuclease (a nucleotide-cutting enzyme) that is guided by bits of CRISPR sequence. Researchers can design a CRISPR/Cas9 complex to make cuts in DNA in order to insert synthetic DNA with the properties of their choosing, or to remove preexisting lengths of DNA entirely. This is particularly useful for gene knock-out experiments when studying genomes.
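
     As a rough illustration of the design step, the Python sketch below scans a DNA string for 20-nucleotide stretches that sit immediately next to the “NGG” motif (the PAM) that Streptococcus pyogenes Cas9 requires beside its target. The sequence is invented, and real guide-design tools also score off-target matches and check both strands, which this toy version omits.

# Toy guide-finding sketch: list 20-nt candidate target sites followed by an NGG PAM.
def candidate_guides(dna: str, guide_len: int = 20):
    guides = []
    for i in range(len(dna) - guide_len - 2):
        pam = dna[i + guide_len : i + guide_len + 3]
        if pam[1:] == "GG":                        # "NGG": any base, then GG
            guides.append((i, dna[i : i + guide_len], pam))
    return guides

toy_sequence = "ACGT" * 6 + "TGG" + "ACGT" * 3     # invented sequence containing one PAM
for position, guide, pam in candidate_guides(toy_sequence):
    print(position, guide, pam)                    # 4 ACGTACGTACGTACGTACGT TGG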

     The medical possibilities include developing therapies or preventative treatments against viral infections such as HIV, and genome editing that could cure genetic disorders. However, the implications can also be frightening. This could be a step closer to human genetic enhancement, which brings with it a host of problems: inequality of access, moral issues of consent and unforeseen consequences, and questions about humane use in both humans and animals. With every new scientific development come new ethical quandaries that must be addressed, and the CRISPR/Cas9 complex is no different.

References

1. Barrangou, Rodolphe, Christophe Fremaux, and Hélène Deveau. "CRISPR Provides Acquired Resistance Against Viruses in Prokaryotes." Science. AAAS, 23 Mar. 2007. Web.

2. Cong, Le, F. Ann Ran, and David Cox. "Multiplex Genome Engineering Using CRISPR/Cas Systems." Science. AAAS, 15 Feb. 2013. Web.

3. Doudna, Jennifer A., and Emmanuelle Charpentier. "The New Frontier of Genome Engineering with CRISPR-Cas9." Science. AAAS, 28 Nov. 2014. Web.

4. Zhang, Sarah. "Everything You Need to Know About CRISPR, the New Tool That Edits DNA." Gizmodo. N.p., 06 May 2015. Web.

Staying Ahead of Infection: A New Plan of Attack

Irena Feng

     Before the advent of antibiotics, common infections killed quickly, as there were no drugs to stop them. When antibiotics were first developed in the early 20th century, they were seen as miracle cures; the antibiotics eliminated the bacteria easily by killing them or interfering with their growth enough to allow the body’s immune system to destroy them. However, almost a century after the first antibiotic was discovered, we may be heading back into a new post-antibiotic era as bacteria begin to find new ways to fight back.

The Superbug

     Antibiotic resistance arises when an antibiotic is no longer effective against its target, often because the bacteria have changed in ways that reduce their vulnerability to the drug. The development of resistance to certain drugs was inevitable: just as natural selection favors survival of the fittest, so does our use of antibiotics, which favors the few resistant bacteria, allowing them to outlive their vulnerable peers and multiply into resistant strains, colloquially known as superbugs. For years, the scientific community’s response to these emerging resistant strains was new antibiotics; however, the speed at which resistance can spread creates new challenges.
     Bacteria use horizontal gene transfer methods such as conjugation to physically share resistance genes. First discovered in 19461, conjugation (image below) is the bacterial version of sexual recombination: a cell carrying a DNA sequence called the F plasmid (an F+ cell) transfers it to a cell lacking it (an F- cell), spreading the F plasmid throughout a population of bacteria.
     The Hfr strain of bacteria takes this one step further. In the Hfr strain, as the F plasmid is transferred from the donor to the recipient bacterium, it is not only taken up by the recipient but also integrated into its chromosome2, becoming a permanent part of the recipient’s genome and resulting in a high frequency of genetic recombination. When conjugation occurs again after that, replication and transfer of the F plasmid inadvertently bring along neighboring genes from the donor cell. In this way, conjugation can transfer not only the F plasmid but also additional genetic material. This allows genes that provide antibiotic resistance to pass between bacteria, quickly making an entire population resistant rather than confining resistance to a single cell.
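
     The speed of that spread is easier to appreciate with a toy simulation, sketched below in Python. The population size and contact scheme are invented for illustration, and real conjugation dynamics depend on many factors this model ignores; the point is only that a trait passed on contact, rather than by inheritance alone, sweeps through a population in very few rounds.

# Deliberately simplified toy model of conjugative spread of a resistance gene.
import random

random.seed(1)
population = [True] + [False] * 999      # True = cell carries the resistance plasmid
rounds = 0

while not all(population):
    rounds += 1
    # each cell meets one random partner; if either carries the plasmid, both end up with it
    for i in range(len(population)):
        j = random.randrange(len(population))
        if population[i] or population[j]:
            population[i] = population[j] = True

print(f"entire population resistant after {rounds} rounds of contact")
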
     The World Health Organization reported in 2014 that antibiotic resistance is a serious threat to public health3 worldwide: after surveying over 100 countries, it found that in many cases antibiotics against common infections like pneumonia and sepsis no longer work in the patients who need them. In some countries, over half of those treated could not be helped because resistant bacteria had rendered the available antibiotics useless. An even more severe consequence of antibiotic resistance is increased mortality. For example, staph infections, caused by Staphylococcus bacteria, can usually be treated with a widely used class of antibiotics called beta-lactam antibiotics4, which includes penicillin and methicillin. However, people infected with MRSA (methicillin-resistant Staphylococcus aureus) are 64% more likely to die than those infected with a non-resistant form5, demonstrating the dangers inherent in multidrug-resistant bacteria like MRSA.

How Phage Therapy Works

     A new plan of attack against these multidrug-resistant bacteria involves a new weapon: viruses. While antibiotics target multiple species of bacteria (sometimes even hitting our helpful natural bacteria off-target), viruses that target bacteria – called bacteriophages – focus on specific species of bacteria, and sometimes even on specific strains within a species6. A bacteriophage works by injecting its own DNA into the bacterium, essentially hijacking the cell’s replication machinery to produce more bacteriophages. The propagation of this cycle depends on the availability of the target7; when there are no more bacteria that match the bacteriophage, the phage ceases to infect and kill.
      One concern about using bacteriophages is the lysing process. In lytic phages, the bacteria are taken over and forced to produce phages until the cell explodes, killing it and releasing phages into the body to infect more bacteria. However, when bacteria are destroyed this way, their other contents also spill into the surroundings. These contents include proteins and toxins that could bring further complications and cause the body to overreact to the dying bacteria8. Scientists at MIT9 are working to counter this by producing phages that kill the bacteria without lysing them. These phages, nicknamed phagemids, carry small pieces of DNA instead of a full-fledged viral genome. Phagemids kill the bacteria but prevent lysis, keeping the bacterial toxins inside the cell. The dead bacteria can then be eliminated by the body’s natural defenses.
      Phage therapy can also enhance the effectiveness of ordinary antibiotics: phages can be used to re-sensitize bacteria to antibiotics by removing the genes conferring resistance. At Tel Aviv University10, researchers used bacteriophages to deliver into bacteria a gene-editing system called CRISPR/Cas9 that destroys the genes responsible for resistance. The engineered phage contains CRISPR sequences with short inserted segments that are expressed as RNA sequences designed to target specific DNA sequences. These RNA sequences then associate with Cas9 (an RNA-guided nuclease), which cuts the target DNA where the RNA directs it, removing and destroying the targeted sequence. Using genetic engineering, we can manufacture guide sequences against whatever we want, including antibiotic-resistance genes. One potential concern is that such a method requires prior knowledge of the specific sequences conferring resistance, but researchers are addressing this with DNA sequencing that could make it easier to identify, target, and excise those sequences. With the ability to resist antibiotics gone, the targeted bacteria may once again be vulnerable to the antibiotics we already have.

Viral Revival

      With a better understanding of the mechanisms of action and the relative safety profiles of different treatments, phage therapy is becoming an increasingly popular clinical option. In the early 2000s, clinical trials began to analyze immune responses to phages11; in 2015, clinical trials began to test safe dosages of phage cocktails (multiple phages combined) and of other phage components, from specific enzymes to phage tail proteins. The timeframe for bacteriophage development is also much shorter: while antibiotic development can take years, selecting new phages against bacteria is a relatively quick process that takes at most a few weeks12.
     The investigation of phage therapy against bacteria began around the same time that antibiotics hit the market, and it eventually yielded to antibiotics because they were easier to produce and standardize12. However, as we increasingly come face to face with resistant bacteria, it may be time to reconsider phage therapy’s consignment to the past.

References

1. Griffiths, Anthony JK, William M. Gelbart, Jeffrey H. Miller, and Richard C. Lewontin. 1999. “Bacterial Conjugation.” In Modern Genetic Analysis. New York: W. H. Freeman. http://www.ncbi.nlm.nih.gov/books/NBK21351/.

2. Griffiths, Anthony JF, Jeffrey H. Miller, David T. Suzuki, Richard C. Lewontin, and William M. Gelbart. 2000. “Bacterial Conjugation.” In An Introduction to Genetic Analysis, 7th ed. New York: W. H. Freeman. http://www.ncbi.nlm.nih.gov/books/NBK21942/.

3. “Antimicrobial Resistance: Global Report on Surveillance 2014.” 2014. World Health Organization. http://apps.who.int/iris/bitstream/10665/112642/1/9789241564748_eng.pdf?ua=1.

4. Elander, RP. 2003. “Industrial Production of β-Lactam Antibiotics.” Applied Microbiology and Biotechnology 61 (5): 385–92. doi:10.1007/s00253-003-1274-y.

5. World Health Organization. 2014. “WHO’s First Global Report on Antibiotic Resistance Reveals Serious, Worldwide Threat to Public Health.” WHO Media Centre. April 30. http://www.who.int/mediacentre/news/releases/2014/amr-report/en/.

6. Holmfeldt, Karin, Mathias Middelboe, Ole Nybroe, and Lasse Riemann. 2007. “Large Variabilities in Host Strain Susceptibility and Phage Host Range Govern Interactions between Lytic Marine Phages and Their Flavobacterium Hosts.” Applied and Environmental Microbiology 73 (21): 6730–39. doi:10.1128/AEM.01399-07.

7. Borysowski, Jan, Ryszard Miedzybrodzki, and Andrzej Gorski, eds. 2014. Phage Therapy: Current Research and Applications. Caister Academic Press. http://www.horizonpress.com/phagetherapy.

8. Staropoli, Nicholas. 2015. “Swan Song for Antibiotics? Can Phage Therapy and Gene Editing Fill the Gap?” Genetic Literacy Project. July 26. https://www.geneticliteracyproject.org/2015/07/26/swan-song-for-antibiotics-can-phage-therapy-and-gene-editing-fill-the-gap/.

9. Krom, Russell J., Prerna Bhargava, Michael A. Lobritz, and James J. Collins. 2015. “Engineered Phagemids for Nonlytic, Targeted Antibacterial Therapies.” Nano Letters 15 (7): 4808–13. doi:10.1021/acs.nanolett.5b01943.

10. Yosef, Ido, Miriam Manor, Ruth Kiro, and Udi Qimron. 2015. “Temperate and Lytic Bacteriophages Programmed to Sensitize and Kill Antibiotic-Resistant Bacteria.” PNAS 112 (23): 7267–72. doi:10.1073/pnas.1500107112.

11. Miedzybrodzki, Ryszard, Jan Borysowski, Beata Weber-Dabrowska, Wojciech Fortuna, Slawomir Letkiewicz, Krzysztof Szufnarowski, Zdzislaw Pawelczyk, et al. 2012. “Clinical Aspects of Phage Therapy.” In Advances in Virus Research, 83:73–121. Elsevier, Inc. http://www.sciencedirect.com/science/article/pii/B9780123944382000037.

12. Madhusoodanan, Jyoti. 2016. “Viral Soldiers.” The Scientist, January 1. http://www.the-scientist.com/?articles.view/articleNo/44785/title/Viral-Soldiers/.

13. Sulakvelidze, Alexander, Zemphira Alavidze, and J. Glenn Morris, Jr. 2001. “Bacteriophage Therapy.” Antimicrobial Agents and Chemotherapy 45 (3): 649–59. doi:10.1128/AAC.45.3.649-659.2001.

Image: Swatski, Rob. 2010. “BIOL 102 Chp 27 PowerPoint Spr10.” LinkedIn SlideShare, April 2. http://www.slideshare.net/robswatski/biol-102-chp-27-powerpoint-spr10.

Irena Feng is a first-year student at the University of Chicago majoring in Biological Chemistry/Chemistry and minoring in East Asian Languages and Civilizations. Her interests include exploring a variety of scientific topics through reading and research.

TB: A Global Health Issue

Kalina Kalyan

     Tuberculosis, commonly abbreviated as “TB,” is a top ten cause of death across the world. Tuberculosis is caused by a bacterium known as Mycobacterium tuberculosis.1 Worldwide, one in three people alive today is thought to become infected with Mycobacterium tuberculosis (Mtb) during the course of their lives. However, most are unaware of the presence of Mycobacterium tuberculosis in their lungs because the pathogen is typically kept “in check” by the body’s immune system. This is known as latent TB. People infected with latent Mtb show no symptoms, and the only sign of TB is a positive tuberculin skin test or TB blood test. Although those with latent TB will test positive, the presence of Mtb does not mean that one will develop active TB.2 Those infected with latent TB have a ten percent risk of developing active TB. When a person has active TB, they exhibit symptoms and are able to pass the disease on to other people.

     Once a person contracts active TB, symptoms often appear two to three months after exposure, though they can sometimes take years to appear. These symptoms commonly include cough, fever, night sweats, and weight loss. Oftentimes, those infected with TB do not immediately notice their symptoms, as the severity of the illness escalates with time. This delay leads the infected to postpone seeking treatment, resulting in the unknowing transmission of the bacteria to others. In a year, a person with active TB can infect 10-15 people with whom they are in close contact.

     TB is most prominent in six countries: India, Indonesia, China, Nigeria, Pakistan, and South Africa. These six countries account for 60% of total TB cases. Over 95% of TB deaths take place in low- and middle-income countries. Contracting TB is strongly related to nutritional status, and TB is often considered a “poor man’s disease.” Lack of proper nutrition and sanitary conditions in the countries where the disease is most prominent continues to contribute to its presence, and poverty and poor access to services challenge the successful treatment of those with TB.

     Impoverished countries tend to have higher rates of both TB and HIV. Those infected with HIV are at a much higher risk of developing active TB than those who are not.1 HIV and TB are so closely connected that their relationship is often deemed a “co-epidemic.” In 2015, at least a third of people living with HIV were infected with Mycobacterium tuberculosis, and these people have a 20 to 30 times higher risk of developing active TB than those without HIV. Recent studies have further explored the correlation between HIV and TB: the two are a deadly combination in which contracting one speeds the progression of the other. In 2015 alone, there were 1.2 million cases of TB among people living with HIV. People with advanced HIV infection are vulnerable to a wide range of infections called “opportunistic infections,” infections that take advantage of the opportunity offered by an already weakened immune system. TB is an HIV-related opportunistic infection.

     It is commonly thought that as HIV progresses, the immune system is weakened across the board. However, it has been proposed that as HIV advances, the immune system’s focus narrows to antiviral responses. This preoccupation with antiviral responses makes it difficult for the immune system to defend the body against the Mtb pathogen, which is bacterial.2

     A vaccine formulation created at Fudan University in Shanghai claims to act against both HIV and Mtb simultaneously. The vaccine contains antigens from both pathogens and induces a cellular immune response.3 It is important to note, however, that this vaccine is only effective in those carrying latent Mtb infection, not in those with active TB.

      Those not infected with HIV but who have active TB have several antibiotic treatment options. A combination of drugs is necessary to combat TB effectively, as taking more than one drug kills the bacteria more thoroughly and makes it less likely that the bacteria will become resistant to the drugs. To ensure thorough treatment, it is often recommended that a patient take his or her pills in the presence of someone who can supervise the treatment. This approach is known as DOTS (directly observed treatment, short course). DOTS cures TB in 95% of cases, and in some parts of the world this treatment costs as little as $10 for a six-month supply. A vaccine called BCG is available for the prevention of TB. However, this vaccine was first used in the 1920s, and studies have shown it to be highly variable in how effectively it protects people from the disease today. BCG can also cause false-positive TB test readings and, in those with weakened immune systems, can lead to a fatal condition called disseminated BCG disease. A drug known as isoniazid can be used as a preventative measure for those at high risk of contracting TB; those with inactive TB can take a course of isoniazid for several months in order to prevent active TB.6

      TB is both preventable and curable, yet it continues to be a major global health issue. TB treatment depends largely upon the responsibility of the patient and of healthcare professionals, and it is only effective when antibiotics are taken for several months. Incomplete treatment can lead to drug-resistant TB, which poses even graver concerns. The connection between impoverished countries and the presence of TB is undeniable: factors such as malnutrition, unsanitary conditions, and lack of proper health care contribute heavily to the spread of TB. These conditions cannot be neglected by countries that have nearly eliminated TB within their own borders.4

      In countries that have not yet been able to eliminate TB, there is a prominent connection between stigma and cases of active TB. Treatment for active TB involves isolation and daily visits to healthcare professionals, and these aspects of treatment can make many patients uncomfortable about completing it, as they may feel like an “outcast” or even be rejected by their families and friends. The stigma that surrounds TB can eliminate an infected person’s desire even to see a physician or seek help, out of fear of being deemed “contagious.” India is one such country that still carries significant stigma toward those with TB, despite bearing one third of the world’s TB burden. In India it is common for those with TB to experience social isolation and rejection, which leads people to hide their symptoms and fail to receive adequate treatment. If this stigma were eliminated, or even reduced, the number of TB cases would decrease significantly, as those who need treatment would live in less fear. The public is frequently misinformed about TB; although continued research is important, relieving this burden will require targeting the stigma surrounding the disease effectively and continuing to expand health education.5

References

1. “Tuberculosis” World Health Organization. Last modified October 2016 http://www.who.int/mediacentre/factsheets/fs104/en/

2. Lucy C. K. Bell “In Vivo Molecular Dissection of the Effects of HIV-1 in Active Tuberculosis.” PLOS Pathogens, 2016; 12 (3): e1005469 DOI: 10.1371/journal.ppat.1005469

3. “Experimental vaccine elicits robust response against both HIV and tuberculosis, study suggests.” Science Daily, May 21, 2012. https://www.sciencedaily.com/releases/2012/05/120521152645.htm

4. Sandro Galea. “The Unnecessary Persistence of Tuberculosis.” The Huffington Post. Last modified October 26, 2016. http://www.huffingtonpost.com/sandro-galea/the-unnecessary-persisten_b_12661768.html

5. Anita S. Mathew and Amol M. Takalkar “Living with Tuberculosis: The Myths and the Stigma from the Indian Perspective” Oxford Journals. 2007. http://cid.oxfordjournals.org/content/45/9/1247.full

6. “HIV/TB Co-infection” Aids Centre. Last modified 2010. http://aids.md/aids/index.php?cmd=item&id=276

The Twin Paradox

Cho Yin Kong

     It is an unnamed year, far in the future. Humans (or post-human creatures) have invented the technology to fly into space at a velocity comparable to the speed of light. There are two twins, with the sort of typical names found in every math word problem, Alice and Bob. Alice decides to fly far off into space in her new-age ship. At one point, she turns around to return to Bob. According to Einstein’s special relativity equations, Bob sees that Alice has aged less than he has. Nevertheless, according to Alice, who feels as if she is standing still, Bob is the one who “flies” away from her, so she will see a Bob who has aged less than she has. The paradox argues that since this cannot happen (Alice and Bob cannot each be younger than the other at the same time), relativity must be wrong. However, if we take into account how Alice has to turn her ship around, we must apply physical laws that reveal why Alice should be younger than Bob from both of their perspectives.

The math of the paradox



     On June 30th, 1905, Einstein set out the two postulates of his theory of special relativity: 1) the laws of physics hold in all inertial frames; 2) in all inertial frames, the speed of light in a vacuum is always the same finite, unchanging velocity, about 3.00 × 10^8 m/s, represented by c.1 The first postulate implies that there is no special, preferred frame of reference; inertial frames cannot be distinguished from one another. The second implies that it is impossible for an observer to travel at a speed greater than or equal to c.

     One can see this through one of Einstein’s most famous thought experiments: an observer sits on a train moving at almost the speed of light, c, and holds a mirror in front of his face. In order for him to see his reflection, light would need to travel to the mirror and reflect back to his eyes at c, from the observer’s point of view. Naively adding velocities, someone standing on the still platform would then conclude that the light heading toward the mirror travels at almost 2c. Since light must travel at the same speed c everywhere, it must instead be the case that the time and distance components of speed are perceived differently by the observer on the train and the observer on the platform. Time and distance can be dilated depending on the frame of reference from which you observe the phenomena.
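
     The apparent contradiction with the “almost 2c” expectation is resolved by the relativistic velocity-addition rule, which special relativity substitutes for simple addition (stated here for reference; the article does not derive it). For a train moving at speed v carrying something moving at speed u relative to the train, the platform observer measures

w = \frac{u + v}{1 + uv/c^2},

and setting u = c gives w = (c + v)/(1 + v/c) = c, so light still travels at exactly c for the platform observer, no matter how fast the train moves.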

     To the observer on the train, the light behaves completely normally. Intuitively, however, the person on the platform would expect the light travelling toward the mirror to gain on it only slowly, since the mirror is racing ahead at nearly the speed of light, and the light reflected back toward the observer to close the gap very quickly, because it moves against the train’s direction. Since both observers must nonetheless measure the light’s speed as exactly c, it is the time and distance they assign to these events that differ. This reasoning leads to the time dilation equation,

\Delta t' = \Delta t \,\sqrt{1 - \frac{v^2}{c^2}},

where Δt′ is the time duration in the frame moving away from the observer, and Δt is the time duration as seen from the still observer’s point of view.



     In a case similar to Einstein’s thought experiment, suppose there are two observers, one of whom is speeding away at a velocity comparable to c; we can imagine that one inertial frame is moving away at a certain constant velocity, like the train. If two events happen, the time interval measured by the observer at rest will differ from that measured in the frame that is speeding away. This difference in time intervals is the time dilation. There are two things to notice: a) as v approaches c, the time dilation becomes more significant, while when v is insignificant compared to c, the difference is trivial; b) because the factor

\sqrt{1 - \frac{v^2}{c^2}} < 1,

the time Δt′ assigned to the speeding frame is always shorter than the resting observer’s Δt, so time must pass more slowly in the speeding frame for the two measurements to match up. For any given Δt, Bob’s time waiting on Earth, Bob concludes that Alice experiences the dilated interval Δt′, so she must have experienced a shorter time from her perspective.
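
     For concreteness, here is a worked example with illustrative numbers that are not from the original article. Suppose Alice cruises at v = 0.8c and Bob waits Δt = 10 years between her departure and her return. Then

\Delta t' = \Delta t \,\sqrt{1 - \frac{v^2}{c^2}} = 10\,\text{yr} \times \sqrt{1 - 0.64} = 10\,\text{yr} \times 0.6 = 6\,\text{yr},

so Bob expects Alice’s clock to record only six years. The paradox arises because naive symmetry seems to let Alice run the very same calculation in reverse.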

      Tying this back to the paradox, the first postulate implies that observers in different frames cannot be subject to different laws of physics, so time dilation can be claimed from either frame’s point of view. If Alice is travelling at a constant velocity away from Bob, she sees the Earth moving away; her measured time is Δt, while Bob’s is the shorter Δt′. From Bob’s point of view, Alice is moving away while he stays still, so Bob measures Δt while Alice is supposed to measure Δt′. Is the paradox right, then, and Einstein wrong?

The solutions to the paradox



       The answer from the physics community has been an unequivocal “no,” and there are several ways to show this. The two most prominent are through acceleration and through general relativity. The paradox supposedly arises only between inertial frames, reference frames in which Newton’s first law holds.2 This would be the case if we removed the effects of Earth’s gravitational field on Alice’s travels and supposed that Bob is a freely floating observer. In (x,y,z,t) coordinates, Bob stays at (0,0,0,t). Bob can use the time dilation formula as before: he measures the longer Δt, and Alice will be younger. However, at some point Alice has to turn around and return to Bob, which means she has to decelerate, stop, turn around, and accelerate back toward him. Her relative coordinates change with time, placing her at (f(t),g(t),h(t),t). Thus she cannot be an observer in a single inertial frame, and the equivalence between her frame and Bob’s cannot be applied.3 Alice cannot use the same time dilation formula, and it makes sense that she returns younger than Bob.



      The second way to resolve the paradox is through general relativity. Two objects exert a gravitational force on each other that depends on their masses and the distance between them,

F = \frac{G m_1 m_2}{r^2}.

The key is that this force can act over any distance. Einstein built on his general principles of relativity with additional postulates that carry some consequences: a) special relativity applies to all motion in any system you define; b) space-time curvature (the geometry of space and time) can be affected by matter with mass, and that matter is in turn affected by space-time. Thus, a gravitational field causes a distortion, or curvature, in space-time; any object with mass produces this curvature, and different gravitational fields can affect time, causing gravitational time dilation. The deeper an object sits in a gravitational potential well, the more slowly its clock runs.
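
     For reference, one standard first-order approximation (not reproduced in the original article) captures this effect: for two clocks separated by a height h along a field of effective gravitational acceleration g, their rates differ by roughly

\frac{\Delta t_{\text{high}}}{\Delta t_{\text{low}}} \approx 1 + \frac{gh}{c^2},

so the higher clock ticks faster, and the effect grows with both the strength of the field and the separation between the clocks.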

       The premises are the same as before: while Alice coasts away from Bob, Alice sees Bob’s time running slow, just as if Bob were the one “flying” away from her, and he likewise sees her time running slow. However, when she decelerates to start turning around, she feels a force. In order to go on regarding herself as standing still, she can imagine that a gravitational field has appeared that acts against her, and that the universe is accelerating toward her. From her point of view, Bob sits “higher” than her in this gravitational field, so his clock runs faster. As the gravitational time dilation relation above suggests, the greater their separation, the faster Bob’s clock runs. After Alice completes the turnaround, she will once again see Bob’s time as dilated. However, the extra time that accumulates on Bob’s clock during the brief turnaround is greater than the total time Alice sees his clock lose while running slow, so when Alice returns, she is younger than Bob.4 We could also consider the acceleration Alice undergoes when she first leaves Earth, but because she is still very close to Earth at that point, the effect is trivially small.

       We can see that this paradox came from assuming that Bob’s and Alice’s situations were symmetrical. However, once the situation is examined carefully, it is clear that the deceleration and acceleration that Alice necessarily goes through make the situations asymmetrical. Once we treat each situation as it is, Alice will be younger than Bob from her point of view, and Bob will be older than Alice from his point of view; there is no paradox. In addition, there have been variations on the paradox that show Alice ending up older than Bob5, depending on the properties of space-time that one chooses to apply to the situation. The moral of the story is not that Alice is younger than Bob, but that even in theoretical thought problems, we should account for everything that would happen realistically.

References

1) Nobel Media AB. “The Postulates of Special Relativity.” Accessed November 23, 2016. https://www.nobelprize.org/educational/physics/relativity/postulates-1.html.

2) Pössel, Markus. 2010. “The Case of the Travelling Twins.” Einstein Online 04: 1007. Accessed November 23, 2016. http://www.einstein-online.info/spotlights/Twins.

3) Weiss, Michael. “The Twin Paradox: The Spacetime Diagram Analysis.” Accessed November 23, 2016. http://math.ucr.edu/home/baez/physics/Relativity/SR/TwinParadox/twin_spacetime.html.

4) Simonetti, John. “Special Relativity --- The Twin Paradox.” October 21, 1997. Accessed November 23, 2016. http://www.phys.vt.edu/~jhs/faq/twins.html.

5) Abramowicz, M.A., and Bajtlik, S. 2009. “Adding to the paradox: the accelerated twin is older.” arXiv:0905.2428v1.

The Good, the Bad, and the Unclear: Sex and gender in psychiatry

Elizabeth Lipschultz

     Post-traumatic Stress Disorder is a serious condition that is “increasingly at the center of public as well as professional discussion,” according to the DSM-5. The condition results from exposure to physical, emotional, or sexual violence. However, the way it affects children may be different depending on whether the child is a boy or a girl. On November 11th, Dr. Victor Carrion of the Stanford University School of Medicine released the results of a study on the brains of adolescent boys and girls with Post-Traumatic Stress Disorder (PTSD). The results of the study showed a marked difference in the size of the insula between boys and girls affected with PTSD.

     In a control group of teenagers without trauma-related disorders, there was no difference in the brain structures between boys and girls. However, there were significant differences between the brains of adolescent boys and girls with PTSD, both with regard to the control group and to the other sex. The study found that the insulas of adolescent girls who were affected by PTSD had smaller volumes and surface areas than the control group, while the insulas of adolescent boys with PTSD were larger in both volume and surface area than the control group.

     The insula, a region of the brain that is not yet well understood by neuroscientists, is believed to play a major role in the feeling of social emotions (like lust, disgust, pride, and embarrassment), as well as in empathy. It is thought that this difference in insula size between post-trauma boys and girls may have implications for how PTSD presents in pediatric patients depending on sex. To explore this further, longitudinal studies must be performed to track how both male and female patients progress over time.

     It is tempting for laypeople and scientists alike to interpret the results of studies such as Dr. Carrion’s as evidence of some biologically based, sex-linked distinction in the way the brain handles psychological trauma. However, it is important to note that purely observational studies are incapable of establishing the mechanisms behind observed differences. This is especially true for studies that find contrasts between different groups but do not propose and test mechanisms that might explain the disparities.

     The Stanford press release is a case study in the ambiguity that comes with medical or psychological results that appear to bear a relationship to biological sex. While it is true that some psychological phenomena may have bases in biological sex, the scientific community has been burned before by assuming that apparent psychological sexual dimorphisms were the result of sex (typically assigned based on one’s reproductive organs or sex chromosomes) rather than gender (a person’s self-representation as a man, woman, or something in between). Laura Hirshbein’s 2010 article Sex and Gender in Psychiatry: A View from History, published in the Journal of Medical Humanities, provides historical context. For example, during the mid-19th century, many women were diagnosed with “female hysteria”, which was believed to be related to hormonal imbalances rooted in female biology. By the early 20th century, female hysteria had ceased to be diagnosed at all, as it was discovered that the condition was usually just a catchall label applied to women for disorders like epilepsy or schizophrenia, which caused erratic behavior in patients.

     Hirshbein presents three examples of common complications that lead to the misattribution of innate sex-based differences where none exist: “identifying factors as sex-based when they are really gender-based; overlooking changes in masculine and feminine roles over time; and placing too great an emphasis on hormones.” She illustrates these by chronicling the tendency of 20th-century psychologists to relate women’s attitudes and mental disorders to innate qualities of women or to biological causes like menopause, without evidence of these mechanisms. Often, psychological phenomena that were once attributed to sex differences turn out to be more closely related to one’s developmental experience, social roles, and gender identity.

      It is possible that the case study above of insular development in adolescents with PTSD truly reflects sexual dimorphism: perhaps boys and girls develop differently neurologically in the presence of stress for reasons related to their biological sexes. However, it is also possible that this neurological difference is based on gender rather than sex. This may seem unintuitive, but hormonal responses to stress may be influenced by social conditioning that dictates to children how they should express (or not express) stress depending on their genders.

      While the processes behind many dimorphisms are not yet understood, acknowledging that uncertainty is the first step to trying to solve the mysteries that remain. There is, as of the writing of this article, no unified theory of sex and gender in psychology. In order to develop such a theory, the combined skills of neurologists, chemists, psychologists, and sociologists must be recruited. Only through rigorous study of the pathways of neurohormones, the sociology and psychology of gender in humans, and how the two interrelate will we be able to understand the effects of sex versus gender on psychological disorders.

      While this is a complicated task, it is also an extremely worthy one. With proper research, it may be possible to personalize treatments for psychological disorders in a way that makes them much more effective.

References

1. Digitale, Erin. "Traumatic Stress Changes Brains of Boys, Girls Differently." Stanford Medicine News Center. November 11, 2016. Accessed November 15, 2016. http://med.stanford.edu/news/all-news/2016/11/traumatic-stress-changes-brains-of-boys-girls-differently.html.

2. Blakeslee, Sandra. "A Small Part of the Brain, and Its Profound Effects." The New York Times. February 06, 2007. Accessed November 15, 2016. http://www.nytimes.com/2007/02/06/health/psychology/06brain.html.

3. American Psychiatric Association. "Posttraumatic Stress Disorder - DSM-5." American Psychiatric Publishing. 2013. Accessed November 29, 2016. http://www.dsm5.org/Documents/PTSD Fact Sheet.pdf.

Opioid Abuse Deterrence: Where Does Big Pharma Draw the Line?

Alborz Omidian

     One of the biggest challenges facing pharmaceutical companies that specialize in opioid pain medications is the dilemma of manufacturing effective, sustainable drug formulations for patients while simultaneously making their medications resistant to tampering and abuse. Modern pain medications largely operate via an extended-release mechanism, meaning that their active ingredient is designed to be released into the bloodstream at a controlled concentration over a long period of time, providing extended pain relief.
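
     As a rough picture of what “controlled release” means, the Python sketch below uses a simple first-order release model, in which a constant fraction of the remaining drug is released per unit time. The dose and rate constant are invented for illustration and do not describe the formulation of any actual product.

# Simplified first-order release model: fraction released by time t is 1 - exp(-k*t).
import math

dose_mg = 40.0        # hypothetical total drug content of one tablet
k_per_hour = 0.17     # hypothetical release-rate constant

for t in range(0, 13, 2):                      # hours after ingestion
    released = dose_mg * (1 - math.exp(-k_per_hour * t))
    print(f"hour {t:2d}: {released:5.1f} mg released cumulatively")

# Crushing the tablet defeats this profile: the full dose becomes available
# for absorption at once, which is exactly what abuse-deterrent formulations
# are designed to prevent.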

     Naturally, this creates an opportunity for individuals to tamper with medications in order to ingest larger doses at once. Pharmaceutical companies have a vested interest in minimizing the potential risks associated with their products and take steps to formulate medications that are both effective as drugs and resistant to abuse. In other words, the ideal pain medication from a pharmaceutical company’s perspective would provide immediate and prolonged relief to a patient who has undergone a painful surgical operation, for example, but would also resist tampering via crushing, breaking apart, or other attempts to extract its active ingredient for the sake of abuse.

     How is this delicate balance achieved? The production of modern pain medications includes a wide range of abuse-deterrent technologies. For example, a common method of abusing opioids is to either snort (insufflate) the product or to administer it via direct injection.1 Both of these methods first require crushing the medication into a fine powder, because it is usually dispensed in tablet form. As such, many currently available formulations incorporate some type of crush resistance or crush deterrence into their design by enhancing the mechanical strength of the pill to resist physical transformations such as chewing, cutting, and grinding.2

     Other technologies are more creative and less obvious. For example, one of the most common methods for abusing a previous formulation of OxyContin was to prepare it for injection by crushing and then dissolving in an aqueous solution.3 As such, reformulated OxyContin, which is currently on the market, is produced as a solid tablet matrix (matrix referring to the part of the tablet that encapsulates the active ingredient) with chemical properties that both resist mechanical stress and deter injection by turning the active ingredient into a highly viscous gel upon contact with water.4 Other abuse deterrent technologies work by incorporating an opioid antagonist as part of the tablet matrix itself.5 If a patient takes the medication as intended by ingesting the tablet whole, then the opioid antagonist is never released into the bloodstream, and the patient receives pain relief as intended. However, any attempt to tamper with the tablet itself releases the antagonist from the tablet matrix, mixes it with the active ingredient, and deactivates the opioid, preventing abuse.

     However, attempts to circumvent abuse-deterrent technologies are often as ingenious as the technologies themselves. Some involve using acetone and other common household products to chemically extract active ingredients from tablets without physically tampering with them6; others involve suspending opioid formulations in Coca-Cola, or precisely cracking the outer layer of a tablet to increase how much of the active ingredient can be absorbed into the bloodstream at once without triggering the release of opioid antagonists.7 Though these tampering methods are usually described on online forums devoted to discussing abuse and are of highly questionable reliability, they do show that pharmaceutical companies must constantly develop new, innovative technologies to prevent the abuse of their products. Unfortunately, there are drawbacks to this focus. In an attempt to lower the potency of their drugs to discourage abuse, pharmaceutical companies sometimes make their medications too weak for individuals who suffer from intense, chronic pain and require more pain relief than the average user. These individuals often resort to self-medication and are often the same people who communicate online about ways to bypass abuse-deterrent technologies.

     Where do pharmaceutical companies draw the line? How does an organization decide between prioritizing abuse deterrence for certain individuals and the actual efficacy of its medications for the common patient? Is there a win-win solution, or must these goals constantly be weighed against one another in never-ending competition? Furthermore, what justification do private corporations have for controlling individuals and their choices about what substances to put into their bodies? As commercial medication becomes more prominent and pervasive in the coming years, it may be pertinent for future researchers and healthcare professionals to tackle these dilemmas directly.

References

1. Surratt H, Kurtz SP, Cicero TJ. 2011. “Alternate routes of administration and risk for HIV among prescription opioid abusers.” J Addict Dis 30: 334-341.

2. Mastropietro, DJ. Omidian H. 2013. “Current approaches in tamper-resistant and abuse-deterrent formulations.” Drug Dev Ind Pharm 39: 611-624.

3. Schneider, JP. Matthews, M. Jamison, RN .2010. “Abuse-deterrent and tamper-resistant opioid formulations: what is their role in addressing prescription opioid abuse?” CNS Drugs 24: 805-810.

4. Pappagallo, M. Sokolowska, M. 2012. “The implications of tamper-resistant formulations for opioid rotation.” Postgrad Med 124: 101-109.

5. Colucci, SV. Perrino, PJ. Shram, M. Bartlett, C. Wang, Y. et al. 2014. “Abuse potential of intravenous oxycodone/naloxone solution in nondependent recreational drug users.” Clin Drug Investig 34: 421-429.

6. bluelight.org. 2010. “Experiment Thread New Formulation Oxycodone Extraction.” Last modified May 2011. http://www.bluelight.org/vb/threads/523580-Experiment-Thead-New-Formulation-Oxycodone-Extraction

Is Predictive Analytics the Future of Healthcare?

Nila Ray

     In 30 years, physicians may be relying on computers to diagnose you. Predictive analytics is a rapidly growing field that applies statistical methods, machine learning, and artificial intelligence (AI) to massive databases of information in order to predict outcomes. With these tools, we can not only minimize costs but also improve the impact of services and the quality of lives. Healthcare, especially, is an area that could use predictive analytics to save patients extra visits, lower hospital costs, and improve administration. Predictive analytics in healthcare is already projected to become a booming, multibillion-dollar market within the coming years.

     One of the most impressive steps toward predictive analytics was IBM’s development of an AI called Watson. The company originally developed Watson around 2006 with the intention of creating a machine that could quickly answer any question, like those posed on Jeopardy. It could take the clues given to Jeopardy contestants and prepare answers faster than human contestants could even buzz in, thanks to its predictive capabilities. IBM’s next goal was to bring Watson to market, especially to tackle the health industry. There are over 5,000 hospitals in the United States, and healthcare data is doubling every two years. As much as 80% of healthcare data is unstructured and invisible to computer systems. A patient’s file accumulates dozens of notes from previous visits, vital signs, illness and wound assessments, pieces of medical and family history, and even records of daily habits over the course of a lifetime, much of which is overlooked before diagnosis. During a visit, there is only so much a doctor can review before seeing each individual patient. Analyzing more of this data would allow potential complications to be addressed proactively, or a more accurate diagnosis to be given, avoiding repeat visits. With an intelligent system such as Watson, each patient’s data could be structured, compiled, and analyzed so that the doctor can recognize patterns that often remain unseen. However, predictive analytics does not apply only to individualized healthcare. Watson can scan through 200 million pages of text in just three seconds, which allows analysis not only of individual patient files but also of thousands of patients across multiple providers. That data would allow experts to assess population trends and concentrate preventative health education in areas displaying concerning trends. Nor is this powerful method limited to artificial intelligence.

     Often, the problem in patient care lies in the provider’s diagnosis. For a physician in a hospital, it is crucial to determine how much treatment a patient needs so that the establishment does not waste funds and the patient recovers successfully, especially in cases of severe illness. The physician cannot track each patient’s real-time status in person to make sure his or her risk of deterioration does not increase, so a patient’s status can be misjudged, leading to under- or overtreatment. However, with the help of real-time predictive analytics software such as AWARE, a patient in critical condition can be comprehensively monitored and receive a more precise, timely diagnosis. AWARE is a decision-support tool whose algorithm takes real-time, high-value data from electronic medical or health record (EMR or EHR) systems to evaluate a patient’s condition across all organ systems. Trials have shown that using AWARE in ICUs to structure clinical data allowed physicians to deliver treatments more effectively. This is just one example of the daily application of predictive algorithms in healthcare.

     The foundations of predictive analytics can be built into routine patient care. The approach is driven by the hundreds of EMR and EHR databases that already hold enough information to predict many patient outcomes, and because algorithms drive the predictions, they can be adjusted to make the analysis as broad or as specific as needed. As mentioned before, the purpose is not only to improve individual care but also overall care for institutions. One target area is lowering costs by assessing the treatments given to low- and high-risk patients and by identifying preventable readmissions; by allocating treatments to patients on a need-based system, a hospital saves funding. A recent report showed that Atlantic Health, which oversees five hospitals, cut $70 million in operational costs within three years via predictive analytics. Similarly, by assessing common symptoms among groups of patients (those more likely to have a stroke, for example), a hospital can determine the next step for these patients and even begin applying targeted preventative care and education.
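
     To make the idea of a readmission model concrete, the sketch below trains a toy classifier on synthetic data. Everything here is invented for illustration: the feature names, the coefficients used to generate the labels, and the numbers for the “new patient.” A real system would use de-identified features drawn from EMR/EHR databases and would require far more careful validation.

# Illustrative only: a toy 30-day readmission-risk model fit to synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(65, 12, n),       # age (years)
    rng.poisson(1.5, n),         # number of prior admissions
    rng.gamma(2.0, 2.0, n),      # length of stay (days)
])
# Synthetic label: probability of readmission rises with each feature.
logits = 0.03 * (X[:, 0] - 65) + 0.5 * X[:, 1] + 0.15 * X[:, 2] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
new_patient = np.array([[78, 3, 6.0]])           # age 78, 3 prior admissions, 6-day stay
risk = model.predict_proba(new_patient)[0, 1]    # predicted probability of readmission
print(f"predicted 30-day readmission risk: {risk:.2f}")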

     Most importantly, a tool is only as good as the providers who use it. Predictive analytics is not a field without challenges: it requires the cooperation of those in the healthcare industry and the stability of the tool itself. The influx of patient data will grow continually and will often come with gaps, so the data must be scalable and appropriately filtered to optimize the analytical results. In addition, applications must meet privacy requirements, such as HIPAA (the Health Insurance Portability and Accountability Act), as current medical record systems do. Health providers and medical staff will need training to use these tools skillfully and to manage data sustainably, with a collaborative vision for applying the technology to routine patient care. As improvements continue, these challenges can hopefully be overcome for the greater benefit of predictive analytics. With such a tool, the process of treatment in healthcare can become increasingly efficient and expedited – and who knows, it may just happen within the next 30 years.

References

Healthcare Finance. 2016. “Healthcare predictive analytics market should hit $19.5 billion by 2025, research shows.” Last modified Nov 28, 2016. http://www.healthcarefinancenews.com/news/healthcare-predictive-analytics-market-should-hit-195-billion-2025-research-shows

American Hospital Association. 2016. “Fast Facts on US Hospitals.” Last modified Jan 2016. http://www.aha.org/research/rc/stat-studies/fast-facts.shtml

IBM. 2016. “IBM Watson Health.” Accessed Nov 14, 2016. https://www.ibm.com/watson/health/

Harvard Business Review. 2016. “Making Predictive Analytics a Routine Part of Patient Care.” Last modified April 21, 2016. https://hbr.org/2016/04/making-predictive-analytics-a-routine-part-of-patient-care

Healthcare IT News. 2016. “Predictive analytics help Atlantic Health save $70 million in labor, operational costs.” Last modified Feb 18, 2016. http://www.healthcareitnews.com/news/predictive-analytics-help-atlantic-health-save-70-million-labor-operational-costs

Harbinger Systems. 2015. “Predictive Analytics in Healthcare.” Last modified Dec 24, 2015. http://blog.harbinger-systems.com/2015/12/predictive-analytics-in-healthcare/

HealthData Management. 2015. “Top Challenges to Analytics in Healthcare? Not Technology.” Last modified Sept 23, 2015. http://www.healthdatamanagement.com/opinion/top-challenges-to-analytics-in-healthcare-not-technology



Nila Ray is a second-year student at the University of Chicago majoring in Biological Sciences. Her interests include medicine, biotechnology, and healthcare.

The Advent of the Internet: How has Memory Changed?

Clara Sava-Segal

     Think of all the times you’ve pulled out your phone to “just Google it.” Although its precursors were developed alongside computers in the early 1960s, the Internet started having a major impact on culture and communication in the mid-1990s with the introduction of email, instant messaging, and social networking. At its core, its power lies in providing instantaneous access to an enormous amount of information.

     However, the Internet does not just distribute information; it inherently shapes the process of thought. In his Atlantic article “Is Google Making Us Stupid?”, American writer Nicholas Carr notes that the mind now accepts information in the manner in which the Internet presents it. Various studies have explored the psychological ways in which media shape opinion. More interestingly, research has also explored how the advent of the Internet has produced various cognitive changes, especially when it comes to memory. This raises the question: has neural processing itself changed?

     At its core, the way we process a text (that is, the way we read) has indeed changed. A five-year study of computer searches determined that people tend to spend little time on any particular source, reading only the first page or two and rarely returning to saved articles. Carr (2008) observed that there may be more reading now, with constant access to text messaging, email, and the Internet itself, than in the 1970s, when television was the primary source of information. However, this “information age” has progressed to the point where the immense amount of accessible information (stimuli) is impossible to process in its entirety.

     Undoubtedly, how we process information is tied to memory. Short-term memory is the mind’s capability to hold small amounts of information that can easily be manipulated for short periods of time, while long-term memory reflects our more extended knowledge. Much of what we experience with online reading reflects not only our short-term memory but also our working memory. To break it down: short-term memory is centered on the information we keep in mind before forgetting it or transferring it to long-term memory. For instance, if we are told a phone number and then forget it, the brief time that we knew that number was due to our short-term memory; our address, however, is committed to our long-term memory for future use. Working memory is similar to short-term memory, but it deals more with the manipulation and organization of information. Baddeley proposed a model to explain the processes identified with working memory. The model describes the collaboration between three distinct but abstract brain functions, separated not by location in the brain but by purpose: a central executive controls the interaction between two subsystems, one processing incoming phonological information and the other visuo-spatial information. For instance, if we are listening to a video (phonological) that has subtitles (visuo-spatial), the information is brought together and understood through the central executive. The act of conjoining and manipulating this information is a product of our working memory, and the processed results either exist briefly in our short-term memory or are transmitted to our long-term memory. The movement into long-term memory is handled by a fourth component of the model, termed the ‘episodic buffer.’ Evidently, such actions demand extensive integration between various internal brain processes.
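
     To make the flow of information in that model concrete, here is a toy schematic in Python. The class and method names are invented for illustration; this is a diagram in code, not a cognitive simulation.

# Toy schematic of the multicomponent working-memory model described above.
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    phonological_loop: list = field(default_factory=list)       # sounds and speech
    visuospatial_sketchpad: list = field(default_factory=list)  # images and locations
    episodic_buffer: list = field(default_factory=list)         # integrated episodes
    long_term_memory: list = field(default_factory=list)

    def attend(self, phonological=None, visuospatial=None):
        """Central executive: route incoming information to the right subsystem."""
        if phonological:
            self.phonological_loop.append(phonological)
        if visuospatial:
            self.visuospatial_sketchpad.append(visuospatial)

    def integrate(self):
        """Bind the two streams into a single episode held in the episodic buffer."""
        episode = (tuple(self.phonological_loop), tuple(self.visuospatial_sketchpad))
        self.episodic_buffer.append(episode)
        self.phonological_loop.clear()
        self.visuospatial_sketchpad.clear()

    def consolidate(self, important: bool):
        """Transfer the episode to long-term memory, or let it fade (be forgotten)."""
        if self.episodic_buffer:
            episode = self.episodic_buffer.pop()
            if important:
                self.long_term_memory.append(episode)

wm = WorkingMemory()
wm.attend(phonological="narration of a video", visuospatial="its subtitles")
wm.integrate()
wm.consolidate(important=False)   # most online reading ends here
print(wm.long_term_memory)        # [] -- nothing was retained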




     Knowing the complexity of our memory systems, it is easy to understand why they are failing us in this modern age. If we observe reading habits, it is not uncommon for students to read articles for school and not be able to remember what was just read. It is just as common to skip through a piece as opposed to reading it from start to finish. These characteristics are not found only in online reading, but in physical reading as well, and Internet usage itself is oftentimes very brief and not sequential. Our experiences alone could suggest that memory systems have not yet evolved to match these rapid societal changes. The brain is incapable of incorporating all the information the Internet throws at us into long-term memory. Therefore, the information--for the most part--is simply dismissed from short-term memory.

     From one perspective, the inability to store everything in our long-term memory is inherent to the human condition rather than a failing of our memory system. Daniel Schacter (1999) argued for what he termed “the seven deadly sins of memory.” Taking an evolutionary perspective, Schacter suggested that the brain cannot process everything a person is exposed to in daily life--people, thoughts, interactions, and so on. Therefore, we “forget” certain things, making it easier to remember what is most important. If we were to take everything from our short-term memory and put it into our long-term memory, our brain would completely overload. Our working memory cannot discern and meaningfully manipulate the information it is exposed to when there is simply too much of it.

     But, Schacter noted this distinction in 1999, before considering the implications of new technology. The amount of modern stimuli demands that we begin to reconsider our understanding of our memory system. New technology necessitates that we reevaluate how much it is acceptable to forget. As such, it is not unreasonable to assume that the advent of the Internet would force the brain to readapt. As it stands, we see inherent changes within our ability to process information.

     But how has brain processing actually changed? Wolf (2007) explained that the contemporary style of reading differs from traditional reading in that it revolves around efficiency. People who read off the Internet simply become decoders of information. They no longer engage in making connections with that information, nor do they absorb it, because they expect to have access to it later. The comfort that availability provides has led to a reliance on the Internet.

     To expand, Sparrow, Liu, and Wegner (2011) consider how the Internet serves as a sort of external memory source. Knowing that certain information can be found quickly, people choose not to remember it. The concept of an external memory source was observed prior to the Internet age, in group work and long-term relationships. Individuals who have an external memory source develop transactive memory, in which one might not hold a specific memory but knows where to find it--in a person or a location. Studies have shown that when presented with any question, easy or difficult, individuals automatically think of the Internet even if they already know the answer. Likewise, when asked to recall information learned from the Internet, subjects remembered where they had read a piece of information better than the information itself (Sparrow, Liu, & Wegner, 2011).

     Therefore, it is undeniable that the advent of the Internet has changed our means of processing information. However, the brain is just readapting its old techniques to meet the needs of a changed society. Perhaps, with time, the brain will evolve differently in response to its new environment. This would be a whole new age for memory--whether positive or negative. Such realizations may provide insights into the addictive relationship the Internet has with society.

References

1. Carr, N. (2008). Is Google making us stupid? The Atlantic 302, 56-63.

2. Sparrow, B., Liu, J., & Wegner, D. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Sciencexpress. Retrieved May 28, 2012 from http://www.sciencemag.org/content/333/6043/776

3. Wolf, M. (2007). Proust and the Squid: The Story and Science of the Reading Brain. New York: HarperCollins Publishers.

4. “Working Memory Model.” Psychology. WordPress, n.d. Web.

What Does the Replication Crisis mean for Psychology and Research?

Julia Smith

     With headlines like “Is Science Broken?”1 and “Psychology’s Credibility Crisis,”2 it is clear that people are questioning the fundamental reliability of the scientific process, and specifically that of psychology. Recently, the replication crisis – the realization that many published scientific findings cannot be replicated – shook up the world of research. Psychology’s replication crisis is currently in the spotlight, but it has implications for all scientific research. The replication crisis is a divisive subject because it calls into question the validity of previously accepted discoveries in psychology and, by extension, that of future studies. The discovery of the replication crisis certainly poses problems, but, having been reminded of research’s shortcomings, scientists are placing a renewed emphasis on skepticism and rigor.

     Replication is the process of re-creating an experiment to see if its findings can be reproduced. Reproducibility is a cornerstone of science: a finding is not considered valid if replicated experiments do not consistently yield the same results. There is a spectrum of replication types, with exact replication on one end and conceptual replication on the other. In exact replication, researchers follow the procedure of the original experiment to the letter. In conceptual replication, researchers test the previously established principle but diverge from the experimental methods of the first study.3

     Journals want to publish original research, so replication studies were not considered worthwhile until it became clear that psychology was in the midst of a replication crisis. Researchers first uncovered the replication crisis by meticulously reproducing old experiments and treating published findings with a healthy dose of skepticism. This increasing skepticism of research has grown over the past decade or so. In 2005, Dr. John Ioannidis published a well-known paper entitled “Why Most Published Research Findings are False.”4 This paper used a mathematical model that took into account factors such as a study’s sample size, effect size, flexibility in experimental design, and bias to support the title’s proposition. This article raised awareness in the scientific community about the potential for error, pinpointed some contributing factors, and encouraged researchers to practice skepticism. Though there was some doubt regarding the reliability of Ioannidis’ model, it was, nevertheless, an important reminder that our research practices are not infallible; “it opened up the possibility that, to the extent that this model holds true, […] many false positives would be published,”5 says Dr. Ronald Thisted, a University of Chicago researcher who studies reproducibility.
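     To see how a model like Ioannidis’s works in spirit, consider the sketch below. It computes the positive predictive value of a significant result--the chance that a “positive” finding is actually true--from the prior odds that a tested relationship is real, the study’s statistical power, and the significance threshold. The formula is the standard one from that framework, but the specific numbers plugged in are illustrative assumptions rather than figures from the paper, and factors such as bias and multiple competing teams are deliberately left out.

# Sketch of the kind of calculation behind Ioannidis (2005): the probability that a
# statistically significant finding is true (its positive predictive value, or PPV).
# The input numbers below are illustrative assumptions, not values from the paper.

def ppv(prior_odds, power, alpha):
    """PPV = expected true positives / expected total positives (bias ignored)."""
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# A well-powered test of a plausible hypothesis:
print(round(ppv(prior_odds=1.0, power=0.8, alpha=0.05), 2))   # ~0.94

# An underpowered, exploratory study in which few tested hypotheses are true:
print(round(ppv(prior_odds=0.1, power=0.2, alpha=0.05), 2))   # ~0.29

     Under the second set of assumptions, most “significant” findings would be false positives even before any questionable research practices enter the picture, which is the thrust of the paper’s proposition.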

     Another major step in this growing awareness – and the reason people know about the replication crisis – was the Reproducibility Project. In 2011, Dr. Brian Nosek, a University of Virginia researcher, started the Reproducibility Project, coordinating the reproduction of 100 recently published psychological studies. Dr. Matt Motyl, a University of Illinois at Chicago scientist who worked in Dr. Nosek’s lab and participated in the project, describes its inception: “In our lab we would talk about weird things in the literature and any time we would look at a paper that seemed weird or counterintuitive we were like ‘how does that happen.’” Often, they found that dubious statistics and sample sizes were to blame, but these observations were just idle talk until one day they came upon a study that supported Extrasensory Perception, the theory that some humans have a “sixth sense.” Skeptical of this proposition, Nosek’s lab and others ran replications of the experiment. They could not replicate the original results, but the same psychological journal that had published the original article would not publish their findings. Dr. Motyl recalls, “That increased our interest in trying to replicate things, and at that point we decided ‘Okay, let’s do this very scientifically. Let’s randomly sample 100 studies from some of the big journals in the field and then assign them to teams.’ Many of us did elaborate pilot tests and we talked to original authors and tried to get them to approve our materials. […] We thought that trying to replicate would be important for the field because science is supposed to be able to replicate. If it doesn’t replicate then something’s wrong there.”6 In this way the Reproducibility Project began. Dr. Nosek and the 270 researchers who participated in the project were onto something: only 36% of the 100 findings could be replicated. Results such as these can make us wonder how these original findings were discovered and published. 7

     There are a variety of possible reasons for contradictions between original findings and the findings of a replication study. Some of these explanations, such as p-hacking, implicate the scientists involved in the original study. P-hacking is the manipulation of data or analyses to get a p-value under .05 (the conventional threshold for calling results statistically significant and thus publishable). Despite the obsession in research with sub-.05 p-values, this alone is not always a sufficient standard of proof.8 Other reasons for non-replication reflect weaknesses in the methods of the original study: perhaps the sample size was too small, or a factor other than the independent variable strongly influenced the results. It is also possible that the replication studies themselves had methodological flaws. Experimental design and working with data are thorny tasks, and sometimes no one is to blame, but the fact remains that psychology now has a host of contradictions to explain.
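     A small simulation makes the logic of p-hacking concrete. The sketch below measures several outcomes when there is, by construction, no real effect, and then reports only the smallest p-value; the crude permutation-style test and all of the numbers are assumptions chosen to keep the example self-contained rather than to mirror any particular study.

import random

# Toy p-hacking demonstration: test several outcomes under the null hypothesis
# (no true effect anywhere) and report only the best p-value.

def one_p_value(n=20, permutations=200):
    """Compare two groups drawn from the same distribution; return an approximate permutation p-value."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    observed = abs(sum(a) / n - sum(b) / n)
    pooled = a + b
    extreme = 0
    for _ in range(permutations):
        random.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n)
        extreme += diff >= observed
    return extreme / permutations

def false_positive_rate(outcomes, experiments=400):
    """Fraction of null experiments declared 'significant' when only the best outcome is reported."""
    hits = 0
    for _ in range(experiments):
        best_p = min(one_p_value() for _ in range(outcomes))
        hits += best_p < 0.05
    return hits / experiments

random.seed(0)
print(false_positive_rate(outcomes=1))  # close to the nominal 5% false positive rate
print(false_positive_rate(outcomes=5))  # noticeably inflated by picking the best result

     Reporting only the most favorable of five outcomes pushes the false positive rate well above the advertised 5 percent, which is exactly why a single sub-.05 p-value is a weak standard of proof.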

     For many researchers, the replication crisis constitutes an important reminder to conduct rigorous research, but it does not mark the end of psychology – or science – as we know it. Psychology journals have taken steps to address the problems raised by the replication crisis. Though original findings are still favored, a journal is now more likely to publish replications. Journals also now allow more space for researchers to list their methods and calculations. This push towards greater transparency facilitates replications and incentivizes rigor over flashy findings with little backing. These promising new practices seem to be working, and Dr. Marc Berman, a cognitive psychologist from the University of Chicago, says the replication crisis reminded us that “Everybody needs to be more careful. Everybody needs to really rigorously evaluate their science and make sure that what they’re doing is replicable.” At the same time, Dr. Berman cautions, “You don’t want it to go too far the other way where scientists lose their creativity and their freedom. You’ve got to find that good balance.”9 Hopefully, with a renewed sense of skepticism, the field can successfully re-equilibrate.

References

1. Woolston, Chris. 2015. “Online debate erupts to ask: is science broken?” Nature 519, 393. Accessed December 8, 2016. doi: 10.1038/519393f

2. Horgan, John. 2016. “Psychology’s Credibility Crisis: the Good, the Bad, and the Ugly.” Scientific American. Accessed December 8, 2016. https://www.scientificamerican.com/article/psychology-s-credibility-crisis-the-bad-the-good-and-the-ugly/.

3. Noba. 2016. “The Replication Crisis in Psychology” Accessed November 26, 2016. http://nobaproject.com/modules/the-replication-crisis-in-psychology#content.

4. Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLOS Medicine. Accessed November 26, 2016. DOI:http://dx.doi.org/10.1371/journal.pmed.0020124.

5. Dr. Ronald Thisted in discussion with the author, November 2016.

6. Dr. Matt Motyl in discussion with the author, November 2016.

7. Open Science Framework. 2015. “Estimating the Reproducibility of Psychological Science.” Last modified March 3, 2016. https://osf.io/ezcuj/wiki/home/.

8. FiveThirtyEight Science. 2015. “Science Isn’t Broken.” Accessed November 26, 2016. http://fivethirtyeight.com/features/science-isnt-broken/#part1.

9. Dr. Marc Berman in discussion with the author, November 2016.

Functionally Curing Type 1 Diabetes?

Maritha Wang

     A diagnosis of type 1 diabetes mellitus means daily insulin injections, constant monitoring of glucose levels, and unending misconceptions from people who think you somehow brought it upon yourself. After all, don’t you get diabetes from poor diet and lack of exercise? Type 1 diabetes is oftentimes confused with type 2 diabetes, in which the body is resistant to insulin, causing blood glucose levels to rise higher than normal. In actuality, type 1 diabetes is an autoimmune disease in which the body cannot produce insulin, the key hormone used to regulate blood sugar levels. The immune system attacks the beta cells in the pancreas that produce insulin. Without insulin, the body’s sugar levels fluctuate wildly, causing symptoms including increased thirst, frequent urination, extreme hunger, and in some cases, blurred vision.1

     Currently, there is no cure for type 1 diabetes. The search for one is among the most pressing issues in 21st-century medical research and has led to the creation of organizations focused on diabetes treatment and research, such as the Juvenile Diabetes Research Foundation (JDRF) and the American Diabetes Association. Researchers, many of whom are associated with these foundations or at least receive funding from them, have made much progress in the management of type 1 diabetes in the last few decades.

      One of the most widely used technologies for managing type 1 diabetes today is the insulin pump.2 These pumps deliver insulin through a catheter-like tube that is inserted under the skin with a needle. However, they can be large, clunky, and inconvenient.

     Other research efforts include islet transplantation, an experimental procedure in which insulin-producing clusters of cells called islets are transplanted into the patient. This has proven difficult, since the islets must first be grown from stem cells (a technique that was under much scrutiny in the early 2000s) and then successfully transplanted without being killed off by the body’s natural immune response.

     Other efforts to help patients with type 1 diabetes include smartphone applications such as Sugar.IQ, created by Medtronic and IBM, which uses “real-time personalized insights” to predict the onset of hypoglycemia (low blood sugar). It also enables users to track their food consumption and provides data about the impact of specific foods on their diabetes. The application was unveiled this past September,3 and it is expected to be fully released later this year after further user testing.

     Most recently, many doctors, engineers, and scientists from all around the world have turned their attention to artificial pancreas systems. One of the most promising systems so far, the PEC-Encap device, has even made it to human clinical trials.

     ViaCyte, a San Diego, California-based biomedical device company that currently holds more than 200 patents worldwide, has in recent years been developing the PEC-Encap device. This device could potentially act as a bioartificial pancreas, which could be the “holy grail” of type 1 diabetes treatment.4 The first part of the solution is creating the pancreatic cells that type 1 diabetes patients lack. This is achieved using PEC-01 cells, which are manufactured from pluripotent embryonic stem cells that scientists can induce to differentiate into the needed cell types.5 The second part is the device itself, which is an engineering challenge in its own right: it must protect the PEC-01 cells from the body’s immune system while remaining permeable enough that insulin can pass from the islets inside the device to the tissue outside it.

     Essentially, ViaCyte’s device encapsulates insulin-producing islets. The device and procedure are currently in clinical trials, which began on humans in 2014. The initial phase is focusing on testing the safety of the device, while future studies will focus on determining insulin dosage requirements.

     While the development of this device has passed many hurdles, it still has a long way to go before we can determine whether it will functionally cure type 1 diabetes. Federal regulation of biomedical devices is stringent since the cost is high if something goes wrong. Nevertheless, PEC-Encap is one new technology giving us hope for type 1 diabetes patients. The days of unending injections and finger pricks are not yet over, but may be soon.

References

1. Mayo Clinic. 2014. Diseases and Conditions: Type 1 Diabetes. http://www.mayoclinic.org/diseases-conditions/type-1-diabetes/basics/symptoms/con-20019573.

2. American Diabetes Association. 2015. Insulin Pumps. http://www.diabetes.org/living-with-diabetes/treatment-and-care/medication/insulin/insulin-pumps.html?referrer=https://www.google.com/.

3. HealthcareITNews. 2016. Medtronic introduces IBM Watson-powered Sugar.IQ diabetes app. http://www.healthcareitnews.com/news/medtronic-introduces-ibm-watson-powered-sugariq-diabetes-app.

4. American Diabetes Association. 2015. Type 1 Diabetes at a Crossroads! http://care.diabetesjournals.org/content/38/6/968.

5. Viacyte. PEC-Encap™ (VC-01™) – Improving Diabetes Treatment. http://viacyte.com/products/pec%E2%80%90encap-vc-01/.



Maritha Wang is a first-year student at the University of Chicago majoring in Chemistry. She is interested in exploring research in a variety of fields including chemistry, materials science, and medicine.

Your mind on meditation

Yohyoh Wang

     In recent years, a large population of busy, working Americans has joined the wellness movement. Sunlit yoga classes swelling into atonal choirs of soft oohs and ahhs, Whole Foods aisles bursting with supplements to sharpen the mind, and self-help articles urging readers to slow down and take time for themselves are all symptomatic of a collective embrace of wellness and mindfulness. One notable practice – meditation – has pushed to the forefront of this blossoming movement. Conscious relaxation is practiced by Wall Street bankers via Transcendental Meditation, offered in smartphone applications like Headspace, and even advocated on our own campus by the Health Promotion and Wellness department.

     Meditation is an ancient practice, with roots in Hindu tradition dating to around 1500 B.C. and a long history among Buddhist monks,1 but it has only recently become accessible to a modern, secular audience. Novice meditators are instructed to simply sit comfortably, breathe naturally, and focus on the physical sensations of the body. This brief practice serves as a break from the stresses and worries of day-to-day life. While meditation offers emotional perks - long-time and novice meditators alike report feeling relaxed, calmer, and clearer-headed after their practice - its benefits extend further into the human body, particularly the brain. This article will explore research on meditation’s effects on the brain’s structure and function, as demonstrated by participants with varying amounts of meditation experience.

Short-term benefits of meditation

     One does not need to adopt a whole new lifestyle to develop resilience to stress; even short bouts of meditation prove highly beneficial to a frazzled mind. A study led by David Creswell from Carnegie Mellon University2 examined the effects of meditation training on participants’ responses to the Trier Social Stress Test (TSST), a series of controlled stressful situations. Participants in the experimental group received three days of twenty-five-minute meditation trainings, during which they were instructed to pay attention to their breath, bodily sensations, and thoughts and emotions. For the same duration, participants in the control group were instructed to read and analyze a set of poems. Upon taking the TSST, the experimental group self-reported lower levels of stress than did the control group; that is, those who underwent three days of meditation training perceived themselves as less stressed than those who did not.

     Brief meditation has also been shown to alter brain anatomy, particularly in regions associated with self-referential thinking. In a study led by Britta Hölzel from the Massachusetts General Hospital3, participants underwent the eight-week Mindfulness-Based Stress Reduction (MBSR) program, consisting of weekly group meetings and at-home exercises such as mindful yoga and meditation. An average of 22.6 total hours was spent per individual over the course of the program. Two weeks after the MBSR program concluded, MRI scans of the participants were recorded and compared to scans taken prior to the experiment. Results revealed increased gray matter concentration in four regions of the brain commonly associated with self-referential processing: the posterior cingulate cortex (PCC), the left temporo-parietal junction (TPJ), and two clusters in the cerebellum. The PCC is involved in attention and emotion, the TPJ in language and comprehension, and the cerebellum in coordination and body awareness.

     Other studies4 have identified additional brain regions positively affected by meditation. These regions include the anterior cingulate cortex, associated with self-regulation, and the hippocampus, associated with emotion and memory. An increase in gray matter (neuronal cell bodies and connections) in these areas suggests that meditation can fortify the higher processes associated with psychological well-being and resilience against stress.

     Most working Americans can only practice meditation as a “moonlight” activity – a routine calming of an overworked mind – by starting or ending their day with ten minutes of mindfulness. Thankfully, even small bouts of meditation can generate psychological and neuroanatomical changes associated with overall emotional amelioration.

Long-term benefits of meditation

     Those who fuse mindfulness with their everyday lives, or turn away from the causes of their stress to pursue a lifestyle of mindfulness, often tout the enduring benefits of prolonged, rigorous meditative practice. The portrayal of the calm, wise monk in literature and pop culture is an ever-present reminder of this common association. Look no further than His Holiness the 14th Dalai Lama, who maintains his presence on the world stage with press conferences and self-penned articles urging people to practice kindness.

     To examine the effects of long-term meditative practice, Antonietta Manna of the G. d’Annunzio University Foundation5 led a study comparing functional magnetic resonance imaging (fMRI) data from Buddhist monks and “moonlight” meditators while they meditated. Novice meditators with only ten days of practice showed fMRI activation in the posterior and anterior cingulate cortex, consistent with the Hölzel study’s conclusions. Monks also showed activation of the anterior cingulate cortex, but in their case the anterior prefrontal cortex and superior temporal gyrus were activated above baseline as well. These two regions are associated with personality expression and social cognition, respectively. The increased activation of social cognitive regions in the brains of the monks, for whom mindfulness is a lifestyle, suggests that meditation could, in addition to bringing about peace of mind, encourage prosocial, or selfless, behavior.

     The case for meditation’s positive influence on prosocial behavior has been made by multiple studies. A single day of compassion training, which included meditation and mindfulness exercises, increased compassionate behavior by participants in a virtual game.6 A two-week period of training enhanced participants’ willingness to take on financial burdens to help someone who needed the money.7 An eight-week period of training raised the percentage of participants who gave up a seat to someone in more need.8

     Although meditation in any dosage seems to proffer mental and emotional benefits, it remains to be seen whether the mindfulness movement alone can cultivate a prosocial environment for the working American. “Moonlight” meditators, while able to temporarily retreat into peaceful meditation, may require deeper or lengthier sessions than what currently fits in a busy schedule. Perhaps regular meditation on its own does not give rise to significant neurological or psychological benefits; but, for now, the post-Om buzz is enough to keep the movement going.


References

1. Robert Puff. “An Overview of Meditation: Its Origins and Traditions,” Psychology Today, July 7, 2013, accessed November 30, 2016, https://www.psychologytoday.com/blog/meditation-modern-life/201307/overview-meditation-its-origins-and-traditions.

2. J. David Creswell, Laura E. Pacilio, Emily K. Lindsay, and Kirk Warren Brown. “Brief mindfulness meditation training alters psychological and neuroendocrine responses to social evaluative stress,” Psychoneuroendocrinology (2014): 44, accessed November 9, 2016, doi:10.1016/j.psyneuen.2014.02.007.

3. Britta K. Hölzel, James Carmody, Mark Vangel, Christina Congleton, Sita M. Yerramsetti, Tim Gard, and Sara W. Lazar. “Mindfulness practice leads to increases in regional brain gray matter density,” Psychiatry Research: Neuroimaging (2011): 191, accessed November 10, 2016, doi: 10.1016/j.psychresns.2010.08.006.

4. Christina Congleton, Britta K. Hölzel, and Sara W. Lazar. “Mindfulness Can Literally Change Your Brain”, Harvard Business Review, January 8, 2015, accessed November 10, 2016, https://hbr.org/2015/01/mindfulness-can-literally-change-your-brain.

5. Antonietta Manna, Antonino Raffone, Mauro Gianni Perrucci, Davide Nardo, Antonio Ferretti, Armando Tartaro, Alessandro Londei, Cosimo Del Gratta, Marta Olivetti Belardinelli, and Gian Luca Romani. “Neural correlates of focused attention and cognitive monitoring in meditation,” Brain Research Bulletin (2010): 82, accessed November 12, 2016, doi: 10.1016/j.brainresbull.2010.03.001.

6. Susanne Leiberg, Olga Klimecki, and Tania Singer. “Short-Term Compassion Training Increase Prosocial Behavior in a Newly Developed Prosocial Game,” PLoS One (2011), accessed November 14, 2016, doi: 10.1371/journal.pone.0017798.

7. Helen Y. Weng, Andrew S. Fox, Alexander J. Shackman, Diane E. Stodola, Jessica Z. K. Caldwell, Matthew C. Olson, Gregory M. Rogers, and Richard J. Davidson. “Compassion training alters altruism and neural responses to suffering,” Psychological Science (2013): 24, accessed November 14, 2016, doi: 10.1177/0956797612469537.

8. Paul Condon, Gaëlle Desbordes, Willa B. Miller, and David DeSteno. “Meditation Increases Compassionate Responses to Suffering,” Psychological Science (2013): 24, accessed November 14, 2016, doi: 10.1177/0956797613485603.

9. Shaw, Bob. The Dalai Lama on a chairlift in the mountains of New Mexico. April, 1991.

Image Source: http://www.slate.com/content/dam/slate/articles/arts/culturebox/2014/02/140226_CBOX_DalaiLamaSkiing.jpg.CROP.original-original.jpg

Breakthrough Starshot & The Past, Present, and Future of Interstellar Travel

Tom Klosterman

     In March, the scientific community was buzzing with excitement over “Starshot,” the new project initiated by billionaire Yuri Milner’s Breakthrough Initiatives. Starshot aims to demonstrate the possibility of ultra-fast spacecraft that could achieve interstellar travel within a single generation. For decades, voyages between our solar system and the hundreds of billions of other star systems in our galaxy have endured only in the dreams of futurists and sci-fi writers. But now, even Stephen Hawking himself has backed Starshot, joining the ranks of dozens of other expert physicists, astronomers, and astronauts affiliated with the promising venture.
     The celestial target for the glamorous new project is Alpha Centauri, the star system nearest to us. This system likely features Earth-like terrestrial planets, making it a good place to search for extraterrestrial life and a potential backup for humanity if we ever need to flee Earth.
      While traveling through the Alpha Centauri system, the Starshot mission intends to employ thousands of “Nanocrafts” to record data. Each tiny spacecraft will weigh only a few grams and carry cameras, thrusters, a power supply, and navigation and communication equipment. Attached to each Nanocraft will be a thin “Lightsail,” also weighing only a few grams, which will measure several square meters in area but be only a few hundred atoms thick. Both pieces of equipment present monumental engineering challenges, but once they are achieved, the miniature probes would be able to reach Alpha Centauri in merely 20 years. With current technology, the same journey would span 30,000 years.
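     The arithmetic behind those two numbers is straightforward, as the back-of-the-envelope sketch below shows. The 20-percent-of-light-speed cruise figure and the roughly 44 km/s speed assumed for a conventional probe are assumptions made for this sketch rather than numbers taken from the project itself.

# Back-of-the-envelope check on the travel times quoted above.
# The 0.2 c cruise speed and the 44 km/s conventional-probe speed are assumptions.

LIGHT_YEAR_KM = 9.461e12     # kilometers in one light year
DISTANCE_LY = 4.37           # distance to Alpha Centauri in light years
SECONDS_PER_YEAR = 3.156e7

# A Starshot-style probe cruising at 20% of light speed:
years_at_fifth_of_c = DISTANCE_LY / 0.2
print(round(years_at_fifth_of_c, 1))   # ~21.9 years, i.e. "merely 20 years"

# A conventional probe coasting at roughly 44 km/s:
conventional_speed_km_s = 44
years_conventional = DISTANCE_LY * LIGHT_YEAR_KM / (conventional_speed_km_s * SECONDS_PER_YEAR)
print(round(years_conventional))       # on the order of 30,000 years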
     The miniature size of Starshot’s probes is a radical departure from previous interstellar craft designs. One of the earliest serious enterprises was a British study named “Project Daedalus.” During the project’s five-year life span in the 1970s, its developers proposed a humongous spaceship designed to carry 50,000 tons of fuel and measuring nearly as large as the Empire State Building. For obvious reasons, the ship was never built and the project was scrapped. However, the central idea that ‘bigger is better’ has persisted to this day in pop science and science fiction (think Interstellar), which is why Starshot is so revolutionary.
     In the decades since Daedalus, several other projects have been proposed, both manned and unmanned. In recent years, the most influential group has been “Project Icarus.” Its proposed spacecraft, an elegant design, would draw its power from the nuclear fusion of heavy water. It attracted attention for its readily available fuel source and its flashy look; it was even nicknamed “Firefly” for its colorful tail. No doubt many different designs will be proposed in the years to come, but it may be difficult to imagine one as promising as Breakthrough Starshot.
     Starshot appears more reasonable than these previous projects because of its innovative yet practical technologies. Once designed, the Nanocrafts would require few raw materials, and each launch would cost only several thousand dollars.
      But not all stages of the project are small in scale. After the Nanocrafts are rocketed into space, an array of lasers on Earth would put the “wind” in their light sails, with “wind” in this case meaning 100 gigawatts of power, roughly the output of France’s entire electrical grid, or of fifteen nuclear power plants.
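      A rough energy budget shows why so much power is needed, as sketched below. The two-gram craft mass and the assumption of perfect energy transfer from beam to sail are simplifications made for this sketch; real photon propulsion is far less efficient, so the true beam time would be longer.

# Rough energy budget for pushing one sail-plus-probe to 0.2 c with a 100 GW beam.
# The 2-gram mass and lossless energy transfer are simplifying assumptions.

craft_mass_kg = 0.002              # assumed: a few grams of probe plus sail
cruise_speed_m_s = 0.2 * 2.998e8   # 20% of light speed, in meters per second
beam_power_w = 100e9               # 100 gigawatts

kinetic_energy_j = 0.5 * craft_mass_kg * cruise_speed_m_s ** 2
print(f"{kinetic_energy_j:.1e} J")                    # ~3.6e12 joules per craft

seconds_of_full_beam = kinetic_energy_j / beam_power_w
print(f"{seconds_of_full_beam:.0f} s of full beam")   # on the order of half a minute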
      The enormous energy requirement of the proposed laser array is just one earthly obstacle Starshot faces. Even if the international community permitted such a potentially dangerous laser to exist, it would cost billions of dollars. Also, since 20 years would pass before the projected arrival at Alpha Centauri, scientists involved in the early stages of the project might not be around to see its completion. Another concern is exactly how such minimalist spacecraft would communicate intelligible information back to Earth. And since there would be a four-year delay before that information reached Earth, any errors the Nanocrafts made in transmission would be difficult and time-consuming to detect and correct. But perhaps the most important question is: why even bother?
      The most monumental difficulties facing interstellar travel proposals are not just the engineering and financial hurdles. The scientific community must be convinced that the journey is worth the time and resources. Every prospective project demands seemingly unattainable levels of technology and cannot promise results quickly enough to make the money and effort spent seem immediately worthwhile. However, as Paul Gilster, an expert in the field, says: “History shows that humans are a visionary and exploring species. We will move into the cosmos if we can because it is in our nature.” Momentous scientific pushes often produce advances relevant to everyday life on Earth, and an interstellar mission would be no different. The possibility remains that we may one day have to leave this planet, and Breakthrough Starshot seems to be our best current means of investigating options for an uncertain future.

References

Breakthrough Starshot. "Breakthrough Initiatives." Breakthrough Initiatives. April 12, 2016.

Brumfiel, Geoff. "Stephen Hawking's Plan For Interstellar Travel Has Some Earthly Obstacles." NPR. April 14, 2016.

Discovery News. "Sizing Up the Daedalus Interstellar Spacecraft : Slide Show : DNews." DNews. December 12, 2012.

Gilster, Paul. "Defending the Interstellar Vision." Centauri Dreams. July 27, 2012.

Ghosh, Pallab. "Hawking Backs Interstellar Travel Project." BBC News. April 12, 2016.

Lamontagne, Michel. "'Firefly' Starship to Blaze a Trail to Alpha Centauri? : DNews." DNews. February 20, 2015.

Image (Creative Commons): https://c2.staticflickr.com/2/1551/26338959171_ce97602414_b.jpg

Tom Klosterman is a first-year student at the University of Chicago majoring in Physics.

Virus Induced Gene Silencing

Clara Sava-Segal

     At its simplest level, genetic research aims to identify the roles of genes. Why? The most pertinent implication is the capability to diagnose and treat diseases. Genes code for proteins, which determine all fundamental characteristics and functions of life. By uncovering the role of a single gene, one can begin to fathom not only its function in that particular organism but also to piece together important evolutionary comparisons. Various methods are employed to isolate the roles of individual genes in organisms whose biology depends on the interactions of millions of gene-encoded proteins. Each method has its benefits and limitations, but their development is crucial.
      Oftentimes, it is easiest to do this by getting rid of a gene (“silencing” or “knocking it out”) and observing the effect this has on the resulting organism. In some cases, silencing a single gene is both more time-efficient and more economical than screening through all of an organism’s genes, as is more commonly done.
      Let’s take an example at the floral level. Generally, the Aquilegia coerulea flower has the distinct rich reddish-pink seen in Figure 1. However, we can see the same flower depicted in Figure 2, without its pigment. This flower has a “silenced” Anthocyanidin Synthase (ANS) gene. Seeing that the flower is now white, we can extrapolate that the ANS gene is involved in pigmentation. The technique employed to “silence” this gene is termed “Virus Induced Gene Silencing,” a mode of reverse genetics.


Figure 1

Figure 2



      It is crucial to understand the properties of reverse genetics in general before looking at its subsets. As a whole, reverse genetics [“getting rid of a gene”] can be understood only by comparison to “regular” (“forward”) genetics. In the latter, an organism with a naturally occurring or induced mutant phenotype is observed in order to identify which gene encodes that phenotype; that gene is then mapped to its DNA and amino acid sequences.1 In contrast, reverse genetics does not start with the phenotype, but rather commences at the level of the DNA or protein sequence. The DNA is manipulated to create a specific mutant gene, and that gene produces a mutant phenotype. The visibly changed phenotype generally reveals the function of the gene, as seen with the ANS mutant. More simply, as shown in Figure 3, the two processes are complete opposites and mirror one another. Of course, this only works when the gene codes for a physical attribute.


Figure 3



      As previously noted, it is generally imperative to have a mutant phenotype or gene within the process. Having a mutant reveals function; take, for example, Drosophila, in which “knocking out” or mutating one gene can have a drastic influence on the fly’s phenotype. If the gene that codes for red eyes is mutated, the eyes will appear white. Thus, if the intention were to determine which region of the fly genome coded for eye color, various regions would be mutated until the resulting fly had the white-eye phenotype. This thought process, though, assumes that a specific gene codes for a particular function. It is not always that simple: as organisms become more complex, their genomes have more and more regulatory mechanisms and genes that function in collaboration, making the selection of mutants much more intricate.
     For example, even in Drosophila the eye color gene actually depends on many regulatory elements, including promoters, or regions of DNA that initiate transcription. More simply, these promoter regions “let” genes encode certain phenotypes, and they themselves need to be “turned on” to do this. The promoter regions are “allowing” multiple genes to work. Therefore, reverse genetic techniques cannot simply mutate a gene; they also have to account for the mechanisms acting on that gene. From one perspective, having a promoter sequence could make things easier: it could be blocked, and the gene would not be expressed whether or not it is mutated. However, working with these regulatory mechanisms is much more complicated. The promoter might regulate not only the eye color gene but other genes as well. The resulting phenotype would then show multiple changes, and it would be impossible to tell which change came from which gene. Very simply, reverse genetics only works when there is one unknown change being made at a time; otherwise there is too much interdependency between gene function and regulation.
      With all this being said, VIGS, the process used to make the change seen in Figure 2, provides a more interesting, less invasive, and simpler method. In part this is because it works within the plant genome, which has different regulatory mechanisms, but the main reason for the success of VIGS is its employment of the organism’s own immune defense pathway. Using the plant’s own system means that researchers do not need to account for the various regulatory mechanisms that could otherwise interfere, because the organism does the interfering itself. Unlike in other methods of reverse genetics, the “mutation” is actually a “self-silencing.” How exactly does this occur?
      VIGS employs the RNA interference (RNAi) pathway to induce transient gene silencing, or knockdown. The process depends on viral cDNA vectors that carry the designated host gene. Plants are inoculated with Agrobacterium containing the viral vectors.4 The plants then use their own innate antiviral defense mechanisms to fight off the viral sequences.5 Since fragments of the host gene sit inside the virus, the plants’ defense mechanisms end up shutting down their own expression of the targeted gene.
      But how exactly does the RNAi pathway work after the plant is exposed to the virus? Dicer enzymes chop the double-stranded RNA produced from the viral vector into short interfering RNAs (siRNAs). These siRNAs combine with various proteins, which are species-dependent, to form an RNA-induced silencing complex (RISC), which is then activated. Activation depends on the unwinding of the siRNA duplex, which exposes a single “guide” strand. Normally, the targeted gene’s messenger RNA would simply be translated into proteins; those proteins are the direct product of gene expression and generate the encoded phenotype.
      However, because this is part of an immune defense pathway, the exposed guide strand instead marks matching transcripts for destruction. The guide strand is complementary to the target messenger RNA, which binds to it and is then cleaved by the RISC. Translation therefore never occurs, and the gene product is not produced.6 The following figure (Figure 4) displays this entire process.


Figure 4



      Figure 4 shows a plasmid labeled “TRV2-AqANS”: “TRV” stands for the Tobacco Rattle Virus vector, and “AqANS” for the Aquilegia ANS gene placed next to it. Because the Aquilegia flower was treated with this construct, it mounted a defense against both the TRV sequences and its own ANS transcripts, and thus lost its pigment (Figure 2).
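      The targeting step ultimately comes down to base-pairing, which the short sketch below illustrates. The sequences are invented, DNA letters stand in for RNA, and the all-or-nothing “perfect match” rule is a deliberate simplification; real RISC targeting tolerates some mismatches. The sketch only shows how a guide strand picks out a complementary transcript.

# Toy illustration of siRNA-style targeting: does the guide strand's reverse complement
# appear in a transcript? Sequences are invented, and DNA letters (T instead of U)
# are used purely for readability.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the sequence the guide strand would base-pair with."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_targeted(transcript, guide):
    """True if the transcript contains a site perfectly complementary to the guide strand."""
    return reverse_complement(guide) in transcript

ans_transcript = "ATGGCTTCAGTTGAGGAACTTCGTAAAGCCCAGAGAGCTGA"  # hypothetical ANS mRNA fragment
guide_strand = "CTTTACGAAGTTCCTCAACT"                         # hypothetical siRNA guide

print(is_targeted(ans_transcript, guide_strand))        # True: this transcript would be cleaved
print(is_targeted("ATGCCCGGGTTTAAACCC", guide_strand))  # False: an unrelated transcript is spared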

      To conclude, the process of VIGS is essential to understanding gene function within plants, and it serves as a non-invasive form of reverse genetics: by using the organism’s own immune system, it avoids many of the issues that arise with other reverse genetic approaches. With VIGS, researchers do not need to induce “mutations” directly, as the organism silences its own genes. Furthermore, this sidesteps the regulatory mechanisms present in most organisms. A given phenotype can arise from any of hundreds of genes within the plant genome; we simply need to select a gene, and the plant will tell us its role.
      Techniques like these are instrumental to genetics research, with great impact not only on establishing lineages in evolutionary biology but also on medical research. Virus Induced Gene Silencing (VIGS) has so far been used exclusively in plants. However, reverse genetics is a staple of modern genetics research and has been used across hundreds of model organisms. The development of these advanced techniques is crucial for developing procedures applicable to the human genome. Therefore, even though VIGS is not directly applicable to our everyday lives, each improvement in reverse genetics is imperative. For instance, since VIGS makes use of the plant’s innate immune system, it is not unrealistic to think that the human immune system, too, could be harnessed for our own genetic research.
      Furthermore, as initially mentioned, since so many genes are conserved across evolution, using reverse genetic techniques to understand genes in other organisms can help us identify genes that influence not only the development of the human species but also the progression of certain diseases. In the future, it may very well be possible to “silence” genes that predispose an individual to disease.

References



1. Griffiths AJF, Miller JH, Suzuki DT, et al. An Introduction to Genetic Analysis. 7th edition. New York: W. H. Freeman; 2000. Reverse genetics. Available from: http://www.ncbi.nlm.nih.gov/books/NBK21843/

2. Steller, H., and V. Pirrotta. “Expression of the Drosophila White Gene under the Control of the hsp70 Heat Shock Promoter.” The EMBO Journal 4.13B (1985): 3765–3772. Print.

3. Gould, Bille, and Elena Kramer. "Plant Methods." Virus-induced Gene Silencing as a Tool for Functional Analyses in the Emerging Model Plant Aquilegia (columbine, Ranunculaceae). BioMed Central, 12 Apr. 2007. Web. 16 May 2016.

4. Lu, R., AM Martin-Hernandez, JR Peart, I. Malcuit, and DC Baulcombe. "Result Filters." National Center for Biotechnology Information. U.S. National Library of Medicine, 30 Aug. 2003. Web. 16 May 2016.

5. "RNA Interference." RNA Interference. N.p., n.d. Web. 16 May 2016.

Clara is a third year in the college studying Psychology and Biology. She intends to go into neuroscience research looking at childhood brain development. She is a Co-Editor in Chief of Scientia.

Pick Your Poison: The Prohibition Era

Irena Feng

     On the night of January 16, 1920, a somber mood swept across America as the nation prepared for Prohibition, which banned the manufacture and sale of alcohol.1 These changes had little impact on individuals who had collections of alcohol; however, those who were not as fortunate had to settle for the new drink that took the place of legal swill: wood alcohol.

Wood Alcohol

     As its name suggests, wood alcohol was first synthesized from wood via destructive distillation: slabs of wood were heated to vaporize all liquids, and the vapors were then condensed, distilled, and collected.2 Throughout history, wood alcohol (also known as methyl alcohol) has had many uses, including embalming by the Egyptians. Today, wood alcohol appears in solvents (dissolving agents), fuels, cleaning fluids, and a variety of household items.3 Notably, wood alcohol is also used to denature grain alcohol, turning drinkable alcohol into a toxic industrial product.
     Wood alcohol is the perverse cousin of grain alcohol (also known as ethyl alcohol), the substance that provides the intoxication accompanying traditional drinks such as beer, wine, and liquor.3 Compared to grain alcohol, wood alcohol is much simpler in structure, comprising a single carbon atom bonded to three hydrogens and the hydroxyl group characteristic of all alcohols.4 However, while grain and wood alcohol may be indistinguishable from each other in taste and smell, wood alcohol is lethal even in minute doses.2

Succumbing to Poison

     While our livers neatly dispose of grain alcohol by ultimately turning it into harmless products such as water and carbon dioxide, they struggle with wood alcohol. Breaking down wood alcohol produces formaldehyde and formic acid, both toxic to the body.5 Formaldehyde is an irritant and exhibits neurotoxic effects on the nervous system.6 Formic acid, best known as a component of ant and bee venom, inhibits the cytochrome oxidase complex in mitochondrial respiration, which produces energy for the body.7 The amount of each formed in wood alcohol poisoning is enough to cause abnormally high levels of acid in the body, which can lead to coma or death.8
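     The chemistry behind this difference can be summarized in two short oxidation chains. The scheme below is a standard textbook summary rather than something drawn from the sources cited here; the enzyme names over the arrows are included only for orientation.

$$\mathrm{CH_3OH} \xrightarrow{\text{alcohol dehydrogenase}} \mathrm{HCHO} \xrightarrow{\text{aldehyde dehydrogenase}} \mathrm{HCOOH}$$
(wood alcohol to formaldehyde to formic acid)

$$\mathrm{C_2H_5OH} \xrightarrow{\text{alcohol dehydrogenase}} \mathrm{CH_3CHO} \xrightarrow{\text{aldehyde dehydrogenase}} \mathrm{CH_3COOH} \longrightarrow \mathrm{CO_2} + \mathrm{H_2O}$$
(grain alcohol to acetaldehyde to acetic acid, and eventually to carbon dioxide and water)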
     The poisoning process unfolds in two stages: first, the affected individual experiences a shorter intoxication period than expected while the wood alcohol diffuses through the body; then, severe dizziness ensues as the newly formed toxins begin to wreak havoc on the internal organs. Physical signs of poisoning include sudden weakness, severe stomach pain and vomiting, and blindness.9
     The areas most sensitive to this poisoning are the eyes, brain, and lungs, since blood flow to those regions is higher and they are therefore subject to a greater influx of poisonous products. In the eyes, the optic nerve and retina are very sensitive to formic acid salts; autopsies of victims of wood alcohol poisoning show swelling and bleeding around the optic nerve, which explains the sudden blindness.10 In the brain, the parietal cortex is also damaged, often with shrunken or destroyed neurons.6 Nor are the lungs exempt from this chaos: because the lungs work in a high metabolic state with constant oxygen intake, formaldehyde and formic acid concentrate there, inflaming lung tissue8,9 and possibly leading to the victim’s death.

Illegal Options Aplenty

      With the law banning the manufacture and distribution of the comparatively innocuous grain alcohol, illegal saloons, known as speakeasies, turned to other sources of alcohol to satisfy thirsty customers.11 Luckily for them, alcohol was still plentiful as an industrial product. Many attempted to take denatured solutions and re-distill them into non-toxic alcohol, typically with little success.
      A new generation of cocktails appeared as bartenders tried to mask the wood alcohol in their drinks, adding fruit juices and more to popularize the drink. In the poorer speakeasies, however, cocktails couldn’t be as fancy. One particularly famous drink from New York’s Hell’s Kitchen was called “Smoke”; simply a cloudy concoction of water and fuel alcohol, it was incredibly cheap and deadly.9
      While Smoke poisoned drinkers with wood alcohol, other drinks did their damage differently. Ginger Jake, for example, caused paralysis and loss of limb control after a night out drinking. As later discovered, Ginger Jake was made by taking a ginger tonic and adding a plasticizer9 (used to keep plastic from becoming brittle) to induce intoxication. That plasticizer is an organophosphate, a class of neurotoxins that eventually shred nerve cells in the spinal cord.12

Government-Sanctioned Poison

     The original intention of Prohibition was to eliminate alcohol in a wave of religious revivalism, as many political leaders viewed alcohol as a threat to familial and marital relations. With the rapid rise of speakeasies and their methods of distilling industrial product, the government scrambled to find new ways to discourage drinkers. Since toxic liquor couldn’t dissuade drinkers, the government’s plan of action was to increase the toxicity of their next target: industrial alcohol.13
     Re-distillation of industrial alcohol was usually fairly unsuccessful, with substantial amounts of wood alcohol remaining in each drink. However, this lethality did not deter people from drinking. The government responded by requiring more added chemicals in industrial alcohol, making it harder to purify by distillation, in an attempt to force liquor syndicates to give up. Common additives included benzene, kerosene, gasoline, formaldehyde, petroleum products, ether, and simply more wood alcohol.9,13
     The Chemist’s War became a hallmark of Prohibition, with government chemists working against the distillers producing semi-drinkable alcohol, yet it did little to stopper the drinking throughout the nation. According to medical reports, deaths from acute and chronic alcoholism in New York City alone rose from 2,714 in 1921 to 6,602 in 1924.14 Having failed to build a less corrupt society, the federal government eventually ended Prohibition.15

The End

     The end of Prohibition, on December 5, 1933, arrived with much more festivity than that somber night fourteen years earlier. When Prohibition was officially repealed, retail stores and high-class hotels alike rolled out the now-legal wines and cocktails in celebration. Poisonous wood alcohol faded from society and once again yielded to its milder cousin, grain alcohol. To this day, despite being the less toxic of the two, grain alcohol is more widely consumed and thus remains the better killer.

References

1. History.com Staff. 2009. “Prohibition.” History.com. Accessed 18 February 2016. http://www.history.com/topics/prohibition.

2. Solomons, T.W. Graham, Craig B. Fryhle, and Scott A. Snyder. 2014. “Important Alcohols and Ethers.” In Organic Chemistry, 11th ed., 503. John Wiley & Sons, Inc.

3. “Alcohol.” 2016. Drugs.com. Accessed February 10. http://www.drugs.com/alcohol.html.

4. Vale, Allister. 2007. “Methanol.” Medicine 35 (12): 633–34. Accessed 10 February 2016. doi:10.1016/j.mpmed.2007.09.014.

5. Heller, Jacob L. 2015. “Methanol Poisoning.” MedlinePlus. U.S. National Library of Medicine. https://www.nlm.nih.gov/medlineplus/ency/article/002680.htm.

6. Songur, Ahmet, Oguz Aslan Ozen, and Mustafa Sarsilmaz. 2010. “The Toxic Effect of Formaldehyde on the Nervous System.” In Reviews of Environmental Contamination and Toxicology, 203:105–18. Springer New York. Accessed 18 February 2016. http://link.springer.com/chapter/10.1007/978-1-4419-1352-4_3.

7. Liesivuori, Jyrki, and Heikki Savolainea. 1991. “Methanol and Formic Acid Toxicity: Biochemical Mechanisms.” Pharmocology & Toxicology 69 (3): 157–63. Accessed 18 February 2016. doi:10.1111/j.1600-0773.1991.tb01290.x.

8. “Formic Acid [MAK Value Documentation, 2003].” 2012. In The MAK Collection for Occupational Health and Safety, 19:170–80. Accessed 18 February 2016. http://onlinelibrary.wiley.com/doi/10.1002/3527600418.mb6418e0019/full.

9. Blum, Deborah. 2010. The Poisoner’s Handbook. Penguin Books.

10. Mittal, BV, AP Desai, and KR Khade. 1991. “Methyl Alcohol Poisoning: An Autopsy Study of 28 Cases.” Journal of Postgraduate Medicine 37 (1): 9–13. Accessed 10 February 2016.

11. Reitman, Ben L. 1919. “Green Giraffe Haunts Jag on Wood Alcohol.” Chicago Tribune, December 29. http://archives.chicagotribune.com/1919/12/29/page/2/article/green-giraffe-haunts-jag-on-wood-alcohol.

12. Abou-Donia, Mohamed B. 2003. “Organophosphorus Ester-Induced Chronic Neurotoxicity.” Archives of Environmental Health 58 (8): 484–97. ISSN:0003-9896.

13. Blum, Deborah. 2010. “The Chemist’s War.” Slate. 19 February 2010. Accessed 10 February 2016. http://www.slate.com/articles/health_and_science/medical_examiner/2010/02/the_chemists_war.html.

14. “Dr. Norris’s Poison Liquor Report.” 1927. The Literary Digest, February 14. Accessed 25 February 2016. https://www.unz.org/Pub/LiteraryDigest-1927feb26-00014.

15. “Prohibition: America’s Failed ‘Noble Experiment.’” 2012. CBS News. June 12. Accessed 10 February 2016. http://www.cbsnews.com/news/prohibition-americas-failed-noble-experiment/.

Image Credit (Creative Commons): “ARMAS DE HYDRA. HYDRA VULGARIS.” Accessed November 24, 2015. https://www.flickr.com/photos/microagua/16624195064

Irena Feng is a first-year student at the University of Chicago majoring in Biological Chemistry/Chemistry and minoring in East Asian Languages and Civilizations. Her interests include exploring a variety of scientific topics through reading and research.

The Bilingual Mind

Lindsey Jay

     Always boasting to your friends about how you can speak three different languages? Now you will have yet another thing to brag about. Being bilingual or multilingual is not only practical but also beneficial for a multitasking brain. Research has revealed that a bilingual mind may enjoy greater cognitive benefits overall than a monolingual one.
     Previously, researchers believed that bilingual children would perform worse on cognitive tasks, such as symbol manipulation and reorganization, than monolingual children (1). They reasoned that having to switch between multiple languages would be confusing and would cause unwanted interference when children tried to move their thoughts between languages (1). However, Professors Peal and Lambert, who studied French-speaking monolingual children and French-English bilingual children in Canada, found that the bilingual children performed better not only on non-verbal spatial tasks, such as symbol manipulation and reorganization, but also on language tasks overall (1). They performed better when multitasking with both complex symbols (not tied to any language) and language material. Later studies found that bilingual children had a significant advantage in metalinguistic awareness (such as differentiating between the form and meaning of words) and in non-verbal problems that required participants to ignore misleading information (1).
     Nonetheless, bilinguals must deal with a problem that monolinguals do not face: switching between languages. This switching is seen in electroencephalography (EEG) recordings of bilinguals, in which joint activation of both languages has been observed (1). Researchers Thierry and Wu used EEG to study how the brain keeps both Chinese and English online. They found that Chinese speakers studying English words accessed the Chinese forms of those words when making semantic judgments about the English words (1). Their results showed that both languages are activated no matter which one is being spoken, and that a person does not think discretely in just one language at a time (1). This finding brought up the idea that bilinguals must learn how to switch their attention between the two languages. Because they must constantly shift their attention, they may have better executive control overall. Executive control is the set of cognitive skills, drawing on limited cognitive resources, that supports functions such as inhibition, switching attention, and working memory (1). Though bilinguals were found to have relatively weaker skills in their respective languages compared to monolinguals, they demonstrated better executive control than their monolingual counterparts (1).
     Further research has shown that in fMRI scans, language switching is accompanied by activation in the dorsolateral prefrontal cortex (DLPFC), which is part of the general executive control system (1). Learning another language actually broadens and strengthens connections within one’s neural network. This leads to advantages on particularly difficult tasks that demand switching thought processes in the same way that using multiple languages does (1). More broadly, bilinguals seem to have a greater ability to monitor their environment than monolinguals (2). Bilingual minds have “exercised” this metaphorical muscle of monitoring the environment much more than monolingual minds, and they have the stronger neural networks to show for it.
     The benefits of bilingualism also extend into aging minds. In a study of 44 elderly Spanish-English bilinguals, Professor Tamar Gollan of the University of California, San Diego found that the bilinguals were more resistant than monolinguals to the onset of dementia and other symptoms of Alzheimer’s disease (2). Bilingualism may contribute to cognitive reserve in old age: the idea that engagement in stimulating physical or mental activities can help maintain cognitive functioning in healthy aging and delay the onset of dementia (1). Thus, a more active brain may be the reason symptoms of dementia and Alzheimer’s appear later in aging bilingual individuals.
      These findings have focused on people who have been bilingual since early childhood, but what about learning a new language later in life? It turns out that your brain will still reap benefits, though different ones from those discussed above. According to Martensson et al. at Lund University, the hippocampus, left middle frontal gyrus, inferior frontal gyrus, and superior temporal gyrus, which deal directly with language learning, all increased in size after foreign language learning (3). It is unclear whether learning a new language later in life helps monolinguals to the same degree that lifelong bilingualism does. But because the brain has been shown to form new connections in the regions associated with language learning, it is possible that using both languages regularly enough could lead to increased executive control. The brain is highly plastic, and the possibility that learning another language later in life has similar cognitive benefits should not be discounted.
      Another question that scientists have yet to address is exactly how bilingual a person must be in order to benefit. According to Professor Giannakidou at the University of Chicago, bilingualism lies on a spectrum rather than a binary (4). It is thus difficult even to define what exactly “bilingualism” is in the first place. Moreover, it is unclear whether learning multiple languages has an additive effect on the cognitive benefits. The next steps in this research should address these questions and help resolve these more nuanced issues.

References

1. Bialystok, Ellen, Fergus I. M. Craik, and Gigi Luk. "Bilingualism: Consequences for Mind and Brain." Trends in Cognitive Sciences 16, no. 4 (April 2012): 240-50. doi:10.1016/j.tics.2012.03.001.

2. Bhattacharjee, Yudhijit. "Why Bilinguals Are Smarter." The New York Times. March 17, 2012. Accessed April 26, 2016. http://www.nytimes.com/2012/03/18/opinion/sunday/the-benefits-of-bilingualism.html.

3. Martensson, Johan, et al. "Growth of Language-related Brain Areas after Foreign Language Learning." NeuroImage. October 12, 2012. Accessed April 26, 2016. doi:10.1016/j.neuroimage.2012.06.043.

4. Giannakidou, Anastasia. "Dimensions of Bilingualism." Lecture, Language and the Human, Kent 107, Chicago, November 18, 2014.

Climate Change: Effects on El Niño and Chicago Winter

Tom Klosterman

     Way back in October, the Chicago Tribune predicted a mild winter for Chicago1, with above-average temperatures and below-average precipitation. Indeed, the months of December and January were exceedingly mild: the mean temperature was 12ºF above normal in December, and January snowfall was 1.41 inches below normal.2 The city has been feeling the effects of this balmy weather: the Chicago Department of Transportation states that it is ahead on its pothole repairs, with only 60% as many to fill as last year.3 The suburbs of La Grange, Western Springs, and La Grange Park have all reported using “significantly less” salt than in the past two years.4 These are just two of the factors saving Chicago big bucks this temperate season.
     Whenever temperatures stay above normal for an extended period of time, cries of “Global Warming!” arise from the environmentally concerned, in the same way that periods of freezing weather draw cries of “Global Warming?” from the skeptics. Indeed, this winter is showing typical conditions for a planet slowly heating up. Not only are average temperatures rising, there have also been fewer extreme weather events like snowstorms and subzero cold snaps. Across the nation, cities are experiencing record mild temperatures. New York City, for example, is currently experiencing its warmest winter on record.5 Of course, global warming could be partially to blame for this, but most of the focus this year has been on a naturally occurring factor.
     Cue the weather pattern known as ENSO. The “El Niño Southern Oscillation” has been occurring naturally for thousands of years.6 It describes atmospheric pressure and ocean temperature changes in the Pacific Ocean, which have far-reaching ramifications around the globe. Every few years, ENSO typically swings between two states: El Niño and La Niña. Seasons with cooler ocean water are called “La Niña” seasons. These cause below-average temperatures in many locations, including the Northern Plains and Pacific Northwest of North America.
     But La Niña’s hot-headed brother “El Niño” is winter’s most abominable and feared enemy in the Midwest. El Niño is spurred on by warm waters in the Pacific once every 2 to 7 years on average. To begin, strong westerly winds or other semi-isolated events cause a small increase in temperature in the eastern tropical Pacific. This affects the atmospheric currents, which in turn further heat the ocean, creating a feedback loop.6 Warm, moist air piles up over the eastern Pacific throughout the summer. Then, during winter in the United States, the warm air in the Southwest pushes the Polar Jet Stream north, reducing the amount of frigid air brought to the Midwest from the Arctic and raising average temperatures.7 The previous El Niño season, 2009-2010, was fairly moderate and was eventually mitigated by unrelated atmospheric currents that brought cold air back down from the north. However, El Niño decided to revisit the world in 2015, stronger than ever. The “Super El Niño” we are experiencing now is one of the strongest on record, rivaling the winter of 1997-98, when winter temperatures were 12.4ºF above average.
     And certainly, El Niño seems to be having a huge effect on the Chicago area. Although intermittent periods of heavy snow and cold have developed, overall the winter has been easy and mild. As mentioned above, the city has been saving money and labor thanks to the warmer temperatures and reduced snowfall. But if you think El Niño could be Mother Nature’s compensation for the previous snowy, windy, cold winters dubbed “snowpocalypses” or “snowmageddons,” think again.
      El Niño, although convenient for Chicago, causes major global disturbances. The weather pattern causes Southeast Asia to dry up, increasing wildfire risk in areas such as Indonesia. Warm water in the Pacific due to El Niño also encourages “coral bleaching,” the large-scale die-off of coral reefs.8 As the moist air from the Pacific dumps rain on the southern US states and northern South America, flooding in the Americas increases. On the other hand, droughts are exacerbated elsewhere, such as in Australia, India, Indonesia, the Philippines, Brazil, parts of Africa, and western Pacific islands. And across the globe, poor growing conditions reduce crop production and the profits of fishermen and farmers, especially those in poor countries. Can all of these tragedies be chalked up to the cruel cycles of Nature?
      To answer that, we now return to climate change. 2015, the beginning of our current “Super El Niño,” was the hottest year on record. Even without the overall warming effects of El Niño, the year would have kept its record, suggesting reverse causality: did the warmer temperatures caused by greenhouse gases intensify El Niño?
     Several recent studies in Nature and Nature Climate Change have proposed that there may indeed be a connection between the increase in global temperatures and the strengthening of recent El Niños.9,10,11 Future, stronger El Niños also have the potential to stretch farther east, implying even worse effects on the US.11,12 In light of these investigations and discoveries, perhaps it is unfair to scapegoat Mother Nature completely for Chicago’s mild winter.13 Having been shown to be highly correlated with human greenhouse gas emissions, global warming is a background factor that, at the very least, does not weaken El Niño. In all likelihood, our carbon dioxide and methane emissions are helping to heat the Pacific Ocean and assisting El Niño in warming the United States, while also ruining crops and inducing both floods and droughts worldwide.

References

1. Chicago Tribune Staff. "Latest Projections Still Favor Mild Winter for Chicago." Chicago Tribune, October 15, 2015.

2. "Chicago January Weather 2016." AccuWeather. Accessed February 15, 2016.

3. Gallardo, M. "City Says It's Ahead on Pothole Repairs Thanks to Mild Winter." ABC7 Chicago. February 1, 2016.

4. Mannion, A. "La Grange, Other Towns Use Less Salt Due to Mild Winter." Chicago Tribune, February 11, 2016.

5. Erdman, J. "Winter 2015-2016 U.S. Mid-Term Report Card." The Weather Channel. February 6, 2016.

6. UK National Weather Service. "El Niño, La Niña and the Southern Oscillation." Met Office. 2012.

7. "El Nino Primer." Weather.gov. 2015.

8. Stone, M. "El Niño Is Killing Earth's Coral Reefs." Gizmodo. February 23, 2016.

9. Power, S., et al. "Robust Twenty-first-century Projections of El Niño and Related Precipitation Variability." Nature 502, no. 7472 (2013): 541-45.

10. Cai, W., et al. "Increasing Frequency of Extreme El Niño Events Due to Greenhouse Warming." Nature Climate Change 4, no. 2 (2014): 111-16.

11. Cai, W., et al. "ENSO and Greenhouse Warming." Nature Climate Change 5, no. 9 (2015): 849-59.

12. Santoso, A., et al. "Late-twentieth-century Emergence of the El Niño Propagation Asymmetry and Future Projections." Nature 504 (2013): 126-30.

13. Cho, R. "El Nino and Global Warming-what's the Connection?" Phys.org. February 3, 2016.

Image Credit (Creative Commons): https://upload.wikimedia.org/wikipedia/commons/7/7a/Movement_of_surface_waters_during_El_Nino.jpg

Tom Klosterman is a first-year student at the University of Chicago majoring in Physics.

Decoding Mona Lisa’s Smile: The Neuroscience Behind Art

Natalie Petrossian

     As Pablo Picasso once said, “art is a lie,” and it is the artist’s objective “to convince others of the truthfulness of his lies” in order to create a masterful piece. This concept could not be truer of Leonardo Da Vinci’s Mona Lisa. One moment the enigmatic woman seems to be smiling, and the next her smile fades away. How can a two-dimensional expression on a 500-year-old portrait continue to baffle us?
     Throughout history, artists have figured out ways to create illusions that convince us to buy into the “lie” of art. Since the human brain is wired to make sense of lines, colors, and patterns – even on a two-dimensional plane – artists have managed to exploit our visual shortcomings to portray depth and brightness that do not actually exist.
      The Italians have one word for this masterful manipulation: sfumato.1 Meaning blurry and ambiguous, sfumato leaves much of the interpretation to one’s imagination. However, neuro-aestheticians like Dr. Margaret Livingstone from Harvard Medical School use another term to explain this phenomenon: dynamism.2 She, among other scientists, believes that Mona Lisa’s smile comes and goes not because her expression is enigmatic, but because of how our visual network is designed.
     The human eye has two distinct regions for viewing the world: the fovea and the peripheral area.3 Centrally located in the back of the retina, the fovea has a high density of cones, the cells responsible for seeing colors, reading fine print, and picking out details. The peripheral area, which surrounds the fovea, is dense in rods. These photoreceptors are responsible for differentiating black and white and for seeing motion and shadows. Consequently, these two zones feed the two major processing streams of our visual system, which Dr. Livingstone calls the “what” and “where” streams.4 The “what” stream allows us to see in color and recognize details in faces and objects, while the “where” stream is less detail-oriented and color-insensitive, allowing for faster processing and helping us navigate our environment. When these two types of cells are stimulated by an image, the activity is transmitted through the optic nerve to the visual brain, where the stimuli are grouped together to give rise to our observed image.5 This image is formed by specific neurons of the visual cortex.
     Both of these channels constantly encode data about an object’s size, clarity, brightness, and location within our visual fields. However, they can occasionally send mixed signals to the brain, explaining how Mona Lisa can be beaming one moment and somber the next. When the center of your gaze is focused on her eyes, your coarser peripheral vision registers her mouth.6 And because peripheral vision is not responsible for detail, it readily picks up low-frequency shadows from Mona Lisa’s cheekbones and upper lip, which suggest the curvature of a smile. Conversely, when the viewer’s eyes rest directly on her mouth, central vision does not register those shadows, and the smile fades away. This dynamism in her expression creates a flickering quality that changes as you move your eyes around the painting, producing the presence or absence of a smile.7
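     The role of low spatial frequencies here is easy to demonstrate computationally. The short sketch below, written in Python with the Pillow imaging library, low-pass filters an image with a Gaussian blur to approximate what coarse peripheral vision retains; the filename and blur radii are placeholders chosen for illustration, not values taken from the studies cited in this article.

```python
from PIL import Image, ImageFilter

# Approximate the "peripheral" view of a painting by discarding high
# spatial frequencies: a Gaussian blur keeps only the coarse light-dark
# shading (e.g., around the cheekbones and upper lip) that peripheral
# vision registers. "mona_lisa.jpg" is a placeholder filename.
original = Image.open("mona_lisa.jpg").convert("L")  # grayscale, closer to rod vision

for radius in (2, 6, 12):  # a larger radius stands in for vision farther into the periphery
    low_pass = original.filter(ImageFilter.GaussianBlur(radius=radius))
    low_pass.save(f"mona_lisa_lowpass_r{radius}.png")
```

In the heavily blurred versions, the shadowed curvature around the mouth reads more clearly as a smile, which is roughly the information peripheral vision passes to the brain.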
      While the perception of Mona Lisa’s smile does depend on the location of your gaze, the visual cortex is also hardwired to interpret visual information in specific ways, regardless of where that gaze falls. Neuroimaging studies have identified over thirty visual sensory areas in the brain, each tasked with a specific function.8 The principal visual area, V1/V2, has been shown to respond to vertical and horizontal lines, such as those created by the light-dark edges of the shadows on Mona Lisa’s face. Adjacent to V1/V2 in the visual cortex is V3, which is necessary for recognizing the shape, size, and form of an object. Below V3 lies V4, the visual center for color perception, which receives activation signals from retinal cone cells. While all of these areas work together to let us see the world around us, the most important visual area for the flickering, dynamic quality of Mona Lisa’s smile is V5. Responsible for identifying motion in the visual field, V5 has direction-specific neurons that fire in response to oriented lines. Hence, lines drawn in different orientations on a painting, such as the shading of Mona Lisa’s smile, are thought to stimulate V5 neurons and provoke an illusory sensation of movement.
      In addition to the varying orientations of the brush strokes, scientists believe that the painting’s dark background and light contrasts are also crucial to the flickering quality of Mona Lisa’s smile.9 Artists often play with luminance through their use of materials, shading, colors, and textures in order to give the illusion of three dimensions. But since the range of luminance in real life is far greater than what can be reproduced in a two-dimensional painting, artists have to place shadows and emphasize light in areas where they would not appear in real life.10 In the case of the Mona Lisa, Da Vinci revolutionized her appearance by adding dark shadows above her lip, near the bridge of her nose, and extending beyond her eyes, in addition to emphasizing her cheekbones and her upper neck. For each of these dark contrasts, Da Vinci added extra light on her forehead, directly below her eyes, and on her chin in order to trick the eye into perceiving depth. By highlighting certain features and muting others, he not only created a very natural expression, but also managed to impart the illusion of movement to an otherwise static painting.
     Despite these advances in the neuroaesthetics of Mona Lisa’s smile, scientists and art historians have barely begun to scratch the surface of this iconic masterpiece. By attempting to understand Da Vinci’s techniques, they have uncovered more questions than answers. How did this man, who had little to no biological knowledge of the visual system, know how to manipulate our hard-wired visual sensibilities in order to achieve his greatest work of art? Why did he not apply these same techniques to his other paintings? And how is it that no other great artists of his or future times have figured out how to reproduce these techniques in their paintings? Additionally, Mona Lisa’s smile is not the only enigma of this great work of art: how is it that her eyes seem to follow the viewer at nearly every angle? Which techniques and visual quirks did Da Vinci exploit in order to accomplish this feat with paint and a flat canvas?
     While these logistical questions primarily aim to explain how a painting was created, the cultural questions that arise in the process are equally intriguing. Art historians are still searching for answers about the identity of “Mona Lisa” herself, and her mysterious relationship to Da Vinci. Considering its alternative title, “La Gioconda,” the portrait is thought to be of Lisa Gherardini, wife of a Florentine cloth merchant named Francesco del Giocondo.11 However, even this promising theory raises additional questions: who commissioned the painting, how long did it take Da Vinci to complete it, how long did he keep it, and how did it end up in the French royal collection instead of in the hands of the commissioner? Meanwhile, neuroscientists are simply using these works of art to better understand the brain. By attempting to explain our perception of art, they hope to uncover the mechanisms by which our brains see and interpret the world around us. By tracing these complex processes, scientists might one day discover the manner in which we experience our personal realities, and the factors that may influence our perception throughout our lives.

References

1. Sandra Blakeslee. "What Is It With Mona Lisa's Smile? It's You!" The New York Times, November 21, 2010.

2. Chakravarty, Ambar. "Mona Lisa’s Smile: A Hypothesis Based on a New Principle of Art Neuroscience." Medical Hypotheses 75, no. 1 (February 19, 2010): 69-72. Accessed January 1, 2016.

3. Kolb, Helga. "The Organization of the Retina and Visual System: Photoreceptors." NCBI. May 1, 2005. Accessed January 5, 2016. http://www.ncbi.nlm.nih.gov/books/NBK11522/.

4. Livingstone, Margaret. "Neuroscience & Art: Margaret Livingstone Explains How Artists Take Advantage Of Human Visual Processing." Interview by Cara Santa Maria. Accessed January 5, 2016. http://www.huffingtonpost.com/2013/01/07/neuroscience-art-margaret-livingstone_n_2339429.html.

5. Kolb, Helga. "The Organization of the Retina and Visual System: Photoreceptors." NCBI. May 1, 2005. Accessed January 5, 2016. http://www.ncbi.nlm.nih.gov/books/NBK11522/.

6. Huang, Mengfei. "The Neuroscience of Art." Stanford Journal of Neuroscience, 2009, 24-26. Accessed January 5, 2016. http://web.stanford.edu/group/co-sign/Huang.pdf.

7. Chakravarty, Ambar. "Mona Lisa’s Smile: A Hypothesis Based on a New Principle of Art Neuroscience." Medical Hypotheses 75, no. 1 (February 19, 2010): 69-72. Accessed January 1, 2016.

8. Ibid.

9. Landau, Elizabeth. "What the Brain Draws From: Art and Neuroscience." CNN. September 15, 2012. Accessed January 5, 2016. http://www.cnn.com/2012/09/15/health/art-brain-mind/.

10. Ibid.

11. Scailliérez, Cécile. "Mona Lisa – Portrait of Lisa Gherardini, Wife of Francesco Del Giocondo." Louvre. Accessed January 5, 2016. http://www.louvre.fr/en/oeuvre-notices/mona-lisa-portrait-lisa-gherardini-wife-francesco-del-giocondo.

Image Credit (Creative Commons): Amandajm. "Mona Lisa." Digital image. Wikimedia Commons. June 8, 2010. Accessed February 29, 2016. https://commons.wikimedia.org/wiki/File:Mona_Lisa.jpg.

Natalie Petrossian is a fourth-year student at the University of Chicago majoring in Biology.

Complexity Theory: Chaotic Hearts Lead to New Understanding of Depression

Stephanie Wiliams

     "Life exists at the edge of chaos. The fate of all complex adapting systems in the biosphere—from single cells to economies—is to evolve to a natural state between order and chaos” -Stuart Kauffman

     Researchers are reviving a decades-old hypothesis that challenges classical homeostasis models of physiology: the more complex an individual’s physiological signals are, the healthier the individual is. As humans grow older and encounter disease, their physiological signals degrade and lose complexity. In young populations, EKGs show healthy, irregular patterns with fractal-like complexity. In aged populations, EKGs show regular, simple sinus rhythms. These differences in heart signal complexity allow researchers to measure beat-to-beat heart rate (HR) fluctuations in diseased and healthy individuals and to draw inferences from the information encoded in them. This is especially useful for collecting information about neuroautonomic control in conditions that have been elusive to regular diagnostic methods, such as mental illness (Leistedt et al., 2011). By comparing the signals of ill patients with those of healthy patients, researchers can now detect disruptions and begin quantifying the degree of neuroautonomic disruption in mental illnesses.
      “Complexity,” in the pure, mathematical sense of the word, refers to “the length of the shortest binary input into a universal Turing machine such that the output is the initial string” (Cover and Thomas, Elements of Information Theory). Put less technically, complexity refers to behavior that does not repeat itself, yet is deterministic, within dynamic nonlinear systems. The human body needs to adapt to internal and external stressors, which requires it to be nimble and adaptive (Goldberger, 2002). Overly rigid patterns in EKGs indicate the presence of a sick heart that can’t respond to the body’s demands (Goldberger, 2002). Thus, the more “complex” an individual’s physiology is, the more likely they are to survive. Dr. Madalena Costa, an authority on chaotic dynamics, says that complex physiological systems are distinguished by their “multiscale organization” and “temporal and spatial organization” (Costa, 2002). Essentially, the healthiest signals are the most time-irreversible, which allows them to adapt to both “internal and external variables.” Importantly, this correlation between complexity and health extends beyond heart dynamics to other biological measures, including brain waves, breathing rates, balance, and gait.
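      Kolmogorov complexity itself cannot be computed exactly, but a quick compression experiment conveys the intuition behind “shortest description.” The snippet below is a loose Python illustration, not a measure used in the studies cited here; it simply compares a strictly periodic string with an irregular one.

```python
import random
import zlib

# An everyday stand-in for description length: a strictly periodic "signal"
# can be generated by a very short program and also compresses extremely
# well; an irregular one does not. (Compressed size is NOT Kolmogorov
# complexity, and healthy physiologic complexity is not mere randomness;
# this is only meant to make "shortest description" feel concrete.)
random.seed(0)
periodic = ("01" * 500).encode()                                      # repeats itself
irregular = "".join(random.choice("01") for _ in range(1000)).encode()

print("periodic :", len(zlib.compress(periodic)), "bytes after compression")
print("irregular:", len(zlib.compress(irregular)), "bytes after compression")
```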
     The challenge now facing researchers is the development of accurate algorithms and computational tools that can detect when signal complexity is breaking down. If an individual’s heart rate normally shows complex, irregular fluctuations but suddenly begins exhibiting overly regular behavior, the individual may well be on the verge of a cardiac event. Anticipatory medical devices, currently under development at the Wyss Institute at Harvard, aim to detect these changes in physiological signals and alert individuals to the possibility of an attack. Goldberger hopes to develop wearable devices that can read out these signals: “What we’d like to do is probe those signals for encoded information telling us that the body’s physiology is about to drive off a cliff.” However, building devices that can realistically quantify the complexity of physiological dynamics is a genuine challenge. To apply the idea of complexity to physiology, researchers first have to map the states of the systems into strings of characters (Costa, 2002). Entropy-based measures can be used, but there is no “straightforward correspondence” between entropy and complexity (Costa, 2002). Ultimately, researchers will need not only foolproof methods of quantifying complexity, but also standard models of complexity under healthy conditions and of the changes that occur with the onset of pathology and aging (Costa, 2002).
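     To make the entropy-based approach concrete, the sketch below shows one widely used way of estimating signal complexity: multiscale entropy, which coarse-grains a beat-to-beat interval series at several time scales and computes sample entropy at each scale. This is a minimal illustration in Python with NumPy, not the implementation used in the studies discussed here; the parameter choices (m = 2 and a tolerance of 0.15 times the standard deviation) are conventional but illustrative.

```python
import numpy as np

def sample_entropy(x, m, r_abs):
    """Sample entropy: -ln(A/B), where B counts pairs of m-point templates
    within tolerance r_abs (Chebyshev distance) and A counts pairs of
    (m+1)-point templates. Lower values indicate a more regular signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        matches = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            matches += np.sum(dist <= r_abs)
        return matches

    b = count_matches(m)
    a = count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.15):
    """Coarse-grain the series by non-overlapping averaging at each scale,
    then compute sample entropy at that scale. The tolerance is fixed at
    r times the standard deviation of the original series."""
    x = np.asarray(x, dtype=float)
    r_abs = r * np.std(x)
    curve = []
    for tau in scales:
        n_blocks = len(x) // tau
        coarse = x[:n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)
        curve.append(sample_entropy(coarse, m, r_abs))
    return curve
```

In broad terms, a healthy, complex signal tends to retain relatively high entropy across scales, an overly regular signal scores low, and uncorrelated noise loses entropy as the scale grows; it is this whole curve, rather than a single number, that studies like the one described below compare between groups.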
     Major depression (MD), a highly prevalent disorder, is an important target of nonlinear dynamics research. The psychopathology, currently diagnosed according to the protocol of the DSM-5, affects multiple physiological functions and leads to a loss of complexity in heart rate dynamics (Leistedt et al., 2011). The excitement surrounding the loss of complexity in MD patients stems from the fact that researchers can assume MD patients have neuroautonomic perturbations, and can use this assumption when analyzing the degradation of complexity in a patient’s signals (Leistedt et al., 2011). The central autonomic network, led primarily by the prefrontal cortex, allows the brain to send integrated output commands to the heart. Perturbations in neuroautonomic control can therefore manifest as perturbations in the heart’s signals. Analyzing heart signals and their complexity across varying degrees of depression could help elucidate the degree of physiological dysfunction in each patient, and thus point toward a quantitative marker for the disease.
      Investigations of MD using the complexity hypothesis have made significant progress in quantifying depression. In one study, a group of researchers including EKG expert Ary Goldberger compared the heart rate dynamics of depressed young-to-middle-aged men with those of healthy counterparts, and found that the depressed patients’ signals were less complex. The study observed 24 unmedicated adults during an acute episode of MD, diagnosed according to DSM-IV-TR criteria. The researchers measured the complexity of the men’s cardiac interval time series using a method called multiscale entropy (MSE), and assessed the complexity of these fluctuations during sleeping hours. Their results revealed a significant reduction (p < 0.03) in complexity. These findings strongly support the correlation between the complexity of HR dynamics and psychopathology. This is profound in two ways. First, researchers now have a method, albeit a tentative one, to probe the dynamical changes that occur in MD. Second, researchers now have a substantive, novel biomarker for MD that could replace the highly debated diagnostic criteria of the DSM.
      Though the field of nonlinear dynamics in physiology is rapidly advancing, significant obstacles still stand in the way of its definitive application in clinical settings. Current models of these complex dynamics will require a great deal of improvement before they can be used as analytic tools in medicine. However, when the research does finally catch up to the mathematics, it will be hard to overstate the benefits of nonlinear dynamics in medicine.

References

1. T.M. Cover and J.A. Thomas, Elements of Information Theory, Wiley, USA, 1991.

2. Costa, M., Goldberger, A.L., and Peng, C.-K. (2002). Multiscale Entropy Analysis of Complex Physiologic Time Series. Physical Review Letters, 89(6), 068102.

3. Goldberger AL, Peng CK, Lipsitz LA (2002). What is physiologic complexity and how does it change with aging and disease? Neurobiol Aging. Jan-Feb;23(1):23-6

4. Leistedt SJ-J, Linkowski P, Lanquart J-P, et al. Decreased neuroautonomic complexity in men during an acute major depressive episode: analysis of heart rate dynamics. Translational Psychiatry. 2011;1(7):e27-. doi:10.1038/tp.2011.23.

Hydra and the Immortality Gene

Irena Feng

     Greek mythology told of a fearsome nine-headed monster named the Hydra. The Hydra was virtually immortal: whenever one of its heads was chopped off, two more would grow in its place. Today, over 2000 years later, the Hydra of legend no longer terrorizes society, but its name lives on in a genus of small freshwater animals called Hydra. These centimeter-long organisms comprise only about 20 different cell types,1 but underneath their apparent simplicity lies a secret ability that they share with their namesake: Hydra, despite their humble appearance, are immortal.

The Process of Growing Old


     The large majority of a living organism’s genetic information is stored as DNA, which needs to be protected from damage. Each time a cell undergoes DNA replication in preparation for cell division, an unfortunate limitation on the proteins involved causes DNA at the ends of chromosomes to be lost. Over time, the persistent loss of genetic information could lead to serious problems for the cell and for the organism in general, like deformation or cancer.2 Luckily, the cell has found ways to postpone this eventual crisis by capping chromosomes with repeated sequences of nucleotides called telomeres. These telomeres can be shortened freely, acting as expendable protection for the chromosome because they don’t code for any genes and can afford to be lost.
     Telomeres play a crucial role in the cell by protecting the chromosome from damage and by indicating when DNA has reached the critical point at which it can no longer be repaired properly. Similar to how the plastic tips at the ends of shoelaces prevent them from fraying, telomeres keep chromosomes separate from each other, preventing their ends from fusing together and averting other issues like DNA disintegration. These issues contribute strongly to chromosomal instability, one of the leading causes of the development of tumors and cancer.3 Since telomeres shorten with each cell division, the length of a chromosome’s telomeres can indicate how many times a cell has divided.4 Once a cell divides too many times, the telomeres can no longer protect the chromosomes, DNA damage accumulates, and the cell eventually commits cellular suicide (apoptosis).5 Apoptosis is necessary so that the damaged cell does not continue to accumulate mutations, which could lead to the development of a variety of illnesses, including cancer. This gradual loss of dividing cells forms the basis of a process called replicative cellular senescence: the aging of the organism through effects on stem cell populations and the immune system.4,6
     Hydra are an exception to this rule, never aging despite frequent cell division. Cells in the Hydra’s three tissue layers are created and discarded in a flash, allowing for a constant displacement of cells away from the body column.1 As cells continuously move outwards, they are quickly replaced. With so much cell division, one would expect Hydra to reach replicative cellular senescence rapidly. Oddly enough, however, Hydra have been shown not to undergo senescence.7 They can still die in the usual ways, such as being eaten by other organisms, but in ideal conditions Hydra could theoretically exist forever.

Telomeres, Telomerase, and the FoxO Gene

      As mentioned above, telomeres shorten every time a cell divides, a situation that eventually leads to apoptosis for cells and senescence for the organism. To promote longevity, cells employ another mode of protection in the form of the enzyme telomerase. Telomerase is composed of RNA and a catalytic subunit8,9 which work together to elongate telomeres during DNA replication to offset the normal shortening of telomeres. Despite being hailed as a miracle enzyme, telomerase is mysteriously absent in most differentiated cells, appearing only in cancerous and/or immortalized cells (mutated cells that just keep dividing). Unlike most organisms, Hydra cells utilize functioning telomerase to counteract telomere shortening. As a result, these cells have sufficiently long telomeres and do not age.
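      The arithmetic of this tug-of-war between shortening and re-extension is easy to sketch. The toy simulation below, written in Python, trims a telomere by a small amount at each division and, in the “Hydra-like” case, lets telomerase add length back; every number in it is invented for illustration and is not a measurement from the studies cited in this article.

```python
import random

# A toy simulation (not a biological model): each division trims the
# telomere by a random amount; once its length drops below a critical
# value, the cell stops dividing (replicative senescence). A Hydra-like
# cell with active telomerase re-extends its telomeres each division and
# never reaches that limit. All numbers are invented for illustration.
START_LENGTH = 10_000          # starting telomere length, in base pairs (illustrative)
CRITICAL_LENGTH = 4_000        # below this, the cell can no longer divide safely
LOSS_PER_DIVISION = (50, 150)  # base pairs lost per division (illustrative range)

def divisions_until_senescence(telomerase_active, max_divisions=1_000):
    """Count divisions before the telomere falls below the critical length."""
    length = START_LENGTH
    for division in range(1, max_divisions + 1):
        length -= random.randint(*LOSS_PER_DIVISION)   # end-replication loss
        if telomerase_active:
            length += random.randint(50, 150)          # telomerase re-extends the ends
        if length < CRITICAL_LENGTH:
            return division                            # replicative senescence reached
    return max_divisions                               # still dividing after the whole run

print("Somatic-like cell:", divisions_until_senescence(telomerase_active=False), "divisions")
print("Hydra-like cell:  ", divisions_until_senescence(telomerase_active=True), "divisions")
```

With losses steadily outpacing gains, the somatic-like cell exhausts its telomere after a few dozen divisions, echoing the replicative limit described above, while the telomerase-active cell is still dividing when the simulation ends.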
      Longevity and senescence correlate strongly with the FoxO gene in particular.6,10 It is with FoxO that we can make a connection between a “longevity gene” and telomerase. FoxO transcription factors play many roles throughout the cell, from regulating apoptosis to counteracting stresses such as overheating or starvation.11 They also control parts of the cell cycle, telling the cell what to do at checkpoints depending on internal and external signals. One protein in the FoxO family, FOXO3a, is a longevity factor: its overexpression leads to a marked increase in an organism’s lifespan.12 FOXO3a prevents senescence by enhancing the expression of the gene coding for telomerase’s catalytic subunit,13 thereby enhancing telomerase activity within the cell. FOXO3a therefore plays a critical role in the regulation of telomerase activity.
      FOXO3a is strongly expressed in all cell layers of Hydra,1 and is thus one of the primary factors contributing to Hydra’s ability to self-renew. The relationship between FOXO3a expression, telomerase, and longevity in Hydra suggests the possibility of a future where immortality transitions from a whimsical motif in Greek mythology to an attainable reality.

What About Us?


     In addition to the constant presence of telomerase, Hydra’s asexual method of reproduction1 also factors into its immortality. This method requires the presence of stem cells that constantly renew themselves in further cycles of cell division. Also, the more frequently cells divide, the easier it is to avoid a build-up of cellular and genetic damage since the cells are so quickly discarded. For humans, however, this rapid replacement of cells may not be as practical. Highly specialized cells such as neurons in the brain and cardiomyocytes in the heart depend strongly on their connections with other cells so that organs can function as cohesive units. Replacing cells would reset these connections and reduce the functionality of our complex organs. Despite this disparity between Hydra and humans, the similarities between genes are important to our understanding of how aging works. Although immortality still remains out of reach with our current levels of knowledge, future studies in senescence and immortality may help to uncover the key to longevity for the human race.

References



1. Klimovich, Alexander, Anna Marei Bohm, and Thomas C.G. Bosch. “Hydra and the Evolution of Stem Cells.” In Stem Cell Biology and Regenerative Medicine, edited by Charles Durand and Pierre Charbord, 113-35. Denmark: River Publishers, 2015. E-book. https://books.google.com/books?id=yRjjBQAAQBAJ&printsec=frontcover

2. Clancy, Suzanne. 2008. “DNA Damage & Repair: Mechanisms for Maintaining DNA Integrity.” Nature Education 1(1): 103. Accessed November 21, 2015. http://www.nature. com/scitable/topicpage/dna-damage-repair-mechanisms-for-maintaining-dna-344

3. Wai, Lin Kah. 2004. “Telomeres, Telomerase, and Tumorigenesis -- A Review.” The Medscape from WebMD Journal of Medicine. http://www.medscape.com/viewarticle/482667

4. Kuilman, Thomas, Chrysiis Michaloglou, Wolter J. Mooi, and Daniel S. Peeper. 2010. “The essence of senescence.” Genes & Dev 24(22): 2463-79. doi: 10.1101/gad.1971610.

5. Shay, Jerry W., and Woodring E. Wright. 2000. “Hayflick, his limit, and cellular ageing.” Nature Reviews Molecular Cell Biology 1(1): 72-76. doi: 10.1038/35036093.

6. Boehm, Anna-Marei, Philip Rosenstiel, and Thomas C.G. Bosch. 2013. “Stem cells and aging from a quasi-immortal point of view.” Bioessays 35(11): 994-1003. doi: 10.1002/bies.201300075.

7. Khokhlov, A.N. 2014. “On the immortal hydra. Again.” Moscow University Biological Sciences Bulletin 69(4): 153-7. doi: 10.3103/S0096392514040063.

8. Cong, Yu-Sheng, Woodring E. Wright, and Jerry W. Shay. 2002. “Human Telomerase and Its Regulation.” Microbiol. Mol. Biol. Rev. 66(3): 407-25. doi: 10.1128/MMBR.66.3.407-425.2002.

9. Zhang, Yong, LingLing Toh, Peishan Lau, and Xueying Wang. 2012. “Human Telomerase Reverse Transcriptase (hTERT) Is a Novel Target of the Wnt/β-Catenin Pathway in Human Cancer.” J Biol Chem 287(39): 32494-511. doi: 10.1074/jbc.M112.368282.

10. Kenyon, Cynthia J. 2010. “The genetics of ageing.” Nature 464(7288): 504-12. doi: 10.1038/nature08980.

11. Dumas, Kathleen Johanna. 2013. “Characterization of Novel Regulators of FoxO Transcription Factors.” PhD diss., University of Michigan.

12. Carter, Matthew E., and Anne Brunet. 2007. “FOXO transcription factors.” Current Biology 17(4): R113-4. doi: 10.1016/j.cub.2007.01.008.

13. Boehm, Anna-Marei, Konstantin Khalturin, Friederike Anton-Erxleben, Georg Hemmrich, Ulrich C. Klostermeier, Javier A. Lopez-Quintero, Hans-Heinrich Oberg, Malte Puchert, Philip Rosenstiel, Jorg Wittlieb, and Thomas C.G. Bosch. 2012. “FoxO is a critical regulator of stem cell maintenance in immortal Hydra.” PNAS 109(48): 19697-702. doi: 10.1073/pnas.1209714109.

14. Yamashita, Shuntaro, Kaori Ogawa, Takahiro Ikei, Tsukasa Fujiki, and Yoshinori Katakura. 2014. “FOXO3a Potentiates hTERT Gene Expression by Activating c-MYC and Extends the Replicative Life-Span of Human Fibroblast.” PLoS ONE, 9(7): e101864. doi: 10.1371/journal.pone.0101864.

Image Credit (Creative Commons): “ARMAS DE HYDRA. HYDRA VULGARIS.” Accessed November 24, 2015. https://www.flickr.com/photos/microagua/16624195064

Irena Feng is a first-year student at the University of Chicago majoring in Biological Chemistry and minoring in East Asian Languages and Civilizations. Her past research in genetics and developmental biology has led to an interest in DNA and its maintenance, modifications, and role in development.

Criminal Minds: The Biological Basis of Criminal Behavior

Lindsey Jay

     Imagine what it would be like if we lived in the world of the 2002 science fiction movie “Minority Report,” where psychics help to arrest would-be criminals before they commit a crime. While we cannot see into the future yet, scientists have been studying criminal behavior in the field of neurocriminology since the late 19th century in order to understand what differentiates criminals from the average person.
     Cesare Lombroso studied criminals and claimed that they could be identified by physical attributes, that criminality was inherited, and that criminals were a form of more primitive human.1,2 His findings were controversial, with suggestions that features such as a beaked nose correlated with criminality.1 However, the idea that we could biologically predict criminal behavior fascinated many people, and it continues to be a hot topic of inquiry today. Lombroso’s claims are no longer accepted as true, but the idea that there may be a biological basis for criminal behavior motivated the creation of the discipline of neurocriminology.
      Neurocriminology centers on studying the brains of criminals in order to better understand, predict, and prevent crime.3 With the emergence of brain imaging technologies, neurocriminology took off; now we can actually “see” and compare brains and brain activity. This technology allows scientists to see what parts of the brain light up during various tasks, a possible gateway into understanding how people think. Neurocriminology is mainly interested in comparing the brain activity of criminals with that of “normal” people.4 Scientists in the field hope to discover what motivations or biological factors prompt violent behavior.
     Dr. Adrian Raine, a professor in the University of Pennsylvania’s Criminology Department, is a pioneer of neurocriminology. Using positron emission tomography (PET) scans, which use a radioactive tracer to study tissue metabolic activity, Raine identified a particular area of interest: the prefrontal cortex.2,4 The prefrontal cortex is responsible for executive decision-making. In a study of 42 murderers’ brains, he found much less activity in this region than in the brains of average humans. This suggests that murderers may have less control over violent behavior, since their prefrontal cortex is not as active.2
      Raine had two explanations for what causes such a lack of activity: nature and nurture. In a study of over 100 twins, researchers found that about half of their aggressive and antisocial behavior was genetic.2 Simply put, some people are more predisposed to violent behavior than others, much as some are more predisposed to alcoholism. Additionally, those diagnosed with antisocial personality disorder (APD) are much more likely to have violent tendencies.
       The “nurture” explanation is slightly more complicated because there are so many factors to consider. Any environmental factor may play a role in shaping a person’s behavior. Raine noted that something as simple as shaking a crying baby has the potential to cause head trauma and damage to the prefrontal cortex. Other factors such as lead exposure or alcohol use during pregnancy may also contribute to lower activity in the prefrontal cortex.2
      However, this trend of lower activity in the prefrontal cortex does not apply to all criminals. In particular, the prefrontal cortex of long-time serial killers has been shown to function normally. These killers have excellent control over their decision-making and follow calculated plans of action, unlike murderers who kill spontaneously. Raine found that they instead have a reduction in the size of the amygdala, the emotion center of the brain.2 The amygdala makes up a small portion of the brain and is involved in fear responses, conscience, and remorse.2,5 A reduction in its size may help explain how long-term murderers can make cool, calculated decisions uninhibited by emotion. On top of that, Dustin Pardini of the University of Pittsburgh and his team found that 26-year-old men with smaller amygdalas were up to three times more likely than men with normal-sized amygdalas to be aggressive, violent, and to show psychopathic traits three years later.6
     With all these findings on criminal minds, many wonder how to prevent the creation of future criminals. Though we cannot change genetic predispositions, we can prevent exposures such as alcohol use during pregnancy and try to adjust other environmental factors. As for how to punish criminals, Raine believes that, much as in “Minority Report,” future criminals could be identified from their brain images before a crime is committed, for instance by screening all males at the age of eighteen.3 Raine also believes that those with suspicious brain scans should be sent to a holding facility away from society.3 His proposal has been met with many objections, for there are serious moral implications to punishing someone for something they have not yet done. While an ideal world may be one that is criminal-free, the cost would be a restriction on people’s freedom. Raine’s ideas, though radical, spur interesting thought on how to handle crime in the future. For now, researchers are focusing on how to target and help individuals predisposed to violent behavior in other ways, such as teaching them meditation techniques.2
      The field of neurocriminology has made great advances, but there is still much to be explored in the subject, especially how to apply the research to predict and prevent crime. In the near future, we may see a change in how we handle and prevent crime. Stay tuned.

References



1. McFarnon, E. “The ‘born criminal’? Lombroso and the origins of modern criminology”. Historyextra. October 2015.

2. Raine, A. “The Criminal Mind”. The Wall Street Journal. April 2013.

3. Dahl, O. “Neurocriminology: The Disease Behind the Crime”. Dartmouth Undergraduate Journal of Science. November 2013.

4. NPR. “Criminologist believes violent behavior is biological”. NPR. March 2014.

5. ScienceDaily. “Amygdala”. ScienceDaily.

6. Miller, A. “The Criminal Mind”. American Psychological Association. Vol. 45. 2014.

How Should We Understand Overpopulation?

Varun Joshi

     Christiana Figueres, the head of the United Nations Framework Convention on Climate Change, said earlier this year that we should make “every effort” to curtail the growth rate of the global population.1 This call to curb population growth can be traced back to the United Nations’ recent finding that by 2050, the world will be populated by approximately 9.6 billion people, an alarming surge from the current figure of 7.2 billion.2 This presents a problem because, according to Figueres, we are “already exceeding the planet’s carrying capacity.”3
     The current population growth rate marks a worrying trend, encapsulated in the word “overpopulation,” which implies that more people are living on the planet than is sustainable. However, sustainability is a vague descriptor for such a complex issue. The problem of overpopulation is difficult to solve because any plan to combat it must take into consideration the sustainability of both the human population and the environment.
      An unsustainable population is one in which a significant portion of the population is unable to meet its basic needs for survival. The human population meets this criterion, as at least a seventh of the global population, one billion people, is “malnourished or starving.”4 Even maintaining this dismal status quo is predicted to require around 900 million more hectares of land to be deforested for agricultural purposes, a far cry from the 100 million hectares that are actually available for deforestation.5
     A straightforward solution to overpopulation might be to decrease rates of human procreation, allowing the planet’s resources of water, land, and space to be spread more evenly among those already living. However, such practices raise ethical issues, as many would argue that unrestricted human procreation is a human right. A more ethical starting point would be contraception, women’s rights and education, and family planning. The Guttmacher Institute reports that 215 million women in the developing world who would prefer not to become pregnant do so anyway because they lack access to modern contraception.6 Furthermore, by setting educational agendas for women, advancing women’s rights, and increasing family planning, unplanned and coerced pregnancies would decrease and people would be able to make more informed decisions regarding the conception of children.
      However, if the goal of sustainable population growth is not to limit population growth but rather to accommodate a growing population, there lies promise in technological developments. For Professor Erle Ellis, humans have always altered their environment through technology to meet their needs, from building cities to using genetic engineering to increase crop yields by 22%.7 There is no reason why we cannot use technologies in the future to increase agricultural yields, build complex skyscrapers to increase housing, manage space within cities more efficiently, and solve problems such as global warming. Technology would allow us to meet human needs around the world, as it has done in the past.
      However, one may ask whether there is ever an “optimum” population. Jesper Ryberg, a professor of Ethics and Philosophy of Law at Roskilde University in Denmark, defines the “optimum” population as one which, under a defined area and state of affairs, “maximizes well being over time.”8 Creating an “optimum” population, defined by the ability of its members to support their well-being over time, contrasts with establishing a system that merely sustains the status quo of access to basic needs. There is a distinction between meeting one’s needs and fostering one’s well-being, as the latter allows for human flourishing and growth while the former simply aims to provide access to shelter, food, and water. If the goal is to create a world where every human flourishes, perhaps population control will be necessary. This is because human flourishing would likely involve access not just to food but to variety in food, not just to education but to quality and personalized education, not just to living spaces but to homely living spaces, and not just to social benefits but also to supportive social institutions. Even if technology can meet our needs, surpassing them and enabling flourishing over the next few decades will likely require population control.
       While overpopulation threatens human sustainability, one should also consider its impact on environmental sustainability. According to the United Nations Environmental Programme, the past century or so has been dubbed the “Anthropocene Epoch” because of the massive damage that humans have caused to the Earth’s natural environment, such as a dramatic increase in the amount of CO2 released and species extinctions at up to 1,000 times the natural rate.9 Population growth is also damaging the world’s coastlines, a catastrophic situation for marine organisms.10 Human overpopulation is crowding out other species, depleting common resources, and harming the atmosphere.
     Using environmentally friendly technologies or geoengineering could curtail the effects of overpopulation on the environment, but this might not be the best solution. Alan Weisman, a former University of Arizona professor, notes that technological development can only go so far because it requires resources, which are finite.11 Furthermore, technologies may produce unintended, harmful consequences. Perhaps both population control and new technologies will be needed to create a sustainable environment.12 However, to create a flourishing Earth we will have to curtail how much of the environment we use for our own ends, requiring a radical decrease in human consumption in all areas of life, from energy to clothing.13
      Population growth threatens the sustainability of both the environment and the human race, and even though the creation of new technologies has great promise, saving one will likely come at the expense of saving the other. Furthermore, should we expand our focus from simply sustainability to the flourishing of humans and the environment? Does such a goal make population control morally right? These questions will guide us in our quest for solutions to the problem of overpopulation.

References



1. Bastasch, Michael. "UN Climate Chief: We Should ‘Make Every Effort’ To Reduce Population Increases." The Daily Caller. April 16, 2015. Accessed December 2, 2015.

2. "World Population Projected to Reach 9.6 Billion by 2050." United Nations Department of Economic and Social Affairs. June 13, 2013. Accessed December 2, 2015.

3. Bastasch, Michael. "UN Climate Chief: We Should ‘Make Every Effort’ To Reduce Population Increases." The Daily Caller. April 16, 2015. Accessed December 2, 2015.

4. Biello, David. "Another Inconvenient Truth: The World's Growing Population Poses a Malthusian Dilemma" Scientific American. October 2, 2009. Accessed December 2, 2015.

5. Ibid.

6. Kristof, Nicholas. "The Birth Control Solution" New York Times. November 2, 2011. Accessed December 2, 2015.

7. Klümper, Wilhelm, and Matin Qaim. "A Meta-Analysis of the Impacts of Genetically Modified Crops." PLOS ONE. November 3, 2014. Accessed December 2, 2015.

8. Ryberg, Jesper. "The Argument from Overpopulation: Logical and Ethical Considerations." JSTOR. May 1, 1998. Accessed December 2, 2015. http://www.jstor.org/stable/pdf/27503595.pdf?acceptTC=true.

9. "One Planet, How Many People? A Review of Earth’s Carrying Capacity." United Nations Environmental Programme. June 1, 2012. Accessed December 2, 2015. https://na.unep.net/geas/archive/pdfs/geas_jun_12_carrying_capacity.pdf.

10. De Sherbinin, Alex, David Carr, Susan Cassels, and Leiwen Jiang. "Population and Environment." National Center for Biotechnology Information. December 14, 2009. Accessed December 2, 2015. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2792934/#R84.

11. Gais, Hannah. "How Many People Is Too Many People?" US News. September 27, 2013. Accessed December 2, 2015.

12. "One Planet, How Many People? A Review of Earth’s Carrying Capacity." United Nations Environmental Programme. June 1, 2012. Accessed December 2, 2015.

13."Humans: The Real Threat to Life on Earth." The Guardian. June 29, 2013. Accessed December 2, 2015.

Why Do Those Pesky Neutrinos Have Mass? - The 2015 Physics Nobel Prize

Tom Klosterman

     On October 6, the Royal Swedish Academy of Sciences awarded the 2015 Nobel Prize in Physics to Takaaki Kajita of Japan and Arthur B. McDonald of Canada for the discovery that neutrinos have mass. Working independently, the two scientists led teams which discovered that neutrinos traveling through the earth and arriving from the sun, respectively, undergo a metamorphosis: they oscillate between three flavors, something that can only happen if they have mass.
      In your typical high school chemistry class, one of the first things you learned is that matter is composed of tiny particles called ‘atoms.’ These are in turn composed of protons and neutrons in the nucleus, with orbiting electrons. Later on, you learned that some nuclei are unstable and split up, or decay, by emitting radiation. One way this can happen is beta (β) decay. As any old Wiki article can tell you, one electron is emitted in β decay as a neutron in the nucleus converts into a proton. Scientists assumed the energy released in β decay would always be constant, but in 1914, James Chadwick showed that it can take a spectrum of values.1 An Austrian physicist, Wolfgang Pauli, soon proposed that a new, electrically neutral particle was released in β decay alongside the electron, and that variations in the energy this particle carried away accounted for the spread in the measured energies.
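      In modern notation (a standard textbook summary rather than something drawn from the references below), the bookkeeping looks like this:

\[
n \;\to\; p + e^- + \bar{\nu}, \qquad Q \;\approx\; T_{e^-} + T_{\bar{\nu}},
\]

where $Q$ is the fixed energy released by the decay (the tiny nuclear recoil is neglected) and $T$ denotes kinetic energy. Because the unseen neutral particle can carry off anywhere from almost none to almost all of $Q$, the electron’s share forms the continuous spectrum Chadwick measured.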
     Consider that at this time, all matter was thought to consist solely of protons and electrons. Needless to say, Pauli’s imaginary particle, which he dubbed the “neutron,” electrified the scientific community. When Chadwick proved the existence of the neutron in the nucleus in 1932,2 Pauli realized it was far too massive to be the neutral particle in β decay. The theoretical particle was renamed the “neutrino.” Then, more than forty years after the first glimpse of neutrinos, Cowan and Reines conclusively proved their existence in 1956.3 Because of their neutrality and minuscule size, the particles are almost impossibly hard to detect, but by setting up a detector near a nuclear reactor, the team was able to detect the byproducts of several β decays that could only have been caused by the elusive neutrino. Finally the miniature neutrinos had been found.
      Fast forward to the 1990s. The Standard Model of physics was now almost fully developed. In particular, the muon (μ−) and the tau (τ−) had been confirmed as more massive analogues of the electron (e−). When any of the three is produced, a corresponding neutrino is typically produced as well: an electron neutrino (νe), muon neutrino (νμ), or tau neutrino (ντ). For example, νe particles are produced alongside the electron in β decay. Importantly, the Standard Model takes the mass of neutrinos to be exactly zero. Lastly, astrophysicists had shown that only νe particles are emitted from the sun, in staggeringly huge numbers. However, even the most sensitive detectors, built hundreds of feet underground, only recorded a handful of neutrino collisions a day. But scientists began to notice a discrepancy: they were only capturing about a third of the neutrinos that they should. Was neutrino theory wrong, or did scientists need to rethink all of solar science?
      This is where Takaaki Kajita and the Super Kamiokande detector came in. During the 1990s, the Japanese detector showed a similar shortfall with a νμ source on the other side of the earth, hinting that perhaps neutrinos ‘disappear.’4 However, the disappearance was proportional to the distance from the source, so a theory was postulated in which neutrinos oscillate between the three types. The farther from the source, the more the neutrinos would settle into a one-to-one-to-one ratio among the three types. The exact mechanism for this was unknown, but experimentally it held up.
      Then, in Canada, the Sudbury Neutrino Observatory (SNO), led by Arthur McDonald, showed with greater precision that the number of νe particles detected on Earth is only a third of the number produced in the sun. After detecting νμ and ντ particles as well, the SNO was able to conclude in 2002, with 99.999% confidence, that neutrinos from the sun oscillate between the three flavors during the trip to Earth.5
     How, then, does this oscillation prove the existence of neutrino mass? Unfortunately the exact reason is very mathematical and complicated, and involves words like eigenstates and phase factors. But qualitatively, it is a consequence of quantum mechanics, where matter simultaneously takes the form of waves and particles. The three flavors of neutrinos (νe, νμ, and ντ) each describe a certain wave, with energy related to the mass of the particle. When a neutrino is created, it is a combination of these three waves, with one wave (the particular flavor of the neutrino) dominating. If the three waves all had the same energy, they would never shift relative to one another and the particle would never oscillate. Therefore, while one of the neutrinos could still have zero mass, the other two cannot.
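      For readers who want a taste of the mathematics, the simpler two-flavor case is a standard textbook result (summarized here, not taken from the article’s references). The probability that a neutrino created in flavor $\alpha$ is detected as flavor $\beta$ after traveling a distance $L$ with energy $E$ is

\[
P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\,L}{4E}\right)
\;\approx\; \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right),
\]

where $\theta$ is the mixing angle and $\Delta m^2 = m_2^2 - m_1^2$ is the difference of the squared masses. If all the masses were zero (or all equal), $\Delta m^2$ would vanish and so would the oscillation; the flavor changes seen at Super Kamiokande and SNO therefore require at least two of the neutrino masses to be nonzero and unequal.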
      This has large-scale implications for physics as a whole. As mentioned, the Standard Model only allows for massless neutrinos, so the Standard Model, on which the entire field of particle physics is based, will have to be updated. In addition, even though neutrino masses are incredibly small, so many neutrinos are generated in stars, supernovae, and anywhere with radioactive decay that their total mass could outweigh that of all the stars in the universe combined. It is clear that much will come out of Kajita and McDonald’s discovery of neutrino mass, and for it the two were awarded the 2015 Nobel Prize in Physics.

References



1. Chadwick J 1914 Verh. der Deutschen Physikalischen Ges. 16 383.

2. Chadwick J 1932 Nature 129 312.

3. Reines F and Cowan C L Jr 1956 Science 124 103.

4. Fukuda Y et al 1998 Physical Review Letters 81 1562.

5. Ahmad Q R et al 2002 Physical Review Letters 89 011301.

Tom Klosterman is a first-year student at the University of Chicago majoring in Physics.

Can You Feel What I Feel? Outside Factors May Affect Empathy

Elizabeth Lipschultz

     “Developing and maintaining empathy for others, even in the face of strong dislike or disagreement, is one of society’s most pressing concerns,” the psychologist Tony Hacker once declared.1 Mutual understanding and concern are behind some of the most beautiful things that people do: tending the sick, aiding the poor, and comforting the mourning. A lack of empathy can allow people to commit unimaginable atrocities, while its presence strengthens social bonds and motivates humans to altruism. Recent research by scientists at the University of Vienna and Beijing Normal University, however, indicates that certain situations may affect the degree to which empathy is experienced.
      In a study released last month in the Proceedings of the National Academy of Sciences, researchers in Austria performed an experiment in which participants were given placebo painkiller tablets before receiving a small shock and then observing a shock being administered to another individual.2 As expected, these subjects rated their own pain upon being shocked as significantly lower than did a control group that received no tablet. However, the participants who had taken the placebo “painkiller” also rated the pain of the people they observed as significantly less severe than the control group did. Both effects (the reduction of personal pain and the reduction in empathic response to the pain of others) were diminished when subjects were also administered naltrexone. Naltrexone is often used to help people overcome addiction to pain medications because it blocks the opioid pathway that normally gives opiates their effectiveness, and it blocks the effects of placebo painkillers just as it blocks those of real opioid drugs. This is significant because it strongly indicates a relationship between the way real pain is experienced within the brain and the experience of empathy. When people experience empathy, they may actually be feeling the pain of others in a very real sense; that is to say, the brain may attempt to internally emulate the experiences of others, resulting in an understanding of observed pain that is grounded in shared experience.
     This is relevant because more people are taking prescription medications to treat pain now than ever before. According to the United States Centers for Disease Control and Prevention, the quantity of prescription painkillers prescribed and sold in the US has nearly quadrupled in the last 16 years.3 In addition, the CDC reports that in 2013 nearly 2 million Americans were abusing these medications. When such a significant portion of the population is using opioid-pathway-stimulating medications in high quantities, a large subgroup of society could be living day-to-day with a depressed capacity for empathy.
      Effects on the brain’s opioid pathways may strengthen or weaken the experience of empathy, but they are not its only modulators. A study published recently in Social Cognitive and Affective Neuroscience indicates that social status may also play a role.4 Chinese scientists ranked participants in an artificial social hierarchy based on proficiency at an arbitrary skill. Once the hierarchy was established, subjects underwent functional magnetic resonance imaging (fMRI) to monitor brain activity while they watched videos of other participants (of inferior or superior rank in the system) receiving painful stimuli. The fMRI results showed more activity in areas associated with empathy when the individual being hurt was ranked below the subject than when the individual was ranked above. This suggests that empathy, at the neural level, may be biased toward those the observer believes to be socially inferior. The finding runs counter to the stereotype that the rich lack empathy toward the poor, and it warrants further research. Perhaps there are limits on this effect, or perhaps the stereotype of the aloof rich has less truth to it than previously imagined.
      In a highly stratified society, differences in social status could shape whom individuals feel empathy toward, and to what degree. Within a workplace, for instance, individuals may be more capable of empathy toward those who work under them than toward those who have authority over them. Information like this could inform how workplaces and communities are structured to create the most supportive environment possible.
      Traditionally, empathy has been treated as a static trait, an immovable part of a person’s personality: people are described as more or less empathetic in a concrete way that implies empathy is independent of outside considerations. Recent research, though, paints a broader picture. Empathy may be far more dependent on factors like medication or social status than was previously suspected. Knowing which factors influence the way, and the degree to which, empathy is experienced in the face of observed trauma could be critical to the mission of creating and protecting a cohesive human society.

References



1. Hacker, T. (2013) Building empathy builds society. In The Seattle Times. Retrieved from: http://www.seattletimes.com/seattle-news/health/building-empathy-builds-society/

2. Rütgen, M., Seidel, E., Silani, G., Riečanský, I., Hummer, A., Windischberger, C., Petrovic, P., Lamm, C. (2015) Placebo analgesia and its opioidergic regulation suggest that empathy for pain is grounded in self pain. Proceedings of the National Academy of Sciences of the United States of America, 112(41). Retrieved from: http://www.pnas.org/content/112/41/E5638.full.pdf

3. Injury Prevention & Control: Prescription Drug Overdose. (2015) In Centers for Disease Control and Prevention. Retrieved from: http://www.cdc.gov/drugoverdose/epidemic/index.html

4. Feng, C., Feng, X., Wang, L., Tian, T., Li, Z., Luo, Y. (2015). Social hierarchy modulates neural responses of empathy for pain. Social Cognitive and Affective Neuroscience, 10(11). Retrieved from: http://scan.oxfordjournals.org/content/early/2015/10/26/scan.nsv135.full.pdf

Elizabeth Lipschultz is a second-year student at the University of Chicago majoring in Biology and minoring in Computational Neuroscience.

The Next Frontier

Stephanie Williams

      In what feels like science-fiction fantasy, researchers have found that swapping intestinal bacteria between two individuals can cause the recipient to take on the donor’s personality. Microbiologist Premysl Bercik and gastroenterologist Stephen Collins investigated this phenomenon by colonizing the intestines of one strain of germ-free mice with bacteria from another strain. They found that recipient animals took on the donor’s personality: mice turned from outgoing, friendly creatures into timid and reserved ones (Bercik, 2011). In a similar experiment, Irish researchers removed all intestinal microbes from mice and found that the animals suddenly lost their ability to recognize other mice (Schmidt, 2015).
      These are only two of the many recent experiments that highlight the remarkable role the microbiome plays in human health. A densely populated ecosystem, the microbiota that inhabit our gut harbor more than 3.3 million genes, exceeding the human genome by two orders of magnitude. A portion of these genes is conserved across individuals and is primarily responsible for basic nutrition: harvesting nutrients and energy, fermenting mannose, fructose, cellulose, and sucrose into short-chain fatty acids, and biosynthesizing vitamins. The rest of the genes, which vary from individual to individual, provide an immense repository of signaling information that can distinguish individuals from each other within a population. These unique genes are largely uncharted territory, but there is speculation that medical researchers could tailor prescriptions to the different kinds of aberrant genes in the microbiome, increasing the effectiveness of medication on a person-by-person basis. This lies, along with other personalized medicine endeavors, far in the future.
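      As a quick check on that “two orders of magnitude” figure (using the roughly 20,000 human genes cited later in this article as the point of comparison):

```latex
\frac{3.3 \times 10^{6}\ \text{microbial genes}}{2 \times 10^{4}\ \text{human genes}} \approx 165 \approx 10^{2}
```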
     Together, these several million microbiome genes are distributed across roughly 160 species of bacteria that populate the intestine. These species coexist in a highly dynamic environment and are affected by a range of factors: ethnicity, geography, circadian rhythms, gene polymorphisms, epigenetics, the enteric nervous system (ENS), diet, innate and adaptive immunity, bile acids, and host metabolites, among others (Wu, 2015). Despite the variability in species and in the factors affecting their activities, there is evidence suggesting that, like distinct blood types, there exist three distinct gut population types, each named after its most populous genus: Bacteroides, Prevotella, and Ruminococcus. Each population is equipped to carry out certain metabolic tasks given certain raw material. Bacteroides efficiently metabolize carbohydrates, Prevotella break down gut mucus, and Ruminococcus efficiently absorb sugars (Jones, 2011). Each has associated benefits and drawbacks, and having one dominant type predisposes an individual to a particular set of health issues: individuals with Bacteroides as the dominant type may struggle with obesity, individuals with dominant Prevotella may experience gut pain, and individuals with dominant Ruminococcus may experience natural weight gain (Jones, 2011). Knowing which genus dominates the gut ecosystem can inform doctors who are creating treatment plans for individuals facing metabolic-related obesity disorders.
      The importance of the microbiota is hard to overstate. The organisms play a major role in the host’s mental health, immune system, metabolism, homeostasis, and the transformation of small bioactive molecules that act as drug-like regulators. Disruption of the homeostasis between the microbiota and the host (“dysbiosis”) plays a more important role than host genetics in the development of certain diseases, such as inflammatory bowel disease, obesity, and type 2 diabetes. “We don't want to oversell it, but everywhere we look there is some connection,” says Michael Dority, program administrator for the Host Microbiome Initiative.
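      To make the enterotype idea above concrete, here is a minimal, purely illustrative sketch in Python. The genus names come from the article, but the abundance numbers, the annotations, and the simple “dominant genus” rule are hypothetical simplifications; real enterotype assignment relies on clustering whole community profiles, not a single argmax.

```python
# Toy sketch: identify the dominant enterotype genus from relative abundances.
# Abundance values below are invented for illustration only.

ENTEROTYPE_NOTES = {
    "Bacteroides": "efficient carbohydrate metabolism; linked to obesity risk",
    "Prevotella": "breaks down gut mucus; linked to gut pain",
    "Ruminococcus": "efficient sugar absorption; linked to natural weight gain",
}

def dominant_enterotype(abundances):
    """Return the genus with the highest relative abundance."""
    return max(abundances, key=abundances.get)

if __name__ == "__main__":
    sample = {"Bacteroides": 0.42, "Prevotella": 0.31, "Ruminococcus": 0.27}  # hypothetical sample
    genus = dominant_enterotype(sample)
    print(f"Dominant genus: {genus} ({ENTEROTYPE_NOTES[genus]})")
```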
      Autism is one disorder that has long been associated with intestinal dysbiosis; individuals with autism spectrum disorder often report experiencing intestinal problems. To investigate the link to the microbiome, researchers took a metabolomics survey of affected individuals and discovered one particular microbial metabolite, 4-ethylphenylsulfate (4EPS) (Garber, 2015). When researchers fed 4EPS to normal mice, the animals developed anxiety-like behaviors that resembled those of autistic mice. The mechanism by which the metabolite effects these changes is currently being investigated. This is one of many promising examples of potential alternative therapies for systemic disorders.
      Other microbiota research has focused keenly on the effect of diet on the metabolic activities of the microbiota. Certain diets can prevent bacteria from carrying out their metabolic tasks. Fiber-deficient diets, for example, lead to “a reduction in short-chain fatty acid production,” says microbiologist Justin Sonnenburg; “the modern, relatively fiber-deficient Western diet is out of balance with the gut microbial community, and it's this simmering state of inflammation that the Western immune system exists in that's really the cause of all the diseases that we've been talking about.” This is particularly important given the recent identification of the three different microbiota populations.
      Small-molecule drugs that target particular genes in the microbiome and the metabolites of bacterial species are currently being pursued as therapies for previously untreatable chronic diseases. Many traditional therapies are ineffective and treat symptoms rather than the root of chronic disease. The human microbiome is a “massive untapped source of novel drug targets,” commented Tadataka Yamada, medical and scientific officer: “we're talking about roughly several million microbial targets.” The human genome, in contrast, has traditionally provided only 20,000 drug targets (Garber, 2015). As part of their normal functioning, microbiota naturally produce thousands of drug-like compounds. The problem is, as Sonnenburg said, “We don't [know] what their chemical structures are and...we don't know what pathways and receptors they're binding to in our biology.” Simulating the body’s natural production of small drug-like compounds, and hijacking its own machinery for doing so, is a promising idea for treating diseases linked to or affected by signaling pathways that originate in the enteric nervous system (ENS). Elucidating these structures and pathways will be the next step in microbiome research, and will likely serve as a blueprint for the creation of new drugs.
      As for the current “probiotic” and “prebiotic” treatments, which are supposed to promote “good” bacteria, it is best to remain cautious of their claims. Most probiotics and prebiotics do not account for the three major microbiota populations, and they exert system-wide effects: they alter the entire microbial community. Stan Hazen, the chair of cellular and molecular medicine at the Cleveland Clinic Lerner Research Institute, says that the new drug-based approach may be more reliable than probiotics and prebiotics. “[They are] a huge black box. That's why I actually think that the drugs approach is a more scientifically predictable and tractable approach.”
      Though a great deal of the microbiome’s signaling and content still lies untouched by researchers, there is remarkable potential in what has already been discovered. Future research will likely provide novel drugs that can alleviate previously incurable aspects of chronic disorders, and will help catalyze the movement toward personalized medicine.

References



1. Bercik, P., et al. (2011). The intestinal microbiota affect central levels of brain-derived neurotropic factor and behavior in mice. Gastroenterology, 141 (2), 599-609. doi: 10.1053/j.gastro.2011.04.052

2. Carabotti, M., Scirocco, A., Maselli, M. A., & Severi, C. (2015). The gut-brain axis: interactions between enteric microbiota, central and enteric nervous systems. Annals of Gastroenterology : Quarterly Publication of the Hellenic Society of Gastroenterology, 28(2), 203–209.

3. Garber, K. (2015). Drugging the Gut Microbiome. Nature Biotechnology 33, 228–231.

4. Jones, N. (2011). Gut study divides people into three types. Nature. doi:10.1038/news.2011.249

5. Qin, J. et al. (2010). A human gut microbial gene catalogue established by metagenomic sequencing. Nature, 464, 59–65.

6. Reardon, S. (2014). Gut Brain Link Grabs Neuroscientists. Nature, 515, 175-177.

7. Schmidt, C. (2015). Mental health: Thinking from the Gut. Nature, 518, S12-S15. doi:10.1038/518S13a

8. Young, E. (2012). Gut Instincts: The Secrets of your Second Brain. New Scientist, 2895, 38-42, doi:10.1016/S0262-4079(12)63204-7

9. Wu, H., et al. (2015). Linking Microbiota to Human Diseases: A Systems Biology Perspective. Trends in Endocrinology & Metabolism, 26, 12, 758 – 770.

Who Can Counter the Food Waste Problem?

Jeremy Chang

     France made international headlines recently when the government passed legislation requiring supermarkets to donate unsold food to charities or to production facilities for animal feed.1 The United Kingdom supermarket chain Tesco has also announced an expansion plan for its stores to donate unsold food to charities, including women’s refuges and children’s clubs.2 These acts are widely seen as part of an emerging movement to counter the issue of food waste.
     Food waste has increasingly been viewed by environmental and food security organizations as an aspect of modern life bordering on the absurd. About one-third of all food produced worldwide is currently lost or wasted in food distribution and consumption systems.3 This is happening in a world where one out of every nine people is still chronically hungry.4
     Developed countries deserve much of the blame, as consumers in these nations throw away almost as much food as the entire net food production of sub-Saharan Africa (222 million vs. 230 million tons). This prodigious amount of food waste also drives increased emissions of methane, a potent greenhouse gas, when the waste is left to decompose in traditional landfills.3,5
     The food waste issue tends to be overlooked in the United States, even though the average American is responsible for 20 pounds of wasted food per month.6 This problem is not going to be fixed through ignorance or negligence, but rather through awareness and cooperation. Luckily, efforts on multiple levels of society are coming together to address the problem of food waste in innovative and exciting ways.

Command and Control…and Publicly Shame?

      Governments must play a key role in shaping food waste practices, as the French government demonstrated last month. The European Union has emerged as an international leader with its goal for member states to reduce food waste by at least 30% by 2025.7 In the United States, most policy actions for combating food waste occur at the state or municipal level.
      The state of Massachusetts began enforcing legislation last year that prohibits organizations producing a ton or more of food waste per month from sending that waste to landfills. Instead, organizations must ship their waste to anaerobic digesters or to composting facilities.8 Anaerobic digesters decompose food waste quickly through a combination of mechanical and biological processes, leading to the safe collection of methane that can be used to generate electricity.
      Around 1,700 organizations, including businesses, colleges, and hospitals, must comply with the ban. The food waste ban has started off robustly with little opposition, and now places such as Vermont, Connecticut, and New York City are also looking into policies to reduce food waste in landfills.8
     Across the country in Seattle, Washington, the city government is attempting a more unorthodox method for reducing food waste in landfills. A new law requires residents and commercial establishments to compost food waste themselves or to enlist services to send food waste to composting facilities.9 For the first half of this year, the city will publicize the failures of residents who do not obey the new law: any trashcan that contains more than 10% food waste by volume receives a red tag for the neighborhood to see. For those individuals who are impervious to peer pressure, the city will begin levying monetary fines for each infraction later in the year: $1 for households and $50 for apartment complexes.9 The effectiveness of those red tags of shame may be up in the air, but the approach is definitely novel.

The March of the Ugly Produce

     While food waste manifests as leftover scraps in most of our minds, large amounts of food are thrown away even before supermarkets restock their shelves. People in developed countries have come to expect perfect produce when shopping, resulting in the disposal of edible fruits and vegetables that simply are not up-to-par aesthetically.
     At the institutional level, businesses are beginning to embrace the sale of misfit produce. France’s third-largest supermarket chain, Intermarche, kicked off the trend last year with an advertising campaign that trumpeted “the grotesque apple” and “the unfortunate clementine,” among other cheeky titles.10 In October 2014, Intermarche opened aisles in all 1,800 of its stores for ugly produce discounted at 30% off.10 Competing French supermarket brands have launched similar ventures, and this march of ugly produce is gaining traction in other developed countries such as the United Kingdom and Canada.11
     Yet even with the energy behind this movement, businesses including Intermarche have admitted that these boon times for ugly produce are transitory; the market for these fruits and vegetables is still limited. Nevertheless, these business initiatives help chip away at the notion that stores must stock only the most beautiful produce and throw away less-than-perfect produce.

The Power of People

     Grassroots movements and action at the individual level represent the most important sources for reforming food practices. It is up to people to elect government officials who recognize the social and environmental consequences of food waste. It is up to people to vote with their wallets as consumers to encourage businesses to reduce food waste. Most importantly, it is up to people to become informed about the issue and to make conscious efforts to mitigate the food waste problem.
     Sometimes the simplest ways of decreasing food waste are the most effective. The U.S. Environmental Protection Agency recommends that people buy only the portions of food they can finish, preserve leftovers for future meals, and donate unused food.12 The benefits of these practices include smaller financial expenditures on food, reduced methane emissions, the conservation of resources, and a more stable food supply.12 Governmental, institutional, and grassroots segments of society are beginning to combat the food waste issue, and it is only through constant cooperation among these groups that the problem of food waste will be successfully addressed.

References

1. Chrisafis, Angelique. "France to Force Big Supermarkets to Give Unsold Food to Charities." The Guardian. Guardian News and Media Limited, 22 May 2015. Web. 29 May 2015.

2. "Tesco Expands Charity Food Scheme - BBC News." BBC News. 2015 BBC, 4 June 2015. Web. 11 June 2015.

3. United Nations Environment Programme, Regional Office of North America. "Food Waste: The Facts." World Food Day. United States Committee for FAO, n.d. Web. 29 May 2015.

4. Ferdman, Roberto A. "One in Every Nine People in the World Is Still Chronically Hungry." Washington Post. The Washington Post, 16 Sept. 2014. Web. 29 May 2015.

5. Ferdman, Roberto A. "Americans Throw out More Food than Plastic, Paper, Metal, and Glass." Washington Post. The Washington Post, 23 Sept. 2014. Web. 29 May 2015.

6. Buzby J, Hyman J. “Total and per capita value of food loss in the United States”, Food Policy, 37(2012):561­570.

7. "EU Actions against Food Waste." EU Actions Against Food Waste. European Commission, Last Modified: 3 Dec. 2014. Web. 29 May 2015.

8. Kaplan, Susan. "Not In Our Landfill: Massachusetts’ Ban On Food Waste." News. New England Public Radio, 15 Oct. 2014. Web. 29 May 2015.

9. Radil, Amy. "Tossing Out Food In The Trash? In Seattle, You'll Be Fined For That." NPR. NPR, 26 Jan. 2015. Web. 29 May 2015.

10. "Intermarché - Inglorious Fruits & Vegetable." Intermarché - Inglorious Fruits & Vegetable. Intermarché, n.d. Web. 29 May 2015. .

11. Godoy, Maria. "In Europe, Ugly Sells In The Produce Aisle." NPR. NPR, 9 Dec. 2014. Web. 29 May 2015.

12. "Reducing Wasted Food Basics." EPA. United States Environmental Protection Agency, Last Updated 4 February 2015. Web. 11 June 2015.

Image Credit: Holt, Kate. “Boxes of Fresh Vegetables from AusAID.” Wikimedia Commons, June 9, 2009. Available URL: http://commons.wikimedia.org/wiki/Category:Vegetables#/media/File:AUSAID_SOUTH_AFRICA_(10672632116).jpg

Jeremy Chang is a first-year at the University of Chicago majoring in the biological sciences and hopes to earn an M.D-Ph.D. in the future.

The Microbial World Inside of Us

Jeremy Chang

     The human gastrointestinal tract, the largest microbiome of the body, is a red-hot field in science today. The topic has garnered so much attention that even the mainstream news discusses the field’s latest findings. What makes this dark (and potentially smelly) environment so appealing to scientists?
     The gut microbiome is novel because, unlike the brain, which is already partially developed at birth, the gut begins as a sterile environment and develops into a dense network of bacteria. This microbial world plays a vital role in nutrient processing, immune function, and the homeostasis of the gastrointestinal system.1 The sheer complexity of the interactions between the gut microbiome and the human body has historically made studying the microbiome difficult. With the development of advanced sequencing technology, however, scientists are piecing together the puzzle within every human.

Colonization and Early Exposure

      Colonization of the intestinal tract begins immediately after a baby is born. The growth is so fast that the microbial population reaches adult levels in only a few days.2 The tremendous microbial bloom is characterized by its fair share of chaos as different families of bacteria compete for dominance. The variability of bacterial profiles in young babies demonstrates the frenetic pace of the ecosystem’s development.3 Yet, despite the appearance of disorder, there is a method to the madness.
      Evolution has fine-tuned the symbiotic relationship between gut microbes and the host—a process called coevolution. A baby’s intestinal tract does not represent “open season” for all types of bacteria. In the colon, for example, only five bacterial subtypes exist, which is an extremely low number given the diversity of bacteria.4 This means that only microbes that are highly specialized can inhabit the ordinarily hostile environment of the gut.
      A single gene can make all the difference when a bacterium attempts to colonize the gut. The intestinal lining contains innate colonization resistance mechanisms that prevent any run-of-the-mill microbe from attaching itself to the lining, so specialized genes on the bacterium’s part are necessary. Genes that appear to be prominent during the colonization process are those that allow the organisms to utilize the local resources of the intestinal environment. When these genes function properly, the bacterium can overcome the host’s resistance mechanisms.5
     Despite the high turnover rate of the dominant bacterial type early in development, an individual’s gut microbiome stabilizes by three years of age. It is believed that this occurs because existing bacteria in the microbiome act as alternative forms of colonization resistance, actively excluding other microbe types from the ecosystem. This suggests that a baby’s early exposure to microbes is essential for a healthy gut. Even the mode of delivery can affect the baby’s microbiome for up to seven years.6 Babies born through a Cesarean section (C-section) have reduced exposure to maternal microbes, which may lead to increased numbers of infections and allergic disorders.7 Other factors such as hygiene, use of antibiotics, and infant feeding practices (breast milk versus formula) undeniably play a role in the development of the gut ecosystem.8

Diet and Disease

     Unsurprisingly, diet strongly influences the long-term profile of an individual’s gut microbiome, and it is also strongly linked to the pathogenesis of numerous diseases. Scientists have been hard at work revealing the role of gut bacteria as the intermediary connecting the two.
     One study attempted to correlate diet, the gut microbiome, and disease by comparing individuals with radically different diets. The study focused on native Africans, who had high-fiber diets, and urban African Americans, who had high-fat diets. The bacterial subtype Prevotella dominated the microbiomes of the native Africans, and Bacteroides dominated the microbiomes of the African Americans. The preponderance of the Bacteroides enterotype among the African Americans was associated with higher levels of secondary bile acids.9 Given that African Americans have significantly higher rates of colorectal cancer than native Africans, the study highlighted a correlation that implicated the microbiome’s potential involvement in disease.
     The possible role of gut bacteria in sickness has also been elucidated through mouse models.10 One such experiment began with mice that had a nonfunctioning IL-10 gene, which normally codes for anti-inflammatory proteins. The mice were then fed a diet rich in saturated milk fat. The milk-fat diet shifted the mice’s bile acid profile toward taurocholic acid, a sulfur-containing bile acid, making organic sulfur available in the gut. There was a subsequent bloom of B. wadsworthia, which uses organic sulfur sources to produce hydrogen sulfide (H2S).11 High H2S concentrations induce inflammatory responses, alter gene expression, and prevent DNA repair, contributing to numerous illnesses.12 When humans were placed on a fatty diet based on animal products, a similar result occurred: an increase in the B. wadsworthia bacterium.13 These findings provide strong evidence that abnormal gut microbiomes can translate into long-term, harmful effects for hosts.
      The world has trended toward a Westernized, high-fat diet over the last half century, and it shows. Over the same span, there has been a dramatic increase in inflammatory bowel disease, diabetes, obesity, and cardiovascular disease. For most of these afflictions, there is some association with an altered gut microbiota structure, whether it be low microbial diversity, the growth of harmful bacteria, or suboptimal bacterial ratios in the microbiome. Hopefully, research into the gut microbiome will lead to proactive methods of maintaining healthy gut bacteria. By helping these organisms, we are helping ourselves.

What’s Next?

     With our current knowledge, a comprehensive understanding of the gut microbiome remains elusive. As in many fields, but especially for the gut ecosystem, we know less than we would like to. By understanding the ecosystem more clearly, we gain a better appreciation of our intimate relationship with the microbial world that has accompanied us since birth. On the practical side, we will also be better prepared to treat and prevent diseases that are rising at alarming rates around the globe. This field deserves our attention, so do not be surprised to find yourself reading more articles about our unseen companions.

References

1. Sang Sun Yoon, Eun-Kyoung Kim, and Won-Jae Lee, “Functional genomic and metagenomic approaches to understanding gut microbiota–animal mutualism,” Cell Regulation, 24 (2015): 38-46, accessed February 24, 2015, doi:10.1016/j.mib.2015.01.007.

2. Yatsunenko, T., Rey, F. E., Manary, M. J., Trehan, I., Dominguez-Bello, M. G., Contreras, M., Magris, M., Hidalgo, G., Baldassano, R. N., Anokhin, A. P., Heath, A. C., Warner, B., Reeder, J., Kuczynski, J., Caporaso, J. G., Lozupone, C. A., Lauber, C., Clemente, J. C., Knights, D., Knight, R., Gordon, J. I., “Human gut microbiome viewed across age and geography,” Nature, 486 (2012): 222-227, accessed February 24, 2015, doi: 10.1038/nature11053.

3. Chana Palmer, Elisabeth M Bik, Daniel B DiGiulio, David A Relman, Patrick O Brown, “Development of the Human Infant Intestinal Microbiota,” PLOS Biology, (2007): accessed February 24, 2015, doi: 10.1371/journal.pbio.0050177.

4. Paul B. Eckburg, Elisabeth M. Bik, Charles N. Bernstein, Elizabeth Purdom, Les Dethlefsen, Michael Sargent, Steven R. Gill, Karen E. Nelson, David A. Relman, “Diversity of the Human Intestinal Microbial Flora,” Science, 308 (2005): 1635-1638, accessed February 24, 2015, doi: 10.1126/science.1110591.

5. S.M. Lee, G.P. Donaldson, Z. Mikulski, S. Boyajian, K. Ley, S.K. Mazmanian, “Bacterial colonization factors control specificity and stability of the gut microbiota,” Nature, 501 (2013): 426–429, accessed February 24, 2015, doi: 10.1038/nature12447.

6. S Salminen, G R Gibson, A L McCartney, E Isolauri, “Influence of mode of delivery on gut microbiota composition in seven year old children,” Gut, 53 (2004): 1388-1389, accessed February 24, 2015, doi: 10.1136/gut.2004.041640.

7. Peter Bager, Jacob Simonsen, Steen Ethelberg, and Morten Frisch, “Cesarean Delivery and Risk of Intestinal Bacterial Infection,” The Journal of Infectious Diseases, 201, (2009): 898-902, accessed February 24, 2015, doi: 10.1086/650998.

8. Joël Doré, and Hervé Blottière, “The influence of diet on the gut microbiota and its consequences for health,” Food Biotechnology • Plant Biotechnology, 32 (2015): 195-9, accessed February 24, 2015, doi: 10.1016/j.copbio.2015.01.002.

9. Junhai Ou, Franck Carbonero, Erwin G Zoetendal, James P DeLany, Mei Wang, Keith Newton, H Rex Gaskins, and Stephen JD O'Keefe, “Diet, microbiota, and microbial metabolites in colon cancer risk in rural Africans and African Americans,” American Journal of Clinical Nutrition, 98 (2013): 111-20, accessed February 24, 2015, doi: 10.3945/ajcn.112.056689.

10. Vanessa A Leone, Candace M Cham, and Eugene B Chang, “Diet, gut microbes, and genetics in immune function: can we leverage our current knowledge to achieve better outcomes in inflammatory bowel diseases?,” Autoimmunity * Allergy and hypersensitivity, 31 (2014):16-23, accessed February 24, 2015, doi:10.1016/j.coi.2014.08.004.

11. S.D. Devkota, Y. Wang, M.W. Musch, V.L. Leone, H. Fehlner-Peach, A. Nadimpalli, D.A. Antonopoulos, B. Jabri, E.B. Chang. “Dietary-fat-induced taurocholic acid promotes pathobiont expansion and colitis in Il10−/− mice,” Nature, 487 (2012): 104–108, accessed February 24, 2015, doi:10.1038/nature11225.

12. M.S. Attene-Ramos, G.M. Nava, M.G. Muellner, E.D. Wagner, M.J. Plewa, H.R. Gaskins, “DNA damage and toxicogenomic analyses of hydrogen sulfide in human intestinal epithelial FHs 74 Int cells,” Environ Mol Mutagen, 51 (2010): 304–314, accessed February 24, 2015, doi: 10.1002/em.20546.

13. L.A. David, C.F. Maurice, R.N. Carmody, D.B. Gootenberg, J.E. Button, B.E. Wolfe, A.V. Ling, A.S. Devlin, Y. Varma, M.A. Fischbach, “Diet rapidly and reproducibly alters the human gut microbiome,” Nature, 505 (2014): 559–563, accessed February 24, 2015, doi: 10.1038/nature12820.

Image Credit: Newton, Curtis. “Baby beim futtern.” Wikimedia Commons, August 23, 2008. Available from URL: http://commons.wikimedia.org/wiki/Category:Babies#/media/File:Baby_beim_f%C3%BCttern.JPG

Jeremy Chang is a first-year at the University of Chicago majoring in the biological sciences and hopes to earn an M.D-Ph.D. in the future.

Leprosy and the Modern Plague

Annie Albright

     The urban streets of Madurai, a temple city in central Tamil Nadu, India, are overflowing: auto-rickshaws, stands overladen with mangoes and pomegranates, businessmen burdened with briefcases and papers, women garlanded with strands of sweet, pungent jasmine flowers, and intermittent herds of cows, goats, and chickens all spill from the various side streets into the dried-up Vaigai river that bisects the town, in a scene of commotion incarnate.
     Just 30 miles southeast of the city, however, stands a compound, a colorless splotch in the center of an otherwise vibrant tapestry, where the chaos gives way to quiet and sterility. Coating the compound walls is a curious, tin-scented yellow dust: turmeric powder, a natural antiseptic. This is the Mission Leprosy Hospital, Manamadurai.
     Leprosy is a disease of dichotomies. It is simultaneously ancient and contemporary, disfiguring and inconspicuous, and infamous for its communicability, though the contagion, Mycobacterium leprae, is really quite reserved in its spread.1
     Stereotypical leprosy presents with the appearance of pigmentless spots, or lesions, on the skin. It was these lesions that denoted “lepers” in the biblical era, though this characterization was unfortunate, because the lesions, like so much associated with the disease, have no uniform identity; they may be sunken or raised, splotchy or defined; they may not be pigmentless at all, but red or copper-colored. This variability adds to the difficulty of diagnosing leprosy. Further complicating the issue are the two subsets of leprosy, tuberculoid and lepromatous, which have differing pathologies and prognoses, with the latter posing a far more lethal threat.
     Leprosy is associated with the invasion of Mycobacterium leprae into human macrophages and Schwann cells. Whether the disease will develop, and which subset path it will take, depends on the bacterium’s ability to trigger a specific immune response in its host. Infected macrophages in a host are immediately tagged with an antigen that is then recognized by an undifferentiated CD4 helper T cell.2 The differentiation of this helper T cell into either T-helper 1 (tuberculoid) or T-helper 2 (lepromatous) determines the path of the disease, and this immune response produces the peripheral neuropathy and nerve degeneration characteristic of the malady.2 For this reason, leprosy susceptibility has a significant but poorly understood genetic component, likely related to the Human Leukocyte Antigen (HLA) genes, including the DRB1, DQA1, DQB1, and DQ promoters.2 Both successful infection and disease progression are fostered by malnutrition and a compromised immune system; in fact, such environmental factors might be essential to the development of leprotic symptoms.
     Far from the wildly contagious blight it has historically been cast as, leprosy, in light of our modern understanding of immunobiology, is a relatively incommunicable disease that poses little to no threat from simple skin-to-skin contact or brief exposure. This has presumably always been the case, and yet, for the past two thousand years, it has been viewed as necessary to separate, isolate, and ostracize lepers from greater society. Even today, though patients at the Mission Leprosy Hospital in Manamadurai aren’t explicitly forced to reside within the compound, the stigma they face outside its walls, and the rejection from their families and communities, hold many of the inmates within the hospital as effectively as bars.3
     Few diseases conjure up as strong an intrinsic sense of repulsion as leprosy. In light of the recent Ebola epidemic, and the anxiety it has stirred up in the United States, it is interesting to consider what turns an infectious disease (something which can be remediated and cured) into a “plague” (something virulent and lethal) in the eyes of the public. While the term “plague” originally referred only to the deadly, infectious disease caused by the bacterium Yersinia pestis, in the twenty-first century it has taken on a new meaning. Several contributing factors together constitute the parameters of a modern plague.
     One such aspect is the ability of a disease to disfigure or disable. Smallpox, for instance, is infamous for the pitted, granular scars it left on its survivors, frequently on highly visible areas such as the hands and feet. Leprosy’s potential to disfigure, usually through the amputation of extremities or limbs, has long been a hallmark of the disease. The mechanism of peripheral neuropathy that leads to ulceration and ultimately amputation in leprosy patients is not unique to the leprotic condition; it functions in exactly the same way in diabetes patients. There are 62 million diabetics in India; there are also 134,000 new leprosy patients a year.4,5
     In considering the implications of the infamous and storied medical history of leprosy, we readily see that lepers have long been ostracized and isolated from their communities, in most cases cruelly and unnecessarily. The legacy of this stigma is difficult to escape. “Hansen's disease is still called kusht in most Indian languages, as it was in the sixth century during Sushrutha the Indian Hippocrates’ time. Even Mohandas “Mahatma” Gandhi's efforts to destigmatize the disease fell short.”1 Mental disorders face a stigma equal to that leveled against lepers. Some, such as schizophrenia, bipolar disorder, and depression, inspire prejudice.6 Others engender intense fear; according to Alzheimers.org, a recent poll conducted by Home Instead Senior Care found that Americans across all age groups are more afraid of developing Alzheimer’s than heart disease, stroke, or cancer.
     Intimately tied to the stigma surrounding both leprosy and mental illness is the fear of the unknown. While leprosy is the oldest recorded human disease, and its pathology and causal agent Mycobacterium leprae were identified in 1873, M. leprae is one of the only human disease-causing bacteria that has never been cultured in the lab.1,3 As discussed above, very little is known about its method of infection beyond the fact that it is minimally communicable. Ebola poses a similar enigma; its animal vector is unknown and its mutagenic potential is theoretically boundless. Similarly, the brain and its pathophysiology continue to be counted among science’s greatest mysteries.

“Fear is a contagious disease, spreading from its first victim to others in the vicinity until it is powerful enough to take charge of a group, in which event it becomes panic.”- Ernest Gann

     The fear of disease has its own symptoms and pathology. For all our fear of the unknown, it is the specter we recognize that we seem to fear most. The term “plague” engenders fear far greater than does “Yersinia pestis.” Similarly, “leprosy” is much more intrinsically frightening than “Hansen’s disease” (the name given to the condition in the 19th century in an attempt to change public perception). The recent incidence of the Ebola virus in the United States, and the media coverage that turned it into a household name, have, in addition to the very real threat posed by the virus, spurred the onset of an even more resistant pathogen: panic.



References

1. Jacob, Jesse T. and Carlos Franco-Paredes. 2008. “The Stigmatization of Leprosy in India and Its Impact on Future Approaches to Elimination and Control.” PLoS Negl Trop Dis 2(1): e113. Accessed November 25th, 2014. doi:10.1371/journal.pntd.0000113

2. Fitness, J, and K Tosh and AVS Hill. 2002. “Genetics of susceptibility to leprosy.” Nature 3:441-453. Accessed November 25th, 2014. doi:10.1038/sj.gene.363926

3. Luka, Edward E. 2010. “Understanding the Stigma of Leprosy” South Sudan Medical Journal. Accessed November 25th, 2014.

4. Kaveeshwar, Seema Abhijeet and Jon Cornwall. 2014. “The current state of diabetes mellitus in India.” Australas Med 7(1): 45-48. Accessed November 25th, 2014. doi: 10.4066/AMJ.2013.1979

5. “Global leprosy: update on the 2012 situation.” World Health Organization Weekly epidemiological records 35:365-380. Accessed November 25th, 2014.

6. Sartorius, Norman. 2007. “Stigmatized Illnesses and Health Care.” Croat Med J 48(3): 396-397. Accessed November 25th, 2014.

Gagnon, Bernard. Madurai and View of the Gopurams of Meenakshi Temple. Digital image. Wikimedia Commons. Wikipedia, 3 Feb. 2006. Web. 24 Nov. 2014.

Mycobacterium Leprae. Digital image. Wikimedia Commons. Wikipedia, 13 Mar. 2007. Web. 24 Nov. 2014.

Annie Albright is a first year student at the University of Chicago majoring in biology.

Is Big Data the Next Big Step for Health Care?

Salman Arif

     As the force that is “Big Data” revolutionizes everything from government to grocery stores, it presents huge potential for the healthcare sector to study the massive amount of healthcare data that grows each day. Many industries, including research, politics, and the private sector, have already taken advantage of the power of big data, and as private and public initiatives in the healthcare system advance, unique applications of big data to the health sector will advance as well. Physician reimbursement, treatment outcomes, and public health models are just a few areas that stand to improve from analyzing the numbers behind healthcare in the US and around the world. Unfortunately, the challenge of aggregating all of this data impedes the evolution of big data in healthcare. Overcoming these challenges is a work in progress, but the benefits of big data in healthcare will reveal themselves as time goes on.
     The reduction of health care costs through big data analysis is a major goal for both public and private interests. As it stands, healthcare costs represent about 18% of US GDP (compared to under 12% and 10% for Canada and the UK, respectively), and medical procedures in the US are notoriously expensive relative to their peers in the developed world. One way big data can alleviate high costs is by enabling the study of recipient demographics and outcomes, which should help providers choose treatment options that deliver the best care per dollar. The federal government’s release of a massive Medicare reimbursement dataset, done under the auspices of improving transparency and allowing consumers to make more informed decisions, has been praised and opposed in equal measure by two opposing schools of thought: some believe data transparency will provide fitting solutions, while others posit that the numbers don’t tell the whole truth.1 One tradeoff surrounds the identification of overly expensive procedures and doctors; such an analysis could lower costs, but it sacrifices physician privacy. A further criticism of the Medicare dataset concerns the occasional absence of data about hospital readmission. Some have noted that inclusion of such data would allow providers to predict and flag patients with a high potential for readmission through analysis of historical patient records.2 This would hopefully result in an efficiency-boosting shift of resources to ensure proper treatment and monitoring after discharge. Additionally, the shift in payment schemes to prioritize outcomes (paying based on quality of care) over fee-for-service charges (paying for all services, regardless of quality)3 will incentivize providers to use data analytics to identify which treatments are most likely to be successful in individual cases. With all of the emphasis on costs, however, it is easy to overlook the main goal of health care: patient care.
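     As a sketch of what the readmission flagging mentioned above might look like in practice, here is a minimal example using logistic regression from scikit-learn. The feature set, the toy training records, and the 30% risk threshold are all hypothetical choices made for illustration; they are not drawn from the Medicare dataset or from the cited studies.

```python
# Minimal sketch: flag patients at elevated readmission risk from historical records.
# Features (hypothetical): [age, prior admissions in past year, number of chronic conditions]
from sklearn.linear_model import LogisticRegression

X_history = [
    [72, 3, 4],
    [65, 1, 2],
    [80, 4, 5],
    [54, 0, 1],
    [67, 2, 3],
    [49, 0, 0],
]
y_history = [1, 0, 1, 0, 1, 0]  # 1 = readmitted within 30 days (toy labels)

model = LogisticRegression().fit(X_history, y_history)

def flag_high_risk(patient, threshold=0.30):
    """Return True if the estimated readmission probability exceeds the threshold."""
    probability = model.predict_proba([patient])[0][1]  # probability of readmission
    return probability >= threshold

# Example: screen a newly discharged patient
print(flag_high_risk([70, 2, 3]))
```

     A real system would of course be trained on far richer, de-identified clinical histories and validated carefully before informing any care decisions.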
     Big data, once it clears logistical hurdles regarding access, has the potential to play a vital role in improving outcomes by analyzing historical evidence for how certain groups of patients react to certain treatments. Newly formed tech company Flatiron Health is working to use data in the fight against cancer.4 With only about 4% of current cancer treatment data being aggregated, the Flatiron founders hope to collect and organize the millions of remaining cases. This collection can be studied for trends, relationships between metrics, and outcomes, and used to help physicians decide which treatments to offer certain patients, identify cost-effective options, and connect patients with clinical trials to advance the development of new medicines. While Flatiron is focused on cancer, similar strategies can be employed for other illnesses. The wealth of knowledge made possible by big data advances in health care provides ample opportunities to improve the quality and outcomes of patient care.
     With all of the aforementioned benefits, why hasn’t the healthcare field taken full advantage of all that big data has to offer? Health care has lagged behind other sectors in exploiting big data for several major reasons. One of the main concerns is privacy as it relates to medical records. Although data can be de-identified (by removing names and some other personal information), in order for data to be useful it must contain some amount of identifiable material. Both public and private efforts in healthcare big data must take great care to maintain high levels of security and privacy when handling patient information. Another issue is the availability of electronic material. While the modern world seems to be entirely digital, analytics companies face troves of data in non-digital formats. Despite the recent movement toward electronic medical records (EMRs), the major obstacle to big data in health care is aggregating all of the information that exists in other forms, such as written reports, audio recordings, and faxes, in addition to the lack of consistent formatting. Take this example from Flatiron Health: “When it came to measuring the level of a single protein—albumin, commonly tested in cancer patients—a single EMR from a single cancer clinic showed results in more than 30 different formats.”4 This is just one example among hundreds of metrics, all measured across an extensive network of cancer centers using any number of electronic and non-electronic record formats. While efforts are being made to develop computational techniques to read and organize the data that exists in EMRs, the challenge of consolidating the other record forms still exists. Future ventures will face this lack of uniformity in addition to strict regulations and concerns about medical data, which pose significant challenges to the use of big data in healthcare.
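     To give a flavor of the normalization work this kind of aggregation requires, here is a small hypothetical sketch that maps a few of the many ways an albumin result might be recorded onto one canonical unit. The conversion between g/L and g/dL is standard unit arithmetic, but the specific input formats below are invented for the example; they are not the 30-plus formats Flatiron actually encountered.

```python
import re

# Hypothetical raw albumin entries as they might appear across different EMRs.
raw_results = [
    "Albumin: 4.1 g/dL",
    "ALB 41 g/L",
    "albumin=4.0gm/dl",
]

# Canonical unit: g/dL. Values reported in g/L are divided by 10.
UNIT_DIVISORS = {"g/dl": 1.0, "gm/dl": 1.0, "g/l": 10.0}

def normalize_albumin(entry):
    """Extract a numeric albumin value from free text and convert it to g/dL."""
    match = re.search(r"([\d.]+)\s*(g/dl|gm/dl|g/l)", entry, flags=re.IGNORECASE)
    if not match:
        raise ValueError(f"Unrecognized format: {entry!r}")
    value, unit = float(match.group(1)), match.group(2).lower()
    return value / UNIT_DIVISORS[unit]

print([normalize_albumin(r) for r in raw_results])  # -> [4.1, 4.1, 4.0]
```

     Real EMR harmonization is far messier, of course, which is precisely the point the Flatiron example makes.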
     Big data has lofty goals, and making the effort worthwhile will require corporations and people to use information for the greater good in a field where huge sums of money are at stake. Big data in healthcare is advancing, and despite the obstacles that remain, there is great potential to revolutionize the way we treat and care for people. There are signs of progress: health care is gaining traction as a priority in the startup and tech worlds, where new ideas seem to be born every day, and physicians are supporting the efforts that will help them improve quality of care. Big data is an enticing solution to today’s healthcare dilemmas, but not necessarily a panacea, and only time will tell what benefits will come.

References

1. The New Yorker. 2014. “What Big Data Can’t Tell Us About Health Care.” April 23, 2014. http://www.newyorker.com/business/currency/what-big-data-cant-tell-us-about-health-care.

2. Bates, Saria, Ohno-Machado, Shah, and Gabriel Escobar. 2014. “Big Data in Health Care: Using Analytics to Identify and Manage High-Risk and High-Cost Patients.” Health Affairs, 33, no.7 (2014):1123-1131. http://content.healthaffairs.org/content/33/7/1123.full.pdf+html.

3. Kayyali, Knott, and Steve Van Kuiken. McKinsey & Company. 2014. “The Big-Data Revolution in US Health Care: Accelerating Value and Innovation.” April 2013. http://www.mckinsey.com/insights/health_systems_and_services/the_big-data_revolution_in_us_health_care.

4. Miguel Helft. 2014. “Can Big Data Cure Cancer?” Fortune, July 24. http://fortune.com/2014/07/24/can-big-data-cure-cancer/.

Salman Arif is a first-year student at the University of Chicago.

ISIS and Chemical Weapons Disarmament: The Political Necessity of Technological Progress

Alison McManus

     The issue of chemical weaponry, since its inception during the First World War, has increasingly become a political concern. The Assad regime’s August 2013 use of nerve gas against civilians remains fresh in international memory, as does the political confrontation that followed. In July 2014, the issues of weapons storage and disarmament gained further attention when the Iraqi government confirmed reports that ISIS rebels had seized the retired Al Muthanna Chemical Weapons Complex. The seizure of the facility draws attention to the larger imperative of dismantling the region’s remaining chemical weaponry, which should be addressed with a combination of political finesse and the furthering of disarmament technologies.
     On June 11, 2014, ISIS rebels overwhelmed guards at the Al Muthanna plant and proceeded to raid the facility, raising concerns that active chemical weapons agents may have come into their possession. The facility, located 45 km northwest of Baghdad, had produced up to 4,000 tons of chemical weapons per year under the Hussein regime, including the nerve gas sarin and the blister agent sulfur mustard.1 Following the Gulf War, the United Nations shut down the facility’s productive capabilities, leaving a small amount of chemical weaponry sealed in two bunkers at the site. The contents of these bunkers are at the center of the international response to the seizure, a response that has included both reassurance and concern.
     Those who seek to reassure refer to the most recent report on the contents of the Al Muthanna facility, which was conducted by the Iraq Survey Group in 2004. While the report conceded that the stored weapons could be dangerous, it also noted evidence of chemical leaks within the bunkers.1 These leaks would render the munitions both militarily useless and immediately hazardous to anyone who attempted to move them. A full inspection of the bunkers was impossible due to issues of safety, but if substantial leaking was indeed present at the time of the raid, ISIS rebels could not have obtained usable chemical weaponry.
     Even so, ISIS’s seizure of the Al Muthanna facility has raised concerns about the security of additional, more recently produced chemical weapons agents in Iraq and Syria. The seizure of Al Muthanna reveals an important political fact: the region’s current instability makes the long-term securing of chemical weapons agents unfeasible, even if these weapons are declared to the United Nations and guarded by a small military force. Consequently, there is substantial pressure to accelerate the process of disarmament.
     The process of chemical weapons disarmament may take a variety of forms, many of which are either dangerous or costly. Burial of chemical weapons was once common practice, as was dumping liquid weapons into local oceans and seas. These practices are currently outlawed by the Chemical Weapons Convention of 1993, in favor of two far safer methods of disarmament: closed incineration and neutralization.2
     The first of these methods, in which chemical weapons agents are simply burned into inert ash, water, and carbon dioxide, was the U.S. Army’s preferred method for the destruction of its chemical weapons stockpile.3 While this method is valued for its speed, it has also been criticized for its environmental impact. The Chemical Warfare Working Group cites a number of drawbacks to incineration methods, including malfunction, which would expose workers to dangerous agents, and the formation of toxic byproducts, which would present a public health concern even if incineration procedures are followed perfectly.4 Mustard gas, for instance, is often contaminated with mercury, making incineration impossible due to the release of toxic fumes.5
     These considerations prompted research into neutralization as an alternative procedure for disarmament. While the majority of the U.S. chemical stockpile was being destroyed by incineration, the Assembled Chemical Weapons Assessment Program, established by Congress in 1996, investigated a series of neutralization techniques.3 Of the methods it investigated, it selected neutralization by hydrolysis as the most feasible option. In this process, chemical warfare agents are combined with hot water and sodium hydroxide to form a less dangerous but still toxic hydrolysate compound, along with other organic products. The hydrolysate is then further neutralized by one of three methods: incineration, chemical oxidation, or biotreatment. The latter two methods have been the focus of recent efforts to improve neutralization technologies.
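     As a rough illustration of the hydrolysis step described above, sulfur mustard reacts with water to form thiodiglycol and hydrochloric acid, and the added sodium hydroxide neutralizes that acid. This is a simplified textbook scheme; the actual hydrolysate is a more complex mixture of organic products, as noted above.

```latex
% Simplified hydrolysis of sulfur mustard (HD), followed by neutralization of the acid
(ClCH_2CH_2)_2S + 2\,H_2O \longrightarrow (HOCH_2CH_2)_2S + 2\,HCl
\qquad
HCl + NaOH \longrightarrow NaCl + H_2O
```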
     A 2009 study by Fallis et al. examines how the efficiency of hydrolysate oxidation might be improved.6 The authors aim to bridge what they call the "fundamental dichotomy" in the oxidation step of neutralization: the hydrolysate to be oxidized is immiscible in water, while effective oxidizing agents tend to be water-soluble. This phase incompatibility is typically overcome through a process known as supercritical water oxidation (SCWO), in which superheated water allows for increased mixing of the hydrolysate and the oxidizing agent. However, Fallis's work suggests that oxidation of the hydrolysate may not require such high temperatures. Using a manganese Schiff-base complex as a catalyst, the authors demonstrate that oxidation can occur at the phase boundary between the hydrolysate and the oxidizing agent, eliminating the need for difficult phase mixing and improving the process of neutralization.
     Alternatively, the toxic hydrolysate product may be broken down by microorganisms, as demonstrated by the disarmament of the U.S. Army’s Pueblo, Colorado chemical weapons stockpile. The full report on the facility’s disarmament, released in 2006, indicates that the neutralization of sulfur mustard agents may be achieved through the activated sludge process, in which bacteria decompose organic contaminants in specially-designed reactors known as Immobilized Cell Bioreactors (ICBs).7 Notably, this process is effective for both aged and contaminated chemical munitions. If implemented on a larger scale, it could facilitate the disarmament of the world’s numerous stockpiles of old chemical weapons.

     The issue of disarmament is no doubt both relevant and complex, requiring immediate international cooperation and carefully calculated political action. Yet the process of disarmament is constrained not only by politics, but also by technological progress. At the moment, neutralization technology is functional but cumbersome. Improving it would not, by itself, provide an exhaustive solution to the problem of chemical weaponry. It would, however, offer a wider variety of strategies for disarmament, and in doing so allow greater flexibility in international responses to chemical warfare.

References

1. Charles A. Duelfer. “Duelfer Report on Chemical Weapons in Iraq.” Iraq Survey Group. 30 September 2004.

2. Organization for the Prohibition of Chemical Weapons. “Chemical Weapons Convention.” 13 January 1993.

3. Philip A. Marrone, Scott D. Cantwell, and Darren W. Dalton. “SCWO System Designs for Waste Treatment: Application to Chemical Weapons Destruction.” Industrial & Engineering Chemistry Research, Vol. 44, 2005.

4. Michael R. Greenberg. “Public Health, Law, and Local Control: Destruction of the U.S. Chemical Weapons Stockpile.” American Journal of Public Health, Vol. 93, No. 8, 2003.

5. Lois Ember. “Chemical Arms Disposal.” Chemical & Engineering News, May 2007.

6. Ian A. Fallis, Peter C. Griffiths, Terrence Cosgrove, et al. “Locus-Specific Microemulsion Catalysts for Sulfur Mustard (HD) Chemical Warfare Agent Decontamination.” Journal of the American Chemical Society, Vol. 131, 2009.

7. Mark A. Guelta. “Biodegradation of HT Agent from an Assembled Chemical Weapons Assessment (ACWA) Projectile Washout Study.” Edgewood Chemical Biological Center: U.S. Army Research, Development, and Engineering Command. September 2006.

8. Image credit (Creative Commons): PEO Assembled Chemical Weapons Assessment. 8 September 2011. Pueblo Chemical Agent-Destruction Pilot Plant Brine Reduction System. Flickr.

Alison McManus is a third-year student at the University of Chicago who plans to major in both chemistry and history. She is particularly interested in (and often puzzled by) the moral and political dimensions of technology.

Sticks and Stones: No physical injuries, no problem?

Natalie Petrossian

     After several tragedies in the past few years, bullying has gone from being considered a "soft" form of abuse to a high-profile issue in our society. With new research into the neurological and psychological effects of peer harassment and victimization, scientists increasingly believe that the emotional stress bullying places on a teenager's brain can cause unforeseen short- and long-term effects.
     The middle and high school periods are sensitive ones. Emotionally, individuals are struggling to find themselves and determine their place within their communities, and neurologically, their brain connections are still forming. Bullying at this developmental stage can certainly leave neurological scars: hormones can be thrown out of whack, connectivity in the brain can be stunted, and even the growth of new neurons can be impaired.1 The chronic stress experienced because of bullying during this time is so influential on an individual’s brain and emotional health that researchers have added it to the category of childhood trauma.
     Martin Teicher, a neuroscientist at McLean Hospital in Belmont, MA, has long been examining this relationship between bullying and the brain in young adults.1 Teicher and his colleagues most recently studied a group of young adults who varied in how much verbal harassment they had received from their peers. Not surprisingly, the scientists found that the subjects who had been bullied reported more symptoms of depression, anxiety, hostility, and other psychiatric disorders than the non-victimized subjects. With these initial, confirming results, Teicher and his team decided to further explore the causes and effects of this relationship by conducting brain scans of 63 of their subjects. Surprisingly, the victimized individuals had, on average, about 40% smaller corpus callosums – the thick bundle of nerve fibers that acts as the bridge between the two hemispheres, and which is critical in visual processing, memory, and emotional balance, among many other functions. Moreover, the neurons within their corpus callosums had less myelin – the insulating layer around the axon that speeds the propagation of action potentials along the nerve fiber. While the molecular and cellular basis of this phenomenon is still unclear, these structural changes seem to be strongly linked to the higher rates of depression and anxiety that the bullied individuals experienced, and perhaps to their altered stress-processing mechanisms post-bullying.
     In addition to causing structural damage, bullying is also implicated in causing emotional imbalances by disrupting hormonal homeostasis, especially for the hormone cortisol. Cortisol is an end product of the stress (fight-or-flight) response, and it is produced in the adrenal glands along with other stress hormones such as adrenaline. When a person experiences a threatening situation, their cortisol levels rise, slowing the immune system and shutting down learning systems while the body prepares to manage the threat by either confronting the situation or attempting to flee. Cortisol, like many other hormones, has far-reaching influence: by pushing the body into hyper-alertness and hyper-activity, it accelerates the heart rate, which propagates its further distribution throughout the body, and it communicates with the hypothalamus and the pituitary gland in the brain to further regulate this response.
     Tracy Vaillancourt, a psychologist at the University of Ottawa, has been studying how the stress of bullying affects cortisol and emotional responses in a group of 12-year-olds, some of whom had a history of being verbally harassed by their peers and some of whom did not.2 After assessing their cortisol levels every six months, Vaillancourt noticed an alarming trend: the bullied subjects showed a substantial decrease in cortisol production compared to the control subjects, and the bullied girls showed even lower cortisol levels than the bullied boys. While it is still unclear why the bullied females produced less cortisol than the bullied males, the overall decrease in cortisol production points to the brain's desensitization to cortisol. In other words, because the bullied subjects' fear response was consistently activated, their bodies needed increasingly larger amounts of cortisol to sustain the same level of response. This resulting hypercortisolemia eventually becomes neurotoxic: over-stimulation during each subsequent fear response begins to cause neuronal cell death in associated brain structures, such as the hypothalamus, pituitary gland, and amygdala, which in turn leads to less cortisol production. Additionally, damage to these areas of the brain can cause a cascade of changes that affect attention, impulse control, sleep and dietary regulation, anxiety coping, verbal memory, and clear thinking.
     Unfortunately, this is only the tip of the iceberg. When the effects of these changes are considered over the long term, the potential dangers become more variable and unclear. Since repeated bullying leaves an indelible print on a young adult's developing brain, it may have long-lasting repercussions in adulthood. According to a large statistical study published in Psychological Science,3 adults who experienced childhood bullying were generally more likely to struggle to hold a regular job, to develop unhealthy habits such as smoking, and to have poor social relationships, in addition to being at an increased risk of long-term psychiatric disorders such as depression and anxiety. These findings, although not definitive, hint at just how damaging the consequences of bullying can be: emotional trauma may be just as harmful as physical injury.
     This interdisciplinary research between neurology and psychology is still fairly new, and as provocative as these findings are, they raise many more questions than they answer. On the neurological end, much is still to be determined at the molecular and cellular level. How and why do bullied males produce more cortisol than bullied females? What other, previously unconsidered hormones could be involved in this response? Moreover, it becomes difficult to differentiate cause and effect. Is it possible that certain teens have biological and biochemical traits that somehow make them more likely to be targeted as victims, or perhaps more likely to develop certain psychiatric diseases as a result? Why doesn't every bullied child carry the repercussions of these traumatic events into adulthood? How can a child become more resilient to the effects of bullying?
     While current research is still looking for clinical answers, our society has a responsibility. Studies have already shown that a young adult's environment, which includes the home, school, and everything in between, is largely responsible for molding both their brain and their mind. Even though schools have established therapy programs and anti-bullying campaigns to alleviate the effects of bullying and discourage the act itself, peer harassment continues. More importantly, it carries a social stigma that, despite all of these efforts, remains ineffaceable. If we want any chance of changing the way bullying is handled, we need to start by changing the way it is perceived. Bullying, especially in today's age, can no longer be considered a "soft" form of abuse. With the rise of technology and cyber-bullying, peer harassment has become easier to carry out, anonymous, and omnipresent. The school environment has become alarmingly more hostile than it was even a decade ago, and peer victimization has spread beyond the school walls to infiltrate the home and become an all-pervading danger. Facebook, Tumblr, Instagram, and Twitter have become media for permanent and public humiliation, and even vectors for previously nonexistent forms of anonymous harassment. As a result, what was once a localized threat has become difficult to anticipate and control, and it has taken on an alarming and tragic dimension: lethality.
     In June 2012, the Centers for Disease Control and Prevention estimated that about one in twelve teenagers who had been victims of either cyber-bullying or conventional bullying would attempt suicide.4 Considering that such a significant share of the next generation is emotionally and physically compromised, bullying needs to be treated as a serious public health concern.

References

1. Teicher, Martin, Jacqueline Samson, Yi-Shin Sheu, Ann Polcari, and Cynthia McGreenery. "Hurtful Words: Exposure to Peer Verbal Aggression Is Associated with Elevated Psychiatric Symptom Scores and Corpus Callosum Abnormalities." The American Journal of Psychiatry Vol 167, no. 12 (2010): 1464-471.

2. Vaillancourt, Tracy, E. Duku, D. Decatanzaro, H. MacMillan, C. Muir, and LA Schmidt. "Variation in Hypothalamic-pituitary-adrenal Axis Activity among Bullied and Non-bullied Children." Aggressive Behavior Vol 34, no. 3 (2008): 294-305.

3. Wolke, Dieter, William Copeland, Adrian Angold, and E. Jane Costello. "Impact of Bullying in Childhood on Adult Health, Wealth, Crime, and Social Outcomes." Psychological Science Vol 24, no. 10 (2013): 1958-970.

4. "Youth Risk Behavior Surveillance — United States, 2013." Centers for Disease Control and Prevention: Morbidity and Mortality Weekly Report Vol 63, no. 4 (2014).

Natalie Petrossian is a third-year student at the University of Chicago majoring in the Biological Sciences and specializing in Neuroscience.

Narrative Medicine: Story as Prescription

Carrie Chui

     The hospital is rife with stories of human fragility and compassion. For Rita Charon, MD, PhD, the way to discover them is an act of close listening and observing: a practice she calls narrative medicine. The concept of narrative medicine, conceived by Charon in 2000 at the Columbia University College of Physicians and Surgeons, seeks to realize the clinical potential of joining the humanities with medicine.1 To Charon, narrative medicine, which describes the ability to sense a story, interpret it, and then consider it through a retelling, is a model of humane and effective medical practice, one that is especially important in the modern age of bureaucratized medicine.
     Narrative medicine is by no means the only paradigm to recognize the confluence of the humanities and medicine. Many medical school programs now teach medicine and the humanities together, believing that the medical humanities help doctors develop clinical acumen, compassion, and empathy.2,3 To explain the concept, Charon describes narrative medicine as a source of knowledge that reveals itself only in the doctor-patient encounter, working in a logic opposite to that of "logicoscientific knowledge," a generalizable form of knowledge that can be discovered from a remove.4 For Charon, however, these two types of knowledge need not be viewed as direct competitors. In fact, she acknowledges the importance of both in a successful clinical practice, going so far as to say that the clinical merits of narrative medicine are only readily manifest when both kinds of knowledge are applied.4
     For Charon and other practitioners, practicing narrative medicine can mean that the clinical encounter begins not with a systematic set of questions about the patient's health, but with an invitation for the patient to describe his or her condition in a self-introductory manner.1 In this act of temporarily suspending the doctor-patient dichotomy, Charon quickly finds that what is revealed in conversation and suggested through body language is not only information about the patient's state of health, but also insight into whom they love, what they love, their greatest passions and most gripping fears.1 To be able to sense the story, recognize it, receive it, and then be moved by it to action through a retelling (whether verbally, through writing, art, or another medium) is to be fluent in the practice of narrative medicine. Described as a practice "fortified by the knowledge of what to do with stories," narrative medicine is proposed as a way to consider patient stories thoughtfully and productively.1
     Beyond the rewards of a better emotional connection with the patient, Charon suggests that narrative medicine is, on many levels, transformative for both doctor and patient.4 Narrative medicine can directly benefit the physician, who can better understand his or her patient and steer this newfound understanding toward a thoughtful and more accurate treatment plan.4 Physicians who exercise their narrative capacities can also become more reflectively engaged in their practice, cultivating "affirmation of human strength, familiarity, and suffering."4 Charon cites her own encounter with a patient in which both of these effects unfolded. The patient, she recalls, was a cancer survivor who, convinced that her cancer had returned despite repeated negative test results, was only able to find some comfort and peace in Charon's writing about her, which Charon shared at their next clinical encounter.4
     Finally, and perhaps most relevant to thinking about health policy, Charon argues that insofar as caregivers are able to sustain a narrative situation with their patients, they may also be able to nourish a relationship of public trust with society as a whole, a situation she believes is crucial for filling in the gaps of a bureaucratized health system.5 In an issue of JAMA, Dr. Michael H. Monroe, a physician with 14 years in practice, describes what, in his opinion, is sacrificed in the bureaucratization of medicine, and for which narrative medicine might suggest a solution:
“There needs to be sufficient space for that which will always remain uncounted because it cannot be counted; because it cannot be counted should not diminish it. The bureaucratization of medicine with increasingly complex rules, codes, algorithms, prompts, bylaws, schedules, and administrative structure is leaving its mark, but medicine at its fundamental is still about suffering, healing, and comforting; it is about individuals; it is about relationships and trust; it is about stories.”5
     The notion of "relationships and trust" that Monroe refers to is heavily implicated in Charon's concept of the public trust achieved through narrative medicine. To engage in narrative medicine is to yield toward the patient's yearning for rescue and healing, subscribing to the accountability of the caregiver to these public expectations.4 Narrative medicine facilitates honest conversations with the patient, and for Charon, these conversations often concern "meaning, values, and courage that scientific or rational debates can't often compass."4 Through such retellings, physicians become a critical vessel for delivering authentic, valuable "uncounted" knowledge to policy makers, fulfilling their duty to greater society.4,5
     If the skills of narrative medicine are as accessible as the teachings of Charon's program seem to suggest, then for the benefit of the patient, the physician, and greater society, narrative medicine may be an important means of achieving the fidelity that the practice of medicine necessarily embodies.1,4,6 By giving physicians the means to better understand the patient's situation beyond the clinic, the practice of narrative medicine reaffirms the role of the doctor: a commitment to the betterment of the patient, the profession, and greater society.

References

1. Charon, Rita. 2011. “Honoring the stories of illness: Dr. Rita Charon.” Talk presented and filmed at an independently organized TEDx event.

2. Lisa Pevtzow. “Teaching compassion: Humanities courses help aspiring doctors provide better care.” Chicago Tribune, March 2013. Accessed May 20. http://articles.chicagotribune.com/2013-03-20/health/ct-x-medical-school-arts-20130320_1_doctors-students-humanities.

3. Brown University. 2013. “Creative Medicine Series: A four-part lecture series examining and celebrating the link between medicine and the arts.” Accessed May 20. http://www.brown.edu/Departments/Humanities_Center/events/creativephysician.html.

4. Charon, Rita. 2001. “Narrative Medicine: A Model for empathy, reflection, profession, and trust.” The Journal of the American Medical Association 286: 1897-1902.

5. Monroe, Michael. 2011. “A piece of my mind. Drawer on the Right.” The Journal of the American Medical Association 305: 1176-1177.

6. Narrative Medicine. 2014. “The leader in narrative best practices and team-based healthcare programs.” Accessed May 20. http://www.narrativemedicine.org/mission.html

7. Flickr. “Doctor greating patient.” Accessed May 23. https://www.flickr.com/photos/59632563@N04/6104068209/

Carrie Chui is a third-year student at The University of Chicago majoring in visual arts and biological sciences, with plans to attend medical school. Fascinated by both the arts and the sciences, she writes about current ways in which they intersect.

A New Kind of Scientist: The Influence of Crowdsourcing

Stephanie Diaz

     How do you analyze large amounts of complex information quickly? Nowadays, with the power of the internet, the answer is videogames.
     Recently, scientists have been using video games to crowdsource data analysis. Before video games, scientists relied on the collective computing power of people with internet access through projects like SETI@home, in which volunteers downloaded a program that analyzed radio signals from space whenever their computers were idle. Now scientists can have people themselves analyze data through video games.
      Take for example Eyewire, a computer game designed by Sebastian Seung, a neuroscientist from the Massachusetts Institute of Technology. For years, scientists have looked for the elusive answer to the question of how direction selectivity works.1 Direction selectivity is a phenomenon in which a signal is sent from the retina to the brain only if the movement of the object being seen is aligned with a path from the center of a neuron to a dendrite. After fifty years and with the help of over 120,000 people around the world, scientists have finally been able to propose an answer.1,2
     Scientists struggled to explain direction selectivity because uncovering the answer required analyzing high-resolution images of retinal tissue to trace the neural pathways, a very labor-intensive process. The work was long and tedious, and it proved too complicated for computers to do on their own.2 Eyewire provided a solution by turning the task into a game: players are given a high-resolution image of tissue and tasked with tracing the neural network, specifically the starburst amacrine cells, photoreceptors, bipolar cells, and ganglion cells. Players gain points based on the accuracy and speed with which they analyze these images. The game has been incredibly successful: more than 120,000 players from 140 countries have played.
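     As a rough illustration of how a tracing game might weight accuracy against speed, the sketch below compares a player's tracing to a consensus tracing and awards points accordingly. The function names, weights, and speed bonus are hypothetical, not Eyewire's actual scoring rules.

# Hypothetical scoring sketch for a crowdsourced tracing game.
# The weighting and function names are illustrative; they are not
# Eyewire's actual scoring system.

def accuracy(player_voxels: set, consensus_voxels: set) -> float:
    """Overlap between a player's tracing and the consensus tracing (Jaccard similarity)."""
    if not consensus_voxels:
        return 0.0
    overlap = len(player_voxels & consensus_voxels)
    union = len(player_voxels | consensus_voxels)
    return overlap / union  # value between 0 and 1

def score(player_voxels: set, consensus_voxels: set, seconds_taken: float) -> int:
    """Reward correct tracings, with a modest bonus for finishing quickly."""
    base = 100 * accuracy(player_voxels, consensus_voxels)
    speed_bonus = max(0.0, 30.0 - seconds_taken / 10.0)  # bonus shrinks for slower work
    return round(base + speed_bonus)

# Example: a tracing that matches 3 of 4 consensus voxels, completed in 90 seconds.
print(score({(1, 2, 3), (1, 2, 4), (1, 3, 4)},
            {(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)}, 90.0))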
      Eyewire has allowed Seung and his team to create three-dimensional maps of the retinal neural network. The work of thousands of gamers has allowed scientists to discover the positions of certain types of cells in relation to each other and how those positions contribute to direction selectivity. Eyewire has not only helped answer a fifty-year-old question but also turned gamers into scientists.
      Eyewire is not the only game of its kind. The games Foldit and EteRNA use similar concepts. Foldit allows gamers to fold proteins. Points are awarded based on how energetically efficient the folded structure is, meaning that the lowest-energy configuration earns the most points. Together, gamers worked out the structure of an AIDS-related enzyme, a Mason-Pfizer monkey virus retroviral protease, within three weeks.4 These results were published in Nature, and several more papers are being written about the algorithmic discoveries made by gamers.3,4 By putting the puzzle in thousands of hands, Foldit lets people explore far more folding possibilities than any single lab could.
     Similarly, EteRNA allows players to work with RNA molecules. Scientists post specific challenges, such as creating a particular shape (a key, for example) or solving a structure that is a real RNA design problem. EteRNA players build their skill and intuition for how to design the desired RNA molecules. Every two weeks, the players vote on which design they deem best. Scientists then synthesize it and send a picture of the actual molecule, along with information on its behavior, back to the players.4 Understanding how RNA works could help scientists manipulate cell functions and treat diseases.
      This is not to say that crowdsourcing is a perfect method. People make mistakes, and scientists have to take this into account. For Eyewire, a team of neuroscientists reviews the work done by gamers and corrects any errors. For games like Foldit and EteRNA, mistakes are part of the trial-and-error process that allows gamers to arrive at efficient structures. The games are not intended to replace the need for scientific authority on a study so much as to involve as many people as possible, either to collect data quickly or to get as many minds as possible thinking about a problem. As is the case with Foldit and EteRNA, crowdsourcing allows people from different backgrounds to work on problems that require critical thinking skills but not necessarily a complete understanding of the science behind them.
      While crowdsourcing will not replace scientists, it can bring science to a community level and allow for advancements that could not be made otherwise. Eyewire plans to expand into mapping parts of the brain related to the olfactory system, but this is just the beginning of a much larger trend. Other games, such as Phylo, which helps identify similar sections of DNA across species through color matching, are allowing anyone to become part of the scientific process.

References

1. Boyle, Alan. 2011. "Gamers solve molecular puzzle that baffled scientists." NBC News, September 19. Accessed May 5, 2014. http://cosmiclog.nbcnews.com/_news/2011/09/18/7802623-gamers-solve-molecular-puzzle-that-baffled-scientists.

2. Palca, Joe. 2014. "Eyewire: A Computer Game to Map the Eye." NPR, May 5. Accessed May 5, 2014. http://www.npr.org/2014/05/05/309694759/computer-game-aides-scientist-mapping-eye-nerve-cells?utm_source=tumblr.com&utm_medium=social&utm_campaign=skunkbear&utm_term=nprnews&utm_content=20140505.

3. Sutter, John. 2011. "Why video games are key to modern science." CNN, October 23. Accessed May 5, 2014. http://www.cnn.com/2011/10/23/tech/innovation/foldit-game-science-poptech/.

4. Peckham, Matt. 2011. "Foldit Gamers Solve AIDS Puzzle That Baffled Scientists for a Decade" Time, September 19. Accessed May 9. http://techland.time.com/2011/09/19/foldit-gamers-solve-aids-puzzle-that-baffled-scientists-for-decade/.

5. Cossins, Dan. 2013. “Games for Science” The Scientist, January 1. Accessed May 17. http://www.the-scientist.com/?articles.view/articleNo/33715/title/Games-for-Science/.

Stephanie is a first-year student at the University of Chicago majoring in Chemistry and minoring in physics.

A New Lens: Direction selectivity elucidated through novel “EyeWire” mapping

Tima Karginov


     Direction selectivity in the retina has recently gained support as a mechanism for responding to moving visual stimuli. Fifty years after its discovery, direction selectivity can now be explored further using a novel space-time wiring model. In an attempt to model the retina and elucidate direction selectivity, the researchers behind the project created a computer program called "EyeWire" and invited gamers to put their own neuronal connections to work.
     The central mechanism behind direction selectivity is the release of neurotransmitters by excitatory and inhibitory interneurons – integral parts of the nervous system that join neural networks together. These signals converge on the ganglion cell – a neuron on the inner surface of the retina that receives visual information from photoreceptors through bipolar cells (BCs) – and rapidly create a direction preference for bright and dark moving objects. Direction selectivity, then, is a response of the retina to a stimulus and a "choice" to follow the stimulus.1
      Direction-selective ganglion cells (DS cells) form crucial contacts with starburst amacrine cells (SACs) and BCs in order to follow their preferred stimulus. SACs are activated by motion outward from the cell body toward the tips of the dendrites – the SACs' namesake branches, which "burst" off of the central cell body and respond to electrochemical stimuli. Bipolar cells, on the other hand, are classified into different types based on how long they take to react to a visual stimulus. SACs are linked to their corresponding BCs and respond selectively based on where the motion occurred. Essentially, the SAC-BC circuit depends on the spatiotemporal nature of motion – the idea that an object in one place will be in another place after a time gap. DS ganglion cells are thus part of a space-time wiring complex required for direction selectivity in the retina.2
      In order to test the theory of space-time wiring, researchers first needed to map out a wide array of DS cells and SACs. To do so, Dr. Sebastian Seung and collaborators took on a massive project using mouse retinal images and advanced programming. Seung's group created EyeWire, a video game designed to color-code serial block-face scanning electron microscopy (SBEM) images of three-dimensional neurons. To recreate the neuronal structure, the researchers used an existing data set of BC-SAC circuitry from the mouse images and grouped the images into "voxels" – the three-dimensional analogue of pixels, each occupying a small volume of space. The group then opened the game to online "citizen scientists," enabling users to construct neuronal networks. EyeWire is now accessible to any user with an Internet connection and has attracted over 120,000 registered users since its launch in December 2012.3,4
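      To make the voxel idea concrete, a stack of 2D image slices can be treated as a single 3D array in which each element is one voxel. The short sketch below uses synthetic data and is purely illustrative; it is not the actual SBEM reconstruction pipeline.

# Minimal illustration of voxels: stacking 2D image slices into a 3D volume.
# The data here is synthetic; real SBEM reconstructions are vastly larger.
import numpy as np

n_slices, height, width = 16, 64, 64
slices = [np.random.randint(0, 256, size=(height, width), dtype=np.uint8)
          for _ in range(n_slices)]

volume = np.stack(slices, axis=0)   # shape: (depth, height, width)
print(volume.shape)                 # (16, 64, 64)
print(volume[3, 10, 20])            # intensity of one voxel: slice 3, row 10, column 20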
      The goal of each player is to color the image around a pre-determined location of a neuronal branch, or to search for a new piece of the neuron to color. To do so, users click through a 2D slice while a 3D rendering is displayed alongside it. I gave the game a shot and found it rather puzzling; the interface is repetitive, and it is not always clear why a certain piece goes in the indicated place. I managed to complete 50 cubes over several weeks, and the game became more challenging and intriguing as I joined neuronal chains. EyeWire resembles an endless jigsaw puzzle, with points awarded for each successfully completed neuron. The scientific rationale behind the game is straightforward. According to the EyeWire group, it takes a professional researcher nearly 50 hours to map out a full neuron, and given that there are approximately 81 billion such cells in the human brain, traditional techniques are impractical for mapping the brain as a whole. The players' efforts therefore play a crucial role in speeding up neuronal mapping.
      Recently, the Seung group expanded on its original efforts with the "Starburst Challenge." The key innovation of this trial was to let users take part in a test of the space-time wiring hypothesis of direction selectivity. Rather than merely color-coding neurons, users could now trace a wider variety of retinal cells, including SACs, BCs, photoreceptors, and ganglion cells. Using the new neuronal maps from the Starburst Challenge, researchers were able to identify arranged patterns among the various cells. The BC2 cell, for instance, was found near the center of SACs and showed a time lag in responding to a stimulus. BC3a cells, on the other hand, were found farther out on the dendrites of SACs and, along with BC2 cells, were known to fire signals onto the ganglion cell. The dendrite receiving the stronger signal then determines the preferred direction, producing direction selectivity.3,4
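      The general logic can be caricatured as a delay-and-coincidence computation: a delayed signal from an input near the SAC's center and a faster signal from an input farther out along the dendrite line up in time only when a stimulus sweeps outward. The toy model below illustrates that idea in a few lines; the lag value and function names are invented for illustration and are not the Seung group's actual model or parameters.

# Toy delay-and-coincidence model of direction selectivity.
# A "central" (BC2-like) input responds with a lag; a "peripheral" (BC3a-like)
# input responds quickly. Outward motion reaches the central input first, so
# after its lag the two signals coincide and sum strongly; inward motion does not.
# Illustrative only.

LAG = 2  # time steps of delay on the central input; hypothetical value

def dendrite_response(stimulus_times):
    """stimulus_times: dict giving the time step at which the stimulus reaches each input."""
    central_arrival = stimulus_times["central"] + LAG   # delayed pathway
    peripheral_arrival = stimulus_times["peripheral"]   # fast pathway
    # Coincidence detection: the closer the two arrivals, the larger the response.
    return 1.0 / (1.0 + abs(central_arrival - peripheral_arrival))

outward = {"central": 0, "peripheral": 2}   # stimulus sweeps from center toward dendrite tip
inward = {"central": 2, "peripheral": 0}    # stimulus sweeps from tip toward center

print("outward motion:", dendrite_response(outward))   # 1.0  (signals coincide)
print("inward motion: ", dendrite_response(inward))    # 0.2  (signals misaligned)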
      Although direction selectivity is likely not the only mechanism behind retinal function, the Seung group was able to use a novel, crowdsourced method to support an age-old hypothesis. EyeWire technology opens the possibility of exploring other human neural networks, from the olfactory bulb to the ear. As a result of their efforts, over 2,000 EyeWire enthusiasts were recognized through co-authorship with the Seung group on its most recent Nature publication.
      Check out EyeWire for yourself at eyewire.org.

References

1. B. Sivyer and S. Williams. 2013. “Direction selectivity is computed by active dendritic integration in retinal ganglion cells.” Nature Neuroscience. 16, 1848-1856. Accessed May 1, 2014. doi:10.1038/nn.3565

2. D. Vaney, B. Sivyer, W. Taylor. 2012. “Direction selectivity in the retina: symmetry and assymetry in structure and function.” Nature Neuroscience. 13.3, 194-208. Accessed May 1, 2014.doi:10.1038/nrn3165

3. H.S. Seung et. al and the EyeWirers. 2014. “Space-time wiring specificity supports direction selectivity in the retina.” Nature.509, 7500. 331-336. Accessed May 1, 2014. doi: 10.1038/nature13240

4. A. Boyle. 2014. “EyeWire Video Gamers Help Untangle Retina’s Space-Time Secrets.” NBC News. Accessed May 1, 2014. http://www.nbcnews.com/science/science-news/eyewire-video-gamers-help-untangle-retinas-space-time-secrets-n95861

5. Image edited from eyewire.org

Tima Karginov is a first-year student at the University of Chicago majoring in Biology with a specialization in Neuroscience. Tima currently does research on spinocerebellar ataxia and hopes to continue exploring neuroscience through programs such as EyeWire.

Beyond the Hype of Wearable Technology

Katherine Oosterbaan

     Smart phones are so passé. They’re old news, and everyone has them—it’s time to move on to something better, something you can more easily take with you everywhere—wearable technology. Or so many big technology companies believe. From the Google Glass to the highly anticipated Apple iWatch, wearable tech is popping up all over the place, and is widely touted in Forbes as the future of consumer electronics.6 Imagine being able to type essays just by saying them out loud, or drawing a number in the air to pay a bill. It’s not science fiction—these things already exist. Is it worth jumping on the wearable technology bandwagon, or is it just the latest fad that’s bound to die out?
     Wearable tech began in the 1980s with the introduction of the calculator watch, which was a revolution of its time for small computing. As computing power advanced and it became easier to scale down computers, the potential for wearable devices skyrocketed. One of the first companies to bring this technology to consumers was Nike.1 Its fitness wristband, the FuelBand, was capable of measuring steps taken, calories burned, altitude climbed, and more, and then sending that data back to a person’s home computer. These simple devices were considered the first modern wearable technology to really take off, and healthy lifestyle wearable technology became more common. However, even though these products were well received by a small niche of people, they’ve already begun to die out in favor of the ever popular, more widely accessible phone app. Instead, wearable tech has begun to move in new directions.
     One of these is healthcare.2 This most practical direction could allow people to spend less time in the hospital and speed up diagnoses: people with chronic conditions could wear a small device that constantly monitors their condition, sends periodic updates to doctors, and alerts the patient if something is wrong. Additionally, combining data from millions of people could allow doctors to aggregate population-level information and produce new findings on diseases. This could help cut the cost of healthcare, a perpetual concern, and give patients a greater ability to take their care into their own hands.3 Wearable tech could also help create a community of people at a national or global level who all suffer from the same disease, or who all want to lose weight.2 For example, instead of having just one workout buddy who also has diabetes, you could virtually connect to dozens of people across the country with the same condition and of the same age. This opens the door to the popular concept of "gamifying" fitness, in which people compete against each other to lose weight, quit smoking, or reach other goals. Although these practical applications are far-reaching, there is another arena of wearable technology that is garnering a lot of attention.
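     As a toy sketch of the monitor-and-alert pattern described above (the thresholds, readings, and function names here are invented for illustration and do not correspond to any real device's software):

# Hypothetical monitoring loop for a wearable health device.
# Thresholds and readings are made up for illustration; no real device API is used.

HEART_RATE_LIMITS = (40, 150)   # beats per minute treated as "normal" here (assumed values)

def check_reading(heart_rate_bpm: int) -> str:
    low, high = HEART_RATE_LIMITS
    if heart_rate_bpm < low or heart_rate_bpm > high:
        return "alert: reading out of range, notify patient and clinician"
    return "ok: log reading for the next periodic report"

# Simulated stream of readings from the device.
for reading in [72, 88, 163, 65]:
    print(reading, "->", check_reading(reading))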
     Personal computing has advanced in leaps and bounds over the past few years, and technology companies are constantly seeking to one-up each other to create the "next big thing" that will define the new must-have personal device. For many, that means wearable devices, the prime example of which is Google Glass.4 Complete with a built-in sound system that uses bone vibration, this device, which still requires a smartphone connection, can let you call people, send texts, take pictures, get directions, look up what movies that guy's been in, view your social media feed, and much more. Another popular device is the smartwatch concept,5 which also works with a phone to give consumers a more limited range of functions, such as message alerts, voice control of the phone, and social media updates. As wearable tech takes off, companies expect to be able to make these devices phone-independent, and even though most aren't on the market yet, the amount of investment being funneled in shows that these firms truly believe wearable tech is the future.6
     Despite the many benefits of wearable tech, there is one huge drawback: privacy. While it's great to aggregate healthcare data from wearable technology users, it's hard to guarantee their privacy, and new laws and regulations would have to be formulated to specify exactly how and when it's acceptable to source data from these users. Far more concerning, however, are the ramifications for everyday wearable tech like Glass. If you can and do wear your Glass all day, every day, what's to stop Google from selling your information (favorite restaurants, transportation preferences, or basically anything else it can glean) to advertisers? In the wake of the recent spying scandals in the US, there's even a fear that government organizations could watch people through their wearable technology, which poses a whole host of problems about the line between privacy and safety. Even though wearable tech seems to be the future, it's clear that some serious considerations have to be made before we view our world through technology-tinted lenses.

References

1. Wallop, Harry. “Is Nike ‘pulling FuelBand’ the end of wearable tech?” The Telegraph. Last modified April 22, 2014. http://www.telegraph.co.uk/technology/news/10779395/Is-Nike-pulling-FuelBand-the-end-of-wearable-tech.html. Accessed May 12, 2014.

2. Afshar, Vala. “Wearable Technology: the Coming Revolution in Healthcare” Huff Post Tech. Last modified May 4, 2014. http://www.huffingtonpost.com/vala-afshar/wearable-technology-the-c_b_5263547.html. Accessed May 12, 2014.

3. Schull, Natasha D. “Obamacare Meets Wearable Technology” MIT Technology Review. Last modified May 6, 2014. http://www.technologyreview.com/view/526576/obamacare-meets-wearable-technology/. Accessed May 12, 2014.

4. Shanklin, Will. “Review: Google Glass Explorer Edition 2.0” Gizmag. Last modified January 4, 2014. http://www.gizmag.com/google-glass-review/30300/. Accessed May 12, 2014.

5. “Smart Watch Review” TopTenREVIEWS. http://smart-watch-review.toptenreviews.com. Accessed May 12, 2014.

6. Spence, Ewan. “2014 Will Be The Year Of Wearable Technology” Forbes. Last modified November 2, 2013. http://www.forbes.com/sites/ewanspence/2013/11/02/2014-will-be-the-year-of-wearable-technology/. Accessed May 12, 2014.

Katherine Oosterbaan is a second-year student at the University of Chicago majoring in Chemistry and minoring in Slavic Languages and Literatures.

The BRAIN Initiative: From Imaging to Treatment

Gustavo Pacheco

     In April 2003, one of the greatest feats in human exploration was accomplished: the sequencing of the human genome.1 The genome was sequenced principally by the U.S. government and Celera Genomics. Yet despite having mapped out all of this hereditary information, how genes interact to yield various human processes remains a mystery. Researchers are challenged by the difficulty of understanding how neurological processes yield thought, emotion, and mental disease. Research is therefore shifting toward approaches that probe the cellular interactions between individual neurons and the interactions between parts of the brain that are mediated by molecular signals. The U.S. Department of Health and Human Services is addressing this shift through the development of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative.2
     The BRAIN initiative seeks to encourage research that builds on maps of neural pathways, which would prove useful in understanding the cellular connections that underlie thought and disease. In addition, the Gene Expression Nervous System Atlas (GENSAT)3 was established to pictorially map specific cell types, using gene expression, throughout the central nervous system of genetically engineered mice. The BRAIN initiative has been developing the "connectome," a project similar in spirit to GENSAT. The connectome project seeks to map out the neural networks involved in human function at a neuron-to-neuron level.4 The connectome will provide a better understanding of neural connectivity, how the whole brain is wired, and the effects of different neural wirings. Discovering different neural circuits offers the hope of understanding how psychiatric and neuronal disorders develop.
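     At the data-structure level, a connectome of the kind described can be thought of as a graph: neurons are the nodes, and synaptic contacts are the edges. The minimal sketch below uses made-up neuron labels purely to illustrate the representation; it contains no data from the actual project.

# Minimal graph representation of a tiny, invented neural wiring diagram.
# Neuron names and connections are placeholders, not connectome data.

connectome = {
    "neuron_A": ["neuron_B", "neuron_C"],   # neuron_A contacts B and C
    "neuron_B": ["neuron_C"],
    "neuron_C": [],
}

def downstream(neuron: str) -> list:
    """Return the neurons directly contacted by the given neuron."""
    return connectome.get(neuron, [])

print(downstream("neuron_A"))   # ['neuron_B', 'neuron_C']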
     Developing a map like the connectome requires advanced imaging techniques. One such neurological mapping technique is Brainbow, which tags individual cells using artificially constructed genes that express various fluorescent proteins.5 These constructs are made by cutting and pasting components from different genes and linking them to a fluorescent portion: a gene of interest is selected, and a sequence encoding a brightly colored protein is inserted close to it, so that wherever the gene of interest is transcribed and translated into protein, the fluorescent color appears as well. Localization and expression level can thus be read out visually. This allows the extensions of cells to be mapped and the contacts between them to be visualized, and it has produced very impressive visual representations of neural networks. Such a novel technique, however, requires improvements in efficiency and cost-effectiveness. The BRAIN initiative has provided funds to develop this neural imaging technology.
     The BRAIN initiative is currently developing specific goals for mapping the brain and selecting research labs that can engineer technology to meet these goals. One of these goals is to understand the dynamic activity of neural circuits.6 To realize it, the BRAIN initiative is looking for technologies that can conduct large-scale recordings and manipulations of neural activity in order to understand dynamic signals in the nervous system. By incorporating Brainbow and other methods of essentially color-coding neurons, researchers hope to obtain information about the complex neural networks present in the brain. This would allow the identification of minuscule differences in neural circuitry and, in turn, of the dysfunctional cellular interactions present in mental health disorders.
     However, a major issue with these measurements is that it is difficult to conduct both temporal and spatial measurements simultaneously with great accuracy. The ideal technology would reduce this problem by pinpointing specific regions of neural circuitry and revealing how components of a circuit function in relation to other circuits, not only through their connections but through how they interact over time. This matters for mental health research because understanding disorders such as autism and epilepsy requires knowing not just how neurons are connected but also how they interact over time. Being able to capture both through imaging of neural activity would allow for more targeted treatments.
     The BRAIN initiative seeks to build upon the research that has already been conducted in understanding the brain through mapping and funding current efforts. The BRAIN initiative has very promising objectives for revolutionizing the field of neuroscience through technological developments. The research that these new technologies can produce will allow us to understand the brain in radical new ways.

References

1. "All About The Human Genome Project (HGP)." All About The Human Genome Project (HGP). N.p., n.d. Web. 10 May 2014.

2. "Brain Research through Advancing Innovative Neurotechnologies (BRAIN) - National Institutes of Health (NIH)." U.S National Library of Medicine. U.S. National Library of Medicine, n.d. Web. 11 May 2014.

3. "GENSAT Brain Atlas of Gene Expression in EGFP Transgenic Mice." GENSAT Brain Atlas of Gene Expression in EGFP Transgenic Mice. N.p., n.d. Web. 11 May 2014.

4. "NIH Blueprint for Neuroscience Research." The Human Connectome Project. N.p., n.d. Web. 11 May 2014.

5. Lichtman, Jeff W., Jean Livet, and Joshua R. Sanes. "A Technicolour Approach to the Connectome." Nature Reviews Neuroscience 9.6 (2008): 417-22. Print.

6. USA. Department of Health and Human Services. RFA-NS-14-007: BRAIN Initiative: New Technologies and Novel Approaches for Large-Scale Recording and Modulation in the Nervous System (U01). NIH, n.d. Web. 11 May 2014.

Photograph Reference: Smith, Stephen. Brainbow. 2007. Circuit Reconstruction Tools Today, Current Opinion in Neurobiology. Wikimedia Commons. Web. 4 June 2014.

Gustavo Pacheco is a first-year student at The University of Chicago majoring in Biological Sciences and Romance Languages. His interests include genetics and pathology.

Chronic Lyme Disease and the Standard of Medical Care

Lauren Petersen

     The controversy surrounding chronic Lyme disease is perhaps as chronic as the disease itself. Ever since Lyme disease was discovered in the 1970s, it has been debated why ten to twenty percent of patients who are treated with a two- to four-week course of recommended antibiotics continue to experience signs of illness. Following treatment, these patients experience symptoms such as fatigue, pain, or joint and muscle aches for extensive periods of time, sometimes lasting decades.1 This condition is known as “chronic Lyme disease” by those who believe the symptoms are due to occult, persistent infection; to those who believe that the symptoms have other etiologies, the condition is known as “Post-treatment Lyme Disease Syndrome.” The uncertainty surrounding this illness has led to one of the greatest controversies in medicine: should patients receive additional courses of antibiotics for these persistent symptoms? Both physicians and patients are baffled by contradictory information regarding chronic antibiotic treatment in medical literature and especially on the internet.
      At the center of this controversy are four double-blind, placebo-controlled trials that suggest the inefficacy of continued treatment with intravenous or oral antibiotics. These studies, in which patients were randomized to unknowingly receive either antibiotics or a placebo, ultimately showed that the antibiotics did not improve symptoms any more than a placebo.2 Furthermore, the studies showed a large placebo effect: upwards of one third of the patients reported improvement with just a placebo. They also showed that extended use of antibiotics beyond the recommended treatment was associated with serious side effects, a finding borne out by case reports of patients who experienced treatment complications such as infection and even death.3 However, advocates of long-term antibiotic treatment claim that these placebo-controlled trials are flawed, basing their argument on the multiple instances in which patients have seemingly improved with treatment.4
     Based on the results of the placebo-controlled trials, the Infectious Diseases Society of America (IDSA), which represents the vast majority of infectious disease physicians in the United States, created diagnostic and treatment guidelines that warn against prescribing long-term antibiotics.6 These guidelines, endorsed by the National Institutes of Health and the Centers for Disease Control and Prevention, have set the standard of care for the treatment of persistent symptoms attributed to Lyme disease; thus, nearly all hospitals and doctors adhere to them. The guidelines also serve as the foundation for medical insurance coverage, and consequently insurance companies often refuse to pay for long-term antibiotic treatments that can cost more than $5,000 a month.6 Furthermore, some doctors who diagnose chronic Lyme disease and treat patients with long-term antibiotics are occasionally investigated and sanctioned by state medical boards.6
     Frustrated by the perceived inflexibility of the IDSA guidelines and the resultant lack of insurance coverage for long-term treatment, many ill patients and like-minded physicians have banded together to form the International Lyme and Associated Diseases Society. This society, also known as ILADS, adheres to a broader definition of Lyme disease and encourages more research into diagnosis and transmission.7 Additionally, ILADS has formed its own chronic Lyme disease guidelines, which state that chronic Lyme disease can be diagnosed on the basis of symptoms alone rather than laboratory results. These guidelines also suggest that antibiotics can be given for indefinite periods without clinical criteria for treatment success, stating only that treatment with antibiotics can be carried out through "clinical judgment."7 The guidelines would therefore allow doctors to prescribe long-term antibiotics more readily and without penalty. Groups of patients have organized to support the use of long-term antibiotic therapy, and in this role they aim to protect chronic Lyme disease doctors against medical board sanctioning as well as to encourage treatment coverage from insurance companies.6
      The competing IDSA and ILADS guidelines raise questions about what constitutes the standard of medical care and what should happen to those who deviate from it. In the case of chronic Lyme disease, advocacy by ILADS supporters has led to the passage of state laws preventing physicians from being sanctioned for prescribing long-term antibiotics, or even requiring physicians to tell patients that accepted Lyme diagnostic tests are unreliable. The movement was taken one step further when the Connecticut Attorney General sued the IDSA, charging that its Lyme disease guidelines inhibited alternative medical practices.4 The lawsuit was eventually settled without any admission of wrongdoing and with an agreement to have the IDSA guidelines reviewed by an independent panel of experts, which ultimately upheld the original guidelines. While most would agree that courts and politicians should not dictate the standard of medical care, the chronic Lyme disease controversy points to the difficulties of interpreting medical science and translating it into a universally accepted practice of medicine. To some, long-term antibiotic treatment for chronic Lyme disease is a modern and dangerous form of snake oil; to others, it is a misunderstood miracle; to all, confusion abounds.

References

1. Centers for Disease Control and Prevention. 2014. “Post-Treatment Lyme Disease Syndrome.” Last modified February 24. http://www.cdc.gov/lyme/postLDS/.

2. Mark Klempner, Linden Hu, Janine Evans, Christopher Schmid, Gary Johnson, Richard Trevino, DeLona Norton, Lois Levy, Diane Wall, John McCall, Mark Kosinski, and Arthur Weinstein. 2001. “Two Controlled Trials of Antibiotic Treatment in Patients with Persistent Symptoms and a History of Lyme Disease.” The New England Journal of Medicine 345:85-92. Accessed May 10, 2014. http://www.nejm.org/doi/full/10.1056/NEJM200107123450202.

3. Robin Patel, Karen Grogg, William Edwards, Alan Wright, and Nina Schwenk. 2000. “Death from Inappropriate Therapy for Lyme Disease.” Clinical Infectious Diseases 31:4: 1107-1109. Accessed May 10, 2014. http://cid.oxfordjournals.org/content/31/4/1107.long.

4. Paul Auwaerter, Johan Bakken, Raymond Dattwyler, Stephen Dumler, John Halperin, Edward McSweegan, Robert Nadelman, Susan O’Connell, Sunil K Sood, Arthur Weinstein, and Gary Wormser. 2009. “Scientific evidence and best patient care practices should guide the ethics of Lyme disease activism.” Journal of Medical Ethics 37:68-73. Accessed May 10, 2014. http://techland.time.com/2011/09/19/foldit-gamers-solve-aids-puzzle-that-baffled-scientists-for-decade/.

5. Paul Lantos, William Charini, Gerald Medoff, Manuel Moro, David Mushatt, Jeffrey Parsonnet, John Sanders, and Carol Baker. 2010. “Final Report of the Lyme Disease Review Panel of the Infectious Diseases Society of America.” Clinical Infectious Diseases 51:1-5. Accessed May 10, 2014. http://cid.oxfordjournals.org/content/51/1/1.full.

6. Daley, Beth. 2013. “Drawing the Lines in the Lyme Disease Battle.” The Boston Globe, June 2. Accessed May 10, 2014. http://www.bostonglobe.com/metro/2013/06/01/lyme-disease-rise-and-controversy-over-how-sick-makes-patients/OT4rCTy9qRYh25GsTocBhL/story.html.

7. International Lyme and Associated Diseases Society. "Top Ten Tips to Prevent Chronic Lyme Disease." Accessed May 10, 2014. http://www.ilads.org/lyme/lyme-tips.php.

Image credit (public domain): Chaos. "Various Pills." Wikimedia Commons. 6 June 2004.

Lauren is a first year student at the University of Chicago majoring in biological sciences.

Elementary Science Education: No Teacher Left Behind

Sydney Reitz

     An alarming trend has emerged: as the need for scientists and engineers increases, elementary science education remains stagnant—and in some cases, it is getting worse. Teachers who have often studied little to no science themselves are pressed to teach subjects with which they are unfamiliar, and more alarming yet, class time devoted to science is shrinking nationwide.1
     Skills learned throughout elementary school have long been considered the foundation of higher education. For example, the basic essay structure is taught during this time, and students learn information that allows them to expand on this structure and grow as writers throughout their careers. However, studies have shown that students in U.S. schools are far more proficient in English and math than in the sciences early on. Teachers and science institutions nationwide are working hand-in-hand to combat this phenomenon.
      Debbie Leslie, an expert on elementary science education at the University of Chicago’s Center for Elementary Mathematics and Science Education, attributes this trend to two simple facts: science is not prioritized in classrooms nationwide, and teachers are often unprepared to teach science.2
     Elementary science has fallen in priority in the classroom for a wide variety of reasons. On an administrative level, statewide science tests are administered less frequently, and their scores carry less weight in funding decisions than English or math scores, leading to less attention in the classroom. In Illinois, statewide science evaluations are administered only in the fourth grade and then again in the seventh grade, while English and mathematics evaluations are administered in grades three through eight.3
     For this reason, teachers and administrators generally allocate fewer classroom minutes to science than to English or math. One study showed that in the 2007-2008 school year, teachers of grades one to four devoted an average of 11.7 hours per week to English and language arts, but only 2.3 hours per week to science.1
      Additionally, elementary science educators are often not scientists themselves; this experiential gap can lead to educator discomfort, a lack of confidence, and a decline in the quality of instruction. Bryan Wunar, Director of Community Outreach at Chicago’s Museum of Science and Industry (MSI), is a primary coordinator of the MSI’s elementary education outreach program. He speaks to the need to restructure science education and better train teachers.4
     Restructuring science education is no small feat. To train good young scientists, educators must understand what needs to be taught and how to teach it. Until recently, however, the standards addressing elementary science education were low. In April 2013, the National Research Council (NRC), the National Science Teachers Association, the American Association for the Advancement of Science, and Achieve, all prominent voices in science education, released the Next Generation Science Standards (NGSS).5 The release of the NGSS is an enormous step toward creating a standard for how science is taught in elementary schools. Unlike the Common Core curricula, which educators are required to follow, these standards are not mandatory, but they give science educators a concrete framework for instructional material.
     Additionally, these standards introduce the concept that scientific inquiry and process are just as important as fact-based knowledge, and explore relationships such as cause and effect and measurements in a wide array of areas. Says Wunar, “The NGSS is a set of educational standards that will help to identify what all kids should know—and what they should know how to do—as a result of science education.”
     However, teachers are already at a disadvantage; few are trained specifically in science education, leaving them to rely upon relics from their own elementary education. The MSI plays an enormous role in promoting the betterment of science education by providing training for teachers. These year-long courses promote science literacy and give educators tools for utilizing the NGSS to become better teachers.
     Furthermore, the MSI provides these educators with the materials needed not only to structure curricula but also to let students engage with science directly, such as magnifying glasses and markers. Wunar emphasizes that the MSI is invested in teaching “habits of mind and practices used by scientists rather than a set of facts.” The courses and materials come at no cost to educators or the school system; the program is entirely funded by patrons of the MSI. Because the resources to fund this program are finite, the MSI targets schools with higher percentages of students eligible for free or reduced-price lunches, a widely used proxy for the socioeconomic status of a school’s students. A recent study showed a 28-point achievement gap on NAEP science scores between fourth-grade children who are and aren’t eligible for free or reduced-price lunch, with eligible children scoring 17% lower on average than children who aren’t eligible.1 Teachers attend the MSI’s courses in pairs from each school, and 32% of Chicago schools have alumni of the MSI’s program. Wunar and the other members of the MSI’s educational outreach programs aim to enroll teachers from all Chicago public schools. By promoting teacher confidence in addressing science education, the MSI’s outreach program and the NGSS together have the potential to raise the standard for science education, thereby encouraging the allocation of more classroom minutes to science and increasing children’s science competency.
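     As a rough check on how those two figures fit together, the short sketch below solves for the average scores they imply, assuming the 17% difference is measured relative to the non-eligible group’s average; the resulting scores are derived from the article’s numbers, not official NAEP statistics.

# Back-of-the-envelope consistency check for the reported NAEP science gap (Python).
# Assumption: "17% lower" is relative to the non-eligible group's average score.
# The implied averages below are illustrative, not official NAEP figures.

gap_points = 28      # reported achievement gap on the NAEP science scale
relative_gap = 0.17  # eligible students reportedly score about 17% lower

# gap_points = non_eligible_avg * relative_gap, so:
non_eligible_avg = gap_points / relative_gap   # roughly 165
eligible_avg = non_eligible_avg - gap_points   # roughly 137

print(f"Implied average for non-eligible students: {non_eligible_avg:.0f}")
print(f"Implied average for eligible students:     {eligible_avg:.0f}")

     Both implied averages fall near the middle of the 0-300 scale on which NAEP science scores are reported, so the two reported numbers are at least mutually consistent.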
     While Wunar and Leslie agree that the need for proficient science teachers remains incredibly high, programs like those at the MSI and other science institutions nationwide are working fervently to improve U.S. elementary science education. By increasing inquiry-based science education modeled on how scientists actually learn, Chicago’s MSI and institutions like it will help usher in a new era of elementary science education.

References

1. Rolf K. Blank, “What is the impact of decline in science instructional time in elementary school? Time for elementary instruction has declined, and less time for science is correlated with lower scores on NAEP.” Paper prepared for the Noyce Foundation, 2012.

2. Unpublished interview: Leslie, Debbie. Interview by Sydney Reitz. Phone interview. Chicago, May 9, 2014.

3. Illinois State Board of Education. 2013. Illinois Standards Achievement Test: Interpretive Guide. New Jersey: Pearson.

4. Unpublished interview: Wunar, Bryan. Interview by Sydney Reitz. Phone interview. Chicago, May 8, 2014.

5. Next Generation Science Standards. 2014. “Development overview.” Last modified 2014. http://www.nextgenscience.org/development-overview.

Sydney Reitz is a rising senior at the University of Chicago majoring in Comparative Human Development and focusing on cognitive neuroscience.

Complexities of Cancer Treatments

Austen Smith

     Thanks to improvements in clinical treatments and chemotherapy drugs, the worldwide population of cancer survivors has grown to an estimated 28 million.1 Concurrent with this uplifting trend, however, is the leaden reality that these new and improved treatments often cause additional suffering. Increasingly, those who survive cancer are subjected to pain caused by both their disease and their medical treatments. Indeed, even if the cancerous cells are eliminated, survivors are often left to cope with treatment-related maladies. In light of these truths, researchers continue to seek modified chemotherapy drugs with attenuated side effects.
     Among the existing set of preferred medications is paclitaxel, a first-line chemotherapy drug for the treatment of various cancers, including metastatic breast cancer, advanced ovarian cancer, non-small-cell lung cancer, and Kaposi’s sarcoma.2 As with other taxane-class drugs, paclitaxel’s mechanism of action involves stabilizing microtubules to prevent depolymerization of their tubulin monomeric units. The breakdown of microtubules is necessary for cell division to proceed from metaphase into anaphase and telophase. Thus, microtubule stabilization halts cell division in metaphase, causing cell death in cancer cells.6 Paclitaxel and related taxane-class drugs do not come without unwanted effects, however. One of the most prevalent adverse effects is chemotherapy-induced peripheral neuropathy (CIPN). Among cancer patients treated specifically with paclitaxel, more than 80% experience concurrent, dose-dependent neuropathic pain and sensory disturbances associated with CIPN.3
     While the adverse effects of paclitaxel chemotherapy may seem secondary in comparison with the immediate destructive effects of cancer—such as tumor growth, metastasis, and widespread cell death—neuropathic pain can actually impair a patient’s regular chemotherapy regimen. Chemotherapy treatment may even be terminated early if the patient and doctor agree that the adverse neuropathic side effects outweigh potential benefits. The National Cancer Institute (NCI) has reported that CIPN is one of the most common reasons that cancer patients stop their treatment early, and in turn experience decreased quality of life and survival rates.4
     Researchers have yet to obtain a complete mechanistic understanding of taxane-induced peripheral neuropathy. However, the adverse effects of taxane drugs seem to be localized within a relatively small region in the roots of the spinal cord—namely, the sensory cell bodies in the dorsal root ganglion (DRG).5 This fairly specific localization has allowed a multitude of research groups to examine the putative mechanisms by which paclitaxel, the most commonly used taxane drug, causes neuronal damage. For example, recent research suggests that the peripheral changes caused by taxane chemotherapy treatments tend to disrupt homeostasis within sensory neurons by altering extracellular ion concentrations, such as that of Ca2+, within the DRG.5 Clinical manifestations of disrupted homeostasis are variable and may include persistent paresthesia of the hands and feet, loss of reflexes, numbness, and mild fatigue.6 While certain treatments (e.g. lithium, as will be discussed) may help to maintain appropriate physiological calcium levels, such treatments do not constitute a reliable protective treatment strategy against CIPN.
     Due to the complexity and relative obscurity of the mechanisms underlying CIPN, there is currently no standard treatment for it.7 Moreover, cancer patients’ chemotherapy regimens frequently involve a combination of drugs, which adds complexity and variability to the severity of taxane-induced toxicity. For example, paclitaxel is often administered in conjunction with platinum-based chemotherapy agents, such as cisplatin or oxaliplatin, which prevent cell division by tightly binding DNA. Platinum-based agents may produce additional neurotoxicity, however, especially in patients with BRCA gene-related breast cancer, triple negative cancer, or ovarian cancer.8
     At present, one of the more promising lines of CIPN research examines the use of lithium as a preventative measure against CIPN.9 Preliminary studies suggest that lithium pretreatment may prove to be sufficient for the prevention of paclitaxel-induced peripheral neuropathy (PIPN). In addition, several clinical trials are collecting data on lithium pretreatment in human patients. One study at the Washington University School of Medicine, titled “Neuroprotective Effects of Lithium in Patients With Small Cell Lung Cancer Undergoing Radiation Therapy in the Brain,” promises to reveal the safety and efficacy of lithium treatments in humans. Started in March 2012, the study is due to be completed in April 2017.10
     Until early evidence favoring the use of lithium treatments is supported or refuted in clinical trials, a new form of paclitaxel bound with albumin protein-based nanoparticles, known as nab-paclitaxel, seems to exhibit clinical promise as an alternative chemotherapeutic agent to conventional paclitaxel. In September 2013, the FDA approved nab-paclitaxel based on its improvement of overall survival in 861 patients with metastatic pancreatic cancer.11 Initial studies suggest that nab-paclitaxel should replace paclitaxel as a first-line chemotherapeutic agent, primarily due to its improved efficacy, decreased toxicity, and more favorable tolerability.12 A critical Phase III study showed that nab-paclitaxel can be safely infused at significantly higher doses than would be possible for the standard paclitaxel drug, thereby allowing cancer patients to receive more of the drug in a shorter amount of time.12 Nevertheless, since nab-paclitaxel is a drug with virtually identical pharmacological properties to paclitaxel, it comes with many of the same concerns. Clinical trials of nab-paclitaxel demonstrate a better toxicity profile than conventional paclitaxel, though the incidence of transient, lower-severity sensory neurotoxicity may be higher.6 Even when paclitaxel is combined with stabilizing albumin, it remains essential to consider the complexities of drug interactions; hypothyroidism and alcoholism, for example, are known to augment nab-paclitaxel-induced neuropathy.6
     Given the clinical successes of nab-paclitaxel, this new formulation of paclitaxel will likely become a frontline chemotherapy drug for the various types of cancer patients already mentioned. Further experimentation is necessary to determine the efficacy and limitations of nab-paclitaxel, as well as potential differences in its physiological effects and the mechanistic reasons for those differences. Since albumin can act as a free-radical scavenger,11 it seems plausible to predict further attenuation of PIPN when nab-paclitaxel is used as the primary chemotherapy drug, as it should decrease oxidative stress (i.e., the imbalance between free radicals and antioxidants). The albumin may therefore help to disarm reactive oxygen species that could otherwise lead to cell death or electrophysiological disturbances.2
     As paclitaxel remains, despite the success of nab-paclitaxel, a widely used and preferred chemotherapy drug, effective PIPN treatment strategies will figure centrally in increasing quality of life among cancer patients. Pending the results of current clinical trials, lithium treatments may prove to be as effective at preventing peripheral neuropathy in humans as they are in mice. Nevertheless, research to date suggests that even new technologies such as lithium and nab-paclitaxel are limited in efficacy and still result in many of the same adverse effects, even if attenuated. The near future of PIPN management will then likely see a combination of lithium and nab-paclitaxel strategies to more completely guard against the severe adverse effects of paclitaxel-induced peripheral neuropathy. In the long term, however, cancer treatments free of neuropathic side effects are unlikely to be achieved until the complex mechanisms of PIPN are more fully understood.

References

1. Park SB et al. “Chemotherapy-Induced Peripheral Neurotoxicity: A Critical Analysis.” CA Cancer J Clin (2013) Vol. 63 No. 4: 419-37.

2. Areti A et al. “Oxidative stress and nerve damage: Role in chemotherapy induced peripheral neuropathy.” Redox Biology (2014) 2: 289-95.

3. Winer EP et al. “Failure of higher-dose paclitaxel to improve outcome in patients with metastatic breast cancer: cancer and leukemia group B trial” J Clin Oncol (2004) Vol. 22 No. 11: 2061–8.

4. NCI Cancer Bulletin. “Chemotherapy-induced Peripheral Neuropathy.” National Cancer Institute. http://www.cancer.gov/aboutnci/ncicancerbulletin/archive/2010/022310/page6. Accessed 2 May 2014.

5. Peters et al. “An evolving cellular pathology occurs in dorsal root ganglia, peripheral nerve and spinal cord following intravenous administration of paclitaxel in the rat.” Brain Res (2007) 1168: 46-59.

6. Lee EQ and Wen PY. “Neurologic complications of non-platinum cancer chemotherapy.” In: UpToDate, Basow. DS (Ed). UpToDate. 2014.

7. Windebank AJ and Grisold W. “Chemotherapy-induced neuropathy.” Journal of the Peripheral Nervous System (2008) Vol. 13: 27-46.

8. Han Y and Smith MT. “Pathobiology of cancer chemotherapy-induced peripheral neuropathy (CIPN).” Front Pharmacol (2013) Vol. 4 No. 156: 1-16.

9. Mo et al. “Prevention of paclitaxel-induced peripheral neuropathy by lithium pretreatment.” FASEB J. (2012) 11: 4696-709.

10. ClinicalTrials.gov. “Neuroprotective Effects of Lithium in Patients With Small Cell Lung Cancer Undergoing Radiation Therapy to the Brain.” US NIH. http://www.clinicaltrials.gov/ct2/show/NCT01553916?term=lithium&rank=8. Accessed 9 March 2014.

11. Cancer Drug Information. “FDA Approval for Paclitaxel Albumin-stabilized Nanoparticle Formulation.” National Cancer Institute. http://www.cancer.gov/cancertopics/druginfo/fda-nanoparticle-paclitaxel. Accessed 2 May 2014.

12. Megerdichian et al. “nab-Paclitaxel in combination with biologically targeted agents for early and metastatic breast cancer.” Cancer Treatment Reviews (2014).

Austen is a fourth-year student at the University of Chicago majoring in Biological Sciences with a Specialization in Neuroscience and minoring in Germanic Studies.

The Politics of Stargazing

Magdalen Vaughn


     There is a certain type of nature enthusiast whose beating heart is stilled when he or she visits the Mount John Observatory. In June 2012,1 the 4,300 square kilometers housing this site were declared the fourth International Dark Sky Reserve. Located in the center of New Zealand’s South Island, the Aoraki/Mt Cook Mackenzie region is home to locales like Mount John and Lake Tekapo, the township below the mountain.
     If you choose to visit Lake Tekapo for the quality of its night skies, you might be one of these starlight enthusiasts. However, it is not just individual hobbyists who enjoy the fruits of Starlight Reserves. Both groups and individuals, of varying scientific or political authority, are interested in the establishment of dark sky reserves and dark sky parks.2 Astronomers, environmental activists, marketing agencies, and photographers all value a clear night sky in different ways. These actors advocate for the preservation of starlight by supporting legislation that limits light usage at night and radio emissions throughout the day, and that promotes the stargazing tourism industry. Yet people’s protection of the stars, which dwell in the realm of ‘things,’3 raises interesting questions of agency. How, specifically, do we in the United States understand ownership or rights with respect to the night sky?
      Activists for the preservation of starlight on the international scale have been making significant theoretical and political contributions since 2007.4 The International Dark Sky Association (IDA) and the Starlight Initiative stand as leaders in the establishment of unpolluted sky-scapes like the Aoraki/Mount Cook reserve. However, they were founded at different times and for seemingly different political purposes. A non-profit operating out of the U.S. since 1988, the IDA is made up of astronomers and stargazers around the world who believe in the importance of clear night skies. Members of the IDA work with state officials, in the U.S. and abroad, and evaluate applicants for the title of Dark Sky Reserve. The necessary criteria for the title, for any of the IDA’s member countries around the world, include limits on the artificial light polluting the area, clearly observable sky phenomena such as the Milky Way or aurorae, a minimum visual limiting magnitude, and some more complex astronomical criteria.5 From the first reserve, Mont Mégantic in Quebec, established in 2008, to the most recent, the Westhavelland International Dark Sky Reserve in Brandenburg, Germany, the IDA has maintained a process of acceptance and publicity for each proposed stretch of land. The designation of an International Dark Sky Reserve requires expertise and consistent time commitments from members of the IDA, and the IDA is currently a prominent authority on light pollution in the United States.
      However, on the larger scale of multinational organizations like the United Nations, there is no such consensus on the politics of protecting locations with clear night skies. Interactions between the Starlight Initiative, the World Heritage Committee (WHC) and the United Nations Educational Scientific and Cultural Organization (UNESCO) at-large prove that international political authority is not so easily established in terms of governing the natural world.
      For example, in 2009 the International Astronomical Union (IAU) and the WHC commissioned a working group that would take practical steps to recognize the scientific components of the World Heritage:
      Within the framework of the Global Strategy for the balanced, representative and credible World Heritage List…the Thematic Initiative on Astronomy and World Heritage, aims to establish a link between Science and Culture towards recognition of the monuments and sites connected with astronomical observations… not only scientific but also the testimonies of traditional community knowledge.6
      In 2010, after participating in the IAU’s general assembly in Rio de Janeiro, the Starlight Initiative and the Astronomy and World Heritage working group helped author the “Declaration in Defense of the Night Sky and the Right to Starlight.”7 In this work, historians and astronomers weighed in on the political and cultural import of stargazing practices. The document cites the involvement of members from UNESCO, the UN’s tourism organization, environmental protection groups, and multiple international scientific authorities such as the CIE, the International Commission on Illumination.
      Yet a year later the World Heritage Committee issued a statement explicitly separating activities pertaining to starlight reserves from the political shelter offered by the World Heritage properties list (sanctioned by the World Heritage Convention of 1972). In separating itself from the project, the WHC stated, “neither Starlight Reserves, nor Dark Sky Parks can be recognized by the World Heritage Committee as specific types or categories of World Heritage cultural and natural properties since no criteria exist for considering them under the World Heritage Convention.”8 ‘Starlight Reserve’ is a term with the same significance as the Dark Sky Reserves established by the IDA, but the Starlight Initiative intended for Starlight Reserves to garner support from the UN. By refusing to recognize the concept of a Starlight Reserve, the WHC effectively strips starlight of protection by the UN, even though UNESCO, as well as the UN’s environmental and tourism departments, continues to be associated with the Starlight Initiative on its webpage and in publications.
     While the IDA has been operating for more than twenty years, and more than five ‘starparks’ or ‘starlight reserves’ have been established in the United States since 1998, there is no way to measure the impact of theoretical materials produced by the IAU and the Starlight Initiative, which prioritize the protection of clear night skies. Still, the multi-voiced fight to preserve starlight is generating dialogue between many international scientific, cultural and political organizations.
      Dark Sky Reserves exist all over the world, calling to interested travelers and serving as reminders of the great divide between nature and culture. Groups created by the IAU have crossed what some see as the rational boundary between the natural world and the social world. Though individuals can visit dark sky reserves to support ‘starlight tourism,’ a lack of consensus on what starlight is and who has a right to it may have consequences. For example, individuals may believe they have the right to claim non-polluted skies in ‘undeveloped’ countries, as if starlight were property. Even so, the IAU, alongside the UN, is committed to ‘developing’ astronomy in the Western economic sense, birthing the previously unheard-of concept of a Right to Starlight. This term suggests that, while the UN may not be officially protecting starlight, stargazing has cultural and political importance on the international stage.

References

1) “Aoraki Mount Cook Mackenzie.” Accessed May 24, 2014. http://www.mtcooknz.com/mackenzie/stargazing/.

2) “About IDS Places.” Accessed May 28, 2014. http://www.darksky.org/international-dark-sky-places/about-ids-places.

3) Scott, Charles E. 2002. The Lives of Things. Studies in Continental Thought. Bloomington: Indiana University Press.

4) “Objectives of the Starlight Initiative.” Accessed May 24, 2014. http://www.starlight2007.net/index.php?option=com_content&view=article&id=199&Itemid=81&lang=en.

5) “Reserves.” Accessed May 24, 2014. http://www.darksky.org/international-dark-sky-places/about-ids-places/reserves.

6) “World Heritage Centre - Astronomy and World Heritage Thematic Initiative.” Accessed May 24, 2014. http://whc.unesco.org/en/astronomy/.

7) “The IAU Strategic Plan 2010 – 2020 ‘Astronomy for the Developing World - Building from IYA2009.’” In International Year of Astronomy 2009. Rio de Janeiro, 2009. http://iau.org/static/education/strategicplan_2010-2020.pdf.

8) “World Heritage Centre - Astronomy and World Heritage Thematic Initiative.” Accessed May 24, 2014. http://whc.unesco.org/en/astronomy/.

Maggie is a graduating International Studies major at the University of Chicago. She is interested in the intersection of Space Sciences and contemporary popular culture.

On Dual Use Research of Concern

Tejong Lim


     Needless to say, research in the life sciences has led to innumerable advances, but we would be irresponsible to laud such progress without considering the inherent risks. The government provides funding to scientists who are expected to publish their research for public knowledge so that others may learn from their findings. Unfortunately, the same knowledge made available to benefit society can also harm it. Bioweapons have been used since as early as 600 BC, in forms ranging from contaminating water wells to hurling plague-ridden cadavers over enemy walls, and modern scientific knowledge has significantly increased the potential harm that can be inflicted.1
     Public awareness of the potential misuse of scientific research increased dramatically in 2002, when the Wimmer lab at the State University of New York synthetically assembled poliovirus de novo from commercial oligonucleotides, building on the lab’s previous work from 1991.2 The oligonucleotides were assembled to form the poliovirus complementary DNA (cDNA), which was then transcribed by RNA polymerase to synthesize the RNA genome. After translation of the genome, the capsid proteins self-assembled to form the poliovirus. Virulence tests in mice and tests with receptor-specific antibodies demonstrated that the virulence of the synthetic poliovirus matched that of the wild-type strain, establishing the possibility of synthesizing infectious agents de novo in vitro.3 Although the Wimmer lab’s objective was to support the World Health Organization’s “closing strategies of the poliovirus eradication campaign”, the ability to synthesize the infectious agent of poliomyelitis has obvious potential for bioterrorism.3 Wimmer was criticized for enabling potential bioterrorists to create a deadly pathogen; if poliovirus could be artificially synthesized, the same process could theoretically generate other viruses. The debate demonstrated the need for more practical regulation of research publication to minimize the risk of bioterrorism without impeding scientific progress.
      Research relating to knowledge with both beneficial and pernicious potential is called dual use research of concern, or DURC. In 2004, following the recommendations of the Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology, the federal government responded to growing worries by forming the National Science Advisory Board for Biosecurity (NSABB), a United States advisory panel to the National Institutes of Health (NIH) made up of 25 voting members appointed by the Secretary of the Department of Health and Human Services (HHS) and drawn from fields ranging from microbiology to national defense.1,4 The NSABB now defines DURC as “research that, based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment, or materiel.”2 Despite these actions, the exact classification of research that poses a true threat and is in need of censorship remains unclear.
      How much of a threat is bioterrorism that research publication should be censored? Bioweapon usage has been uncommon in recent decades, with the most recent scare being the “Amerithrax” letter attacks of 2001. Traces of silicon found with the anthrax spores raised fears that the spores had been weaponized to increase virulence, but later investigations concluded that the silicon was naturally occurring. Scientists have been unable to reproduce the same powdered spores, indicating that the anthrax was not a direct consequence of public research.5 No bioterrorist attack directly enabled by research publication has occurred as of yet, perhaps due to the extra time, funds, and skills needed to prepare and use pathogens as opposed to firearms or explosives. Further, pathogens can take days to kill their host, while more conventional weapons are often much quicker.
      The most recent debate over DURC arose in December 2011, when the NSABB reviewed two NIH-funded research papers on factors enhancing H5N1 influenza transmissibility. The NSABB recommended that the HHS require the two labs to omit the methodologies to prevent replication of their experiments, while general conclusions could remain.6 When HHS called for Yoshihiro Kawaoka of the University of Wisconsin and Ron Fouchier of the Erasmus Medical Center to redact their experimental methods, a heated argument ensued.7 Kawaoka, who identified four gene mutations that enhanced H5N1 transmissibility, argued that his research’s benefits significantly outweighed the risks. “The redaction of our manuscript, intended to contain risk, will make it harder for legitimate scientists to get this information while failing to provide a barrier to those who would do harm,” Kawaoka claimed.8 His colleague Fouchier agreed, “By following the NSABB advice, the world will not get any safer, it may actually get less safe.”8 After months of debate, another review convinced the NSABB to allow the full publication of both papers in May 2012. The NSABB admitted to overreacting and attributed its reconsideration to a new DURC policy and to Fouchier’s revision to the paper emphasizing the low lethality of his experimental virus.9
      As the H5N1 debate demonstrates, the current research atmosphere favors the publication of DURC. From 2004 to 2008, only 28 papers were examined for DURC out of the 74,000 papers received by Nature and its associated journals, and to date no paper has been rejected by any journal because of a risk of bioterrorism.10 The World Health Organization stated in February 2012 at the Technical Consultation on H5N1 Research Issues, “Final responsibility for the identification and implementation of appropriate risk assessment, mitigation, and containment measures for work with laboratory-modified H5N1 strains lies with individual countries and facilities.”11 Further, the US government in 2012 implemented the Government Policy for Oversight of Life Sciences Dual Use Research of Concern, establishing research guidelines for a narrow scope of only 15 agents.12 Scientists therefore currently bear most of the responsibility for deciding whether to publicize their findings, but the effectiveness of this model remains dubious given its permissiveness and subjectivity and the lack of standardized censorship. A safer strategy might enforce stricter governmental censorship of experimental methods and require researchers to obtain permission from the NSABB before accessing censored information. More centralized DURC regulation on a global scale may also prevent risky knowledge from being published in countries with lenient standards. Hopefully, a catastrophic bioterrorist incident will not be needed for policymakers to establish more practical, well-defined DURC regulations.

References

1. Riedel, Stefan. 2004. “Biological Warfare and Bioterrorism: A Historical Review.” Proceedings (Baylor University. Medical Center) 17 (4): 400–406.

2. Federation of American Scientists. “Case Studies in Dual Use Biological Research.” Accessed 2014 February 2. http://www.fas.org/biosecurity/education/dualuse/index.html.

3. Cello, Jeronimo, Aniko V. Paul, and Eckard Wimmer. 2002. “Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template.” Science 297 (5583): 1016–18. doi:10.1126/science.1072266.

4. “National Science Advisory Board for Biosecurity (NSABB).” 2014. Accessed March 8. http://osp.od.nih.gov/office-biotechnology-activities/biosecurity/nsabb.

5. Dance, Amber. 2008. “Silicon Highlights Remaining Questions over Anthrax Investigation.” Nature News, September. doi:10.1038/news.2008.1137. http://www.nature.com/news/2008/080929/full/news.2008.1137.html.

6. National Institutes of Health. 2011. “Press Statement on the NSABB Review of H5N1 Research.” Last modified 2011 December 20. http://www.nih.gov/news/health/dec2011/od-20.htm.

7. Imai, Masaki, Tokiko Watanabe, Masato Hatta, Subash C. Das, Makoto Ozawa, Kyoko Shinya, Gongxun Zhong, et al. 2012. “Experimental Adaptation of an Influenza H5 HA Confers Respiratory Droplet Transmission to a Reassortant H5 HA/H1N1 Virus in Ferrets.” Nature. doi:10.1038/nature10831. http://www.sciencedaily.com/releases/2012/05/120502143852.htm.

8. “A Central Researcher in the H5N1 Flu Debate Breaks His Silence.” 2014. Text. Accessed February 9. http://news.sciencemag.org/2012/01/central-researcher-h5n1-flu-debate-breaks-his-silence.

9. “Free to Speak, Kawaoka Reveals Flu Details While Fouchier Stays Mum.” 2014. Text. Accessed February 9. http://news.sciencemag.org/2012/04/free-speak-kawaoka-reveals-flu-details-while-fouchier-stays-mum.

10. Satyanarayana, K. 2011. “Dual Dual-Use Research of Concern: Publish and Perish?*.” The Indian Journal of Medical Research 133 (1): 1–4.

11. “WHO | Guidance for Adoption of Appropriate Risk Control Measures to Conduct Safe Research on H5N1 Transmission.” 2014. WHO. Accessed February 20. http://www.who.int/influenza/human_animal_interface/biosafety_summary/en/.

12. Uhlenhaut, Christine, Reinhard Burger, and Lars Schaade. 2013. “Protecting Society.” EMBO Reports 14 (1): 25–30. doi:10.1038/embor.2012.195.

13. UAF Center for Distance Education. 2006. Colorized Avian Flu Virus. http://www.flickr.com/photos/uafcde/112988956/.

Tejong Lim is a third-year student at the University of Chicago majoring in biological sciences with a specialization in microbiology.

“We’ve Done All That We Can”: Ethical Issues in End-of-Life Care

Stephanie Bi


     One of the core tenets of the Hippocratic Oath, which all physicians take at the start of their careers, states that “most especially must I tread with care in matters of life and death. If it is given me to save a life, all thanks. But it may also be within my power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty.”1 How exactly a physician, or any healthcare provider for that matter, should determine how and when to end treatment for a patient is left to individual interpretation. End-of-life care is arguably one of the most fundamental questions of medical ethics, as it deals with the delicate and blurred boundary between life and death. To optimize end-of-life care, a holistic and communicative approach can be implemented through a transparent doctor-patient relationship, a cohesive interdisciplinary team, and continuous follow-up with family members through the entire bereavement process.
     The terms “hospice” and “palliative care” are often conflated. To clarify, a “hospice” is not a place: hospice care is often provided by a treatment team, including a visiting nurse, that typically works in the patient’s home. “Palliative care,” on the other hand, is generally given in an institutional facility such as a hospital or nursing home. Hospice care requires that one forgo extensive life-prolonging treatment, while this is not necessarily the case with palliative care.2 Additionally, to be eligible for hospice, one must have a physician write a note certifying that he or she has a life expectancy of less than six months.3
      To provide optimal end-of-life care, the doctor-patient relationship must be founded on ideals of transparency. Studies show that even though doctors usually admit to patients when a cancer is incurable, most are hesitant to give a more specific prognosis, even when pressed. For example, more than forty percent of oncologists report offering treatments that they believe are unlikely to work. To avoid creating a sense of false hope that can be easily crushed, “concurrent care” is a viable option. In 2004, when the healthcare management company Aetna implemented an experimental program that allowed patients to receive hospice services without forgoing other treatments, the share of enrolled patients electing to use hospice jumped from 26% to 70% after two years. Even though the patients did not give up other treatment options, usage of hospitals and ICUs dropped by more than two-thirds, and overall costs fell by almost a quarter. Similar findings have emerged from data in La Crosse, Wisconsin, hospitals, where patients are asked upon admission to a hospital, nursing home, or assisted-living facility four questions about their wishes for end-of-life care. The questions themselves do not matter so much as the fact that they start a conversation among the patient, the patient’s family, and the treatment team about end-of-life care options early on. Thus, early discussion about end-of-life treatment can reduce the false hope built on obscure treatment options with little probability of success.3
      Physicians are by no means the only members of the team treating patients with terminal illnesses. Because physicians see scores of patients every day, other team members may end up spending the most time providing day-to-day care. An integrative, interdisciplinary team composed of nurses, chaplains, social workers, psychologists, and others must work together to provide the most efficient and effective care for a terminally ill patient. For instance, the team can include a social worker skilled in working with family systems. A treatment team that communicates well not only with the patient’s family, but also with one another, is crucial to patient care.4
      Hospice and palliative care nurses, in particular, come into the closest and longest contact with the most terminally ill patients. Kimberly A. Condon, author of “What It’s Like to Be a Hospice Nurse,” had worked in emergency medicine for 14 years before deciding to switch to hospice nursing. On one of her first site visits as a hospice nurse, Condon visited a newborn infant with chronic seizures, right as he was taking his final breaths. Starting to sob uncontrollably, she felt terribly ashamed for ruining the parents’ final moments with their son, berating herself for not being “their support, their rock” in their time of need. To Condon’s surprise, the parents expressed heartfelt gratitude and hugged her. Condon reasoned that perhaps by being emotionally “there” with the parents, she had not failed them after all.5 Thus, end-of-life care professionals in particular must maintain a delicate balance between expressing empathy and presenting themselves as strong and reliable.
      It is common for medical personnel such as Condon to feel grief, failure, self-doubt, or powerlessness, or to reconsider working with terminally ill patients. To address this issue, it is important to provide both educational and emotional support for professional caregivers. Emotional support can come in the form of a professional counselor, routine debriefing sessions, and access to collegial support and mentorship groups. On the education front, there are many online resources for personal and institutional education, including the Initiative for Pediatric Palliative Care, Education in Palliative and End of Life Care, and the American Academy of Hospice and Palliative Medicine.6
      So what happens when the treatment team has “done everything they can”? Does the physician’s duty to the patient’s care extend past the patient’s death? For the late patient’s family, and especially for the families of pediatric patients, mourning is a long process. Loss of a child can increase the risk of anxiety, depression, suicidal ideation, decreased quality of life, relationship struggles, and social decline. Many parents report that despite feeling properly treated before and during the patient’s death, they experienced feelings of abandonment by the treatment team during the bereavement period. According to Koch et al., the ethical concept of “nonabandonment” must be encompassed in a physician’s duty, along with acting in the best interest of the child.4
      Especially with pediatric patients, it is important for the interdisciplinary team to comfort and guide the parents and siblings of the patient. Oftentimes, siblings are neglected before, during, and after the patient’s death and as a result feel alienated, lost, and traumatized. A recent study of bereaved parents and siblings by Steele et al. recommended that healthcare providers address the following themes: (1) improved communication with the medical team, (2) more compassionate care, (3) increased access to resources, (4) ongoing research, and (5) offering praise. Actively engaging siblings in treatment discussions throughout the entire process, and assigning a child life specialist or social worker from the interdisciplinary team to work with the sibling, can address these themes.4
     Death is often a taboo topic in everyday discourse, which makes the difficult conversations and decisions it forces in healthcare even harder. We must directly address ethical issues concerning end-of-life care rather than tiptoeing around them. A comprehensive and holistic model of care will allow for the best care of both the patient and the patient’s family through the use of a cohesive treatment team and follow-through with the patient’s family even after the patient’s death. Despite the human instinct to fight for survival, it is crucial that we calmly and realistically consider what awaits all and eludes none when medicine can do us no more good – the earlier, the better.

References

1 Tyson, Peter. "The Hippocratic Oath Today." PBS. http://www.pbs.org/wgbh/nova/body/hippocratic-oath-today.html (accessed February 21, 2014).

2 Villet-Lagomarsino, Ann . "Hospice Vs. Palliative Care." Hospice Vs. Palliative Care. http://www.caregiverslibrary.org/caregivers-resources/grp-end-of-life-issues/hsgrp-hospice/hospice-vs-palliative-care-article.aspx (accessed February 22, 2014).

3 Gawande, Atul. "Letting Go." The New Yorker. http://www.newyorker.com/reporting/2010/08/02/100802fa_fact_gawande?currentPage=all (accessed February 21, 2014).

4 Jones, Barbara, Nancy Contro, and Kendra Koch. "The Duty of the Physician to Care for the Family in Pediatric Palliative Care: Context, Communication, and Caring." The Duty of the Physician to Care for the Family in Pediatric Palliative Care: Context, Communication, and Caring. http://pediatrics.aappublications.org/content/133/Supplement_1/S8.long (accessed February 22, 2014).

5 Condon, Kimberly A . "What It's Like to Be a Hospice Nurse." Slate Magazine. http://www.slate.com/articles/life/family/2013/06/nurses_and_hospice_care_personal_essay_from_a_nurse_working_in_end_of_life.html (accessed February 22, 2014).

6 Michelson, Kelly N, and David M Steinhorn. "Pediatric End-of-Life Issues and Palliative Care." Clinical Pediatric Emergency Medicine. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2344130/ (accessed February 22, 2014).

Stephanie Bi is a second-year student at the University of Chicago majoring in Biological Sciences and English Language and Literature and is planning on attending medical school. She is strongly interested in the ethical and social aspects of medicine.

Art’s Therapeutic Effect on Cancer Treatment

Gustavo Pacheco


     With difficult treatment procedures such as chemotherapy, it is no surprise that cancer debilitates both the patient’s physical and mental state. Even with the challenges that cancer brings, however, many patients are able to maintain optimistic outlooks through different therapy methods. In recent years there has been “a growing number of hospitals across the US, [in which] cancer patients are using art.”1 Could there be a treatment benefit from the artistic expression of cancer patients?
     Over the years, various studies have suggested that artistic expression benefits cancer patients as a palliative means of dealing with the disease and its pain. One measure of the effect of such treatment is the improvement in the patient’s quality of life. Cancer treatment programs have implemented outlets for artistic expression to alleviate stress and to keep patients from dwelling on their disease. Recently, the role of stress in exacerbating many diseases has also been recognized, and cancer is not an exception. Activities that allow for creative expression could therefore be beneficial in cancer treatment.
      Stress can often lead to serious problems for a cancer patient by increasing inflammation, which exacerbates the patient’s already deteriorating health. Inflammation is the body’s attempt to protect itself by removing harmful stimuli such as cancer cells, irritants, or pathogens and beginning the healing process in the affected organ.2 However, the inflammatory response also creates a microenvironment that supports cell proliferation, survival, and migration, all of which are processes involved in tumor development and cancer progression. Tumor cells co-opt some of the immune system’s signaling molecules and their receptors, using them for invasion, migration, and metastasis. Thus, it is possible that high stress leads to greater inflammation, thereby favoring cancer initiation and progression.
     In addition to reducing stress, there are other benefits of artistic expression. Narrative studies have reported that artistic expression provides a haven, allows for a better analysis of the situation, and enhances quality of life in patients with breast cancer.3 Interestingly, it is not only the process of making art that helps the patient but also the appreciation of art. Terminal cancer patients benefitted from producing and observing body linings, artistic interpretations of human bodies, because doing so helped them manage their emotional crises.4 In addition, another study indicated that patients undergoing radiotherapy for breast cancer showed marked improvement because art therapy helped them cope better with their situation.5 However, while these artistic therapies provide benefits, their effects are in many cases difficult to measure. More studies of art therapy’s effect on cancer patients could therefore shed more light on specific interventions like body lining therapy.
      Some of the therapeutic methods that have been employed include writing and music. In one study, patients were asked to complete pre-appointment, post-appointment, and follow-up expressive writing tasks, during which they wrote for 20 minutes about their thoughts on their cancer.6 Patients who engaged in expressive writing reported rethinking their outlook on their illness, suggesting that their quality of life had improved. This result could be explained by the way reflective writing allowed patients to relieve the stress of their condition by expressing contained thoughts and emotions.
      Additionally, the Ankara Numune Education and Research Hospital in Turkey has been conducting a study in which cancer patients take music and art classes during their chemotherapy. The patients’ stress levels are measured prior to the chemotherapy session and then again while they are involved in the classes and undergoing chemotherapy.7 The results of this study show that, with the art and music classes, patients’ stress decreased according to psychiatric measurements. These creative activities thus allow patients to briefly forget about their disease and relieve their stress.
      Results from these studies strongly suggest that artistic expression helps patients cope with cancer. Art therapy has the potential to decrease tension and stress in the patient, which may help mitigate inflammation, slow the progression of cancer, and therefore improve the patient’s health. Hopefully, with more studies of the relationship between art and health, more policies and programs will allow artistic therapy and activities to take place in more hospitals and treatment centers.

References

1. Machiodi, Cathy. "The Bone Fractured Fairy Tale: A Story of Art as Salvation." Arts and Health. http://www.psychologytoday.com/blog/the-healing-arts/200904/the-bone-fractured-fairy-tale-story-art-salvation (accessed March 4, 2014).

2. Coussens, Lisa M., and Zena Werb. "Inflammation And Cancer." Nature 420, no. 6917 (2002): 860-867.

3. Öster, Inger, Ann-Christine Svensk, Eva Magnusson, Karin Egberg Thyme, Marie Sjõdin, Sture Åström, and Jack Lindh. "Art Therapy Improves Coping Resources: A Randomized, Controlled Study Among Women With Breast Cancer." Palliative & Supportive Care 4, no. 01 (2006): 57-64.

4. Gabriel, Bonnie, Elissa Bromberg, Jackie Vandenbovenkamp, Patricia Walka, Alice B. Kornblith, and Paola Luzzatto. "Art Therapy With Adult Bone Marrow Transplant Patients In Isolation: A Pilot Study." Psycho-Oncology 10, no. 2 (2001): 114-123.

5. Collie, K.. "A Narrative View Of Art Therapy And Art Making By Women With Breast Cancer." Journal of Health Psychology 11, no. 5 (2006): 761-775.

6. Morgan, N. P., K. D. Graves, E. A. Poggi, and B. D. Cheson. "Implementing An Expressive Writing Study In A Cancer Clinic." The Oncologist 13, no. 2 (2008): 196-204.

7. Anatolia News Agency. "Art has magic power in chemotherapy patients." Hurriyet Daily News | Haber Detay. http://www.hurriyetdailynews.com/art-has-magic-power-in-chemotherapy-patients.aspx?pageID=238&nID=17645&NewsCatID=373 (accessed February 10, 2014).

Gustavo Pacheco is a first-year student at the University of Chicago majoring in Biological Sciences and minoring in Spanish and French. His interests include genetics and cellular biology.

MSF, Finances, and Medical Triage

Austen Smith


     In the famous words of Mother Teresa, “If you can’t feed a hundred people, then feed just one.” This seemingly simple ethical maxim raises a dilemma that is central to humanitarian relief efforts: which one? Given limited human and financial resources, triage—or “the general problem of humanitarian prioritization”—exists at every level of the system.1 In most relief-oriented non-governmental organizations (NGOs), the highest administrative officers choose which regions of the world to aid, while field workers choose which individuals. Of the many humanitarian aid organizations, we will take Médecins Sans Frontières (MSF), known also as Doctors Without Borders, as a prime example for its nearly exclusive focus on medical aid in the midst of crisis situations. How can MSF ethically choose whom to medically treat when any decision will inevitably result in a large number of deaths? In other words, if we choose to feed just one, then are we sentencing the remaining ninety-nine to death? For MSF, ethical solace seems to lie in the tenets of a simple, life-affirming philosophical foundation; but in order to meet increasing worldwide needs, the humanitarian organization as a business becomes ever more critical.
     The issue of triage, like many issues of humanitarian relief efforts, finds its roots in funding and resources. Ideally, with infinite human and financial resources, medical triage would be unnecessary and everyone in need of assistance would be treated immediately. Even a tempered ideal might be possible in regions of relative abundance: everyone would be treated eventually, commensurate with his or her needs. However, MSF is not predicated on abundance but on responding to emergencies across the globe, especially in exigent areas, with the resources it does have.
      Through ethnographic accounts in his book Life in Crisis, Peter Redfield portrays MSF’s ethical foundation as a simple moral dictum that saving a life is good. What this tenet of MSF ultimately reflects, however, is the impossibility to save every life that needs saving. MSF specifically attempts to “build a framework for action around an ethic of life, understood medically and cast on a global scale.” MSF believes consistently, if nothing else, in a life-centered and life-affirming philosophy: “the members of MSF share no sure vision of a public good beyond a commitment to the value of life.”1
      Given limited resources and personnel, however, the group is forced, out of practical necessity, to prioritize. This triage may take the form of choosing one mission over another, or choosing to treat one person over another. Although Redfield admits that the group “has yielded no clear solutions” in the midst of worldwide medical crises,1 the actions of the group do maintain ethical consistency, in the face of finite resources, on a foundation of human life.
      Thomas Weiss, Director of the Ralph Bunche Institute for International Studies, considers the necessary, practical side of humanitarian efforts in his new book, Humanitarian Business. Weiss remarks that in recent years humanitarian aid has grown “as a business” in a strong market economy. On one hand, relief efforts seem to garner considerable support: the humanitarian sector’s annual volume is approaching $18 billion, over twice what it was in 2000.2 However, even this marked rise in humanitarian aid does not come close to meeting the needs of the 50 million people who were living under duress or threatened by wars and disasters in 2012. The beginning of the solution, to Weiss, is further development of NGO “image and marketing strategies,” which are essential for success in the expanding global business.2
      MSF operated in 2010 with a total expenditure of $1.1 billion, the largest of any humanitarian relief NGO.2 Two characteristics of MSF’s financial strategies are especially noteworthy. First, MSF is committed to choosing intervention sites based solely on its “independent assessment of people’s needs—not political, economic, or religious interests.”3 Such a policy leads to significant and unpredictable dents in the MSF coffers, as the group can no better predict the timing of acts of genocide than it can that of devastating earthquakes. Second, in order to sustain its operations, MSF relies more on continual, monthly donations than it does on sporadic lump sums.3 Over 90% of MSF’s revenue comes from donations by individuals and private institutions (primarily corporations, trusts, and foundations).2 MSF devotes 12.3% of its total donation monies to the fundraising efforts that keep the organization afloat (the group saw a surplus of $176 million in 2010).3 Evidence of MSF’s financial strategies is readily apparent, as the first result in a Google search of “MSF” is a paid advertisement for www.doctorswithoutborders.org/Donate, a site that asks visitors to become a “Monthly Field Partner.”4
      Because national media hold such influence over mainstream attention and international donations, MSF understands triage as choosing missions based not merely on the degree of crisis and emergency, but also on the lack of support from other sources. As the United Nations (UN) and national media gave particular attention in 2013 to conflicts and natural disasters in countries such as Syria, Sudan, the Democratic Republic of Congo, and the Philippines,5 MSF launched new campaigns on lesser-reported crises. “Children with tuberculosis must not be neglected” read the headline of a December MSF article, for example, on health threats in Tajikistan. In spite of worldwide humanitarian efforts elsewhere (the total needs in Syria and Sudan alone are estimated to surpass $3 billion5), MSF expended its resources to diagnose and treat tuberculosis (TB) in Central Asia. Jointly with the Ministry of Health of Tajikistan, MSF hosted a symposium titled “Scaling up and improving access to ambulatory and paediatric TB care in Central Asia and Eastern Europe.”6 Here is a case of MSF’s triage strategy of championing “underreported” crises that have been overshadowed by those in the mainstream news.1 If the international community can take care of many children in the spotlight, then perhaps MSF does best to turn its attention to a few children sitting in the corner. Its finances may benefit from doing so, as MSF is likely to attract a broader base of regular donors if it can showcase its many and varied international efforts.
      Past generations of relief workers, Weiss argues, have never fully abandoned the “Good Samaritan” ideal, or the illusion that NGOs are “free to act as they [see] fit, taking into account only the needs of the populations they [seek] to help, and the limits imposed by their own charters.”2 The MSF organization seems to be divesting itself of such a view, recognizing that acts of medical triage, whatever their ethical foundation, cannot be isolated from the practicalities of business. Hence, the choices made on individual and regional levels reflect not merely the fundamental principles of a humanitarian organization and sheer medical science, but also the greatly complex social and economic forces that surround every crisis situation.

References

1. Redfield, Peter. 2013. Life in Crisis: The Ethical Journey of Doctors Without Borders. Los Angeles: University of California Press.

2. Weiss, Thomas G. 2013. Humanitarian Business. Malden, MA: Polity.

3. Médecins Sans Frontières (MSF). 2014. “Financial Information.” Accessed February 19. http://www.doctorswithoutborders.org/about-us/financial-information/?ref=nav-footer

4. Médecins Sans Frontières (MSF). 2014. “Donate to Doctors Without Borders.” Accessed February 19. https://donate.doctorswithoutborders.org/monthly.cfm?source=AZD130000S03&utm_source=google&utm_medium=ppc&gclid=CPrt5emo87wCFas-Mgod30sAjg&mpch=ads

5. United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA). 2014. “Overview of Global Humanitarian Response 2014.” Accessed February 19. https://docs.unocha.org/sites/dms/CAP/Overview_of_Global_Humanitarian_Response_2014.pdf

6. Médecins Sans Frontières (MSF). 2013. “Tajikistan: children with tuberculosis must not be neglected.” Accessed February 19. http://www.msf.org/article/tajikistan-children-tuberculosis-must-not-be-neglected

Austen is a fourth-year student at the University of Chicago majoring in Biological Sciences with a Specialization in Neuroscience and minoring in Germanic Studies.

The Influence of Financial Institutions on Healthcare Companies

Matthew Yeung


      The involvement of financial institutions in various sectors of business has often been criticized by the general public as a corrupting influence on the companies involved, owing to the institutions’ perceived desire to make profits at all costs. So what influence would a financial institution have on healthcare, the sector believed to be the most altruistic and least profit-driven? One glimpse into the consequences of such involvement lies in Madison Dearborn Partners’ (MDP) $1.6 billion acquisition of a controlling stake in Ikaria, a firm known for providing hospital therapies focused on critical care. A close analysis of the private equity firm’s goals and their effect on Ikaria’s decisions as a healthcare firm may foreshadow how financial institutions affect healthcare companies in the long run.
     We first have to understand how private equity companies make profits. The process is as follows:

1) Private equity firms raise capital from their partners.

2) They select companies that are either undervalued or have profit potential, and acquire them with the raised capital.

3) Operational changes are made to the acquired company over time in an attempt to make it more valuable.

4) The private equity firm then tries to sell the portfolio company, either to another private buyer or through an initial public offering (IPO).

5) After returning the proceeds to the partners who supplied the capital, the private equity firm keeps a proportion of the funds as fees, which varies with the growth and return of the company during the period of ownership.


      The above process describes a buyout fund, the most well-known type of private equity strategy. In most cases, to achieve greater potential returns on their investments, firms undertake leveraged buyouts (LBOs), in which a high proportion of debt is used to buy the business. In the case of Ikaria, only 25% of the money used to purchase the company came from the partners and shareholders of Ikaria and MDP.
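      To make the leverage arithmetic concrete, here is a minimal sketch in Python. The $1.6 billion purchase price and the 25% equity share come from the figures above; the exit value, and the omission of interest, fees, and interim debt paydown, are simplifying assumptions for illustration only.

    # A minimal sketch of LBO arithmetic (hypothetical exit value; ignores
    # interest, fees, and any debt repaid before exit).
    PURCHASE_PRICE = 1.6e9   # reported deal size, in USD
    EQUITY_FRACTION = 0.25   # share contributed by MDP and Ikaria's shareholders

    equity = PURCHASE_PRICE * EQUITY_FRACTION
    debt = PURCHASE_PRICE - equity
    print(f"Equity: ${equity / 1e9:.1f}B, debt: ${debt / 1e9:.1f}B")

    def equity_return(exit_value: float) -> float:
        """Return on the equity slice if the company is sold at exit_value
        and the acquisition debt is repaid in full at that point."""
        return (exit_value - debt) / equity - 1

    # Hypothetical exit at a 25% higher enterprise value: the 25% gain on the
    # whole company becomes a much larger gain on the small equity slice.
    exit_value = PURCHASE_PRICE * 1.25
    print(f"Company value up 25%; equity return: {equity_return(exit_value):.0%}")

      This amplification is exactly why a high proportion of debt raises the potential return on the investors’ capital, and, symmetrically, why it magnifies their losses if the company’s value falls.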
      So what does all this mean for the healthcare company Ikaria? A common practice at firms acquired through LBOs is to use company assets, such as cash and cash equivalents, to pay down some of the debt, which may leave the company without adequate capital for unexpected expenses. The additional debt on the balance sheet also deteriorates Ikaria’s liquidity and solvency ratios, making further financing for operational activities more expensive should the firm seek it on the markets.
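      A toy balance sheet makes the point visible. None of the figures below are Ikaria’s actual financials; they are placeholders chosen only to show the direction in which an LBO pushes the two kinds of ratios mentioned above.

    # Toy balance sheet figures in $ millions (hypothetical, not Ikaria's).
    def current_ratio(cash, other_current_assets, current_liabilities):
        # Liquidity: ability to cover near-term obligations.
        return (cash + other_current_assets) / current_liabilities

    def debt_to_equity(total_debt, equity):
        # Solvency: how heavily the company relies on borrowed money.
        return total_debt / equity

    # Before the buyout: modest debt, healthy cash cushion.
    print(current_ratio(200, 300, 250), debt_to_equity(100, 900))   # 2.0 and ~0.1
    # After the buyout: cash swept to service debt, acquisition debt added.
    print(current_ratio(50, 300, 250), debt_to_equity(1200, 400))   # 1.4 and 3.0

      Lenders read these ratios directly, which is why a weaker post-buyout balance sheet translates into a higher cost for any additional financing.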
      With MDP as a majority stakeholder in Ikaria, a shift in the company’s direction is to be expected. An agreement has already been made to separate some non-commercial products currently developed by Ikaria into another company. The agreement implies that Ikaria believes these products may not generate enough income to justify their continued research and development expenses. This hand-off may not be in the public’s best interest: further stages of development might have rendered these products broadly beneficial, and that development would likely have benefited from Ikaria’s resources.
      On the other hand, because private equity firms are mainly interested in increasing the value of their portfolio companies, a company under private equity ownership will likely see more operational improvements than it would on its own. Managerial improvements might therefore be expected for Ikaria. An optimistic view is that these improvements will speed up the development of therapeutic devices currently in the pipeline and benefit potential clients, provided that the development increases the value of the firm. Ikaria may also be able to expand, as the acquisition provides some capital that can be used to extend the reach of its services, thereby allowing Ikaria to help more people.
      However, a worrying aspect of the acquisition concerns the various techniques that private equity firms use to extract more value from a company. One recent and controversial technique is the supercharged IPO, wherein the portfolio company and the private equity firm enter into an agreement at the time of an IPO under which the newly public company pays its former owners for the tax deductions generated by the offering.2 This arrangement means Ikaria could be paying MDP for years after an IPO, depriving it of cash needed for further development of medical treatments once it is offered to the public.
      The longer-term implications of Ikaria under MDP’s management may be further complicated by the nature of the business, as Ikaria’s model relies significantly on proprietary treatments and patents. Any current product thus has a limited profit stream before its patent expires, so it is imperative that Ikaria continue to develop new patented products to sustain its income in the long run. Such development, however, may conflict with MDP’s objective of extracting the highest possible equity value from the firm, leaving Ikaria, in the worst-case scenario, facing a profit “cliff” in which patents expire without being replaced.
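      The profit cliff can be made concrete with a stylized projection. Every figure below (revenue level, expiry year, erosion from generics, replacement revenue) is hypothetical; the sketch only shows how the income stream behaves with and without continued investment in new patented products.

    # Stylized revenue projection around a patent expiry (all figures hypothetical).
    PATENT_EXPIRY = 2019
    BASE_REVENUE = 400         # $ millions/year from the current patented product
    GENERIC_EROSION = 0.8      # share of that revenue lost once generics enter
    REPLACEMENT_REVENUE = 300  # $ millions/year from a new product, if R&D is funded

    def annual_revenue(year: int, replacement_funded: bool) -> int:
        if year < PATENT_EXPIRY:
            return BASE_REVENUE
        post_expiry = round(BASE_REVENUE * (1 - GENERIC_EROSION))
        return post_expiry + (REPLACEMENT_REVENUE if replacement_funded else 0)

    for year in range(2017, 2022):
        funded = annual_revenue(year, replacement_funded=True)
        cut = annual_revenue(year, replacement_funded=False)
        print(f"{year}: ${funded}M with continued R&D, ${cut}M without")

      The drop at expiry is what makes cuts to research spending, attractive as they may look during the holding period, so risky for a patent-dependent business like Ikaria.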
      So what might be the overall effect on Ikaria under MDP’s control? While Ikaria would likely be able to expand its operations and thereby benefit more people under MDP’s management, this expansion may come at a cost to Ikaria’s long-term health as a business. Its existing shareholders will likely have to contest many of MDP’s operational decisions to ensure that the company remains at its best going forward.

References

1. Beltran, Luisa. 2014. "Madison Dearborn, expected to fundraise, invests $244 mln in Ikaria." peHUB. Accessed March 1, 2014. http://www.pehub.com/2014/01/madison-dearborn-expected-to-fundraise-invests-244-mln-in-ikaria/

2. Garza, Joe. 2012. "Supercharged IPOs Draw Attention." Garza & Harris. Accessed March 1, 2014. http://garzaharris.com/supercharged-ipos-draw-attention/

Matthew Yeung is a first-year student at the University of Chicago majoring in Economics.

Is the Next Big Thing Already Here?

Katherine Oosterbaan


     This year, the Victoria’s Secret fashion show featured a harbinger of the future. No, it wasn’t a novel piece of lingerie – it was the first pair of 3D printed wings. These elaborate, crystal-studded wings marked one of the first forays of this emerging technology into fashion. Of course, 3D printing’s possibilities extend far beyond the realm of fashion. 3D printing has far-reaching implications for almost every aspect of life, and its projected scope is almost limitless. You could print out a new phone or some new parts to repair your car. In a few years, 3D printed organs could be saving lives, just as 3D printed guns are taking them.
     So how does this seemingly magical technology work? The first step is to use computer-aided design (CAD) software to create a 3D blueprint of the object you want to print. When designing the model, it is important to be extremely precise, because the printer will print exactly what you have rendered. To make this process easier, many online forums and companies have already created designs that you can purchase or download. After the design is complete, it is exported as a mesh of 3D polygons and sliced into thin cross-sections that the printer can interpret (imagine a statue cut into lots of tiny layers). 3D printing is additive, which means that thin layers are deposited on top of each other, building up to produce the final product. Once the file is sent to the printer, you can choose the material you want to print in. Since this is a complex process, 3D printing of an object can take hours or even days to complete (1).
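     For a rough sense of what the slicing step involves, the sketch below computes the layer heights a printer would deposit for a part of a given height. The 0.2 mm layer thickness and the helper function are illustrative assumptions rather than any particular slicer's interface; real slicers start from a triangle mesh and also plan the toolpath within each layer.

    # A minimal illustration of slicing: turn a model's height into the stack
    # of thin layers that an additive printer deposits one on top of another.
    def slice_heights(object_height_mm: float, layer_height_mm: float = 0.2):
        """Return the z-height (in mm) of each deposited layer."""
        n_layers = round(object_height_mm / layer_height_mm)
        return [round(i * layer_height_mm, 3) for i in range(1, n_layers + 1)]

    # A 40 mm tall figurine printed at 0.2 mm per layer takes 200 passes,
    # which is part of why even small prints can take hours.
    layers = slice_heights(40.0)
    print(f"{len(layers)} layers, from {layers[0]} mm up to {layers[-1]} mm")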
      With metals and a wide range of plastics and polymers to print with, the options for 3D printing are almost infinite. Soldiers overseas can print guns and bullets as needed. Prosthetic limbs can be tailored exactly to each person at little additional cost. Astronauts in space can print out food and replacement parts. Jewelry, cars, art pieces, and more can be customized down to the last detail. One of the most significant advances is happening at the Cardiovascular Innovation Institute, where scientists are developing a 3D printer intended to print an artificial heart using stem cells extracted from the patient’s fat or bone marrow. Doctors hope to print out the tiny blood vessels and parts of the heart, attach them, and then allow the cells to work on their own to complete the assembly process—within a day (2). While this probably won’t become an easily accessible technology for several years, the possibility that we could print organs that are a perfect match and assemble them the same day is certainly exciting.
      Another major recent development in 3D printing is in the world of guns. A gun company in Austin has resurrected 100-year-old blueprints to 3D print a gun, and another company has produced the Liberator, the first 3D printed gun intended for widespread distribution. This raises myriad issues, because such guns can theoretically be printed in plastic, which would render them undetectable by metal detectors. Additionally, 3D printed guns would wreak havoc on ballistics identification in crimes, since they could simply be printed, used, and recycled. In response, the US federal government conducted tests on the gun to determine whether or not it could be considered a “deadly weapon.” The results showed that a gun printed in ordinary plastic would explode, possibly harming the user, but a gun made of ABS (acrylonitrile butadiene styrene, a tougher plastic) was fully functional (3). Since these gun designs are floating around on the Internet, theoretically accessible to anyone with a 3D printer, they pose a significant security threat that will only increase as 3D printing technology develops.
      3D printing also poses an issue in the form of copyright infringement. Imagine that you absolutely love Star Wars and want to print out some of your favorite characters as action figures, so you download designs and do so. And just like that, you’ve broken the law. Most major film and cartoon characters are protected by copyright, as are many household name-brand items and even shapes (4). The ability to download blueprints for free, or from a nonaffiliated party without paying the actual company, carries large legal ramifications. Couldn’t a company simply search the Internet and remove all offending designs? Not necessarily. In October, a free encryption tool for 3D design files called “Disarming Corruptor” was released, which enables users to obscure their designs and “decrypt” them with a passkey that has 100 trillion possible combinations (5). This poses risks beyond simple copyright infringement, such as covert gun file transfers. Clearly, some regulations will need to be implemented before this technology fully takes off.
      Even though 3D printing is being touted as the “next big thing,” it doesn’t quite seem to have caught on among average consumers. Last quarter, the world’s leading 3D printer company, Stratasys, sold only 5,925 printers. On the other hand, that is a sharp increase from 911 printers a year earlier (more than a sixfold jump), so the industry is certainly growing (6). As more industries and organizations begin to realize its advantages, 3D printing could become our main source of machines, clothes, or even food. However, there will always be those who seek to use it for more nefarious purposes, and the 3D printing revolution could bring an entirely new dimension to the underground arms market, or a new shape-pirating controversy. Only time will tell whether 3D printing will, in fact, become the revolution that so many predict, and how many of its risks will materialize.

References

1. Petronzio, Matt. 2013. “How 3D Printing Actually Works.” Mashable, March 28. Accessed November 18. http://mashable.com/2013/03/28/3d-printing-explained/

2. Hsu, Jeremy. 2013. “Lab-Made Heart Represents ‘Moonshot’ for 3D Printing.” livescience, November 18. Accessed November 18. http://www.livescience.com/41280-3d-printing-heart.html

3. Reilly, Ryan J. 2013. “Feds Printed Their Own 3D Gun And It Literally Blew Up In Their Faces.” Huffington Post, November 14. Accessed November 18. http://www.huffingtonpost.com/2013/11/13/3d-guns-atf_n_4269303.html

4. Henn, Steve. 2013. “As 3-D Printing Becomes More Accessible, Copyright Questions Arise.” NPR All Tech Considered, February 19. Accessed November 18. http://www.npr.org/blogs/alltechconsidered/2013/02/19/171912826/as-3-d-printing-become-more-accessible-copyright-questions-arise

5. Greenberg, Andy. 2013. “3-D Printing ‘Encryption’ App Hides Contraband Objects In Plain Sight.” Forbes, November 4. Accessed November 18. http://www.forbes.com/sites/andygreenberg/2013/11/04/3d-printing-encryption-app-hides-contraband-objects-in-plain-sight/

6. Madrigal, Alexis C. 2013. “Almost No One Buys 3D Printers.” The Atlantic, November 8. Accessed November 18. http://www.theatlantic.com/technology/archive/2013/11/almost-no-one-buys-3d-printers/281297/

Katherine Oosterbaan is a second-year student at the University of Chicago majoring in Chemistry.

The Physiology of Stress

Sarah Watanaskul


     Though stress is encountered in everyday life, the body has mechanisms that help it confront stress and then return to a normal homeostatic state. Although the body can adapt well to short-term stress—which, in small amounts, may offer temporary benefits—long-term stress can be detrimental to several of the physiological systems that maintain homeostasis [1].
     Imagine that you walk into a final exam. As you open your exam booklet, you feel your heart thumping against your chest, you begin breathing faster, and you can barely contain all your energy. What’s causing all of these symptoms? When a stressful situation or stimulus is first encountered, the response begins in the brain, which then affects different areas of the body via hormones transmitted through the blood to their target organs. The brain’s response to stress can activate different axes (biochemical pathways), culminating in the symptoms we commonly associate with stress.
     The autonomic nervous system (ANS), a component of the peripheral nervous system, consists of two antagonistic systems called the sympathetic (SNS) and parasympathetic nervous systems (PNS) [2]. The SNS prepares the body for increased energy expenditure through increased catabolic activity, which breaks down metabolites to generate energy that can be used for movement. The PNS is involved in controlling energy conservation and relaxation, or the body’s anabolic functions[3].
     Let us examine in detail how the brain elicits the “fight or flight” response with the help of hormones and the SNS. There are two parts to activating this response. The first part consists of the hypothalamus directly activating the adrenal medulla (the inner part of the adrenal glands) via a direct neural pathway connection[3,4]. The main function of the adrenal medulla is to release the catecholamines [5,6] epinephrine and norepinephrine (also known as adrenaline and noradrenaline). These hormones have a variety of effects on the body, including increased heart rate and force of contraction, vasodilation of arteries in areas used in responding to the stress, dilation of the pupils and bronchi, increased breathing rate, decreased digestive activity, and the release of glucose from the liver, among others[2]. Thus, some of your immediate responses to taking the final exam are caused by adrenaline and noradrenaline.
     The second part of the “fight or flight” response involves the hypothalamus in the hypothalamic-pituitary-adrenal (HPA) axis. The HPA axis is initiated by the release of corticotropin-releasing hormone from the hypothalamus, which causes the pituitary to release adrenocorticotropic hormone (ACTH) [2]. ACTH travels through the bloodstream to activate the adrenal cortex, the outer layer of the adrenal glands that sit atop the kidneys [7]. The main function of the adrenal cortex is to release steroid hormones such as glucocorticoids and mineralocorticoids, which play roles in increasing metabolism and maintaining salt-and-water balance, respectively [8]. Cortisol, a glucocorticoid [9], affects metabolism by generating glucose via the degradation of liver proteins (gluconeogenesis) or the breakdown of fats (lipolysis). Aldosterone, a mineralocorticoid, helps to maintain plasma volume and electrolyte concentration by increasing reabsorption of water and salts in the kidney, thereby reducing urine production [2]; this mirrors the effect of anti-diuretic hormone (ADH), which is secreted as part of the hypothalamus’ response to stress and serves to increase blood volume (and blood pressure) through water reabsorption.
     Overall, the HPA axis ensures an adequate supply of glucose in the blood as well as increased blood circulation. This provides an immediate source of energy for the muscles, organs, and other tissues needed to respond to the stressful stimulus [1,2], and the increased blood volume, heart rate, and force of heart contractions ensure that glucose and oxygen get to where they are needed. The increase in blood glucose levels accounts for the burst of test-taking energy in our hypothetical scenario, and the elevated blood pressure and heart rate may explain why some people hear their “hearts pounding in their ears” when they are stressed.
     So what happens after the final exam? Since the exam is a stimulus of short-term stress, the body will return to homeostasis once the ordeal is over. This is accomplished by the PNS, which stimulates the release of acetylcholine in order to decrease metabolic rate, heart rate, breathing rate, and muscle tension, among other things [2]. The PNS serves to reverse the effects of the SNS.
     Short-term stress may offer benefits like alertness, a burst of energy, and improved memory and cognitive functioning [10], but chronic stress can interfere with the body’s ability to return to homeostasis and may contribute to other health complications. Consider the weeks before the final exam. As a busy college student you probably have many obligations outside of class, and the challenge of balancing extracurriculars with homework and study time can quickly turn into a source of long-term stress. It’s likely you’ve heard people blame stress for hypertension, weight gain, or sickness, but why do they say that, and is it true?
     Under conditions of long-term stress, regulatory mechanisms can begin to lose their effectiveness, and the secretion of ADH from the hypothalamus will still increase blood pressure even if a person already has a high resting blood pressure [2]. This augmentation of an already-elevated blood pressure leads to hypertension.
     Another example of detrimental long-term stress involves cortisol. Cortisol is normally released in a diurnal rhythm (high levels after awakening and then lower levels throughout the day) [9] and is involved in several processes including energy regulation, the control of salt and water balance, and immune system function [9, 11]. Though having sufficient cortisol levels is essential for health, elevated cortisol levels from chronic stress can lead to symptoms like weight gain and impaired immune function [12]. On a basic level, higher cortisol levels are correlated with an increase in appetite and cravings for foods high in sugar and fat [11], a result of the body’s need to replenish energy stores [12]. One potential consequence of this effect on appetite involves the release of neuropeptide Y (NPY). Neuropeptides can act as chemical signals in the endocrine system [13] and NPY is a sympathetic neurotransmitter, one of whose functions is to stimulate adipocyte differentiation. If the receptors for NPY are significantly expressed in visceral fat, it can result in the rapid growth of the type of fat linked to an increased risk for diabetes, obesity, heart disease, and/or stroke [14,15]. Cortisol also promotes fat cell maturation and fat storage, so elevated amounts of cortisol in visceral fat (surrounding the stomach and intestines) compared with those in subcutaneous fat (underneath the skin) [15] can contribute to obesity [11]. Furthermore, since one of cortisol’s responsibilities is to increase the availability of energy sources like glucose in the blood, it is necessary to temporarily stop the storage of glucose. To do so, cells decrease their responsiveness to insulin, a hormone that stimulates glucose uptake from the blood. Prolonged unresponsiveness to insulin because of long-term stress can lead to insulin resistance, thereby increasing risk for diabetes.
     Short-term stress temporarily enhances immune function, but chronic stress and prolonged, high levels of cortisol can lead to the suppression or impaired function of the immune system. This is caused by several factors, including the shrinkage of the thymus gland (an important part of the immune system) [16], decreased white blood cell production, and the inhibition of white blood cell secretion of proteins that regulate the immune response and inhibit virus replication [12]. Thus, if you see fellow students coughing and sniffling when finals week rolls around, it may be due to the cumulative effects of long-term stress rather than “another bug going around.” This may seem a bit counterintuitive: if cortisol can enhance immune function in situations of short-term stress, wouldn’t more cortisol result in a better immune system? As it turns out, the anti-inflammatory and immunosuppressive effects of elevated cortisol are a defense mechanism against autoimmune disease. Because short-term stress enhances immune function, repeated enhancements due to a continued stream of stressful encounters would generate an overactive immune system, which can lead to autoimmune diseases. As a measure of prevention, chronic exposure to cortisol dampens components of the immune system, thereby “protecting” us from autoimmune diseases, though ironically making us more susceptible to pathogenic disease [12].
     All in all, stressful stimuli can initiate many physiological and biochemical pathways in the body that prime the body and give it the resources necessary to respond to the stressful encounter. The interaction between the nervous and endocrine systems produces the classic symptoms that we associate with stress, and the long-term consequences of these symptoms and their underlying mechanisms help to explain the conditions potentially affected by stress.

References

[1] University of Maryland Medical Center. 2013. “Stress.” Last modified June 26. http://umm.edu/health/medical/reports/articles/stress

[2] Gregory, Michael. “The Nervous System: Organization.” The Biology Web. Accessed January 4, 2014. http://faculty.clintoncc.suny.edu/faculty/michael.gregory/files/bio%20102/bio%20102%20lectures/nervous%20system/nervous1.htm

[3] Seaward, Brian L. 2006. “Physiology of Stress.” In Managing Stress: Principles and Strategies for Health and Well-Being, 35-45. Burlington: Jones & Bartlett Learning. http://www.jblearning.com/samples/0763740411/Ch%202_Seaward_Managing%20Stress_5e.pdf

[4] Opsal, Caley. 2009. “Autonomic Nervous System.” BIO 1007 Lecture Outlines. Accessed January 4, 2014. http://www2.ivcc.edu/caley/107/lectures_unit_3/ans.html

[5] Vitality 101. 2012. “The Adrenal Gland.” Accessed January 4, 2014. http://www.endfatigue.com/articles/Article_the_adrenal_gland.html

[6] MedlinePlus. 2013. “Catecholamines—blood.” Last modified October 31. http://www.nlm.nih.gov/medlineplus/ency/article/003561.htm

[7] Gabriel, Allen MD. 2013. “Adrenal Gland Anatomy.” Medscape. Accessed January 4, 2014. http://emedicine.medscape.com/article/1898785-overview#aw2aab6b3

[8] St. Edward’s University Department of Chemistry and Biochemistry. 1995. “Steroids—Glucocorticoids.” Accessed January 4, 2014. http://www.cs.stedwards.edu/chem/Chemistry/CHEM43/CHEM43/Steroids/Steroids.HTML#GLUCOCORTICOIDS

[9] Society for Endocrinology. 2012. “Cortisol.” Last modified October 24. http://www.yourhormones.info/hormones/cortisol.aspx

[10] Evans, Lisa. 2013. “The Surprising Health Benefits of Stress.” Entrepreneur. Accessed January 4, 2014. http://www.entrepreneur.com/article/227371

[11] Maglione-Garves, C.A., Kravitz, L., and Schneider, S. 2005. “Cortisol Connection: Tips on Managing Stress and Weight.” ACSM’s Health and Fitness Journal 9(5): 20-23. http://www.unm.edu/~lkravitz/Article%20folder/stresscortisol.html

[12] Talbott, Shawn. 2007. The Cortisol Connection: Why Stress Makes You Fat and Ruins Your Health – And What You Can Do About It. Alameda: Hunter House.

[13] Burbach, J. 2011. “What are Neuropeptides?” Methods in Molecular Biology. 789: 1-5. http://www.ncbi.nlm.nih.gov/pubmed/21922398

[14] Kuo, L. et al. 2008. “Chronic Stress, Combined with a High-Fat/High-Sugar Diet, Shifts Sympathetic Signaling toward Neuropeptide Y and Leads to Obesity and the Metabolic Syndrome.” Ann N Y Acad Sci 1148: 232-237. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2914537/

[15] WebMD. 2009. “The Truth About Fat.” Last modified July 13. http://www.webmd.com/diet/features/the-truth-about-fat?page=2

[16] InnerBody. 1999. “Thymus Gland.” Accessed January 4, 2014. http://www.innerbody.com/image_endoov/lymp04-new.html#full-description

Sarah Watanaskul is a second-year student at the University of Chicago planning to double major in Biology and Fundamentals.
