The Rise of Pharming in Agriculture

Plants have been used for the treatment of diseases for thousands of years – long before researchers were able to identify and purify the active compounds. Salicin, found in willows, is a great example of such a compound. The bark and leaves of willow trees were used to treat fevers and headaches as early as around 400 B.C. However, it wasn’t until the 1800s that scientists isolated salicin and identified it as the compound that conferred those beneficial properties.

In the last few decades, scientists have started using plants differently for the treatment of diseases. Sijmons et al. demonstrated that gene insertion is possible in plants by showing that tobacco and potato plants could express the human serum albumin protein. This use of transgenic plants to produce pharmaceuticals is now referred to as pharming (a blend of “farming” and “pharmaceutical”). Initially, the field faced many challenges, including disapproval from anti-genetic-modification movements and regulatory uncertainty. Low yields and high purification costs were further hurdles, and it wasn’t until around 2009 that pharming obtained a realistic opportunity for commercial development. In 2012, the first pharming product (taliglucerase alfa, used for the treatment of Gaucher’s disease) was approved for use in humans. This attracted major pharmaceutical companies such as Mitsubishi Tanabe Pharma, which acquired Medicago (a developer of plant-based vaccines) in 2013.


The major benefit of pharming is that it does not require expensive infrastructure, as the plants are generally grown in an open environment. This makes it possible to scale up production capacity while keeping costs low. However, like any other process, pharming also has disadvantages. A major environmental concern is pollen and seed contamination of nearby crops. Other risks include accidental entry of the drug into food chains and consumption by non-target organisms.

Personally, I think there is still some way to go before pharming becomes the “go-to” method for large-scale drug production. But interest in the process has grown rapidly in the past few years, suggesting that it has the potential to become an important technique in drug development.


P. C. Sijmons et al. (1990) “Production of correctly processed human serum albumin in transgenic plants” Bio/Technology 8:217–221.

E. Stoger et al. (2014) “Plant molecular pharming for the treatment of chronic and infectious diseases” Annu. Rev. Plant Biol. 65:743–768.



I see, therefore I am

Vision is a fascinating sense. In some way, it must also be our favorite one. We simply have a very strong desire to see things. A beautiful sight, be it a landscape, a piece of art, or a person, can make us happy. As a consequence, we have whole industries producing visual arts, building beautiful buildings, and bringing us to places that look nice.


Vision is also our go-to input pathway for learning and understanding complex things: looking at a circuit schematic is tremendously helpful for understanding a complicated electronic device. Drawing out the mechanism of a chemical reaction helps us grasp what is going on at the atomic level.

However, vision quickly runs into its limitations when we try to satisfy our thirst for understanding living systems. The resolution our eyes provide just isn’t sufficient to dive down into the molecular world, which is the level at which we currently try to answer open questions in medicine and the life sciences. Thus, tools that function as an extension of our visual sense, by surpassing limits of resolution or visibility, have enabled many scientific breakthroughs in the past century.

When I say this, I am thinking about X-ray diffraction, radiography, CT, MRI, and PET imaging, as well as microscopy. The latter stands out, because it is the oldest and most fundamental technique. It utilizes the very same tools our own eye uses for magnification: lenses. Microscopes opened the door to both recognizing and understanding cellular and sub-cellular structures.

The continuous improvement of microscopy through advances in physics and computation has yielded incredibly powerful machines. One pinnacle of this development is certainly super-resolved fluorescence microscopy, which was awarded the 2014 Nobel Prize in Chemistry. In 1873, Ernst Abbe postulated that light microscopy could never achieve a resolution better than about 0.2 micrometers, a limit set by the wavelength of light. However, with their fearless work, Nobel Laureates Eric Betzig, Stefan Hell and William Moerner surpassed this limit and generated images of nanometer-scale structures in neurons.
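Abbe’s 0.2-micrometer figure is easy to verify with a quick calculation of the diffraction limit, d = λ / (2·NA). The wavelength and numerical aperture below are typical illustrative values (green light and a good oil-immersion objective), not numbers from any specific instrument:

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable distance d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~550 nm) through a high-end oil-immersion objective (NA ~ 1.4):
d = abbe_limit(550, 1.4)
print(f"Smallest resolvable distance: {d:.0f} nm")  # prints ~196 nm, i.e. roughly 0.2 micrometers
```

No choice of visible wavelength or realistic numerical aperture brings this number much below ~150–200 nm, which is exactly why the super-resolution techniques honored by the Nobel Prize had to sidestep the limit rather than push the optics harder.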

The resolution of microscopes will likely continue to improve. However, a recent elegant approach from researchers at MIT uses chemistry to achieve super-resolution imaging by treating microscopy as a two-way street: on the one hand, we can build ever more powerful microscopes that surpass the “limitations of physics” to resolve nanoscale structures; on the other hand, we can “expand” our object of interest and then investigate it with standard light microscopy.[1]

This may sound funny at first, but it is exactly what the group around Ed Boyden has demonstrated in their recent work on expansion microscopy (ExM).[2],[3],[4] Using an expanding polymer, preparations of cultured cells and of mouse hippocampus could be fixed, blown up, and analyzed with a conventional confocal microscope.
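The arithmetic behind the trick is simple: physically enlarging the specimen divides the effective resolution by the linear expansion factor. As a rough back-of-the-envelope sketch (the ~4.5× linear expansion is the factor reported for ExM; the ~300 nm confocal resolution is a typical textbook value, not a measured one):

```python
def effective_resolution(optical_resolution_nm, linear_expansion_factor):
    """Expanding the sample by a linear factor f means features separated by
    d in the original tissue end up f*d apart, so the effective resolution
    on the original length scale is the optical resolution divided by f."""
    return optical_resolution_nm / linear_expansion_factor

# A standard confocal microscope (~300 nm) imaging a ~4.5x expanded sample:
print(f"{effective_resolution(300, 4.5):.0f} nm")  # prints 67 nm
```

That puts an ordinary confocal microscope well below the diffraction limit in effective terms, which is the whole appeal of the method.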

As such, ExM has tremendously lowered the activation barrier for scientists who seek to image and investigate the biochemical nano-world. It will be exciting to “see” what findings this technique uncovers and how it helps “expand” our knowledge in the near future.


[1] http://syntheticneurobiology.org/projects/display/57/25

[2] F. Chen, P. W. Tillberg, E. S. Boyden, Science 2015, 347, 543.

[3] http://expansionmicroscopy.org/

[4] https://www.youtube.com/watch?v=-o9-X8TvgFo


Nanoparticle antioxidants offer potential Alzheimer’s therapy

Mitochondria are known as the cell’s power plants. The abnormal generation of reactive oxygen species (ROS) from dysfunctional mitochondria can cause neuronal cell death. This pathologic process is a key factor in a number of neurodegenerative diseases, including Alzheimer’s disease (AD). Amyloid-β peptides, which are believed to cause AD, can interact with resident proteins inside mitochondria, inducing abnormal production of ROS. Thus, ROS scavengers that target mitochondria, such as antioxidant molecules, would be useful for prevention and early-stage treatment of AD.

A research team led by Taeghwan Hyeon designed and synthesized triphenylphosphonium-conjugated ceria (CeO2) nanoparticles (TPP-ceria NPs), which selectively localize in mitochondria and act as strong ROS scavengers. These nanoparticles function in a recyclable manner by shuttling between the Ce(III) and Ce(IV) oxidation states. A study in an AD mouse model indicates that the nanoparticles effectively suppress neuronal cell death by protecting neurons from ROS. This research has paved the way for the development of novel strategies to prevent and treat AD and other neurodegenerative diseases.



ACS Nano 2016, 10, 2860−2870.


The Importance of the Presentation in Science

Looking back to the start of my scientific career, I always assumed that once I had great data, my talks would be as flashy and convincing as the seminars I attended in which professors from top schools sold their research. I still think that you need reasonably good data to give a convincing talk, but over the years I have seen many talks with amazing data that left a stale taste and didn’t excite any listener, even though the findings were quite remarkable to anyone familiar with the field. Conversely, I have seen talks in which researchers seemed to be saving the world with their work, convincing only if you didn’t know that the work was comparatively modest. And I have realized for myself that once I had my 5-minute elevator speech down about my research, people were actually interested in my work and didn’t just ask me whether, as a chemist, I was able to replicate Breaking Bad, or tell me how horribly they did in high school science.

It was one of the more important lessons I have learned so far in my PhD: a good presentation is just as important as good results. Luckily, coming up with a couple of intriguing slides is less time-consuming and less dependent on serendipity than the science itself. Some scientists have published articles about writing good papers[i], giving good presentations[ii], and so forth. Everybody has their own strengths and individual weaknesses, but there are some generally very useful guidelines to rapidly improve one’s performance in a talk.

For one, it is worth the effort to film your presentations. A good portion of what you would learn from a personal “presentation trainer” becomes fairly obvious after watching yourself for just a couple of minutes. Did you have your hands in your pockets? Was 80% of the content “as in, like, you know, that thing!”? Were you reading off your own slides? You know a good talk when you hear one, and most of the time you know what’s bad about a talk, so watch yourself at least once on video.

In Germany we say “no master has ever fallen from the sky”, which is simply a reminder to practice. A talk is at least twice as good if you’ve actually given it before, maybe not in front of an audience but definitely out loud. I didn’t take this advice early in grad school and instead spent the last two hours before a talk perfecting my slides. I am not saying that you shouldn’t have polished slides, but an hour of practice would have done far more to convince the audience than having that one reference italicized. Which brings me to my next point…

Go over your slides with someone else. You have been staring at them for days, weeks, or however long, and they’ll look much better to you than they actually are. Also, choose someone with a background similar to your audience’s. Always keep in mind that even the most exciting statistics will bore an organic chemist if the lingo is unintelligible.

There are a lot of things to keep in mind: not putting too much text on a slide, making the title a summary, using the right text size, keeping the design consistent throughout the talk, choosing good color schemes (a relatively large part of the population is color blind, so be considerate and use appropriate colors; there’s good literature about it[iii])… Lots of work, but negligible compared to the work you put into the science!

Now there is much more to the art of presentation than that, but the point I’m trying to make is that there are a lot of scientists with amazing data, and it is often presentation skills that set you apart from the masses and open doors to collaborations or jobs. Needless to say, I am done sitting through boring talks about great science that would really shine with just a little extra time spent perfecting the presentation.

[i] Adv. Mater. 2004, 16, 1375–1377

[ii] Angew. Chem. Int. Ed. 2013, 52, 3780–3781

[iii] http://www.somersault1824.com/tips-for-designing-scientific-figures-for-color-blind-readers/


Publishing under a code ID?

I have often wondered about the impact of being a publishing researcher with a common name. Confusion arises when scientists with identical names publish papers, especially if those scientists work in the same field. How does one distinguish herself on a publication from another member of the scientific community with the same name? There should be a better way to identify authors, one not so dependent on recognition by name alone. In fact, such a way does exist: the Open Researcher and Contributor ID (ORCID) system.

While submitting papers, I have noticed how journals encourage authors to link themselves to their publications using ORCID codes. Though not required for all journal submissions, more and more journals are asking authors to have a unique 16-digit identifying number created with ORCID. In a recent Science news article, John Bohannon points out that in the past, many researchers without name-ambiguity issues had little motivation to sign up with ORCID. Now, however, there are close to two million ORCID users.
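As an aside, the 16-character identifier isn’t arbitrary: ORCID documents that the final character is a checksum computed with the ISO 7064 MOD 11-2 algorithm, so a mistyped iD can usually be caught automatically. A minimal validation sketch in Python (the example identifier is ORCID’s own published sample iD):

```python
def orcid_check_digit(base_digits):
    """ISO 7064 MOD 11-2 check character for the first 15 digits of an
    ORCID iD. Returns '0'-'9', or 'X' which stands for the value 10."""
    total = 0
    for digit in base_digits:
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid):
    """Validate a hyphenated ORCID iD such as '0000-0002-1825-0097'."""
    compact = orcid.replace("-", "")
    if len(compact) != 16:
        return False
    return orcid_check_digit(compact[:15]) == compact[15]

print(is_valid_orcid("0000-0002-1825-0097"))  # prints True (ORCID's documented sample iD)
print(is_valid_orcid("0000-0002-1825-0098"))  # prints False (last digit altered)
```

A single transposed or mistyped digit changes the expected check character, which is exactly what makes such identifiers robust in submission systems.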

The article includes pieces from an interview with Weizhe Hong, a neuroscience postdoc at Caltech, who shares some of the advantages of using ORCID. For instance, in addition to helping identify individuals from countries where people commonly share first and/or last names, ORCID numbers also help preserve identification that may be lost when names are transliterated into the Roman alphabet (i.e., one name might be rendered with several different spellings). Also, when a researcher marries and elects to change her surname, the identifying 16-digit number remains linked to that person, regardless of the change in last name.

One of the few problems with ORCID is that any papers published before creating an ORCID code must be manually linked to the account by the author herself. Though perhaps bothersome, it is not the end of the world, especially since researchers already keep track of their publications. I think all researchers should sign up with ORCID as soon as possible. Shouldn’t everyone have an accurate account of their publication record and receive the recognition for the work that they so rightfully deserve?



