With California voting on whether foods containing GMOs should be labelled as such, I surveyed my audience to figure out why this is such a polarized issue.
A few weeks ago I was invited to participate in a panel discussing social media for scientists at the University of Rhode Island. It was a fab day, led by the always-engaging Bora Zivkovic, and while the discussion was lively and interesting, my real jaw-hitting-the-floor moment came when my fellow panelist Dan Blustein introduced himself. I shall paraphrase:
“Hi I’m Dan Blustein, a grad student at Northeastern University, and I make robot lobsters.”
And it’s not just for fun. The robotic lobsters and lampreys that Dan and his colleagues work on are an incredible feat of bioengineering. Since I didn’t have much chance to talk to Dan after the panel, I got in touch with him after the fact and asked him a few questions about his work.
KP: The lobster and lamprey robots you’re making rely on biomimetic control. As I understand it, these systems rely on analogue rather than digital signals to transmit information much as an animal neuron would. Is that correct?
DB: The neurons that make up the electronic nervous systems that control our robots are not real neurons, they’re simulated neurons. We’ve made nervous systems with two different types of neurons: one analogue, the other digital. The analogue neurons are made up of a series of circuits that calculate equations (we use the Hindmarsh-Rose model). The equations basically describe the dynamics of the ions that flow in and out of a neuron to produce action potentials and other neural signals. These are quite faithful to the biological neuron and they operate in real time, but for large networks of neurons they take up a lot of space and can produce a lot of heat. We’re working on a VLSI (very-large-scale integration) implementation to shrink these circuits down to fit large networks in small robot hulls. We also use another neuron simulation called the discrete-time map-based neuron model. The simulation doesn’t mimic everything that happens inside a neuron, but it does mimic the types of action potential outputs that biological neurons produce. This is coded digitally and allows us to run large networks that we can quickly modify. We could run the first type of neuron simulation in code, but computing the equations is a fairly intensive process, so we run into delays on the robots.
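For readers curious what a simulated neuron looks like in practice, here is a minimal sketch of the Hindmarsh-Rose model Dan mentions, integrated with a simple Euler scheme. The parameter values are standard textbook choices for a bursting regime, not the lab’s actual circuit constants:

```python
# Minimal sketch of the Hindmarsh-Rose neuron model, integrated with
# forward Euler. Parameters are standard textbook values, not the
# lab's circuit constants.

def hindmarsh_rose(I=3.0, dt=0.01, steps=100_000,
                   a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    """Return the membrane-potential trace x(t) for constant input current I."""
    x, y, z = -1.6, -12.0, 2.0   # arbitrary initial conditions
    trace = []
    for _ in range(steps):
        dx = y - a * x**3 + b * x**2 - z + I   # fast membrane variable
        dy = c - d * x**2 - y                  # fast recovery variable
        dz = r * (s * (x - x_rest) - z)        # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace.append(x)
    return trace

def count_spikes(trace, threshold=1.0):
    """Count upward threshold crossings (action potentials)."""
    return sum(1 for prev, cur in zip(trace, trace[1:])
               if prev < threshold <= cur)

trace = hindmarsh_rose()
print(count_spikes(trace))  # many spikes: I = 3.0 puts the model in a bursting regime
```

Varying the input current I moves the model between quiescence, regular spiking, and bursting, which is exactly the kind of repertoire a biomimetic controller needs.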
One thing that is unique about our robots is how they move. Rather than motors or pneumatics, we use a muscle analog called nitinol that comes in wire form. This material is a shape memory alloy, and when you heat it up it contracts as muscle does. We use this contraction to move joints in our robots. To heat the wire, we drive pulses of current through it, triggered by the neurons in our electronic nervous systems. The resistance of the wire causes it to heat up, which makes it contract. When the pulses stop, the wire relaxes and stops moving the joint. This is how we biomimetically move our robots!
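As a rough illustration of that spike-to-contraction chain (this is not the lab’s control code, and every constant below is invented), you can model the wire as a first-order thermal system: each spike delivers a heating pulse, the wire cools toward ambient between pulses, and the joint contracts whenever the temperature is above the alloy’s transition point:

```python
# Toy spike-train -> nitinol-contraction model. All numbers are
# invented for illustration; real shape-memory-alloy dynamics are richer.

def wire_temperature(spikes, dt=0.001, t_ambient=20.0,
                     heat_per_spike=5.0, tau_cool=0.05):
    """First-order thermal model: spikes add heat, the wire cools exponentially."""
    temps, T = [], t_ambient
    for spiking in spikes:
        if spiking:
            T += heat_per_spike               # current pulse heats the wire
        T += dt * (t_ambient - T) / tau_cool  # Newtonian cooling toward ambient
        temps.append(T)
    return temps

def contracted(temps, transition=60.0):
    """The alloy contracts whenever it is above its transition temperature."""
    return [T > transition for T in temps]

# A burst of spikes followed by silence: the joint flexes, then relaxes.
spike_train = [True] * 20 + [False] * 480
states = contracted(wire_temperature(spike_train))
print(states[19], states[-1])  # True False
```

The burst drives the wire well past the (made-up) transition temperature; once the spikes stop, it cools back down and the joint releases.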
KP: The technology allows the robot to be autonomous. What has been the biggest challenge in the lab in terms of coordinating environmental sensing and behavioral output?
DB: The robot autonomy we have developed is based on neural networks that describe how an animal reacts to known sensory information in the environment. We run into challenges when the animal/robot is faced with novel environmental conditions. But we try to use this challenge to our advantage as we develop the electronic nervous systems. Let me try to explain. You can get a lobster to walk forward by moving its visual world from front to back across its eye (think lobster on a treadmill with moving walls). The bending of a lobster’s antennae will also stimulate forward walking, and the lobster will walk upstream into water current. Normally if the lobster’s antennae bend one way, the visual world will move in a specific way; these stimuli are paired under normal conditions. However, in the lab we can subject lobsters and lobster robots to confusing sensory information so these two sensory cues are mismatched. For example, we can move the visual world as if the lobster is walking backwards but keep the water flow coming head on at the lobster as if it were walking forward. We can look at how the lobster reacts to get a sense of how these two sensory systems interact. By comparing that response to our robot we can see if our electronic nervous system is on track or if it needs to be adjusted.
KP: The major application of these robots appears to be underwater exploration, both close to shore (RoboLobster) and in the open ocean (RoboLamprey). Are there other applications that you see in their future?
DB: The RoboLobster and RoboLamprey were originally funded for underwater mine detection and were designed to operate in tandem looking for mines on the ocean floor and floating in the water column. We could also see these robots being used for a range of other underwater tasks, including underwater search and surveys, environmental tracking, and the inspection of bridge pylons and dams.
KP: The circuits that you use to build biomimetic robots are modular in nature. Does this mean you can tailor the robots to fulfill specific missions or objectives?
DB: The idea is to build cheap, expendable robots that are easily customizable for a range of missions. If you need an infrared camera for checking leaks in a pipe, we can attach one. If you want your underwater robot to throw an ocean dance party, we’ll attach some flashing lights and a disco ball.
KP: The primary focuses at the Marine Science Center are naturally underwater exploits, but are there terrestrial (or even extraterrestrial) applications for biomimetic robots?
DB: Technically speaking one can make biomimetic robots for any type of environment in which life is found. Although we don’t have any extraterrestrials yet to mimic for outer space environments, that could change someday. We’re part of a team working on a project to build a robotic bee, a task that presents a range of challenges we don’t deal with underwater.
KP: On a more personal note, what would you like to see these robots accomplish in the near future?
DB: Scientifically I’d like to get to the point where the behavior of our robots is indistinguishable from their animal counterparts. That would mean we’re really getting at how nervous systems work. But really, I’d just like to see a robot animal zoo. It would be a great educational tool and besides, who hasn’t wanted to ride a robot camel at some point?
KP: Riding a RoboCamel would indeed be a dream come true…
If you would like to learn more about the RoboLobster and RoboLamprey project, head over to the lab website, and if you’d like to read more about what Dan does on a day-to-day basis check out his blog and follow him on Twitter @bloostein.
The last few weeks have seen a lot of discussion about how science and academia have become increasingly disconnected from the lay public. Nature’s Soapbox Science blog started the ball rolling with a series of posts highlighting the problems scientists face when communicating their work through the mainstream media, and showcasing a couple of great examples of effective science outreach.
I know it has been a while since I’ve written a toothsome post on here, but I promise it has been for a good reason: Today I handed in my PhD thesis! And in two weeks I will defend it, and then, fingers crossed, I will finally really truly be Katie PhD.
Today sees the start of a series of blog posts on Nature.com’s Soapbox Science blog. It begins with a post by me about how scientists can start to change the face of their industry, and hopefully rectify some of the damage that has been done over the last few years. Click on the doodle, have a read, and then get involved in the conversation on Twitter using the hashtag #reachingoutsci.
Last Monday my lab-mate asked, “Are you going to the Carl Zimmer thing?”
“Huh?” said I. “What Carl Zimmer thing?”
“He’s doing a talk on Thursday evening. Oh and office hours in the afternoon.”
After scraping my jaw off the floor and berating myself for not reading the weekly email that would have informed me of this event, I did what any other wannabe science writer and soon-to-be destitute (i.e. qualified) grad student would do. I signed up for the event, set multiple calendar alerts on my phone, and sent out several overly excited tweets.
So when Thursday rolled around, my fellow blogger @AmasianV and I strolled over to the Science Center to meet Carl Zimmer. After everyone awkwardly introduced themselves, the first question for CZ was of course “How did you get into science writing?”. In case you don’t know the answer to that one, he graduated from Yale University with an English degree and subsequently got a job copy editing at Discover. A self-professed terrible copy editor, he then moved into fact checking, and the rest, as they say, is history.
But then we started talking about what it means to be a GOOD science writer, which, given the upcoming Wellcome Trust/Guardian/Observer competition and the associated blog post by Ed Yong, seemed like some useful information to share with y’all.
The most important thing, it seems, is to remember to tell a story. There’s no point just dumping a load of information onto a page and expecting your reader to wade through it. So figure out what the point of your piece is. Perhaps you are making an argument. But whether it’s an 800 word blog post or a book you need direction. Then, once you’ve figured out your angle, you have to find a way to sneak the science-y bits in there. If you need to introduce a piece of scientific jargon, do so with care. Make sure to explain each and every term you use.
The next step is to start cutting. Carl made the point that while it’s sometimes painful to cut a paragraph you might be especially proud of, you should remove it anyway and ask whether the piece is worse without it. If you don’t need it, it shouldn’t be there. “Extra” information could result in the reader moving on to another article in the magazine, or putting your book down, meaning you never get to make your point. And if you find yourself cutting to the point that there’s no essay left, perhaps you need to rethink what you were trying to achieve!
Carl also had some great tips about story selection. This is particularly important in this day and age, when information is so freely available on the web. When you’re looking for a topic to write about make sure it is relevant, that the work you’re covering has a clear point to it, and that YOU can turn that work into a story for someone to read. In a nutshell: If you don’t care about it, why should anyone else?
And then the conversation turned to blogging
I asked a question about the value of blogging to a potential employer, as this blog is intended to also form a kind of online portfolio for me. Carl made the excellent point that all the advantages of blogging (the software is generally easy to use, you can self-publish, and you are free to write in whatever style you choose) can also be construed as disadvantages by a magazine editor. After all, anyone can blog. That being said, if you use your blog well and accompany it with an engaging online presence, there are definite pros to writing about science on the web. (If you haven’t already, do check out Carl’s blog, The Loom.)
But then the room seemed to polarize, with a division appearing between the “writers” and “scientists” in the room. The debate was nothing new, with the writers accusing the scientists of being distant and uptight, and the scientists decrying the abundance of misrepresentation in the press. However, what really shocked me was a comment from the other end of the table:
“Why aren’t scientists just better writers? Why can’t all scientific communication come from within the academy, and not just in the form of blogs?”
I felt all the blood in my body rush to my face in pure rage. My initial thought was: what on earth is wrong with blogging? The last couple of years have seen a huge increase in the value of online communication. A fantastic example of this, as Carl pointed out, was the #arseniclife debacle and the subsequent efforts by Rosie Redfield to repeat the controversial experiments. But then, as my blood pressure started to return to normal, I realized that I was more annoyed with this undergrad’s complete lack of understanding of what both scientists and journalists do. What he was asking was the equivalent of someone saying “why can’t teachers also run a restaurant?”, or “why aren’t policemen also doctors?”. Being a scientist and being a writer are separate and demanding careers in their own right. While some people may have the ability to do both at the same time, they are the minority. To do either profession well, you need to be focused, dedicated, and talented.
Viruses and Whales: Adventures in Science Writing
The talk in the evening was fabulous, obviously, and rather than continue typing, I will leave you with my sketchnotes. Enjoy, and happy science writing!
I suspect one or two of you might be mildly confused as to why I have a post entitled “Project PB and J”, and why there is a giant picture of a peanut butter and jelly sandwich. Let me explain. I have a friend named Cindy who writes a blog called Once Upon a Loaf. She makes awesome breads and cakes and basically anything involving flour and an oven. Anyway, she set up this competition called Project PB and J and suggested I enter. The competition? Enter a recipe involving peanut butter and jelly that is either a sandwich or a baked good. The hitch? I have to blog my recipe, and as you know, I blog about science, not baking.
Fear not, my science-y reader! The rules state that the recipe must be original or modified. And so I brought my skills at the bench into the kitchen.
Peanut Butter and Jelly Dodgers
These cookies are loosely based on a British cookie called a Jammie Dodger. Basically they’re plain cookies sandwiched together with butter cream and jam (a.k.a. jelly…I’m still slightly confused as to where Americans and Brits draw the lines between jams and jellies). But what if the cookie was peanut buttery? I suspected it would be pretty amazing.
So I got myself a really simple recipe for shortbread (sooooo buttery and delicious) from allrecipes.com:
2 cups butter, softened
1 cup white sugar
4 cups all-purpose flour
2 teaspoons corn starch
And then I decided I would need to add 1 cup of peanut butter (I used Skippy Creamy), so I figured I would approach this part the same way I would change a buffer recipe that needed modification.
Here comes the science part:
1 cup of Skippy contains 130g fat, 57g carbohydrate, 24.4g sugar, 57g protein, and 1219mg sodium.
This means I need to adjust the other ingredients accordingly:
1 cup of butter = 226.8g, although only 178g of that is fat.
The 130g of fat in the cup of peanut butter can stand in for 130g of that butter, so I reduced the amount of butter by 130g, which is approximately 1/2 a cup. (I should also mention I used regular butter, not unsalted butter, as traditional shortbread generally has a salty tang. I realize this is a baking no-no, but believe me, it worked out beautifully.)
1 cup of flour = 125g, 95g of that is carbohydrate and 13g is protein.
So in 4 cups there is 380g carbohydrate and 52g protein. That is roughly the same amount of protein as in 1 cup of peanut butter (52g vs. 57g), but far more carbohydrate. Obviously a cookie without flour would be rather weird, so I came to a compromise and removed 1.5 cups of flour from the recipe.
As for the sugar, I was surprised to see that one cup of Skippy only contained 24.4g of sugar. Compared to the 225g of sugar I was adding, this seemed negligible, so I left that quantity alone.
And the corn starch is there to hold the shortbread together, so I also left that alone.
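For anyone who wants to check the buffer-style arithmetic above, here it is as a few lines of Python, using the same nutrition figures quoted in the post:

```python
# Sanity-checking the ingredient-substitution arithmetic above.
GRAMS_PER_CUP_BUTTER = 226.8
PB_FAT = 130.0                 # grams of fat in 1 cup of Skippy

# Swap fat gram-for-gram: remove 130 g of butter from the recipe.
cups_removed = PB_FAT / GRAMS_PER_CUP_BUTTER
print(round(cups_removed, 2))  # 0.57, rounded down to roughly 1/2 cup

# Flour: 4 cups at 13 g protein per cup.
PROTEIN_PER_CUP_FLOUR = 13.0
print(4 * PROTEIN_PER_CUP_FLOUR)  # 52.0, close to the 57 g in a cup of peanut butter
```

Science: it works on cookies too.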
Here we go with the recipe:
For the cookies:
1.5 cups butter, softened
1 cup creamy peanut butter
1 cup white sugar
2.5 cups all-purpose flour
2 teaspoons corn starch
For the butter cream:
1 cup butter
1 cup confectioner’s sugar
splash of vanilla
For the jelly:
The jelly of your choice! I went with Smuckers Strawberry Jelly because I love strawberry jam but this stuff is seedless.
1. Cream together the butter and peanut butter with a hand mixer until the mixture is smooth.
2. Slowly add the sugar while continuing to beat with the mixer.
3. Sift the flour and corn starch into a separate bowl and then add to the butter and sugar mixture.
4. Fold in until JUST incorporated.
5. Turn out the mixture onto a piece of plastic wrap and refrigerate for 30-60 minutes (do not refrigerate longer than that as the dough becomes very difficult to roll).
6. Pre-heat your oven to 325°F.
7. Cover your chosen rolling surface with flour and unwrap the dough onto it.
8. Roll out until the dough is between 1/4 and 1/8 of an inch thick.
9. Using a Linzer cookie cutter cut out both top (with the heart cutout) and bottom cookies and place on cookie sheet (covered with parchment paper if desired).
10. Bake for 12 minutes or until golden brown, rotating part way through if your oven is as finicky as mine.
11. After removing from the oven, allow the cookies to set on the sheet for about 5 minutes, and then place them on a cooling rack.
12. Once the cookies are cool, cream together the butter, vanilla and confectioners’ sugar and spread about 1/2 tsp (or more if you love butter cream) onto the bottom cookie.
13. Add a dollop of jelly to the centre and then press on the top cookie so that the jelly makes the heart look red.
On Saturday I took a trip up to Boston to attend the first annual Blog Better Boston conference. Apart from having to get up at 5am it was an awesome day. The organizers (Amy Allen and Alana Brooks) did an amazing job: There were great panel discussions, more intimate workshops, and a whole host of swag!
I of course went armed with a sketchpad and my trusty markers, and set about using my scribing skills I learned at #scio12. So, if you missed the conference and want to find out what happened in the five sessions I went to, or were at the conference and just want a little reminder of what went on, here’s a gallery of my sketchnotes:
If you want to download a PDF of all these pictures, click here:
…and if you want to try sketchnoting yourself, download my guide available on the sidebar (look to your right)!
(N.B. if you presented in these sessions and want to use these images on your site, do let me know and I can email you a high-res file.)
Over the last few months I’ve been forced to think about the agricultural giant Monsanto. On the one hand I know someone who went to work there, someone I hold in high regard. And on the other hand I see repeated calls to action on social media (by people I count among my closest friends) to boycott all Monsanto products.
Having done some research I have yet to come to a definitive conclusion myself. Monsanto has, in the past, been found guilty of serious environmental and ethical misconduct, and is currently being accused of aggressive enforcement of its patents amongst mid-western farmers. These things are obviously inexcusable. That being said, often the problems being voiced against the company are to do with the food their seeds ultimately produce. The goal appears to be to smear science when it is in fact human error that is to blame for Monsanto’s reputation.
But that’s as far as I will go with my opinion on the matter. My goal on this blog is to tackle hardcore science and explain it without any jargon, so that is what I will attempt to do. I’ll start with conventional plant breeding, move on to looking at how modern molecular biology has accelerated plant breeding processes, and finally take a look at how true genetically modified (a.k.a. transgenic) plants are made.
Conventional Plant Breeding
Humans have been breeding plants for around 10,000 years. The basic goal of this breeding was to improve certain plant characteristics. For example wheat, a common cereal crop, has been bred to produce more and larger seeds than its wild ancestor, and to fight off common infections. These traits were all present in the wild ancestor, but over time farmers encouraged the consolidation of these traits into better and better strains of wheat. The diagram below shows a brief schematic of how this is done:
Enter Molecular Biology
In the mid-19th century Gregor Mendel showed, using pea plants, that traits (or phenotypes) are inherited in a predictable manner. Nearly a century later, experiments by Avery and colleagues (and later Hershey and Chase) showed that this heritable material, genes, was made of DNA, and in 1953 Watson and Crick revealed DNA’s double-helix structure. From these discoveries the field of molecular biology was born. New techniques started springing up left and right to manipulate DNA, to duplicate DNA, and, importantly, to sequence DNA. Knowing the sequence of an organism’s genome leads to a far greater understanding of the genes it contains, and therefore the day-to-day workings of its cells. It also allows scientists to track the movement of genes, a fact that is extremely useful in plant breeding. By using DNA sequencing technology, breeders can ensure that they select the plant that has gained a favorable gene without also assimilating any unwanted DNA.
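To make that concrete, here is a toy sketch of marker-assisted selection (the sequences and plant names are made up for the example): the breeder screens candidate plants for a sequence linked to the favorable gene, while rejecting plants that also carry an unwanted stretch of donor DNA.

```python
# Toy marker-assisted selection. Sequences and names are invented
# for illustration only.
FAVORABLE_MARKER = "ATGGCATTC"   # linked to the trait the breeder wants
UNWANTED_FRAGMENT = "TTAGGCCTA"  # donor DNA the breeder wants to avoid

def select(plants):
    """Keep plants carrying the marker but not the unwanted fragment."""
    return [name for name, seq in plants.items()
            if FAVORABLE_MARKER in seq and UNWANTED_FRAGMENT not in seq]

plants = {
    "plant_A": "GGG" + FAVORABLE_MARKER + "CCC",                # keep
    "plant_B": "GGG" + FAVORABLE_MARKER + UNWANTED_FRAGMENT,    # reject
    "plant_C": "GGGTTTCCC",                                     # reject
}
print(select(plants))  # ['plant_A']
```

Real breeding programs genotype thousands of seedlings this way, but the logic is just this filter applied at scale.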
We can therefore re-draw the above genetic cross as a molecular biologist would view it:
Vistive Brand Soybeans
Soybeans (like most other plants) produce fatty acids. These fatty acids come in a variety of different forms, but broadly speaking are long strings of carbon atoms decorated with hydrogen atoms. Soybeans contain linoleic acid, which contains two carbon-carbon double bonds. This renders it an “unsaturated fatty acid”. Saturated fatty acids, on the other hand, contain no such double bonds; the carbon atoms are “saturated” with hydrogen atoms. Chemical hydrogenation, a common facet of food processing, aims to remove these double bonds. But this process has the unfortunate side effect of producing trans fats, in which the carbon-carbon double bond assumes a different shape. This alternate conformation is rarely found in nature, hence the aptitude of trans fats for clogging up our arteries.
So Monsanto used molecular biology to speed up the conventional breeding process to vastly reduce the amount of linoleic acid their soybeans produced. They inserted no foreign DNA into the plants, they simply selected the plants that had the correct genomic sequence.
Generating a Transgenic Plant
Towards the end of the twentieth century, molecular biological techniques had become so advanced that scientists were able to move pieces of DNA, particularly interesting or useful genes, between organisms. In the case of agriculture, plant biologists started to insert genes into crop breeds that could impart resistance to the common pests that plagued them. While the overall effectiveness of these measures remains controversial, let’s take a look at how it’s done.
The first step is to select a “gene of awesome” that you want your plant to express. Monsanto’s Genuity Brand Roundup Ready crops, for example, contain a gene that imparts resistance to the herbicide glyphosate (trade name, Roundup). This has allowed farmers to use Roundup to control weed populations in their fields without killing the crop. This is desirable not only because Roundup is relatively cheap, but also because it is less likely to run off into drinking water supplies than other herbicides (which, given the controversy regarding its toxicity, is no bad thing).
Once you have your gene of awesome you need to put it into a plasmid. This circle of DNA will also contain at least one other gene, which can then be used to track the insertion of the gene of awesome. This is called a genetic marker, and plant biologists often use a gene called GUS. GUS is useful because in the presence of a particular chemical it will turn a plant blue.
Then you need to get your plasmid into a plant cell. This is done in a variety of ways, including my personal favorite, the “gene gun”. In this method plasmid DNA is applied to tiny particles of gold. These microscopic bullets are then fired at the plant, and the DNA is incorporated into the plant’s genome.
After growing the cells into seedlings, the GUS marker can then be used to select the seedlings that contain the gene of awesome. The new transgenic plants are then grown and propagated, et voilà! You have a genetically modified plant.
The most recent Monsanto-related headlines have pertained to Bt-corn being approved for sale at Walmart with no indication to the consumer that that’s what they’re buying. If you have a problem with Monsanto, then I fully understand why you might not want to eat their corn. However, the claim that it is poisonous to humans is, as far as I can tell, spurious.
Bt stands for Bacillus thuringiensis, a bacterium that produces pesticidal toxins. These toxins, called Cry proteins, attack the larvae of particular insect species (including moths, butterflies, beetles, and flies) and kill them. Cry proteins do this by recognizing proteins found on the cells of the larval gut wall. They then insert themselves into the membranes of these cells, forming a channel through which water can flow. When enough water flows into a cell it bursts, and when this happens to enough cells the larva dies. Importantly, the proteins that Cry recognizes are specifically expressed in these insects, which makes it a safe and targeted pesticide.
The use of Bt in agriculture dates back to the 1920s, when French farmers began using it for pest control, and Bt continues to be used extensively in approved organic pesticides (Dipel and Thuricide).
The Specter of Resistance
Recently it has become apparent that insects are increasingly becoming resistant to Cry. Understandably this has infuriated organic farmers, who rely on biological pesticides too. Monsanto has attempted to combat this through the use of “refuges”. Refuges are small amounts of non-Bt seed mixed into Bt products (about 5% of the seed is non-Bt); when planted, they form regions of the field where insects can flourish without relying on resistance. This reduces the selection pressure on the insects to become resistant to Cry.
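A toy model (mine, not Monsanto’s) shows why refuges help. Suppose resistant insects survive everywhere, but susceptible insects survive only on the refuge plants. Then the bigger the refuge, the more generations it takes for the resistance allele to become common:

```python
# Toy haploid selection model: resistant insects survive everywhere,
# susceptible insects survive only on the refuge (non-Bt) fraction of
# the field. Invented for illustration; real resistance genetics is
# diploid and far messier.

def generations_until_common(refuge_fraction, p0=1e-4, threshold=0.5):
    """Generations until the resistance allele frequency reaches the threshold."""
    p, gens = p0, 0
    w_resistant, w_susceptible = 1.0, refuge_fraction  # relative survival
    while p < threshold and gens < 10_000:
        # standard one-locus selection update
        p = p * w_resistant / (p * w_resistant + (1 - p) * w_susceptible)
        gens += 1
    return gens

print(generations_until_common(0.05))  # 4  — tiny refuge: resistance spreads fast
print(generations_until_common(0.5))   # 14 — big refuge: much slower spread
```

Even this cartoon version captures the trade-off: a larger refuge costs yield in the short term but buys many more seasons before the pesticide stops working.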
So…Monsanto. Friend or Foe?
I will leave that for you to decide, but would love to hear your thoughts in the comments section. Equally, if you have any questions, I will do my best to wrangle the information out of the internet, although the polarizing nature of this topic makes finding unbiased information challenging to say the least!
I just got back to lab after a few days in balmy North Carolina where I attended the amazing un-conference, ScienceOnline2012 (#scio12). There are lots of great posts appearing about the conference (such as this one from Ed Yong) that speak to the amazing atmosphere, the great conversations, and the liver-capacity of the ocean bloggers, so I thought I’d do something a little different…
Perrin Ireland (@experrinmentin) led a wonderful workshop at the beginning of the conference on the topic of Sketchnoting (#sciencescribe) and got me hooked on the idea. So here are my notes from the meeting. Some are better than others, obviously, but unlike other meetings I’ve been to, I suspect I will actually refer back to these notes! Enjoy!
P.S. Yes, I will be doing this more often.
P.P.S. Sorry some of the scans got cut off due to technical limitations. C’est la vie.
P.P.P.S. #scio12 ROCKED.
Featured image credit: Brian Reid