Our 21 students are working in labs from NC (Duke) to MA (Harvard and MIT), and on topics from computer languages to tissue formation. Join us here to read weekly updates from their time in the lab!

Visit the EXP page on Peddie website: peddie.org/EXP.

Saturday, July 6, 2013

Counting cells - Week 4 at Duke

This is Jocelyn again from Duke University, working on olfaction in mice. This is
my fourth week.
I thought the previous week was low key, but this week topped it. The first two
days were spent genotyping new litters of pups from the animal house and
analyzing microscope photos in ImageJ (with the help of Microsoft Excel).

For genotyping, Neha collected toe clips from the pups, and we soaked the
templates in 100 microliters of tail buffer and 10 microliters of proteinase K,
which breaks down the protein, and then put the tubes into the PCR machine. The
next day we took the tubes out and made a master mix (dNTPs, forward and
reverse primers, 10x buffer, and the diluted template). I ran 3 gels because there
were 27 tubes and each tube needed 4 wells (1 wt, 1 m, 2 wt, 2 m). 108 + 4 controls =
112 wells.
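For the curious, the well arithmetic above can be sketched in a few lines of Python. The 27 pups, 4 reactions each, and 4 controls come from this post; the wells-per-gel capacity is just an assumed number for illustration.

```python
# Gel-planning arithmetic: 27 tubes x 4 reactions + 4 controls = 112 wells.
# The wells-per-gel capacity below is an assumption, not our actual combs.
import math

pups = 27
reactions_per_pup = 4   # 1 wt, 1 m, 2 wt, 2 m primer sets
controls = 4

total_wells = pups * reactions_per_pup + controls
wells_per_gel = 40      # assumed gel capacity

gels_needed = math.ceil(total_wells / wells_per_gel)
print(total_wells, gels_needed)  # 112 wells, spread over 3 gels
```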

From Tuesday afternoon to Friday, Neha was away in California, leaving me to code and
count positive cells in the in situ photos. There were four folders of photos to
count. I created new Excel files to record the numbers. I counted the positive cells
blindly, which meant each photo had a randomized number so my results would
not be skewed. After all the counting, I recoded the photos to their genotype (WT
or M) and condition (Control or Test). The results are looking promising. Neha
said she'll take a look at them early next week.
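The blinding scheme works roughly like this. Here is a toy sketch of the idea with made-up file names (the actual coding scheme isn't shown in this post): each photo gets a random code, counting happens against the codes, and the genotype/condition is only revealed afterwards.

```python
# Toy sketch of blinded counting with hypothetical photo names.
import random

photos = ["WT_control_1.tif", "WT_test_1.tif", "M_control_1.tif", "M_test_1.tif"]

rng = random.Random(42)          # fixed seed just so the sketch is repeatable
codes = list(range(len(photos)))
rng.shuffle(codes)

# The counter only ever sees the random code, never genotype or condition.
blinded = {code: photo for code, photo in zip(codes, photos)}

counts = {code: 0 for code in blinded}          # filled in during counting
decoded = {blinded[code]: n for code, n in counts.items()}  # recode at the end
print(sorted(decoded))
```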
I've started planning my presentation for the lab, which I will give on
Thursday before my last day on Friday. It has been such an awesome experience
working with Hiro, Neha, and the rest of the team. I wish I had photos to show,
but I'll share them in the last week!

- It seems like my entries are getting shorter and shorter every week... hopefully my last entry will be more interesting!

Week 5 @ Microdynamic Systems Lab

While the fifth week was shorter than a regular week due to a break on Monday and Independence Day on Thursday, it nonetheless put a wonderful end to my research experience.

After learning to use the Robot Operating System (ROS) and programming patterns for an LED string in Python over the first four weeks, in the fifth week I connected my code to the actual LED string and finished testing.  Following are the videos I took of the LED patterns.

1. Asleep Mode:

2. Ready Mode:

3. Unbalanced Mode:


4. Linear Single (Gaussian Wave) Motion:

As my professor said on my first day at the lab, five weeks is not at all a long time for scientific research.  Nevertheless, this five-week experience was both challenging and fulfilling.  It exposed me to real-world scientific research, where individual work and collective effort combine. In addition to inspiring me about technology and management, the experience introduced me to many engineering concepts and pieces of equipment, honed my programming skills, and prepared me for independent problem-solving and collaboration in innovation.

I really appreciate the preparation and inspiration Peddie has always provided me with, and the opportunity that the Robotics Institute of Carnegie Mellon University has given me.  I hope I can make good use of my knowledge and experience and benefit others in the future.

Friday, July 5, 2013

Week 3 at the Gab Lab

Hi everyone. It's Michelle again, checking in from the Gabrieli Lab at MIT. Week 3 was quite slow, because most of the lab stopped coming in starting Wednesday. This week, I worked more with the CASL team, which is headed by postdoc ZhengHan, who, like everyone else, is very nice and encouraging. I began by organizing subject folders, grading homework and quizzes, and scoring various tests. Then, ZhengHan brought me down to help out with running Session 1s with the subjects. Session 1s are the first time we meet subjects. During these sessions, we give subjects a bunch of computerized tests, as well as interactive tests where we are the examiners. I received a script that I had to follow with all the subjects in order to instruct them on what to do, and downloaded the program 'PsychoPy' so I could run the PCPT test, which tests subjects' tonal recognition and analysis. For example, the test will play a sound, and subjects have to identify whether it has a rising, neutral, or falling tone.
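PsychoPy presents the actual PCPT stimuli and records the responses; just to illustrate the rising/neutral/falling judgment itself, here is a toy Python sketch that labels a pitch contour (a list of pitch samples in Hz) by its overall slope. The function name, threshold, and sample values are all my own assumptions.

```python
# Toy illustration of the rising/neutral/falling tone judgment.
# The 5 Hz threshold and the contours below are made-up example values.

def classify_tone(f0_samples, threshold_hz=5.0):
    """Label a pitch contour as 'rising', 'falling', or 'neutral'."""
    change = f0_samples[-1] - f0_samples[0]
    if change > threshold_hz:
        return "rising"
    if change < -threshold_hz:
        return "falling"
    return "neutral"

print(classify_tone([200, 210, 230, 250]))  # rising
print(classify_tone([220, 221, 219, 220]))  # neutral
print(classify_tone([260, 240, 215, 200]))  # falling
```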

PCPT set up (not the actual test)
However, I have not been licensed to administer the KBIT (an IQ test) on my own, because it involves interaction with the subject, and a good poker face! As a result, I'm meeting with Kelly, the psycho-educational evaluator, next Monday to go over administering the KBIT. Kelly will also teach me about the various tests I've been scoring for the past three weeks, to give me some context as to what I am actually doing.


Today was probably the most 'science-y' day I've had in the lab. Since only Zhenghan and 'Big Michelle' were at work, and only 1 subject came in for testing, I spent the day 'pruning' raw EEG data. EEG, which stands for electroencephalography, is the recording of electrical activity along the scalp (it's the thing where people put on a swim cap with a bunch of electrodes on it). Although electrodes are placed all over the scalp, Zhenghan explained that the ones near the eyes are used to record vertical eye movements (VEOG), which include blinking, as well as lateral eye movements. Because EEGs are super sensitive, noise, speech, and muscle movement all produce huge peaks in the data. When these peaks come from eye movements, they are considered garbage, and they usually contaminate the data from the scalp. Therefore, my job was to 'prune' these peaks, which means flattening the lines, using a process known as independent component analysis (ICA). This is important because if you simply edited out the peaks, you might be deleting sections of significant information from the scalp! Below is the ICA process:
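The real ICA was run in MATLAB, but the core idea of "pruning" can be sketched in a few lines: model the channels as a mixture X = A · S of independent components, zero out the row of S belonging to the eye component, and re-mix. The artifact flattens while every time point is preserved. The tiny matrices below are made-up numbers purely for illustration.

```python
# Toy sketch of ICA-style pruning (the real analysis used MATLAB).
# Zeroing a component's row of S and re-mixing flattens the artifact
# without deleting any time points from the recording.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

A = [[1.0, 0.5],               # mixing matrix: channel x component
     [0.2, 1.0]]
S = [[0.0, 0.0, 9.0, 0.0],     # component 0: eye-blink spike (garbage)
     [1.0, 2.0, 1.0, 2.0]]     # component 1: brain activity (keep)

S_pruned = [row[:] for row in S]
S_pruned[0] = [0.0] * len(S[0])   # "prune": zero out the eye component

X_clean = matmul(A, S_pruned)     # re-mixed, blink-free channel data
print(X_clean)
```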

Raw EEG Data. Scroll component activities on the left, scroll channel activities on the right.

I first had to identify which components (left side) corresponded with the eye movement channels (right side). These eye channels are the bottom two lines in the picture on the right. Next, I used the technical computing application MATLAB to prune the data:


In the figure above, the red lines show the pruned EEG data. As you can see, a lot of the blips are now straight lines, making it a lot easier to analyze later. 

Zhenghan was also kind enough to show me the results she had obtained from CASL's EEG sessions so far. She told me that during EEG sessions, subjects were tested on two types of language errors: syntactic and semantic. Both errors elicit strong responses in the brain, so huge peaks, called event-related potentials (ERPs), are created in the EEG. The semantic error induces an N400 ERP, while the syntactic error induces a P600 ERP. This is shown in the figures below (courtesy of S. J. Luck).


However, Zhenghan's data shows a fascinating trend: people are significantly better at processing one error over the other. Unfortunately, I don't have her graphs and diagrams (she said they were preliminary), but her figures indicated that people with a stronger N400 response had a weaker P600, and vice versa. Her graphs also hinted that people better at identifying one type of error in their native language had an easier time identifying the same error in a different language. Moreover, she had these cool, colorful maps of the scalp that showed which areas of the brain were activated during the N400 and P600. Her data suggested that even after 45 minutes of training, certain people began to utilize these specific areas when spotting errors in Mandarin or an artificial language. Basically, the big picture of EEG is to identify people who are adept at learning new languages, and her data seems to imply that this may be possible in 45 minutes!!

I finished the week sitting in on my second fMRI. Next week, I'm helping out with several S1 and fMRI 2 sessions, as well as meeting up with Amy to learn about stats and the brain. I'm excited!

Thursday, July 4, 2013

Week 4 @ Microdynamic Systems Lab

In the fourth week I designed the fourth pattern for a string of LEDs around a robot. The pattern is called "Linear Single", which means that the robot is moving linearly and the LEDs produce a single Gaussian wave to indicate the robot's direction of movement. The design works by taking the direction parameters output by a game controller and determining which LED indicates that direction.
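The lab's actual code isn't reproduced here, but the direction-to-LED mapping can be sketched roughly as follows. The ring size, wave width, and indexing convention are all my own assumptions: the direction angle picks a center LED, and each LED's brightness falls off as a Gaussian of its (wrap-around) distance from that center.

```python
# Rough sketch of a "Linear Single" Gaussian wave: map a movement
# direction onto per-LED brightness. Ring size and sigma are assumed.
import math

NUM_LEDS = 42

def gaussian_wave(direction_deg, sigma=2.0):
    """Per-LED brightness (0-1), peaked at the LED facing direction_deg."""
    center = direction_deg / 360.0 * NUM_LEDS
    levels = []
    for i in range(NUM_LEDS):
        # distance around the ring, wrapping past LED 0
        d = min(abs(i - center), NUM_LEDS - abs(i - center))
        levels.append(math.exp(-d * d / (2 * sigma * sigma)))
    return levels

levels = gaussian_wave(90.0)
print(max(range(NUM_LEDS), key=levels.__getitem__))  # brightest LED index
```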


Other than designing the pattern, I added a "publisher/subscriber" relationship to the "topic" of the direction of the robot's movement. This allows the robot's movement, and hence the LED pattern, to be controlled by commands inside the graphical user interface (GUI) rather than the game controller.
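In ROS this is done with rospy's Publisher and Subscriber classes on a named topic; as a self-contained illustration of the pattern itself, here is a minimal in-process sketch (the class, topic name, and message shape are made up, not the lab's code). Anyone can publish to the topic, and every subscriber's callback fires, which is exactly why the GUI and the game controller can drive the same LED code interchangeably.

```python
# Minimal publisher/subscriber sketch (ROS does this across processes).

class Topic:
    """A named channel: publishers push messages, subscribers get callbacks."""
    def __init__(self, name):
        self.name = name
        self.callbacks = []

    def subscribe(self, callback):
        self.callbacks.append(callback)

    def publish(self, message):
        for cb in self.callbacks:
            cb(message)

direction_topic = Topic("/robot/direction")   # hypothetical topic name

received = []
direction_topic.subscribe(received.append)    # e.g. the LED pattern node

# Either the GUI or the game controller can publish to the same topic:
direction_topic.publish({"x": 1.0, "y": 0.0})
print(received)
```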


Eventually, a command inside RViz (a GUI) called "2D Nav Goal" can give instructions to the robot, and the LED pattern is determined accordingly.


As promised in last week's log, I have a cool picture this week. With my mechanical engineer friend from the lab, I watched a foam sheet being cut with a laser cutter.


Wednesday, July 3, 2013

Week 3 @ Microdynamic Systems Lab

In the third week I used Python to design three patterns for a string of LEDs that show the mode of the robot to which the LEDs are attached.

The first mode is "Ready", which indicates that the robot is turned on, charged, and prepared for movement instructions. The pattern has 36 LEDs emitting dim blue light in the background and 6 "rotating" LEDs emitting bright blue light in the foreground. The pattern is created by re-editing the "string" of LEDs over time and changing the RGB values of each LED.
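One way the "Ready" animation might look in code, as a rough sketch: each frame is a list of RGB tuples, with 36 dim-blue background LEDs and 6 bright-blue ones whose positions rotate one step per frame. The RGB values, spacing, and frame logic are my own guesses, not the lab's implementation.

```python
# Sketch of the "Ready" pattern: 36 dim + 6 bright rotating blue LEDs.
# Colors and frame logic are assumed, not the lab's actual code.

NUM_LEDS = 42
DIM_BLUE = (0, 0, 40)
BRIGHT_BLUE = (0, 0, 255)

def ready_frame(t):
    """Return the RGB string for frame t, with 6 bright LEDs rotated by t."""
    leds = [DIM_BLUE] * NUM_LEDS
    for k in range(6):
        # 6 evenly spaced bright LEDs, shifted by t each frame
        leds[(t + k * NUM_LEDS // 6) % NUM_LEDS] = BRIGHT_BLUE
    return leds

frame = ready_frame(0)
print(frame.count(BRIGHT_BLUE), frame.count(DIM_BLUE))  # 6 bright, 36 dim
```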



The second mode is "Asleep", which indicates that the robot has not been active for a while, currently assumed to be a minute. The pattern is designed so that the LEDs alternate continuously between bright and dim.

The third mode is "Unbalanced". Because the BallBot robot designed by MSL balances on a spherical "foot", or ball, it has to constantly balance itself. When the BallBot has its auxiliary legs down, is being tested by researchers, or has lost control of its balancing system, the LEDs flash bright red to alert nearby people.


Also in the third week, I observed a mechanical engineer in the lab milling a Kydex sheet. (Sorry, I forgot to take a picture. There is a cool one in next week's log.)

My third-week experience also taught me to be more proactive in seeking explanations of others' code and solutions to a problem. For example, when designing the "Unbalanced" mode pattern, I was confused for a long time by the original code on which I was building. I then tried to forget the original code and come up with my own. Eventually, I found myself back on the path of the original code, and I was able to edit it and finish designing the pattern.

NLP week 2, 3: evaluating results and more coding

My name is Jiehan Zheng, and I work at CCLS at Columbia University on a natural language processing project on extracting social networks from text, with my mentor Apoorv, his colleague Anup, and Dr. Rambow.  Now I am into my third week here, and I am going to recap what I did in my second week and the first half of this week.  In the first week, I worked on visualizing the generated social network.

I worked on postprocessing and evaluating the results from the NER (named entity recognition) system first.  A named entity recognizer takes raw text as input and outputs the locations of grouped entity mentions (spans of character offsets counting from the very beginning of the text, and by "grouped" I mean entity mentions of the same entity are grouped together in an XML structure under a node) and the types of entities (organizations, people, etc.).  Our team did not write the NER system ourselves because NER is not Apoorv's focus--his thesis is on social network extraction.  Anyway, we have to know how well the NER is performing and try to "improve" its results without digging into the NER itself.

There were two problems.  First, the NER sometimes mistakenly splits what is meant to be a single entity into multiple entities.  This messed up the generated social network, because then you have multiple vertices for the same person, distracting the viewer.  For instance, for Alice in Alice in Wonderland, entity #1 (the first entity that the NER gave us) had 67 entity mentions of "alice" and 3 of "poor alice", among many other entity mentions like "she", "her", etc.  Entity #81, meanwhile, had 38 mentions of "alice", 3 mentions of "alice hastily", etc.  We need to merge these.  Clearly, we humans know that 1 and 81 both refer to the main character Alice in the novel, yet how can we have computers make similar decisions?

Our solution was to find all the different entity mentions in the output and create feature vectors from them.  For the sake of simplicity, let's say that in a NER output, if we ignore all the words like "she" and "he", only "alice", "poor alice", "a little girl", "queen", "alice hastily", "her sister", "king" were mentioned.  We will create a feature vector of (# of occurrences of "alice", # of occurrences of "poor alice", ..., # of "king") for each entity in the output.  Maybe E1 = {"alice"x67, "poor alice"x3}; then E1 will be given a feature vector of (67, 3, 0, 0, 0, 0, 0).  Similarly, E81 will have (38, 0, 0, 0, 3, 0, 0).  Then if we think of them as vectors in 7-dimensional space (about which I have no idea) and calculate their cosine similarity (which I just learned last week from Apoorv), they will have a surprisingly high similarity (> 0.99).  The implementation of entity merging generates a mapping from the IDs of one or more entities to the ID of one entity (this part of the code is actually in the screenshot).  Say 1, 13 and 81 are all actually Alice; then we will have a map that maps 13 to 1 and 81 to 1.  Then when we present the result to users, I check whether the entity is in this duplication mapping.
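The cosine-similarity check can be written out in a few lines, using the exact vectors from the example above (the function here is a straightforward textbook implementation, not the project's actual code):

```python
# Cosine similarity of the two Alice entities' feature vectors:
# E1 = (67, 3, 0, 0, 0, 0, 0) and E81 = (38, 0, 0, 0, 3, 0, 0).
import math

def cosine_similarity(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm

e1 = (67, 3, 0, 0, 0, 0, 0)
e81 = (38, 0, 0, 0, 3, 0, 0)
print(round(cosine_similarity(e1, e81), 4))  # comfortably above 0.99: merge
```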

Running the code to merge entities and guess names

The second problem was that the NER gives us no information about a person's real name or best name.  I wrote some code to address this problem.  For instance, for entity #6 we have (after removing words like "she" and "he"): {"a white rabbit"=1, "the rabbit"=16, "the white rabbit"=11, "the white rabbit, who was peeping"=1, "the white rabbit, who said,"=1}.  Clearly this entity is talking about "the rabbit".  In this case, "the rabbit" is the most frequently used entity mention, so my program picks "the rabbit" as the best name.  The reason we remove the common pronouns is that otherwise we would see a lot of "she" and "he" being picked as best names, which wouldn't make sense, because no one wants to see a social network graph with vertices called "she" and "he" all over the place interacting with each other.  When the entity mention counts produce a tie, we choose the first entity mention, because most of the time the name is clear when a character is formally introduced.
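The best-name heuristic can be sketched like this, using the rabbit counts from the example above (a sketch of the idea, not the project's actual implementation): after dropping pronouns, pick the most frequent mention, breaking ties in favor of the mention seen first.

```python
# Best-name heuristic: most frequent mention wins, earliest mention on ties.
# The counts are the entity #6 rabbit example from this post.

mentions = {                      # insertion order = order of appearance
    "a white rabbit": 1,
    "the rabbit": 16,
    "the white rabbit": 11,
    "the white rabbit, who was peeping": 1,
    "the white rabbit, who said,": 1,
}

def best_name(mention_counts):
    # max() keeps the earliest key on ties, since dicts preserve order
    return max(mention_counts, key=mention_counts.get)

print(best_name(mentions))  # "the rabbit"
```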

After this, I wrote a simple script in Python to crawl a website to obtain test text data for later use.

Then I wrote a program to evaluate the NER by comparing its output against a gold standard produced by paid human annotators.  I used simple exact span matching, and it didn't work very well.  For instance, if there is a span 10000-10002 corresponding to "cat", and another span 9996-10002 corresponding to "the cat", my current program gives a score of zero--yet this "cat" vs. "the cat" mistake is not a serious one and shouldn't be punished so badly.  Because I was pulled away to work on other programs, I haven't implemented a more flexible span matching method yet, but I will.  After this, we also map the entities into a multidimensional space and calculate the similarity between each output entity from the NER and each entity from the gold standard to see how similar they are.
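One possible softer scorer (my assumption, not the eventual implementation) would credit a predicted span by its character overlap with the gold span, so near-misses like "cat" vs. "the cat" get partial credit instead of zero:

```python
# Overlap-based span scoring instead of exact match.

def overlap_score(pred, gold):
    """Jaccard-style overlap of two (start, end) character spans."""
    inter = max(0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = max(pred[1], gold[1]) - min(pred[0], gold[0])
    return inter / union if union else 0.0

# "cat" (10000-10002) vs. gold "the cat" (9996-10002): partial credit
print(overlap_score((10000, 10002), (9996, 10002)))   # 1/3 instead of 0
print(overlap_score((10000, 10002), (10000, 10002)))  # 1.0 for exact match
```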

This Monday I started another small project using Java, HTML and JavaScript to help Apoorv, Anup and Dr. Rambow analyze experiment results for a paper that is due this Friday (I know, it's so close to the deadline now...)!  Basically, the program takes machine learning example output from Java, displays it in a webpage, and dynamically inserts columns from experiments provided in JSON format that map example IDs to scores.  It also colors results that agree with the gold standard green, and red otherwise.  The user can type commands in the web console to filter rows (the one in the screenshot means that I want to see only the examples that model 1 got right but on which models 2, 3, 4 and 5 all failed).  It makes comparing machine learning models much easier.

I expected to start on sentiment analysis in week 2, but obviously that didn't happen... but working on postprocessing and making all kinds of utilities is fun, too.  Hopefully I will start the coolest part of the research soon!

By the way, we still go to work on July 4th!

Week 1-- The enrichment process of fruit flies

Hello everyone. This is Sandra. The first week at my lab (the Evolution and Behavior Lab) at Harvard University was awesome. I am working directly with my PI, Dr. de Bivort (but he likes me to call him Ben), and he is very easygoing, helpful and communicative. I am lucky to have this chance to work with him.

Day 1
Ben gave me a tour around the Rowland Institute, where my lab is located. He went through all the safety instructions in the lab and introduced me to two machines that I would use in the following weeks: the FlyVac and the Y-maze machine. The FlyVac is used to measure phototactic personality, while the Y-maze measures the locomotive behavior of fruit flies.
 
Day 2
In the morning I needed to go shopping in Central Square in Cambridge. Since my project is "How does environmental enrichment affect the behavioral diversity of genetically identical fruit flies?", Ben asked me to make an enrichment plan for the fruit flies and go to the art store in Central Square to buy supplies that could be used for enrichment. I bought some pipe cleaners, small pom poms, drinking straws, thin sticks of balsa wood, and some small tubes that fruit flies could climb on.


 
Day 3
I practiced using the microscope to separate female fruit flies from males, and virgins from non-virgins. It was really amazing. After that, Ben asked me to read the first chapter of a book called "Fly Pushing". It is useful for learning basic fly husbandry and the steps for collecting flies for crosses. After I finished the first chapter, I started setting up the trials for my experiment. There are the same number of enrichment tubes and control tubes, and there are three females and two males in each tube. After the fruit flies in all the tubes were awake, I put them into the incubator, which is kept at 25 degrees and 60% relative humidity.
 
Day 4
I learned how to expand the fruit flies and created more trials for my experiment. At the end of the day, Ben gave me a suggestion: create two extreme conditions of environmental enrichment. One is what I have been doing so far (enrichment in the tube), and the other is to use a tall, big cage. I need to see how the results differ from each other.
 
Day 5 and Day 6
I was simply expanding the fruit flies and creating more trials, because I was waiting for more enrichment supplies (the big cage) to be delivered.

Overall, the first week was a fruitful week for me. I am looking forward to using the two machines and collecting data in the coming weeks!

Tuesday, July 2, 2013

Week 3-4 in Princeton

My name is Jacky Ziwen Jiang, and I am working in the McAlping lab at Princeton this summer. It has been four weeks, and our research has been going pretty well.
As the first two weeks were mainly about planning and ordering chemicals, we moved into actual lab work these past two weeks.
The main material we need to prepare is thylakoid, the part of the chloroplast where photosynthesis takes place. Getting thylakoids is not as simple as just cutting leaves into small pieces. First of all, we needed to choose the right plant leaves for the thylakoid extraction. After reading a number of articles, we found that spinach leaves have a relatively high concentration of chloroplasts, so we chose them for our experiments. After we bought the spinach, there were two more buffers we needed to make for the extraction: a grinding buffer and a washing buffer.
The whole process is filled with complex procedures. First of all, precisely measuring the different chemicals we need takes constant attention. After that, I used a lab blender to blend the leaf pieces with grinding buffer. Once we have the solution, we put it into the centrifuge to get a pellet. During this process, I also learned how to convert between rpm and g, the units that describe rotation strength. The pellet then goes through another step, called resuspension: we put the pellet into the washing buffer and resuspend it. Then we go through a few more rounds of centrifugation to get the concentrated thylakoid.
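The rpm-to-g conversion mentioned above follows a standard formula: relative centrifugal force depends on the rotor radius as RCF = 1.118 × 10⁻⁵ × r(cm) × rpm². Here is a small sketch of it; the radius and speed below are example values, not our actual rotor's.

```python
# Standard rpm <-> RCF conversion; example numbers, not our rotor.

def rpm_to_rcf(rpm, radius_cm):
    """Relative centrifugal force (x g) for a given speed and rotor radius."""
    return 1.118e-5 * radius_cm * rpm ** 2

def rcf_to_rpm(rcf, radius_cm):
    """Inverse: speed in rpm needed to reach a target RCF."""
    return (rcf / (1.118e-5 * radius_cm)) ** 0.5

print(round(rpm_to_rcf(3000, 10)))  # ~1006 x g for 3000 rpm on a 10 cm rotor
```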

The solution looks clear, green and fresh. These two weeks' experiments gave me a great opportunity to learn about thylakoids, deepening my understanding of what I have learned in class. It is a great example of putting classroom knowledge into practice in actual experiments.

Monday, July 1, 2013

Week 1-2 at the Mendelsohn Lab

Hi, my name is Jason, and I am working at the Mendelsohn lab for the summer. I am beginning my second week here at the lab and already feeling more comfortable. When I arrived at the Columbia Irving Cancer Research Center and stepped into the lab, almost 10 days ago, I was nervous and really early. I got there thirty minutes early thinking that was the right thing to do, but ultimately waited for the next thirty minutes worrying that I was in the wrong building or that they had forgotten about me. Carolina, one of the researchers at the lab, arrived minutes after my minor panic attack, and I was relieved. She had been told by my PI, Dr. Mendelsohn, that this was my first day, and showed me to the lab. I soon got to meet all six other students and researchers. They were a mix of a medical student, graduate students, and postdocs, and all welcomed me to the lab. The Mendelsohn lab studies urological cell biology, specifically bladder cancer. Because bladder cancer has many variations and can only be differentiated through histology (the study of tissue through examination under the microscope), the course of treatment may be inaccurate depending on what each histologist sees. This makes potential treatment options useless if the histologist makes the wrong diagnosis. The primary goal of this lab is to find certain genetic markers in the mass growing on the bladder to determine which kind of cancer it is (transitional cell bladder cancer, non-muscle invasive bladder cancer, invasive bladder cancer, or squamous cell bladder cancer). For the first couple of days, I spent my time following, watching, and reading. I eventually learned what my primary job would be at the lab; put simply, it is to paraffin section and to stain. Paraffin sectioning is the process of cutting thin films (5 µm) of tissue, in my case mouse bladder tissue or a mouse embryo, embedded in a block of paraffin wax, to be put on slides for further examination.
Dan, a medical student, taught me how to paraffin section, and it was very difficult at first. The thin film of tissue is so delicate and sticky that it either folds on itself or just rips apart before I can even get close to putting it into the water and onto the slide. After about two days, I got the technique down. Once I finish the paraffin sectioning, I move the slides on for staining. I have learned two types of staining so far: ABC staining and H&E staining. I just follow the protocol and time how long I put the slides into the numerous solutions; it is very straightforward. (Actually, tomorrow I am going to learn a new staining technique called fluorescent staining. I don't really know what it is yet, but it sounds very interesting.) After an hour of reading, if I have time, I end my day hopping on the A train and then the M66 to make my way home.

--- [I'll put up photos on the next post.]

Fluorescence: The Crazy World of Wavelengths - Week 3

My name is Colton and I’m visiting the Buccella Laboratory at New York University. The lab is part of the chemistry department and we are studying fluorescent compounds that bind with magnesium.

This week went by pretty fast, and I'm already sad that I only have 3 weeks left. As usual, I was up at 4:30 am to catch the train and be in the lab by 9:30 am. As soon as Sarina came in, we began that day's experiment: testing changes in the spectroscopic properties of a compound in different solvents. You may be looking at your screen asking yourself, "Well, what does all that mean?" Spectroscopy is the study of how molecules interact with radiation (light). We are testing the fluorescent properties of a molecule: essentially, how much and what type of light a molecule emits, and how that changes according to the solvent it is in.

First, we had to make our solutions using the compound 1C.
Compound 1C
(SCS_2_185 & CK_1_06)
Using DMSO (dimethyl sulfoxide), we diluted 10 µL (microliters) of 1 mM (millimolar) compound 1C to a 2 µM (micromolar) solution. We did the same in ACN (acetonitrile). We then began our tests on the UV-Vis and the fluorometer. Using the UV-Vis, we were able to find the absorption maximum of each solution. The absorption maximum is the wavelength with the greatest absorbance for a compound. That maximum is then used as the excitation wavelength to gather emission data on the fluorometer. Basically, that wavelength of light is used to excite the molecule. The emission scan tells us the wavelength that is emitted with the greatest intensity when the compound is excited with that given wavelength. The emission maximum (the wavelength emitted with the greatest intensity for a given excitation wavelength) is used in the excitation scan. The excitation scan tells us what wavelength is necessary to have a compound emit a certain wavelength (essentially confirming the absorbance scan). It was unbelievable to see how much the fluorescence intensity changed in different solvents. Not only did it decrease, but it also shifted left or right on the light spectrum. Left shifts move towards blue light, and right shifts move towards red light. Here's a graph to give you a visual of the data we collect.
Absorbance Data of Compound 1C in Various Solvents. Peaks represent the absorbance maximum.
(CK_1_06/16)

Fluorescence Data of Compound 1C in Various Solvents. 
(CK_1_06/16)
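Two calculations sit behind the scans above: the C1·V1 = C2·V2 dilution (10 µL of 1 mM stock down to 2 µM) and picking the absorption maximum out of a wavelength scan. Here is a rough sketch of both; the spectrum values are made up for illustration.

```python
# Dilution math (C1*V1 = C2*V2) and finding the absorption maximum.
# The spectrum values below are invented example data.

def dilution_final_volume(c1_uM, v1_uL, c2_uM):
    """Final volume (uL) needed to dilute concentration c1 down to c2."""
    return c1_uM * v1_uL / c2_uM

# 10 uL of 1 mM (= 1000 uM) stock diluted to 2 uM:
print(dilution_final_volume(1000.0, 10.0, 2.0))  # 5000 uL, i.e. 5 mL total

# Absorption maximum = the wavelength with the greatest absorbance:
spectrum = {350: 0.12, 375: 0.31, 400: 0.47, 425: 0.29}  # nm -> absorbance
lambda_max = max(spectrum, key=spectrum.get)
print(lambda_max)  # used as the excitation wavelength on the fluorometer
```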
Monday was a long day, and we didn't finish up until about 7 pm. That night we were tasked with planning an unplanned experiment to start the next day. So for the time being, my experiment was put on hold, because we needed the instruments for this new experiment. Last week I mentioned that Brismar and I were testing a compound, and as it turned out, we needed to check the compound's binding ability with magnesium in the presence of ATP. The way the compound is designed, it only needs to bind with magnesium at two places, and magnesium has four binding sites. This may seem insignificant, but it's not, because it means magnesium can still bind with ATP, which is not beneficial to our research. The goal of the fluorescent sensors the grad students and Dr. Buccella are designing is to locate free magnesium in the cell, not magnesium bound to other compounds and proteins. So for the rest of the week, that would be the project that Sarina, Brismar and I worked on.

Before testing anything, we had to make solutions of ATP and magnesium-ATP. It was much more challenging than we were expecting. The original concentration of ATP we wanted in each solution would require more ATP than we had. When we figured out the most concentrated solution we could make with the ATP we had, we hadn't accounted for the Kd (dissociation constant) of ATP, which would not allow us to make the concentration we wanted, because the ATP wouldn't dissolve in such a small amount of solvent. After an hour or two of thinking, plus various calculations, we found a concentration that worked. So Tuesday was spent making our solutions, and we were finally able to begin the experiment. The first test we did was with the magnesium-ATP solution. As we proceeded with the emission scans, we began to see a decrease in fluorescence intensity as we added more magnesium-ATP to a solution of the sensor. This was really odd. We figured the ATP binding with magnesium might quench the fluorescence a bit, but not decrease it as we added more. We did some thinking with Dr. Buccella, and we discovered that the equilibrium of the reaction was slow. So when we proceeded with the second trial, we allowed the reaction to reach equilibrium after each addition of magnesium-ATP. It turns out this was all we needed to do: instead of decreasing, the fluorescence increased, though not as much as when ATP is not present, which means ATP is affecting the fluorescence. (Always do experiments twice... you never know what could happen.) The rest of the week consisted of us repeating this experiment and doing the same experiment with just ATP, to see how the sensor is affected by ATP with and without the presence of magnesium.

To sum the week up, it was very exciting with many surprises. For me one of the best parts was seeing and helping figure out why we were getting the results we got and how we could test them further. It was a lot of critical thinking, but that’s never a bad thing!

    

STAMs-- weeks 2&3 at the Murphy lab

Richard here, studying learning and memory in C. elegans in Dr. Murphy's lab at Princeton.

For the past two weeks, the primary focus of my research has been to perform short-term associative memory training sessions (STAMs) on wild type (N2) and egl-4(ky95) worms.  I may have given a brief description of STAMs in an earlier post, but I'll go into more detail this time:

The experiment starts off with several plates of a strain of worms, which have been bleached so that the worms being tested are all about the same age.  Using M9 buffer, I wash the worms off one of the plates to serve as my naive testing group-- I examine the worms' responsiveness to the chemical butanone without any prior conditioning that would enhance their response.  Then, I wash the rest of the worms off the plates and into a 15 mL tube, where they starve for about an hour.  When this hour has passed, I transfer the worms onto several conditioning plates with food, and spot a small amount of butanone on each plate so that after an hour of conditioning, the worms will have developed a strong association between food and butanone.  To test this association, I prepare chemotaxis plates, which look like this:
Chemotaxis assay of wild type right after conditioning-- all the worms are attracted to butanone (left)
The dots on the right and left are both spotted with sodium azide, which stops the worms from moving, and the right and left dots are spotted with ethanol and butanone, respectively.  I put about 100-300 worms on the dot at the bottom, called the origin, and after an hour, I take a picture of the plate and use a digital sorting program to count the worms.  This entire process is called a chemotaxis assay, and I need to perform one at various time points: right after conditioning (0 hr), 30 minutes after conditioning, 1 hr, 2 hr, 4 hr, and 6 hr, using three chemotaxis plates per time point per strain in order to ensure accuracy. The worms being tested at later time points are placed onto hold plates, which contain food but no butanone. Factoring in the two hours it takes to starve and then train the worms, the entire experiment lasts 8 hours.  However, there are large chunks of time in between, which gives me time to perform other preparations needed for future STAMs, and time to just relax.
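The worm counts from each photo typically feed into a chemotaxis index. The lab's exact formula may differ, but a commonly used version is CI = (worms at butanone − worms at ethanol) / total worms scored, so +1 means full attraction to butanone and −1 means full avoidance. A quick sketch with made-up counts:

```python
# A common chemotaxis index (the lab's exact formula may differ):
#     CI = (worms at butanone - worms at ethanol) / total worms scored

def chemotaxis_index(n_butanone, n_ethanol, n_origin=0):
    total = n_butanone + n_ethanol + n_origin
    return (n_butanone - n_ethanol) / total if total else 0.0

# Right after conditioning, nearly all worms head for butanone:
print(round(chemotaxis_index(180, 12, 8), 2))  # 0.84
```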

My second week, I performed three STAMs with just the wild type strain, to get acquainted.  My third week, I performed three STAMs with both the wild type strain and the egl-4 strain.  It is known that the egl-4 strain retains its association between food and butanone for several hours, unlike wild type, which loses this association almost completely 2 hrs after conditioning.  But during all three of the experiments I performed last week, the egl-4 strain actually displayed even worse chemotaxis towards butanone than wild type did, which is the exact opposite of what was expected.  Two possible explanations immediately come to mind: 1) I am a genius who has just proven the scientific world wrong, or 2) I'm an idiot who, despite having performed the same exact experiment three times, still managed to completely screw up.  Option number 1 sounds flattering, but is highly unlikely.  Option number 2 seems more realistic, but while there are many things at which I am inept, I am certainly no idiot, never have been, and I am 150% sure that I performed these experiments with as impeccable timing as I could achieve.  With that said, at this point I'm not sure what exactly went wrong, but now that my grad student, Geneva, along with the rest of the lab, is back from the International Worm Meeting at UCLA, hopefully we can figure something out.

In the meantime, I was able to cross my egl-4 mutants with my crh-1 mutants to produce heterozygotes, and then allow those to self-fertilize to produce possible double mutants.  To verify whether any of the offspring I chose are indeed double mutants, I performed PCR and then ran a gel.  However, if there are serious problems with my egl-4 strain, as evidenced by my extremely odd results from my egl-4 STAMs, then this pursuit may be in jeopardy.  All I can do is hope for the best.
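The reason the PCR genotyping step matters: if the two genes are unlinked, only 1 in 16 of the self-progeny from a double heterozygote should be homozygous for both mutations, so most of the worms you pick won't be the ones you want. A quick sketch of that expectation (my own illustration, not anything from the lab; it just enumerates the 4 x 4 gamete combinations):

```python
from itertools import product

# Expected fraction of double homozygous mutants when a worm heterozygous
# at two unlinked loci (egl-4/+ ; crh-1/+) self-fertilizes.
def double_mutant_fraction():
    # Each gamete carries one allele per locus: 4 possible gamete types.
    gametes = list(product(["egl-4", "+"], ["crh-1", "+"]))
    # Selfing pairs every gamete type with every other: 16 combinations.
    offspring = list(product(gametes, gametes))
    # Double mutants are homozygous mutant at BOTH loci.
    doubles = [
        (g1, g2) for g1, g2 in offspring
        if g1[0] == g2[0] == "egl-4" and g1[1] == g2[1] == "crh-1"
    ]
    return len(doubles) / len(offspring)

print(double_mutant_fraction())  # prints 0.0625, i.e. 1/16
```

With odds like that, running a gel on each candidate is the only way to know which picks actually carry both mutations.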

PROJECT!!!!

Hey, it's Katie again from the Children's Hospital of Philadelphia.  After 7 long months of waiting, I finally have my project.  In Dr. Hodinka's lab, the primary goal of research is to improve clinical outcomes.  With that in mind, he has assigned me two new projects.  The one that has taken up the majority of my time is the HSV (Herpes Simplex Virus) comparison study.  Mike (a med tech) and I are comparing 5 different HSV assays (three type-specific and two type-independent).  We are waiting for one more probe (a PCR reagent that emits fluorescence when replication is occurring) before we can officially begin the study.

My other project is developing a new coronavirus assay.  Coronavirus is a respiratory virus best known for the SARS outbreak a few years ago.  However, that outcome is pretty rare, and coronavirus infection usually results in the common cold.  The virus is actually named for the crown-like (corona) halo of spike proteins visible around each particle under an electron microscope. The assay I will be developing tests for 4 strains of the virus.  So far this project has involved pulling past respiratory samples and doing a lot of paperwork.  At this point I have over 700 patient samples on file, and that is just from one month!  Once we get a little deeper into the project I will fill you guys in on more details, but I just wanted to check in!

Product Characterization - Week 2

Hi, I'm Harry, and I'm currently working with Dr. Ballatore in a chemistry lab at UPenn, focusing on the chemical synthesis of isosteres.

At the end of the previous week we were trying to figure out the content of a product that was not what we expected. To further characterize the compound, in addition to LC-MS, the post-doc Brian took me down to the nuclear magnetic resonance (NMR) machine. Apparently the magnetic field around the machine is so strong that if you had a credit card in your pocket it would be deactivated. Brian tried to explain to me how the NMR works in simple terms, because the actual theory requires chemistry and physics knowledge well beyond my level. Basically, the NMR applies a magnetic field that can flip the spin state of certain atomic nuclei (each nucleus can occupy one of two possible spin states), and the machine measures the amount of energy required to make that flip. The energy required is presented on the resulting graph as a chemical shift, which is essentially the position of a peak on the graph. In a proton NMR, the integration of each peak shows the number of protons at that location; in a carbon NMR, each peak represents one chemically distinct carbon. Because the energy required to flip the spin depends on the chemical environment the atom is in (which simply means the other atoms around it in the chemical structure), each proton or carbon in a specific location will have a specific range of chemical shift. So if you know that a carbon atom in a carbonyl group will always show up at the left end of the graph, a peak at the left end indicates that you have a carbonyl carbon. The NMR essentially gives us more information on the structure of the compound that we have made.
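To give a rough idea of how peak position maps to structure, here's a toy lookup using approximate textbook carbon-13 shift ranges. This is my own simplification for illustration, not the software the lab actually uses, and real spectra need far more context than a range table.

```python
# Rough textbook regions of a 13C NMR spectrum (chemical shift in ppm).
# Higher ppm = further "left" on the conventional spectrum display.
CARBON_REGIONS = [
    (160.0, 220.0, "carbonyl (C=O)"),
    (100.0, 160.0, "aromatic / alkene"),
    (50.0, 100.0, "C-O (ethers, alcohols)"),
    (0.0, 50.0, "alkyl"),
]

def assign_region(shift_ppm):
    """Assign a 13C peak to a rough functional-group region by its shift."""
    for low, high, label in CARBON_REGIONS:
        if low <= shift_ppm < high:
            return label
    return "outside typical range"

print(assign_region(198.5))  # a peak near 200 ppm lands in the carbonyl region
```

So when a peak shows up at the far left (around 160-220 ppm), that's the "you have a carbonyl carbon" conclusion Brian was describing.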

The NMR result certainly did not represent the compound we wanted, so while Dr. Ballatore and Brian were trying to figure out what was going on with the NMR (I can't interpret an NMR graph), Dr. Ballatore had me set up another reaction to deprotect the compound. The isobutane that was attached in the first step of the reaction was a "protection" of the reactive carbonyl oxygen. Once the compound was deprotected, we ran another LC-MS to see what was in it. As expected, the largest peak showed the same molecular weight as before minus an isobutane. After some guessing around, Dr. Ballatore figured it out: the reaction had produced a large amount of dimers, compounds containing two cyclopentanediones. It was pretty amazing how he just guessed the right one.

While all this was happening, we had set up a separate reaction that was a lot simpler than the one we were working on: it only had one step and didn't require reflux or deprotection. The reaction still uses cyclopentanedione, and it basically adds the benzyl ring on the other side of the structure. Changing the location of the benzyl group can affect the compound's acidity and its ability to penetrate membranes.

After figuring out what went wrong, we redid the reaction and this time modified the run-time to minimize dimerization.

$3,600 Responsibility and Mock Life Span Experiment! -Week 2

(Rhea - CHOP - Mitochondrial Disease Lab)

Now that I am much more acquainted with the lab members and the way the lab runs, I'm being given a lot more responsibility. After learning to pour plates and spread bacteria, Fred (an undergrad) tested my skills by asking me to pour and spread my own plates without supervision. Taking great precaution, I made sure to follow the protocol correctly (e.g., I didn't blow up the autoclave, and I didn't let the agar solidify in the flask I prepared it in). Fred meticulously checked my final products and informed me that my plates were completely contamination-free (that's a huge deal!!). Now if the lab is ever running out of plates or solutions, I can be called upon to help them out. Just the other day during my free time, I made 2 liters of S. Basal, a salt solution used to maintain the worms, and 75 new plates for Zsoka to use.

Later in the week, Julian taught me about the biochemistry work in the lab. We test carbon-13-labeled worms under different conditions for their organic acids, amino acids, and protein abundance using gas chromatography/mass spectrometry and high-performance liquid chromatography. To make the samples for each type of testing, the worms have to be grown synchronously (all at the same life stage) and under the specified conditions (e.g., with drug, without drug), and then collected into vials. Once in the vials, they need to be centrifuged, ground up with a little drill, and treated with certain chemicals to ensure the best results. After showing me how to make samples with the first two vials, Julian left me to do the rest, saying "you probably have a steadier hand and a better eye than me anyway" in his thick Russian accent. He also reminded me that every one of the samples costs us $150 to put through testing, so I made sure to be careful with each one, and most importantly I made sure not to mix up the order of the samples... otherwise we wouldn't know which results corresponded to which sample!
Overall my second and a half week in the lab has been even more exciting than the first and the best part is that my PI, Dr. Falk, told me that at this rate, I could be starting my actual life span project next week! Woo hoo!

Sunday, June 30, 2013

Linksvayer Lab: end of Week 3

Hi, my name is Ben Wagner, and I've been working in the Linksvayer Lab at Penn. We work on evolutionary biology and collective decision-making in ants.

My second and third weeks in the lab turned out to be very similar to the first. I'm still working on the genetics project, in which we are finding differences in microsatellites among the Pharaoh ant colonies in the lab. This information is important for the future of the lab because it will allow us to compare colonies genetically as well as behaviorally: we can take a behavioral difference, such as one colony having twice as many queens as another colony of the same size (Pharaoh ants are polygynous, which means they have multiple queens), and compare it to the genetic difference between those colonies. My P.I. left this week to go to some conferences in Europe, where he'll be learning ant stinger dissection techniques and talking with the top ant experts on the continent! But before he left he gave me new directions for my project, by which I mean he expanded it. I'm now doing about a dozen more colonies than my previously reported amount, as well as running gel electrophoresis on multiple samples from each colony to test whether the PCR worked before submitting them for sequencing.

In total, the lab week has been pretty slow. Most of the projects in the lab are coming to a close, so most of my fellow researchers are anxious to finish and start new projects. Although I know my project will not be finished anytime soon, I'll hopefully begin working on more behavior-based work, just to mix things up. Because many projects are almost finished, everyone has had a little more time to hang out, so I got to know everyone in the lab a lot better, and we even went out for fro-yo after work one day. Unfortunately, we had to say good-bye to our lab tech Katie, who is leaving to go to grad school. But we're excited for our new lab tech Michael to be in charge because he's a really fun guy.

I haven't gotten a chance to return to the Berger lab yet, mainly because every time Riley's been there it's been rather serious business. To remind you, Riley, my grad student, works in both Dr. Linksvayer's evolutionary biology ant lab and Dr. Berger's epigenetics-in-ants lab. So he's been over there trying to talk to Dr. Berger about his next steps, not only in his project but in his future at the lab. This means I've had a lot more unsupervised lab work, to the point of Riley calling me "99%, if not completely, autonomous". I've also taken it upon myself to sign up for ant feeding, which as a temporary volunteer I'm not required to do, but I wanted to do my part. It's a rather long process, due to the hundred or so colonies of Pharaoh ants and another 300-400 colonies of another type (which I can't remember because I don't use them). For both we need to give them new food and water, which is always interesting because they usually start making nests and climbing all over the test tubes they get water from.

I'm not expecting too much of a change this week, since Riley's going to be at an "ant meeting" all Monday and Tuesday, and with July being off... there will be mostly more of the same. But I am excited for the Penn/CHOP EXP lunch Tuesday!

Week 2 at the Donohue Lab

Hey, this is Meg Dalrymple, and I will be writing about my second week at the Donohue Lab at Duke University, which mainly focuses on the evolutionary and genetic mechanisms behind germination.
The Autoclave
Walking into the lab on Monday of my second week, I felt much more prepared and comfortable than just 7 days prior. We still had one more day of censusing left to do, so Lien and I got started on it right away and finished pretty quickly. After that, we had a lot of dirty petri dishes that needed to be sterilized before we threw them out, since they contained biological waste (the germinating seeds). So we headed over to the autoclave, which is a big machine that sterilizes things using heat and water. There are different types of cycles that you can run, depending on what you need sterilized. For the trash we used a setting called "gravity," which uses heat and water, while other, dry cycles use only heat. After about an hour the trash was done, so we took it out of the autoclave and to the dumpster. In the afternoon, we realized that we had cut the filter paper to the wrong size for our experiment. However, the new uncut filter paper that Bri had ordered hadn't arrived yet, so we continued to cut weigh boats and punch holes in them.
The next day, Tuesday, the filter paper came. However, I was busy helping the other group in the lab prepare for later in the week, so I couldn't help cut the paper. Instead, I was preparing the plates the group would seed (put seeds on). This meant that I counted and laid out 900 plates. Next, an undergraduate, Aman, made agar to be poured into all 900 plates. As I went to lunch he started pouring. When I returned, he told me the other agar was in the autoclave and would be done in about 5 minutes, and then he headed off to lunch. I went up to the autoclave and got the agar so I could begin pouring. About a half hour later, Aman returned and we continued to pour. In addition to pouring agar, the plates also had to be labeled, so as we poured, we labeled and organized the plates into the different seed genotypes. At one point after we finished pouring, we realized that some of the plates were not solidifying. Aman figured out that because the liquid agar had been sitting for a while, the solution had separated: the top was mainly water and the bottom was mainly agar. Therefore, we had to redo about 200 plates and make new agar to ensure that we did it right. We finally finished pouring and labeling a couple hours later.
Cut filter paper
Then on Wednesday, Lien and I began to assemble our plates. First we sterilized the cut weigh boats with ethanol (they couldn't be autoclaved because they would melt). Next, because we had cut slits in the strips of filter paper, we were able to create a loop out of each strip by locking the two ends together using the slits. Into this loop of filter paper, a weigh boat would slide upside-down, so that the bottom of the weigh boat faced up. Then the weigh boat and filter paper would be put in a petri dish, lying flat. On the first day, we made about 120 dishes and then helped the other group seed.
On Thursday, our only goal was to finish making the dishes. We spent all morning and some of the afternoon working on it. Eventually however, we ran out of weigh boats and had to stop. This meant that we had to order more weigh boats, and eventually will have to cut and poke holes in them. We still have around 250 dishes that we need to make. Once we were unable to make any more dishes, we decided to autoclave some distilled water because we will need it for our experiment. For the rest of the day, we read articles.
The next day, we wanted to make the solutions with different water potentials, since we will be seeding our experiment on Monday on dishes containing those solutions. However, we first needed to autoclave the empty bottles that would hold the solutions, so while we waited for the autoclave to finish, we helped the other group seed until lunch. After lunch we began making the solutions. To lower the water potential of a solution, we added a powder called polyethylene glycol (PEG). PEG is actually commonly used in medicines, in chemical spills, and in toothpastes and creams. It smells a lot like glue, and when mixed with water it creates a soapy solution. We made 5 different water potentials by weighing out really large amounts of PEG and then mixing it with water. Once we were finished, Lien and I had to go to another discussion about a paper. This time, though, the paper was far easier to understand, and I had already read it earlier in the spring, so the meeting was a lot less stressful than the one we had the week before.
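In case you're wondering how "really large amounts of PEG" translate into a target water potential: the relationship isn't linear. A commonly cited empirical relation is the Michel and Kaufmann (1973) equation for PEG 6000; our lab may use a different PEG grade or its own calibration table, so treat this sketch as illustrative only.

```python
# Michel & Kaufmann (1973) empirical relation for PEG 6000:
# water potential (in bars) as a function of C, grams of PEG per kg of
# water, and T, temperature in degrees Celsius. Note the quadratic terms
# in C -- doubling the PEG more than doubles the (negative) potential.
def water_potential_bars(c_g_per_kg, temp_c=25.0):
    c, t = c_g_per_kg, temp_c
    return (-1.18e-2 * c
            - 1.18e-4 * c**2
            + 2.67e-4 * c * t
            + 8.39e-7 * c**2 * t)

# Example: 250 g PEG 6000 per kg water at 25 C gives roughly -7.3 bars.
psi = water_potential_bars(250)
```

This is why the weighed amounts have to be so precise: near the concentrations used for germination experiments, the curve is steep, so a small weighing error shifts the water potential noticeably.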
Overall I enjoyed this week because although I’m still not doing very exciting tasks, there is more variety to them now. I’ve learned how to do many new things this week, such as autoclaving and preparing dishes, making my research more interesting and enjoyable. In addition, everyone in the lab is still really friendly and enjoyable to work with, making the overall experience a good one.