Our 21 students are working in labs from NC (Duke) to MA (Harvard and MIT), and on topics from computer languages to tissue formation. Join us here to read weekly updates from their time in the lab!

Visit the EXP page on Peddie website: peddie.org/EXP.

Friday, July 5, 2013

Week 3 at the Gab Lab

Hi everyone. It's Michelle again, checking in from the Gabrieli Lab at MIT. Week 3 was quite slow, since most of the lab was out starting Wednesday. This week, I worked more with the CASL team, which is headed by postdoc Zhenghan, who, like everyone else, is very nice and encouraging. I began by organizing subject folders, grading homework and quizzes, and scoring various tests. Then Zhenghan brought me down to help out with running Session 1s with the subjects. Session 1s are the first time we meet subjects. During these sessions, we give subjects a battery of computerized tests, as well as interactive tests for which we are the examiners. I received a script to follow with all the subjects so I could instruct them on what to do, and I downloaded the program 'PsychoPy' so I could run the PCPT test, which assesses subjects' tonal recognition and analysis. For example, the test plays a sound, and subjects have to identify whether it has a rising, neutral, or falling tone.
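For the curious, the rising/neutral/falling judgment can be mimicked in a few lines of Python. This is just a toy sketch on made-up pitch contours, not the actual PCPT or its PsychoPy implementation; the slope-based rule and its threshold are invented purely for illustration:

```python
import numpy as np

def classify_tone(f0_track):
    """Label a pitch (f0) contour 'rising', 'neutral', or 'falling' by the
    sign of its least-squares slope. The +/-0.1 Hz-per-sample threshold is
    invented for this demo, not PCPT's actual scoring."""
    slope = np.polyfit(np.arange(len(f0_track)), f0_track, 1)[0]
    if slope > 0.1:
        return "rising"
    if slope < -0.1:
        return "falling"
    return "neutral"

# Three made-up pitch contours, 100 samples each
t = np.linspace(0, 1, 100)
rising = 180 + 60 * t        # climbs from 180 Hz toward 240 Hz
falling = 240 - 60 * t       # drops from 240 Hz toward 180 Hz
level = np.full(100, 200.0)  # flat contour
print(classify_tone(rising), classify_tone(falling), classify_tone(level))
# → rising falling neutral
```

The real test, of course, plays audio and records the subject's keypress; this only shows the kind of decision being probed.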

PCPT set up (not the actual test)
However, I have not been certified to administer the KBIT (an IQ test) on my own because it involves interaction with the subject, and a good poker face! As a result, I'm meeting with Kelly, the psycho-educational evaluator, next Monday to go over administering the KBIT. Kelly will also teach me about the various tests I've been scoring for the past three weeks, to give me some context as to what I am actually doing.


Today was probably the most 'science-y' day I've had in the lab. Since only Zhenghan and 'Big Michelle' were at work, and only one subject came in for testing, I spent the day 'pruning' raw EEG data. EEG, which stands for electroencephalography, is the recording of electrical activity along the scalp (it's the thing where people put on a swim cap covered with electrodes). Although electrodes are placed all over the scalp, Zhenghan explained that the ones near the eyes are used to record vertical eye movements (VEOG), including blinks, as well as lateral eye movements. Because EEG is super sensitive, noise, speech, and muscle movement all produce huge peaks in the data. Peaks caused by eye movements are considered garbage, and they usually contaminate the data recorded from the scalp. Therefore, my job was to 'prune' these peaks, which means flattening them out, using a process known as independent component analysis (ICA). This is important because if you simply cut out the peaks, you might also delete sections of significant information from the scalp channels! Below is the ICA process:

Raw EEG Data. Scroll component activities on the left, scroll channel activities on the right.

I first had to identify which components (left side) corresponded to the eye movement channels (right side). These eye channels are the bottom two lines in the picture on the right. Next, I used the technical computing application 'MATLAB' to prune the data:


In the figure above, the red lines show the pruned EEG data. As you can see, a lot of the blips are now straight lines, making it a lot easier to analyze later. 
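To give a feel for what that pruning step does, here is a minimal Python sketch of the same idea using scikit-learn's FastICA on synthetic signals (the lab's actual pipeline runs in MATLAB). The fake 'blink' source, the mixing matrix, and the rule for spotting the blink component (highest correlation with the VEOG-like channel) are all illustrative assumptions, not the lab's real data or procedure:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 10, n)

# Two synthetic sources: ongoing "brain" activity and occasional blinks
brain = np.sin(2 * np.pi * 1.0 * t)
blink = np.zeros(n)
for c in (300, 900, 1500):                  # three blink events
    blink[c:c + 50] += np.hanning(50) * 8.0  # large, slow deflections

# Mix the sources into three "channels"; the last acts like VEOG
# (dominated by the blink source, as electrodes near the eyes are)
A = np.array([[1.0, 0.2],
              [0.8, 0.3],
              [0.1, 1.0]])
X = np.column_stack([brain, blink]) @ A.T + 0.05 * rng.standard_normal((n, 3))

# Unmix with ICA, find the component most correlated with the VEOG
# channel, zero it out ("prune"), then project back to channel space
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                    # estimated components
veog = X[:, 2]
corrs = [abs(np.corrcoef(S[:, k], veog)[0, 1]) for k in range(2)]
S[:, int(np.argmax(corrs))] = 0.0           # flatten the blink component
X_clean = ica.inverse_transform(S)

# The blink contamination should shrink in the cleaned scalp channel
before = abs(np.corrcoef(X[:, 0], blink)[0, 1])
after = abs(np.corrcoef(X_clean[:, 0], blink)[0, 1])
print(f"blink correlation: before={before:.2f}, after={after:.2f}")
```

The key point, as in the MATLAB workflow above, is that the artifact is removed in component space and everything else is projected back intact, rather than cutting out chunks of the recording.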

Zhenghan was also kind enough to show me the results she had obtained from CASL's EEG sessions so far. She told me that during EEG sessions, subjects were tested on two types of language errors: syntactic and semantic. Both errors elicit strong responses in the brain, so huge peaks, called event-related potentials (ERPs), appear in the EEG. The semantic error induces an N400 ERP, while the syntactic error induces a P600 ERP. This is shown in the figures below (courtesy of S. J. Luck).


However, Zhenghan's data shows a fascinating trend: people are significantly better at processing one error type than the other. Unfortunately, I don't have her graphs and diagrams (she said they were preliminary), but her figures indicated that people with a stronger N400 response had a weaker P600, and vice versa. Her graphs also hinted that people better at identifying one type of error in their native language had an easier time identifying the same error in a different language. Moreover, she had these cool, colorful maps of the scalp that showed which areas of the brain were activated during the N400 and P600. Her data suggested that even after 45 minutes of training, certain people began to utilize these specific areas when spotting errors in Mandarin or an artificial language. Basically, the big-picture goal of the EEG work is to identify people who are adept at learning new languages, and her data seems to imply that this may be possible in 45 minutes!!
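One detail worth knowing about those ERP figures: responses like the N400 are tiny compared with the ongoing EEG, so they are typically extracted by averaging many stimulus-locked trials, which cancels the random background activity. Here is a small synthetic sketch of that averaging step; the 'N400-like' template, amplitudes, and trial count are all made up for the demo, not taken from the lab's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_samples = 400, 300          # 300 samples ~ 600 ms at 500 Hz
t_ms = np.arange(n_samples) * 2         # 2 ms per sample

# A made-up "N400-like" template: negative deflection peaking near 400 ms
# (amplitude exaggerated for the demo)
template = -6.0 * np.exp(-((t_ms - 400) / 60.0) ** 2)

# Each trial = template + ongoing EEG "noise" much larger than the ERP
trials = template + 15.0 * rng.standard_normal((n_trials, n_samples))

erp = trials.mean(axis=0)               # averaging cancels the noise

peak_ms = t_ms[np.argmin(erp)]          # latency of the negative peak
print(f"negative peak at ~{peak_ms} ms")
```

In any single trial the deflection is invisible under the noise, but the average across hundreds of trials recovers a clear negative peak near 400 ms, which is roughly how the N400 and P600 waveforms in figures like Luck's are obtained.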

I finished the week sitting in on my second fMRI. Next week, I'm helping out with several S1 and fMRI 2 sessions, as well as meeting up with Amy to learn about stats and the brain. I'm excited!
