fMRI Fear Conditioning Ratings - PennBBL/conte GitHub Wiki

Motivation:

To detail the fMRI Fear Conditioning behavioral ratings (version3/design3 unless otherwise specified) that were created and run for CONTE, both pre and post scan. These ratings were designed to obtain a behavioral measure of conditioning to complement the fMRI fear conditioning task.

Ratings Overview:

Three styles of questions are asked across the pre and post scan ratings: arousal, confidence, and valence questions. The first eight questions (four for each face stimulus) are identical for the pre and post scan ratings. The questions are asked separately for each face stimulus (actor face 1077 and actor face 1086). The post scan ratings also include two extra questions: one asking the participant to choose which face was the aversively-conditioned face, and a subsequent question asking how confident they are in that answer.

Questions are shown to the participant with a sliding scale answer system: the participant can select an answer anywhere from the far left (negative valence/arousal/confidence, coded as -420) to the far right (positive valence/arousal/confidence, coded as 420), including any of the tick marks in between.
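
As an illustration only (the helper name rescale_answer is not part of any study script), a minimal R sketch of how a raw slider code could be rescaled for analysis:

     # Rescale a raw slider answer (far left = -420, far right = 420) to -1..1;
     # values in between correspond to the intermediate tick marks.
     rescale_answer <- function(answer, max_code = 420) {
       as.numeric(answer) / max_code
     }
     rescale_answer(-420)  # -1  (most negative valence / lowest arousal or confidence)
     rescale_answer(0)     #  0  (midpoint / neutral)
     rescale_answer(210)   #  0.5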

The questions are detailed below.

Files:

In /data/joy/BBL/studies/conte/rawData/[bblid]/[datexscanid]/associated_files/ratings/post

  • [scanid]-Faces_ratings_wheel.log

    • Post run1 scan ratings logfile from scanner/xnat with subject answers to rating questions
  • [scanid]-fearConditioning_run1_wheel*.log

    • Run1 scan logfile from scanner/xnat that details the fear conditioning run1 task presentation
  • [scanid]-Faces_ratings_end_wheel.log

    • Post reversal scan ratings logfile from scanner/xnat with subject answers to rating questions
  • [scanid]-fearConditioning_rev_wheel*.log

    • Reversal scan logfile from scanner/xnat that details the fear conditioning reversal task presentation
  • [scanid]_Pairing.txt

    • The pairing of the conditioning order (see fear conditioning wiki for more information)
  • [scanid]_run1_array.txt

    • The array file detailing conditioning order (see fear conditioning wiki for more information)
  • [scanid]_run1_faces_data.csv

    • Subject-level conditioning output (which face was aversive and which neutral) from the post run1 ratings script
  • [scanid]_run1_ratings_data.csv

    • Subject-level ratings output from the post run1 ratings script

In /data/joy/BBL/studies/conte/rawData/[bblid]/[datexscanid]/associated_files/ratings/pre

  • [scanid]-Pre_task_faces_*_wheel.log
    • Pre ratings logfile from laptop/xnat with subject answers to rating questions
  • [scanid]_pre_ratings_data.csv
    • Subject-level output from pre ratings script

Pre Scan Ratings:

The participant sees the following questions for 17 different faces, but we only care about the two faces that they will be seeing during the scan (face 1077 and face 1086).

Question 1: "How friendly or unfriendly does this person seem?"

  • for actor face 1077
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 2: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 3: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 4: "How strong are the emotions you feel when you see this person?"

  • for actor face 1077
  • arousal question
  • Options from "No Emotion" (far left) to "Moderately Strong" (far right)

Question 5: "How friendly or unfriendly does this person seem?"

  • for actor face 1086
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 6: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 7: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 8: "How strong are the emotions you feel when you see this person?"

  • for actor face 1086
  • arousal question
  • Options from "No Emotion" (far left) to "Moderately Strong" (far right)

Post Scan Ratings:

The participant is only asked these questions about the two faces they saw during the scan (face 1077 and face 1086).

Question 1: "How friendly or unfriendly does this person seem?"

  • for actor face 1077
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 2: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 3: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 4: "How strong are the emotions you feel when you see this person?"

  • for actor face 1077
  • arousal question
  • Options from "No Emotion" (far left) to "Moderately Strong" (far right)

Question 5: "How friendly or unfriendly does this person seem?"

  • for actor face 1086
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 6: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 7: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 8: "How strong are the emotions you feel when you see this person?"

  • for actor face 1086
  • arousal question
  • Options from "No Emotion" (far left) to "Moderately Strong" (far right)

Question 9: "Scroll to select the face that was presented with a scream more often."

  • for both faces
  • accuracy question
  • Options from actor face 1077 (far left, coded as -420), to actor face 1086 (far right, coded as 420)

Question 10: "How confident are you that this face was presented with a scream more often."

  • The actor face image selected by the participant in Question 9 is displayed for Question 10
  • confidence question
  • Options from "Not Sure at All" (far left) to "Totally Sure" (far right)
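
These two answers can be scored against the conditioning output produced by the post scan processing described below (the [scanid]_run1_faces_data.csv file). A minimal R sketch, assuming only that a negative answer to Question 9 means face 1077 was chosen and a positive answer means face 1086 (following the coding above); the helper name score_accuracy is illustrative and not part of the study scripts:

     # Score the post-scan accuracy question against the known conditioning.
     # quest9_answer: slider code from Question 9 (negative = face 1077 chosen,
     #                positive = face 1086 chosen)
     # aversive_face: "face_1077" or "face_1086", taken from the subject's
     #                *_run1_faces_data.csv conditioning file
     score_accuracy <- function(quest9_answer, aversive_face) {
       chosen <- if (quest9_answer < 0) "face_1077" else "face_1086"
       as.integer(chosen == aversive_face)  # 1 = correct, 0 = incorrect
     }
     score_accuracy(-300, "face_1077")  # 1 (correctly identified the aversive face)
     score_accuracy( 150, "face_1077")  # 0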

Post Scan Questionnaire:

Question 1: "How unpleasant was the "scream" sound?"

  • Options from 1-10; "Not Unpleasant at All" (1) to "Extremely Unpleasant" (10)

Question 2: "How loud was the "scream" sound?"

  • Options from 1-10; "Almost Too Quiet to Hear" (1) to "Extremely Loud" (10)

Question 3: "How sleepy were you during the scan session?"

  • Options from 1-10; "Wide Awake the Whole Time" (1) to "Very Sleepy the Whole Time" (10)

Pre Scan Ratings Processing:

Wrapper script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/pre/pre_get_ratings_design3.sh

  • calls: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/pre/pre_parse_logfiles_ratings_design3.R

Requires:

  • Subject list in the form of a text file (list of bblid/datexscanid)
    Example: bblid/datexscanid
  • Pre scan ratings downloaded from xnat and saved in subject's ratings directory
    /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/pre/*-Pre_task_faces_BL_wheel.log

Output:

  • Subject-specific pre ratings
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/pre/*_pre_ratings_data.csv
  • Aggregate subject data with pre ratings
    • /data/joy/BBL/studies/conte/subjectData/behavioralRatings/pre/pre_ratings_[date].csv

Set variables and pass to logfile script

Script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/pre/pre_get_ratings_design3.sh

  1. Create a variable for the date

     date=`date +%Y-%m-%d`  
    
  2. Loop through the subjects in the subject list and create variables for their IDs, then print the subject being processed to the screen

     #loop through the design 3 subjects  
     for i in `cat /data/joy/BBL/studies/conte/subjectData/design3FullSubjectList.txt`  
     do  
               
     #create a variable which gets the bblid and scanid and prints the scanid to the screen  
     bblid=`echo $i | cut -d "/" -f 1`  
     datexscanid=`echo $i | cut -d "/" -f 2`  
     scanid=`echo $i | cut -d "x" -f 2`  
               
     echo "Processing subject........" $scanid  
    
  3. Create variables for the subject's pre task ratings logfile downloaded from xnat, and a path to their pre task ratings directory

     #find the file for the subject's ratings and faces  
     path=`ls -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/pre`  
     rating=`ls -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/pre/*"$scanid"*Pre_task_faces*.log`   
    
  4. Run the R script and pass it the scanid, path to output directory, and pre task rating file

/share/apps/R/R-3.1.1/bin/R --file=/data/joy/BBL/projects/conteReproc2017/behavioralRatings/pre/pre_parse_logfiles_ratings_design3.R --slave --args "$rating" "$scanid" "$path"

Parse Pre Ratings logfile

Script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/pre/pre_parse_logfiles_ratings_design3.R

  1. Read in pre ratings log file, scanid, and path passed by wrapper script

     args<- commandArgs(TRUE)  
     data1<- read.table(args[1],fill=T)  
     scanid<- as.character(args[2])  
     path<- as.character(args[3])  
    
  2. Rename columns, subset data to Subject, Trial and Code. Then populate the column Trial with trial numbers

     #rename columns  
     colnames(data1)<- c("Subject","Trial","Event_Type","Code","Time","TTime")  
     #subset columns  
     data1<- data1[,c(1,2,4)]  
     #create a column called Trial which gets the rows in order (so can re-order data later)  
     data1$Trial<- as.numeric(1:nrow(data1))  
    
  3. Subset and reshape the logfile so that we have question and answer data

     #find question and answer rows  
     q<- grep("quest",data1$Code)  
     a<- grep("ans",data1$Code)  
     #create two files, one with only question data and one with only answer data  
     q2<- as.data.frame(data1[q,])  
     a2<- as.data.frame(data1[a,])  
     #reorder data files by Trial  
     q2<- q2[order(q2$Trial),]  
     a2<- a2[order(a2$Trial),]  
     #delete unnecessary rows (the pre scan asks about 17 faces; only 2 are used in the task)  
     q2<-q2[c(29:32,37:40),]  
     a2<-a2[c(29:32,37:40),]  
     #combine the two data files so that question and answer are next to each other  
     qa<- cbind(q2,a2$Trial,a2$Code)  
     #create a timepoint column that gets "pre"  
     qa$Timepoint<- "pre"  
     #subset data to only Subject, Question, Answer and Timepoint columns  
     qa<- qa[,c(1,3,5,6)]  
     #Rename Code and a2$Code columns to Question and Answer  
     colnames(qa)[2]<- "Question"  
     colnames(qa)[3]<- "Answer"     
    
  4. Add columns for which face the question was about, convert the answers to numbers since they are factors, and populate a Subject ID column

     #add column (called Face) for which face goes with which questions   
     qa$Face<- c(rep("face_1077",4),rep("face_1086",4))  
     #the answer column needs to be converted to something other than factors  
     qa$Answer<- as.character(qa$Answer)  
     qa$Answer<- substring(qa$Answer,4)  
     qa$Answer<- as.numeric(qa$Answer)  
     #because some of the files are incomplete/have missing questions, these will have NA rows. In order to reshape data the Subject column  
     #needs to not be NA, so create a variable called subject which gets the Subject number, then fill all NA subject column rows with this  
     #subject number  
     subject<- as.character(qa$Subject[1])  
     qa$Subject[is.na(qa$Subject)]<- subject  
    
  5. Reshape data so that each question is a column and has the face and answer below it (each subject is in a single row)

     library(reshape2)  
     qa<- reshape(qa, timevar="Question",idvar=c("Subject","Timepoint"),direction="wide")  
    
  6. Save out data for each subject in their output ratings pre directory

write.csv(qa,paste(path,"/",scanid,"_pre_ratings_data.csv",sep=""))

Create aggregate pre ratings data file

Script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/pre/pre_get_ratings_design3.sh

  1. Create a file called pre_ratings_[date].csv which gets the column headers for the questions

     echo "Subject,Timepoint,Answer.quest29,Face.quest29,Answer.quest30,Face.quest30,Answer.quest31,Face.quest31,Answer.quest32,Face.quest32,Answer.quest37,Face.quest37,Answer.quest38,Face.quest38,Answer.quest39,Face.quest39,Answer.quest40,Face.quest40" > /data/joy/BBL/studies/conte/subjectData/behavioralRatings/pre/pre_ratings_"$date".csv  
     done  
    
  2. For every individual's ratings file, append to the aggregate csv file

     for k in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/pre/*_pre_ratings_data.csv ) ;do  
     tail -1 "$k" | cut -d "," -f 2-100 >> /data/joy/BBL/studies/conte/subjectData/behavioralRatings/pre/pre_ratings_"$date".csv  
     done  
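
Once the aggregate file exists, it can be read back into R for quick checks. A minimal sketch based on the column headers written in step 1 above; the date in the filename is a placeholder, and the assumption that quest29 and quest37 are the friendliness question for each face follows from the question order listed above but should be verified against the Face.quest* columns:

     # Read the date-stamped aggregate pre ratings file created above
     # (the date in the filename is only an example).
     pre <- read.csv("/data/joy/BBL/studies/conte/subjectData/behavioralRatings/pre/pre_ratings_2017-01-01.csv")
     # quest29-32 were asked about face 1077 and quest37-40 about face 1086
     # (see the Face.quest* columns). If the questions within each block follow
     # the order listed above, quest29 and quest37 are the friendliness question,
     # so a simple sanity check is whether the two faces were rated similarly
     # before conditioning:
     summary(pre$Answer.quest29 - pre$Answer.quest37)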
    

Post Scan Ratings Processing:

Wrapper script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/post/run1_get_ratings_faces.sh

  • calls: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/post/run1_parse_logfiles_ratings_faces.R

Requires:

  • Subject list in the form of a text file (list of bblid/datexscanid)
    Example: bblid/datexscanid
  • Post scan ratings downloaded from xnat and saved in subject's ratings directory
    /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/*-fearConditioning_run1_wheel_*.log

Output:

  • Subject-specific conditioning file (with which face was aversive and which neutral)
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/*_run1_faces_data.csv
  • Subject-specific post scan ratings
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/*_run1_ratings_data.csv
  • Aggregate subject data with post scan ratings
    • /data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_ratings_[date].csv
  • Aggregate subject data with conditioning information (which face was aversive/neutral)
    • /data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_faces_[date].csv

Set variables and pass to logfile script

Script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/post/run1_get_ratings_faces.sh

  1. Create a variable for the date

     date=`date +%Y-%m-%d`  
    
  2. Loop through the subjects in the subject list and create variables for their IDs, then print the subject being processed to the screen

     #loop through the design 3 subjects  
     for i in `cat /data/joy/BBL/studies/conte/subjectData/design3FullSubjectList.txt`  
     do  
               
     #create a variable which gets the bblid and scanid and prints the scanid to the screen  
     bblid=`echo $i | cut -d "/" -f 1`  
     datexscanid=`echo $i | cut -d "/" -f 2`  
     scanid=`echo $i | cut -d "x" -f 2`  
               
     echo "Processing subject........" $scanid  
    
  3. Create variables for the subject's post scan ratings and conditioning logfiles downloaded from xnat, and a path to their post scan ratings directory

     path=`ls -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/post`  
     rating=`ls -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/post/*$scanid-Faces_ratings_wheel*.log`   
     faces=`ls -ltr -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/post/*$scanid-fearConditioning_run1_wheel*.log | tail -1 | rev | cut -d " " -f1 | rev`   
    
  4. Run the R script and pass it the scanid, path to output directory, and post scan task rating and conditioning files

/share/apps/R/R-3.1.1/bin/R --file=/data/joy/BBL/projects/conteReproc2017/behavioralRatings/post/run1_parse_logfiles_ratings_faces.R --slave --args "$rating" "$faces" "$scanid" "$path"

Parse Post Ratings logfile

Script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/post/run1_parse_logfiles_ratings_faces.R

  1. Read in post ratings log file, conditioning file, scanid, and path passed by wrapper script

     args<- commandArgs(TRUE)  
     data1<- read.table(args[1],fill=T)  
     data2<- read.table(args[2],fill=T)  
     scanid<- as.character(args[3])  
     path<- as.character(args[4])  
    
  2. Rename columns, subset data to Subject, Trial and Code. Then populate the column Trial with trial numbers

     #rename columns  
     colnames(data1)<- c("Subject","Trial","Event_Type","Code","Time","TTime")  
     #subset columns  
     data1<- data1[,c(1,2,4)]  
     #create a column called Trial which gets the rows in order (so can re-order data later)  
     data1$Trial<- as.numeric(1:nrow(data1))  
     #rename columns  
     colnames(data2)<- c("Subject","Trial","Event_Type","Code","Time","TTime")  
     #subset columns  
     data2<- data2[,c(1,2,4)]  
     #create a column called Trial which gets the rows in order (so can re-order data later)  
     data2$Trial<- as.numeric(1:nrow(data2))  
    
  3. Subset and reshape the logfile so that we have question and answer data

     #find question and answer rows  
     q<- grep("quest",data1$Code)  
     a<- grep("ans",data1$Code)  
     #create two files, one with only question data and one with only answer data  
     q2<- as.data.frame(data1[q,])  
     a2<- as.data.frame(data1[a,])  
     #reorder data files by Trial  
     q2<- q2[order(q2$Trial),]  
     a2<- a2[order(a2$Trial),]  
     #combine the two data files so that question and answer are next to each other  
     qa<- cbind(q2,a2$Trial,a2$Code)  
     #create a timepoint column that gets "run1"  
     qa$Timepoint<- "run1"  
     #subset data to only Subject, Question, Answer and Timepoint columns  
     qa<- qa[,c(1,3,5,6)]  
     #Rename Code and a2$Code columns to Question and Answer  
     colnames(qa)[2]<- "Question"  
     colnames(qa)[3]<- "Answer"  
    
  4. Add columns for which face the question was about, convert the answers to numbers since they are factors, and populate a Subject ID column

     #add column (called Face) for which face goes with which questions  
     qa$Face<- c(rep("face_1077",4),rep("face_1086",4),"both",NA)  
     #the answer column needs to be converted to something other than factors  
     qa$Answer<- as.character(qa$Answer)  
     qa$Answer<- substring(qa$Answer,4)  
     qa$Answer<- as.numeric(qa$Answer)  
    
  5. Look at the answer to question 9 and fill in the face cell for question 10 depending on that answer

     if(qa$Answer[ qa$Question=="quest9"]==-140){  
     qa$Face[qa$Question=="quest10"]<- "face_1077"  
     }  
     if(qa$Answer[ qa$Question=="quest9"]==140){  
     qa$Face[qa$Question=="quest10"]<- "face_1086"  
     }  
    
  6. Reshape data so that each question is a column and has the face and answer below it (each subject is in a single row)

     library(reshape2)  
     qa<- reshape(qa, timevar="Question",idvar=c("Subject","Timepoint"),direction="wide")  
    
  7. Create Face Condition File which gets the condition of each face (whether it was aversive or neutral)

     #find face and tone rows  
     face_1077_condition<- data2[ grep("what_face_1077",data2$Code),]  
     face_1086_condition<- data2[ grep("what_face_1086",data2$Code),]  
                
     #create a file which gets face conditions  
     face<- data.frame(Subject=scanid,Timepoint="run1",face_1077_condition=NA,face_1086_condition=NA)  
        
     #populate cells depending on which face is aversive  
     tone_1077 <- grep("what_face_1077_tone",face_1077_condition$Code)[1]  
     tone_1086 <- grep("what_face_1086_tone",face_1086_condition$Code)[1]  
        
     if(! is.na(tone_1077)==TRUE){  
       face$face_1077_condition<-"aversive"  
       face$face_1086_condition<-"neutral"  
     }   
     if(! is.na(tone_1086)==TRUE){  
       face$face_1086_condition<-"aversive"  
       face$face_1077_condition<-"neutral"  
     }  
    
  8. Save out data for each subject in their output ratings post directory

    write.csv(qa,paste(path,"/",scanid,"_run1_ratings_data.csv",sep=""))  
    write.csv(face,paste(path,"/",scanid,"_run1_faces_data.csv",sep=""))   
    

Create aggregate post scan ratings and conditioning data files

Script: /data/joy/BBL/projects/conteReproc2017/behavioralRatings/post/run1_get_ratings_faces.sh

  1. Create a file called run1_ratings_[date].csv which gets the appropriate column headers

    echo "Subject,Timepoint,Answer.quest1,Face.quest1,Answer.quest2,Face.quest2,Answer.quest3,Face.quest3,Answer.quest4,Face.quest4,Answer.quest5,Face.quest5,Answer.quest6,Face.quest6,Answer.quest7,Face.quest7,Answer.quest8,Face.quest8,Answer.quest9,Face.quest9,Answer.quest10,Face.quest10" > /data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_ratings_"$date".csv  
    
  2. Create a file called run1_faces_[date].csv which gets the appropriate column headers

    echo "Subject,Timepoint,face_1077,face_1086" > /data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_faces_"$date".csv  
      
    done  
    
  3. For every individual's ratings and faces files, append to the aggregate csv file

    for k in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/*_run1_ratings_data.csv ) ;do  
    tail -1 "$k" | cut -d "," -f 2-100 >> /data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_ratings_"$date".csv  
    done  
             
    for k in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/*_run1_faces_data.csv ) ;do  
    tail -1 "$k" | cut -d "," -f 2-100 >> /data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_faces_"$date".csv  
    done  
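
The two aggregate files can then be combined so that each subject's post scan answers are grouped by which face was aversive. A minimal R sketch based on the column headers written above, assuming the Subject identifiers match across the two files and that Answer.quest1 and Answer.quest5 are the friendliness question for face 1077 and face 1086 respectively (per the post scan question list); the dates in the filenames are placeholders:

     # Merge the post scan ratings aggregate with the conditioning aggregate
     # (filenames are date-stamped; the dates below are only examples).
     ratings <- read.csv("/data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_ratings_2017-01-01.csv")
     faces   <- read.csv("/data/joy/BBL/studies/conte/subjectData/behavioralRatings/post/run1_faces_2017-01-01.csv")
     both    <- merge(ratings, faces, by = "Subject")
     # Friendliness rating of the aversively-conditioned face minus the neutral
     # face for each subject (more negative values suggest the conditioned face
     # was rated as less friendly after the scan).
     friendly_diff <- ifelse(both$face_1077 == "aversive",
                             both$Answer.quest1 - both$Answer.quest5,
                             both$Answer.quest5 - both$Answer.quest1)
     summary(friendly_diff)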
    

Catch Trials Processing:

Wrapper script: /data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/get_catch_trial_data.sh

  • calls: /data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/parse_logfile_catch_trial_run1.R
  • calls: /data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/parse_logfile_catch_trial_run2.R

Requires:

  • Subject list in the form of a text file (list of bblid/datexscanid)
    Example: bblid/datexscanid
  • Post scan ratings downloaded from xnat and saved in subject's ratings directory
    /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/$scanid-fearConditioning_run1_wheel*.log
    /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/ratings/post/$scanid-fearConditioning_rev_wheel*.log

Output:

  • Subject-specific catch trial summaries
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run1_summary_response_trial_data.csv
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run2_summary_response_trial_data.csv
  • Subject-specific responses to stimuli
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run1_response_type_table_data.csv
    • /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run2_response_type_table_data.csv
  • Aggregate subject catch trial summaries
    • /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_summary_response_trial_data_"$date".csv
    • /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run2_summary_response_trial_data_"$date".csv
  • Aggregate subject responses to stimuli
    • /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_response_type_table_data_"$date".csv
    • /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run2_response_type_table_data_"$date".csv

Set variables and pass to logfile script

Script: /data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/get_catch_trial_data.sh

  1. Create a variable for the date

     date=`date +%Y-%m-%d`  
    
  2. Loop through the subjects in the subject list and create variables for their IDs, then print the subject being processed to the screen

     #loop through each conte subject  
     for i in `cat /data/joy/BBL/studies/conte/subjectData/design3FullSubjectList.txt`  
     do  
               
     #create a variable for bblid and scanid and output scanid to the screen  
     bblid=`echo $i | cut -d "/" -f 1`  
     datexscanid=`echo $i | cut -d "/" -f 2`  
     scanid=`echo $datexscanid | cut -d "x" -f 2`  
               
     echo "Processing subject......." $scanid   
    
  3. Create variables for the subject's fear conditioning run1 and reversal logfiles downloaded from xnat

     logfile_run1=`ls -ltr -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/post/*$scanid-fearConditioning_run1_wheel*.log | tail -1 | rev | cut -d " " -f1 | rev`   
     logfile_run2=`ls -ltr -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/ratings/post/*$scanid-fearConditioning_rev_wheel*.log | tail -1 | rev | cut -d " " -f1 | rev`   
    
  4. Check if the catch trial directory exists for that subject, and create it if it doesn't

     if [ ! `ls -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/behavioralQa/catchTrials` ]; then  
     mkdir /data/joy/BBL/studies/conte/rawData/$bblid/$datexscanid/associated_files/behavioralQa  
     mkdir /data/joy/BBL/studies/conte/rawData/$bblid/$datexscanid/associated_files/behavioralQa/catchTrials  
     fi  
    
  5. Create a variable for the catch trial output directory for that subject

     path=`ls -d /data/joy/BBL/studies/conte/rawData/$bblid/*$scanid/associated_files/behavioralQa/catchTrials`  
    
  6. Run the R scripts and pass them the fear conditioning logfile, scanid, and path to the output directory

     /share/apps/R/R-3.1.1/bin/R --file=/data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/parse_logfile_catch_trial_run1.R --slave --args "$logfile_run1" "$scanid" "$path"  
     /share/apps/R/R-3.1.1/bin/R --file=/data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/parse_logfile_catch_trial_run2.R --slave --args "$logfile_run2" "$scanid" "$path"  
    

Parse Fear Conditioning logfile for catch trials (only run1 is detailed because run2 is processed the same way)

Script: /data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/parse_logfile_catch_trial_run1.R

  1. Read in the fear conditioning run1 logfile, scanid, and path passed by the wrapper script

     args<- commandArgs(TRUE)  
     data1<- read.table(args[1],fill=T)  
     scanid<- as.character(args[2])  
     path<- as.character(args[3])  
    
  2. Convert the logfile to an easily readable format for later manipulation

     #rename columns  
     colnames(data1)<- c("Subject","Trial","Event_Type","Code","Time","TTime")  
     #create a column called Trial which gets the rows in order (so can re-order data later)  
     data1$Trial<- as.numeric(1:nrow(data1))  
     #subset to only necessary columns  
     data1<- data1[,1:6]  
    
  3. Create variables for the pairing by finding which face is presented first, with "tone" listed after it (aversive) or "notone" (neutral), and then read in the correct stick files

     ####create variables for the pairing   
     what_face<-  grep("what_face",data1$Code)  
     pairing<- as.data.frame(data1[what_face,])  
     pairing$Code<- as.character(pairing$Code)  
          
     #set appropriate variables for onset times of different stimuli depending on which order the subject had
     if(unlist(strsplit(pairing[1,4],split="_"))[4]=="tone"){  
       catch<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order0_2/run1/catch.txt",fill=T)  
       tone<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order0_2/run1/aversive_tone.txt",fill=T)  
       face_av<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order0_2/run1/face1_aversive.txt",fill=T)  
       face_notone<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order0_2/run1/face1_notone.txt",fill=T)  
       face_neu<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order0_2/run1/face2_notone.txt",fill=T)  
     } else if(unlist(strsplit(pairing[1,4],split="_"))[4]=="notone"){  
       catch<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order1_3/run1/catch.txt",fill=T)   
       tone<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order1_3/run1/aversive_tone.txt",fill=T)  
       face_neu<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order1_3/run1/face1_notone.txt",fill=T)  
       face_av<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order1_3/run1/face2_aversive.txt",fill=T)  
       face_notone<- read.table("/data/joy/BBL/studies/conte/fmriDesignFiles/order1_3/run1/face2_notone.txt",fill=T)  
     } else {  
       print("ERROR NEITHER ORDER 0 2 NOR ORDER 1 3")  
     }  
    
  4. Subset and Reshape data

     #find Response rows  
     r<- grep("Response",data1$Event_Type)  
     #create file with only response data  
     response<- as.data.frame(data1[r,])  
     #reorder data files by Trial  
     response<- response[order(response$Trial),]  
       
     #create a timepoint column that gets "run1"  
     response$Timepoint<- "run1"  
        
     #find start time from data 1 (we delete first six volumes so start time should be at pulse count 7)  
     start_time<- grep("pulseCount_7",data1$Code,fixed=TRUE)  
     start_time<- data1[start_time[1],]   
         
     #create a column called delta_time which gets the time minus the start time so we can get the onset time, and convert from  
     #tenths of milliseconds to seconds  
     response$Time<- as.numeric(as.character(response$Time))  
     start_time$Time<- as.numeric(as.character(start_time$Time))  
     response$delta_time<- response$Time-start_time$Time  
     response$onset<- response$delta_time/10000  
    
  5. Create a trial data frame so we can match responses to the trials

      trials<- rbind(catch,tone,face_av,face_notone,face_neu)  
      trials$type<- c(rep("catch",8),rep("tone",12),rep("face_aversive",12),rep("face_notone",12),rep("face_neu",24))  
      trials<- trials[,c(1,4)]  
      colnames(trials)[1]<- "onset"  
         
      cross<- data1[ grep("cross",data1$Code,fixed=TRUE),]  
      cross<- cross[grep("green_cross",cross$Code,invert=TRUE),c(5,4)]  
      colnames(cross)[1:2]<- c("onset","type")  
      cross$onset<- as.numeric(as.character(cross$onset))  
      cross$onset<- cross$onset-start_time$Time  
      cross$onset<- cross$onset/10000  
        
      trials<- rbind(trials,cross)  
      trials<- trials[order(trials$onset),]  
    
  6. Look at each response time; if there is a trial onset at that time (or within the 2 seconds before the response), put that trial's onset time in the response file in a column called match, and the trial type in a column called match_type

      response$match<- NA  
      response$match_type<- NA  
         
      for (i in 1:nrow(response)){  
        for (j in 1:nrow(trials)){  
        x<- trials[j,1]   
        y<- response[i,"onset"]  
        type<- trials[j,"type"]  
        if (y>x && y<x+2){  
          response$match[i]<- x  
          response$match_type[i]<- type  
        }   
        }  
      }  
    
  7. Create a table counting the responses to each stimulus type

      response_type<- data.frame(scanid=scanid)  
      response_type$face_aversive_tone<- sum(response$match_type=="face_aversive",na.rm=T)  
      response_type$face_neutral<- sum(response$match_type=="face_neu",na.rm=T)  
      response_type$face_aversive_notone<- sum(response$match_type=="face_notone",na.rm=T)  
      response_type$aversive_tone<- sum(response$match_type=="tone",na.rm=T)  
      response_type$cross<- sum(response$match_type=="cross",na.rm=T)  
      response_type$catch<- sum(response$match_type=="catch",na.rm=T)  
    
  8. Check that the correct number of responses were made to the catch trials

      num_catch_missing<- 8-sum(! duplicated(response$match[response$match_type=="catch" & ! is.na(response$match_type)]))  
      num_catch_extra_responses<- sum(duplicated(response$match[response$match_type=="catch" & ! is.na(response$match_type)]))  
      non_catch<- nrow(response[! response$match_type=="catch" & ! is.na(response$onset),])  
                  
      sum_response<- as.data.frame(t(c(scanid,num_catch_missing,num_catch_extra_responses,non_catch)))  
      colnames(sum_response)<- c("scanid","number_catch_responses_missing","number_catch_extra_responses","number_responses_to_non_catch_trials")  
    
  9. Write out data to paths given by wrapper script

      response<- response[,c("Subject","Timepoint","Event_Type","onset","match","match_type")]  
      write.csv(response,paste(path,"/",scanid,"_run1_response_trial_data.csv",sep=""))  
      write.csv(sum_response,paste(path,"/",scanid,"_run1_summary_response_trial_data.csv",sep=""))  
      write.csv(response_type,paste(path,"/",scanid,"_run1_response_type_table_data.csv",sep=""))  
    

Aggregate subject level data

Script: /data/joy/BBL/projects/conteReproc2017/behavioralCatchTrial/get_catch_trial_data.sh

  1. Create files called run1_summary_response_trial_data_"$date".csv, run2_summary_response_trial_data_"$date".csv, run1_response_type_table_data_"$date".csv, and run2_response_type_table_data_"$date".csv, which get the following column headers

     echo "Scanid,Number Catch Responses Missing,Number Catch Extra Responses,Number Response to Non-Catch Trials" > /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_summary_response_trial_data_"$date".csv  
     echo "Scanid,Number Catch Responses Missing,Number Catch Extra Responses,Number Response to Non-Catch Trials" > /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run2_summary_response_trial_data_"$date".csv  
     echo "Scanid,Aversive Face with Tone,Neutral Face,Aversive Face no Tone,Aversive Tone,Crosshair,Catch Trial" > /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_response_type_table_data_"$date".csv  
     echo "Scanid,Aversive Face with Tone,Neutral Face,Aversive Face no Tone,Aversive Tone,Crosshair,Catch Trial" > /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run2_response_type_table_data_"$date".csv  
       
     done  
    
  2. For every individual's summary response and response type table files, append their data to the appropriate aggregate csv file

     for k in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run1_summary_response_trial_data.csv ) ;do  
     tail -1 "$k" | cut -d "," -f 2-100 >> /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_summary_response_trial_data_"$date".csv  
     done  
               
     for j in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run2_summary_response_trial_data.csv ) ;do  
     tail -1 "$j" | cut -d "," -f 2-100 >> /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run2_summary_response_trial_data_"$date".csv  
     done  
       
     for l in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run1_response_type_table_data.csv ) ;do  
     tail -1 "$l" | cut -d "," -f 2-8 >> /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_response_type_table_data_"$date".csv  
     done  
        
     for k in $( ls -d /data/joy/BBL/studies/conte/rawData/*/*x*/associated_files/behavioralQa/catchTrials/*_run2_response_type_table_data.csv ) ;do  
     tail -1 "$k" | cut -d "," -f 2-8 >> /data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run2_response_type_table_data_"$date".csv  
     done   
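
A minimal R sketch of a quality-assurance check on the aggregate summary file created above; the date in the filename is a placeholder, and the cutoff of more than 2 missed catch trials is an arbitrary example rather than a study-defined exclusion criterion:

     # Read the date-stamped run1 catch trial summary and flag subjects who
     # missed catch trials; check.names = FALSE keeps the spaces in the headers.
     qa <- read.csv("/data/joy/BBL/studies/conte/subjectData/behavioralCatchTrial/run1_summary_response_trial_data_2017-01-01.csv",
                    check.names = FALSE)
     # Example threshold only: flag anyone missing more than 2 catch responses.
     flagged <- qa[qa$"Number Catch Responses Missing" > 2, ]
     flagged$Scanid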
    

Differences between versions:

Version1 to Version2:

  • Due to variable reports of sedation and of the subjective loudness/aversiveness of the tone and scream, rating measures of sleepiness and of the loudness and aversiveness of the sounds were added, but only at the very end of the task (after the reversal).
  • The graphics and anchors of the other rating questions were modified. They had been using wording such as "arousal"; for version2 we modified this wording (to avoid “aroused”).
  • Intermediate increment marks were added, rather than having marks only at the extreme anchors, to make clear to the subject that intermediate choices were acceptable.
  • We considered re-anchoring ratings at "very" rather than "moderately", but decided to keep it at moderately to maintain sensitivity to expected low ratings.

Version2 to Version3:

  • Changed the wording for the accuracy question to "Scroll to select the face that was presented with a scream more often." The neutral tone had been removed from the task and was therefore no longer relevant.
  • A confidence question was added following the accuracy question, to determine the subject's confidence in their answer about which face was paired with the scream more often.
  • In version3 the number of faces presented was reduced from 4 to 2 (one aversive, one neutral), so the questions about faces 1023 and 1057, which were no longer in the task, were removed.
  • Conditioning order information (a pairing file and a run1 array file) was output to make analysis easier; however, there were some bugs early in the implementation, so these files should not be used as the gold standard for fear conditioning order.

Version 1 Ratings:

Files:

In /data/joy/BBL/studies/conte/rawData/[bblid]/[datexscanid]/associated_files/ratings/post

  • [scanid]-Faces_ratings2.log
    • Post run1 scan ratings logfile from scanner/xnat with subject answers to rating questions
  • [scanid]-fearConditioningV3R1.log
    • Run1 scan logfile from scanner/xnat that details the fear conditioning run1 task presentation
  • [scanid]-Faces_ratings_end.log
    • Post reversal scan ratings logfile from scanner/xnat with subject answers to rating questions
  • [scanid]-fearConditioningV3R2.log
    • Maintenance scan logfile from scanner/xnat that details the fear conditioning maintenance task presentation
  • [scanid]-fearConditioningV3R3.reversal.log
    • Reversal scan logfile from scanner/xnat that details the fear conditioning reversal task presentation

Note: We do not have the original raw data logfiles for Pre ratings from version1 or version2. However, all of the aggregate processed data from the pre ratings are in the CONTE2 RedCap project.

Pre Scan Ratings:

Question 1: "How friendly or unfriendly does this person seem?"

  • for actor face 1023
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 2: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 3: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 4: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1023
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 5: "How friendly or unfriendly does this person seem?"

  • for actor face 1077
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 6: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 7: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 8: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1077
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 9: "How friendly or unfriendly does this person seem?"

  • for actor face 1086
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 10: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 11: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 12: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1086
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 13: "How friendly or unfriendly does this person seem?"

  • for actor face 1057
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 14: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 15: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 16: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1057
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Post Scan Ratings:

Question 1: "How friendly or unfriendly does this person seem?"

  • for actor face 1023
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 2: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 3: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 4: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1023
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 5: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1023
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Question 6: "How friendly or unfriendly does this person seem?"

  • for actor face 1077
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 7: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 8: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 9: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1077
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 10: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1077
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Question 11: "How friendly or unfriendly does this person seem?"

  • for actor face 1086
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 12: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 13: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 14: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1086
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 15: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1086
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Question 16: "How friendly or unfriendly does this person seem?"

  • for actor face 1057
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 17: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 18: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 19: "How much emotional arousal do you feel when you see this person?"

  • for actor face 1057
  • arousal question
  • Options from "No Arousal" (far left), to "Moderately Aroused" (far right)

Question 20: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1057
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Version 2 Ratings:

Files:

In /data/joy/BBL/studies/conte/rawData/[bblid]/[datexscanid]/associated_files/ratings/post

  • [scanid]-Faces_ratings2_wheel.log

    • Post run1 scan ratings logfile from scanner/xnat with subject answers to rating questions
  • [scanid]-fearConditioning_run1_NEWARRAY_wheel*.log

    • Run1 scan logfile from scanner/xnat that details the fear conditioning run1 task presentation
  • [scanid]-Faces_ratings_end_wheel.log

    • Post reversal scan ratings logfile from scanner/xnat with subject answers to rating questions
  • [scanid]-fearConditioning_rev_NEWARRAY_wheel*.log

    • Reversal scan logfile from scanner/xnat that details the fear conditioning reversal task presentation
  • [scanid]_run1_array.txt

    • The array file detailing conditioning order (see fear conditioning wiki for more information)
  • [scanid]_run1_faces_data.csv

    • Subject-level conditioning output (which face was aversive and which neutral) from the post run1 ratings script
  • [scanid]_run1_ratings_data.csv

    • Subject-level ratings output from the post run1 ratings script

Note: We do not have the original raw data logfiles for Pre ratings from version1 or version2. However, all of the aggregate processed data from the pre ratings are in the CONTE2 RedCap project.

Pre Scan Ratings:

Question 1: "How friendly or unfriendly does this person seem?"

  • for actor face 1023
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 2: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 3: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 4: "How strong are the emotions you feel when you see this person?"

  • for actor face 1023
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 5: "How friendly or unfriendly does this person seem?"

  • for actor face 1077
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 6: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 7: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 8: "How strong are the emotions you feel when you see this person?"

  • for actor face 1077
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 9: "How friendly or unfriendly does this person seem?"

  • for actor face 1086
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 10: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 11: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 12: "How strong are the emotions you feel when you see this person?"

  • for actor face 1086
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 13: "How friendly or unfriendly does this person seem?"

  • for actor face 1057
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 14: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 15: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 16: "How strong are the emotions you feel when you see this person?"

  • for actor face 1057
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Post Scan Ratings:

Question 1: "How friendly or unfriendly does this person seem?"

  • for actor face 1023
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 2: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 3: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1023
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 4: "How strong are the emotions you feel when you see this person?"

  • for actor face 1023
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 5: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1023
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Question 6: "How friendly or unfriendly does this person seem?"

  • for actor face 1077
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 7: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 8: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1077
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 9: "How strong are the emotions you feel when you see this person?"

  • for actor face 1077
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 10: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1077
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Question 11: "How friendly or unfriendly does this person seem?"

  • for actor face 1086
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 12: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 13: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1086
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 14: "How strong are the emotions you feel when you see this person?"

  • for actor face 1086
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 15: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1086
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Question 16: "How friendly or unfriendly does this person seem?"

  • for actor face 1057
  • valence question
  • Options from "Unfriendly" (far left), to "Neutral" (middle), to "Friendly" (far right)

Question 17: "How positive or negative is the emotional expression on this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 18: "How positive or negative do you feel when you look at this person’s face?"

  • for actor face 1057
  • valence question
  • Options from "Moderately Negative" (far left), to "Neutral" (middle), to "Moderately Positive" (far right)

Question 19: "How strong are the emotions you feel when you see this person?"

  • for actor face 1057
  • arousal question
  • Options from "No Emotion" (far left), to "Moderately Strong" (far right)

Question 20: "During Session 1 this face was presented more often with the following sound: A) Tone, B) Scream, C) Both Equally, D) I don't know"

  • for actor face 1057
  • accuracy question
  • Options -360 (tone), -120 (scream), 120 (both equally), 360 (I don't know)

Post Scan Questionnaire:

Note: For subjects prior to version 3, the coding of the post scan questionnaire in RedCap (where it was collected) was changed so that some of the scores go up to 12. Because of this, the version2 post scan questionnaire questions below should probably not be used, because we are unable to confirm the correct coding of the answers.

“There were two sounds in the experiment, one sounded like a simple tone, the other one sounded like a scream.”

For the “tone” sound:

Question 1: "How loud was it?"

  • Options 1 (Almost too quiet to hear) to 10 (So loud it was uncomfortable)

Question 2: "How unpleasant was it?"

  • Options 1 (Not unpleasant at all) to 10 (Very unpleasant)

For the “scream” sound:

Question 3: "How loud was it?"

  • Options 1 (Almost too quiet to hear) to 10 (So loud it was uncomfortable)

Question 4: "How unpleasant was it?"

  • Options 1 (Not unpleasant at all) to 10 (Very unpleasant)

Question 5: "How sleepy were you during the scan?"

  • Options 1 (Wide awake the whole time) to 10 (Very sleepy the whole time)