Robust Machine Comprehension Models via Adversarial Training
By Yicheng Wang and Mohit Bansal.
TL;DR :)
The authors train a state-of-the-art QA model (BiDAF + Self-Attention + ELMo) on variants of SQuAD that include adversarial examples generated in different ways. They find that training on adversarially augmented versions of SQuAD increases robustness (the model is not easily tricked by distractors), and that priming the input with semantic features further improves performance.
1. Introduction
In 2017, Jia and Liang introduced the AddSent algorithm, which augments the base SQuAD dataset with adversarial examples specifically designed to fool QA models. Training on AddSent data alone proved insufficient because the "specificity of the AddSent algorithm along with the lack of naturally-occurring counterexamples allow models to learn superficial clues regarding what is a ‘distractor’ and subsequently ignore it; thus significantly limiting their robustness." Wang and Bansal instead introduce AddSentDiverse, which adds trickier, more varied distractors to push the model toward "deeper" learning.
The original AddSent algorithm by Jia and Liang is a five-step process (a minimal code sketch follows the list):
- Replace parts of the question with related but different words.
- Generate a fake answer that matches the type of the original answer. Example: "Which is the biggest airport in LA?" Correct answer: LAX --> fake answer: Burbank.
- Combine the fake question and answer into a distractor statement ("The biggest airport in LA is Burbank").
- Manual review to make sure distractor sentences are grammatically correct.
- Add the distractor sentence to the end of the paragraph/context.
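Below is a minimal sketch of these steps, assuming toy substitution tables and a naive declarative rewrite; the function names and word lists are illustrative, not the authors' implementation (the real pipeline uses richer word-substitution resources and manual grammar review).

```python
# Illustrative sketch of the AddSent distractor pipeline; not the authors' code.
# The substitution tables and the declarative rewrite below are toy assumptions.

RELATED_WORDS = {"biggest": "smallest", "LA": "Chicago"}   # step 1 substitutions
FAKE_ANSWERS = {"LAX": "Burbank"}                          # step 2: same answer type


def perturb_question(question: str) -> str:
    """Step 1: replace parts of the question with related but different words."""
    tokens = question.rstrip("?").split()
    return " ".join(RELATED_WORDS.get(tok, tok) for tok in tokens)


def fake_answer(original_answer: str) -> str:
    """Step 2: pick a fake answer of the same type as the original answer."""
    return FAKE_ANSWERS.get(original_answer, "UNKNOWN")


def make_distractor(question: str, answer: str) -> str:
    """Step 3: combine the perturbed question and fake answer into a statement.
    Step 4 (manual grammar review) is omitted in this sketch."""
    q = perturb_question(question).replace("Which is", "").strip()
    statement = q[0].upper() + q[1:]
    return statement + " is " + fake_answer(answer) + "."


def add_distractor(context: str, question: str, answer: str) -> str:
    """Step 5: append the distractor sentence to the end of the context."""
    return context + " " + make_distractor(question, answer)


if __name__ == "__main__":
    ctx = "LAX is the biggest airport in LA."
    print(add_distractor(ctx, "Which is the biggest airport in LA?", "LAX"))
    # -> "LAX is the biggest airport in LA. The smallest airport in Chicago is Burbank."
```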
AddSentDiverse is based on AddSent but with several key differences (a rough sketch follows the list):
- Random Distractor Placement: insert the distractor at varying positions instead of a fixed one (e.g., only at the end of the paragraph).
- Dynamic Fake Answer Generation: choose the replacement word in step 2 of AddSent from a larger bank of candidates (instead of always Burbank, the fake answer can be sampled from many other locations such as John Wayne, Long Beach, Fullerton, etc.).
- Semantic Feature Enhanced Model: augment the input with per-token feature vectors indicating synonym and antonym relations between context and question words.
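Here is a rough sketch of the three modifications, assuming a toy answer bank, placement at sentence boundaries, and simplified synonym/antonym indicator features; all names and data are illustrative, not the authors' implementation.

```python
# Illustrative sketch of the AddSentDiverse modifications; not the authors' code.
# The answer bank, placement logic, and feature scheme are simplified assumptions.
import random

ANSWER_BANK = {"airport": ["Burbank", "John Wayne", "Long Beach", "Fullerton"]}


def dynamic_fake_answer(answer_type: str) -> str:
    """Sample the fake answer from a larger candidate bank instead of a fixed choice."""
    return random.choice(ANSWER_BANK[answer_type])


def insert_distractor_randomly(context: str, distractor: str) -> str:
    """Place the distractor at a random sentence boundary, not only at the end."""
    sentences = [s.strip() for s in context.rstrip(".").split(". ")]
    pos = random.randint(0, len(sentences))
    sentences.insert(pos, distractor.rstrip("."))
    return ". ".join(sentences) + "."


def semantic_indicator_features(context_tokens, question_tokens, synonyms, antonyms):
    """Per-token [synonym, antonym] indicator features w.r.t. the question words;
    a simplified stand-in for the paper's semantic feature enhancement."""
    q_words = set(question_tokens)
    feats = []
    for tok in context_tokens:
        is_syn = tok in q_words or any(s in q_words for s in synonyms.get(tok, []))
        is_ant = any(a in q_words for a in antonyms.get(tok, []))
        feats.append([int(is_syn), int(is_ant)])
    return feats


if __name__ == "__main__":
    distractor = "The smallest airport in Chicago is " + dynamic_fake_answer("airport") + "."
    print(insert_distractor_randomly("LAX is the biggest airport in LA.", distractor))
    print(semantic_indicator_features(
        ["LAX", "is", "the", "biggest", "airport"],
        ["Which", "is", "the", "largest", "airport"],
        synonyms={"biggest": ["largest"]},
        antonyms={"biggest": ["smallest"]},
    ))
    # -> [[0, 0], [1, 0], [1, 0], [1, 0], [1, 0]]
```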
2. Experiment and Results
The authors used the BSAE (BiDAF + Self-Attention + ELMo) model for the QA tests. They trained the model separately on different datasets: base SQuAD, AddSent-enhanced, AddSentPrepend (distractor prepended at the beginning of the paragraph), AddSentRandom (distractors placed at random positions in the text), and AddSentDiverse.
Results: AddSentDiverse boosts performance significantly across all adversarial datasets ==> indicating generally enhanced robustness.
| Training | Original-SQuAD-Dev | AddSent | AddSentPrepend | AddSentRandom | AddSentMod | Average |
|---|---|---|---|---|---|---|
| Original-SQuAD | 84.65 | 42.45 | 41.46 | 40.48 | 41.96 | 50.20 |
| AddSent | 83.76 | 79.55 | 51.96 | 59.03 | 46.85 | 64.23 |
| AddSentDiverse | 83.49 | 76.95 | 77.45 | 76.02 | 77.06 | 78.19 |

Table 1: F1 performance of the BSAE model trained and tested on different regular/adversarial datasets.
| Training | AddSent | AddSentPrepend | Average |
|---|---|---|---|
| InsFirst | 60.22 | 79.81 | 70.02 |
| InsLast | 79.54 | 51.96 | 65.75 |
| InsMid | 74.74 | 74.33 | 74.54 |
| InsRandom | 76.33 | 77.38 | 76.85 |

Table 2: F1 performance of the BSAE model trained on datasets with different distractor placement strategies.