# Few-shot Learning - HanjieChen/Reading-List GitHub Wiki

## Few-shot learning
- Generalizing from a Few Examples: A Survey on Few-Shot Learning
- SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning
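The nearest-centroid baseline that SimpleShot revisits is simple enough to sketch. Below is a minimal, hedged version assuming feature extraction has already been done by a pretrained backbone; it applies the paper's L2-normalization and classifies each query by its nearest class centroid.

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Classify queries by nearest class centroid in feature space.

    Minimal sketch of the SimpleShot-style baseline: L2-normalize
    features, average the few support examples per class into a
    centroid, and assign each query to the closest centroid.
    """
    def l2n(x):
        # L2-normalize each feature vector ("L2N" in the paper).
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    support_feats = l2n(np.asarray(support_feats, dtype=float))
    query_feats = l2n(np.asarray(query_feats, dtype=float))
    labels = np.asarray(support_labels)

    classes = np.unique(labels)
    # One centroid (mean feature) per class from the support set.
    centroids = np.stack(
        [support_feats[labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every centroid.
    dists = np.linalg.norm(
        query_feats[:, None, :] - centroids[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]
```

The paper also reports a centered variant (CL2N, subtracting the mean base-class feature before normalizing), omitted here for brevity.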
## Benchmark datasets
- Few-shot Natural Language Generation for Task-Oriented Dialog
- Few-Shot NLG with Pre-Trained Language Model
## Data augmentation
- AugNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation
- Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation
- Data Augmentation Approaches in Natural Language Processing: A Survey
- C2C-GenDA: Cluster-to-Cluster Generation for Data Augmentation of Slot Filling
- Data Augmentation for Spoken Language Understanding via Pretrained Language Models
- Velocidapter: Task-oriented Dialogue Comprehension Modeling Pairing Synthetic Text Generation with Domain Adaptation
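As a concrete (if deliberately simple) example of the token-level operations covered in augmentation surveys, here is a hedged sketch of random-swap and random-deletion perturbations; the papers above mostly use stronger model-based augmentation (self-training, pretrained LMs), for which this is only a stand-in.

```python
import random

def augment(tokens, p_delete=0.1, n_swaps=1, seed=0):
    """Generate one perturbed copy of a token sequence.

    Generic token-level augmentation sketch (random swap + random
    deletion). Illustrative only; not the method of any specific
    paper listed above.
    """
    rng = random.Random(seed)
    out = list(tokens)
    # Random swap: exchange two positions to vary word order.
    for _ in range(n_swaps):
        if len(out) >= 2:
            i, j = rng.sample(range(len(out)), 2)
            out[i], out[j] = out[j], out[i]
    # Random deletion: drop each token with probability p_delete,
    # but never return an empty sequence.
    kept = [t for t in out if rng.random() > p_delete]
    return kept or out
```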
## Data selection
- On Training Instance Selection for Few-Shot Neural Text Generation
- Gradual Fine-Tuning for Low-Resource Domain Adaptation
## Self-training
- Revisiting Self-Training for Neural Sequence Generation
- Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data
- Getting to Production with Few-shot Natural Language Generation Models
- A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models
- Neural Data-to-Text Generation with LM-based Text Augmentation
- Data-to-Text Generation with Iterative Text Editing
- Semi-Supervised Neural Text Generation by Joint Learning of Natural Language Generation and Natural Language Understanding Models
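The self-training recipe shared by the papers above can be sketched in a few lines: fit on labeled data, pseudo-label confident unlabeled points, fold them into the training set, and repeat. In this hedged toy version, a nearest-centroid classifier and a distance-margin confidence rule stand in for a neural generator and its scoring model.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=3, margin=0.5):
    """Iterative self-training with a nearest-centroid base model.

    Illustrative sketch only: the centroid classifier and the
    margin-based confidence filter are stand-ins for a real model
    and pseudo-label selection heuristic.
    """
    X_lab, y_lab = np.asarray(X_lab, float), np.asarray(y_lab)
    X_unlab = np.asarray(X_unlab, float)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        classes = np.unique(y_lab)
        centroids = np.stack([X_lab[y_lab == c].mean(0) for c in classes])
        d = np.linalg.norm(X_unlab[:, None] - centroids[None], axis=-1)
        order = d.argsort(axis=1)
        rows = np.arange(len(d))
        best = d[rows, order[:, 0]]
        # Confident = clearly closer to the best centroid than the
        # runner-up (all points count as confident with one class).
        second = d[rows, order[:, 1]] if len(classes) > 1 else best + margin + 1
        conf = (second - best) > margin
        if not conf.any():
            break
        # Move confident pseudo-labeled points into the labeled pool.
        X_lab = np.concatenate([X_lab, X_unlab[conf]])
        y_lab = np.concatenate([y_lab, classes[order[conf, 0]]])
        X_unlab = X_unlab[~conf]
    return X_lab, y_lab
```

Production variants (per the papers above) add noise injection, LM-based filtering, or joint NLG/NLU consistency checks in place of the margin rule.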
## Prompt design
- Few-Shot Text Generation with Pattern-Exploiting Training
- Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
- The Power of Scale for Parameter-Efficient Prompt Tuning
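The core idea of parameter-efficient prompt tuning is that only a small learned prompt is updated while the pretrained model stays frozen. Here is a hedged toy sketch: a frozen linear map and squared loss stand in for a frozen LM and its objective, and the "soft prompt" is a trainable vector occupying the first few input dimensions.

```python
import numpy as np

def tune_prompt(W, x, y, prompt_len=2, lr=0.1, steps=50, seed=0):
    """Soft prompt tuning against a frozen linear 'model'.

    Toy sketch: only the prompt vector p is trained by gradient
    descent; the model weights W are never updated. W, x, y are
    stand-ins for a frozen LM, its input embeddings, and targets.
    """
    assert prompt_len < W.shape[1]
    rng = np.random.default_rng(seed)
    # Trainable prompt, prepended to the (fixed) input.
    p = rng.normal(scale=0.1, size=prompt_len)
    for _ in range(steps):
        z = np.concatenate([p, x])               # prompt + frozen input
        err = W @ z - y                          # prediction error
        grad_p = 2 * W[:, :prompt_len].T @ err   # gradient w.r.t. prompt only
        p -= lr * grad_p                         # W stays frozen
    return p
```

In the actual method, the prompt is a sequence of trainable token embeddings prepended to the embedded input, trained by backprop through the frozen transformer; the mechanics above mirror that at toy scale.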
## Feature augmentation
## Few-shot shortcuts
- Rectifying the Shortcut Learning of Background for Few-Shot Learning
- Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
- Pre-training also Transfers Non-Robustness
- Issues with Entailment-based Zero-shot Text Classification
- Calibrate Before Use: Improving Few-Shot Performance of Language Models
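The "calibrate before use" recipe is concrete enough to sketch: estimate the model's label bias from its output distribution on a content-free input (e.g. "N/A"), then divide that bias out and renormalize. This corresponds to the paper's diagonal calibration (W = diag(p_cf)^-1, b = 0); the sketch below assumes label probabilities have already been extracted from the model.

```python
import numpy as np

def calibrate(probs, content_free_probs):
    """Contextual calibration of few-shot label probabilities.

    probs: model's label probabilities for a real input.
    content_free_probs: its probabilities for a content-free input,
    used as an estimate of the prompt-induced bias.
    """
    probs = np.asarray(probs, float)
    p_cf = np.asarray(content_free_probs, float)
    # Divide out the content-free bias, then renormalize to a distribution.
    scores = probs / p_cf
    return scores / scores.sum(axis=-1, keepdims=True)
```

For example, if the prompt biases the model toward the first label (content-free probabilities [0.75, 0.25]), a raw prediction of [0.6, 0.4] calibrates to roughly [0.33, 0.67], flipping the predicted label.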
## Optimization-based approaches
- What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
- Meta-Learning for Low-resource Natural Language Generation in Task-oriented Dialogue Systems
- Towards Low-Resource Semi-Supervised Dialogue Generation with Meta-Learning