Transformers - bryanneliu/Nature-Language-Processing GitHub Wiki
Transformers To-Do List
- https://aman.ai/primers/ai/transformers/
- https://lilianweng.github.io/posts/2018-06-24-attention/
- https://github.com/huggingface/notebooks/tree/main/sagemaker
- https://github.com/huggingface/transformers/tree/main/examples
Representation Learning for NLP
At a high level, all neural network architectures build representations of input data as vectors/embeddings, which encode useful syntactic and semantic information about the data. These latent or hidden representations can then be used to perform something useful, such as classifying an image or translating a sentence. The neural network learns to build better and better representations by receiving feedback, usually via error/loss functions.
Note: presentation (the act of showing or displaying information) vs. representation (depicting or standing for something in a broader sense).
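The feedback loop described above can be sketched in a toy example (illustrative only, not from the linked posts): randomly initialized word embeddings are trained on a tiny made-up sentiment task, and the loss gradient flows back into the embedding vectors so the representations improve along with the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and sentiment data: (word ids, label).
vocab = {"good": 0, "great": 1, "bad": 2, "awful": 3}
data = [([0], 1), ([1], 1), ([2], 0), ([3], 0),
        ([0, 1], 1), ([2, 3], 0)]

dim = 4
emb = rng.normal(scale=0.1, size=(len(vocab), dim))  # word embeddings (the learned representations)
w = rng.normal(scale=0.1, size=dim)                  # classifier weights

def forward(ids):
    x = emb[ids].mean(axis=0)        # sentence representation: mean of word embeddings
    p = 1 / (1 + np.exp(-x @ w))     # sigmoid probability of positive sentiment
    return x, p

lr = 0.5
for _ in range(300):
    for ids, y in data:
        x, p = forward(ids)
        grad = p - y                           # d(cross-entropy loss)/d(logit)
        w -= lr * grad * x                     # update classifier
        emb[ids] -= lr * grad * w / len(ids)   # feedback also reshapes the representations

# After training, same-sentiment words end up with similar embeddings,
# even though similarity was never supervised directly.
```

The key point the paragraph makes is visible here: the embeddings are not hand-designed; they become useful purely because the loss function pushes them that way.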