Questions Emoji - ufal/NPFL095 GitHub Wiki
- What is the purpose of the DeepMoji model? Can you see other uses for this model (not mentioned in the paper)?
- Is the model capable of working with texts longer than 70 tokens? If so, what challenges (if any) does that introduce?
- Does the model (described in Section 3.2) use self-attention (as described in the Transformer paper)? What are the similarities and differences?
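  To make the comparison concrete, here is a minimal NumPy sketch contrasting two generic mechanisms: attention *pooling* with a learned context vector (which collapses a sequence of RNN states into a single vector) and single-head Transformer-style self-attention (which maps a sequence to a new sequence). This is an illustrative simplification, not the paper's exact equations; the function names and shapes are my own.

  ```python
  import numpy as np

  def softmax(x, axis=-1):
      e = np.exp(x - x.max(axis=axis, keepdims=True))
      return e / e.sum(axis=axis, keepdims=True)

  def attention_pooling(H, w):
      """Context-vector attention: one score per timestep, one pooled vector.
      H: (T, d) hidden states; w: (d,) learned attention vector."""
      scores = H @ w           # (T,) one relevance score per timestep
      alpha = softmax(scores)  # attention weights over timesteps
      return alpha @ H         # (d,) weighted average of the states

  def self_attention(H, Wq, Wk, Wv):
      """Transformer-style single-head self-attention: every timestep
      attends to every other, producing a new (T, d) sequence."""
      Q, K, V = H @ Wq, H @ Wk, H @ Wv
      d = K.shape[-1]
      A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (T, T) weight matrix
      return A @ V
  ```

  The key structural difference the question is probing: pooling-style attention produces a single fixed-size summary, while self-attention produces a full sequence of contextualized vectors.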
- Table 3 reports the Top-1 and Top-5 accuracy of four models on a "64-emoji labeling" task. Guess the Top-1 and Top-5 accuracy of the Random and DeepMoji (d=1024) models if, instead of 64, only the 8 emojis listed in Figure 5 (Appendix A.3) were used.
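  As a sanity check for the Random-baseline half of this question, chance-level top-k accuracy can be computed directly: with C equally likely classes, a uniform random ranking places the true label in the top k with probability k/C. The helper name below is my own; this says nothing about what DeepMoji itself would score.

  ```python
  def random_topk_accuracy(num_classes: int, k: int) -> float:
      """Expected top-k accuracy of a uniform random-guess baseline."""
      return k / num_classes

  # 64-emoji setting (as in Table 3) vs. the 8-emoji subset of Figure 5.
  for c in (64, 8):
      top1 = random_topk_accuracy(c, 1)
      top5 = random_topk_accuracy(c, 5)
      print(f"{c} classes: top-1 = {top1:.1%}, top-5 = {top5:.1%}")
  # 64 classes: top-1 = 1.6%, top-5 = 7.8%
  # 8 classes:  top-1 = 12.5%, top-5 = 62.5%
  ```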
- Can we say that DeepMoji has super-human accuracy in predicting the polarity of a tweet (positive vs. negative)? Why, or why not?