PORTAGE shared Training Models
Up: PortageII Previous: WordAlignmentFormats Down: LanguageModels
'''Note:''' This section of the user manual presents all the different models you can use, but not how best to use them. The steps required to train PORTAGE shared following our current recommendations are automated in our experimental framework; see tutorial.pdf in the framework for details.
Training: Constructing Models
Here are the models you need to build to run PORTAGE shared:
- Language models:
  - Training an LM using SRILM
  - Training an LM using MITLM
  - Training an LM using IRSTLM
  - The BinLM format
  - The TPLM format
  - Dynamic Mapping LM
  - Open Vocabulary LM
  - Mixture Model LM
  - New! Coarse LM
  - New! Coarse BiLM
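The pages above all cover variants of one task: estimating an n-gram language model over the target language. As a minimal illustration of the underlying idea (not the SRILM/MITLM/IRSTLM tooling itself, and not PORTAGE code), here is an add-one-smoothed bigram model sketched in plain Python; all names are hypothetical:

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Estimate an add-one (Laplace) smoothed bigram model.

    sentences: list of tokenized sentences (lists of strings).
    Returns a function prob(w, prev) giving P(w | prev).
    """
    uni, bi = Counter(), Counter()
    for sent in sentences:
        # Pad with sentence-boundary markers, as n-gram toolkits do.
        toks = ["<s>"] + sent + ["</s>"]
        uni.update(toks[:-1])            # history counts
        bi.update(zip(toks, toks[1:]))   # bigram counts
    vocab = set(uni) | {"</s>"}

    def prob(w, prev):
        # Add one to every bigram count so unseen pairs get mass.
        return (bi[(prev, w)] + 1) / (uni[prev] + len(vocab))

    return prob
```

Real toolkits use much better smoothing (e.g. modified Kneser-Ney in SRILM) and write the result in ARPA format, which PORTAGE can then convert to BinLM or TPLM.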
- Translation models (phrase tables):
  - Phrase tables based on IBM2 word alignment models
  - Phrase tables based on HMM word alignment models
  - Phrase tables based on IBM4 word alignment models
  - Merged phrase tables
  - The TPPT format
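Whichever word alignment model is used (IBM2, HMM, or IBM4), phrase tables are built by extracting phrase pairs that are consistent with the word alignment: no word inside the pair may be aligned to a word outside it. A sketch of that standard consistency check in plain Python (illustrative only, not PORTAGE's implementation; all names are hypothetical):

```python
def extract_phrases(src, tgt, alignment, max_len=4):
    """Extract phrase pairs consistent with a word alignment.

    src, tgt: tokenized source and target sentences (lists of strings).
    alignment: set of (i, j) pairs meaning src[i] is aligned to tgt[j].
    Returns a set of (source_phrase, target_phrase) string pairs.
    """
    phrases = set()
    for e1 in range(len(tgt)):
        for e2 in range(e1, min(e1 + max_len, len(tgt))):
            # Source positions aligned into the target span [e1, e2].
            f_points = [i for (i, j) in alignment if e1 <= j <= e2]
            if not f_points:
                continue
            f1, f2 = min(f_points), max(f_points)
            if f2 - f1 + 1 > max_len:
                continue
            # Consistency: no source word in [f1, f2] may be aligned
            # to a target word outside [e1, e2].
            if any(f1 <= i <= f2 and not (e1 <= j <= e2)
                   for (i, j) in alignment):
                continue
            phrases.add((" ".join(src[f1:f2 + 1]),
                         " ".join(tgt[e1:e2 + 1])))
    return phrases
```

The extracted pairs are then scored (relative frequencies, lexical weights) to produce the phrase table, which can be merged with others or packed into the TPPT format.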
- Hierarchical Lexicalized Distortion Models
- Tightly Packed Tries