================================================================
2.1 Errata (Including cutting edge of Coomer modifications)
================================================================
(Anon's Guide to making your own RPG/Game System)
"In this world you have levels and your Attributes increase as you level up, Weapon/Armor/Magic masteries level up when you use them in battle or practice. Status consists of Attributes, Weapon, Armor and Magic masteries. In X world you can use X magic, X Armor, X Weapon. Attributes are Strength, Agility, Dexterity, Intelligence, Wisdom, Charisma, Luck, etc, etc, etc. Strength does X."
Basically, just describe each and every element and the AI will form the system for you. The more you explain, the better it works. Try out the Attributes first, then describe what each one does; I also specified the starting status. Also specify that you can check your status.
(Editing the AI to be less random)
Decreasing the temperature to ~0.2 and going with a top_k of 10 to 20 produces really stable output.
No more falling asleep after vaguely looking at a girl, no random loud noises during sex, no sudden voices behind you. The AI pretty much stops trying to direct the story on its own and just goes with your prompts. Obviously it'll be up to you to direct the story then, but those settings are really nice if you just want the AI going along with your ERP.
If you're using the local version, navigate to generator/gpt2/gpt2_generator.py and find temperature and top_k to edit them.
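As a minimal sketch, assuming the generator class takes temperature and top_k as constructor defaults (the exact names, default values, and surrounding code differ between versions), the edit boils down to something like this:

```python
# Hypothetical excerpt of generator/gpt2/gpt2_generator.py -- the surrounding
# code and original defaults vary by version, but the idea is the same:
# find where temperature and top_k are set and lower them.
class GPT2Generator:
    def __init__(self, generate_num=60, temperature=0.2, top_k=20):
        # temperature=0.2, top_k=20 -> the "stable" behaviour described above
        self.generate_num = generate_num
        self.temperature = temperature
        self.top_k = top_k
```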
Disabling story uploads to Go*gle is trivial: just comment out line 131 in story_manager.py, where it launches gsutil.
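What that change might look like, as a hedged sketch (the real function name, the exact line, and how gsutil gets invoked depend on your checkout of story_manager.py):

```python
# Hypothetical sketch of the upload spot in story_manager.py -- the function
# name and exact line (131 in the guide) vary by version.  The fix is just
# commenting out the statement that shells out to gsutil.
def upload_story(save_path):
    # os.system("gsutil cp " + save_path + " gs://<some-bucket>/")  # disabled: no upload
    pass
```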
(Anon sets the record straight on top_k and temperature)
Here's how temperature and top_k actually work:
When the AI decides how to write, it looks at the past context and generates a list of words that would make sense given this context. Each word in this list has a probability of occurring in the current context, which the AI learned from its training data. It then orders this list of possible words from most likely to least likely.
It then cuts the list off after top_k entries. So if top_k is 1, it will only consider the most likely word. If top_k is 2, it will consider the two most likely words.
It then scales the probabilities by temperature.
A temperature of 1 means "real-life" probabilities for the next word
A temperature > 1 scales probabilities to make less-likely words more likely, i.e. the AI will more often use words that are unlikely in the context
A temperature < 1 makes less-likely words even less likely, i.e. the AI tends to stick to words it has seen a lot in its training data
So a temperature of 0.4 with top_k of 20 will result in the AI considering only the top 20 words, then shifting probabilities to make the most likely (i.e. "stable") words even more likely to occur
Ok nerd, but which values should I use? Start with temperature=0.2 and top_k=20 for more stable and coherent responses. You can lower the temperature to 0.1 to make the AI even more stable, but it might become boring and repetitive.
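If you want the mechanics spelled out, here's a minimal sketch of top-k sampling with temperature in plain numpy. It is not the game's actual sampling code, just an illustration of the two knobs described above:

```python
import numpy as np

def sample_next_word(logits, temperature=0.2, top_k=20):
    """Illustrative top-k sampling with temperature (not the game's real code).

    logits: unnormalized scores for every word in the vocabulary,
            as produced by the language model for the current context.
    """
    # 1. Keep only the top_k highest-scoring words; everything else is discarded.
    top_indices = np.argsort(logits)[-top_k:]
    top_logits = logits[top_indices]

    # 2. Temperature scaling: dividing by a value < 1 sharpens the distribution
    #    (likely words get even more likely); a value > 1 flattens it (unlikely
    #    words get a better chance).
    scaled = top_logits / temperature

    # 3. Softmax over the surviving words, then draw one at random.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(top_indices, p=probs)

# Example: a toy 5-word vocabulary.
vocab = ["the", "sword", "tavern", "suddenly", "dragon"]
logits = np.array([2.0, 1.5, 1.0, -1.0, -2.0])
print(vocab[sample_next_word(logits, temperature=0.2, top_k=3)])
```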
It doesn't matter whether you write in first or second person: the game calls first_to_second_person(), which does what it says, pretty much.
Here's the mapping of words it will convert (a sketch of how the substitution might be applied follows the list):
("I'm", "you're"),
("Im", "you're"),
("Ive", "you've"),
("I am", "you are"),
("was I", "were you"),
("am I", "are you"),
("wasn't I", "weren't you"),
("I", "you"),
("I'd", "you'd"),
("i", "you"),
("I've", "you've"),
("was I", "were you"),
("am I", "are you"),
("wasn't I", "weren't you"),
("I", "you"),
("I'd", "you'd"),
("i", "you"),
("I've", "you've"),
("I was", "you were"),
("my", "your"),
("we","you"),
("we're", "you're"),
("mine","yours"),
("me", "you"),
("us", "you"),
("our", "your"),
("I'll", "you'll"),
("myself", "yourself")