LLM - rFronteddu/general_wiki GitHub Wiki

Temperature

The temperature parameter controls how consistent the model's answers are for the same prompt: a higher temperature introduces more randomness into the responses, while a lower temperature makes them more deterministic.
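The effect of temperature can be sketched with plain softmax sampling. This is an illustrative sketch, not any particular provider's implementation: it assumes the model produces a list of logits, divides them by the temperature, and samples from the resulting distribution. Low temperature sharpens the distribution toward the most likely token; high temperature flattens it.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample a token index from logits softened by a temperature."""
    # Divide logits by temperature: T < 1 sharpens, T > 1 flattens the distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample from the cumulative distribution: higher temperature means
    # more uniform probabilities, hence more varied choices.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1
```

With a very low temperature the highest logit wins almost every time, which is why low-temperature responses are nearly identical across runs.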

Multi-turn prompts

  • LLMs are stateless: by default, they don't remember your previous interactions with them.
  • To keep track of a conversation, you need to pass the LLM all prior prompts and responses with each new request.
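The bookkeeping above can be sketched as a helper that rebuilds the full message list on every call. The `system`/`user`/`assistant` role names follow the chat-message convention used by common LLM APIs; the function itself is a hypothetical illustration, not a specific client library.

```python
def build_messages(history, user_prompt, system_prompt="You are a helpful assistant."):
    """Assemble the full message list to send with every request.

    Because the model is stateless, the entire history of prior prompts
    and responses must be resent alongside each new user turn.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

After each model response, you would append the new (prompt, response) pair to `history` so the next call carries the whole conversation.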

Prompt Engineering

You can guide the model to improve its response for your task through specific instructions or by including different kinds of information:

  • Providing examples of the task you are trying to carry out
    • With few-shot prompting you provide not only the structure but also two or more worked examples, prompting the model to infer the task from the structure and the examples in your prompt
  • Specifying how to format responses
  • Requesting that the model assume a particular "role or persona" when creating its response
    • Roles/tones give the LLM context about what type of answer is desired
  • Summarization
    • A common use case: condensing a large text into a shorter one
  • Including additional information or data for the model to use in its response
    • provide new information to the model that it couldn't know at training time
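Several of the techniques above can be combined in a single prompt. A minimal sketch, assuming a simple text-completion interface: the `few_shot_prompt` helper and its `Input:`/`Output:` layout are illustrative choices, not a standard format.

```python
def few_shot_prompt(task, examples, query, persona=None):
    """Compose a few-shot prompt: optional persona, task instruction,
    worked examples, then the new input for the model to complete."""
    parts = []
    if persona:
        # Role/persona gives the model context about the desired answer.
        parts.append(f"You are {persona}.")
    parts.append(task)
    # Two or more examples let the model infer the task and output format.
    for source, target in examples:
        parts.append(f"Input: {source}\nOutput: {target}")
    # End with the query and a trailing "Output:" for the model to fill in.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)
```

For instance, a sentiment-classification prompt built this way ends with an unanswered `Output:` line, so the model's completion is the label itself.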

Chain-of-thought prompting

  • Instructing a model to tackle problems by breaking them down into smaller steps.
  • You can do this by:
    • asking the model to think step by step
    • asking the model to explain its reasoning
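The two instructions above can be wrapped around any question. A minimal sketch; the exact wording ("Let's think step by step") is one commonly used phrasing, not the only one that works.

```python
def chain_of_thought_prompt(question):
    """Wrap a question with instructions to break the problem into
    smaller steps and explain the reasoning before answering."""
    return (
        f"{question}\n\n"
        "Let's think step by step. Break the problem into smaller parts, "
        "explain your reasoning for each part, and then state the final "
        "answer on a line starting with 'Answer:'."
    )
```

Asking for the final answer on a marked line also makes the response easier to parse programmatically.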