Chatbot API - davidmarsoni/llog GitHub Wiki
The chatbot's APIs are the main gateway for interacting with the LLMs, querying them against the knowledge base with the chosen configuration.
This section will cover the following endpoints:
- `/chatbot/chat` - Renders the chatbot page
- `/chatbot/check-message-length` - Checks the length of the user's message
- `/chatbot/get-query-response` - Gets a response from the LLM based on user input
- `/chatbot/get-agent-response` - Gets a response from the agent for complex tasks
Renders the page for interacting with the chatbot.
QUERY: GET /chatbot/chat
ARGS:
- Template: 'chatbot.html', the page to render
- Indexes: a collection of indexes (knowledge bases) to query from
- Messages: a list of all the messages exchanged between the user and the LLM
RETURN: The rendered chatbot page
Note
The chatbot page is built with HTMX, which allows dynamic content loading and re-rendering without full page reloads. As a result, many of these APIs send back HTML snippets to be injected into the page.
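As a rough illustration of what this handler does, the sketch below mimics rendering `chatbot.html` with its context (indexes and message history). The template markup and the `render_chat_page` helper are hypothetical, not from the repository; a real Flask app would call `render_template` instead.

```python
from string import Template

# Illustrative stand-in for the 'chatbot.html' template (markup is invented).
CHAT_TEMPLATE = Template("<h1>Chatbot</h1><p>$n_indexes indexes, $n_messages messages</p>")

def render_chat_page(indexes, messages):
    """Hypothetical sketch of GET /chatbot/chat: render the page with the
    available indexes (knowledge bases) and the message history."""
    return CHAT_TEMPLATE.substitute(n_indexes=len(indexes), n_messages=len(messages))

html = render_chat_page(["docs", "notes"], [])
```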
This endpoint checks the length of the message sent by the user, which limits the number of tokens sent to the LLM. The default limit is 2000 characters.
QUERY: GET /chatbot/check-message-length
ARGS:
- Message: the message sent by the user
RETURN: An HTML snippet showing the ratio between the message length and the limit.
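A minimal sketch of this check, assuming only the documented 2000-character limit; the exact markup of the returned snippet and the field casing are assumptions, not taken from the repository.

```python
MESSAGE_LIMIT = 2000  # default limit stated in the docs

def check_message_length(message: str) -> str:
    """Hypothetical sketch of GET /chatbot/check-message-length: return an
    HTML snippet comparing the message length to the limit."""
    ratio = len(message) / MESSAGE_LIMIT
    # The span markup below is illustrative; the real snippet may differ.
    return f'<span class="char-count">{len(message)}/{MESSAGE_LIMIT} ({ratio:.0%})</span>'

snippet = check_message_length("a" * 500)
```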
This endpoint handles basic interaction with the LLM: it sends the user's message and returns the LLM's response.
QUERY: POST /chatbot/get-query-response
ARGS:
- Message: the message sent by the user
- lstMessageHistory[]: a list of all the messages exchanged between the user and the LLM
- temperature: (Optional) the temperature of the LLM
- maxTokens: (Optional) the maximum number of tokens to be generated by the LLM
- useRag: (Optional) whether to use RAG (retrieval-augmented generation)
- mode: (Optional) the mode of the LLM (not yet implemented)
- listOfIndexes[]: a list of indexes to query from
RETURN:
- The rendered template of the 'chat_messages' component
- user_message: the message sent by the user
- ai_response: the response generated by the LLM
- response_type: the type of the response, 'AI' by default, or 'Error' if an error was raised
- ai_only: a boolean to prevent re-rendering of the user message (quirk of the chatbot page)
- response_element_id: the id of the response element to be updated
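The request above is a form POST with repeated fields for the list parameters. The helper below sketches how such a payload could be assembled on the client side; the lowercase `message` field casing and the string encodings of the optional values are assumptions.

```python
from urllib.parse import urlencode

def build_query_payload(message, history, indexes,
                        temperature=None, max_tokens=None, use_rag=None):
    """Assemble form fields for POST /chatbot/get-query-response.
    List parameters use the documented '[]' field names; optional
    parameters are omitted when unset."""
    fields = [("message", message)]
    fields += [("lstMessageHistory[]", m) for m in history]
    fields += [("listOfIndexes[]", i) for i in indexes]
    if temperature is not None:
        fields.append(("temperature", str(temperature)))
    if max_tokens is not None:
        fields.append(("maxTokens", str(max_tokens)))
    if use_rag is not None:
        fields.append(("useRag", "true" if use_rag else "false"))
    return urlencode(fields)

body = build_query_payload("Hello", ["Hi"], ["my-index"], temperature=0.7)
```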
This API is used to get a response from the agent, which can perform more complex tasks.
QUERY: POST /chatbot/get-agent-response
ARGS:
- Message: the message sent by the user
- lstMessageHistory[]: a list of all the messages exchanged between the user and the LLM
- temperature: (Optional) the temperature of the LLM
- maxTokens: (Optional) the maximum number of tokens to be generated by the LLM
- modules: specific modules or tools the agent should use (implementation specific)
- useRag: (Optional) whether to use RAG (retrieval-augmented generation)
- mode: (Optional) the mode of the LLM (not yet implemented)
- listOfIndexes[]: a list of indexes to query from
RETURN:
- The rendered template of the 'chat_messages' component
- user_message: the message sent by the user
- ai_response: the response generated by the agent
- response_type: the type of the response, by default 'AI', but can be set to 'Error' if an error is raised
- ai_only: a boolean to prevent re-rendering of the user message
- response_element_id: the id of the response element to be updated
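The return values above map onto the `chat_messages` component. The sketch below illustrates how that context plausibly drives the rendered snippet, in particular how `ai_only` suppresses re-rendering of the user message and how `response_type` switches to an error style; the markup itself is invented, only the field names mirror the docs.

```python
def render_chat_messages(user_message, ai_response, response_type="AI",
                         ai_only=False, response_element_id="response-1"):
    """Hypothetical sketch of the 'chat_messages' component returned by
    POST /chatbot/get-agent-response."""
    parts = []
    if not ai_only:  # ai_only=True skips re-rendering the user's message
        parts.append(f'<div class="user">{user_message}</div>')
    css = "error" if response_type == "Error" else "ai"
    # response_element_id lets HTMX target this element for updates.
    parts.append(f'<div id="{response_element_id}" class="{css}">{ai_response}</div>')
    return "\n".join(parts)

snippet = render_chat_messages("Summarize my notes", "Here is a summary...", ai_only=True)
```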