# Usage Guide
This guide provides detailed instructions on how to use the services provided by the LLM Agentic Tool Mesh platform, both from code and through examples that demonstrate how to create your first tool.
## Prerequisites
Before proceeding, ensure that you have completed all installation steps and that all necessary dependencies are installed. If you need assistance, refer to the Installation Guide.
## Using Library Services
LLM Agentic Tool Mesh provides a self-service platform with several packages designed to meet various needs:
- **System Package**: Includes services for managing tools on both the client and server sides, as well as utilities like logging.
- **Chat Package**: Offers services for creating chat applications, including prompt management, LLM model integration, and memory handling.
- **Agents Package**: Provides agentic services to create a Reasoning Engine or a Multi-Agent task force.
- **RAG Package**: Contains services for injecting, storing, and retrieving data using Retrieval-Augmented Generation (RAG). This package must be declared explicitly during installation (via the `rag` extra).
You can install the relevant packages with pip from this repository:

```bash
pip install 'agentic-python[rag]'
```
Once installed, you can import and use the packages in your code. Below is an example that demonstrates how to initialize an LLM model and invoke it:
```python
from athon.chat import ChatModel
from langchain.schema import HumanMessage, SystemMessage

# Example configuration for the Chat Model
LLM_CONFIG = {
    'type': 'LangChainChatOpenAI',
    'api_key': 'your-api-key-here',
    'model_name': 'gpt-4o',
    'temperature': 0.7
}

# Initialize the Chat Model with the LLM configuration
chat = ChatModel.create(LLM_CONFIG)

# Define the prompts
prompts = [
    SystemMessage(content="Convert the message to pirate language"),
    HumanMessage(content="Today is a sunny day and the sky is blue")
]

# Invoke the model with the prompts
result = chat.invoke(prompts)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.content}")
else:
    print(f"ERROR:\n{result.error_message}")
```
You can find more information about the platform services in the Software Architecture guide.
## Running Notebooks

To explore and experiment with LLM Agentic Tool Mesh, we provide interactive Jupyter notebooks located in the `src/notebooks` directory:
- **Platform Services** (`src/notebooks/platform_services`): This folder contains three notebooks that let you test key LLM functionalities:
  - **Chat Service**: Try out basic conversational interactions with an LLM.
  - **Retrieval-Augmented Generation (RAG) Service**: Experiment with retrieval-based techniques to improve LLM responses.
  - **Agent Service**: Explore how the agent framework interacts with tools dynamically.
- **Meta-Prompting** (`src/notebooks/meta_prompting`): This section contains a notebook that enables you to automatically create an eCustomer Support Service Agent using the meta-prompting technique. By providing an operational manual and a test case, you can generate an agent and evaluate its accuracy in handling queries.
## Running the Examples to Create a Mesh of LLM Tools
We have developed a series of web applications and tools, complete with examples, to demonstrate the capabilities of LLM Agentic Tool Mesh.
### Web Applications
- **Orchestrator** (`src/platform/orchestrator`): The orchestrator is an OpenAI-compatible backend that exposes its functionality via the standard `/v1/chat/completions` and `/v1/models` endpoints. It can be integrated with tools like OpenWebUI by configuring a local API connection. Instead of listing generic model names, it dynamically exposes configured projects as models, allowing project-specific routing and tool access. Each project can define its own tools, memory, and reasoning logic. For a detailed breakdown of the architecture, see `docs/design/orchestrator/architecture.md`.
- **Admin Panel** (`src/platform/app_backpanel`): The admin panel enables the configuration of basic LLM tools that perform actions via LLM calls. It allows you to set the system prompt, select the LLM model, and define the LLM tool interface, simplifying the configuration process.
- **Agentic Memory** (`src/platform/app_memory`): This application uses an LLM to categorize messages as either personal or project-related, storing them in the appropriate memory storage. Different chatbots can access and utilize the project memory, facilitating information sharing and collaboration within teams.
### Tools
- **Chat Service** (`src/platform/chat`): An intelligent and adaptive chat assistant designed to provide helpful, context-aware responses. It supports multiple expert personas, such as technical specialists or playful characters, and maintains memory across interactions for a more personalized experience.
- **Temperature Finder** (`src/platform/tool_api`): Fetches and displays the current temperature for a specified location by utilizing a public API.
- **Temperature Analyzer** (`src/platform/tool_analyzer`): Generates code using a language model to analyze historical temperature data and create visual charts for better understanding.
- **Telco Expert** (`src/platform/tool_rag`): A RAG tool that provides quick and accurate access to 5G specifications.
- **OpenAPI Manager** (`src/platform/tool_agents`): A multi-agent tool that reads OpenAPI documentation and provides users with relevant information based on their queries.
You can run the tools and web applications individually or use the provided `src/infra/scripts/start_examples.sh` script to run them all together. Once everything is started, you can connect to the orchestrator endpoint at `https://127.0.0.1:5001/v1` and the admin panel at `https://127.0.0.1:5011/`.
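Because the orchestrator implements the standard OpenAI endpoints, any OpenAI-compatible client can connect to it. Below is a minimal sketch using the official `openai` Python package; the project name `my-project` and the API key value are placeholders that depend on your configuration:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local orchestrator endpoint
client = OpenAI(base_url="https://127.0.0.1:5001/v1", api_key="placeholder")

# The orchestrator exposes configured projects as models (/v1/models)
for model in client.models.list():
    print(model.id)

# Send a chat request to a specific project (/v1/chat/completions)
response = client.chat.completions.create(
    model="my-project",  # hypothetical project name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```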
## Configuring the LLM Model for Different Environments
Depending on whether you are using ChatGPT or other models, you will need to set the LLM parameters accordingly in the app and tool configuration files. Below are examples of how to configure the parameters for each environment:
### For ChatGPT

Update the configuration file (e.g., `config.yaml`) with the following settings. The `$ENV{...}` placeholders are resolved from environment variables of the same name, so make sure `OPENAI_API_KEY` is set in your environment:
```yaml
# LLM settings, normally inside the model or llm fields
type: LangChainChatOpenAI
model_name: gpt-4o
api_key: $ENV{OPENAI_API_KEY}
temperature: 0
seed: 42
```
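In code, these same settings map onto the configuration dictionary passed to `ChatModel.create`, as in the earlier example. A small sketch that reads the key from the environment instead of hard-coding it, assuming the factory accepts the same `seed` key that the YAML uses:

```python
import os

from athon.chat import ChatModel

# Mirrors the YAML above; $ENV{OPENAI_API_KEY} becomes an os.environ lookup
LLM_CONFIG = {
    'type': 'LangChainChatOpenAI',
    'model_name': 'gpt-4o',
    'api_key': os.environ['OPENAI_API_KEY'],
    'temperature': 0,
    'seed': 42  # assumed to be accepted here, matching the YAML setting
}

chat = ChatModel.create(LLM_CONFIG)
```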
### For internal ChatHPE

Update the configuration file (e.g., `config.yaml`) with the following settings:
```yaml
# LLM settings, normally inside the model or llm fields
type: LangChainAzureChatOpenAI
azure_deployment: $ENV{HPE_DEPLOYMENT}
api_version: "2023-10-01-preview"
endpoint: $ENV{HPE_ENDPOINT}
api_key: $ENV{HPE_API_KEY}
temperature: 0
seed: 42
```
These changes should be made for all tools and applications. By default, they are set to use ChatGPT. To switch to ChatHPE, simply modify the relevant parameters as shown above.
### Example Changes for All Tools and Apps
Each tool or app configuration file, such as `src/platform/add_chatbot/config.yaml`, can be updated similarly:
```yaml
chat:
  type: LangChainChatOpenAI  # Update to match the specific LLM environment
  system_prompt: $PROMPT{chat_system_prompt}
  model:
    # Include the LLM model configuration details here
```
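For example, dropping the ChatGPT settings from above into the `model` field would look roughly like the following; the exact nesting can vary per tool, so treat this as a sketch rather than a definitive schema:

```yaml
chat:
  type: LangChainChatOpenAI
  system_prompt: $PROMPT{chat_system_prompt}
  model:
    # LLM settings copied from the ChatGPT example above
    type: LangChainChatOpenAI
    model_name: gpt-4o
    api_key: $ENV{OPENAI_API_KEY}
    temperature: 0
    seed: 42
```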
## Creating Your First Tool
If you'd like to create your own tool from a template, detailed instructions are available in the Guide to Creating a New Athon Tool.
## Creating Your First Web App
When creating a new web app, you can build upon the existing examples. Considering that all services are fully parameterized, you have the flexibility to design various user experience panels. For instance, the examples include a chatbot as a user interface and an admin panel for configuring an LLM tool. However, you can also develop web apps to support tasks like deployment or to facilitate experiments aimed at optimizing service parameters for specific objectives.