LangChain
When using LLMs (Large Language Models) on their own, we face a lack of external knowledge, incorrect reasoning, and an inability to take action. LangChain provides solutions to these issues through different integrations and off-the-shelf components for specific tasks.
LangChain enables building dynamic, data-aware applications that go beyond what is possible by simply accessing LLMs via API calls.
LLMs cannot connect inferences or adapt responses to new situations. Overcoming these obstacles requires augmenting LLMs with techniques that add true comprehension. Raw model scale alone cannot transform stochastic parroting into beneficial systems. Innovations like prompting, chain-of-thought reasoning, retrieval grounding, and others are needed to educate models.
Some pain points associated with LLMs include:
- Outdated knowledge: LLMs rely solely on their training data. Without external integration, they cannot provide recent real-world information.
- Inability to take action: LLMs cannot perform interactive actions like searches, calculations, or lookups. This severely limits functionality.
- Lack of context: LLMs struggle to incorporate relevant context, such as previous prompts and conversations; they may not remember previously mentioned details or supply the supplementary information needed for coherent and useful responses.
- Hallucination risks: Insufficient knowledge on certain topics can lead to the generation of incorrect or nonsensical content by LLMs if not properly grounded.
- Biases and discrimination: Depending on the data they were trained on, LLMs can exhibit biases that can be religious, ideological, or political in nature.
- Lack of transparency: The behavior of large, complex models can be opaque and difficult to interpret, posing challenges to alignment with human values.
An LLM app is an application that utilizes an LLM to understand natural language prompts and generate responsive text outputs. LLM apps typically have the following components (a minimal sketch follows the list):
- A client layer to collect user input as text queries or decisions.
- A prompt engineering layer to construct prompts that guide the LLM.
- An LLM backend to analyze prompts and produce relevant text responses.
- An output parsing layer to interpret LLM responses for the application interface.
- Optional integration with external services via function APIs, knowledge bases, and reasoning algorithms to augment the LLM’s capabilities.
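As a rough illustration, the layers map onto code as in the sketch below. It uses LangChain's interfaces for convenience and assumes the langchain-openai package and an OPENAI_API_KEY in the environment; the model name is a placeholder.

```python
# Minimal sketch of the four layers of an LLM app.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

user_query = input("Ask a question: ")              # client layer: collect user input

prompt = ChatPromptTemplate.from_template(          # prompt engineering layer
    "You are a helpful assistant. Answer concisely.\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo")             # LLM backend
parser = StrOutputParser()                          # output parsing layer

# Wire the layers together: format the prompt, call the model, parse the reply.
answer = parser.invoke(llm.invoke(prompt.invoke({"question": user_query})))
print(answer)
```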
LLM apps can integrate external services via:
- Function APIs to access web tools and databases.
- Advanced reasoning algorithms for complex logic chains.
- Retrieval augmented generation via knowledge bases.
Retrieval augmented generation (RAG) enhances the LLM with external knowledge (a RAG sketch follows this list). These extensions expand the capabilities of LLM apps beyond the LLM's knowledge alone. For instance:
- Function calling allows parameterized API requests.
- SQL functions enable conversational database queries.
- Reasoning algorithms like chain-of-thought facilitate multi-step logic.
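As a concrete example of the RAG pattern, the sketch below indexes a couple of placeholder documents in a local vector store and lets the model answer only from the retrieved context. It assumes the langchain-openai, langchain-community, and faiss-cpu packages; the document strings are invented for illustration.

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Ingest placeholder documents into a local FAISS index.
docs = [
    "LangChain is a framework for building LLM-powered applications.",
    "RAG grounds model answers in text retrieved from a knowledge base.",
]
retriever = FAISS.from_texts(docs, OpenAIEmbeddings()).as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# Retrieve context for the question, then ask the model to answer from it.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)
print(chain.invoke("What does RAG do?"))
```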
LLM applications are important for several reasons:
- The LLM backend handles language in a nuanced, human-like way without hardcoded rules.
- Responses can be personalized and contextualized based on past interactions.
- Advanced reasoning algorithms enable complex, multi-step inference chains.
- Dynamic responses grounded either in the LLM itself or in up-to-date information retrieved in real time.
LangChain is an open-source Python framework for building LLM-powered applications. It provides developers with modular, easy-to-use components for connecting language models with external data sources and services.
LangChain simplifies the development of sophisticated LLM applications by providing reusable components and pre-assembled chains. Its modular architecture abstracts access to LLMs and external services into a unified interface. Developers can combine these building blocks to carry out complex workflows.
Beyond basic LLM API usage, LangChain facilitates advanced interactions like conversational context and persistence through agents and memory.
In particular, LangChain’s support for chains, agents, tools, and memory allows developers to build applications that can interact with their environment in a more sophisticated way and store and reuse information over time. Its modular design makes it easy to build complex applications that can be adapted to a variety of domains. Support for action plans and strategies improves the performance and robustness of applications.
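For a feel of how the building blocks compose, here is a minimal one-step chain written in the LangChain Expression Language (LCEL) pipe style; it assumes LangChain 0.1+ with the langchain-openai package, and the prompt is invented for illustration.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Explain {concept} in one sentence.")
llm = ChatOpenAI(model="gpt-3.5-turbo")

# The | operator composes prompt, model, and parser into one runnable chain.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"concept": "retrieval augmented generation"}))
```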
The key benefits LangChain offers developers are:
- Modular architecture for flexible and adaptable LLM integrations.
- Chaining together multiple services beyond just LLMs.
- Goal-driven agent interactions instead of isolated calls.
- Memory and persistence for statefulness across executions (see the sketch after this list).
- Open-source access and community support.
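To illustrate memory and persistence, the sketch below uses the classic ConversationChain with a buffer memory, so earlier turns are replayed into each new prompt. This API is deprecated in newer releases in favor of message-history runnables, but it shows the idea.

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI

# The buffer memory stores the running transcript and injects it
# into every subsequent prompt, giving the chain statefulness.
conversation = ConversationChain(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    memory=ConversationBufferMemory(),
)
conversation.predict(input="Hi, my name is Ada.")
print(conversation.predict(input="What is my name?"))  # the model can recall "Ada"
```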
Versions:
- LangChain (Python)
- LangChain.js (TypeScript)
- Langchain.rb (Ruby)
There’s even a chatbot, ChatLangChain, that can answer questions about the LangChain documentation.
Ecosystem
- LangSmith is a platform that complements LangChain by providing robust debugging, testing, and monitoring capabilities for LLM applications. For example, developers can quickly debug new chains by viewing detailed execution traces. Alternative prompts and LLMs can be evaluated against datasets to ensure quality and consistency. Usage analytics empower data-driven decisions around optimizations.
- LlamaHub and LangChainHub provide open libraries of reusable elements to build sophisticated LLM systems in a simplified manner.
- LlamaHub is a library of data loaders, readers, and tools created by the LlamaIndex community. It provides utilities to easily connect LLMs to diverse knowledge sources. The loaders ingest data for retrieval, while tools enable models to read/write to external data services. LlamaHub simplifies the creation of customized data agents to unlock LLM capabilities.
- LangChainHub is a central repository for sharing artifacts like prompts, chains, and agents used in LangChain. Inspired by the Hugging Face Hub, it aims to be a one-stop resource for discovering high-quality building blocks to compose complex LLM apps. The initial launch focuses on a collection of reusable prompts. Future plans involve adding support for chains, agents, and other key LangChain components.
- LangFlow and Flowise are UIs that allow chaining LangChain components in an executable flowchart by dragging sidebar components onto the canvas and connecting them together to create your pipeline.
LangChain unlocks more advanced LLM applications via its combination of components like memory, chaining, and agents.
Chains are a critical concept in LangChain for composing modular components into reusable pipelines.
Prompt chaining is a technique for improving the performance of LangChain applications: multiple prompts are chained together so that each step's output feeds the next, building up a more complex response. More complex chains integrate models with tools like LLMMath, for math-related queries, or SQLDatabaseChain, for querying databases. These are called utility chains, because they combine language models with specific tools.
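A minimal prompt chain in the classic (pre-LCEL) API might look like the sketch below, where SimpleSequentialChain feeds each step's output into the next; the prompts and topic are invented for illustration.

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Step 1: produce a one-sentence outline; step 2: expand it into a paragraph.
outline = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a one-sentence outline for an article about {topic}."))
draft = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Expand this outline into a short paragraph: {outline}"))

pipeline = SimpleSequentialChain(chains=[outline, draft])
print(pipeline.run("vector databases"))
```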
Chains can even enforce policies, like moderating toxic outputs or aligning with ethical principles. LangChain implements chains to make sure the content of the output is not toxic, does not otherwise violate OpenAI’s moderation rules (OpenAIModerationChain), or that it conforms to ethical, legal, or custom principles (ConstitutionalChain).
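As a sketch of such a policy-enforcing chain, OpenAIModerationChain can be run on its own or appended to another chain so that outputs are screened before being returned; it calls OpenAI's moderation endpoint, so an API key is required.

```python
from langchain.chains import OpenAIModerationChain

moderation = OpenAIModerationChain()
# Harmless text passes through unchanged; flagged text is replaced
# with a warning (or raises an error when error=True is set).
print(moderation.run("I will walk my dog this afternoon."))
```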