LangChain
Created: 2023-05-05 13:42
#note
LangChain is a framework that enables quick and easy development of applications that use Large Language Models (LLMs).
The framework is organized into six modules; each module lets you manage a different aspect of the interaction with the LLM.
- Models: Allows you to instantiate and use different models.
- Prompts: The prompt is how we interact with the model to obtain an output from it. Nowadays, knowing how to write an effective prompt is of critical importance. This module helps us manage prompts better, for example by creating templates that we can reuse.
- Indexes: The best models are often those that are combined with some of your textual data, in order to add context or explain something to the model. This module helps us do just that.
- Chains: Often a single API call to an LLM is not enough to solve a task. This module allows other tools to be integrated: for example, a chain can first retrieve information from Wikipedia and then pass that information to the model as input. Multiple tools can be concatenated in this way to solve complex tasks.
- Memory: This module allows us to create a persistent state between calls of a model. Being able to use a model that remembers what has been said in the past will surely improve our application.
- Agents: An agent is an LLM that makes a decision, takes an action, makes an observation about what it has done, and continues in this manner until it can complete its task. This module provides a set of agents that can be used.
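The Prompts and Chains ideas above can be sketched in plain Python. The classes and names below (`PromptTemplate`, `SimpleChain`) are illustrative stand-ins I made up for this note, not the real LangChain API, and the "LLM" is a fake callable so the example runs without an API key:

```python
# Sketch of the Prompts and Chains concepts. Illustrative only,
# not the actual LangChain API.

class PromptTemplate:
    """A reusable prompt with named placeholders."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class SimpleChain:
    """Formats a prompt from a template, then feeds it to a model callable."""
    def __init__(self, template: PromptTemplate, llm):
        self.template = template
        self.llm = llm

    def run(self, **kwargs) -> str:
        return self.llm(self.template.format(**kwargs))

# A fake "LLM" so the example runs offline.
fake_llm = lambda prompt: f"[model answer to: {prompt}]"

template = PromptTemplate("Summarize the topic '{topic}' in one sentence.")
chain = SimpleChain(template, fake_llm)
print(chain.run(topic="LangChain"))
```

The point is the separation of concerns: the template is reusable across inputs, and the chain composes the template step with the model call into one unit.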
Whereas a chain defines an immediate input/output process, the logic of agents allows a step-by-step thought process. The advantage of this step-by-step process is that the LLM can work through multiple reasoning steps or tools to produce a better answer.
To create a custom tool, check here.
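The step-by-step loop of an agent (decision, action, observation, repeat) can be sketched as below. This is a toy illustration of the control flow, not LangChain code: the "LLM" is scripted, the `Action: tool[input]` syntax is my own simplification, and `calculator` is a hypothetical custom tool:

```python
# Sketch of an agent's thought/action/observation loop.
# The scripted "LLM" and the tool are toy stand-ins, not LangChain code.

def calculator(expression: str) -> str:
    """A toy custom tool the agent can call."""
    return str(eval(expression))  # acceptable only for trusted demo input

TOOLS = {"calculator": calculator}

def run_agent(question: str, llm, tools, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]", run the tool, record the observation.
        name, arg = reply.removeprefix("Action: ").rstrip("]").split("[", 1)
        observation = tools[name](arg)
        transcript += f"\n{reply}\nObservation: {observation}"
    return "gave up"

# A scripted model: first it decides to use a tool, then it answers.
replies = iter(["Action: calculator[2 * 21]", "Final Answer: 42"])
print(run_agent("What is 2 * 21?", lambda prompt: next(replies), TOOLS))
```

Unlike a chain, which runs a fixed sequence, the loop lets the model decide at each step whether to call another tool or stop with a final answer.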
Agent types
LangChain offers several types of agents:
- Zero Shot ReAct (zero-shot-react-description): we use this agent to perform "zero-shot" tasks on some input. That means the agent considers one single interaction with the model, so it will have no memory;
- Conversational ReAct (conversational-react-description): We can think of this agent as the same as Zero Shot ReAct agent, but with conversational memory (remember to initialize the memory buffer);
- ReAct Docstore (react-docstore): As before, it uses the ReAct (Synergizing Reasoning and Acting in Language Models) methodology, but it is explicitly built for information search and lookup using a LangChain docstore, which lets us store and retrieve information using traditional retrieval methods;
- Self-Ask With Search (self-ask-with-search): This agent is the first to consider when connecting an LLM with a search engine. The agent will perform searches and ask follow-up questions as often as required to get a final answer.
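The practical difference between the Zero Shot and Conversational variants comes down to whether past exchanges are prepended to the prompt. A sketch of that memory-buffer idea, using made-up names (`ConversationBuffer`, `ask`) and a fake model, not the LangChain classes themselves:

```python
# Sketch of conversational memory: the only difference between the
# "zero shot" and "conversational" calls here is whether past turns
# are prepended to the prompt. Toy code, not LangChain.

class ConversationBuffer:
    """Accumulates past turns so the model can see them on the next call."""
    def __init__(self):
        self.turns = []

    def record(self, user: str, ai: str):
        self.turns.append(f"Human: {user}\nAI: {ai}")

    def as_context(self) -> str:
        return "\n".join(self.turns)

def ask(llm, question: str, memory=None) -> str:
    context = memory.as_context() + "\n" if memory and memory.turns else ""
    answer = llm(context + f"Human: {question}\nAI:")
    if memory is not None:
        memory.record(question, answer)
    return answer

# An echo "LLM" that just reports how much history it received.
fake_llm = lambda prompt: f"(saw {prompt.count('Human:')} human turn(s))"

memory = ConversationBuffer()
print(ask(fake_llm, "Hi", memory))            # sees 1 human turn
print(ask(fake_llm, "Remember me?", memory))  # now sees 2 human turns
```

With no memory passed in, every call behaves like the first one, which is exactly the "zero-shot" situation described above.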