LangChain
In this section, we present how to integrate LangChain with MindsDB.
LangChain allows users to connect multiple large language models in a logical way using chains, leveraging the strengths of each model and expanding the capabilities of language processing systems. It also enables the use of mechanisms like memories and output parsers, which can improve the robustness of LLM-based applications. By combining chains, memories, and output parsers, LangChain empowers users to create, among other things, so-called “agents”: LLMs placed into a structured loop that lets them reason about which actions to take. These agents can understand and interact with data, making communication with data more efficient and intuitive.
Read on to find out how to use LangChain with MindsDB.
Setup
MindsDB provides the LangChain handler that enables you to use LangChain within MindsDB.
AI Engine
Before creating a model, you need to create an AI engine based on the provided handler.
If you installed MindsDB locally, make sure to install all LangChain dependencies by running `pip install .[langchain]` or by installing them from the requirements.txt file.
You can create a LangChain engine using the command below, providing at least one of the OpenAI or Anthropic API keys:
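A minimal engine creation statement could look like this (the API key value is a placeholder, and the exact key parameter names may vary with the handler version):

```sql
-- Create the LangChain engine from the langchain handler.
-- Provide openai_api_key and/or anthropic_api_key (placeholder values shown).
CREATE ML_ENGINE langchain_engine
FROM langchain
USING
    openai_api_key = 'your-openai-api-key';
```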
The name of the engine (here, `langchain_engine`) should be used as a value for the `engine` parameter in the `USING` clause of the `CREATE MODEL` statement.
AI Model
The `CREATE MODEL` statement is used to create, train, and deploy models within MindsDB.
Currently, this integration supports exposing OpenAI and Anthropic LLMs with standard text completion support. They are then wrapped in a zero-shot ReAct description agent that offers a few third-party tools out of the box, with support for additional ones (like Serper) if an API key is provided. Ongoing memory is also provided.
You can provide other parameters specific to OpenAI and Anthropic models, such as `temperature` or `max_tokens`.
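For instance, a model passing such parameters through the `USING` clause might be created like this (the model name, prompt, and parameter values here are illustrative placeholders):

```sql
-- Sketch: a LangChain-backed model with model-specific parameters.
-- Model name, prompt, and values are placeholders, not prescribed settings.
CREATE MODEL my_langchain_model
PREDICT answer
USING
    engine = 'langchain_engine',
    temperature = 0,
    max_tokens = 100,
    prompt_template = 'Answer the following question: {{question}}';
```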
There are three different tools utilized by this agent:
- MindsDB is the internal MindsDB executor.
- Metadata fetches the metadata information for the available tables.
- Write is able to write agent responses into a MindsDB data source.
Each tool exposes the internal MindsDB executor in a different way to perform its tasks, effectively enabling the agent model to read from (and potentially write to) data sources or models available in the active MindsDB project.
Examples
Let’s create the model that will be used in the following use cases.
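A statement of roughly this shape creates the agent model described below (the `PREDICT` column name and the exact prompt wording are assumptions; only the model name, engine, `prompt_template` parameter, and `input` column are taken from the surrounding text):

```sql
-- Sketch of the tool-based agent model creation.
-- The predicted column name (completion) is an assumption.
CREATE MODEL tool_based_agent
PREDICT completion
USING
    engine = 'langchain_engine',
    prompt_template = 'Answer the user''s question in a helpful way: {{input}}';
```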
Here, we create the `tool_based_agent` model using the LangChain engine, as defined in the `engine` parameter. This model answers users’ questions in a helpful way, as defined in the `prompt_template` parameter, which specifies `input` as the input column when calling the model.
Describing Connected Data Sources
We can ask questions about data sources connected to MindsDB.
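For example, a question about a connected table can be posed by selecting from the model (the question text and output column name are illustrative assumptions):

```sql
-- Ask the agent to describe a connected table.
-- The completion column name is an assumption about the model definition.
SELECT input, completion
FROM tool_based_agent
WHERE input = 'Describe the mysql_demo_db.house_sales table.';
```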
On execution, we get:
To get information about the `mysql_demo_db.house_sales` table, the agent uses the Metadata tool. Then the agent prepares the response.
The Write tool only comes into play when writing to other data sources. By default, the agent is able to write an answer back to the user (through standard output), like in this case.
Analyzing Data
We can ask questions to analyze the available data.
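For instance, a question like the one below could be asked (the exact question wording and output column name are assumptions, chosen to match the scenario described next):

```sql
-- Ask an analytical question about the home_rentals data.
SELECT input, completion
FROM tool_based_agent
WHERE input = 'How many beds do home rentals in mysql_demo_db.home_rentals have on average?';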
On execution, we get:
Here, the model uses the Metadata tool again to fetch the column information. As there is no `beds` column in the `mysql_demo_db.home_rentals` table, it uses the `number_of_rooms` column and writes the following query:
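Based on the description, the agent-generated query would take roughly this shape (a reconstruction, not the verbatim agent output):

```sql
-- Reconstructed from the description: average over number_of_rooms.
SELECT AVG(number_of_rooms)
FROM mysql_demo_db.home_rentals;
```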
This query returns the value of 1.6, which is then used to write an answer.
Retrieving Data
We can ask the model to retrieve specific data.
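A retrieval request might look like this (the question text is a hypothetical example; the output column name is an assumption about the model definition):

```sql
-- Ask the agent to retrieve specific rows from a connected table.
SELECT input, completion
FROM tool_based_agent
WHERE input = 'Show me the number of rooms for 5 rentals from mysql_demo_db.home_rentals.';
```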
On execution, we get:
Here, the model uses the Metadata tool again to fetch information about the table. Then, it creates and executes the following query:
On execution, the model gets this output:
Consequently, it takes the query output and writes an answer.
Inserting Data
We can ask the model to insert data into a table, assuming we have sufficient privileges on that database.
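An insertion request could be phrased like this (the question text is a hypothetical example; only the `local_database` target comes from the surrounding text):

```sql
-- Ask the agent to write data into a connected data source.
SELECT input, completion
FROM tool_based_agent
WHERE input = 'Insert these rentals into a table in the local_database database.';
```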
On execution, we get:
The agent uses the Write tool to `INSERT INTO` the `local_database` database.