LlamaIndex
LlamaIndex is a data framework for LLM applications that lets you work with private or domain-specific data and simplifies the interface between your data source and the LLM.
Developers can use LlamaIndex to build chatbots, private setups, and question answering over documents and web pages. The features implemented for the LlamaIndex integration include support for the web page reader and the database reader.
How to bring LlamaIndex Models to MindsDB
Before creating a model, you can create an ML engine for LlamaIndex using the CREATE ML_ENGINE statement and provide your own OpenAI API key:
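A sketch of that statement, based on MindsDB's documented CREATE ML_ENGINE syntax; the engine name and the placeholder API key are illustrative:

```sql
CREATE ML_ENGINE llama_index
FROM llama_index
USING
    openai_api_key = 'your-openai-api-key';
```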
You can check the available engines with this command:
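For example, using MindsDB's SHOW ML_ENGINES command:

```sql
SHOW ML_ENGINES;
```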
Alternatively, you can use the default llama_index engine, if available. Please note that you still need to provide your own OpenAI API key when creating the model.
Next, we use the CREATE MODEL statement to create the LlamaIndex model in MindsDB:
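A general sketch of the statement, assuming the parameter names documented in the table below; the model, data source, and table names are placeholders:

```sql
CREATE MODEL model_name
FROM datasource
    (SELECT * FROM table_name)
PREDICT column_to_be_predicted
USING
    engine = 'llama_index',
    index_class = 'vector_store_index_name',
    reader = 'reader_name',
    source_url_link = 'source_url_link',
    input_column = 'input_column_name',
    openai_api_key = 'your-openai-api-key';
```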
Where:
| Name | Description |
|---|---|
| engine | The LlamaIndex engine to use. |
| index_class | The type of vector store index to use. |
| reader | The type of reader (either a data reader or a webpage reader). |
| source_url_link | The URL of the webpage to retrieve answers from. |
| input_column | The column that contains the prompt to the model. |
| openai_api_key | Your OpenAI API key, used to gain access to the model. |
Example
Let’s explore a use case where you create a question-answering model that retrieves answers from a webpage. OpenAI is used as the LLM, and BlackRock’s investment webpage serves as the data source.
Use the CREATE MODEL syntax to create a model with LlamaIndex.
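A sketch of the statement for this use case; the model name, file table name, BlackRock URL, and the index_class and reader values shown are illustrative assumptions:

```sql
CREATE MODEL qa_blackrock
FROM files
    (SELECT * FROM about_blackrock)
PREDICT answer
USING
    engine = 'llama_index',
    index_class = 'GPTVectorStoreIndex',
    reader = 'DFReader',
    source_url_link = 'https://www.blackrock.com/za/individual/about-us',
    input_column = 'question',
    openai_api_key = 'your-openai-api-key';
```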
We use a table that contains questions to train the model. This data can be downloaded here and uploaded as a file to MindsDB. If you want to know how to upload a file, you can check out the documentation here.
Please visit our docs on the CREATE MODEL
statement to learn more.
You can verify that the model has completed training successfully by using the DESCRIBE syntax.
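For example, assuming the model is named qa_blackrock:

```sql
DESCRIBE qa_blackrock;
```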
The model can be queried for a single prediction by providing it with a question.
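A sketch of such a query; the model name and the question text are illustrative:

```sql
SELECT question, answer
FROM qa_blackrock
WHERE question = 'What is the best long term investment plan that BlackRock offers?';
```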
On execution, the model returns the answer it retrieved from the webpage.
You can also query for batch predictions.
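A sketch of a batch query that joins the questions table with the model; the table and model names are illustrative:

```sql
SELECT a.question, b.answer
FROM files.about_blackrock AS a
JOIN qa_blackrock AS b;
```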
On execution, the model returns an answer for every question in the table.
The LlamaIndex integration makes it possible to use OpenAI to seamlessly query a webpage and obtain answers to questions.