In this section, you’ll learn how to create new machine learning (ML) handlers within MindsDB.
Prerequisite
You should have the latest staging version of the MindsDB repository installed locally. Follow this guide to learn how to install MindsDB for development.
ML handlers act as a bridge to any ML framework. You use ML handlers to create ML engines with the CREATE ML_ENGINE command, which lets you expose ML models from any supported ML engine as AI tables.
Database Handlers
To learn more about handlers and how to implement a database handler, visit our doc page here.
You can create your own ML handler within MindsDB by inheriting from the BaseMLEngine class.
By providing the implementation for some or all of the methods contained in the BaseMLEngine class, you can connect with the machine learning library or framework of your choice.
Apart from the __init__() method, there are five methods, of which two must be implemented. We recommend checking actual examples in the codebase to get an idea of what goes into each of these methods, as they can change a bit depending on the nature of the system being integrated.
Let’s review the purpose of each method.
| Method | Purpose |
| --- | --- |
| create() | It creates a model inside the engine registry. |
| predict() | It calls a model and returns prediction data. |
| update() | Optional. It updates an existing model without resetting its internal structure. |
| describe() | Optional. It provides global model insights. |
| create_engine() | Optional. It connects with external sources, such as a REST API. |
Authors can opt to add private methods, new files and folders, or any combination of these to structure all the necessary work that enables the core methods to work as intended.
Other Common Methods
Under the mindsdb.integrations.libs.utils library, contributors can find various methods that may be useful while implementing new handlers.
Also, there is a wrapper class for BaseMLEngine instances called BaseMLEngineExec. It is automatically deployed to take care of modifying the data responses into something that can be used alongside data handlers.
Here are the methods that must be implemented while inheriting from the BaseMLEngine class:
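Here is a rough sketch of their shape, inferred from the descriptions above; the exact signatures are defined on BaseMLEngine itself, so verify them against your local MindsDB checkout before implementing anything.

```python
# Approximate signatures of the two required methods, shown on a stub class for
# illustration only -- verify the real signatures on BaseMLEngine in your checkout.
from typing import Dict, Optional

import pandas as pd


class MyMLEngine:  # a real handler subclasses BaseMLEngine instead
    def create(self, target: str, df: Optional[pd.DataFrame] = None,
               args: Optional[Dict] = None) -> None:
        """Create (and usually train) a model inside the engine registry."""
        raise NotImplementedError

    def predict(self, df: pd.DataFrame,
                args: Optional[Dict] = None) -> pd.DataFrame:
        """Call the model and return a dataframe with prediction data."""
        raise NotImplementedError
```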
And here are the optional methods that you can implement alongside the mandatory ones if your ML framework allows it:
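Again, this is a non-authoritative sketch based on the table above; check BaseMLEngine in your checkout for the exact signatures.

```python
# Approximate signatures of the optional methods -- placeholders only.
from typing import Dict, Optional

import pandas as pd


class MyMLEngine:  # continued illustration; a real handler subclasses BaseMLEngine
    def update(self, df: Optional[pd.DataFrame] = None,
               args: Optional[Dict] = None) -> None:
        """Update an existing model without resetting its internal structure."""
        raise NotImplementedError

    def describe(self, attribute: Optional[str] = None) -> pd.DataFrame:
        """Return global model insights, e.g. metadata or feature importances."""
        raise NotImplementedError

    def create_engine(self, connection_args: Dict) -> None:
        """Connect with an external source, such as a REST API."""
        raise NotImplementedError
```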
MindsDB has recently decoupled some modules from its AutoML package in order to leverage them in integrations with other ML engines. The three modules are as follows:
The type_infer module implements automated type inference for any dataset.
Below is the description of the input and output of this module.
Input: tabular dataset.
Output: best guesses of what type of data each column contains.
The dataprep_ml module provides data preparation utilities, such as data cleaning, analysis, and splitting. These utilities include column-wise cleaners, column-wise missing-value imputers, and data splitters (simple or stratified train-val-test splits).
Below is the description of the input and output of this module.
Input: tabular dataset.
Output: cleaned dataset, plus insights useful for data analysis and model building.
The mindsdb_evaluator module provides utilities for evaluating the accuracy and calibration of ML models.
Below is the description of the input and output of this module.
Input: model predictions and the input data used to generate these predictions, including corresponding ground truth values of the column to predict.
Output: accuracy metrics that evaluate prediction accuracy and calibration metrics that check whether model-emitted probabilities are calibrated.
We recommend that new contributors use the type_infer and dataprep_ml modules when writing ML handlers to avoid reimplementing thin AutoML layers over and over again; instead, focus on mapping input data and user parameters to the underlying framework’s API.
For now, using the mindsdb_evaluator module is not required, but it will be in the short to medium term, so it’s important to be aware of it while writing a new integration.
Example
Let’s say you want to write an integration for TPOT. Its high-level API exposes classes that are either for classification or regression. But as a handler designer, you need to ensure that arbitrary ML tasks are dispatched properly to each class (i.e., not using a regressor for a classification problem and vice versa). First, type_infer can help you by estimating the data type of the target variable (so you immediately know what class to use). Additionally, to quickly get a stratified train-test split, you can leverage dataprep_ml splitters and continue to focus on the actual usage of TPOT for the training and inference logic.
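Below is a hedged sketch of that dispatch step. The type_infer calls shown (infer_types and the dtype constants) reflect one version of its API and should be treated as assumptions; the dataprep_ml splitting and the actual TPOT training are left out as placeholders.

```python
# Hedged sketch: pick a TPOT classifier or regressor based on the inferred
# target type. The type_infer entry points used here are assumptions -- confirm
# them against the installed type_infer version.
import pandas as pd
from tpot import TPOTClassifier, TPOTRegressor
from type_infer.api import infer_types   # assumed import path
from type_infer.dtype import dtype       # assumed dtype constants


def build_tpot_estimator(df: pd.DataFrame, target: str):
    """Return a TPOT estimator suited to the target column's inferred type."""
    inferred = infer_types(df, pct_invalid=2)   # assumed signature
    target_type = inferred.dtypes[target]       # assumed attribute

    if target_type in (dtype.integer, dtype.float, dtype.quantity):
        return TPOTRegressor(generations=5, population_size=20)
    # Binary, categorical, and similar targets are treated as classification.
    return TPOTClassifier(generations=5, population_size=20)
```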
We would appreciate your feedback regarding usage & feature roadmap for the above modules, as they are quite new.
Step 1: Set up and run MindsDB locally
Step 2: Write a (failing) test for your new handler
Check that you can run the existing handler tests with python -m pytest tests/unit/ml_handlers/. If you get a ModuleNotFoundError, try adding an __init__.py file to any subdirectory that doesn’t have one.
Copy the simple tests from a relevant handler. For regular data, use the Ludwig handler; for time series data, use the StatsForecast handler. (A minimal sketch of what such a test might look like appears at the end of this step.)
Change the SQL query to reference your handler. Specifically, set USING engine={HandlerName}.
Run your new test. Please note that it should fail as you haven’t yet added your handler. The exception should be Can't find integration_record for handler ...
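For orientation, a stripped-down version of such a test might look like the sketch below. It mimics the pattern used by existing ML handler tests (a class deriving from BaseExecutorTest with run_sql and wait_predictor helpers and a mocked data handler), but treat every helper name here as an assumption and copy the exact scaffolding from a real test file.

```python
# Hedged sketch of an ML handler unit test, modeled loosely on the tests under
# tests/unit/ml_handlers/. Helper names (set_handler, run_sql, wait_predictor)
# are assumptions -- verify them against an existing test before reusing this.
from unittest.mock import patch

import pandas as pd

from tests.unit.executor_test_base import BaseExecutorTest


class TestMyHandler(BaseExecutorTest):
    @patch("mindsdb.integrations.handlers.postgres_handler.Handler")
    def test_simple_prediction(self, mock_handler):
        # Expose a small dataframe through a mocked data handler called 'pg'.
        df = pd.DataFrame({"x": list(range(100)), "y": [2 * v for v in range(100)]})
        self.set_handler(mock_handler, name="pg", tables={"df": df})

        self.run_sql("CREATE DATABASE proj")
        self.run_sql(
            """
            CREATE MODEL proj.my_model
            FROM pg (SELECT * FROM df)
            PREDICT y
            USING engine = 'my_handler'
            """
        )
        self.wait_predictor("proj", "my_model")

        # Join the source data against the model and check predictions come back.
        result = self.run_sql("SELECT m.y FROM pg.df AS t JOIN proj.my_model AS m")
        assert len(result) == len(df)
```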
Step 3: Add your handler to the source code
Create a new directory in mindsdb/integrations/handlers/. You must name the new directory {HandlerName}_handler/.
Copy all the .py files from the StatsForecast handler folder. These are: __about__.py, __init__.py, setup.py, and statsforecast_handler.py.
Change the contents of the .py files to match your new handler. Also, change the name of the statsforecast_handler.py file to match your handler.
Modify the requirements.txt file to install your handler’s dependencies. You may get conflicts with other packages like Lightwood, but you can ignore them for now.
Create a new blank class for your handler in the {HandlerName}_handler.py file. Like other handlers, this should be a subclass of the BaseMLEngine class (see the skeleton sketch at the end of this step).
Add your new handler class to the testing DB. In the tests/unit/executor_test_base.py file, starting at line 91, you can see how other handlers are added with db.session.add(...). Copy that and modify it to add your handler. Make sure to add your handler before Lightwood; otherwise, the CI will break.
Run your new test. Please note that it should still fail but with a different exception message.
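For reference, the blank class from this step might look roughly like the sketch below. The import path and the name attribute follow the pattern of existing handlers, but mirror a real handler such as statsforecast_handler.py for the exact structure your MindsDB version expects.

```python
# Minimal, hedged skeleton for mindsdb/integrations/handlers/my_handler/my_handler.py.
# Mirror an existing handler (e.g. statsforecast_handler.py) for the exact
# imports and attributes expected by your MindsDB version.
from mindsdb.integrations.libs.base import BaseMLEngine


class MyHandler(BaseMLEngine):
    """Integration with a hypothetical external ML framework."""

    name = "my_handler"  # the engine name referenced via USING engine='my_handler'

    def create(self, target, df=None, args=None):
        # Implemented in the next step.
        raise NotImplementedError()

    def predict(self, df, args=None):
        # Implemented in the next step.
        raise NotImplementedError()
```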
Step 4: Modify the handler source code until your test passes
Define a create() method that deals with the model setup arguments. This will add your handler to the models table. Depending on the framework, you may also train the model here using the df argument. (A combined sketch of create() and predict() appears at the end of this step.)
Save relevant arguments/trained models at the end of your create method. This allows them to be accessed later. Use the engine_storage attributes; you can find examples in other handlers’ folders.
Define a predict() method that makes model predictions. This method must return a dataframe whose format matches the input, except with an added column containing your model’s predictions of the target. The input df is a subset of the original df, with the rows determined by the conditions in the predict SQL query.
Don’t debug the create() and predict() methods with print() statements because they run inside a subthread. Instead, write relevant info to disk.
Once your first test passes, add new tests for any important cases. You can also add tests for any helper functions you write.
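To make the create() and predict() steps above concrete, here is a hedged sketch that extends the skeleton from the previous step with a simple scikit-learn-style model. It persists per-model artifacts via self.model_storage, which existing handlers typically use alongside the engine_storage attribute mentioned above; the storage method names (json_set/json_get, file_set/file_get) mirror patterns seen in other handlers, so verify them against a real handler’s folder. Everything framework-specific here is a placeholder.

```python
# Hedged sketch of create() / predict() for a hypothetical handler wrapping a
# scikit-learn regressor. The storage method names mirror patterns used by
# existing handlers -- confirm them in mindsdb/integrations/handlers/ first.
import pickle

from sklearn.linear_model import LinearRegression

from mindsdb.integrations.libs.base import BaseMLEngine


class MyHandler(BaseMLEngine):
    name = "my_handler"

    def create(self, target, df=None, args=None):
        args = args or {}
        using_args = args.get("using", {})  # parameters passed via USING ...

        feature_cols = [c for c in df.columns if c != target]
        model = LinearRegression().fit(df[feature_cols], df[target])

        # Persist everything predict() will need later.
        self.model_storage.json_set(
            "args", {"target": target, "features": feature_cols, **using_args}
        )
        self.model_storage.file_set("model.pkl", pickle.dumps(model))

    def predict(self, df, args=None):
        saved = self.model_storage.json_get("args")
        model = pickle.loads(self.model_storage.file_get("model.pkl"))

        result = df.copy()
        # Add a column holding predictions for the target.
        result[saved["target"]] = model.predict(df[saved["features"]])
        return result
```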
Step 5: QA your handler locally
Launch the MindsDB server locally with python -m mindsdb. Again, any issues will appear in the terminal output.
Check that your handler has been added to the local server database. You can view the list of handlers with SELECT * from information_schema.handlers.
Run the relevant tutorial from the panel on the right side. For regular data, this is Predict Home Rental Prices; for time series data, this is Forecast Quarterly House Sales. Specify USING ENGINE={your_handler} while creating a model.
Don’t debug the create() and predict() methods with print() statements because they run inside a subthread. Instead, write relevant info to disk.
You should get sensible results if your handler has been well-implemented. Make sure you try the predict step with a range of parameters.
Step 6: Open a pull request
You need to fork the MindsDB repository. Follow this guide to start a PR.
If relevant, add your tests and new dependencies to the CI config. This is at .github/workflows/mindsdb.yml.
Please note that pytest is the recommended testing package. Use pytest to confirm your ML handler implementation is correct.
Templates for Unit Tests
If you implement a time-series ML handler, create your unit tests following the structure of the StatsForecast unit tests.
If you implement an NLP ML handler, create your unit tests following the structure of the Hugging Face unit tests.
To see some ML handlers that are currently in use, we encourage you to check out the following ML handlers inside the MindsDB repository:
And here are all the handlers available in the MindsDB repository.