MLflow allows you to create, train, and serve machine learning models, alongside other features such as organizing experiments and tracking metrics.
Here are the prerequisites for using MLflow-served models in MindsDB:
Train a model via a wrapper class that inherits from the `mlflow.pyfunc.PythonModel` class. It should expose a `predict()` method that returns the predicted output for some input data when called.
Please ensure that the Python version specified for the Conda environment matches the one used to train the model.
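As a minimal sketch, assuming a scikit-learn classifier trained on the Iris dataset (the estimator, output path, and Conda specification below are illustrative):

```python
import mlflow.pyfunc
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


# Wrapper class that MLflow can serve: predict() returns predictions for the input rows
class ModelWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, model):
        self.model = model

    def predict(self, context, model_input):
        # model_input arrives as a pandas DataFrame when served via the REST API
        return self.model.predict(model_input)


# Train any estimator (illustrative data), wrap it, and save it as a pyfunc model
X_train, y_train = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=200).fit(X_train, y_train)

conda_env = {
    "name": "mlflow-env",
    "channels": ["defaults"],
    # The Python version here must match the one used to train the model
    "dependencies": ["python=3.8", "scikit-learn", {"pip": ["mlflow"]}],
}

mlflow.pyfunc.save_model(
    path="model_folder",  # illustrative output path, reused when serving below
    python_model=ModelWrapper(clf),
    conda_env=conda_env,
)
```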
Start the MLflow server:
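A typical invocation looks like this; the backend store, artifact root, host, and port are illustrative and should match your setup:

```bash
mlflow server \
    --backend-store-uri sqlite:///mlflow.db \
    --default-artifact-root ./artifacts \
    --host 0.0.0.0 \
    --port 5001
```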
Serve the trained model:
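For example, pointing at the model saved earlier (the path and port are illustrative). This exposes a REST endpoint at `http://localhost:5000/invocations`:

```bash
mlflow models serve --model-uri ./model_folder --port 5000
```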
Let’s create a model that registers the MLflow-served model as an AI Table:
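The statement below is a sketch rather than the exact syntax for every MindsDB version; it assumes the MLflow engine is available and that the parameter values point at the tracking server and serving endpoint started above. The model name, target, and parameter values are illustrative.

```sql
CREATE MODEL mindsdb.mlflow_model                      -- illustrative model name
PREDICT target                                         -- column the served model returns
USING
    engine = 'mlflow',
    model_name = 'model_name',                         -- name of the model registered in MLflow
    mlflow_server_url = 'http://0.0.0.0:5001/',        -- tracking server started above
    mlflow_server_path = 'sqlite:///mlflow.db',        -- backend store used by the server
    predict_url = 'http://localhost:5000/invocations'; -- serving endpoint started above
```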
Here is how to check the model's status:
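For example, assuming the model name used above:

```sql
DESCRIBE mlflow_model;
```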
Once the status is `complete`, we can query for predictions.
One way is to query for a single prediction using synthetic data in the `WHERE` clause.
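A sketch, assuming the served model takes a single `text` column and returns a `target` column (both column names are illustrative):

```sql
SELECT target
FROM mindsdb.mlflow_model
WHERE text = 'This is a great tutorial!';
```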
Another way is to query for batch predictions by joining the model with the data table.
Here, the data table comes from the `files` integration. It is joined with the model, and predictions are made for all the records at once.
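A sketch, assuming a file uploaded as `files.my_table` with a `text` column matching the model's input (table and column names are illustrative):

```sql
SELECT t.text, m.target
FROM files.my_table AS t
JOIN mindsdb.mlflow_model AS m;
```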
Get More Insights
Check out the article on How to bring your own machine learning model to databases by Patricio Cerda Mardini to learn more.