HERE MLflow Plugin

MLflow is a popular open source platform for managing Machine Learning (ML) development. An MLflow plugin is provided to manage the ML lifecycle on the platform. This plugin allows users to manage their ML experiments on the platform and share them with other users of the platform, or make them available on the HERE Marketplace. Data scientists can use any ML framework on any ML cloud platform for training and can choose to manage the ML artifacts on the platform. The MLflow plugin can be used while training the ML model, or users can upload an already trained model onto the platform.

MLflow plugin features include:

  • Tracking experiments to record and compare parameters and results
  • Packaging ML code in a reusable, reproducible form to share with other data scientists
  • Providing a central model store, addressed through a catalog HRN, to collaboratively manage the full lifecycle of an MLflow model, including model versioning, stage transitions, and annotations
  • Building a Docker image locally to expose the model as a service for inference. Note that in this release this feature is available only for testing the inference service locally.



To install the module, use the following command:

pip install --extra-index-url <repository-url> here-mlflow-plugin==1.0

This command installs the MLflow Plugin module (version 1.0) in the current environment.


  • If you encounter errors related to the GDAL or geopandas dependency on Windows when installing the package, follow these steps.
  • If you encounter errors related to Microsoft Visual C++ build tools, follow these steps.

Developer Flow

  • Create a catalog for storing all the information:
here_mlflow_plugin_setup -c <catalog_id>
  • Set the tracking URI to point at this catalog. Specify the catalog HRN, not the catalog ID.

For Linux/MacOS:

export MLFLOW_TRACKING_URI=here+mlflow://catalog/v1/<catalog_hrn>

For Windows:

set MLFLOW_TRACKING_URI=here+mlflow://catalog/v1/<catalog_hrn>
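Equivalently, the tracking URI can be set from Python before any MLflow calls are made. This is only an illustrative sketch: the helper function is our own, and the catalog HRN below is a made-up placeholder to be replaced with the HRN of the catalog created above.

```python
import os

def here_tracking_uri(catalog_hrn: str) -> str:
    # Build the plugin's tracking URI around a catalog HRN.
    return f"here+mlflow://catalog/v1/{catalog_hrn}"

# Hypothetical HRN; substitute your own catalog HRN.
uri = here_tracking_uri("hrn:here:data::realm:my-mlflow-catalog")
os.environ["MLFLOW_TRACKING_URI"] = uri
```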
  • Download the sample notebooks.

  • Start the training or upload an existing trained model on the platform. For more information, see the example notebooks.

  • Launch the MLflow UI locally to visualize and compare all the logged information. Extract the run_id by selecting the given experiment name.

    mlflow ui --backend-store-uri here+mlflow://catalog/v1/<catalog_hrn> --default-artifact-root here+mlflow://catalog/v1/<catalog_hrn>
  • Use this run_id to build the Docker image locally to expose the model as a service.

here_mlflow_plugin_build_docker -m <MODEL-URI> --no_java <False/True> -n <IMAGE-NAME>

There are three parameters for this script:

| Parameter | Description |
| --- | --- |
| -m / --model | URI of the model for which the Docker image is to be built |
| -n / --name | Name given to the image; the default value is mlflow-pyfunc-servable |
| -nj / --no_java | If True, Java is not installed in the Docker image; the default value is False |


here_mlflow_plugin_build_docker -m "here+mlflow://catalog/v1/<catalog_hrn>/0/6cbba15e-fe0c-4ab7-86c1-37644a7afc94/models" --no_java True -n "test-alpine"
  • Run the Docker container locally to access the service:
docker run -p <PORT>:8080 <IMAGE_ID>
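Once the container is running, the MLflow scoring server inside it listens on the /invocations endpoint. The sketch below builds a request body with hypothetical feature names; the exact payload schema depends on the MLflow version baked into the image (MLflow 1.x pyfunc servers accept the pandas-split format shown here).

```python
import json

# Hypothetical input: two rows with two made-up feature columns.
payload = {
    "columns": ["feature_a", "feature_b"],
    "data": [[1.0, 2.0], [3.0, 4.0]],
}
body = json.dumps(payload)

# POST the body with any HTTP client, for example:
#   curl -X POST http://localhost:<PORT>/invocations \
#     -H "Content-Type: application/json; format=pandas-split" \
#     -d "$BODY"
```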


  • Delete a catalog:

    here_mlflow_plugin_setup -d <catalog_hrn>
  • Layer Details:

| Layer name | Layer type | Content type | Attribute 1 | Attribute 2 | Attribute 3 | Attribute 4 |
| --- | --- | --- | --- | --- | --- | --- |
| tracking-experiment | index | application/json | ingestion_time: timewindow (10 min) | experiment_id: string | experiment_name: string | - |
| tracking-run | index | application/json | start_time: timewindow (10 min) | experiment_id: string | run_id: string | - |
| artifact-metadata | index | application/json | ingestion_time: timewindow (10 min) | run_id: string | - | - |
| artifact-data | versioned | application/octet-stream | Partition type: Generic | - | - | - |
| model-metadata | index | application/json | ingestion_time: timewindow (10 min) | model_name: string | - | - |
| model-version-metadata | index | application/json | ingestion_time: timewindow (10 min) | model_name: string | version: int | run_id: string |
