Custom Integrations

Robust Intelligence provides a framework to load data from arbitrary sources. This is helpful for data sources that Robust Intelligence does not support natively.

Configuring Authentication for Custom Integrations

  1. Follow steps 1 through 5 of Adding an integration through the Robust Intelligence web UI.

  2. From the Integration Type drop-down, select Custom.

  3. Type a name for the integration in Name.

  4. Type the configuration information for the custom integration as key/value pairs and select a sensitivity level for each pair (an example follows this list).

    Sensitivity levels are Not sensitive, Workspace level, and Members level.
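
For example, a loader that reads from a private Amazon S3 bucket might store pairs such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, typically at the Workspace or Members sensitivity level. These key names are illustrative; use whatever keys your data loader expects.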

Using Custom Integrations

Robust Intelligence can load data from arbitrary sources defined in a Python file for use in a Stress Test or Continuous Test. Custom integrations can be used with the [custom loader](../../testing_and_monitoring/monitoring_models/configuring_scheduled_ct.md#custom-loaders) feature to provide authentication credentials to the loader as environment variables.
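
As a minimal sketch, assuming the custom integration was configured with the illustrative keys AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and that those key/value pairs are exposed to the loader as environment variables under the same names, a loader can read credentials from the environment instead of hard-coding them:

import os

import boto3

# Assumption: the integration's key/value pairs are available to the loader
# process as environment variables with the same names as the configured keys.
ACCESS_KEY = os.environ["AWS_ACCESS_KEY_ID"]
SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]

s3 = boto3.resource(
    "s3",
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_ACCESS_KEY,
)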

Example stress test with a custom integration

The following code specifies a custom data dictionary and the Python SDK call that starts the stress test.

custom_config = {
    "run_name": "Weather Predictor Test",
    "data_info": {
        "type": "split",
        "ref_data_info": {
            "type": "custom",
            "load_path": "s3://rime-datasets/custom-loader-weather/custom_weather_loader.py",
            "load_func_name": "get_weather_data",
            "loader_kwargs_json": '{"start_time": 0, "end_time": 1758608114}',
        },
        "eval_data_info": {
            "type": "custom",
            "load_path": "s3://rime-datasets/custom-loader-weather/custom_weather_loader.py",
            "load_func_name": "get_weather_data",
            "loader_kwargs_json": '{"start_time": 0, "end_time": 1758608114}',
        },
    }
}

job = client.start_stress_test(
    test_run_config=custom_config,
)
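
Depending on the SDK version, the returned job object can usually be polled or blocked on until the run finishes; the call below is a typical pattern but should be verified against your SDK reference.

# Block until the stress test job completes, printing progress along the way.
# Verify the method name and parameters for your rime_sdk version.
job.get_status(verbose=True, wait_until_finish=True)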

The custom_weather_loader.py code specifies the actual data loading logic to use. This example parses data from an Amazon S3 bucket, but a custom data loader can be written to handle data from any source.

"""Data loader file for weather file."""
import boto3
from datetime import datetime
import pandas as pd
​
BUCKET_NAME = "rime-datasets"
ACCESS_KEY = "*access key*"
SECRET_ACCESS_KEY = "*secret key*"
​
​
def get_weather_data(start_time: int, end_time: int) -> pd.DataFrame:
    start_time = datetime.fromtimestamp(start_time)
    end_time = datetime.fromtimestamp(end_time)
​
    master_df = pd.DataFrame()
    s3 = boto3.resource('s3', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_ACCESS_KEY)
    my_bucket = s3.Bucket(BUCKET_NAME)
    for object_summary in my_bucket.objects.filter(Prefix="custom-loader-weather/"):
        if ".csv" in object_summary.key:
            date_str = object_summary.key.split("/")[1].replace(".csv", "")
            date_str = date_str.replace("day_", "")
            file_time = datetime.strptime(date_str, "%Y-%m-%d")
            if start_time <= file_time <= end_time:
                obj = s3.Object(BUCKET_NAME, object_summary.key)
                curr_df = pd.read_csv(obj.get()["Body"])
                master_df = pd.concat([master_df, curr_df], ignore_index=True)
    return master_df
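
Because get_weather_data is ordinary Python, it can be sanity-checked locally before the file is uploaded to S3. The arguments below are arbitrary illustrations:

# Local check, run outside Robust Intelligence; the timestamps are placeholders.
df = get_weather_data(start_time=0, end_time=1758608114)
print(df.shape)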

Example continuous test with a custom integration

Continuous tests rely on an established reference data source and do not need to specify one in the data dictionary. The contents of custom_weather_loader.py do not change, but the custom data dictionary is different.

incremental_config = {
    "eval_data_info": {
        "type": "custom",
        "load_path": "s3://rime-datasets/custom-loader-weather/custom_weather_loader.py",
        "load_func_name": "get_weather_data",
        "loader_kwargs_json": '{"start_time": 0, "end_time": 1758608114}',
    }
}
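
The dictionary is then passed as the test run configuration when the continuous test run is started. The exact SDK call varies by version; the sketch below assumes a start_continuous_test method on the client and should be checked against your SDK reference.

# Hypothetical sketch: verify the method name and signature for your SDK version.
ct_job = client.start_continuous_test(
    test_run_config=incremental_config,
)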