Model Configuration
Configure an NLP model source by specifying a mapping under the model_info argument in the RIME JSON configuration file. Models can be defined via a custom model.py file or through a native integration with Hugging Face.
Template
{
    "model_info": {
        "path": "/path/to/model.py"   (REQUIRED)
    },
    ...
}
Arguments
path: string, required
The path to the Python model file. For instructions on how to create this Python model file, see Specify a Model.
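For reference, here is a minimal sketch of what such a model.py file might look like for a text classification task. The entry-point name (predict_dicts), the "text" input field, and the example model name are all assumptions made for illustration; see Specify a Model for the exact interface RIME requires.

# model.py -- illustrative sketch only. The entry-point name and signature
# (predict_dicts) and the "text" field are assumptions for this example;
# see "Specify a Model" for the exact interface RIME expects.
from typing import List

from transformers import pipeline

# Load the classifier once at import time so repeated prediction calls are cheap.
_classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example model
    top_k=None,  # return scores for every label, not just the top one
)

def predict_dicts(batch: List[dict]) -> List[List[float]]:
    """Return per-label probabilities for each input record's "text" field."""
    texts = [record["text"] for record in batch]
    outputs = _classifier(texts)
    # Each output is a list of {"label": ..., "score": ...} dicts; sort by
    # label so the probability ordering is stable across calls.
    return [
        [entry["score"] for entry in sorted(result, key=lambda e: e["label"])]
        for result in outputs
    ]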
Hugging Face Model
Note: this is only supported for the Text Classification and Natural Language Inference tasks.
{
    "model_info": {
        "type": "huggingface_classification",   (REQUIRED)
        "model_uri": "path",                    (REQUIRED)
        "tokenizer_uri": null,
        "model_max_length": null
    },
    ...
}
Arguments
model_uri: string, required
The pretrained model name or path used to load a pretrained Hugging Face model from disk or from the model hub.
tokenizer_uri: string or null, default = null
The pretrained tokenizer name or path used to load the tokenizer from disk or from the model hub. If null, RIME defaults to loading from the provided model_uri.
model_max_length: int or null, default = null
The maximum sequence length (in tokens) supported by the model. If null, RIME infers the maximum length from the pretrained model and tokenizer.
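To make the fallback behavior concrete, the sketch below shows how these three arguments map onto standard transformers loading calls. This illustrates the documented defaulting logic using public Hugging Face APIs; it is not RIME's internal implementation.

# Illustrative sketch of the documented fallback logic -- not RIME's
# internal implementation.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def load_hf_classifier(model_uri, tokenizer_uri=None, model_max_length=None):
    # tokenizer_uri falls back to model_uri when null/None.
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_uri or model_uri)
    model = AutoModelForSequenceClassification.from_pretrained(model_uri)
    if model_max_length is not None:
        # Explicit override of the maximum sequence length (in tokens).
        tokenizer.model_max_length = model_max_length
    # Otherwise the maximum length inferred from the pretrained
    # tokenizer's own model_max_length is used.
    return model, tokenizer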