Model Configuration
An NLP model source is configured by specifying a mapping under the model_info argument in the RIME JSON configuration file. Models can be defined either through a custom model.py file or through a native integration with Huggingface.
Template
{
    "model_info": {
        "path": "/path/to/model.py"   (REQUIRED)
    },
    ...
}
Arguments
path: string, required
Path to the Python model file. For instructions on how to create this Python model file, please see Specify a Model.
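
For example, a filled-in configuration for a custom model file might look like the following; the path shown is a placeholder for wherever the model file lives on disk:

{
    "model_info": {
        "path": "/home/ubuntu/my_project/my_model.py"
    }
}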
Huggingface Classification Model
{
    "model_info": {
        "type": "huggingface_classification",   (REQUIRED)
        "model_uri": "path",   (REQUIRED)
        "tokenizer_uri": null,
        "model_max_length": null
    },
    ...
}
Arguments
model_uri: string, required
The pretrained model name or path used to load a pretrained Huggingface model from disk or from the model hub.

tokenizer_uri: string or null, default = null
The pretrained tokenizer name or path used to load the tokenizer from disk or from the model hub. If null, RIME defaults to loading the tokenizer from the provided model_uri.

model_max_length: int or null, default = null
The maximum sequence length (in tokens) supported by the model. If null, RIME infers the maximum length from the pretrained model and tokenizer.
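
As a concrete illustration, the configuration below loads a sentiment classifier from the Huggingface model hub. The checkpoint name (distilbert-base-uncased-finetuned-sst-2-english) is only an example; any compatible classification checkpoint can be substituted:

{
    "model_info": {
        "type": "huggingface_classification",
        "model_uri": "distilbert-base-uncased-finetuned-sst-2-english",
        "tokenizer_uri": null,
        "model_max_length": null
    }
}

The defaulting behavior described above can be understood through the Huggingface transformers API itself. The following sketch shows one plausible way the two null defaults resolve; it is an illustration of the semantics, not RIME's internal implementation:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_uri = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer_uri = None       # corresponds to "tokenizer_uri": null
model_max_length = None    # corresponds to "model_max_length": null

# When no tokenizer is specified, fall back to loading it from the model URI.
tokenizer = AutoTokenizer.from_pretrained(tokenizer_uri or model_uri)
model = AutoModelForSequenceClassification.from_pretrained(model_uri)

# When no maximum length is specified, infer it from the pretrained tokenizer.
if model_max_length is None:
    model_max_length = tokenizer.model_max_length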