Export the model as an artifact for inference (e.g. via TF-Serving).
Source: R/model-persistence.R
This method lets you export a model to a lightweight SavedModel artifact that contains the model's forward pass only (its call() method) and can be served via, e.g., TF-Serving. The forward pass is registered under the name serve() (see example below).
The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.
Note: This feature is currently supported only with the TensorFlow, JAX, and Torch backends.
Usage
# S3 method for class 'keras.src.models.model.Model'
export_savedmodel(
  object,
  export_dir_base,
  ...,
  format = "tf_saved_model",
  verbose = TRUE,
  input_signature = NULL
)
Arguments
- object
A keras model.
- export_dir_base
string, file path where to save the artifact.
- ...
Additional keyword arguments, specific to the JAX backend and format = "tf_saved_model":
  - is_static: Optional bool. Indicates whether fn is static. Set to FALSE if fn involves state updates (e.g., RNG seeds and counters).
  - jax2tf_kwargs: Optional dict. Arguments for jax2tf.convert. See the documentation for jax2tf.convert. If native_serialization and polymorphic_shapes are not provided, they will be automatically computed.
  See the sketch after this argument list for an illustration.
- format
string. The export format. Supported values: "tf_saved_model" and "onnx". Defaults to "tf_saved_model".
- verbose
whether to print all the variables of the exported model.
- input_signature
Optional. Specifies the shape and dtype of the model inputs. Can be a structure of keras.InputSpec, tf.TensorSpec, backend.KerasTensor, or backend tensor. If not provided, it will be automatically computed. Defaults to NULL.
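A minimal sketch of how the JAX-specific arguments above might be passed. The backend selection, model, and export path are illustrative assumptions; is_static and jax2tf_kwargs are only honoured when exporting from the JAX backend.

library(keras3)
use_backend("jax")  # must be called before the backend is initialized

model <- keras_model_sequential(input_shape = 8) |>
  layer_dense(16, activation = "relu") |>
  layer_dense(1)

model |> export_savedmodel(
  "path/to/jax_export",
  is_static = TRUE,                                  # no state updates in the forward pass
  jax2tf_kwargs = list(native_serialization = TRUE)  # forwarded to jax2tf.convert
)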
Value
This is called primarily for the side effect of exporting object. The first argument, object, is also returned, invisibly, to enable usage with the pipe.
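A short illustration (the path and the follow-up call are arbitrary) of how the invisible return lets the export call sit in the middle of a pipeline:

model |>
  export_savedmodel("path/to/location") |>  # returns `model` invisibly
  summary()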
Examples
# Create the artifact
model |> tensorflow::export_savedmodel("path/to/location")
# Later, in a different process/environment...
library(tensorflow)
reloaded_artifact <- tf$saved_model$load("path/to/location")
predictions <- reloaded_artifact$serve(input_data)
# See tfdeploy::serve_savedmodel() for serving a model over a local web API.
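If you prefer to pin the serving signature rather than let it be inferred, input_signature can be supplied explicitly. A sketch assuming a model with a single input of 8 features; the shape and dtype are illustrative:

library(tensorflow)
spec <- tf$TensorSpec(shape = shape(NULL, 8), dtype = "float32")  # unknown batch size, 8 features
model |> export_savedmodel("path/to/location", input_signature = list(spec))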
Here's how to export the model to ONNX for inference.
# Export the model as an ONNX artifact
model |> export_savedmodel("path/to/location", format = "onnx")
# Load the artifact in a different process/environment
onnxruntime <- reticulate::import("onnxruntime")
ort_session <- onnxruntime$InferenceSession("path/to/location")
input_data <- list(....)
names(input_data) <- sapply(ort_session$get_inputs(), `[[`, "name")
predictions <- ort_session$run(NULL, input_data)
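A sketch of how the input_data placeholder above might be filled: onnxruntime expects a named list of arrays keyed by the graph's input names, and the sample shape and float32 cast here are illustrative assumptions.

np <- reticulate::import("numpy")
x <- matrix(rnorm(3 * 8), nrow = 3)                                         # 3 samples, 8 features
input_name <- ort_session$get_inputs()[[1]]$name
input_data <- setNames(list(np$asarray(x, dtype = "float32")), input_name)
predictions <- ort_session$run(NULL, input_data)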
See also
Other saving and loading functions: layer_tfsm(), load_model(), load_model_weights(), register_keras_serializable(), save_model(), save_model_config(), save_model_weights(), with_custom_object_scope()