# keras3 1.4.0
- New `op_subset()` and `x@r[...]` methods enable tensor subsetting using R's `[` semantics and idioms.
- New subset assignment methods implemented for tensors: `op_subset(x, ...) <- value` and `x@r[...] <- value`.
- Breaking changes: all operations prefixed with `op_` now return 1-based indices by default. The following functions that return or consume indices have changed: `op_argmax()`, `op_argmin()`, `op_top_k()`, `op_argpartition()`, `op_searchsorted()`, `op_argsort()`, `op_digitize()`, `op_nonzero()`, `op_split()`, `op_trace()`, `op_swapaxes()`, `op_ctc_decode()`, `op_ctc_loss()`, `op_one_hot()`, `op_arange()`.
- `op_arange()` now matches the semantics of `base::seq()`: by default it starts at 1, includes the end value, and automatically infers the step direction.
- `op_one_hot()` now infers `num_classes` if supplied a factor.
- `op_hstack()` and `op_vstack()` now accept arguments passed via `...`.
- `application_decode_predictions()` now returns a processed data frame by default, or a decoder function if predictions are missing.
- `application_preprocess_inputs()` returns a preprocessor function if inputs are missing.
- Various new examples added to the documentation, including for `op_scatter()`, `op_switch()`, and `op_nonzero()`.
- New `x@py[...]` accessor introduced for Python-style 0-based indexing of tensors.
- New `Summary` group generic method for `keras_shape`, enabling usage like `prod(shape(3, 4))`.
- `KERAS_HOME` is now set to `tools::R_user_dir("keras3", "cache")` if `~/.keras` does not exist and `KERAS_HOME` is unset.
- New `op_convert_to_array()` to convert a tensor to an R array.
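To illustrate the new 1-based semantics, the affected `op_` functions now line up with their base R analogues. The sketch below uses only base R (running the `op_` calls themselves requires keras3 with a configured backend); the corresponding tensor behavior is noted in comments:

```r
x <- c(2, 7, 5)

# op_argmax(x) now returns 2 (1-based), agreeing with base R:
which.max(x)   # 2

# op_arange(5) now matches base::seq(): starts at 1, includes the end value:
seq(5)         # 1 2 3 4 5

# The step direction is inferred automatically, as with seq():
seq(5, 1)      # 5 4 3 2 1
```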
Added compatibility with Keras v3.9.2.
New operations added.
New layers introduced.
- `layer_resizing()` gains an `antialias` argument.
- `keras_input()`, `keras_model_sequential()`, and `op_convert_to_tensor()` gain a `ragged` argument.
- `layer$pop_layer()` gains a `rebuild` argument and now returns the removed layer.
- New `rematerialized_call()` method added to `Layer` objects.
- Documentation improvements and minor fixes.
 
- Fixed an issue where `op_shape()` would sometimes return a TensorFlow `TensorShape`.
- Fixes for `metric_iou()`, `op_top_k()`, and `op_eye()` being called with R atomic doubles.
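The new `KERAS_HOME` fallback can be sketched in plain base R. `resolve_keras_home()` below is a hypothetical helper written for illustration, not the package's actual implementation:

```r
# Hypothetical sketch of the KERAS_HOME resolution order (illustration only):
resolve_keras_home <- function() {
  env <- Sys.getenv("KERAS_HOME", unset = "")
  if (nzchar(env))
    return(env)                          # an explicit setting always wins
  if (dir.exists("~/.keras"))
    return(path.expand("~/.keras"))      # legacy default, kept if present
  tools::R_user_dir("keras3", "cache")   # new fallback location
}

resolve_keras_home()
```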
# keras3 1.3.0
CRAN release: 2025-03-03
- Keras now uses `reticulate::py_require()` to resolve Python dependencies. Calling `install_keras()` is no longer required (but is still supported).
- `use_backend()` gains a `gpu` argument, to specify whether a GPU-capable set of dependencies should be resolved by `py_require()`.
- The progress bar in `fit()`, `evaluate()`, and `predict()` now defaults to not displaying during testthat tests.
- `dotty::.` is now reexported.
- `%*%` now dispatches to `op_matmul()` for TensorFlow tensors, which has relaxed shape constraints compared to `tf$matmul()`.
- Fixed an issue where calling a `Metric` or `Loss` object with unnamed arguments would error.
Added compatibility with Keras v3.8.0. User-facing changes:
- New symbols:
  - `activation_sparse_plus()`
  - `activation_sparsemax()`
  - `activation_threshold()`
  - `layer_equalization()`
  - `layer_mix_up()`
  - `layer_rand_augment()`
  - `layer_random_color_degeneration()`
  - `layer_random_color_jitter()`
  - `layer_random_grayscale()`
  - `layer_random_hue()`
  - `layer_random_posterization()`
  - `layer_random_saturation()`
  - `layer_random_sharpness()`
  - `layer_random_shear()`
  - `op_diagflat()`
  - `op_sparse_plus()`
  - `op_sparsemax()`
  - `op_threshold()`
  - `op_unravel_index()`
 
- Added an `axis` argument to the Tversky loss.
- New: ONNX model export with `export_savedmodel()`.
- Documentation improvements and bug fixes.
- JAX-specific changes: added support for JAX named scopes.
- TensorFlow-specific changes: made `random_shuffle()` XLA-compilable.
Added compatibility with Keras v3.7.0. User-facing changes:
New functions added.
New arguments:
- `callback_backup_and_restore()`: added `double_checkpoint` argument to save a fallback checkpoint.
- `callback_tensorboard()`: added support for the `profile_batch` argument.
- `layer_group_query_attention()`: added `flash_attention` and `seed` arguments.
- `layer_multi_head_attention()`: added `flash_attention` argument.
- `metric_sparse_top_k_categorical_accuracy()`: added `from_sorted_ids` argument.
Performance improvements:
- Added native Flash Attention support for GPU (via cuDNN) and TPU (via Pallas kernel) in JAX backend 
- Added opt-in native Flash Attention support for GPU in PyTorch backend 
- Enabled additional kernel fusion via bias_add in TensorFlow backend 
- Added support for Intel XPU devices in PyTorch backend 
- `install_keras()` changes: if a GPU is available, the default is now to install a CPU build of TensorFlow and a GPU build of JAX. To use a GPU in the current session, call `use_backend("jax")`.
Added compatibility with Keras v3.6.0. User-facing changes:
Breaking changes:
- When using `get_file()` with `extract = TRUE` or `untar = TRUE`, the return value is now the path of the extracted directory, rather than the path of the archive.
Other changes and additions:
- Logging is now asynchronous in `fit()`, `evaluate()`, and `predict()`. This enables 100% compact stacking of `train_step` calls on accelerators (e.g. when running small models on TPU).
  - If you are using custom callbacks that rely on `on_batch_end`, this will disable async logging. You can re-enable it by adding `self$async_safe <- TRUE` to your callbacks. Note that the TensorBoard callback is not considered async-safe by default. Default callbacks like the progress bar are async-safe.
- New bitwise operations.
- New math operations.
- New neural network operation: `op_dot_product_attention()`.
- New image preprocessing layers.
- New `Model` functions `get_state_tree()` and `set_state_tree()`, for retrieving all model variables, including trainable, non-trainable, optimizer, and metric variables.
- New `layer_pipeline()` for composing a sequence of layers. This class is useful for building a preprocessing pipeline. Compared to a `keras_model_sequential()`, `layer_pipeline()` has a few key differences:
  - It's not a `Model`, just a plain layer.
  - When the layers in the pipeline are compatible with `tf.data`, the pipeline will also remain `tf.data` compatible, regardless of the backend you use.
 
- New argument: `export_savedmodel(verbose = )`.
- New argument: `op_normalize(epsilon = )`.
- Various documentation improvements and bug fixes. 
# keras3 1.2.0
CRAN release: 2024-09-05
- Added compatibility with Keras v3.5.0. User-facing changes:
  - New functions added.
  - `keras$DTypePolicy` instances can now be supplied to the `dtype` argument for losses, metrics, and layers.
  - Added integration with the Hugging Face Hub. You can now save models to the Hugging Face Hub directly with `save_model()` and load `.keras` models directly from the Hugging Face Hub with `load_model()`.
  - Added compatibility with NumPy 2.0.
  - Improved `keras$distribution` API support for very large models.
  - Bug fixes and performance improvements.
- Added `data_format` argument to `layer_zero_padding_1d()`.
- Miscellaneous documentation improvements.
- Bug fixes and performance improvements.
 
# keras3 1.1.0
CRAN release: 2024-07-17
- Fixed issue where GPUs would not be found when running on Windows under WSL Linux. (reported in #1456, fixed in #1459) 
- `keras_shape` objects (as returned by `keras3::shape()`) gain `==` and `!=` methods.
- Fixed warning from `tfruns::training_run()` being unable to log the optimizer learning rate.
- Added compatibility with Keras v3.4.1 (no R user facing changes). 
- Added compatibility with Keras v3.4.0. User-facing changes:
  - New functions added.
  - Changes:
- Added support for arbitrary, deeply nested input/output structures in Functional models (e.g. lists of lists of lists of inputs or outputs…)
- Added support for optional Functional inputs.
  - `keras_input()` gains an `optional` argument.
  - `keras_model_sequential()` gains an `input_optional` argument.
 
- Added support for `float8` inference for `Dense` and `EinsumDense` layers.
- Enabled `layer_feature_space()` to be used in a tfdatasets pipeline even when the backend isn't TensorFlow.
- `layer_string_lookup()` can now take `tf$SparseTensor()` as input.
- `layer_string_lookup()` now returns `"int64"` dtype by default in more modes.
- `Layer()` instances gain attributes `path` and `quantization_mode`.
- `Metric()$variables` is now recursive.
- Added `training` argument to `Model$compute_loss()`.
- `split_dataset()` now supports nested structures in the dataset.
- All applications gain a `name` argument, accepting a custom name.
- `layer_multi_head_attention()` gains a `seed` argument.
- All losses gain a `dtype` argument.
- `loss_dice()` gains an `axis` argument.
- `op_ctc_decode()`: new default of `mask_index = 0`.
- All `op_image_*` functions now default `data_format` to `config_image_data_format()`.
- `op_isclose()` gains arguments `rtol`, `atol`, and `equal_nan`.
- `save_model()` gains argument `zipped`.
- Bug fixes and performance improvements.
 
 
# keras3 1.0.0
CRAN release: 2024-05-21
- Chains of `layer_*` calls with `|>` now instantiate layers in the same order as `%>%` pipe chains: left-hand-side first (#1440).
- `iterate()`, `iter_next()`, and `as_iterator()` are now reexported from reticulate.
User-facing changes with upstream Keras v3.3.3:
- New functions: `op_slogdet()`, `op_psnr()`.
- `clone_model()` gains new args: `call_function`, `recursive`. Updated example usage.
- `op_ctc_decode()` `strategy` argument has a new default: `"greedy"`. Updated docs.
- `loss_ctc()` default name fixed; changed to `"ctc"`.
User-facing changes with upstream Keras v3.3.2:
- New function: `op_ctc_decode()`
- New function: `op_eigh()`
- New function: `op_select()`
- New function: `op_vectorize()`
- New function: `op_image_rgb_to_grayscale()`
- New function: `loss_tversky()`
- New args: `layer_resizing(pad_to_aspect_ratio, fill_mode, fill_value)`
- New arg: `layer_embedding(weights)` for providing an initial weights matrix
- New args: `op_nan_to_num(nan, posinf, neginf)`
- New args: `op_image_resize(crop_to_aspect_ratio, pad_to_aspect_ratio, fill_mode, fill_value)`
- New args: `op_argmax(keepdims)` and `op_argmin(keepdims)`
- New arg: `clear_session(free_memory)` for clearing without invoking the garbage collector.
- `metric_kl_divergence()` and `loss_kl_divergence()` clip inputs (`y_true` and `y_pred`) to the `[0, 1]` range.
- New `Layer()` attributes: `metrics`, `dtype_policy`.
- Added initial support for float8 training.
- `layer_conv_*d()` layers now support LoRA.
- `op_digitize()` now supports sparse tensors.
- Models and layers now return owned metrics recursively.
- Added pickling support for Keras models (e.g. via `reticulate::py_save_object()`). Note that pickling is not recommended; prefer using the Keras saving APIs.
# keras3 0.2.0
CRAN release: 2024-04-18
New functions:
- `quantize_weights()`: quantize model or layer weights in place. Currently, only `Dense`, `EinsumDense`, and `Embedding` layers are supported (which is enough to cover the majority of transformers today).
- `config_set_backend()`: change the backend after Keras has initialized.
- New ops added.
- New family of linear algebra ops added.
- `audio_dataset_from_directory()`, `image_dataset_from_directory()`, and `text_dataset_from_directory()` gain a `verbose` argument (default `TRUE`).
- `image_dataset_from_directory()` gains a `pad_to_aspect_ratio` argument (default `FALSE`).
- `to_categorical()`, `op_one_hot()`, and `fit()` can now accept R factors, offsetting them to be 0-based (reported in #1055).
- `op_convert_to_numpy()` now returns unconverted NumPy arrays.
- `op_array()` and `op_convert_to_tensor()` no longer error when casting R doubles to integer types.
- `export_savedmodel()` now works with the JAX backend.
- `Metric()$add_variable()` method gains arg: `aggregation`.
- `Layer()$add_weight()` method gains args: `autocast`, `regularizer`, `aggregation`.
- `op_bincount()`, `op_multi_hot()`, `op_one_hot()`, and `layer_category_encoding()` now support sparse tensors.
- `op_custom_gradient()` now supports the PyTorch backend.
- `layer_lstm()` and `layer_gru()` gain arg `use_cudnn`, default `"auto"`.
- Fixed an issue where `application_preprocess_inputs()` would error if supplied an R array as input.
- Doc improvements. 
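The factor handling above amounts to offsetting R's 1-based factor codes to Keras's 0-based class labels. A base R sketch of the conversion that `to_categorical()` and friends now perform (the package's exact implementation may differ):

```r
f <- factor(c("cat", "dog", "cat", "bird"))

# R factor codes are 1-based; Keras class labels are 0-based.
as.integer(f)        # 2 3 2 1  (levels sort to: bird, cat, dog)
as.integer(f) - 1L   # 1 2 1 0  -- the 0-based labels passed to Keras
```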
# keras3 0.1.0
CRAN release: 2024-02-17
- The package has been rebuilt for Keras 3.0. Refer to https://blogs.rstudio.com/ai/posts/2024-05-21-keras3/ for an overview and https://keras3.posit.co for the current up-to-date documentation.