
Introduction

This example shows how to do timeseries classification from scratch, starting from raw TSV timeseries files on disk. We demonstrate the workflow on the FordA dataset from the UCR/UEA archive.

Setup
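
The chunks below assume the following setup. This is a minimal sketch: the multi-assignment operator %<-% used later is re-exported by keras3, while readr and listarrays are called via :: and only need to be installed.

library(keras3)
# install.packages(c("readr", "listarrays"))  # if not already installed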

Load the data: the FordA dataset

Dataset description

The dataset we are using here is called FordA. The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task. The full description of this dataset can be found here.

Read the TSV data

We will use the FordA_TRAIN file for training and the FordA_TEST file for testing. The simplicity of this dataset allows us to demonstrate effectively how to use ConvNets for timeseries classification. In each file, the first column holds the label and the remaining 500 columns hold the timeseries values.

get_data <- function(path) {
  if(path |> startsWith("https://"))
    path <- get_file(origin = path)  # cache file locally

  data <- readr::read_tsv(
    path, col_names = FALSE,
    # Each row is: one integer (the label),
    # followed by 500 doubles (the timeseries)
    col_types = paste0("i", strrep("d", 500))
  )

  y <- as.matrix(data[[1]])
  x <- as.matrix(data[,-1])
  dimnames(x) <- dimnames(y) <- NULL

  list(x, y)
}

root_url <- "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"
c(x_train, y_train) %<-% get_data(paste0(root_url, "FordA_TRAIN.tsv"))
c(x_test, y_test) %<-% get_data(paste0(root_url, "FordA_TEST.tsv"))

str(keras3:::named_list(
  x_train, y_train,
  x_test, y_test
))
## List of 4
##  $ x_train: num [1:3601, 1:500] -0.797 0.805 0.728 -0.234 -0.171 ...
##  $ y_train: int [1:3601, 1] -1 1 -1 -1 -1 1 1 1 1 1 ...
##  $ x_test : num [1:1320, 1:500] -0.14 0.334 0.717 1.24 -1.159 ...
##  $ y_test : int [1:1320, 1] -1 -1 -1 1 -1 1 -1 -1 1 1 ...

Visualize the data

Here we visualize one timeseries example for each class in the dataset.

plot(NULL, main = "Timeseries Data",
     xlab = "Timepoints",  ylab = "Values",
     xlim = c(1, ncol(x_test)),
     ylim = range(x_test))
grid()
lines(x_test[match(-1, y_test), ], col = "blue")
lines(x_test[match( 1, y_test), ], col = "red")
legend("topright", legend=c("label -1", "label 1"), col=c("blue", "red"), lty=1)
Plot of example timeseries data (one series per class).

Standardize the data

Our timeseries already have a uniform length (500). However, their values can span quite different ranges. This is not ideal for a neural network; in general we should seek to normalize the input values. For this specific dataset, the data is already z-normalized: each timeseries sample has a mean equal to zero and a standard deviation equal to one. This type of normalization is very common for timeseries classification problems, see Bagnall et al. (2016).
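
FordA therefore needs no extra scaling step, but for datasets that are not pre-normalized, per-sample z-normalization is straightforward. A minimal sketch (z_normalize_rows is a hypothetical helper, not part of the original workflow), operating on a matrix with one series per row:

# Hypothetical helper: z-normalize each row (series) of a matrix,
# subtracting the row mean and dividing by the row standard deviation.
z_normalize_rows <- function(x) {
  t(apply(x, 1, function(series) (series - mean(series)) / sd(series)))
}
# e.g.: x_train <- z_normalize_rows(x_train)  # not needed for FordA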

Note that the timeseries data used here are univariate, meaning we only have one channel per timeseries example. We will therefore transform each timeseries into a multivariate one with a single channel by adding a trailing channel dimension via a simple reshape. This will allow us to construct a model that is easily applicable to multivariate timeseries.

dim(x_train) <- c(dim(x_train), 1)
dim(x_test) <- c(dim(x_test), 1)
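
As a quick check that the reshape did what we expect (the sizes match the str() output shown earlier):

dim(x_train)  # 3601 500 1
dim(x_test)   # 1320 500 1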

Finally, in order to use sparse_categorical_crossentropy, we will need to count the number of classes beforehand; this count also determines the size of the final softmax layer.

num_classes <- length(unique(y_train))

Now we shuffle the training set, because we will be using the validation_split option later when training. validation_split holds out the last fraction of the samples without shuffling, so shuffling first keeps the validation split representative.

c(x_train, y_train) %<-% listarrays::shuffle_rows(x_train, y_train)
# Equivalently, using magrittr's %<>% assignment pipe:
# idx <- sample.int(nrow(x_train))
# x_train %<>% .[idx, , , drop = FALSE]
# y_train %<>% .[idx, , drop = FALSE]

Standardize the labels to consecutive integers starting at zero. The expected labels will then be 0 and 1.

y_train[y_train == -1L] <- 0L
y_test[y_test == -1L] <- 0L
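
Since the task is described as balanced binary classification, it is worth a quick sanity check that both classes are well represented after the remapping (output not shown):

table(y_train)  # counts of labels 0 and 1; roughly equal for FordA
table(y_test)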

Build a model

We build a Fully Convolutional Neural Network originally proposed in this paper. The implementation is based on the TF 2 version provided here. The following hyperparameters (kernel_size, filters, the usage of BatchNorm) were found via random search using KerasTuner.

make_model <- function(input_shape) {
  inputs <- keras_input(input_shape)

  outputs <- inputs |>
    # conv1
    layer_conv_1d(filters = 64, kernel_size = 3, padding = "same") |>
    layer_batch_normalization() |>
    layer_activation_relu() |>
    # conv2
    layer_conv_1d(filters = 64, kernel_size = 3, padding = "same") |>
    layer_batch_normalization() |>
    layer_activation_relu() |>
    # conv3
    layer_conv_1d(filters = 64, kernel_size = 3, padding = "same") |>
    layer_batch_normalization() |>
    layer_activation_relu() |>
    # pooling
    layer_global_average_pooling_1d() |>
    # final output
    layer_dense(num_classes, activation = "softmax")

  keras_model(inputs, outputs)
}

model <- make_model(input_shape = dim(x_train)[-1])
model
## Model: "functional"
## ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┓
## ┃ Layer (type)                    ┃ Output Shape          ┃   Param # ┃ Trainable ┃
## ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━┩
## │ input_layer (InputLayer)        │ (None, 500, 1)        │         0 │     -     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ conv1d (Conv1D)                 │ (None, 500, 64)       │       256 │     Y     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ batch_normalization             │ (None, 500, 64)       │       256 │     Y     │
## │ (BatchNormalization)            │                       │           │           │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ re_lu (ReLU)                    │ (None, 500, 64)       │         0 │     -     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ conv1d_1 (Conv1D)               │ (None, 500, 64)       │    12,352 │     Y     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ batch_normalization_1           │ (None, 500, 64)       │       256 │     Y     │
## │ (BatchNormalization)            │                       │           │           │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ re_lu_1 (ReLU)                  │ (None, 500, 64)       │         0 │     -     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ conv1d_2 (Conv1D)               │ (None, 500, 64)       │    12,352 │     Y     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ batch_normalization_2           │ (None, 500, 64)       │       256 │     Y     │
## │ (BatchNormalization)            │                       │           │           │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ re_lu_2 (ReLU)                  │ (None, 500, 64)       │         0 │     -     │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ global_average_pooling1d        │ (None, 64)            │         0 │     -     │
## │ (GlobalAveragePooling1D)        │                       │           │           │
## ├─────────────────────────────────┼───────────────────────┼───────────┼───────────┤
## │ dense (Dense)                   │ (None, 2)             │       130 │     Y     │
## └─────────────────────────────────┴───────────────────────┴───────────┴───────────┘
##  Total params: 25,858 (101.01 KB)
##  Trainable params: 25,474 (99.51 KB)
##  Non-trainable params: 384 (1.50 KB)
plot(model, show_shapes = TRUE)

Train the model

epochs <- 500
batch_size <- 32

callbacks <- c(
  # save the weights with the lowest validation loss seen so far
  callback_model_checkpoint(
    "best_model.keras", save_best_only = TRUE,
    monitor = "val_loss"
  ),
  # halve the learning rate when val_loss plateaus for 20 epochs
  callback_reduce_lr_on_plateau(
    monitor = "val_loss", factor = 0.5,
    patience = 20, min_lr = 0.0001
  ),
  # stop training when val_loss has not improved for 50 epochs
  callback_early_stopping(
    monitor = "val_loss", patience = 50,
    verbose = 1
  )
)


model |> compile(
  optimizer = "adam",
  loss = "sparse_categorical_crossentropy",
  metrics = "sparse_categorical_accuracy"
)

history <- model |> fit(
  x_train, y_train,
  batch_size = batch_size,
  epochs = epochs,
  callbacks = callbacks,
  validation_split = 0.2
)
## Epoch 1/500
## 90/90 - 3s - 36ms/step - loss: 0.5579 - sparse_categorical_accuracy: 0.7066 - val_loss: 0.8503 - val_sparse_categorical_accuracy: 0.4896 - learning_rate: 0.0010
## Epoch 2/500
## 90/90 - 1s - 7ms/step - loss: 0.4851 - sparse_categorical_accuracy: 0.7625 - val_loss: 0.9085 - val_sparse_categorical_accuracy: 0.4896 - learning_rate: 0.0010
## Epoch 3/500
## 90/90 - 0s - 2ms/step - loss: 0.4704 - sparse_categorical_accuracy: 0.7601 - val_loss: 0.7682 - val_sparse_categorical_accuracy: 0.4896 - learning_rate: 0.0010
## Epoch 4/500
## 90/90 - 0s - 2ms/step - loss: 0.4217 - sparse_categorical_accuracy: 0.7896 - val_loss: 0.6329 - val_sparse_categorical_accuracy: 0.5853 - learning_rate: 0.0010
## Epoch 5/500
## 90/90 - 0s - 2ms/step - loss: 0.4297 - sparse_categorical_accuracy: 0.7802 - val_loss: 0.5372 - val_sparse_categorical_accuracy: 0.6768 - learning_rate: 0.0010
## Epoch 6/500
## 90/90 - 0s - 2ms/step - loss: 0.4048 - sparse_categorical_accuracy: 0.8010 - val_loss: 0.4630 - val_sparse_categorical_accuracy: 0.8086 - learning_rate: 0.0010
## Epoch 7/500
## 90/90 - 0s - 2ms/step - loss: 0.4028 - sparse_categorical_accuracy: 0.7986 - val_loss: 0.6753 - val_sparse_categorical_accuracy: 0.6893 - learning_rate: 0.0010
## Epoch 8/500
## 90/90 - 0s - 2ms/step - loss: 0.3901 - sparse_categorical_accuracy: 0.8056 - val_loss: 0.3839 - val_sparse_categorical_accuracy: 0.8211 - learning_rate: 0.0010
## Epoch 9/500
## 90/90 - 0s - 1ms/step - loss: 0.3888 - sparse_categorical_accuracy: 0.8139 - val_loss: 0.4134 - val_sparse_categorical_accuracy: 0.8017 - learning_rate: 0.0010
## Epoch 10/500
## 90/90 - 0s - 1ms/step - loss: 0.3843 - sparse_categorical_accuracy: 0.8122 - val_loss: 0.4185 - val_sparse_categorical_accuracy: 0.8058 - learning_rate: 0.0010
## Epoch 11/500
## 90/90 - 0s - 2ms/step - loss: 0.3817 - sparse_categorical_accuracy: 0.8135 - val_loss: 0.4276 - val_sparse_categorical_accuracy: 0.8017 - learning_rate: 0.0010
## Epoch 12/500
## 90/90 - 0s - 1ms/step - loss: 0.3674 - sparse_categorical_accuracy: 0.8198 - val_loss: 0.4145 - val_sparse_categorical_accuracy: 0.7989 - learning_rate: 0.0010
## Epoch 13/500
## 90/90 - 0s - 1ms/step - loss: 0.3551 - sparse_categorical_accuracy: 0.8392 - val_loss: 0.4850 - val_sparse_categorical_accuracy: 0.7268 - learning_rate: 0.0010
## Epoch 14/500
## 90/90 - 0s - 2ms/step - loss: 0.3579 - sparse_categorical_accuracy: 0.8299 - val_loss: 0.3535 - val_sparse_categorical_accuracy: 0.8419 - learning_rate: 0.0010
## Epoch 15/500
## 90/90 - 0s - 1ms/step - loss: 0.3425 - sparse_categorical_accuracy: 0.8438 - val_loss: 0.4458 - val_sparse_categorical_accuracy: 0.7712 - learning_rate: 0.0010
## Epoch 16/500
## 90/90 - 0s - 1ms/step - loss: 0.3419 - sparse_categorical_accuracy: 0.8462 - val_loss: 0.5953 - val_sparse_categorical_accuracy: 0.7004 - learning_rate: 0.0010
## Epoch 17/500
## 90/90 - 0s - 2ms/step - loss: 0.3291 - sparse_categorical_accuracy: 0.8476 - val_loss: 0.3407 - val_sparse_categorical_accuracy: 0.8599 - learning_rate: 0.0010
## Epoch 18/500
## 90/90 - 0s - 1ms/step - loss: 0.3221 - sparse_categorical_accuracy: 0.8531 - val_loss: 0.6358 - val_sparse_categorical_accuracy: 0.7060 - learning_rate: 0.0010
## Epoch 19/500
## 90/90 - 0s - 1ms/step - loss: 0.3164 - sparse_categorical_accuracy: 0.8646 - val_loss: 0.3234 - val_sparse_categorical_accuracy: 0.8474 - learning_rate: 0.0010
## Epoch 20/500
## 90/90 - 0s - 1ms/step - loss: 0.3139 - sparse_categorical_accuracy: 0.8611 - val_loss: 0.5349 - val_sparse_categorical_accuracy: 0.7240 - learning_rate: 0.0010
## Epoch 21/500
## 90/90 - 0s - 1ms/step - loss: 0.2988 - sparse_categorical_accuracy: 0.8733 - val_loss: 0.3568 - val_sparse_categorical_accuracy: 0.8072 - learning_rate: 0.0010
## Epoch 22/500
## 90/90 - 0s - 2ms/step - loss: 0.3007 - sparse_categorical_accuracy: 0.8740 - val_loss: 0.3128 - val_sparse_categorical_accuracy: 0.8613 - learning_rate: 0.0010
## Epoch 23/500
## 90/90 - 0s - 2ms/step - loss: 0.2902 - sparse_categorical_accuracy: 0.8830 - val_loss: 0.3023 - val_sparse_categorical_accuracy: 0.8641 - learning_rate: 0.0010
## Epoch 24/500
## 90/90 - 0s - 2ms/step - loss: 0.2892 - sparse_categorical_accuracy: 0.8778 - val_loss: 0.5304 - val_sparse_categorical_accuracy: 0.7254 - learning_rate: 0.0010
## Epoch 25/500
## 90/90 - 0s - 1ms/step - loss: 0.2811 - sparse_categorical_accuracy: 0.8816 - val_loss: 0.4071 - val_sparse_categorical_accuracy: 0.8072 - learning_rate: 0.0010
## Epoch 26/500
## 90/90 - 0s - 1ms/step - loss: 0.3112 - sparse_categorical_accuracy: 0.8635 - val_loss: 0.5276 - val_sparse_categorical_accuracy: 0.7268 - learning_rate: 0.0010
## Epoch 27/500
## 90/90 - 0s - 1ms/step - loss: 0.2723 - sparse_categorical_accuracy: 0.8889 - val_loss: 0.3626 - val_sparse_categorical_accuracy: 0.8169 - learning_rate: 0.0010
## Epoch 28/500
## 90/90 - 0s - 2ms/step - loss: 0.2891 - sparse_categorical_accuracy: 0.8809 - val_loss: 0.3338 - val_sparse_categorical_accuracy: 0.8363 - learning_rate: 0.0010
## Epoch 29/500
## 90/90 - 0s - 2ms/step - loss: 0.2760 - sparse_categorical_accuracy: 0.8778 - val_loss: 0.2732 - val_sparse_categorical_accuracy: 0.8793 - learning_rate: 0.0010
## Epoch 30/500
## 90/90 - 0s - 2ms/step - loss: 0.2693 - sparse_categorical_accuracy: 0.8851 - val_loss: 0.3162 - val_sparse_categorical_accuracy: 0.8627 - learning_rate: 0.0010
## Epoch 31/500
## 90/90 - 0s - 1ms/step - loss: 0.2715 - sparse_categorical_accuracy: 0.8865 - val_loss: 0.5307 - val_sparse_categorical_accuracy: 0.7184 - learning_rate: 0.0010
## Epoch 32/500
## 90/90 - 0s - 1ms/step - loss: 0.2682 - sparse_categorical_accuracy: 0.8878 - val_loss: 0.3087 - val_sparse_categorical_accuracy: 0.8530 - learning_rate: 0.0010
## Epoch 33/500
## 90/90 - 0s - 2ms/step - loss: 0.2569 - sparse_categorical_accuracy: 0.8944 - val_loss: 0.6098 - val_sparse_categorical_accuracy: 0.6824 - learning_rate: 0.0010
## Epoch 34/500
## 90/90 - 0s - 2ms/step - loss: 0.2466 - sparse_categorical_accuracy: 0.9021 - val_loss: 0.4204 - val_sparse_categorical_accuracy: 0.7642 - learning_rate: 0.0010
## Epoch 35/500
## 90/90 - 0s - 1ms/step - loss: 0.2649 - sparse_categorical_accuracy: 0.8889 - val_loss: 0.4431 - val_sparse_categorical_accuracy: 0.7850 - learning_rate: 0.0010
## Epoch 36/500
## 90/90 - 0s - 2ms/step - loss: 0.2511 - sparse_categorical_accuracy: 0.8997 - val_loss: 0.2687 - val_sparse_categorical_accuracy: 0.8779 - learning_rate: 0.0010
## Epoch 37/500
## 90/90 - 0s - 2ms/step - loss: 0.2529 - sparse_categorical_accuracy: 0.8917 - val_loss: 0.3174 - val_sparse_categorical_accuracy: 0.8599 - learning_rate: 0.0010
## Epoch 38/500
## 90/90 - 0s - 2ms/step - loss: 0.2479 - sparse_categorical_accuracy: 0.9014 - val_loss: 0.4697 - val_sparse_categorical_accuracy: 0.7753 - learning_rate: 0.0010
## Epoch 39/500
## 90/90 - 0s - 1ms/step - loss: 0.2484 - sparse_categorical_accuracy: 0.9010 - val_loss: 0.3184 - val_sparse_categorical_accuracy: 0.8571 - learning_rate: 0.0010
## Epoch 40/500
## 90/90 - 0s - 1ms/step - loss: 0.2484 - sparse_categorical_accuracy: 0.8972 - val_loss: 0.2814 - val_sparse_categorical_accuracy: 0.8835 - learning_rate: 0.0010
## Epoch 41/500
## 90/90 - 0s - 2ms/step - loss: 0.2320 - sparse_categorical_accuracy: 0.9045 - val_loss: 0.2346 - val_sparse_categorical_accuracy: 0.8974 - learning_rate: 0.0010
## Epoch 42/500
## 90/90 - 0s - 1ms/step - loss: 0.2267 - sparse_categorical_accuracy: 0.9115 - val_loss: 0.3550 - val_sparse_categorical_accuracy: 0.8530 - learning_rate: 0.0010
## Epoch 43/500
## 90/90 - 0s - 2ms/step - loss: 0.2358 - sparse_categorical_accuracy: 0.8969 - val_loss: 0.9938 - val_sparse_categorical_accuracy: 0.6269 - learning_rate: 0.0010
## Epoch 44/500
## 90/90 - 0s - 2ms/step - loss: 0.2338 - sparse_categorical_accuracy: 0.9017 - val_loss: 0.4220 - val_sparse_categorical_accuracy: 0.8031 - learning_rate: 0.0010
## Epoch 45/500
## 90/90 - 0s - 1ms/step - loss: 0.2315 - sparse_categorical_accuracy: 0.9094 - val_loss: 0.2986 - val_sparse_categorical_accuracy: 0.8544 - learning_rate: 0.0010
## Epoch 46/500
## 90/90 - 0s - 1ms/step - loss: 0.2306 - sparse_categorical_accuracy: 0.9031 - val_loss: 0.3021 - val_sparse_categorical_accuracy: 0.8669 - learning_rate: 0.0010
## Epoch 47/500
## 90/90 - 0s - 1ms/step - loss: 0.2148 - sparse_categorical_accuracy: 0.9163 - val_loss: 0.2687 - val_sparse_categorical_accuracy: 0.8960 - learning_rate: 0.0010
## Epoch 48/500
## 90/90 - 0s - 1ms/step - loss: 0.2167 - sparse_categorical_accuracy: 0.9146 - val_loss: 0.2682 - val_sparse_categorical_accuracy: 0.8696 - learning_rate: 0.0010
## Epoch 49/500
## 90/90 - 0s - 1ms/step - loss: 0.2083 - sparse_categorical_accuracy: 0.9160 - val_loss: 0.2813 - val_sparse_categorical_accuracy: 0.8724 - learning_rate: 0.0010
## Epoch 50/500
## 90/90 - 0s - 2ms/step - loss: 0.2103 - sparse_categorical_accuracy: 0.9184 - val_loss: 0.2291 - val_sparse_categorical_accuracy: 0.9001 - learning_rate: 0.0010
## Epoch 51/500
## 90/90 - 0s - 1ms/step - loss: 0.2135 - sparse_categorical_accuracy: 0.9188 - val_loss: 0.2310 - val_sparse_categorical_accuracy: 0.9043 - learning_rate: 0.0010
## Epoch 52/500
## 90/90 - 0s - 1ms/step - loss: 0.2057 - sparse_categorical_accuracy: 0.9181 - val_loss: 0.2294 - val_sparse_categorical_accuracy: 0.9071 - learning_rate: 0.0010
## Epoch 53/500
## 90/90 - 0s - 1ms/step - loss: 0.1951 - sparse_categorical_accuracy: 0.9240 - val_loss: 0.3438 - val_sparse_categorical_accuracy: 0.8363 - learning_rate: 0.0010
## Epoch 54/500
## 90/90 - 0s - 1ms/step - loss: 0.1952 - sparse_categorical_accuracy: 0.9240 - val_loss: 0.4523 - val_sparse_categorical_accuracy: 0.7809 - learning_rate: 0.0010
## Epoch 55/500
## 90/90 - 0s - 1ms/step - loss: 0.1901 - sparse_categorical_accuracy: 0.9306 - val_loss: 0.2329 - val_sparse_categorical_accuracy: 0.9015 - learning_rate: 0.0010
## Epoch 56/500
## 90/90 - 0s - 2ms/step - loss: 0.1797 - sparse_categorical_accuracy: 0.9351 - val_loss: 0.5545 - val_sparse_categorical_accuracy: 0.7670 - learning_rate: 0.0010
## Epoch 57/500
## 90/90 - 0s - 3ms/step - loss: 0.1733 - sparse_categorical_accuracy: 0.9417 - val_loss: 0.4508 - val_sparse_categorical_accuracy: 0.7781 - learning_rate: 0.0010
## Epoch 58/500
## 90/90 - 0s - 2ms/step - loss: 0.1608 - sparse_categorical_accuracy: 0.9448 - val_loss: 0.1838 - val_sparse_categorical_accuracy: 0.9196 - learning_rate: 0.0010
## Epoch 59/500
## 90/90 - 0s - 2ms/step - loss: 0.1666 - sparse_categorical_accuracy: 0.9413 - val_loss: 0.3148 - val_sparse_categorical_accuracy: 0.8599 - learning_rate: 0.0010
## Epoch 60/500
## 90/90 - 0s - 2ms/step - loss: 0.1626 - sparse_categorical_accuracy: 0.9410 - val_loss: 0.2557 - val_sparse_categorical_accuracy: 0.8960 - learning_rate: 0.0010
## Epoch 61/500
## 90/90 - 0s - 2ms/step - loss: 0.1435 - sparse_categorical_accuracy: 0.9542 - val_loss: 0.2049 - val_sparse_categorical_accuracy: 0.9223 - learning_rate: 0.0010
## Epoch 62/500
## 90/90 - 0s - 2ms/step - loss: 0.1460 - sparse_categorical_accuracy: 0.9500 - val_loss: 0.4613 - val_sparse_categorical_accuracy: 0.7947 - learning_rate: 0.0010
## Epoch 63/500
## 90/90 - 0s - 2ms/step - loss: 0.1513 - sparse_categorical_accuracy: 0.9434 - val_loss: 0.1866 - val_sparse_categorical_accuracy: 0.9515 - learning_rate: 0.0010
## Epoch 64/500
## 90/90 - 0s - 2ms/step - loss: 0.1334 - sparse_categorical_accuracy: 0.9545 - val_loss: 0.2366 - val_sparse_categorical_accuracy: 0.9043 - learning_rate: 0.0010
## Epoch 65/500
## 90/90 - 0s - 2ms/step - loss: 0.1296 - sparse_categorical_accuracy: 0.9587 - val_loss: 0.1656 - val_sparse_categorical_accuracy: 0.9362 - learning_rate: 0.0010
## Epoch 66/500
## 90/90 - 0s - 2ms/step - loss: 0.1186 - sparse_categorical_accuracy: 0.9635 - val_loss: 0.2215 - val_sparse_categorical_accuracy: 0.9140 - learning_rate: 0.0010
## Epoch 67/500
## 90/90 - 0s - 2ms/step - loss: 0.1157 - sparse_categorical_accuracy: 0.9622 - val_loss: 0.1591 - val_sparse_categorical_accuracy: 0.9334 - learning_rate: 0.0010
## Epoch 68/500
## 90/90 - 0s - 2ms/step - loss: 0.1201 - sparse_categorical_accuracy: 0.9601 - val_loss: 0.1539 - val_sparse_categorical_accuracy: 0.9376 - learning_rate: 0.0010
## Epoch 69/500
## 90/90 - 0s - 2ms/step - loss: 0.1251 - sparse_categorical_accuracy: 0.9559 - val_loss: 0.2505 - val_sparse_categorical_accuracy: 0.8793 - learning_rate: 0.0010
## Epoch 70/500
## 90/90 - 0s - 2ms/step - loss: 0.1188 - sparse_categorical_accuracy: 0.9618 - val_loss: 0.2255 - val_sparse_categorical_accuracy: 0.9015 - learning_rate: 0.0010
## Epoch 71/500
## 90/90 - 0s - 2ms/step - loss: 0.1194 - sparse_categorical_accuracy: 0.9618 - val_loss: 0.5463 - val_sparse_categorical_accuracy: 0.7587 - learning_rate: 0.0010
## Epoch 72/500
## 90/90 - 0s - 2ms/step - loss: 0.1128 - sparse_categorical_accuracy: 0.9649 - val_loss: 0.1886 - val_sparse_categorical_accuracy: 0.9154 - learning_rate: 0.0010
## Epoch 73/500
## 90/90 - 0s - 2ms/step - loss: 0.1056 - sparse_categorical_accuracy: 0.9681 - val_loss: 0.7039 - val_sparse_categorical_accuracy: 0.7822 - learning_rate: 0.0010
## Epoch 74/500
## 90/90 - 0s - 2ms/step - loss: 0.1035 - sparse_categorical_accuracy: 0.9656 - val_loss: 0.6353 - val_sparse_categorical_accuracy: 0.7559 - learning_rate: 0.0010
## Epoch 75/500
## 90/90 - 0s - 2ms/step - loss: 0.1218 - sparse_categorical_accuracy: 0.9573 - val_loss: 0.1446 - val_sparse_categorical_accuracy: 0.9404 - learning_rate: 0.0010
## Epoch 76/500
## 90/90 - 0s - 2ms/step - loss: 0.1157 - sparse_categorical_accuracy: 0.9583 - val_loss: 0.1494 - val_sparse_categorical_accuracy: 0.9404 - learning_rate: 0.0010
## Epoch 77/500
## 90/90 - 0s - 2ms/step - loss: 0.1027 - sparse_categorical_accuracy: 0.9646 - val_loss: 0.1718 - val_sparse_categorical_accuracy: 0.9348 - learning_rate: 0.0010
## Epoch 78/500
## 90/90 - 0s - 2ms/step - loss: 0.1052 - sparse_categorical_accuracy: 0.9705 - val_loss: 0.3352 - val_sparse_categorical_accuracy: 0.8669 - learning_rate: 0.0010
## Epoch 79/500
## 90/90 - 0s - 2ms/step - loss: 0.1066 - sparse_categorical_accuracy: 0.9639 - val_loss: 0.2230 - val_sparse_categorical_accuracy: 0.9140 - learning_rate: 0.0010
## Epoch 80/500
## 90/90 - 0s - 2ms/step - loss: 0.1099 - sparse_categorical_accuracy: 0.9635 - val_loss: 0.2144 - val_sparse_categorical_accuracy: 0.9057 - learning_rate: 0.0010
## Epoch 81/500
## 90/90 - 0s - 2ms/step - loss: 0.1117 - sparse_categorical_accuracy: 0.9656 - val_loss: 0.4700 - val_sparse_categorical_accuracy: 0.8100 - learning_rate: 0.0010
## Epoch 82/500
## 90/90 - 0s - 2ms/step - loss: 0.0982 - sparse_categorical_accuracy: 0.9688 - val_loss: 0.1167 - val_sparse_categorical_accuracy: 0.9515 - learning_rate: 0.0010
## Epoch 83/500
## 90/90 - 0s - 1ms/step - loss: 0.1053 - sparse_categorical_accuracy: 0.9632 - val_loss: 0.4367 - val_sparse_categorical_accuracy: 0.8405 - learning_rate: 0.0010
## Epoch 84/500
## 90/90 - 0s - 1ms/step - loss: 0.1004 - sparse_categorical_accuracy: 0.9660 - val_loss: 0.8277 - val_sparse_categorical_accuracy: 0.7517 - learning_rate: 0.0010
## Epoch 85/500
## 90/90 - 0s - 1ms/step - loss: 0.1013 - sparse_categorical_accuracy: 0.9691 - val_loss: 0.2423 - val_sparse_categorical_accuracy: 0.9057 - learning_rate: 0.0010
## Epoch 86/500
## 90/90 - 0s - 1ms/step - loss: 0.1028 - sparse_categorical_accuracy: 0.9653 - val_loss: 0.5650 - val_sparse_categorical_accuracy: 0.7961 - learning_rate: 0.0010
## Epoch 87/500
## 90/90 - 0s - 1ms/step - loss: 0.1114 - sparse_categorical_accuracy: 0.9594 - val_loss: 0.7420 - val_sparse_categorical_accuracy: 0.7503 - learning_rate: 0.0010
## Epoch 88/500
## 90/90 - 0s - 2ms/step - loss: 0.1003 - sparse_categorical_accuracy: 0.9653 - val_loss: 0.1620 - val_sparse_categorical_accuracy: 0.9348 - learning_rate: 0.0010
## Epoch 89/500
## 90/90 - 0s - 2ms/step - loss: 0.1003 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.2886 - val_sparse_categorical_accuracy: 0.8724 - learning_rate: 0.0010
## Epoch 90/500
## 90/90 - 0s - 1ms/step - loss: 0.0957 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.1361 - val_sparse_categorical_accuracy: 0.9487 - learning_rate: 0.0010
## Epoch 91/500
## 90/90 - 0s - 2ms/step - loss: 0.1177 - sparse_categorical_accuracy: 0.9604 - val_loss: 0.1378 - val_sparse_categorical_accuracy: 0.9459 - learning_rate: 0.0010
## Epoch 92/500
## 90/90 - 0s - 2ms/step - loss: 0.0969 - sparse_categorical_accuracy: 0.9715 - val_loss: 0.4347 - val_sparse_categorical_accuracy: 0.8322 - learning_rate: 0.0010
## Epoch 93/500
## 90/90 - 0s - 2ms/step - loss: 0.0935 - sparse_categorical_accuracy: 0.9660 - val_loss: 0.3435 - val_sparse_categorical_accuracy: 0.8585 - learning_rate: 0.0010
## Epoch 94/500
## 90/90 - 0s - 2ms/step - loss: 0.0969 - sparse_categorical_accuracy: 0.9715 - val_loss: 0.1168 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 0.0010
## Epoch 95/500
## 90/90 - 0s - 2ms/step - loss: 0.0942 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.4135 - val_sparse_categorical_accuracy: 0.8460 - learning_rate: 0.0010
## Epoch 96/500
## 90/90 - 0s - 2ms/step - loss: 0.1128 - sparse_categorical_accuracy: 0.9573 - val_loss: 0.6156 - val_sparse_categorical_accuracy: 0.7517 - learning_rate: 0.0010
## Epoch 97/500
## 90/90 - 0s - 2ms/step - loss: 0.1065 - sparse_categorical_accuracy: 0.9628 - val_loss: 1.1117 - val_sparse_categorical_accuracy: 0.6505 - learning_rate: 0.0010
## Epoch 98/500
## 90/90 - 0s - 2ms/step - loss: 0.0945 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.1545 - val_sparse_categorical_accuracy: 0.9348 - learning_rate: 0.0010
## Epoch 99/500
## 90/90 - 0s - 2ms/step - loss: 0.0925 - sparse_categorical_accuracy: 0.9667 - val_loss: 0.1317 - val_sparse_categorical_accuracy: 0.9626 - learning_rate: 0.0010
## Epoch 100/500
## 90/90 - 0s - 2ms/step - loss: 0.0976 - sparse_categorical_accuracy: 0.9701 - val_loss: 0.4001 - val_sparse_categorical_accuracy: 0.8571 - learning_rate: 0.0010
## Epoch 101/500
## 90/90 - 0s - 2ms/step - loss: 0.0904 - sparse_categorical_accuracy: 0.9715 - val_loss: 0.1509 - val_sparse_categorical_accuracy: 0.9487 - learning_rate: 0.0010
## Epoch 102/500
## 90/90 - 0s - 2ms/step - loss: 0.0990 - sparse_categorical_accuracy: 0.9688 - val_loss: 0.2433 - val_sparse_categorical_accuracy: 0.8974 - learning_rate: 0.0010
## Epoch 103/500
## 90/90 - 1s - 8ms/step - loss: 0.0846 - sparse_categorical_accuracy: 0.9719 - val_loss: 0.1247 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 5.0000e-04
## Epoch 104/500
## 90/90 - 0s - 2ms/step - loss: 0.0830 - sparse_categorical_accuracy: 0.9729 - val_loss: 0.1638 - val_sparse_categorical_accuracy: 0.9404 - learning_rate: 5.0000e-04
## Epoch 105/500
## 90/90 - 0s - 2ms/step - loss: 0.0826 - sparse_categorical_accuracy: 0.9736 - val_loss: 0.1156 - val_sparse_categorical_accuracy: 0.9515 - learning_rate: 5.0000e-04
## Epoch 106/500
## 90/90 - 0s - 2ms/step - loss: 0.0827 - sparse_categorical_accuracy: 0.9729 - val_loss: 0.1531 - val_sparse_categorical_accuracy: 0.9515 - learning_rate: 5.0000e-04
## Epoch 107/500
## 90/90 - 0s - 2ms/step - loss: 0.0861 - sparse_categorical_accuracy: 0.9712 - val_loss: 0.1212 - val_sparse_categorical_accuracy: 0.9528 - learning_rate: 5.0000e-04
## Epoch 108/500
## 90/90 - 0s - 2ms/step - loss: 0.0810 - sparse_categorical_accuracy: 0.9743 - val_loss: 0.1088 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 5.0000e-04
## Epoch 109/500
## 90/90 - 0s - 2ms/step - loss: 0.0832 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.1209 - val_sparse_categorical_accuracy: 0.9612 - learning_rate: 5.0000e-04
## Epoch 110/500
## 90/90 - 0s - 2ms/step - loss: 0.0828 - sparse_categorical_accuracy: 0.9726 - val_loss: 0.1277 - val_sparse_categorical_accuracy: 0.9376 - learning_rate: 5.0000e-04
## Epoch 111/500
## 90/90 - 0s - 2ms/step - loss: 0.0762 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.1111 - val_sparse_categorical_accuracy: 0.9542 - learning_rate: 5.0000e-04
## Epoch 112/500
## 90/90 - 0s - 2ms/step - loss: 0.0779 - sparse_categorical_accuracy: 0.9743 - val_loss: 0.1260 - val_sparse_categorical_accuracy: 0.9445 - learning_rate: 5.0000e-04
## Epoch 113/500
## 90/90 - 0s - 2ms/step - loss: 0.0747 - sparse_categorical_accuracy: 0.9743 - val_loss: 0.1099 - val_sparse_categorical_accuracy: 0.9570 - learning_rate: 5.0000e-04
## Epoch 114/500
## 90/90 - 0s - 2ms/step - loss: 0.0893 - sparse_categorical_accuracy: 0.9694 - val_loss: 0.1280 - val_sparse_categorical_accuracy: 0.9542 - learning_rate: 5.0000e-04
## Epoch 115/500
## 90/90 - 0s - 2ms/step - loss: 0.0734 - sparse_categorical_accuracy: 0.9760 - val_loss: 0.1139 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 5.0000e-04
## Epoch 116/500
## 90/90 - 0s - 2ms/step - loss: 0.0798 - sparse_categorical_accuracy: 0.9750 - val_loss: 0.4385 - val_sparse_categorical_accuracy: 0.8585 - learning_rate: 5.0000e-04
## Epoch 117/500
## 90/90 - 0s - 2ms/step - loss: 0.0737 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.1047 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 5.0000e-04
## Epoch 118/500
## 90/90 - 0s - 2ms/step - loss: 0.0826 - sparse_categorical_accuracy: 0.9688 - val_loss: 0.1573 - val_sparse_categorical_accuracy: 0.9293 - learning_rate: 5.0000e-04
## Epoch 119/500
## 90/90 - 0s - 2ms/step - loss: 0.0794 - sparse_categorical_accuracy: 0.9719 - val_loss: 0.2355 - val_sparse_categorical_accuracy: 0.9098 - learning_rate: 5.0000e-04
## Epoch 120/500
## 90/90 - 0s - 2ms/step - loss: 0.0794 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.1191 - val_sparse_categorical_accuracy: 0.9542 - learning_rate: 5.0000e-04
## Epoch 121/500
## 90/90 - 0s - 2ms/step - loss: 0.0764 - sparse_categorical_accuracy: 0.9760 - val_loss: 0.1310 - val_sparse_categorical_accuracy: 0.9487 - learning_rate: 5.0000e-04
## Epoch 122/500
## 90/90 - 0s - 2ms/step - loss: 0.0767 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.1968 - val_sparse_categorical_accuracy: 0.9279 - learning_rate: 5.0000e-04
## Epoch 123/500
## 90/90 - 0s - 2ms/step - loss: 0.0761 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.2344 - val_sparse_categorical_accuracy: 0.9126 - learning_rate: 5.0000e-04
## Epoch 124/500
## 90/90 - 0s - 2ms/step - loss: 0.0819 - sparse_categorical_accuracy: 0.9688 - val_loss: 0.1195 - val_sparse_categorical_accuracy: 0.9528 - learning_rate: 5.0000e-04
## Epoch 125/500
## 90/90 - 0s - 2ms/step - loss: 0.0805 - sparse_categorical_accuracy: 0.9733 - val_loss: 0.1458 - val_sparse_categorical_accuracy: 0.9404 - learning_rate: 5.0000e-04
## Epoch 126/500
## 90/90 - 0s - 2ms/step - loss: 0.0735 - sparse_categorical_accuracy: 0.9760 - val_loss: 0.1443 - val_sparse_categorical_accuracy: 0.9556 - learning_rate: 5.0000e-04
## Epoch 127/500
## 90/90 - 0s - 2ms/step - loss: 0.0750 - sparse_categorical_accuracy: 0.9778 - val_loss: 0.1388 - val_sparse_categorical_accuracy: 0.9362 - learning_rate: 5.0000e-04
## Epoch 128/500
## 90/90 - 0s - 2ms/step - loss: 0.0775 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.1123 - val_sparse_categorical_accuracy: 0.9612 - learning_rate: 5.0000e-04
## Epoch 129/500
## 90/90 - 0s - 2ms/step - loss: 0.0737 - sparse_categorical_accuracy: 0.9771 - val_loss: 0.1102 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 5.0000e-04
## Epoch 130/500
## 90/90 - 0s - 2ms/step - loss: 0.0835 - sparse_categorical_accuracy: 0.9740 - val_loss: 0.0994 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 5.0000e-04
## Epoch 131/500
## 90/90 - 0s - 2ms/step - loss: 0.0690 - sparse_categorical_accuracy: 0.9795 - val_loss: 0.1122 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 5.0000e-04
## Epoch 132/500
## 90/90 - 0s - 2ms/step - loss: 0.0692 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.3209 - val_sparse_categorical_accuracy: 0.8766 - learning_rate: 5.0000e-04
## Epoch 133/500
## 90/90 - 0s - 2ms/step - loss: 0.0674 - sparse_categorical_accuracy: 0.9799 - val_loss: 0.1675 - val_sparse_categorical_accuracy: 0.9404 - learning_rate: 5.0000e-04
## Epoch 134/500
## 90/90 - 0s - 2ms/step - loss: 0.0728 - sparse_categorical_accuracy: 0.9760 - val_loss: 0.1376 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 5.0000e-04
## Epoch 135/500
## 90/90 - 0s - 2ms/step - loss: 0.0740 - sparse_categorical_accuracy: 0.9750 - val_loss: 0.1318 - val_sparse_categorical_accuracy: 0.9445 - learning_rate: 5.0000e-04
## Epoch 136/500
## 90/90 - 0s - 2ms/step - loss: 0.0768 - sparse_categorical_accuracy: 0.9736 - val_loss: 0.1042 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 5.0000e-04
## Epoch 137/500
## 90/90 - 0s - 1ms/step - loss: 0.0785 - sparse_categorical_accuracy: 0.9750 - val_loss: 0.1313 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 5.0000e-04
## Epoch 138/500
## 90/90 - 0s - 2ms/step - loss: 0.0765 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.1213 - val_sparse_categorical_accuracy: 0.9570 - learning_rate: 5.0000e-04
## Epoch 139/500
## 90/90 - 0s - 2ms/step - loss: 0.0831 - sparse_categorical_accuracy: 0.9708 - val_loss: 0.1106 - val_sparse_categorical_accuracy: 0.9626 - learning_rate: 5.0000e-04
## Epoch 140/500
## 90/90 - 0s - 2ms/step - loss: 0.0700 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1491 - val_sparse_categorical_accuracy: 0.9362 - learning_rate: 5.0000e-04
## Epoch 141/500
## 90/90 - 0s - 2ms/step - loss: 0.0732 - sparse_categorical_accuracy: 0.9764 - val_loss: 0.1241 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 5.0000e-04
## Epoch 142/500
## 90/90 - 0s - 2ms/step - loss: 0.0700 - sparse_categorical_accuracy: 0.9785 - val_loss: 0.1106 - val_sparse_categorical_accuracy: 0.9570 - learning_rate: 5.0000e-04
## Epoch 143/500
## 90/90 - 0s - 2ms/step - loss: 0.0761 - sparse_categorical_accuracy: 0.9747 - val_loss: 0.1483 - val_sparse_categorical_accuracy: 0.9390 - learning_rate: 5.0000e-04
## Epoch 144/500
## 90/90 - 0s - 2ms/step - loss: 0.0679 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.1041 - val_sparse_categorical_accuracy: 0.9570 - learning_rate: 5.0000e-04
## Epoch 145/500
## 90/90 - 0s - 2ms/step - loss: 0.0697 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.4287 - val_sparse_categorical_accuracy: 0.8474 - learning_rate: 5.0000e-04
## Epoch 146/500
## 90/90 - 0s - 2ms/step - loss: 0.0714 - sparse_categorical_accuracy: 0.9747 - val_loss: 0.1838 - val_sparse_categorical_accuracy: 0.9209 - learning_rate: 5.0000e-04
## Epoch 147/500
## 90/90 - 0s - 2ms/step - loss: 0.0727 - sparse_categorical_accuracy: 0.9764 - val_loss: 0.2412 - val_sparse_categorical_accuracy: 0.9057 - learning_rate: 5.0000e-04
## Epoch 148/500
## 90/90 - 0s - 2ms/step - loss: 0.0794 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.1038 - val_sparse_categorical_accuracy: 0.9667 - learning_rate: 5.0000e-04
## Epoch 149/500
## 90/90 - 0s - 2ms/step - loss: 0.0704 - sparse_categorical_accuracy: 0.9743 - val_loss: 0.2301 - val_sparse_categorical_accuracy: 0.9029 - learning_rate: 5.0000e-04
## Epoch 150/500
## 90/90 - 0s - 2ms/step - loss: 0.0714 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.3270 - val_sparse_categorical_accuracy: 0.8849 - learning_rate: 5.0000e-04
## Epoch 151/500
## 90/90 - 0s - 2ms/step - loss: 0.0654 - sparse_categorical_accuracy: 0.9788 - val_loss: 0.1123 - val_sparse_categorical_accuracy: 0.9487 - learning_rate: 2.5000e-04
## Epoch 152/500
## 90/90 - 0s - 2ms/step - loss: 0.0675 - sparse_categorical_accuracy: 0.9785 - val_loss: 0.1038 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 2.5000e-04
## Epoch 153/500
## 90/90 - 0s - 2ms/step - loss: 0.0639 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.0995 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 2.5000e-04
## Epoch 154/500
## 90/90 - 0s - 2ms/step - loss: 0.0610 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.1088 - val_sparse_categorical_accuracy: 0.9542 - learning_rate: 2.5000e-04
## Epoch 155/500
## 90/90 - 0s - 2ms/step - loss: 0.0644 - sparse_categorical_accuracy: 0.9813 - val_loss: 0.1084 - val_sparse_categorical_accuracy: 0.9556 - learning_rate: 2.5000e-04
## Epoch 156/500
## 90/90 - 0s - 2ms/step - loss: 0.0657 - sparse_categorical_accuracy: 0.9771 - val_loss: 0.1368 - val_sparse_categorical_accuracy: 0.9362 - learning_rate: 2.5000e-04
## Epoch 157/500
## 90/90 - 0s - 2ms/step - loss: 0.0647 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.1622 - val_sparse_categorical_accuracy: 0.9348 - learning_rate: 2.5000e-04
## Epoch 158/500
## 90/90 - 0s - 2ms/step - loss: 0.0648 - sparse_categorical_accuracy: 0.9795 - val_loss: 0.1205 - val_sparse_categorical_accuracy: 0.9473 - learning_rate: 2.5000e-04
## Epoch 159/500
## 90/90 - 0s - 2ms/step - loss: 0.0654 - sparse_categorical_accuracy: 0.9767 - val_loss: 0.1094 - val_sparse_categorical_accuracy: 0.9528 - learning_rate: 2.5000e-04
## Epoch 160/500
## 90/90 - 0s - 2ms/step - loss: 0.0640 - sparse_categorical_accuracy: 0.9781 - val_loss: 0.1048 - val_sparse_categorical_accuracy: 0.9556 - learning_rate: 2.5000e-04
## Epoch 161/500
## 90/90 - 0s - 2ms/step - loss: 0.0639 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1250 - val_sparse_categorical_accuracy: 0.9542 - learning_rate: 2.5000e-04
## Epoch 162/500
## 90/90 - 0s - 2ms/step - loss: 0.0587 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.1044 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 2.5000e-04
## Epoch 163/500
## 90/90 - 0s - 2ms/step - loss: 0.0547 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.1375 - val_sparse_categorical_accuracy: 0.9404 - learning_rate: 2.5000e-04
## Epoch 164/500
## 90/90 - 0s - 2ms/step - loss: 0.0609 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.1647 - val_sparse_categorical_accuracy: 0.9376 - learning_rate: 2.5000e-04
## Epoch 165/500
## 90/90 - 0s - 2ms/step - loss: 0.0599 - sparse_categorical_accuracy: 0.9799 - val_loss: 0.1088 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 2.5000e-04
## Epoch 166/500
## 90/90 - 0s - 2ms/step - loss: 0.0575 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.1074 - val_sparse_categorical_accuracy: 0.9570 - learning_rate: 2.5000e-04
## Epoch 167/500
## 90/90 - 0s - 2ms/step - loss: 0.0599 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1054 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 2.5000e-04
## Epoch 168/500
## 90/90 - 0s - 2ms/step - loss: 0.0694 - sparse_categorical_accuracy: 0.9747 - val_loss: 0.1145 - val_sparse_categorical_accuracy: 0.9681 - learning_rate: 2.5000e-04
## Epoch 169/500
## 90/90 - 0s - 2ms/step - loss: 0.0594 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1142 - val_sparse_categorical_accuracy: 0.9667 - learning_rate: 2.5000e-04
## Epoch 170/500
## 90/90 - 0s - 2ms/step - loss: 0.0600 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.1071 - val_sparse_categorical_accuracy: 0.9653 - learning_rate: 2.5000e-04
## Epoch 171/500
## 90/90 - 0s - 2ms/step - loss: 0.0541 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1284 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 1.2500e-04
## Epoch 172/500
## 90/90 - 0s - 2ms/step - loss: 0.0565 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0999 - val_sparse_categorical_accuracy: 0.9695 - learning_rate: 1.2500e-04
## Epoch 173/500
## 90/90 - 0s - 2ms/step - loss: 0.0547 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.1204 - val_sparse_categorical_accuracy: 0.9584 - learning_rate: 1.2500e-04
## Epoch 174/500
## 90/90 - 0s - 2ms/step - loss: 0.0571 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.1042 - val_sparse_categorical_accuracy: 0.9695 - learning_rate: 1.2500e-04
## Epoch 175/500
## 90/90 - 0s - 2ms/step - loss: 0.0577 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.1032 - val_sparse_categorical_accuracy: 0.9736 - learning_rate: 1.2500e-04
## Epoch 176/500
## 90/90 - 0s - 2ms/step - loss: 0.0545 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.1004 - val_sparse_categorical_accuracy: 0.9639 - learning_rate: 1.2500e-04
## Epoch 177/500
## 90/90 - 0s - 2ms/step - loss: 0.0548 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.1013 - val_sparse_categorical_accuracy: 0.9667 - learning_rate: 1.2500e-04
## Epoch 178/500
## 90/90 - 0s - 2ms/step - loss: 0.0537 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.1193 - val_sparse_categorical_accuracy: 0.9598 - learning_rate: 1.2500e-04
## Epoch 179/500
## 90/90 - 0s - 2ms/step - loss: 0.0541 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.1364 - val_sparse_categorical_accuracy: 0.9556 - learning_rate: 1.2500e-04
## Epoch 180/500
## 90/90 - 0s - 2ms/step - loss: 0.0530 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.1159 - val_sparse_categorical_accuracy: 0.9667 - learning_rate: 1.2500e-04
## Epoch 180: early stopping

Evaluate model on test data

model <- load_model("best_model.keras")

results <- model |> evaluate(x_test, y_test)
## 42/42 - 1s - 12ms/step - loss: 0.0926 - sparse_categorical_accuracy: 0.9697
str(results)
## List of 2
##  $ loss                       : num 0.0926
##  $ sparse_categorical_accuracy: num 0.97
cat(
  "Test accuracy: ", results$sparse_categorical_accuracy, "\n",
  "Test loss: ", results$loss, "\n",
  sep = ""
)
## Test accuracy: 0.969697
## Test loss: 0.0926162
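
Beyond aggregate metrics, the restored model can score individual series. A minimal sketch: predict() returns an n × 2 matrix of softmax class probabilities, which we map back to the 0/1 labels.

# Score the first five test series; probs is a 5 x 2 probability matrix.
probs <- model |> predict(x_test[1:5, , , drop = FALSE])
predicted <- max.col(probs) - 1L  # column of the larger probability, shifted to labels 0/1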

Plot the model’s training history

plot(history)
Plot of training history metrics.

Plot just the training and validation accuracy:

plot(history, metric = "sparse_categorical_accuracy") +
  # scale x axis to actual number of epochs run before early stopping
  ggplot2::xlim(0, length(history$metrics$loss))
Plot of accuracy during training.
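
To connect the curves back to the checkpoint callback, the epoch with the lowest validation loss can be read off the recorded history (a small sketch, assuming history$metrics holds the per-epoch metric vectors as keras3 records them):

which.min(history$metrics$val_loss)  # the epoch whose weights best_model.keras stores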

We can see the training accuracy pass 0.95 after roughly 60 epochs and keep climbing, while the validation accuracy improves more slowly, reaching about 0.97 shortly before early stopping ends the run at epoch 180. If we were to keep training beyond that point, we would expect the validation accuracy to start decreasing while the training accuracy continues to increase: the model would start overfitting.