This post illustrates the early stopping callback in Keras.

The training data consists of three time steps for each of two variables, giving six input features per sample. The target value is the sum of the three time-step values of the first variable.

Simulate the data

import numpy as np

np.random.seed(1234)
X = np.random.normal(size=(1000, 6))   # 1000 samples, 6 features: 2 variables x 3 time steps
Y = X[:, 0:3].sum(axis=1)              # target: sum of the first variable's 3 time steps
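
As a quick sanity check (a hypothetical verification step, not part of the original snippet), you can confirm the shapes and that Y really is the sum of the first three columns:

# X: 1000 samples x 6 features, Y: 1000 targets
print(X.shape, Y.shape)                              # (1000, 6) (1000,)
assert np.allclose(Y, X[:, 0] + X[:, 1] + X[:, 2])   # target = sum of the first variable's 3 steps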

Define the Model and Callback

from keras.callbacks import EarlyStopping
from keras.layers import Dense
from keras.models import Sequential

model = Sequential()
model.add(Dense(100, activation='relu', input_dim=6))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# Stop training once val_loss has failed to improve by at least min_delta
# for `patience` consecutive epochs
es = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=20)

history = model.fit(X, Y, validation_split=0.2, shuffle=False,
                    epochs=1000, callbacks=[es])
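
A refinement not used in the run above: Keras 2.2.3 and later let EarlyStopping roll the model back to the weights from the best epoch, so you keep the model with the lowest observed val_loss rather than whatever weights the final epoch left behind. A minimal sketch of that variant:

# Optional: restore the weights from the epoch with the best val_loss
# (requires Keras >= 2.2.3; not used in the run shown in this post)
es = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=20,
                   restore_best_weights=True)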

Even though the number of epochs is set to 1000, the early stopping callback halts training long before that; the tail of the training log below shows the run ending at epoch 135.

Epoch 119/1000
800/800 [==============================] - 0s 27us/step - loss: 1.9612e-04 - val_loss: 7.5999e-04
Epoch 120/1000
800/800 [==============================] - 0s 25us/step - loss: 1.9326e-04 - val_loss: 7.5662e-04
Epoch 121/1000
800/800 [==============================] - 0s 25us/step - loss: 1.9060e-04 - val_loss: 7.5309e-04
Epoch 122/1000
800/800 [==============================] - 0s 25us/step - loss: 1.8788e-04 - val_loss: 7.4992e-04
Epoch 123/1000
800/800 [==============================] - 0s 26us/step - loss: 1.8540e-04 - val_loss: 7.4602e-04
Epoch 124/1000
800/800 [==============================] - 0s 25us/step - loss: 1.8298e-04 - val_loss: 7.4188e-04
Epoch 125/1000
800/800 [==============================] - 0s 25us/step - loss: 1.8051e-04 - val_loss: 7.3798e-04
Epoch 126/1000
800/800 [==============================] - 0s 30us/step - loss: 1.7818e-04 - val_loss: 7.3394e-04
Epoch 127/1000
800/800 [==============================] - 0s 24us/step - loss: 1.7581e-04 - val_loss: 7.3082e-04
Epoch 128/1000
800/800 [==============================] - 0s 22us/step - loss: 1.7354e-04 - val_loss: 7.2578e-04
Epoch 129/1000
800/800 [==============================] - 0s 24us/step - loss: 1.7124e-04 - val_loss: 7.2199e-04
Epoch 130/1000
800/800 [==============================] - 0s 24us/step - loss: 1.6888e-04 - val_loss: 7.1804e-04
Epoch 131/1000
800/800 [==============================] - 0s 26us/step - loss: 1.6662e-04 - val_loss: 7.1401e-04
Epoch 132/1000
800/800 [==============================] - 0s 26us/step - loss: 1.6433e-04 - val_loss: 7.1052e-04
Epoch 133/1000
800/800 [==============================] - 0s 24us/step - loss: 1.6205e-04 - val_loss: 7.0707e-04
Epoch 134/1000
800/800 [==============================] - 0s 29us/step - loss: 1.5987e-04 - val_loss: 7.0297e-04
Epoch 135/1000
800/800 [==============================] - 0s 32us/step - loss: 1.5774e-04 - val_loss: 7.0025e-04
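
To see exactly where the callback stopped the run, you can inspect the History object returned by fit() and the callback itself; a minimal sketch, assuming the history and es objects created above:

# Number of epochs that actually ran (one entry per completed epoch)
print('epochs run:', len(history.history['loss']))

# Epoch index at which EarlyStopping fired (0 if it never triggered)
print('stopped at epoch:', es.stopped_epoch)

# Lowest validation loss observed during training
print('best val_loss:', min(history.history['val_loss']))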

Takeaway: deliberate practice on Keras callbacks.