Epochs and batches control in Keras
I would like to implement an autoencoder model that behaves as follows:
for epoch in range(100):
    for X_batch in batch_list:
        model.train_on_batch(X_batch, X_batch)
        training_error = model.evaluate(X_batch, X_batch, verbose=0)
    # average the training error over the number of batches considered
    # save it as the epoch training error
    # compute the validation error in the same fashion over the validation data
    # compare the two errors and decide whether to continue training or stop
I have looked around on the Internet and already asked a related question, and it was suggested that I use fit_generator, but I have not understood how to implement it. Or should I use the train_on_batch method, or fit with the number of epochs set to 1, to fit the model properly?
Which is the best practice in this case? Do you have an example or a similar question you could link me to?
python keras training-data
edited Nov 19 at 20:19 by halfer
asked Nov 19 at 16:04 by Guido
Can you explain the first paragraph again? Do you mean that you want to stop or continue training on the basis of the validation error? Something like early stopping?
– Garvita Tiwari
Nov 19 at 16:20
I have updated the question with your suggestions, thank you. Yes, exactly. But my problem actually comes before that: is it correct to use train_on_batch? Or should I use fit? Or fit_generator? I cannot find any exhaustive example on the Internet and I am proceeding by guesswork.
– Guido
Nov 19 at 16:28
I have just written it as pseudocode; I have not implemented it yet. I am wondering what I should use.
– Guido
Nov 19 at 16:30
1 Answer
From what I can understand, you want to use the validation error as an early stopping criterion. The good news is that Keras already has an early stopping callback, so all you need to do is create the callback and pass it to training; Keras invokes it at the end of every epoch.
keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)
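For the use case in the question, a minimal configuration might look like the following (the patience value and the use of restore_best_weights are illustrative choices, not requirements):
from keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for 3 consecutive epochs,
# and roll the model back to the weights from the best epoch.
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)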
Let us look at train_on_batch and fit():
train_on_batch(x, y, sample_weight=None, class_weight=None)
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
You can see that train_on_batch does not take any callbacks as input, so the better choice here is fit, unless you want to implement the early stopping logic yourself.
Now you can call fit as follows:
callbacks = [EarlyStopping(monitor='val_loss', patience=2),
             ModelCheckpoint(filepath='path to latest ckpt', monitor='val_loss', save_best_only=True)]
history = model.fit(train_features, train_target, epochs=num_epochs, callbacks=callbacks, verbose=0, batch_size=your_choice, validation_data=(val_features, val_target))
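If you prefer the manual loop from the question instead, a rough sketch of hand-rolled early stopping with train_on_batch could look like this. It assumes the model was compiled with a single loss and no extra metrics, so train_on_batch and evaluate each return a scalar loss; batch_list, X_val, and the patience value are placeholders for illustration.
import numpy as np

best_val_loss = np.inf
patience, wait = 3, 0
for epoch in range(100):
    # Train on every batch and collect the per-batch training losses
    # (train_on_batch already returns the loss for that batch).
    batch_losses = [model.train_on_batch(X_batch, X_batch) for X_batch in batch_list]
    epoch_train_loss = np.mean(batch_losses)
    # Evaluate on the held-out validation set (inputs == targets for an autoencoder).
    val_loss = model.evaluate(X_val, X_val, verbose=0)
    print('epoch %d: train %.4f, val %.4f' % (epoch, epoch_train_loss, val_loss))
    # Simple early stopping: stop when the validation loss has not improved
    # for `patience` consecutive epochs.
    if val_loss < best_val_loss:
        best_val_loss, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break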
answered Nov 19 at 16:49 by Garvita Tiwari (accepted)