Epochs and batches control in Keras























I would like to implement an autoencoder model that behaves as follows:



for epoch in xrange(100):
    for X_batch in batch_list:
        model.train_on_batch(X_batch, X_batch)
        training_error = model.evaluate(X_batch, X_batch, verbose=0)
    # average the training error over the batches considered
    # save it as the epoch training error
    # compute the validation error in the same fashion over the validation data
    # compare the two errors and decide whether to continue training or stop


I have looked around on the Internet and already asked about this, and it was suggested that I use fit_generator, but I have not understood how to implement it. Or should I use train_on_batch, or fit with the number of epochs set to 1, to fit the model properly?



What is the best practice in this case? Do you have an example or a similar question you could link me to?










python keras training-data






edited Nov 19 at 20:19 – halfer
asked Nov 19 at 16:04 – Guido


  • Can you explain the first paragraph again? Do you mean that you want to stop or continue training based on the validation error? Something like early stopping?
    – Garvita Tiwari
    Nov 19 at 16:20










  • I have updated the question with your suggestions, thank you. Yes, exactly. But my problem actually comes before that: is it correct to use train_on_batch? Or should I use fit? Or fit_generator? I cannot find any exhaustive example on the Internet, so I am working by trial and error.
    – Guido
    Nov 19 at 16:28










  • I have just written it as pseudocode; I have not implemented it yet. I am wondering what I should use.
    – Guido
    Nov 19 at 16:30


















1 Answer

















accepted










From what I can understand, you want to use the validation error as an early stopping criterion. The good news is that Keras already has an early stopping callback, so all you need to do is create the callback and pass it to training.



keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)


Let us look at the signatures of train_on_batch and fit():



train_on_batch(x, y, sample_weight=None, class_weight=None)


fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)


You can see that train_on_batch doesn't take any callbacks as input, so the better choice here is fit, unless you want to implement the early stopping logic yourself.
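
If you do want the manual loop from the question, a rough sketch of hand-rolled early stopping on top of train_on_batch is given below. It assumes the model is compiled with a single loss and no extra metrics (so train_on_batch and evaluate return a scalar), and that batch_list and val_list are plain lists of NumPy array batches; val_list and patience are illustrative names here, not Keras arguments.

import numpy as np

best_val, wait, patience = np.inf, 0, 2   # early-stopping state, tracked by hand

for epoch in range(100):
    # one pass over the training batches; train_on_batch returns the batch loss
    train_losses = [model.train_on_batch(X_batch, X_batch) for X_batch in batch_list]

    # validation error computed the same way, but without updating the weights
    val_losses = [model.evaluate(X_batch, X_batch, verbose=0) for X_batch in val_list]

    epoch_train_loss, epoch_val_loss = np.mean(train_losses), np.mean(val_losses)
    print('epoch %d: train %.4f, val %.4f' % (epoch, epoch_train_loss, epoch_val_loss))

    if epoch_val_loss < best_val:
        best_val, wait = epoch_val_loss, 0   # improvement: reset the patience counter
    else:
        wait += 1
        if wait >= patience:
            break   # no improvement for `patience` epochs: stop training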



Now you can call fit as follows; validation_data is your held-out validation set, which the callbacks need in order to monitor val_loss:



from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [EarlyStopping(monitor='val_loss', patience=2),
             ModelCheckpoint(filepath='path to latest ckpt', monitor='val_loss', save_best_only=True)]

history = model.fit(train_features, train_target, epochs=num_epochs, callbacks=callbacks,
                    verbose=0, batch_size=your_choice,
                    validation_data=(val_features, val_target))  # your held-out validation set
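
Since the question also mentions fit_generator: the same callbacks list works there as well, if you prefer to feed the data batch by batch. A minimal sketch, assuming a hypothetical helper that cycles forever over your precomputed batches (batch_list from the question, plus an analogous val_list for validation) and yields (input, target) pairs; for an autoencoder the target is simply the input itself.

def autoencoder_batches(batches):
    # illustrative generator: loop over the batch list forever,
    # yielding (input, target) pairs where the target equals the input
    while True:
        for X_batch in batches:
            yield X_batch, X_batch

history = model.fit_generator(autoencoder_batches(batch_list),
                              steps_per_epoch=len(batch_list),
                              epochs=num_epochs,
                              callbacks=callbacks,
                              validation_data=autoencoder_batches(val_list),
                              validation_steps=len(val_list),
                              verbose=0)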





answered Nov 19 at 16:49 – Garvita Tiwari






























             
