Switching entire network from float32 to float64 on condition
Since lower precision can yield significant savings in computation time, I would like to be able to switch all variables in my partially trained network from float32 to float64 mid-run, once an error condition is met.
For example: I initialize all variables as float32, run several hundred thousand batches through the network, and observe that the loss has reached a tolerance on the order of 1e-8, which is near the resolution limit of single precision. At this point, to keep the model converging, I would like to switch all model variables to double precision.
Is there a simple way to do this in Python?
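Here is roughly what I am imagining, in case it helps clarify the question. This is only a sketch: `build_model` stands in for my real graph-construction code (the tiny one-layer model is purely illustrative), and the training loops are elided.

```python
import numpy as np
import tensorflow as tf

def build_model(dtype):
    # Stand-in for my real architecture: a single dense layer,
    # with every variable created in the requested dtype.
    x = tf.placeholder(dtype, [None, 4], name='x')
    y = tf.placeholder(dtype, [None, 1], name='y')
    w = tf.get_variable('w', [4, 1], dtype=dtype)
    b = tf.get_variable('b', [1], dtype=dtype, initializer=tf.zeros_initializer())
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
    opt = tf.train.AdamOptimizer(1e-3)
    train_op = opt.minimize(loss)
    return x, y, loss, train_op, opt

# Phase 1: train in single precision, then snapshot the weights
# as numpy arrays keyed by variable name.
graph32 = tf.Graph()
with graph32.as_default():
    x, y, loss, train_op, opt = build_model(tf.float32)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... run training batches here until the loss stalls near 1e-8 ...
        weights = {v.name: sess.run(v) for v in tf.trainable_variables()}

# Phase 2: rebuild the identical graph in double precision and load
# the snapshot, upcast to float64. The variable names match because
# the architecture is the same.
graph64 = tf.Graph()
with graph64.as_default():
    x, y, loss, train_op, opt = build_model(tf.float64)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for v in tf.trainable_variables():
            sess.run(v.assign(weights[v.name].astype(np.float64)))
        # ... continue training in float64 from here ...
```

The part I am unsure about is that `tf.global_variables_initializer()` in the second graph also resets the optimizer's internal state, which is what prompts the edit below.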
EDIT: Also, will switching the dtype of all of the network variables (weights, biases, inputs, etc.) cause issues with the optimizer I was previously using? For example, if Adam is being used and has been computing its moment estimates in single precision, will switching to double precision cause a problem?
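For reference on what that state looks like, my understanding is that `tf.train.AdamOptimizer` stores its moment estimates as per-variable "slot" variables created in the tracked variable's dtype; they can be inspected like this (`opt` and the variables come from the sketch above):

```python
# Adam keeps first- and second-moment estimates ('m' and 'v') as one
# slot variable per trainable variable, with the same dtype as the
# variable they track -- so in the float32 graph they are float32 too.
for var in tf.trainable_variables():
    for slot_name in opt.get_slot_names():  # ['m', 'v'] for Adam
        slot = opt.get_slot(var, slot_name)
        print(var.name, slot_name, slot.dtype)
```

So after rebuilding in float64, the new slots are float64 but start from zero, and I would like to know whether that matters.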
python tensorflow precision convergence epsilon
edited Nov 19 at 23:16
asked Nov 19 at 22:03
user23590632
4613