Nginx Reload Configuration Best Practice
I am currently setting up an nginx reverse proxy that load-balances a wide variety of domain names.
The nginx configuration files are programmatically generated and may change very often (i.e. HTTP/HTTPS server blocks are added or deleted).
I am using:
nginx -s reload
to tell nginx to re-read the configuration.
The main nginx.conf file contains an include of all the generated configuration files, like so:
http {
    include /volumes/config/*/domain.conf;
}
An included configuration file might look like this:
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;
    location / {
        try_files $uri /404.html /404.htm =404;
        root /volumes/sites/mydomain;
    }
}
My question:
Is it healthy, or considered harmful, to run:
nginx -s reload
multiple times per minute to make nginx pick up modifications to the configuration?
What kind of performance hit would that imply?
EDIT: I'd like to reformulate the question: how can the nginx configuration be changed dynamically, very often, without a big performance hit?
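For illustration, a minimal sketch of a reload wrapper that validates the configuration before signalling nginx; generate-nginx-confs is a hypothetical placeholder for whatever produces the files under /volumes/config/:

#!/bin/sh
# Regenerate the per-domain confs (hypothetical generator command).
generate-nginx-confs /volumes/config/

# Only signal nginx if the combined configuration parses cleanly,
# so a broken generated file never takes effect.
if nginx -t; then
    nginx -s reload
else
    echo "nginx config test failed; keeping the old configuration" >&2
fi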
3 Answers
I would use inotifywatch with a timeout on the directory containing the generated conf files, and reload nginx only if something was modified, created, or deleted in that directory during that time:
-t , --timeout
Listen only for the specified amount of seconds. If not specified, inotifywatch will gather
statistics until receiving an interrupt signal by (for example)
pressing CONTROL-C at the console.
while true; do
    if [[ "$(inotifywatch -e modify,create,delete -t 30 /volumes/config/ 2>&1)" =~ filename ]]; then
        service nginx reload
    fi
done
This way you set up a minimum interval after which the reloads take place, and you don't lose any watches between successive calls to inotifywatch.
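If the generated files can occasionally be invalid, a variation of the loop above (a sketch, not part of the original answer) can validate with nginx -t before reloading, so a broken conf never replaces the running configuration:

while true; do
    if [[ "$(inotifywatch -e modify,create,delete -t 30 /volumes/config/ 2>&1)" =~ filename ]]; then
        # Only reload if the whole configuration still parses.
        if nginx -t; then
            service nginx reload
        fi
    fi
done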
If I'm not mistaken, it still might cause nginx to reload multiple times a minute if the configuration really does get updated that often. Do we know what kind of performance hit we are looking at? – Crappy, Dec 8 '16 at 18:07

You can change the timeout to 60s or whatever you deem acceptable, and it will reload at most once per timeout, and only if configs changed during that time. Do you really change configs that often? There shouldn't be that big of a hit. – alindt, Dec 9 '16 at 5:37

Maybe I'm looking at the problem from the wrong point of view, but I do need to be able to change the nginx configuration very, very often (without hurting runtime performance), ideally 10x+ per minute. – Crappy, Dec 9 '16 at 19:27

@Crappy are you sure that's the only way to handle it? I often send lots of traffic to a map script which uses cached database queries to figure out what to do dynamically instead. – Garet Claborn, Nov 24 '18 at 7:42
Rather than reloading nginx several times a minute, I would suggest watching the config file and executing the reload only when changes are saved; you can use inotifywait (available through the inotify-tools package) with the following command:
while inotifywait -e close_write /etc/nginx/sites-enabled/default; do service nginx reload; done
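Since the question generates many files under /volumes/config/ rather than a single default site, a variation (a sketch under that assumption, not part of the original answer) would watch the whole directory tree and also react to created, deleted, and moved files:

# Watch the generated-config tree recursively and reload on any relevant change.
while inotifywait -r -e close_write,create,delete,moved_to,moved_from /volumes/config/; do
    service nginx reload
done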
That's the best solution I could think of too, but what if the configuration does indeed need to change multiple times a minute? – Crappy, Dec 8 '16 at 5:21

close_write doesn't catch delete events, nor does it imply the file was written to. close_write => "A watched file or a file within a watched directory was closed, after being opened in writeable mode. This does not necessarily imply the file was written to." – alindt, Dec 8 '16 at 8:26
If you:

- Use a script similar to what's provided in this answer, let's call it check_nginx_confs.sh
- Change your ExecStart directive in nginx.service so /etc/nginx/ is /dev/shm/nginx/
- Add a script to /etc/init.d/ to copy conf files to your temp dir: mkdir /dev/shm/nginx && cp /etc/nginx/* /dev/shm/nginx
- Use rsync (or another sync tool) to sync /dev/shm/nginx back to /etc/nginx, so you don't lose config files created in /dev/shm/nginx on reboot (steps 3 and 4 are sketched after this list). Or simply make both locations in-app, for atomic checks as desired
- Set a cronjob to run check_nginx_confs.sh as often as files 'turn old' in check_nginx_confs.sh, so you know if a change happened within the last time window but only check once
- Only systemctl reload nginx if check_nginx_confs.sh finds a new file, once per time period defined by $OLDTIME
- Rest
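A minimal sketch of steps 3 and 4; the exact rsync options are assumptions, not spelled out in the answer:

#!/bin/sh
# Step 3: on boot, seed the RAM-backed directory from the on-disk copy.
mkdir -p /dev/shm/nginx && cp -r /etc/nginx/* /dev/shm/nginx

# Step 4: periodically mirror the RAM copy back to disk so newly generated
# confs survive a reboot (--delete also propagates removals).
rsync -a --delete /dev/shm/nginx/ /etc/nginx/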
Now nginx will load those configs much, much faster, from RAM. It will only reload once every $OLDTIME seconds, and only if it needs to. Short of routing requests to a dynamic handler of your own, this is probably the fastest you can get nginx to reload frequently.
It's a good idea to reserve a disk quota for the temp directory you use, to ensure you don't run out of memory. There are various ways of accomplishing that. You can also add a symlink to an empty, on-disk directory in case you have to spill over, but that would be a lot of confs.
Script from other answer:
#!/bin/sh
# Directory to check
TESTDIR=/dev/shm/nginx
# How many seconds before the dir is deemed "old"
# (includes a little grace period, optional)
OLDTIME=75

# Get the current time and the dir's modification time
CURTIME=$(date +%s)
FILETIME=$(date -r $TESTDIR +%s)
TIMEDIFF=$(expr $CURTIME - $FILETIME)

# Reload only if the dir was updated within the last $OLDTIME seconds
if [ $OLDTIME -gt $TIMEDIFF ]; then
    systemctl reload nginx
fi

# Run me every 1 minute with cron
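For example, the cron entry from step 5 might look like this; the install path of the script is a hypothetical placeholder:

# crontab -e: run the check once a minute; adjust the path to wherever
# check_nginx_confs.sh actually lives.
* * * * * /usr/local/bin/check_nginx_confs.sh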
Optionally, if you're feeling up to it, you can put the copy and sync commands in nginx.service's ExecStart with some && magic so they always happen together. You can also && a sort of 'destructor' which does a final sync and frees /dev/shm/nginx on ExecStop. This would replace steps (3) and (4).
As an alternative to cron, you can have a script running a loop in the background with a wait duration. If you do this, you can pass LastUpdateTime back and forth between the two scripts for greater accuracy, since LastUpdateTime+GracePeriod is more reliable. With this, I would still use cron to periodically make sure the loop is still running.
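A minimal sketch of that background loop; the interval and the script path are assumptions:

#!/bin/sh
# Re-run the check on a fixed interval instead of relying on cron for each run.
while true; do
    /usr/local/bin/check_nginx_confs.sh   # reloads nginx only if confs changed recently
    sleep 60
done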
For reference, on my CentOS 7 images, nginx.service is at /usr/lib/systemd/system/nginx.service.