Difference between running Logstash on console and service

I want to index the Apache logs on my webserver and view them on the Elasticsearch server, where Kibana is also running.
So I installed Logstash on my webserver.



If I start Logstash with my config on the console of the webserver (as root), the content is sent to the ES server and an index is created on the ES server.



/usr/share/logstash/bin/logstash -f apache2.conf
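
As a side note, a console run like this does not necessarily pick up the same settings as the service; to make the comparison fairer, one could point it at the service's settings directory (a sketch, assuming the package default /etc/logstash):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f apache2.conf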


But if I start the Logstash service with that same config, the ES server doesn't receive anything.



systemctl start logstash
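
For reference, this is roughly how the service side can be inspected (a minimal sketch, assuming the standard systemd unit and the default "logstash" service user that the package installs):

# Status and recent journal output of the unit
systemctl status logstash
journalctl -u logstash --since "1 hour ago"

# Confirm which user the pipeline actually runs as (should be "logstash", not root)
ps -ef | grep -i logstash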


I checked the logs /var/log/logstash/logstash-plain.log and /var/log/messages, but they contain no error entry or other useful hint.



Nov 21 15:05:01 wfe01 logstash: [2018-11-21T15:05:01,967][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,793][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://192.168.X.X:9200/]}}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,809][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.X.X:9200/, :path=>"/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,230][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.X.X:9200/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,344][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,353][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,398][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.X.X:9200"]}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,441][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,507][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
Nov 21 15:05:04 wfe01 logstash: [2018-11-21T15:05:04,367][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,138][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d32aef0519b35231d714b89c8b4d5791", :path=>["/path/ssl_access_log", "/path/ssl_error_log"]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,193][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x634099e9 run>"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,293][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,321][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,914][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}


(We have another DB server with the Metricbeat service installed; that one also sends its content to the ES server over the network without problems.)



ES Version 6.4
Logstash config:



input {
  file {
    path => [
      "/path/ssl_access_log",
      "/path/ssl_error_log"
    ]
    start_position => "beginning"
    add_field => { "myconf" => "apache2" }
  }
}

output {
  if [myconf] == "apache2" {
    elasticsearch {
      hosts => ["http://192.168.X.X:9200"]
      index => "apache2-status-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
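
One check that might explain the difference between the root console run and the service is whether the logstash service user can read the input files at all (a sketch, assuming the package's default "logstash" user; /path is the placeholder from the config above):

# Show owner, group and mode of every path component leading to the log file
namei -l /path/ssl_access_log

# Try reading the file as the service user
sudo -u logstash head -n 1 /path/ssl_access_log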


I have already tried several things: deleting the index, deleting the sincedb file, and restarting the service.
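
Note that the service keeps its own sincedb under /var/lib/logstash (see the sincedb_path in the log excerpt above), separate from the one a console run typically creates, so clearing it for the service would look roughly like this:

systemctl stop logstash
rm /var/lib/logstash/plugins/inputs/file/.sincedb_*
systemctl start logstash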



What could be the reason that the console call works but the service does not?



Thanks
Steffen

  • Anything in journalctl? This sounds like a permission problem to me ... or maybe it doesn't find the config file. Did you try: /usr/share/logstash/bin/logstash -f /usr/share/logstash/conf.d/apache2.conf or wherever it's located for you?
    – Faulander
    Nov 23 '18 at 12:12










  • Solution: the "path" directory (/path/ssl_access_log) had the following rights: drwx------ 2 root root. Changing this to drwxr-xr-x 7 root root solved the problem.
    – SteffenM
    Dec 5 '18 at 14:11
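
A sketch of the permission change described in the comment above (the directory is the placeholder /path from the question):

# Was: drwx------ 2 root root  -> only root could enter the directory
# Allow group and others to enter and list it, so the logstash user can reach the files
chmod 755 /path
ls -ld /path   # should now show drwxr-xr-x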