LinuxQuestions.org
Old 05-16-2014, 02:31 PM   #16
Habitual
Original Poster

As it turns out, the init.d/logstash script from logstash-1.4.1-1_bd507eb.noarch.rpm was not correctly written.
Edit: Wed May 21, 2014 - 12:02:45 PM EDT
"not correctly written" is an inaccurate|false statement.


Here's the working 1.4.1-1 /etc/init.d/logstash script:
Code:
#!/bin/sh
# Init script for logstash
# Maintained by Elasticsearch
# Generated by pleaserun.
# Implemented based on LSB Core 3.1:
#   * Sections: 20.2, 20.3
#
### BEGIN INIT INFO
# Provides:          logstash
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: 
# Description:        Starts Logstash as a daemon.
### END INIT INFO

PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH

if [ `id -u` -ne 0 ]; then
   echo "You need root privileges to run this script"
   exit 1
fi

name=logstash
pidfile="/var/run/$name.pid"

LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/var/lib/logstash
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/var/log/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_DIR=/etc/logstash/conf.d
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""

[ -r /etc/default/$name ] && . /etc/default/$name
[ -r /etc/sysconfig/$name ] && . /etc/sysconfig/$name

program=/opt/logstash/bin/logstash
args="agent -f ${LS_CONF_DIR} -l ${LS_LOG_FILE} ${LS_OPTS}"

start() {


  JAVA_OPTS=${LS_JAVA_OPTS}
  export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING

  # set ulimit as (root, presumably) first, before we drop privileges
  ulimit -n ${LS_OPEN_FILES}

  # Run the program!
  nice -n ${LS_NICE} runuser -g $LS_GROUP -s /bin/bash $LS_USER -c "cd $LS_HOME; ulimit -n ${LS_OPEN_FILES}; exec \"$program\" $args;" > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  echo "$name started."
  return 0
}

stop() {
  # Try a few times to kill TERM the program
  if status ; then
    pid=`cat "$pidfile"`
    echo "Killing $name (pid $pid) with SIGTERM"
    kill -TERM $pid
    # Wait for it to exit.
    for i in 1 2 3 4 5 ; do
      echo "Waiting $name (pid $pid) to die..."
      status || break
      sleep 1
    done
    if status ; then
      echo "$name stop failed; still running."
    else
      echo "$name stopped."
    fi
  fi
}

status() {
  if [ -f "$pidfile" ] ; then
    pid=`cat "$pidfile"`
    if kill -0 $pid > /dev/null 2> /dev/null ; then
      # A process with this pid is running.
      # It may not be our pid, but that's what you get with just pidfiles.
      # TODO(sissel): Check if this process seems to be the same as the one we
      # expect. It'd be nice to use flock here, but flock uses fork, not exec,
      # so it makes it quite awkward to use in this case.
      return 0
    else
      return 2 # program is dead but pid file exists
    fi
  else
    return 3 # program is not running
  fi
}

force_stop() {
  if status ; then
    stop
    status && kill -KILL `cat "$pidfile"`
  fi
}


case "$1" in
  start)
    status
    code=$?
    if [ $code -eq 0 ]; then
      echo "$name is already running"
    else
      start
      code=$?
    fi
    exit $code
    ;;
  stop) stop ;;
  force-stop) force_stop ;;
  status) 
    status
    code=$?
    if [ $code -eq 0 ] ; then
      echo "$name is running"
    else
      echo "$name is not running"
    fi
    exit $code
    ;;
  restart)
    stop && start
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|force-stop|status|restart}" >&2
    exit 3
  ;;
esac

exit $?
This correctly starts|stops|restarts the logstash service on CentOS 5.10 using that rpm source.
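For reference, here's how I hook the script in and smoke-test each action with the standard sysv tools (nothing below is specific to this rpm):
Code:
# register the init script and make it start on boot
chmod 755 /etc/init.d/logstash
chkconfig --add logstash
chkconfig logstash on

# exercise each action
service logstash start
service logstash status
service logstash restart
service logstash stop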

But this leaves another issue "out there" as I don't see my indexes.

To be continued...

Last edited by Habitual; 05-21-2014 at 11:04 AM.
 
Old 05-21-2014, 08:23 AM   #17
dkanbier
Quote:
Originally Posted by Habitual View Post
I restarted using
Code:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
manually and my indexes "came back", soooo..........how to get service logstash start "use" /etc/logstash/conf.d/logstash.conf ?
It should do this already; in my init script there is a parameter for it:

Code:
LS_CONF_DIR=/etc/logstash/conf.d
It could be overridden by the same parameter in /etc/sysconfig/logstash.
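For example, a hypothetical /etc/sysconfig/logstash override (the init script sources this file after setting its defaults, so anything set here wins):
Code:
# /etc/sysconfig/logstash -- hypothetical override, not shipped by the rpm
LS_CONF_DIR=/etc/logstash/agent.d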

Using the curl command:

Code:
curl http://localhost:9200/_aliases?pretty=1
You're actually querying elasticsearch, not logstash. I do think you still have a permission problem.

When running the embedded elasticsearch as you do now, do the files in /root/data/elasticsearch still update with new data? If so your elasticsearch instance still wants to write to that directory, and it can't if you start it as a non-root user.
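A quick way to check is to look for anything recently modified under that directory, for example:
Code:
# list files under /root/data/elasticsearch touched in the last 5 minutes
find /root/data/elasticsearch -type f -mmin -5 -ls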

Could it be that there are some old elasticsearch configuration files somewhere that are picked up by the embedded instance, forcing its data directory to /root/data/elasticsearch?
 
Old 05-21-2014, 08:42 AM   #18
Habitual
Original Poster
Quote:
Originally Posted by dkanbier View Post
It could be overridden by the same parameter in /etc/sysconfig/logstash.
Mine (everything in it too) is REM'd out. So, all defaults apparently?

Quote:
When running the embedded elasticsearch as you do now, do the files in /root/data/elasticsearch still update with new data? If so your elasticsearch instance still wants to write to that directory, and it can't if you start it as a non-root user.
The directory /root/data/elasticsearch/nodes/0/indices/ is being updated presently, though not by "service logstash start" but rather by "/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf > /dev/null 2>&1 &" in /etc/rc.local or manually in screen.

Quote:
Could it be there is some old elasticsearch configuration files some where that are picked up by the embedded instance, forcing it's data directory to /root/data/elasticsearch?
Could be, except that I nuked the entire logstash-1.4.1-1_bd507eb to orbit (cleaning /opt/logstash, /root/data/*, and all occurrences of /root/.sincedb_*) after removing it and re-installing the same rpm.

My last step is to write a 1-minute cron job that uses
Code:
nc -z 127.0.0.1 9200 && echo "$?"
to 'test' for the java-started port and
Code:
killall -9 java && /opt/logstash/bin/logstash -f ...
if it's found not to be running, using this scriptlet service checker as a template.
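Something like this rough, untested sketch (paths taken from this thread; the script name is made up):
Code:
#!/bin/sh
# /usr/local/bin/logstash-watchdog.sh -- run from cron every minute:
# * * * * * /usr/local/bin/logstash-watchdog.sh
if ! nc -z 127.0.0.1 9200; then
  # port 9200 is down: kill any leftover java and restart logstash
  killall -9 java 2>/dev/null
  /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf > /dev/null 2>&1 &
fi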

Thanks for the feedback.
 
Old 05-21-2014, 09:04 AM   #19
dkanbier
Quote:
Originally Posted by Habitual View Post
The directory /root/data/elasticsearch/nodes/0/indices/ is being updated presently, though not by "service logstash start" [...]
So it recreates /root/data/* when you run logstash manually in screen?

I would:

  • stop logstash
  • remove /root/data*
  • check for files called elasticsearch.yml in /etc
  • add "--debug" to the LS_OPTS parameter in /etc/init.d/logstash
  • start logstash with the "service logstash start" command


This should start logstash and show debug output in the /var/log/logstash/* files. This should also add data to /var/lib/logstash/data (as opposed to /root/data), and it should start an elasticsearch instance on port 9200.
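Roughly, in shell terms (the LS_OPTS change is a hand edit; paths as used in this thread):
Code:
service logstash stop
rm -rf /root/data*
find /etc -name 'elasticsearch.yml' 2>/dev/null   # any stray configs?
# then edit /etc/init.d/logstash (or /etc/sysconfig/logstash) so that:
#   LS_OPTS="--debug"
service logstash start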

If it does, everything should be working normally.

Now it's also possible that the logstash user can't read the inputs you've configured because of permission issues, giving you 0 indexes. It should state this in the logfiles, but are you sure the user "logstash" can access these files?

Code:
 file {
    type => "syslog"
    path => [ "/var/log/remotes/web/*.log" ]
  }

  file {
    type => "syslog"
    path => [ "/var/log/remotes/cirrhus9a/*.log" ]
  }

  file {
    type => "syslog"
    path => [ "/var/log/remotes/cirrhus9b/*.log" ]
  }
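An easy way to test that is to read one of them as the logstash user, for example:
Code:
# prints the first line of each log if (and only if) logstash can read them
runuser -s /bin/bash logstash -c 'head -n 1 /var/log/remotes/web/*.log'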

Last edited by dkanbier; 05-21-2014 at 09:06 AM.
 
1 member found this post helpful.
Old 05-21-2014, 09:10 AM   #20
Habitual
Original Poster
Quote:
Originally Posted by dkanbier View Post
but are you sure the user "logstash" can access these files?

Code:
 file {
    type => "syslog"
    path => [ "/var/log/remotes/web/*.log" ]
  }

  file {
    type => "syslog"
    path => [ "/var/log/remotes/cirrhus9a/*.log" ]
  }

  file {
    type => "syslog"
    path => [ "/var/log/remotes/cirrhus9b/*.log" ]
  }
Err, duh: I am now!
Code:
drwx------ 2 root root 4096 May 16 02:38 cirrhus9a/
drwx------ 2 root root 4096 May 16 11:23 cirrhus9b/
drwx------ 2 root root 4096 May 16 11:42 web/
...
drwxr-xr-x 6 root     root     4096 May 20 17:00 /root/data/elasticsearch/nodes/0/indices/
drwxr-xr-x 4 logstash logstash 4096 May 19 12:20 /var/lib/logstash/data/elasticsearch/nodes/0/indices/
I will advise...
 
Old 05-21-2014, 09:19 AM   #21
dkanbier
Quote:
Originally Posted by Habitual View Post
Err, duh: I am now!
Code:
drwx------ 2 root root 4096 May 16 02:38 cirrhus9a/
drwx------ 2 root root 4096 May 16 11:23 cirrhus9b/
drwx------ 2 root root 4096 May 16 11:42 web/
...
drwxr-xr-x 6 root     root     4096 May 20 17:00 /root/data/elasticsearch/nodes/0/indices/
drwxr-xr-x 4 logstash logstash 4096 May 19 12:20 /var/lib/logstash/data/elasticsearch/nodes/0/indices/
I will advise...
Yeah, fixing access to those files for the logstash user should help quite a bit, I think, haha.
 
Old 05-21-2014, 09:33 AM   #22
Habitual
Original Poster
"progress"...
I added debug and then did this:
Code:
cp -pr /root/data.org/elasticsearch/nodes/0/indices/* /var/lib/logstash/data/elasticsearch/nodes/0/indices/
chown -R logstash:logstash /var/lib/logstash/data/elasticsearch/nodes/0/indices/
chmod -R 770 /var/log/remotes/*
however, data collection seems to have stopped.

Thanks!

Should I also remove /root/.sincedb_* ?

tail logstash.log shows:
Code:
{:timestamp=>"2014-05-21T07:35:49.101000-0700", :message=>"No sincedb_path set, generating one based on the file path", :sincedb_path=>"/opt/logstash/.sincedb_f3f1a09b7195f62d15e9bfe6d07044c7", :path=>["/var/log/remotes/cirrhus9a/*.log"], :level=>:info, :file=>"logstash/inputs/file.rb", :line=>"115"}
{:timestamp=>"2014-05-21T07:35:49.104000-0700", :message=>"Registering file input", :path=>["/var/log/remotes/cirrhus9b/*.log"], :level=>:info, :file=>"logstash/inputs/file.rb", :line=>"74"}
{:timestamp=>"2014-05-21T07:35:49.106000-0700", :message=>"No sincedb_path set, generating one based on the file path", :sincedb_path=>"/opt/logstash/.sincedb_42f4e99991f750cf42bd5d2e154ef9de", :path=>["/var/log/remotes/cirrhus9b/*.log"], :level=>:info, :file=>"logstash/inputs/file.rb", :line=>"115"}
{:timestamp=>"2014-05-21T07:35:49.108000-0700", :message=>"_sincedb_open: /opt/logstash/.sincedb_f3f1a09b7195f62d15e9bfe6d07044c7: No such file or directory - /opt/logstash/.sincedb_f3f1a09b7195f62d15e9bfe6d07044c7", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"195"}
{:timestamp=>"2014-05-21T07:35:49.110000-0700", :message=>"_discover_file_glob: /var/log/remotes/cirrhus9a/*.log: glob is: []", :level=>:debug, :file=>"filewatch/watch.rb", :line=>"117"}
{:timestamp=>"2014-05-21T07:35:49.117000-0700", :message=>"_sincedb_open: /opt/logstash/.sincedb_42f4e99991f750cf42bd5d2e154ef9de: No such file or directory - /opt/logstash/.sincedb_42f4e99991f750cf42bd5d2e154ef9de", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"195"}
{:timestamp=>"2014-05-21T07:35:49.118000-0700", :message=>"_discover_file_glob: /var/log/remotes/cirrhus9b/*.log: glob is: []", :level=>:debug, :file=>"filewatch/watch.rb", :line=>"117"}
{:timestamp=>"2014-05-21T07:35:49.120000-0700", :message=>"Pipeline started", :level=>:info, :file=>"logstash/pipeline.rb", :line=>"78"}
{:timestamp=>"2014-05-21T07:35:49.226000-0700", :message=>"log4j java properties setup", :log4j_level=>"DEBUG", :level=>:debug, :file=>"logstash/logging.rb", :line=>"87"}
{:timestamp=>"2014-05-21T07:35:49.236000-0700", :message=>"Starting embedded Elasticsearch local node.", :level=>:info, :file=>"logstash/outputs/elasticsearch.rb", :line=>"290"}

Last edited by Habitual; 05-21-2014 at 09:37 AM.
 
Old 05-21-2014, 09:38 AM   #23
dkanbier
Quote:
Originally Posted by Habitual View Post
"progress"...
I added debug and then did this:
Code:
cp -pr /root/data.org/elasticsearch/nodes/0/indices/* /var/lib/logstash/data/elasticsearch/nodes/0/indices/
chown -R logstash:logstash /var/lib/logstash/data/elasticsearch/nodes/0/indices/
chmod -R 770 /var/log/remotes/*
however, data collection seems to have stopped.

Thanks!
Are the /var/log/remotes/* files still owned by root:root?

Changing to 770 permissions will not be enough in that case; giving read permission to others should be enough (you could set it to 744, for example, giving only read permission to others).
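For example, one way to give others (including the logstash user) read access, with the execute bit applied only to the directories:
Code:
# o+rX: read for others on everything, execute (search) only on directories
chmod -R o+rX /var/log/remotes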
 
Old 05-21-2014, 09:40 AM   #24
Habitual
Original Poster
Quote:
Originally Posted by dkanbier View Post
Are the /var/log/remotes/* files still owned by root:root?
No.
Code:
drwxrwx--- 2 root logstash 4096 May 16 02:38 /var/log/remotes/cirrhus9a
drwxrwx--- 2 root logstash 4096 May 16 11:23 /var/log/remotes/cirrhus9b
drwxrwx--- 2 root logstash 4096 May 16 11:42 /var/log/remotes/web
...
stat -c%a /var/log/remotes/*
770
770
770
Quote:
Changing to 770 permissions will not be enough in that case, giving read permission to others should be enough (you could set it to 744 for example, giving only read permission to others).
 
Old 05-21-2014, 09:46 AM   #25
dkanbier
Quote:
Originally Posted by Habitual View Post
"progress"...
I added debug and then did this: [commands snipped; quoted in full in post #22]

Should I also remove /root/.sincedb_* ?

tail logstash.log shows: [debug output snipped; quoted in full in post #22]

Ah yes, we're almost there I think. Try adding a sincedb_path option for the inputs:

Code:
input {
 file {
    type => "syslog"
    path => [ "/var/log/remotes/web/*.log", "/var/log/remotes/cirrhus9a/*.log", "/var/log/remotes/cirrhus9b/*.log" ]
    sincedb_path => "/opt/logstash/sincedb-access"
  }
}
I think it should create the sincedb-access file when you restart logstash; if not, just create it and make sure it's owned by logstash (and logstash can write to it). As I understand it, logstash needs this to keep track of where it's been in the logfiles, so it doesn't give you duplicate entries and so on.
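If you do have to create it by hand, something along these lines should do:
Code:
touch /opt/logstash/sincedb-access
chown logstash:logstash /opt/logstash/sincedb-access
chmod 644 /opt/logstash/sincedb-access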

Last edited by dkanbier; 05-21-2014 at 09:51 AM.
 
Old 05-21-2014, 10:03 AM   #26
Habitual
Original Poster
Quote:
Originally Posted by dkanbier View Post
Ah yes, we're almost there I think. Try adding a sincedb_path option for the inputs:
God, I hope so.

Code:
 grep -e since /var/log/logstash/logstash.log
{:timestamp=>"2014-05-21T07:58:00.496000-0700", :message=>"Compiled pipeline code:\n@inputs = []\n@filters = []\n@outputs = []\n@input_file_1 = plugin(\"input\", \"file\", LogStash::Util.hash_merge_many({ \"type\" => (\"syslog\".force_encoding(\"UTF-8\")) }, { \"path\" => [(\"/var/log/remotes/web/*.log\".force_encoding(\"UTF-8\"))] }, { \"start_position\" => (\"beginning\".force_encoding(\"UTF-8\")) }, { \"sincedb_path\" => (\"/opt/logstash/sincedb-access\".force_encoding(\"UTF-8\")) }))\n\n@inputs << @input_file_1\n@input_file_2 = plugin(\"input\", \"file\", LogStash::Util.hash_merge_many({ \"type\" => (\"syslog\".force_encoding(\"UTF-8\")) }, { \"path\" => [(\"/var/log/remotes/cirrhus9a/*.log\".force_encoding(\"UTF-8\"))] }, { \"start_position\" => (\"beginning\".force_encoding(\"UTF-8\")) }, { \"sincedb_path\" => (\"/opt/logstash/sincedb-access\".force_encoding(\"UTF-8\")) }))\n\n@inputs << @input_file_2\n@input_file_3 = plugin(\"input\", \"file\", LogStash::Util.hash_merge_many({ \"type\" => (\"syslog\".force_encoding(\"UTF-8\")) }, { \"path\" => [(\"/var/log/remotes/cirrhus9b/*.log\".force_encoding(\"UTF-8\"))] }, { \"start_position\" => (\"beginning\".force_encoding(\"UTF-8\")) }, { \"sincedb_path\" => (\"/opt/logstash/sincedb-access\".force_encoding(\"UTF-8\")) }))\n\n@inputs << @input_file_3\n@output_stdout_4 = plugin(\"output\", \"stdout\", LogStash::Util.hash_merge_many({ \"codec\" => (\"rubydebug\".force_encoding(\"UTF-8\")) }))\n\n@outputs << @output_stdout_4\n@output_elasticsearch_5 = plugin(\"output\", \"elasticsearch\", LogStash::Util.hash_merge_many({ \"embedded\" => (\"true\".force_encoding(\"UTF-8\")) }))\n\n@outputs << @output_elasticsearch_5\n  @filter_func = lambda do |event, &block|\n    extra_events = []\n    @logger.debug? && @logger.debug(\"filter received\", :event => event.to_hash)\n    extra_events.each(&block)\n  end\n  @output_func = lambda do |event, &block|\n    @logger.debug? && @logger.debug(\"output received\", :event => event.to_hash)\n    @output_stdout_4.handle(event)\n    @output_elasticsearch_5.handle(event)\n    \n  end", :level=>:debug, :file=>"logstash/pipeline.rb", :line=>"26"}
{:timestamp=>"2014-05-21T07:58:00.623000-0700", :message=>"config LogStash::Inputs::File/@sincedb_path = \"/opt/logstash/sincedb-access\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
{:timestamp=>"2014-05-21T07:58:00.634000-0700", :message=>"config LogStash::Inputs::File/@sincedb_write_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
{:timestamp=>"2014-05-21T07:58:00.649000-0700", :message=>"config LogStash::Inputs::File/@sincedb_path = \"/opt/logstash/sincedb-access\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
{:timestamp=>"2014-05-21T07:58:00.659000-0700", :message=>"config LogStash::Inputs::File/@sincedb_write_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
{:timestamp=>"2014-05-21T07:58:00.674000-0700", :message=>"config LogStash::Inputs::File/@sincedb_path = \"/opt/logstash/sincedb-access\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
{:timestamp=>"2014-05-21T07:58:00.685000-0700", :message=>"config LogStash::Inputs::File/@sincedb_write_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
{:timestamp=>"2014-05-21T07:58:03.018000-0700", :message=>"_sincedb_open: reading from /opt/logstash/sincedb-access", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"199"}
{:timestamp=>"2014-05-21T07:58:03.031000-0700", :message=>"_sincedb_open: reading from /opt/logstash/sincedb-access", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"199"}
{:timestamp=>"2014-05-21T07:58:03.052000-0700", :message=>"_sincedb_open: reading from /opt/logstash/sincedb-access", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"199"}
but
Code:
ls -al /opt/logstash/sincedb-access
-rwxrwx--- 1 logstash logstash 0 May 21 07:48 /opt/logstash/sincedb-access
 
Old 05-21-2014, 10:06 AM   #27
dkanbier
Quote:
Originally Posted by Habitual View Post
God, I hope so.

Code:
 grep -e since /var/log/logstash/logstash.log
[debug output snipped; quoted in full in post #26]
but
Code:
ls -al /opt/logstash/sincedb-access
-rwxrwx--- 1 logstash logstash 0 May 21 07:48 /opt/logstash/sincedb-access
I don't think there are any error messages in there, or did I miss something? Still no indexes?
 
Old 05-21-2014, 10:07 AM   #28
Habitual
Original Poster
I have indexes, just no data in the last hour.

Thanks.
 
Old 05-21-2014, 10:19 AM   #29
dkanbier
Quote:
Originally Posted by Habitual View Post
I have indexes, just no data in the last hour.

Thanks.
Do you see new entries being processed in /var/log/logstash/logstash.log? It should log every entry it discovers in that logfile, so you can track what happened to it.
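For example, watch the log live while appending a test line to one of the inputs (test.log here is just a scratch file matching the *.log glob):
Code:
tail -f /var/log/logstash/logstash.log &
echo "logstash test $(date)" >> /var/log/remotes/web/test.log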

I'm not quite sure where elasticsearch logs when you use the embedded version, but it might be worth checking that out as well. You're probably querying elasticsearch for data, so if it's not there, it doesn't have to be a logstash problem (but it could be, of course).
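For example, you can ask the embedded elasticsearch directly for its most recent documents (logstash-* is the default index naming; adjust if yours differs):
Code:
curl 'http://localhost:9200/logstash-*/_search?q=*:*&size=1&pretty=true'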

Last edited by dkanbier; 05-21-2014 at 10:26 AM.
 
Old 05-21-2014, 10:31 AM   #30
Habitual
Original Poster
I hear you.

Thanks for all you've done.
 
  

