Recommended Logstash plugins

1. kafka (see the minimal input sketch after this list)
Reference: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html
2. hdfs
Reference: https://www.elastic.co/guide/en/logstash/5.4/plugins-outputs-webhdfs.html
3. zabbix
Reference: https://www.elastic.co/guide/en/logstash/5.4/plugins-outputs-zabbix.html
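For reference, a minimal kafka input sketch; the broker address, topic, and group name below are hypothetical, and the option names follow the plugin documentation linked above:
input {
    kafka {
        # hypothetical broker address, topic, and consumer group
        bootstrap_servers => "127.0.0.1:9092"
        topics => ["logstash-demo"]
        group_id => "logstash"
    }
}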

Input plugins
https://www.elastic.co/guide/en/logstash/5.4/input-plugins.html
Output plugins
https://www.elastic.co/guide/en/logstash/5.4/output-plugins.html

Configuring the Logstash codec plugin - multiline mode

Purpose
Handle application logs that are printed across multiple lines
Logstash configuration
input {
    file {
        path => ["/data/test/test/test.log"]
        type => "demo-codec-multiline-log"
        start_position => "beginning"
        codec => multiline {
            # lines that do NOT start with "[" are merged into the previous line
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    stdout{
        codec=>rubydebug
    }
}
Notes:
what can only be previous or next. previous means a line that matches pattern is treated as part of the previous line; next means it is treated as part of the next line.
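As an illustration (reconstructed from the result below), a log file with the following content is grouped into two events, one per line that starts with "[":
[info] test 4
test 5
test 6
[error]test 6
test 7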
Start
bin/logstash -f /etc/logstash/conf.d/demo-codec-multiline.conf
Result
{
          "path" => "/data/test/test/test.log",
    "@timestamp" => 2017-06-13T07:09:16.452Z,
      "@version" => "1",
          "host" => "192-168-56-201",
       "message" => "[info] test 4\ntest 5\ntest 6",
          "type" => "demo-codec-multiline-log",
          "tags" => [
        [0] "multiline"
    ]
}
{
          "path" => "/data/test/test/test.log",
    "@timestamp" => 2017-06-13T07:09:40.516Z,
      "@version" => "1",
          "host" => "192-168-56-201",
       "message" => "[error]test 6\ntest 7",
          "type" => "demo-codec-multiline-log",
          "tags" => [
        [0] "multiline"
    ]
}

Configuring the Logstash codec plugin - JSON mode

Nginx log configuration
log_format json '{"remote_addr":"$remote_addr" ,"host":"$host" ,"server_addr":"$server_addr" ,"timestamp":"$time_iso8601" ,"request_time":$request_time, "remote_user":"$remote_user",  "request":"$request" ,"status":$status, "body_sent":$body_bytes_sent ,"http_referer":"$http_referer" ,"http_user_agent":"$http_user_agent" ,"http_x_forwarded_for":"$http_x_forwarded_for"}';
Logstash configuration
input {
    file {
        path => ["/data/logs/nginx/collectd.dev-access.log"]
        type => "demo-codec-json-log"
        start_position => "beginning"
        codec => "json"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Start
bin/logstash -f /etc/logstash/conf.d/demo-codec-json.conf
Result
{
             "remote_addr" => "192.168.56.1",
                 "request" => "GET /graph.php?p=load&t=load&h=192.168.56.201&s=86400 HTTP/1.1",
                    "type" => "demo-codec-json-log",
             "server_addr" => "192.168.56.201",
         "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36",
             "remote_user" => "-",
                    "path" => "/data/logs/nginx/collectd.dev-access.log",
            "request_time" => 0.026,
              "@timestamp" => 2017-06-13T06:31:12.761Z,
            "http_referer" => "http://collectd.dev/host.php?h=192.168.56.201&p=load",
                    "host" => "collectd.dev",
    "http_x_forwarded_for" => "-",
                "@version" => "1",
               "body_sent" => 13863,
               "timestamp" => "2017-06-13T06:31:12+00:00",
                  "status" => 200
}
Notes
Some nginx log fields may come through as numbers or as "-". One option is to emit every field as a string in the log_format and then convert types with a filter, as in the sketch below.
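A minimal filter sketch for that conversion, assuming the field names from the log_format above; the exact field list is illustrative:
filter {
    if [type] == "demo-codec-json-log" {
        mutate {
            # convert string values back to numeric types
            convert => {
                "status"       => "integer"
                "body_sent"    => "integer"
                "request_time" => "float"
            }
        }
    }
}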

Logstash input configuration - the redis type in detail

Purpose
Read data from Redis
Example configuration
input {
    redis {
        data_type => "list"
        key => "logstash-demo"
        host => "127.0.0.1"
        port => 6379
        threads => 5
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

Start
bin/logstash -f /etc/logstash/conf.d/demo-input-redis.conf

Test
redis-cli -h 127.0.0.1
rpush logstash-demo test
Result
{
    "@timestamp" => 2017-06-12T13:55:11.689Z,
      "@version" => "1",
       "message" => "test",
          "tags" => [
        [0] "_jsonparsefailure"
    ]
}
The _jsonparsefailure tag appears because the redis input defaults to the json codec and the pushed value "test" is not valid JSON.
data_type can only be list (messages are fetched with BLPOP), channel (fetched with SUBSCRIBE), or pattern_channel (fetched with PSUBSCRIBE); a channel-mode sketch is shown below.
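A channel-mode sketch, using a hypothetical channel name logstash-demo-channel:
input {
    redis {
        data_type => "channel"
        key => "logstash-demo-channel"
        host => "127.0.0.1"
        port => 6379
    }
}
Test by publishing a JSON message:
redis-cli -h 127.0.0.1 publish logstash-demo-channel '{"message":"test"}'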

Logstash input configuration - the syslog type in detail

Purpose
Collect syslog messages to monitor how the system is running
Example configuration
input {
    syslog {
        port => 5000
        type => "demo-syslog"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}


Start
bin/logstash -f /etc/logstash/conf.d/demo-input-syslog.conf

Test
telnet localhost 5000
Result
{
          "severity" => 0,
        "@timestamp" => 2017-06-12T09:41:46.655Z,
          "@version" => "1",
              "host" => "127.0.0.1",
           "message" => "heloooooooo\r\n",
              "type" => "demo-syslog",
          "priority" => 0,
          "facility" => 0,
    "severity_label" => "Emergency",
              "tags" => [
        [0] "_grokparsefailure_sysloginput"
    ],
    "facility_label" => "kernel"
}
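The _grokparsefailure_sysloginput tag appears because the text typed over telnet is not valid syslog. A line in RFC3164 format should parse cleanly; for example (hostname and program name are placeholders):
<34>Jun 12 09:41:46 myhost myapp: hello from syslog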

Logstash startup test

1. Logstash startup test
Run the following from the Logstash directory:
bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
Then type some text, for example message
Output
{
    "@timestamp" => 2017-06-11T14:35:00.816Z,
      "@version" => "1",
          "host" => "ubuntu-101",
       "message" => "message"
}
2. Logstash input, filter, and output explained
Note: a Logstash configuration file must contain at least an input section and an output section; a minimal skeleton is shown below.
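A minimal sketch of the three sections (the filter section is optional and left empty here):
input {
    stdin {}
}
filter {
    # optional: parse and enrich events here
}
output {
    stdout {
        codec => rubydebug
    }
}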

Managing ELK with Supervisor

1. Add a user
useradd -s /bin/bash -d /home/elk elk
2. Change ownership of the ELK files to the elk user
sudo chown -R elk:elk /usr/share/elasticsearch
sudo chown -R elk:elk /etc/elasticsearch
3. Add the Supervisor configuration
[program:elasticsearch]
user = elk
command = /usr/share/elasticsearch/bin/elasticsearch 
process_name=%(program_name)s-%(process_num)s
numprocs=1
autostart=true
autorestart=true
redirect_stderr = true
stdout_logfile_maxbytes=20MB
stdout_logfile_backups=20
stdout_logfile=/data/logs/supervisor/elasticsearch.log
4. Error when running under Supervisor:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
Add the following to /etc/supervisor/supervisord.conf:
[supervisord]
minfds=65536
minprocs=32768
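Logstash and Kibana can be managed the same way. A hedged sketch of a Supervisor entry for Logstash, where the binary path, config directory, and log path are assumptions for a package-based install:
[program:logstash]
user = elk
command = /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/
process_name=%(program_name)s-%(process_num)s
numprocs=1
autostart=true
autorestart=true
redirect_stderr = true
stdout_logfile=/data/logs/supervisor/logstash.log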