Building a Centralized Log Analysis Platform with Elasticsearch + Logstash + Kibana
谢权'blog 2015-10-26
At last week's Shanghai Gopher Meetup I heard a talk by ASTA谢. As it happens, my company also needs a centralized log analysis platform, and ASTA谢 mentioned that he uses the Elasticsearch + Logstash + Kibana combination for log analysis. After I got back I bought a book and, with plenty of googling, got it all configured, though only the skeleton so far; the three components have many features I am not yet familiar with. This post is simply a walkthrough of setting up ELK on CentOS (the company servers run CentOS; personally I prefer Ubuntu, ha).
What is ELK:
Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack. When analyzing site traffic we usually embed JavaScript counters from Google/Baidu/CNZZ, but when a site misbehaves or is under attack we need to dig into the backend logs, such as Nginx's. Nginx log rotation, GoAccess and Awstats are all relatively simple single-node solutions; against a distributed cluster or a large volume of data they fall short, and that is where ELK lets us face the challenge with confidence.
Logstash: log collection, processing and storage
Elasticsearch: log search and analysis
Kibana: log visualization
Official sites:
JDK – http://www.oracle.com/technetwork/java/javase/downloads/index.html
Elasticsearch – https://www.elastic.co/downloads/elasticsearch
Logstash – https://www.elastic.co/downloads/logstash
Kibana – https://www.elastic.co/downloads/kibana
Nginx- https://www.nginx.com/
Server-side setup:
Installing the Java JDK:

cat /etc/redhat-release
# This is my Linux version:
CentOS Linux release 7.1.1503 (Core)
# Install the JDK via yum
yum install java-1.7.0-openjdk

Installing Elasticsearch:
# Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
yum localinstall elasticsearch-1.7.1.noarch.rpm

# Start the service
service elasticsearch start
service elasticsearch status

# List Elasticsearch's config files
rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf

# Check listening ports
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1817/master
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      27369/node
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      31848/nginx: master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      16567/sshd
tcp6       0      0 127.0.0.1:8005          :::*                    LISTEN      8263/java
tcp6       0      0 :::5000                 :::*                    LISTEN      2771/java
tcp6       0      0 :::8009                 :::*                    LISTEN      8263/java
tcp6       0      0 :::3306                 :::*                    LISTEN      28839/mysqld
tcp6       0      0 :::80                   :::*                    LISTEN      31848/nginx: master
tcp6       0      0 :::8080                 :::*                    LISTEN      8263/java
tcp6       0      0 :::9200                 :::*                    LISTEN      25808/java
tcp6       0      0 :::9300                 :::*                    LISTEN      25808/java
tcp6       0      0 :::9301                 :::*                    LISTEN      2771/java
tcp6       0      0 :::22                   :::*                    LISTEN      16567/sshd
Seeing port 9200 in that list means the install worked. We can test it from the terminal:

# Test access
curl -X GET http://localhost:9200/

or open the URL directly in a browser; either way we should see:
{
  "status" : 200,
  "name" : "Pip the Troll",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
which tells us the service is running normally.
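If you want to script this check rather than eyeball the JSON, you can grep the response for the 200 status. A small sketch; the `es_up` helper name is my own, and in practice you would feed it the output of `curl -s http://localhost:9200/`:

```shell
# es_up: succeed when an Elasticsearch root response reports status 200.
es_up() {
  echo "$1" | grep -Eq '"?status"? *: *200'
}

# Demo with a canned response; for a live check:
#   es_up "$(curl -s http://localhost:9200/)" && echo "elasticsearch is up"
resp='{ "status" : 200, "name" : "Pip the Troll", "tagline" : "You Know, for Search" }'
es_up "$resp" && echo "elasticsearch is up"
```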
Installing Kibana:

# Download the tarball
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
# Unpack
tar zxf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
cd /usr/local/
mv kibana-4.1.1-linux-x64 kibana

# Create a kibana service
vim /etc/rc.d/init.d/kibana

#!/bin/bash
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs kibana daemon
# Description:       Runs the kibana daemon as a non-root user
### END INIT INFO

# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"

# Configure location of Kibana bin
KIBANA_BIN=/usr/local/kibana/bin

# PID Info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME

# Configure User to run daemon process
DAEMON_USER=root

# Configure logging location
KIBANA_LOG=/var/log/kibana.log

# Begin Script
RETVAL=0

if [ `id -u` -ne 0 ]; then
    echo "You need root privileges to run this script"
    exit 1
fi

# Function library
. /etc/init.d/functions

start() {
    echo -n "Starting $DESC : "
    pid=`pidofproc -p $PID_FILE kibana`
    if [ -n "$pid" ]; then
        echo "Already running."
        exit 0
    else
        # Start Daemon
        if [ ! -d "$PID_FOLDER" ]; then
            mkdir $PID_FOLDER
        fi
        daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [[ $? -eq 0 ]] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}

reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
    echo -n "Stopping $DESC : "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status -p $PID_FILE $DAEMON
    RETVAL=$?
    ;;
  restart)
    stop
    start
    ;;
  reload)
    reload
    ;;
  *)
    # Invalid arguments, print usage
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 2
    ;;
esac

# Make it executable
chmod +x /etc/rc.d/init.d/kibana
# Start the kibana service
service kibana start
service kibana status
# Check the ports
netstat -nltp
We already ran `netstat -nltp` above, so I won't paste the output again; if port 5601 shows up, Kibana is installed and running.
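Rather than scanning the netstat output by eye each time, a one-line awk filter can check for a listener. `listening_on` below is a hypothetical helper, demonstrated here against a canned netstat line:

```shell
# listening_on PORT: succeed if `netstat -nltp` output on stdin shows a
# listener on that port (matches the end of the Local Address column).
listening_on() {
  awk -v p=":$1" '$4 ~ p"$" { found = 1 } END { exit !found }'
}

# Canned sample line; in practice: netstat -nltp | listening_on 5601
sample='tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      27369/node'
echo "$sample" | listening_on 5601 && echo "kibana port is open"
```

Note the anchored match: checking for `:80` will not falsely match `:8080`.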
Option 1: Generate SSL Certificates:
The SSL certificate is what lets the server and clients authenticate each other:
sudo vi /etc/pki/tls/openssl.cnf
Find the [ v3_ca ] section in the file, and add this line under it (substituting in the Logstash Server’s private IP address):
subjectAltName = IP: logstash_server_private_ip

cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Option 2: FQDN (DNS):
cd /etc/pki/tls
sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Installing Logstash:
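Before setting up the forwarder it's worth confirming that the key and certificate generated above actually pair up, since mismatched files surface later as opaque TLS handshake errors. A sketch that compares the RSA moduli; it generates a throwaway pair so it can run anywhere, and for the real check you would point the paths at `private/logstash-forwarder.key` and `certs/logstash-forwarder.crt`:

```shell
set -e
dir=$(mktemp -d)

# Throwaway pair, created the same way as above; substitute the real
# logstash-forwarder key/cert paths in production.
openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$dir/lf.key" -out "$dir/lf.crt" 2>/dev/null

# A certificate and key belong together iff their moduli are identical.
crt_mod=$(openssl x509 -noout -modulus -in "$dir/lf.crt")
key_mod=$(openssl rsa  -noout -modulus -in "$dir/lf.key")
[ "$crt_mod" = "$key_mod" ] && echo "certificate matches key"

rm -rf "$dir"
```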
Logstash Forwarder (client side):

# Install Logstash Forwarder
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
yum localinstall logstash-forwarder-0.4.0-1.x86_64.rpm

# Locate the logstash-forwarder config file
rpm -qc logstash-forwarder
/etc/logstash-forwarder.conf

# Back up the config file
cp /etc/logstash-forwarder.conf /etc/logstash-forwarder.conf.save

# Edit /etc/logstash-forwarder.conf, adjusting it to your environment
vim /etc/logstash-forwarder.conf

{
  "network": {
    "servers": [ "your_server_ip:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}

Logstash Server (server side):
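One aside before the server setup: the forwarder config above is strict JSON, and logstash-forwarder refuses to start on a malformed file (a stray trailing comma is enough) with a not-very-helpful error. It pays to validate the file before restarting the service. A sketch using Python's stdlib JSON parser against a sample config; in practice run it on `/etc/logstash-forwarder.conf`:

```shell
# Write a sample forwarder config and check that it parses as JSON.
cat > /tmp/forwarder-sample.conf <<'EOF'
{
  "network": {
    "servers": [ "your_server_ip:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    { "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" } }
  ]
}
EOF

python3 -m json.tool < /tmp/forwarder-sample.conf > /dev/null \
  && echo "config is valid JSON"
```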
# Download the rpm
wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.4-1.noarch.rpm
# Install
yum localinstall logstash-1.5.4-1.noarch.rpm

# Create a 01-logstash-initial.conf file
vim /etc/logstash/conf.d/01-logstash-initial.conf

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

# Start the logstash service
service logstash start
service logstash status

# Visit Kibana and pick @timestamp as the Time-field name. Do this after the
# Nginx log step below; before that there is no data and the index pattern
# cannot be created.
http://localhost:5601/

# Additional nodes are configured the same way as the client; remember to
# copy the certificate to them (for example over SSH):
/etc/pki/tls/certs/logstash-forwarder.crt

Configuring Nginx logs:
# Update the client config
vim /etc/logstash-forwarder.conf

{
  "network": {
    "servers": [ "your_server_ip:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [ "/app/local/nginx/logs/access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}

# Add grok patterns on the server
mkdir /opt/logstash/patterns
vim /opt/logstash/patterns/nginx

NGUSERNAME [a-zA-Z\.\@\-\+_\%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:remote_addr} - - \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATH:path}(?:%{URIPARAM:param})? HTTP/%{NUMBER:httpversion}" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}

# Fix ownership for logstash
chown -R logstash:logstash /opt/logstash/patterns

# Update the server config
vim /etc/logstash/conf.d/01-logstash-initial.conf

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
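Before trusting the NGINXACCESS pattern, it helps to sanity-check that your access.log lines really have the combined-log shape it expects (you can also run `/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/` to have Logstash validate the config itself). Below is a rough POSIX extended-regex approximation of the grok pattern, applied to a made-up sample line:

```shell
# Sample combined-format access log line (made-up values).
line='192.168.1.10 - - [26/Oct/2015:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'

# Rough ERE equivalent of NGINXACCESS:
# ip - - [time] "METHOD path HTTP/ver" status bytes "referer" "user agent"
re='^[0-9.]+ - - \[[^]]+\] "[A-Z]+ [^ ]+ HTTP/[0-9.]+" [0-9]+ [0-9]+ "[^"]*" "[^"]*"$'

echo "$line" | grep -Eq "$re" && echo "line matches NGINXACCESS shape"
```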
Here is the result once everything is configured:
And that's it. It took me two whole days of tinkering to get this working, which made me feel pretty slow. I'm writing this summary so that next time I can set it up quickly.