Collecting Logs with EFKK


1. Environment

Role            Hostname   IP address      Version
elasticsearch   node-01    192.168.96.73   7.17.7
elasticsearch   node-02    192.168.96.75   7.17.7
elasticsearch   node-03    192.168.96.41   7.17.7
kibana          node-01    192.168.96.73   7.17.7
logstash        node-02    192.168.96.75   7.17.7
kafka           node-02    192.168.96.75   2.8.2
zookeeper       node-02    192.168.96.75   3.6.3
filebeat        -          -               7.17.7

2. Configure Host Records

Add the following hosts entries on every node so the hostnames can be resolved later:

tee -a /etc/hosts << EOF
192.168.96.73  node-01
192.168.96.75  node-02
192.168.96.41  node-03
EOF
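
A quick check that the names resolve as expected:

getent hosts node-01 node-02 node-03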

3. Download the Software

# elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.7-x86_64.rpm

# logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.7-x86_64.rpm

# kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.7-x86_64.rpm

# filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.7-x86_64.rpm

# kafka
wget https://downloads.apache.org/kafka/2.8.2/kafka_2.13-2.8.2.tgz

# zookeeper
wget https://downloads.apache.org/zookeeper/stable/apache-zookeeper-3.6.3-bin.tar.gz

4. Install OpenJDK

yum -y install java-1.8.0-openjdk.x86_64

5. Install Elasticsearch

Install Elasticsearch on all three hosts:

rpm -ivh elasticsearch-7.17.7-x86_64.rpm

Modify the Elasticsearch configuration on node-01:

sed -i 's/^.*cluster.name:.*$/cluster.name: elk/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*node.name:.*$/node.name: node-01/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*network.host.*$/network.host: 192.168.96.73/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*http.port.*$/http.port: 9200/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*discovery.seed_hosts.*$/discovery.seed_hosts: \["node-01", "node-02", "node-03"\]/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*cluster.initial_master_nodes.*$/cluster.initial_master_nodes: \["node-01", "node-02", "node-03"\]/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*path.data.*$/path.data: \/opt\/elasticsearch\/data/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*path.logs.*$/path.logs: \/opt\/elasticsearch\/logs/g' /etc/elasticsearch/elasticsearch.yml

Modify the Elasticsearch configuration on node-02:

sed -i 's/^.*cluster.name:.*$/cluster.name: elk/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*node.name:.*$/node.name: node-02/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*network.host.*$/network.host: 192.168.96.75/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*http.port.*$/http.port: 9200/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*discovery.seed_hosts.*$/discovery.seed_hosts: \["node-01", "node-02", "node-03"\]/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*cluster.initial_master_nodes.*$/cluster.initial_master_nodes: \["node-01", "node-02", "node-03"\]/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*path.data.*$/path.data: \/opt\/elasticsearch\/data/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*path.logs.*$/path.logs: \/opt\/elasticsearch\/logs/g' /etc/elasticsearch/elasticsearch.yml

Modify the Elasticsearch configuration on node-03:

sed -i 's/^.*cluster.name:.*$/cluster.name: elk/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*node.name:.*$/node.name: node-03/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*network.host.*$/network.host: 192.168.96.41/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*http.port.*$/http.port: 9200/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*discovery.seed_hosts.*$/discovery.seed_hosts: \["node-01", "node-02", "node-03"\]/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*cluster.initial_master_nodes.*$/cluster.initial_master_nodes: \["node-01", "node-02", "node-03"\]/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*path.data.*$/path.data: \/opt\/elasticsearch\/data/g' /etc/elasticsearch/elasticsearch.yml
sed -i 's/^.*path.logs.*$/path.logs: \/opt\/elasticsearch\/logs/g' /etc/elasticsearch/elasticsearch.yml
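
On each node, it is worth confirming the edits took effect before moving on:

grep -E '^(cluster\.name|node\.name|network\.host|http\.port|discovery\.seed_hosts|cluster\.initial_master_nodes|path\.data|path\.logs)' /etc/elasticsearch/elasticsearch.yml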

6. Tune System Parameters

Adjust the following system parameters on all three hosts:

# Set the Elasticsearch JVM heap on the three nodes
sed -i 's/Xms1g/Xms4g/g' /etc/elasticsearch/jvm.options
sed -i 's/Xmx1g/Xmx4g/g' /etc/elasticsearch/jvm.options

# Raise the maximum number of open files
tee -a  /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
EOF

ulimit -n 655360

# Set vm.max_map_count via a sysctl.d drop-in
tee -a /usr/lib/sysctl.d/elasticsearch.conf <<EOF
vm.max_map_count=655360
EOF

# Load settings from all sysctl.d locations (sysctl -p alone only reads /etc/sysctl.conf)
sysctl --system
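
A quick check that the new limits are in effect:

ulimit -n
sysctl vm.max_map_count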

7. Start Elasticsearch

Create the Elasticsearch data and log directories (the RPM does not create the custom paths) and set their ownership:

mkdir -p /opt/elasticsearch/{data,logs}
chown -R elasticsearch:elasticsearch /opt/elasticsearch

Start Elasticsearch:

systemctl daemon-reload
systemctl start elasticsearch
systemctl enable elasticsearch

Check that Elasticsearch is running:

ps -ef | grep elasticsearch
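
To confirm the three nodes actually formed a cluster, query the cluster health API on any node:

curl -s 'http://node-01:9200/_cluster/health?pretty'
# Expect "status": "green" and "number_of_nodes": 3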

8. Install Kibana

rpm -ivh kibana-7.17.7-x86_64.rpm

Modify the Kibana configuration file /etc/kibana/kibana.yml as follows:

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://node-01:9200", "http://node-02:9200", "http://node-03:9200"]

Start the Kibana service:

systemctl daemon-reload
systemctl start kibana
systemctl enable kibana

Check that Kibana is running:

ps -ef | grep kibana

Open the Kibana UI at http://192.168.96.73:5601.
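
Kibana also exposes a status endpoint if you prefer to check from the command line first:

curl -s http://192.168.96.73:5601/api/status
# The overall status should report green/available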

Create an index template
In Stack Management, open Index Management, select Index Templates, and create a template:

name: log
Index patterns: log-*

Under Index settings, configure the following:

{
  "index": {
    "lifecycle": {
      "name": "del-3d-logs",
      "rollover_alias": "log-alias"
    },
    "number_of_shards": "10",
    "number_of_replicas": "0"
  }
}
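
If you prefer the API over the UI, a minimal sketch of the same template using the composable _index_template API (available in 7.x):

curl -s -X PUT 'http://node-01:9200/_index_template/log' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["log-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "del-3d-logs",
      "index.lifecycle.rollover_alias": "log-alias",
      "index.number_of_shards": 10,
      "index.number_of_replicas": 0
    }
  }
}'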

Create an index lifecycle policy
Under Index Lifecycle Policies, create a policy named del-3d-logs:

Hot phase:
change "Keep data in this phase forever" to "Delete data after this phase"
turn off all options under Advanced settings

The Warm and Cold phases need no changes.

Delete phase:
set "Move data into phase when" to 3 days

Attach the lifecycle policy del-3d-logs to the index template log.
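
The same policy can also be created via the ILM API; a minimal sketch that keeps indices in the hot phase and deletes them after 3 days:

curl -s -X PUT 'http://node-01:9200/_ilm/policy/del-3d-logs' -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": { "actions": {} },
      "delete": { "min_age": "3d", "actions": { "delete": {} } }
    }
  }
}'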

9. Install Kafka

  • Install and configure ZooKeeper

tar xf apache-zookeeper-3.6.3-bin.tar.gz
cd apache-zookeeper-3.6.3-bin

mv conf/zoo_sample.cfg conf/zoo.cfg
# Point the ZooKeeper data directory at /opt/zookeeper in zoo.cfg
sed -i 's#/tmp/zookeeper#/opt/zookeeper#g' conf/zoo.cfg

# Start the ZooKeeper service
./bin/zkServer.sh start

# Check the ZooKeeper process
ps -ef | grep zookeeper
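
ZooKeeper can also report its own state:

./bin/zkServer.sh status
# A single node should report Mode: standalone
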
  • Install and configure Kafka

tar xf kafka_2.13-2.8.2.tgz
cd kafka_2.13-2.8.2

# Edit the Kafka configuration file config/server.properties
# Kafka retains data for 168 hours (7 days) by default; shorten it as needed
sed -i 's#log.retention.hours=168#log.retention.hours=4#g' config/server.properties

# Adjust the Kafka JVM heap size
sed -i 's#KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"#KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"#g' bin/kafka-server-start.sh

# Start the Kafka service (the -daemon flag must come before the config file)
./bin/kafka-server-start.sh -daemon config/server.properties

# Check the Kafka process
ps -ef | grep kafka
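
A quick smoke test (using a hypothetical topic named smoke-test) confirms the broker can accept and serve messages:

# Create a test topic
./bin/kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --create --topic smoke-test --partitions 1 --replication-factor 1

# Produce one message
echo hello | ./bin/kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic smoke-test

# Consume it back
./bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic smoke-test --from-beginning --max-messages 1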

10. Install Logstash

Install Logstash:

rpm -ivh logstash-7.17.7-x86_64.rpm

Adjust the Logstash JVM settings:

sed -i 's#Xms1g#Xms2g#g'  /etc/logstash/jvm.options
sed -i 's#Xmx1g#Xmx2g#g'  /etc/logstash/jvm.options

Create the template file /etc/logstash/logstash-es.json:

{
    "index_patterns": ["log-*"],
    "settings": {
        "number_of_shards": 10,
        "number_of_replicas": 0
    }
}

Create the pipeline configuration file /etc/logstash/logstash-es.conf (pipeline definitions use Logstash's own config syntax, not YAML):

input {
  kafka {
    bootstrap_servers => "127.0.0.1:9092"
    topics_pattern => "log-.*"
    group_id => "es-consumer-group"
    codec => "json"
    decorate_events => true
  }
}
filter {
    grok {
        match => ["message","%{TIMESTAMP_ISO8601:timestamp}"]
    }
    date {
        match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
        target => "@timestamp"
        remove_field => ["timestamp"]
    }
}
output {
  elasticsearch {
    hosts => ["http://192.168.96.73:9200","http://192.168.96.75:9200","http://192.168.96.41:9200"]
    index => "%{[@metadata][kafka][topic]}"
    template => "/etc/logstash/logstash-es.json"
    template_name => "log-*"
    template_overwrite => false
  }
}

Start Logstash with the custom configuration file:

/usr/share/logstash/bin/logstash -f /etc/logstash/logstash-es.conf &
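
If Logstash fails to start, the pipeline syntax can be validated first:

/usr/share/logstash/bin/logstash -f /etc/logstash/logstash-es.conf --config.test_and_exit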

11. Install Filebeat

Install Filebeat:

rpm -ivh filebeat-7.17.7-x86_64.rpm

Back up the original Filebeat configuration file:

cd /etc/filebeat/

mv filebeat.yml filebeat.yml.bak

Create a new Filebeat configuration file /etc/filebeat/filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    -  /data/logs/xxx/info.log  
  fields:
    type: log-xxx
  multiline.pattern: '^\{'
  multiline.negate:  true
  multiline.match: after
  tail_files: true
  processors:
    - drop_fields:
        fields: ["input", "log.offset"]

- type: log
  enabled: true
  paths:
    - /data/logs/xxx/error.log
  # Add a custom field (used below to build the Kafka topic name)
  fields:
    type: log-xxx-error
  # Multiline pattern: a line starting with '{' begins a new event
  multiline.pattern: '^\{'
  # false (default): lines matching the pattern are appended to the previous line;
  # true: lines NOT matching the pattern are appended to the previous line
  multiline.negate: true
  # Append to the end (after) or the beginning (before) of the previous line
  multiline.match: after
  tail_files: true
  # Fields to drop
  processors:
    - drop_fields:
        fields: ["input", "log.offset"]

# Default number of primary shards for new indices
setup.template.settings:
  index.number_of_shards: 10

# Name and pattern of the template registered in Elasticsearch
setup.template.name: "log"
setup.template.pattern: "log-*"

output.kafka:
  enabled: true
  hosts: ["192.168.96.73:9092","192.168.96.75:9092","192.168.96.41:9092"]
  max_retries: 5
  timeout: 90
  # Build the Kafka topic dynamically from the custom field
  topic: "%{[fields.type]}-%{+yyyy.MM.dd}"
  # Maximum size of a single message sent to Kafka (the default is 1000000 bytes)
  max_message_bytes: 20000000
  partition.hash:
    reachable_only: false
  required_acks: 1
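
Before starting the service, Filebeat's built-in test commands can validate the configuration and the Kafka connection:

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml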

Start the Filebeat service:

systemctl daemon-reload
systemctl start filebeat
systemctl enable filebeat

Check that Filebeat is running:

ps -ef | grep filebeat

Check whether the corresponding topics have been created in Kafka:

./kafka-topics.sh --zookeeper 127.0.0.1:2181 --list

__consumer_offsets
log-xxx-2022.11.23
log-xxx-error-2022.11.23
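
To peek at the messages actually landing in one of the generated topics:

./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic log-xxx-2022.11.23 --from-beginning --max-messages 5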

Check whether the corresponding indices have been created:
In Kibana's Stack Management, open Index Management; the new log-* indices should appear.

Create index patterns for the new indices (e.g. log-*) under Stack Management.

Query the logs in Discover.
