1. Introduction to Loggie
Loggie is a lightweight, high-performance, cloud-native log collection agent and relay/processing aggregator written in Golang. It supports multiple pipelines and hot-pluggable components, and provides:
- A one-stop log solution: log relaying, filtering, parsing, splitting, and alerting in a single stack
- Cloud-native log handling: fast and convenient container log collection, with native Kubernetes CRDs for dynamic configuration delivery
- Production-grade features: built on long-term, large-scale operations experience, including full observability, fast troubleshooting, anomaly alerting, and automated operations
Architecture diagram:
2. Environment
Kubernetes cluster version: 1.22.0
Runtime: docker
Application logs are written to a host directory mounted into the pods via hostPath
The plan is to collect the logs with Loggie, ship them to Elasticsearch, and search them with Kibana
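For context, here is a minimal sketch of how an application pod might mount the host log directory via hostPath; the deployment name, image, and subdirectory under /data/logs are illustrative assumptions, not part of this setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # illustrative name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.21   # placeholder image; the real app writes *info.log files under /data/logs
          volumeMounts:
            - name: applog
              mountPath: /data/logs/demo-app
      volumes:
        - name: applog
          hostPath:
            path: /data/logs/demo-app
            type: DirectoryOrCreate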
3. Deploy the Loggie log collection agent
Deploy it with Helm. If Helm is not installed, download and install it from GitHub first.
Download the chart package:
helm pull https://github.com/loggie-io/installation/releases/download/v1.3.0/loggie-v1.3.0.tgz && tar xvzf loggie-v1.3.0.tgz
Edit the values.yaml file.
If the runtime is docker:
extraVolumeMounts:
  - mountPath: /var/log/pods
    name: podlogs
  - mountPath: /var/lib/kubelet/pods
    name: kubelet
  - mountPath: /var/lib/docker
    name: docker
  - mountPath: /data/logs
    name: applog
extraVolumes:
  - hostPath:
      path: /var/log/pods
      type: DirectoryOrCreate
    name: podlogs
  - hostPath:
      path: /var/lib/kubelet/pods
      type: DirectoryOrCreate
    name: kubelet
  - hostPath:
      path: /var/lib/docker
      type: DirectoryOrCreate
    name: docker
  - hostPath:
      path: /data/logs
      type: DirectoryOrCreate
    name: applog
If the runtime is containerd:
extraVolumeMounts:
  - mountPath: /var/log/pods
    name: podlogs
  - mountPath: /var/lib/kubelet/pods
    name: kubelet
  - mountPath: /data/logs
    name: applog
extraVolumes:
  - hostPath:
      path: /var/log/pods
      type: DirectoryOrCreate
    name: podlogs
  - hostPath:
      path: /var/lib/kubelet/pods
      type: DirectoryOrCreate
    name: kubelet
  - hostPath:
      path: /data/logs
      type: DirectoryOrCreate
    name: applog
Deploy Loggie:
helm install loggie ./loggie -n logging --create-namespace
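After installation, you can confirm the release and the DaemonSet pods are up, for example:
helm status loggie -n logging
kubectl get pods -n logging -o wide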
4. Deploy Elasticsearch and Kibana
Deploy Elasticsearch via Helm:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/elasticsearch
tar xf elasticsearch-19.0.2.tgz && cd elasticsearch
# Edit values.yaml and set:
global.kibanaEnabled: true
data.persistence.size: <adjust the size to your actual needs>
helm install es . -n logging
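Once the pods are ready, Kibana can be reached through its Service. A quick sketch for local access; the service name below assumes the Bitnami chart's default naming for a release called es, so verify it first:
kubectl get pods -n logging
kubectl get svc -n logging
# assumed Kibana service name for release "es"; adjust to what `kubectl get svc` shows
kubectl port-forward svc/es-kibana 5601:5601 -n logging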
5. Add CRD configuration to collect logs
Create a LogConfig to collect the logs: logconfig.yml
apiVersion: loggie.io/v1beta1
kind: LogConfig
metadata:
  name: applog
  namespace: default
spec:
  selector:
    type: pod
  pipeline:
    sources: |
      - type: file
        name: applog-1
        paths:
          - /data/logs/**/*info.log
    sinkRef: default
    interceptorRef: default
Create an Interceptor to process the logs: interceptor.yml
apiVersion: loggie.io/v1beta1
kind: Interceptor
metadata:
  name: default
spec:
  interceptors: |
    - type: normalize
      name: default
      processors:
        - rename:
            convert:
              - from: "body"
                to: "message"
        - drop:
            targets: ["fields.logconfig", "state.bytes", "state.hostname", "state.offset", "state.pipeline", "state.source"]
Create a Sink to store the logs: sink-es.yml
apiVersion: loggie.io/v1beta1
kind: Sink
metadata:
  name: default
spec:
  sink: |
    type: elasticsearch
    hosts: ["es-elasticsearch:9200"]
    index: "log-${fields.containername}-${+YYYY.MM.DD}"
6. Create an index template
In Kibana, go to Stack Management, open Index Management, select Index Templates, and create a template:
Name: log
Index patterns: log-*
Configure the following in Index settings:
{
  "index": {
    "lifecycle": {
      "name": "del-3d-logs",
      "rollover_alias": "log-alias"
    },
    "number_of_shards": "10",
    "number_of_replicas": "0"
  }
}
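If you prefer the Elasticsearch API over the Kibana UI, the same template can be created from Dev Tools as a composable index template (a minimal sketch; mappings are omitted):
PUT _index_template/log
{
  "index_patterns": ["log-*"],
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "del-3d-logs",
          "rollover_alias": "log-alias"
        },
        "number_of_shards": "10",
        "number_of_replicas": "0"
      }
    }
  }
}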
7. Create an index lifecycle policy
Requirement: delete log indices that are more than 3 days old.
In Index Lifecycle Policies, create a policy named del-3d-logs:
Hot phase:
Change Keep data in this phase forever to Delete data after this phase
Disable all options under Advanced settings
The Warm phase and Cold phase need no changes
Delete phase:
Set Move data into phase when to 3 days
Finally, attach the lifecycle policy del-3d-logs to the log index template.
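The equivalent policy can also be created from Dev Tools (a sketch assuming only the hot and delete phases are used):
PUT _ilm/policy/del-3d-logs
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {}
      },
      "delete": {
        "min_age": "3d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}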