Install Elasticsearch

  • Compared with ES7, ES8 automatically generates and distributes SSL certificates during deployment, so there is no need to issue and configure certificates by hand, which greatly simplifies the setup.

Download the package and set up paths

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.2.0-linux-x86_64.tar.gz
$ tar -zxvf elasticsearch-8.2.0-linux-x86_64.tar.gz
$ mkdir -p /app/elasticsearch
$ cd /app/elasticsearch
$ mv ~/elasticsearch-8.2.0 .
$ mkdir es_data
$ mkdir es_log

Adjust system limits and kernel parameters

sudo rm -rf  /etc/security/limits.d/20-nproc.conf
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft memlock unlimited"  >> /etc/security/limits.conf
echo "* hard memlock unlimited"  >> /etc/security/limits.conf

echo "vm.max_map_count = 655360" >> /etc/sysctl.conf

sysctl -p
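  • A quick sanity check that the limits and the kernel parameter took effect (open a new session so the limits.conf changes apply; the expected values come from the settings above):
$ ulimit -n          # expect 65536
$ ulimit -u          # expect 65536
$ sysctl vm.max_map_count
vm.max_map_count = 655360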

Create the service user and set permissions

$ useradd -s /sbin/nologin -M -r es
$ sudo chown -R es.es /app/elasticsearch

Change the Elasticsearch configuration

Set the Elasticsearch heap size

  • Set the JVM heap size on every Elasticsearch node
$ vi elasticsearch-8.2.0/config/jvm.options
# adjust to the actual hardware
-Xms1g
-Xmx1g

Initialize the master node and configure cluster settings

  • Edit the Elasticsearch configuration
$ vi config/elasticsearch.yml
cluster.name: test-cluster
node.name: node01
path.data: /app/elasticsearch/es_data
path.logs: /app/elasticsearch/es_log
  • Start Elasticsearch and record the credentials generated automatically on first startup
$ sudo -u es bin/elasticsearch
……
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  saSlGOqT2o6GBiDlNjhV

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  03b72491ad950f2660ee2df84c3cdf96a7d4d255536b478ad5610234c7bffc25

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjIuMCIsImFkciI6WyIxNzIuMTYuMi4xNzU6OTIwMCJdLCJmZ3IiOiIwM2I3MjQ5MWFkOTUwZjI2NjBlZTJkZjg0YzNjZGY5NmE3ZDRkMjU1NTM2YjQ3OGFkNTYxMDIzNGM3YmZmYzI1Iiwia2V5IjoiN0ktanJJQUJxWVNEcFdNWEVScFg6UGlHMVk3cmtTdEd3aklIdHRfQ24tZyJ9

ℹ️  Configure other nodes to join this cluster:
• On this node:
  ⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
  ⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
  ⁃ Restart Elasticsearch.
• On other nodes:
  ⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
……
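  • To cross-check the HTTP CA fingerprint printed above against the generated certificate (optional; assumes the default certs path under the Elasticsearch config directory), openssl can print it directly:
$ sudo -u es openssl x509 -in /app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt -noout -fingerprint -sha256
# the colon-separated fingerprint should match the value shown in the startup output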
  • After initialization, Elasticsearch automatically generates the SSL certificates and updates the configuration file; the following settings are appended and need no manual adjustment
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["node01"]
http.host: 0.0.0.0
  • Stop Elasticsearch and adjust the configuration to change the listen address of port 9300

    • Port 9300 listens on 127.0.0.1 only by default; set transport.host so that other nodes can reach it
      • Note: make this change only after the master node has been initialized, otherwise the node fails to start because xpack has not been configured yet
    
    # listens on 127.0.0.1 by default
    $ sudo ss -anlp|grep 9300
    tcp    LISTEN     0      128      [::ffff:127.0.0.1]:9300               [::]:*                   users:(("java",pid=20576,fd=368))
    tcp    LISTEN     0      128       [::1]:9300               [::]:*                   users:(("java",pid=20576,fd=367))
    $ vi config/elasticsearch.yml
    transport.host: 0.0.0.0
    # start Elasticsearch again
    $ sudo -u es bin/elasticsearch
    # port 9300 now listens on all addresses
    $ sudo ss -anlp|grep 9300
    tcp    LISTEN     0      128    [::]:9300               [::]:*                   users:(("java",pid=20989,fd=367))
    
  • Verify the node status

$ ES_HOME=/app/elasticsearch/elasticsearch-8.2.0
$ curl --cacert $ES_HOME/config/certs/http_ca.crt -u elastic https://localhost:9200 
# enter the password generated when the master node was initialized
Enter host password for user 'elastic':
{
  "name" : "node01",
  "cluster_name" : "test-cluster",
  "cluster_uuid" : "cbRUCSWKQcWiM9s-VMTXrg",
  "version" : {
    "number" : "8.2.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "b174af62e8dd9f4ac4d25875e9381ffe2b9282c5",
    "build_date" : "2022-04-20T10:35:10.180408517Z",
    "build_snapshot" : false,
    "lucene_version" : "9.1.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
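  • The _cat APIs give a quick overview of the cluster; at this point only node01 should appear:
$ curl --cacert $ES_HOME/config/certs/http_ca.crt -u elastic https://localhost:9200/_cat/nodes?v
# expect a single row for node01 with its roles and the master marker (*)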

Join additional nodes to the cluster

  • Edit the configuration on each node that is to join the cluster
$ vi config/elasticsearch.yml
cluster.name: test-cluster
node.name: node02
path.data: /app/elasticsearch/es_data
path.logs: /app/elasticsearch/es_log
  • Generate an enrollment token on the initialized master node
# run on node01
$ sudo -u es bin/elasticsearch-create-enrollment-token -s node
eyJ2ZXIiOiI4LjIuMCIsImFkciI6WyIxNzIuMTYuMi4xNzU6OTIwMCJdLCJmZ3IiOiIwM2I3MjQ5MWFkOTUwZjI2NjBlZTJkZjg0YzNjZGY5NmE3ZDRkMjU1NTM2YjQ3OGFkNTYxMDIzNGM3YmZmYzI1Iiwia2V5IjoieGpPX3JJQUI1RE42UnZ0VV92UkI6aFFUWXU4LUpRVDZkbWUtWHhhMFNiQSJ9
  • Start Elasticsearch on the joining node with the enrollment token
sudo -u es bin/elasticsearch --enrollment-token  eyJ2ZXIiOiI4LjIuMCIsImFkciI6WyIxNzIuMTYuMi4xNzU6OTIwMCJdLCJmZ3IiOiIwM2I3MjQ5MWFkOTUwZjI2NjBlZTJkZjg0YzNjZGY5NmE3ZDRkMjU1NTM2YjQ3OGFkNTYxMDIzNGM3YmZmYzI1Iiwia2V5IjoieGpPX3JJQUI1RE42UnZ0VV92UkI6aFFUWXU4LUpRVDZkbWUtWHhhMFNiQSJ9
  • The following settings are appended to the node's elasticsearch.yml automatically
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
discovery.seed_hosts: ["172.16.2.175:9300"]
http.host: 0.0.0.0
transport.host: 0.0.0.0
  • Check the cluster health
$ ES_HOME=/app/elasticsearch/elasticsearch-8.2.0
$ curl --cacert $ES_HOME/config/certs/http_ca.crt -u elastic https://172.16.2.175:9200/_cat/health?v
Enter host password for user 'elastic':
epoch      timestamp cluster      status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1652166781 07:13:01  test-cluster green           3         3      4   2    0    0        0             0                  -                100.0%

Adjust the cluster configuration files

elasticsearch.yml

  • discovery.seed_hosts
    • Because of limitations in the enrollment script, this setting can be incomplete on nodes that enrolled early; set it to the transport addresses of all nodes in the cluster
$ vi config/elasticsearch.yml
discovery.seed_hosts: ["172.16.2.175:9300","172.16.2.176:9300","172.16.2.177:9300"]
  • cluster.initial_master_nodes
    • Set it to the names of all nodes that are allowed to become master
$ vi config/elasticsearch.yml
cluster.initial_master_nodes: ["node01","node02","node03"]

Configure systemd and start the service

  • vi /etc/systemd/system/elasticsearch.service
[Unit]
Description=elasticsearch
After=network.target

[Service]
Type=forking
User=es
Group=es
Restart=no
PIDFile=/app/elasticsearch/es_log/elasticsearch.pid
ExecStart=/app/elasticsearch/elasticsearch-8.2.0/bin/elasticsearch -d -p /app/elasticsearch/es_log/elasticsearch.pid
ExecStop=/usr/bin/pkill -F /app/elasticsearch/es_log/elasticsearch.pid
PrivateTmp=true

LimitNOFILE=65535
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity

[Install]

WantedBy=multi-user.target
  • Start the service and enable it at boot
$ systemctl daemon-reload 
$ systemctl start elasticsearch
$ systemctl enable elasticsearch
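  • Check that the service came up cleanly:
$ systemctl status elasticsearch
$ journalctl -u elasticsearch -n 50 --no-pager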

Install Kibana

Download the package and set up paths

$ wget https://artifacts.elastic.co/downloads/kibana/kibana-8.2.0-linux-x86_64.tar.gz
$ tar -zxvf kibana-8.2.0-linux-x86_64.tar.gz
$ mkdir -p /app/kibana
$ cd /app/kibana
$ mv ~/kibana-8.2.0 .
$ mkdir kibana_logs

Create the service user and set permissions

$ useradd -s /sbin/nologin -M -r es   # skip if the es user already exists on this host
$ sudo chown -R es.es /app/kibana

Change the Kibana configuration

$ cd /app/kibana/kibana-8.2.0/config/
$ cp -rp kibana.yml kibana_bk.yml
$ vim kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.basePath: "/tgkibana"
server.rewriteBasePath: true
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 60000
elasticsearch.shardTimeout: 60000
pid.file: /app/kibana/kibana_logs/kibana.pid
logging:
  appenders:
    file:
      type: file
      fileName: /app/kibana/kibana_logs/kibana.log
      layout:
        type: pattern
  root:
    appenders: [file]
i18n.locale: "zh-CN"

Test startup

  • Start Kibana in the foreground
$ cd /app/kibana/kibana-8.2.0
$ sudo -u es ./bin/kibana
i Kibana has not been configured.
Go to http://0.0.0.0:5601/tgkibana/?code=421792 to get started.
  • Generate a Kibana enrollment token on Elasticsearch
$ cd /app/elasticsearch/elasticsearch-8.2.0
$ sudo -u es ./bin/elasticsearch-create-enrollment-token -s kibana
eyJ2ZXIiOiI4LjIuMCIsImFkciI6WyIxNzIuMTYuMi4xNzU6OTIwMCJdLCJmZ3IiOiIwM2I3MjQ5MWFkOTUwZjI2NjBlZTJkZjg0YzNjZGY5NmE3ZDRkMjU1NTM2YjQ3OGFkNTYxMDIzNGM3YmZmYzI1Iiwia2V5IjoiMTJYZHJZQUJHX1FSVllDVzlfV3o6anBVX2YyeTZTYUdfanNRbFBqekIydyJ9
  • Open 172.16.2.175:5601/tgkibana/?code=421792 in a browser and paste in the token

    • Log in with the elastic user and the password generated when Elasticsearch was initialized
  • After the initial startup, edit the Kibana configuration and list all Elasticsearch nodes

$ vi /app/kibana/kibana-8.2.0/config/kibana.yml
elasticsearch.hosts: ['https://172.16.2.175:9200','https://172.16.2.176:9200','https://172.16.2.177:9200']
  • Disable GeoIP database downloads
$ vi config/elasticsearch.yml
ingest.geoip.downloader.enabled: false

Configure systemd and enable autostart

  • vi /etc/systemd/system/kibana.service
[Unit]
Description=kibana
After=network.target

[Service]
Type=simple
User=es
Group=es
Restart=no
ExecStart=/app/kibana/kibana-8.2.0/bin/kibana
PrivateTmp=true

[Install]

WantedBy=multi-user.target
  • Start the service and enable it at boot
$ sudo systemctl daemon-reload
$ sudo systemctl start kibana
$ sudo systemctl enable kibana

Configure cluster monitoring

  • Choose either built-in X-Pack collection or metricbeat. The official recommendation is metricbeat, but enabling X-Pack collection is simpler.

Built-in X-Pack monitoring

  • Send the following request from Kibana Dev Tools
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true,
    "xpack.monitoring.elasticsearch.collection.enabled": true
  }
}
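  • The same setting can also be applied with curl if Kibana is not available yet (a sketch using the elastic user and the HTTP CA generated during the Elasticsearch install):
$ curl --cacert /app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt \
    -u elastic -X PUT "https://localhost:9200/_cluster/settings" \
    -H 'Content-Type: application/json' \
    -d '{"persistent":{"xpack.monitoring.collection.enabled":true,"xpack.monitoring.elasticsearch.collection.enabled":true}}'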

metricbeat

Download and install the package

$ wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-8.2.0-x86_64.rpm
$ yum -y install metricbeat-8.2.0-x86_64.rpm

Configure the Elasticsearch output that metricbeat writes to

$ vim /etc/metricbeat/metricbeat.yml
# append at the end of the file
output.elasticsearch:
  hosts: ["https://172.16.2.175:9200","https://172.16.2.176:9200","https://172.16.2.177:9200"]
  username: "elastic"
  password: "saSlGOqT2o6GBiDlNjhV"
  ssl:
    enabled: true
    # printed at cluster initialization; it can also be found in the Kibana configuration file
    ca_trusted_fingerprint: "03b72491ad950f2660ee2df84c3cdf96a7d4d255536b478ad5610234c7bffc25"
  • Test the configuration file
$ metricbeat test config
  • Test the connection to Elasticsearch
$ metricbeat test output

Configure metricbeat to monitor Elasticsearch

  • Enable the Elasticsearch module
# list the elasticsearch modules
$ metricbeat modules list|grep elasticsearch
# enable the xpack variant
$ metricbeat modules enable elasticsearch-xpack
  • Create a monitoring user

    • Create a user in the Kibana console and assign it the monitoring_user, remote_monitoring_agent and remote_monitoring_collector roles
    • The examples below use username metricbeat_user and password vGx2VNxXn330wfFE
  • Edit the elasticsearch-xpack module configuration

$ vim /etc/metricbeat/modules.d/elasticsearch-xpack.yml
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["https://localhost:9200"]
  username: "metricbeat_user"
  password: "vGx2VNxXn330wfFE"
  ssl.enabled: true
  ssl.certificate_authorities: "/app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt"
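  • Before running setup, metricbeat's built-in test command can sanity-check the enabled module (an optional check; output varies by version):
$ metricbeat test modules elasticsearch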
  • Initialize the indices in Elasticsearch that store the monitoring data
$ metricbeat setup -e

Start the service and enable it at boot

$ systemctl start metricbeat
$ systemctl enable metricbeat

Install Logstash

Configure the Logstash user in Elasticsearch

Create the logstash role

  • Create logstash_role
    • Cluster privileges: manage_index_templates, monitor
      • If creating the role fails with an error, create it without the monitor privilege first and add it afterwards by editing the role
    • Index privileges on logstash-* nginx-* t-log-* mysql-slow-*: create, write, read, create_index, manage_ilm, manage (or all)

Create logstash_user

  • Create the logstash_user user and assign it the logstash_system and logstash_role roles (an API sketch is shown below)
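  • If you prefer the API over the Kibana console, the role and user above can be created through the security API (a sketch; ES_CA is just a shell variable for brevity, and the password matches the one used later in output.conf, so substitute your own):
$ ES_CA=/app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt
$ curl --cacert $ES_CA -u elastic -X POST "https://localhost:9200/_security/role/logstash_role" \
    -H 'Content-Type: application/json' -d '
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["logstash-*", "nginx-*", "t-log-*", "mysql-slow-*"],
      "privileges": ["create", "write", "read", "create_index", "manage_ilm", "manage"]
    }
  ]
}'
$ curl --cacert $ES_CA -u elastic -X POST "https://localhost:9200/_security/user/logstash_user" \
    -H 'Content-Type: application/json' -d '
{
  "password": "5UuCLGx10ZQABugN",
  "roles": ["logstash_system", "logstash_role"]
}'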

Download the package and set up paths

$ wget https://artifacts.elastic.co/downloads/logstash/logstash-8.2.0-linux-x86_64.tar.gz
$ tar -zxvf logstash-8.2.0-linux-x86_64.tar.gz
$ mkdir -p /app/logstash
$ cd /app/logstash
$ mv ~/logstash-8.2.0 .
$ mkdir logstash_data
$ mkdir logstash_logs

Adjust the Logstash configuration

jvm.options

  • vi /app/logstash/logstash-8.2.0/config/jvm.options
-Xms1g
-Xmx1g

logstash.yml

  • vim /app/logstash/logstash-8.2.0/config/logstash.yml
node.name: logstash01
path.data: /app/logstash/logstash_data
pipeline.id: main
pipeline.workers: 4
pipeline.ordered: auto
path.config: /app/logstash/logstash-8.2.0/config/conf.d/*.conf
config.reload.automatic: true
config.reload.interval: 3s
path.logs: /app/logstash/logstash_logs

startup.options

  • vi /app/logstash/logstash-8.2.0/config/startup.options
LS_HOME=/app/logstash/logstash-8.2.0
LS_SETTINGS_DIR=/app/logstash/logstash-8.2.0/config
LS_PIDFILE=/app/logstash/logstash_logs/logstash.pid
LS_GC_LOG_FILE=/app/logstash/logstash_logs/gc.log

patterns/patterns.conf

  • Define grok extraction patterns according to the actual log formats
$ mkdir patterns
$ vim  patterns/patterns.conf
PASS [\s\S]*?
APIADDRESS [\w\/]*?
MESSAGE [\s\S]*


ALLTIME ^(\d{2}|\d{4})(?:\-)?([0]{1}\d{1}|[1]{1}[0-2]{1})(?:\-)?([0-2]{1}\d{1}|[3]{1}[0-1]{1})(?:\s)?([0-1]{1}\d{1}|[2]{1}[0-3]{1})(?::)?([0-5]{1}\d{1})(?::)?([0-5]{1}\d{1})(?:\.)?[0-9]{3}(?:[[:space:]]\+\d{4})

SAMPLE %{ALLTIME} \[%{PASS}\] %{LOGLEVEL:loglevel} %{JAVACLASS:javaclass} -\[%{APIADDRESS:api}\]%{MESSAGE:get_message}
SAMPLE2 %{ALLTIME} \[%{PASS}\] %{LOGLEVEL:loglevel} %{JAVACLASS:javaclass} -%{MESSAGE:get_message}
SAMPLE3 %{ALLTIME} %{MESSAGE:get_message}
ALLLOG %{MESSAGE:get_message}

Exception_TYPE  (?:(ErrorCodeException|ManagerExceptionHandler)):(?<ExcetionType>(\s+\S+))

LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|INFO|[Ww]arn+(?:ing)?|WARN+(?:ING)?|[Ee]rr+(?:or)?|ERR+(?:OR)?|[Cc]rit+(?:ical)?|CRIT+(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?|信息|警告|错误|严重|[Ww]ARN|[Ww]arn)
#LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?|信息|警告|错误|严重|WARN)




TIME8601 ^(\d{2}|\d{4})(?:\-)?([0]{1}\d{1}|[1]{1}[0-2]{1})(?:\-)?([0-2]{1}\d{1}|[3]{1}[0-1]{1})(?:\s)?T([0-1]{1}\d{1}|[2]{1}[0-3]{1})(?::)?([0-5]{1}\d{1})(?::)?([0-5]{1}\d{1})(?:\.)?\+([0-1]{1}\d{1}|[2]{1}[0-3]{1})(?::)?([0-1]{1}\d{1}|[2]{1}[0-3]{1})

USERIP  ((25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)

HTTPSTATUS \d{3}

APIPATH (((?<=GET\s)|(?<=POST\s))(.*)(?=\?))|((?<=GET\s)|(?<=POST\s))(.*)((?<=js)|(?<=css)|(?<=png)|(?<=jpg)|(?<=gif)|(?<=ico))|((?<=GET\s)|(?<=POST\s))(.*\/\w+)(?=\sHTTP)



TENGINE   %{TIME8601} \| %{SIMPLEIP} \| %{HTTPSTATUS} \| %{APIPATH} \| %{MESSAGE} \| %{SIMPLEIP} \|

conf.d

filter.conf

filter {
    if [agent][type] == "filebeat" {
        # extract the log timestamp
        grok {
                patterns_dir => ["/app/logstash/logstash-8.2.0/config/patterns/patterns.conf"]
                match => {"message" => "%{ALLTIME:log_time}" }
                }
        date {
                match => ["log_time", "yyyy-MM-dd HH:mm:ss.SSS Z"]
                timezone => "Asia/Shanghai"
                }
        # extract the log level
        grok {
                patterns_dir => ["/app/logstash/logstash-8.2.0/config/patterns/patterns.conf"]
                match => { "message" => "%{LOGLEVEL:log_level}"}
                }
    }
  mutate {
    remove_field => ["agent"]
    remove_field => ["ecs"]
    remove_field => ["tags"]
  }
}

input.conf

input {
    beats {
        port => "6122"
    }
}

mysql-filter.conf

filter {
    if [log_type]=="mysql-slow-log" {
 grok {
        match => [ "message" , "(?m)^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IPV4:clientip})?\]\s+Id:\s+%{NUMBER:row_id:int}\n#\s+Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int}\n\s*(?:use %{DATA:database};\s*\n)?SET\s+timestamp=%{NUMBER:timestamp};\n\s*(?<sql>(?<action>\w+)\b.*;)\s*(?:\n#\s+Time)?.*$" ]
    }

    date {
        match => ["timestamp","UNIX"]
        target => "@timestamp"
    }
    mutate {
            remove_field => "@version"
    }
 }
}

nginx-filter.conf

filter {
        if [tag]=~ "nginx-\w+" {
                mutate {
                        gsub => ["message", "\\x", "\\\x"]
                }
                json {
                        source => "message"
                }
                grok {
                    patterns_dir => ["/app/logstash/logstash-8.2.0/config/patterns/patterns.conf"]
                    match => {"request" => "%{APIPATH:api_path}" }
                }
                mutate {
                         gsub => ["upstream_response_time","-","0"]
                         gsub => ["req_time","-","0"]
                }
                grok {
                    patterns_dir => ["/app/logstash/logstash-8.2.0/config/patterns/patterns.conf"]
                    match => { "ip_list" =>"%{USERIP:user_ip}" }
                }
                #geoip {
                #    source => "user_ip"
                #}
                mutate {
                  remove_field => ["tags"]
                }
        }
}

output.conf

output {
    if [@metadata][beat] == "filebeat" {
        if [platform] == "nginx-test"{
            elasticsearch {
                hosts => ["https://172.16.2.175:9200", "https://172.16.2.176:9200","https://172.16.2.177:9200"]
                cacert => "/app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt"
                index => "nginx-test"
                user => "logstash_user"
                password => "5UuCLGx10ZQABugN"
            }
        }
        else if  [platform] == "tomcat-test"{
            elasticsearch {
                hosts => ["https://172.16.2.175:9200", "https://172.16.2.176:9200","https://172.16.2.177:9200"]
                cacert => "/app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt"
                index => "t-log-test"
                user => "logstash_user"
                password => "5UuCLGx10ZQABugN"
            }
        }
        else if [platform] == "mysql-slow-test"{
            elasticsearch {
                hosts => ["https://172.16.2.175:9200", "https://172.16.2.176:9200","https://172.16.2.177:9200"]
                cacert => "/app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt"
                index => "mysql-slow-test"
                user => "logstash_user"
                password => "5UuCLGx10ZQABugN"
            }
        }
    }
}

Create the service user and set permissions

$ useradd -s /sbin/nologin -M -r es   # skip if the es user already exists on this host
$ sudo chown -R es.es /app/logstash
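  • Before wiring Logstash into systemd, the pipeline configuration can be validated from the command line (Logstash's config test flag; assumes the conf.d files above are in place):
$ cd /app/logstash/logstash-8.2.0
$ sudo -u es bin/logstash --path.settings config --config.test_and_exit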

Configure systemd and start the service

  • vim /etc/default/logstash
LS_HOME=/app/logstash/logstash-8.2.0
LS_PIDFILE=/app/logstash/logstash_logs/logstash.pid
LS_USER=es
LS_GROUP=es
LS_GC_LOG_FILE=/app/logstash/logstash_logs/gc.log
  • vi /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=es
Group=es
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/app/logstash/logstash-8.2.0/bin/logstash "--path.settings" "/app/logstash/logstash-8.2.0/config"

Restart=always
WorkingDirectory=/app/logstash/logstash-8.2.0
Nice=19
LimitNOFILE=65535

# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target
  • Start the service and enable it at boot
$ systemctl daemon-reload 
$ systemctl start logstash
$ systemctl enable logstash

Configure lifecycle policies and index templates

Create a lifecycle policy

  • Policy name: iml-t-log (an equivalent API request is sketched after this list)
  • Hot phase
    • Roll over at a maximum primary shard size of 10 GB or a maximum index age of 1 day
  • Warm phase
    • Move data into this phase after 2 days
    • Mark the index read-only
  • Cold phase
    • Move data into this phase after 30 days
    • Mark the index read-only
  • Delete phase
    • Delete data after 60 days
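  • The same policy can be created through the ILM API instead of the Kibana UI (a sketch of the phases described above, run with the elastic user and the HTTP CA from the Elasticsearch install):
$ curl --cacert /app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt \
    -u elastic -X PUT "https://localhost:9200/_ilm/policy/iml-t-log" \
    -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_primary_shard_size": "10gb", "max_age": "1d" } } },
      "warm":   { "min_age": "2d",  "actions": { "readonly": {} } },
      "cold":   { "min_age": "30d", "actions": { "readonly": {} } },
      "delete": { "min_age": "60d", "actions": { "delete": {} } }
    }
  }
}'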

Create an index template

  • The mappings can be adjusted for different log types
PUT _index_template/iml-t-log
{
  "index_patterns": [
    "t-log-*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "refresh_interval": "10s",
      "lifecycle": {
        "name": "iml-t-log",
        "rollover_alias": "t-log-test"
      }
    },
    "mappings": {
      "_source": {
        "excludes": [],
        "includes": [],
        "enabled": true
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
}
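  • To confirm the template is in place before creating any index, it can be fetched back from a shell:
$ curl --cacert /app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt -u elastic \
    "https://localhost:9200/_index_template/iml-t-log?pretty"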

Create the index and bind an alias

PUT /t-log-00001
# if the index template above is already in place, the settings below are not needed
#{
#  "settings": {
#    "index": {
#      "number_of_shards": 3,  
#      "number_of_replicas": 1 
#    }
#  }
#}
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "t-log-00001",
        "alias": "t-log-test"
      }
    }
  ]
}
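  • After the alias is bound, the ILM status of the index can be checked to confirm the policy from the template was applied:
$ curl --cacert /app/elasticsearch/elasticsearch-8.2.0/config/certs/http_ca.crt -u elastic \
    "https://localhost:9200/t-log-00001/_ilm/explain?pretty"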

Create an index pattern (called a Data View in 8.x)

Install Filebeat

$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.2.0-x86_64.rpm
$ sudo yum -y install filebeat-8.2.0-x86_64.rpm
  • Edit the configuration
$ cd /etc/filebeat
$ cp -rp filebeat.yml filebeat.yml.bk
$ vi filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /app/tomcats/api/*/logs/info.log
    - /app/tomcats/api/*/logs/warn.log
    - /app/tomcats/api/*/logs/error.log
    - /app/tomcats/api/*/logs/catalina.20*
  multiline.pattern: '^\d{4}\-\d{2}\-\d{2}\ \d{2}\:\d{2}\:\d{2}'
  multiline.negate: true
  multiline.match: after
  multiline.timeout: 10s
  max_bytes: 50000
  fields_under_root: true
  fields:
     module: test-module
     platform: tomcat-test


name: test-APP01

output.logstash:
  hosts: ["127.0.0.1:6122"]
processors:
  - add_host_metadata: ~

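  • The configuration and the connection to Logstash can be checked with Filebeat's built-in test commands before starting the service:
$ filebeat test config
$ filebeat test output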
Start the service and enable it at boot

$ systemctl start filebeat
$ systemctl enable filebeat