5. Beats Data Collection


Author: 蓝莲花L | Published 2020-02-20 00:45

This article is excerpted from the course: https://study.163.com/course/introduction/1005164019.htm


1. Introduction to the Beats Collection Architecture

Overview

Background

logstash-forwarder was relatively resource-intensive as a collection agent, which motivated the lighter-weight Beats shippers.

Architecture: each Beat runs as a lightweight agent on the monitored host and ships the collected data either directly to Elasticsearch or through Logstash for further processing.

2. Installing Filebeat

Installation

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.3-x86_64.rpm

sudo rpm -vi filebeat-6.2.3-x86_64.rpm
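
The RPM places the main configuration file at /etc/filebeat/filebeat.yml. After editing it, the syntax can be checked with Filebeat's built-in config test (a hedged example for the 6.x CLI; the path is the package default):

sudo filebeat test config -c /etc/filebeat/filebeat.yml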

Configuration

input: the log files to collect

output: Logstash or Elasticsearch
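
For the direct-to-Elasticsearch case, a minimal filebeat.yml sketch (not from the course; the host address reuses the 172.31.44.210 node that appears in the configurations below):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    # collect lines appended to this test file
    - /tmp/test.txt

output.elasticsearch:
  # assumed Elasticsearch address, matching the node used later in these notes
  hosts: ["172.31.44.210:9200"]

The Logstash variant of the same output section is shown in the next step.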

3. Shipping Data from Filebeat to Logstash

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

- type: log 

  enabled: true

  paths:

    - /tmp/test.txt

    #- c:\programdata\elasticsearch\logs\*

#----------------------------- Logstash output --------------------------------

output.logstash:

  # The Logstash hosts

  hosts: ["172.31.44.210:5044"]

  # Optional SSL. By default is off.

  # List of root certificates for HTTPS server verifications

  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication

  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key

  #ssl.key: "/etc/pki/client/cert.key"
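
On the Logstash side, a matching pipeline is needed to accept the Beats connection on port 5044 and forward events to Elasticsearch. A minimal sketch (the file name, index name, and Elasticsearch address are assumptions, not taken from the course):

# /etc/logstash/conf.d/beats.conf (assumed file name)
input {
  beats {
    # listen for Beats connections on the port configured in filebeat.yml
    port => 5044
  }
}

output {
  elasticsearch {
    # assumed Elasticsearch address, matching the node used elsewhere in these notes
    hosts => ["172.31.44.210:9200"]
    # assumed index naming; adjust as needed
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}

With the pipeline running, Filebeat can be started as a service and a test line appended to the watched file to confirm that events flow through (assuming a systemd-based host):

sudo systemctl start filebeat
echo "hello filebeat" >> /tmp/test.txt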

4. Installing Packetbeat

Installation

sudo yum install libpcap

curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.3-x86_64.rpm

sudo rpm -vi packetbeat-6.2.3-x86_64.rpm

Configuration

Recognized protocols: ICMP, DNS, HTTP, MySQL, Redis, and others

input: a network interface (can be set to any to sniff all interfaces)

output: Elasticsearch
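
Before picking an interface, the available network devices can be listed with Packetbeat's helper subcommand (a hedged example; in 6.x this should be packetbeat devices):

sudo packetbeat devices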

5. Collecting Network Traffic into Elasticsearch with Packetbeat

#============================== Network device ================================

# Select the network interface to sniff the data. On Linux, you can use the

# "any" keyword to sniff on all connected interfaces.

packetbeat.interfaces.device: any

#================================== Flows =====================================

# Set `enabled: false` or comment out all options to disable flows reporting.

packetbeat.flows:

  # Set network flow timeout. Flow is killed if no packet is received before being

  # timed out.

  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported

  period: 10s

#========================== Transaction protocols =============================

packetbeat.protocols:

- type: icmp

  # Enable ICMPv4 and ICMPv6 monitoring. Default: false

  enabled: true

- type: amqp

  # Configure the ports where to listen for AMQP traffic. You can disable

  # the AMQP protocol by commenting out the list of ports.

  ports: [5672]

- type: cassandra

  #Cassandra port for traffic monitoring.

  ports: [9042]

- type: dns

  # Configure the ports where to listen for DNS traffic. You can disable

  # the DNS protocol by commenting out the list of ports.

  ports: [53]

  # include_authorities controls whether or not the dns.authorities field

  # (authority resource records) is added to messages.

  include_authorities: true

  # include_additionals controls whether or not the dns.additionals field

  # (additional resource records) is added to messages.

  include_additionals: true

- type: http

  # Configure the ports where to listen for HTTP traffic. You can disable

  # the HTTP protocol by commenting out the list of ports.

  ports: [80, 8080, 8000, 5000, 8002]

- type: memcache

  # Configure the ports where to listen for memcache traffic. You can disable

  # the Memcache protocol by commenting out the list of ports.

  ports: [11211]

- type: mysql

  # Configure the ports where to listen for MySQL traffic. You can disable

  # the MySQL protocol by commenting out the list of ports.

  ports: [3306]

- type: pgsql

  # Configure the ports where to listen for Pgsql traffic. You can disable

  # the Pgsql protocol by commenting out the list of ports.

  ports: [5432]

- type: redis

  # Configure the ports where to listen for Redis traffic. You can disable

  # the Redis protocol by commenting out the list of ports.

  ports: [6379]

- type: thrift

  # Configure the ports where to listen for Thrift-RPC traffic. You can disable

  # the Thrift-RPC protocol by commenting out the list of ports.

  ports: [9090]

- type: mongodb

  # Configure the ports where to listen for MongoDB traffic. You can disable

  # the MongoDB protocol by commenting out the list of ports.

  ports: [27017]

- type: nfs

  # Configure the ports where to listen for NFS traffic. You can disable

  # the NFS protocol by commenting out the list of ports.

  ports: [2049]

- type: tls

  # Configure the ports where to listen for TLS traffic. You can disable

  # the TLS protocol by commenting out the list of ports.

  ports: [443]

#==================== Elasticsearch template setting ==========================

setup.template.settings:

  index.number_of_shards: 3

  #index.codec: best_compression

  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group

# all the transactions sent by a single shipper in the web interface.

#name:

# The tags of the shipper are included in their own field with each

# transaction published.

#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the

# output.

#fields:

#  env: staging

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading

# the dashboards is disabled by default and can be enabled either by setting the

# options here, or by using the `-setup` CLI flag or the `setup` command.

#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL

# has a value which is computed based on the Beat name and version. For released

# versions, this URL points to the dashboard archive on the artifacts.elastic.co

# website.

#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.

# This requires a Kibana endpoint configuration.

setup.kibana:

  # Kibana Host

  # Scheme and port can be left out and will be set to the default (http and 5601)

# In case you specify an additional path, the scheme is required: http://localhost:5601/path

# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601

  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and

# `setup.kibana.host` options.

# You can find the `cloud.id` in the Elastic Cloud web UI.

#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and

# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.

#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------

output.elasticsearch:

  # Array of hosts to connect to.

  hosts: ["172.31.44.210:9200"]

  # Optional protocol and basic auth credentials.

  #protocol: "https"

  #username: "elastic"

  #password: "changeme"

#----------------------------- Logstash output --------------------------------

#output.logstash:

  # The Logstash hosts

 # hosts: ["172.31.44.210:5033"]

  # Optional SSL. By default is off.

  # List of root certificates for HTTPS server verifications

  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication

  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key

  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.

# Available log levels are: error, warning, info, debug

#logging.level: debug

# At debug level, you can selectively enable logging only for some components.

# To enable all selectors use ["*"]. Examples of other selectors are "beat",

# "publish", "service".#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================

# packetbeat can export internal metrics to a central Elasticsearch monitoring

# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The

# reporting is disabled by default.

# Set to true to enable the monitoring reporter.

#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the

# Elasticsearch output are accepted here as well. Any setting that is not set is

# automatically inherited from the Elasticsearch output configuration, so if you

# have the Elasticsearch output configured, you can simply uncomment the

# following line.

#xpack.monitoring.elasticsearch:
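
After saving the configuration, Packetbeat can be started and the result checked in Elasticsearch. A hedged example (the systemd service comes with the RPM; the Elasticsearch address is the one configured in output.elasticsearch above):

sudo systemctl start packetbeat

# generate some traffic on a monitored port, e.g. an HTTP request
curl -s http://localhost:80/ > /dev/null

# verify that a packetbeat-* index has been created
curl "172.31.44.210:9200/_cat/indices?v" | grep packetbeat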

