# Elasticsearch

## Installation

```sh
rpm -ivh elasticsearch-7.11.1-x86_64.rpm
mkdir -p /home/elasticsearch/lib
mkdir -p /home/elasticsearch/log
chown -R elasticsearch:elasticsearch /home/elasticsearch/lib
chown -R elasticsearch:elasticsearch /home/elasticsearch/log
sudo /bin/systemctl daemon-reload
sudo systemctl start elasticsearch.service
sudo /bin/systemctl enable elasticsearch.service

# Generate a CA and a transport certificate for TLS between nodes,
# then install the certificate where elasticsearch.yml expects it
cd /usr/share/elasticsearch/
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
cp elastic-certificates.p12 /etc/elasticsearch/elastic-certificates.p12
chown -R elasticsearch:elasticsearch /etc/elasticsearch/elastic-certificates.p12

# Set passwords for the built-in users (elastic, kibana_system, ...)
bin/elasticsearch-setup-passwords interactive
```

## Configuration

```yaml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: bsw
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-3
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.0.13
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.0.10", "192.168.0.15", "192.168.0.13"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-0", "node-5", "node-3"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

# ---------------------------------- Security ----------------------------------
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
```

# Kibana

## Installation

```shell
rpm -ivh kibana-7.11.1-x86_64.rpm
```

# Logstash

## Installation

```shell
rpm -ivh logstash-7.11.1-x86_64.rpm
```

Configure `kafka.conf`:

```
# Logstash configuration for a
# Kafka -> Logstash -> Elasticsearch pipeline.
input {
  kafka {
    bootstrap_servers => "192.168.0.10:9092,192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092,192.168.0.14:9092"
    topics            => ["logstash"]
    consumer_threads  => 5
    decorate_events   => true
    codec             => "json"
    auto_offset_reset => "latest"
  }
}

filter {
  if [@metadata][kafka][topic] == "logstash" {
    mutate {
      # Copy the app field into @metadata so it can drive the index name,
      # then drop it from the event itself
      add_field    => { "[@metadata][kafka][app]" => "%{[app]}" }
      remove_field => "app"
    }
  }
}

output {
  if [@metadata][kafka][topic] == "logstash" {
    elasticsearch {
      hosts => ["http://192.168.0.10:9200"]
      index => "log-%{[@metadata][kafka][app]}-%{+YYYY.MM.dd}"
      #index => "log-%{[@metadata][kafka][topic]}-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "asdf*123"
    }
  }
  stdout { codec => rubydebug }
}
```

# Metricbeat

```shell
rpm -vi metricbeat-7.11.1-x86_64.rpm
cd /usr/share/metricbeat/
# Collect Elasticsearch monitoring metrics in x-pack format
metricbeat modules enable elasticsearch-xpack
service metricbeat start
systemctl enable metricbeat
```
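Because security is enabled on the cluster, Kibana must authenticate to Elasticsearch before it can start. A minimal sketch of the matching `/etc/kibana/kibana.yml` entries; the host values and the placeholder password are assumptions for this cluster (the real `kibana_system` password is the one entered during `elasticsearch-setup-passwords interactive`):

```yaml
# /etc/kibana/kibana.yml -- sketch; hosts and credentials are assumptions
server.host: "192.168.0.13"
elasticsearch.hosts: ["http://192.168.0.13:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "<kibana_system password from setup-passwords>"
```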
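Enabling `elasticsearch-xpack` drops a module file under `/etc/metricbeat/modules.d/`, which still needs to be pointed at the secured cluster. A sketch of `/etc/metricbeat/modules.d/elasticsearch-xpack.yml`, with host and credentials as assumptions for this environment:

```yaml
# /etc/metricbeat/modules.d/elasticsearch-xpack.yml -- sketch; values are assumptions
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://192.168.0.13:9200"]
  username: "remote_monitoring_user"
  password: "<remote_monitoring_user password from setup-passwords>"
```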
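With `xpack.security.enabled: true`, anonymous requests are rejected, so a quick way to confirm the Elasticsearch node came up is an authenticated health call. A minimal sketch, assuming the `network.host` from the sample config above and the `elastic` password chosen during `elasticsearch-setup-passwords interactive`:

```shell
# Node address is an assumption taken from network.host in the sample config.
ES_URL="http://192.168.0.13:9200"

# curl prompts for the elastic user's password; fall through with a note
# if the cluster is not reachable from this machine.
curl -s -u elastic "${ES_URL}/_cluster/health?pretty" || echo "cluster not reachable"
```

A formed cluster reports `"status" : "green"` (or `yellow` while replicas are still being allocated).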
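The `index` setting in the Logstash output expands per event: `%{[@metadata][kafka][app]}` is the `app` field copied in the filter, and `%{+YYYY.MM.dd}` is the event's `@timestamp` formatted in UTC. A sketch of the resulting index name, using a hypothetical app value:

```shell
APP="myapp"                    # hypothetical value of the event's app field
DAY="$(date -u +%Y.%m.%d)"     # same layout Logstash derives from @timestamp
INDEX="log-${APP}-${DAY}"
echo "$INDEX"
```

One index per app per day keeps deletion-based retention cheap; the commented-out `index` line switches the scheme to one index per Kafka topic instead. The pipeline can be syntax-checked with `/usr/share/logstash/bin/logstash -f kafka.conf --config.test_and_exit` before starting the service.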