Log Management and Analysis with Elasticsearch

Installing Elasticsearch, integrating Logstash and Kibana, and collecting, indexing, and analyzing logs. A guide to managing the ELK Stack.



Elasticsearch is a distributed search and analytics engine that lets you search, analyze, and visualize large volumes of data in real time. With the ELK Stack (Elasticsearch, Logstash, Kibana) you can build a comprehensive log management system.

Installing Elasticsearch

Installation on Ubuntu/Debian

```shell
# Install Java (required by Elasticsearch)
sudo apt update
sudo apt install openjdk-11-jdk

# Add the Elasticsearch GPG key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

# Add the Elasticsearch repository
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

# Install Elasticsearch
sudo apt update
sudo apt install elasticsearch

# Enable and start the service
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

# Check the status
sudo systemctl status elasticsearch
```

Installation on CentOS/RHEL

```shell
# Install Java
sudo yum install java-11-openjdk

# Add the Elasticsearch repository
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF

# Install Elasticsearch
sudo yum install --enablerepo=elasticsearch elasticsearch

# Start the service
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
```

Configuring Elasticsearch

Main Configuration File (/etc/elasticsearch/elasticsearch.yml)

```yaml
# Cluster settings
cluster.name: production-logs
node.name: node-1
node.roles: [ master, data, ingest ]

# Network settings
network.host: localhost
http.port: 9200
transport.port: 9300

# Discovery settings
discovery.type: single-node
# For a multi-node cluster:
# discovery.seed_hosts: ["host1", "host2"]
# cluster.initial_master_nodes: ["node-1", "node-2"]

# Path settings
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# Memory settings
bootstrap.memory_lock: true

# Security settings (X-Pack)
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true

# Index settings
action.auto_create_index: true
action.destructive_requires_name: true
```

JVM Heap Settings (/etc/elasticsearch/jvm.options)

```
# Set the heap to half of system RAM (32 GB at most)
-Xms4g
-Xmx4g

# GC settings
-XX:+UseG1GC
-XX:G1HeapRegionSize=16m
-XX:+UseLargePages
-XX:+UnlockExperimentalVMOptions
-XX:+UseTransparentHugePages

# Take a heap dump if the JVM runs out of memory
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/elasticsearch
```
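
The sizing rule (half of RAM, capped just below 32 GB so compressed object pointers stay enabled) can be sketched as a tiny helper. `heap_for_mb` is a hypothetical name, and the 31744 MB cap is a conservative stand-in for the compressed-oops threshold, which varies slightly by JVM:

```shell
# Pick a heap size in MB: half of total RAM, capped at 31744 MB
# so the JVM keeps using compressed object pointers.
heap_for_mb() {
    total_mb=$1
    half=$((total_mb / 2))
    cap=31744
    if [ "$half" -gt "$cap" ]; then
        echo "$cap"
    else
        echo "$half"
    fi
}

heap_for_mb 8192     # 8 GB host  -> 4096
heap_for_mb 131072   # 128 GB host -> capped at 31744
```

The result maps directly to the `-Xms`/`-Xmx` values above; both must be set to the same number.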

System Settings

```shell
# Raise the memory lock limit
echo "elasticsearch soft memlock unlimited" | sudo tee -a /etc/security/limits.conf
echo "elasticsearch hard memlock unlimited" | sudo tee -a /etc/security/limits.conf

# Raise the file descriptor limit
echo "elasticsearch soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "elasticsearch hard nofile 65536" | sudo tee -a /etc/security/limits.conf

# Virtual memory setting
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Restart Elasticsearch
sudo systemctl restart elasticsearch
```

Installing and Configuring Logstash

Installing Logstash

```shell
# Install Logstash (the Elastic repository is already configured)
sudo apt install logstash

# Enable the service
sudo systemctl enable logstash
```

Basic Logstash Configuration

```
# /etc/logstash/conf.d/apache-logs.conf
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
    type => "apache_access"
  }
  file {
    path => "/var/log/apache2/error.log"
    start_position => "beginning"
    type => "apache_error"
  }
  beats {
    port => 5044
  }
}

filter {
  if [type] == "apache_access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    mutate {
      convert => { "response" => "integer" }
      convert => { "bytes" => "integer" }
    }
    if [clientip] {
      geoip {
        source => "clientip"
        target => "geoip"
      }
    }
  }
  if [type] == "apache_error" {
    grok {
      match => { "message" => "\[%{HTTPDATE:timestamp}\] \[%{WORD:level}\] %{GREEDYDATA:error_message}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-logs-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
```
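
To get a feel for what `%{COMBINEDAPACHELOG}` extracts, the key fields of a combined-format line can be pulled out positionally with plain `awk`. This is only a rough sketch of the field layout, not the real grok pattern, and the log line is a fabricated sample:

```shell
# A sample Apache combined-format access log line.
line='192.168.1.10 - frank [10/Oct/2023:13:55:36 +0300] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0"'

# Positional extraction: field 1 is the client IP,
# field 9 the status code, field 10 the byte count.
clientip=$(echo "$line" | awk '{print $1}')
response=$(echo "$line" | awk '{print $9}')
bytes=$(echo "$line" | awk '{print $10}')

echo "clientip=$clientip response=$response bytes=$bytes"
```

These are exactly the fields the filter above converts to integers (`response`, `bytes`) and feeds into the geoip lookup (`clientip`).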

Syslog Configuration

```
# /etc/logstash/conf.d/syslog.conf
input {
  syslog {
    port => 514
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:server} %{PROG:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" }
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```

Starting Logstash

```shell
# Test the configuration
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

# Start Logstash
sudo systemctl start logstash
sudo systemctl status logstash
```

Installing and Configuring Kibana

Installing Kibana

```shell
# Install Kibana
sudo apt install kibana

# Enable the service
sudo systemctl enable kibana
```

Kibana Configuration (/etc/kibana/kibana.yml)

```yaml
# Server settings
server.port: 5601
server.host: "localhost"
server.name: "kibana-server"

# Elasticsearch connection
elasticsearch.hosts: ["http://localhost:9200"]

# Logging
logging.dest: /var/log/kibana/kibana.log
logging.verbose: false

# Security (X-Pack)
xpack.security.enabled: false
xpack.monitoring.enabled: true

# Index pattern
kibana.index: ".kibana"
```

Starting Kibana

```shell
# Start Kibana
sudo systemctl start kibana
sudo systemctl status kibana

# Web interface:
# http://localhost:5601
```

Collecting Logs with Filebeat

Installing Filebeat

```shell
# Install Filebeat
sudo apt install filebeat

# Enable the service
sudo systemctl enable filebeat
```

Filebeat Configuration (/etc/filebeat/filebeat.yml)

```yaml
# Input settings
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/*.log
      - /var/log/nginx/*.log
    fields:
      logtype: webserver
    fields_under_root: true

  - type: log
    enabled: true
    paths:
      - /var/log/syslog
      - /var/log/auth.log
    fields:
      logtype: system
    fields_under_root: true

# Output settings
output.logstash:
  hosts: ["localhost:5044"]

# Alternative: send directly to Elasticsearch
# output.elasticsearch:
#   hosts: ["localhost:9200"]
#   index: "filebeat-%{+yyyy.MM.dd}"

# Logging
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
```

Starting Filebeat

```shell
# Test the configuration
sudo filebeat test config
sudo filebeat test output

# Start Filebeat
sudo systemctl start filebeat
sudo systemctl status filebeat
```

Index Templates and Mappings

Creating an Index Template

```shell
# Template for Apache logs
curl -X PUT "localhost:9200/_template/apache-logs" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["apache-logs-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.refresh_interval": "30s"
  },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "clientip": { "type": "ip" },
      "response": { "type": "integer" },
      "bytes": { "type": "long" },
      "verb": { "type": "keyword" },
      "request": { "type": "text", "analyzer": "standard" },
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" },
          "country_name": { "type": "keyword" },
          "city_name": { "type": "keyword" }
        }
      }
    }
  }
}'
```

Index Lifecycle Management (ILM)

```shell
# Create an ILM policy
curl -X PUT "localhost:9200/_ilm/policy/logs-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "5GB", "max_age": "7d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "allocate": { "number_of_replicas": 0 }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          "allocate": { "number_of_replicas": 0 }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}'
```
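
A policy does nothing on its own: an index or template must reference it. Assuming the `apache-logs` template from the previous section, the policy and a rollover alias can be attached through the template settings. The alias name `logs-alias` here is a hypothetical example:

```shell
# Reference the ILM policy from the index template
curl -X PUT "localhost:9200/_template/apache-logs-ilm" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["apache-logs-*"],
  "settings": {
    "index.lifecycle.name": "logs-policy",
    "index.lifecycle.rollover_alias": "logs-alias"
  }
}'
```

New indices matching the pattern then enter the hot phase automatically and roll over per the policy.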

Using the Elasticsearch API

Basic Queries

```shell
# Cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"

# Node information
curl -X GET "localhost:9200/_nodes?pretty"

# List indices
curl -X GET "localhost:9200/_cat/indices?v"

# Index statistics
curl -X GET "localhost:9200/apache-logs-*/_stats?pretty"
```

Search Queries

```shell
# Simple search (the status code lives in the "response" field)
curl -X GET "localhost:9200/apache-logs-*/_search?q=response:404&pretty"

# JSON query
curl -X GET "localhost:9200/apache-logs-*/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "range": { "@timestamp": { "gte": "now-1h" } } },
        { "term": { "response": 404 } }
      ]
    }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ],
  "size": 100
}'

# Aggregation query
curl -X GET "localhost:9200/apache-logs-*/_search" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "status_codes": {
      "terms": { "field": "response", "size": 10 }
    },
    "top_ips": {
      "terms": { "field": "clientip", "size": 10 }
    }
  }
}'
```
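
The `q=` parameter accepts Lucene query-string syntax, so ad-hoc filters can be composed in the shell before being passed to curl. `build_query` is a hypothetical helper, not an Elasticsearch tool:

```shell
# Compose a Lucene query string for the ?q= search parameter:
# a field:value term plus a relative time range on @timestamp.
build_query() {
    field=$1; value=$2; window=$3
    echo "${field}:${value} AND @timestamp:[now-${window} TO now]"
}

build_query response 404 1h
# -> response:404 AND @timestamp:[now-1h TO now]
```

Remember to URL-encode the result (spaces, brackets) before embedding it in a query string.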

Creating Kibana Dashboards

Creating an Index Pattern

  1. Open the Kibana web interface (http://localhost:5601)
  2. Go to Management > Stack Management > Index Patterns
  3. Click "Create index pattern"
  4. Index pattern: apache-logs-*
  5. Time field: @timestamp
  6. Click "Create index pattern"

Creating Visualizations

```
// Status Code Distribution (Pie Chart)
{
  "aggs": {
    "2": {
      "terms": {
        "field": "response",
        "size": 10,
        "order": { "_count": "desc" }
      }
    }
  }
}

// Request Volume Over Time (Line Chart)
{
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1h",
        "time_zone": "Europe/Istanbul",
        "min_doc_count": 1
      }
    }
  }
}

// Top IP Addresses (Data Table)
{
  "aggs": {
    "2": {
      "terms": {
        "field": "clientip",
        "size": 20,
        "order": { "_count": "desc" }
      },
      "aggs": {
        "3": {
          "cardinality": { "field": "request.keyword" }
        }
      }
    }
  }
}
```

Performance Optimization

Elasticsearch Optimization

```shell
# Index settings
curl -X PUT "localhost:9200/apache-logs-*/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "refresh_interval": "30s",
    "number_of_replicas": 0,
    "translog.flush_threshold_size": "1gb"
  }
}'

# Force merge (for old indices)
curl -X POST "localhost:9200/apache-logs-2024.01.*/_forcemerge?max_num_segments=1"

# Clear caches
curl -X POST "localhost:9200/_cache/clear"
```
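
Because indices are created per day, maintenance commands like force merge can also be driven from a generated list of index names instead of a wildcard. The date range below is illustrative:

```shell
# Generate the daily index names for January 2024; each name
# could then be passed to a _forcemerge or _settings call.
for day in $(seq -w 1 31); do
    echo "apache-logs-2024.01.$day"
done
```

Iterating per index keeps each force-merge request small, which is gentler on a busy cluster than merging a whole month's wildcard at once.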

Shard Optimization

```shell
# Check shard sizes
curl -X GET "localhost:9200/_cat/shards?v&s=store:desc"
```

Set the shard count in the index template:

```json
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```
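
A common rule of thumb keeps individual shards in the tens of gigabytes. A primary-shard count can then be derived from the expected index size; the 30 GB target below is an assumption for illustration, not a fixed Elasticsearch limit:

```shell
# Primary shards needed to keep each shard near a 30 GB target
# (integer ceiling division).
shards_for_gb() {
    size_gb=$1
    target=30
    echo $(( (size_gb + target - 1) / target ))
}

shards_for_gb 5    # small daily index -> 1 shard
shards_for_gb 100  # large index -> 4 shards
```

Oversharding (many tiny shards) wastes heap on cluster state; undersharding (few huge shards) slows recovery and rebalancing.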

Monitoring and Alerting

Elasticsearch Monitoring

```shell
#!/bin/bash
# Cluster health monitoring script
HEALTH=$(curl -s "localhost:9200/_cluster/health" | jq -r '.status')
if [ "$HEALTH" != "green" ]; then
    echo "Elasticsearch cluster health is $HEALTH" | mail -s "ES Alert" [email protected]
fi

# Disk usage check
DISK_USAGE=$(curl -s "localhost:9200/_nodes/stats" | jq '.nodes[].fs.total.available_in_bytes')
```
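
For cron or Nagios-style checks it helps to map the cluster color to an exit status rather than mailing on every run. `health_exit_code` is a hypothetical helper that would wrap the same `jq` extraction used above:

```shell
# Map an Elasticsearch cluster health color to a
# monitoring-friendly exit code (0=OK, 1=WARN, 2=CRIT, 3=UNKNOWN).
health_exit_code() {
    case "$1" in
        green)  echo 0 ;;
        yellow) echo 1 ;;
        red)    echo 2 ;;
        *)      echo 3 ;;  # unknown status or unreachable cluster
    esac
}

health_exit_code green
health_exit_code red
```

A monitoring system can then alert on the exit code directly, with yellow (missing replicas) treated as a warning and red (missing primaries) as critical.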

Alerting with Watcher (X-Pack)

```json
{
  "trigger": {
    "schedule": { "interval": "5m" }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": ["apache-logs-*"],
        "body": {
          "query": {
            "bool": {
              "must": [
                { "range": { "@timestamp": { "gte": "now-5m" } } },
                { "term": { "response": 500 } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": { "gt": 10 }
    }
  },
  "actions": {
    "send_email": {
      "email": {
        "to": ["[email protected]"],
        "subject": "High 500 Error Rate",
        "body": "More than 10 500 errors in the last 5 minutes"
      }
    }
  }
}
```

Backup and Restore

Creating a Snapshot Repository

```shell
# Create the snapshot directory
sudo mkdir -p /backup/elasticsearch
sudo chown elasticsearch:elasticsearch /backup/elasticsearch

# Register the repository
curl -X PUT "localhost:9200/_snapshot/backup_repo" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/backup/elasticsearch",
    "compress": true
  }
}'
```
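
Note that a filesystem (`fs`) repository is only accepted if its location is whitelisted in `elasticsearch.yml` first; without a `path.repo` entry the registration call above fails:

```shell
# Whitelist the backup path, then restart so it takes effect
echo 'path.repo: ["/backup/elasticsearch"]' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch
```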

Taking Snapshots

```shell
# Manual snapshot
curl -X PUT "localhost:9200/_snapshot/backup_repo/snapshot_$(date +%Y%m%d_%H%M%S)" -H 'Content-Type: application/json' -d'
{
  "indices": "apache-logs-*,syslog-*",
  "ignore_unavailable": true,
  "include_global_state": false
}'

# Check snapshot status
curl -X GET "localhost:9200/_snapshot/backup_repo/_all?pretty"
```

Automated Snapshot Script

```shell
#!/bin/bash
# /usr/local/bin/elasticsearch_backup.sh
SNAPSHOT_NAME="snapshot_$(date +%Y%m%d_%H%M%S)"
RETENTION_DAYS=30

# Take a snapshot
curl -X PUT "localhost:9200/_snapshot/backup_repo/$SNAPSHOT_NAME" -H 'Content-Type: application/json' -d'
{
  "indices": "*",
  "ignore_unavailable": true,
  "include_global_state": false
}'

# Delete snapshots older than the retention window
CUTOFF_DATE=$(date -d "$RETENTION_DAYS days ago" +%Y%m%d)
curl -s "localhost:9200/_snapshot/backup_repo/_all" | jq -r '.snapshots[].snapshot' | while read snapshot; do
    SNAPSHOT_DATE=$(echo "$snapshot" | grep -o '[0-9]\{8\}')
    if [ "$SNAPSHOT_DATE" -lt "$CUTOFF_DATE" ]; then
        curl -X DELETE "localhost:9200/_snapshot/backup_repo/$snapshot"
    fi
done
```
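
The retention logic hinges on pulling the date stamp back out of a snapshot name and comparing it numerically. That extraction can be checked in isolation; `snapshot_20240101_020000` is a made-up name in the same format the script generates:

```shell
# Extract the YYYYMMDD stamp from a snapshot name and
# compare it against a retention cutoff.
name="snapshot_20240101_020000"
snap_date=$(echo "$name" | grep -o '[0-9]\{8\}')
cutoff=20240201

if [ "$snap_date" -lt "$cutoff" ]; then
    echo "expired"
else
    echo "kept"
fi
```

The `grep -o '[0-9]\{8\}'` only matches the 8-digit date; the 6-digit time suffix is too short to match, so exactly one stamp comes back.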

Troubleshooting

Common Problems and Solutions

  1. Cluster Yellow/Red Status

```shell
# Check unassigned shards
curl -X GET "localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&v"

# Reduce the replica count
curl -X PUT "localhost:9200/*/_settings" -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 0 }
}'
```

  2. High Memory Usage

```shell
# Clear the field data cache
curl -X POST "localhost:9200/_cache/clear?fielddata=true"

# Circuit breaker settings
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "indices.breaker.fielddata.limit": "30%"
  }
}'
```

  3. Slow Queries

```shell
# Enable the slow log
curl -X PUT "localhost:9200/apache-logs-*/_settings" -H 'Content-Type: application/json' -d'
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.query.debug": "2s",
  "index.search.slowlog.threshold.query.trace": "500ms"
}'
```
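
Once thresholds are set, slow queries land in the index search slow log (by default under `/var/log/elasticsearch/`). The `took[...]` portion of an entry can be grepped out for quick triage; the log line below is a fabricated sample of the 7.x format:

```shell
# Pull the query duration out of a sample slow-log entry.
entry='[2024-01-15T10:00:00,123][WARN ][i.s.s.query] [node-1] [apache-logs-2024.01.15][0] took[12.3s], took_millis[12300], total_hits[52 hits]'
took=$(echo "$entry" | grep -o 'took\[[^]]*\]' | head -1)
echo "$took"
```

The same one-liner applied over the whole log file gives a rough distribution of slow-query durations without any extra tooling.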

Conclusion

With Elasticsearch and the ELK Stack you can build a comprehensive log management system. Topics covered in this guide:

  • Installing Elasticsearch, Logstash, and Kibana
  • Log collection and processing
  • Index management and optimization
  • Monitoring and alerting
  • Backup and restore
  • Performance tuning
  • Troubleshooting

With correct configuration and regular maintenance, you can operate a large-scale log analysis and monitoring system.