Optimizing Elasticsearch Master Node for Cluster Stability

December 10, 2024 · 4 min read

The master node is responsible for lightweight cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. A stable master node is therefore essential for overall cluster health.

This guide provides best practices for optimizing the master node to ensure a stable and robust Elasticsearch cluster.

Dedicated Master Nodes

Why It Matters

  • Mixing master-eligible nodes with data nodes or coordinating nodes can overburden the master node.
  • A dedicated master node focuses solely on cluster management tasks, ensuring consistent performance.

Recommendation

  • Use three dedicated master-eligible nodes to maintain quorum and prevent split-brain scenarios.
  • In Elasticsearch versions before 7.9, configure these nodes with node.master: true and node.data: false:
node.master: true
node.data: false

In Elasticsearch 7.9 and later, use the node.roles setting instead:

node.roles: [ master ]
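
Once the nodes are running, you can verify which roles each node actually holds; a quick check with the _cat/nodes API (the column list below is just one convenient choice) shows the node roles and marks the elected master with an asterisk:

GET _cat/nodes?v&h=name,node.role,master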

Ensure Sufficient Resources

CPU and RAM

  • Allocate enough resources to the master nodes for smooth operation:
    • RAM: 4-8 GB of JVM heap is generally sufficient for most use cases; plan total RAM at roughly twice the heap.
    • CPU: 2-4 cores dedicated to master node tasks.

Storage

  • Master nodes do not store large volumes of data but need fast storage for logs and cluster metadata.
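
To check whether the master-eligible nodes stay within these budgets, you can pull heap, RAM, CPU, and disk figures from the _cat/nodes API (the column list below is illustrative):

GET _cat/nodes?v&h=name,node.role,heap.percent,ram.percent,cpu,disk.used_percent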

Heap Size Configuration

Why It Matters

  • JVM heap size directly impacts the performance of master nodes.

Recommendation

  • Allocate about 50% of the available system memory to the JVM heap, but no more than ~32 GB so the JVM can keep using compressed object pointers; set the minimum and maximum heap (-Xms and -Xmx) to the same value.
  • Example: If a node has 8 GB of memory, set the heap size to 4 GB.
export ES_JAVA_OPTS="-Xms4g -Xmx4g"
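
The environment variable is handy for quick tests; for a permanent setting on recent versions (7.7 and later), the heap is usually defined in a small override file under config/jvm.options.d/ (the file name below is arbitrary):

# config/jvm.options.d/heap.options
-Xms4g
-Xmx4g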

Network Configuration

Stable Communication is Critical

  • A slow or unreliable network can lead to cluster instability.

Best Practices

  • Use a low-latency, high-bandwidth network.
  • Configure discovery.seed_hosts with the addresses of the master-eligible nodes, and set cluster.initial_master_nodes (only when bootstrapping a brand-new cluster) to their node names:
discovery.seed_hosts: ["master1.example", "master2.example", "master3.example"]
cluster.initial_master_nodes: ["master1", "master2", "master3"]
  • In Elasticsearch 6.x and earlier, set the minimum master node quorum to prevent split-brain (version 7.0 and later manages quorum automatically and ignores this setting):
discovery.zen.minimum_master_nodes: 2  # For a three-node master setup
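
Once the nodes can reach each other, a quick way to confirm that discovery succeeded and to see which node was elected master is the _cat/master API:

GET _cat/master?v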

Cluster State Management

Why It Matters

  • The master node updates and propagates the cluster state across the cluster.
  • A large cluster state can cause performance bottlenecks.

Optimization Techniques

  • Minimize the number of shards per node.
    • Recommended: aim for no more than 20 shards per GB of heap memory.
  • Regularly delete unused indices to reduce cluster state size.
  • Use the _cat/shards API to monitor shard distribution.
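
For example, the following requests show how shards are distributed across nodes and how many indices and shards the cluster state currently tracks (the filter_path parameter is just one way to trim the response):

GET _cat/allocation?v
GET _cat/shards?v&s=node
GET _cluster/stats?filter_path=indices.count,indices.shards.total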

Monitor and Tune Garbage Collection

Garbage Collection Impact

  • Inefficient garbage collection (GC) can cause latency and instability.

Recommendations

  • Use G1GC for Elasticsearch (recent versions with the bundled JDK default to it); to set it explicitly, add the flag to your JVM options:
-XX:+UseG1GC
  • Monitor GC activity using Elasticsearch’s monitoring tools or external systems like Metricbeat.
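
GC counts and pause times are also exposed through the node stats API; a trimmed request against the master-eligible nodes might look like this (the filter_path shown is only an illustration):

GET _nodes/master:true/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.gc.collectors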

High Availability for Master Nodes

Avoid Single Points of Failure

  • Always deploy at least three master-eligible nodes in a cluster.
  • Distribute nodes across different physical locations (e.g., data centers) to improve fault tolerance.

Tip

  • Use an odd number of master-eligible nodes to simplify quorum calculations.
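
In Elasticsearch 7 and later, you can confirm that all master-eligible nodes participate in the voting configuration by inspecting the coordination metadata in the cluster state (the filter_path below is one way to narrow the output):

GET _cluster/state?filter_path=metadata.cluster_coordination.last_committed_config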

Limit the Number of Indices and Shards

Why It Matters

  • Each index and shard increases the workload on the master node.

Guidelines

  • Combine small indices into larger ones where possible.
  • Reindex older or less-used data into fewer shards.
  • Use the Index Lifecycle Management (ILM) feature to automate index rollover and deletion.
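
As an illustration, a minimal ILM policy that rolls indices over and deletes them after a retention period might look like the following; the policy name, rollover thresholds, and retention age are placeholders to tune for your data:

PUT _ilm/policy/logs-rollover-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}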

Use Coordinating Nodes or Cross-Cluster Search for Large Deployments

When to Consider

  • For extremely large clusters with numerous indices and shards, offload query handling to separate coordinating nodes.
  • Use Cross-Cluster Search (CCS) to query multiple clusters; tribe nodes are deprecated and were removed in Elasticsearch 7.0, so CCS is the supported approach.
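
A cross-cluster search registers the remote cluster first and then prefixes index names with the cluster alias; the alias, host, and index pattern below are examples only:

PUT _cluster/settings
{
  "persistent": {
    "cluster.remote.cluster_two.seeds": ["remote-node1.example:9300"]
  }
}

GET cluster_two:logs-*/_search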

Monitor Master Node Performance

Tools and Metrics

  • Use Kibana or Metricbeat to monitor:
    • Cluster state update duration.
    • Latency in shard allocation.
    • Resource utilization (CPU, heap usage, network I/O).

Example Monitoring Query

To check cluster health and master node status:

GET _cluster/health
GET _cat/nodes?v
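
A growing backlog of cluster state updates on the master shows up as pending tasks, so these requests are useful companions to the health check above:

GET _cluster/pending_tasks
GET _cat/pending_tasks?v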

Conclusion

A well-configured master node keeps your Elasticsearch cluster running smoothly. Following the recommendations in this article will help you minimize disruptions, improve performance, and ensure the long-term reliability of your Elasticsearch environment. Regular monitoring and proactive tuning are key to maintaining optimal performance.

If you need further insights, consult the official Elasticsearch documentation.
