Ansible Role elasticsearch
This role installs and configures an Elasticsearch server.
Note that this role does NOT let you specify a particular Elasticsearch server version. It simply installs the latest available Elasticsearch server version from the repos configured in the system. If you want or need to install a specific version, have a look at the linuxfabrik.lfops.repo_elasticsearch role.
Mandatory Requirements
Enable the official elasticsearch repository. This can be done using the linuxfabrik.lfops.repo_elasticsearch role.
If you use the elasticsearch playbook, this is automatically done for you.
Optional Requirements
Set `vm.swappiness` to `1`. This can be done using the linuxfabrik.lfops.kernel_settings role.
If you use the elasticsearch playbook, this is automatically done for you.
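If you do not use the kernel_settings role, a minimal ad-hoc task with the same effect might look like this sketch (assumes the ansible.posix collection is installed):

```yaml
# Sketch: manual equivalent of the kernel_settings change.
# Elasticsearch performs poorly when parts of the JVM heap are swapped out.
- name: 'Set vm.swappiness to 1'
  ansible.posix.sysctl:
    name: 'vm.swappiness'
    value: '1'
    state: 'present'  # persists the setting and applies it immediately
```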
Post-Installation Steps
After setting up a single node or cluster, generate the initial password for the elastic user:
/usr/share/elasticsearch/bin/elasticsearch-reset-password --username elastic
Setting Up an Elasticsearch Cluster
This role supports creating a multi-node Elasticsearch cluster using manual certificate distribution. This approach provides full automation and avoids the limitations of Elasticsearch’s enrollment token system (which requires interactive commands that cannot be automated in Ansible).
All cluster nodes must:
- Have the same `elasticsearch__cluster_name__*_var` configured
- Be able to communicate with each other (configure `elasticsearch__network_host` to be accessible from the other nodes, e.g. `0.0.0.0` or a specific IP)
- Have `elasticsearch__discovery_seed_hosts` set to the list of all cluster nodes from the start
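For a three-node cluster, a minimal group_vars sketch meeting these requirements could look as follows (cluster name and hostnames are examples):

```yaml
# group_vars shared by all cluster nodes (example values)
elasticsearch__cluster_name__group_var: 'my-cluster'
elasticsearch__network_host: '0.0.0.0'  # must be reachable from the other nodes
elasticsearch__discovery_seed_hosts:
  - 'node1.example.com'
  - 'node2.example.com'
  - 'node3.example.com'
```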
Deploy First Node and Generate Certificates
Deploy Elasticsearch on the first node in a stopped state so that the certutil tool can be used:
ansible-playbook --inventory inventory linuxfabrik.lfops.elasticsearch --limit node1.example.com --extra-vars='{"elasticsearch__service_state": "stopped"}'
Connect to the first node and generate certificates:
# generate CA (use empty password for automation) with 10 years validity
# IMPORTANT: Back up this CA - it's needed for adding nodes later
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem --out /etc/elasticsearch/ca.zip --days 3650
cd /etc/elasticsearch/
unzip ca.zip
# the role automatically creates /tmp/certutil.yml based on your inventory
# review and adjust the IPs/DNS names in /tmp/certutil.yml
# generate node certificates for all cluster nodes
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
--ca-cert /etc/elasticsearch/ca/ca.crt \
--ca-key /etc/elasticsearch/ca/ca.key \
--in /tmp/certutil.yml \
--pem \
--out /tmp/certs.zip
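The instances file consumed by certutil lists every cluster node with its IP addresses and DNS names. For a multi-node cluster, /tmp/certutil.yml might look like this sketch (hostnames and IPs are examples):

```yaml
instances:
  - name: 'node1.example.com'
    ip:
      - '127.0.0.1'
      - '192.0.2.1'
    dns:
      - 'localhost'
      - 'node1.example.com'  # always include the Common Name in the SANs as well
      - 'node1'
  - name: 'node2.example.com'
    ip:
      - '127.0.0.1'
      - '192.0.2.2'
    dns:
      - 'localhost'
      - 'node2.example.com'
      - 'node2'
```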
Copy the generated certificates to your Ansible inventory (have a look at the Optional Variables below for the paths).
The certificates are used for elasticsearch__{http,transport}_{cert,key}. It is possible to either use different certificates for http and transport or use the same for both.
Bootstrap the First Node(s)
You can bootstrap one or more nodes - both options work. Starting with a single node makes troubleshooting easier. However, bear in mind that both the master and data roles are required for bootstrapping. Either the first node must have these roles, or multiple nodes must be bootstrapped.
ansible-playbook --inventory inventory linuxfabrik.lfops.elasticsearch --limit node1.example.com --extra-vars='{"elasticsearch__cluster_initial_master_nodes": ["node1.example.com"]}'
Attention: Only include the first node in elasticsearch__cluster_initial_master_nodes. If you include nodes that do not exist yet, the first node will wait indefinitely for them and the cluster will not form.
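Instead of passing --extra-vars on the command line, the bootstrap setting can also live in the first node's host_vars and be removed again once the cluster has formed (a sketch; the hostname is an example):

```yaml
# host_vars for node1.example.com (example)
elasticsearch__cluster_initial_master_nodes:
  - 'node1.example.com'
```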
Verify Cluster State
On the first node, generate the initial password and verify the cluster state:
/usr/share/elasticsearch/bin/elasticsearch-reset-password --username elastic
export ELASTIC_PASSWORD='your-password-here'
export elastic_host='node1.example.com'
export elastic_cacert='/etc/elasticsearch/certs/ca.crt'
# check cluster health (might be yellow with 1 node)
curl --cacert "$elastic_cacert" \
--user "elastic:${ELASTIC_PASSWORD}" \
--request GET "https://$elastic_host:9200/_cluster/health?pretty=true"
# list nodes (should show only node1)
curl --cacert "$elastic_cacert" \
--user "elastic:${ELASTIC_PASSWORD}" \
--request GET "https://$elastic_host:9200/_cat/nodes?v&h=name,ip,node.role"
Deploy Additional Nodes
Note: Deploy remaining nodes without elasticsearch__cluster_initial_master_nodes.
ansible-playbook --inventory inventory linuxfabrik.lfops.elasticsearch --limit node2.example.com,node3.example.com
The nodes will automatically join the existing cluster using elasticsearch__discovery_seed_hosts.
Clear Initial Master Nodes Configuration
After at least one additional master node has joined, remove the cluster.initial_master_nodes setting from the first node:
ansible-playbook --inventory inventory linuxfabrik.lfops.elasticsearch --limit node1.example.com --tags elasticsearch:configure
This prevents issues when the first node is restarted.
Verify Complete Cluster
Verify all nodes have joined the cluster:
curl --cacert "$elastic_cacert" \
--user "elastic:${ELASTIC_PASSWORD}" \
--request GET "https://$elastic_host:9200/_cluster/health?pretty=true"
curl --cacert "$elastic_cacert" \
--user "elastic:${ELASTIC_PASSWORD}" \
--request GET "https://$elastic_host:9200/_cat/nodes?v&h=name,ip,node.role"
The status should be green with all nodes listed.
Adding a New Node to an Existing Cluster
Generate certificates for the new node using the existing CA. On the node where the CA is stored:
cat > /tmp/new-node-cert.yml <<EOF
instances:
- name: 'new-node.example.com'
ip:
- '127.0.0.1'
- '192.0.2.1'
dns:
- 'localhost'
- 'new-node.example.com' # make sure to always include the Common Name in the Subject Alternative Names as well
- 'new-node'
EOF
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
--ca-cert /etc/elasticsearch/ca/ca.crt \
--ca-key /etc/elasticsearch/ca/ca.key \
--in /tmp/new-node-cert.yml \
--pem \
--out /tmp/new-node-certs.zip
- Copy the generated certificates to your inventory
- Add the new node to `elasticsearch__discovery_seed_hosts` in the group_vars
- Create host_vars for the new node with a unique `elasticsearch__node_name`
- Deploy the new node:

  ansible-playbook --inventory inventory linuxfabrik.lfops.elasticsearch --limit new-node.example.com

- Roll out the updated `elasticsearch__discovery_seed_hosts` to all cluster nodes
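The host_vars for the new node might look like this sketch (the file paths follow the lookup layout shown in the Example; the hostname is an example):

```yaml
# host_vars for new-node.example.com (example)
elasticsearch__node_name: 'new-node'
elasticsearch__http_cert: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/host_files/" ~ inventory_hostname ~ "/etc/elasticsearch/certs/http.crt") }}'
elasticsearch__http_key: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/host_files/" ~ inventory_hostname ~ "/etc/elasticsearch/certs/http.key") }}'
```

The transport certificate variables follow the same pattern.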
Optional Role Variables
| Variable | Description | Default Value |
|---|---|---|
| `elasticsearch__action_auto_create_index__host_var` / `elasticsearch__action_auto_create_index__group_var` | Automatic index creation allows any index to be created automatically. | |
| `elasticsearch__ca_cert` | ASCII-armored PEM CA certificate for TLS. When set, enables manual certificate management mode and disables auto-enrollment. All cluster nodes should use the same CA certificate. | unset |
| `elasticsearch__cluster_initial_master_nodes` | A list of initial master-eligible nodes. The entries have to match the `elasticsearch__node_name` of the nodes. | unset |
| `elasticsearch__cluster_name__host_var` / `elasticsearch__cluster_name__group_var` | A descriptive name for your cluster. | |
| `elasticsearch__cluster_routing_allocation_awareness_attributes` | List of awareness attribute names to enable shard allocation awareness. Distributes replicas across different attribute values to minimize the risk of data loss during failures. Configure the same attributes on all master-eligible nodes. | |
| `elasticsearch__cluster_routing_allocation_awareness_force` | Dictionary for forced awareness to prevent replica overloading when a location fails. Key is the attribute name, value is a list of expected attribute values. Elasticsearch will leave replicas unassigned rather than concentrating them in the remaining locations. | |
| | Float. | |
| | Float. | |
| `elasticsearch__discovery_seed_hosts` | A list of IPs or hostnames that point to all master-eligible nodes of the cluster. The port defaults to 9300 but can be overwritten by appending `:port` to an entry. | unset |
| `elasticsearch__http_cert` | ASCII-armored PEM HTTP certificate. | unset |
| `elasticsearch__http_key` | ASCII-armored PEM HTTP private key. | unset |
| `elasticsearch__log4j2_retention_days` | Number of days to retain rotated Elasticsearch log files (server, deprecation, slowlog, audit). All log appenders rotate daily and delete files older than this value. | |
| `elasticsearch__network_host` | Sets the address for both HTTP and transport traffic. Accepts an IP address, a hostname, or a special value. | |
| `elasticsearch__node_attributes` | Dictionary of custom node attributes. Can be used for shard allocation awareness. Each attribute identifies a node's physical location or characteristic. | |
| `elasticsearch__node_name` | A descriptive name for the node. | |
| `elasticsearch__node_roles` | List of roles for this node. Available roles: `master`, `voting_only`, `data`, `data_content`, `data_hot`, `data_warm`, `data_cold`, `data_frozen`, `ingest`, `ml`, `remote_cluster_client`, `transform`. | unset |
| `elasticsearch__path_data` | Path to the directory where Elasticsearch stores its data. | |
| `elasticsearch__path_repos` | Paths pointing to shared file system repositories used for snapshots (backups). | |
| `elasticsearch__raw` | Multiline string. Raw content which will be appended to the `elasticsearch.yml` configuration file. | unset |
| `elasticsearch__service_enabled` | Enables or disables the elasticsearch service, analogous to `systemctl enable/disable`. | |
| `elasticsearch__service_state` | Controls the state of the elasticsearch service, analogous to `systemctl start/stop`. | |
| `elasticsearch__transport_cert` | ASCII-armored PEM transport certificate. | unset |
| `elasticsearch__transport_key` | ASCII-armored PEM transport private key. | unset |
Example:
# optional
elasticsearch__action_auto_create_index__group_var: false
elasticsearch__ca_cert: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/group_files/elasticsearch_cluster/etc/elasticsearch/certs/ca.crt") }}'
elasticsearch__cluster_name__group_var: 'my-cluster'
elasticsearch__cluster_name__host_var: 'my-single-node'
elasticsearch__cluster_routing_allocation_awareness_attributes:
- 'datacenter'
elasticsearch__cluster_routing_allocation_awareness_force:
datacenter:
- 'dc1'
- 'dc2'
- 'dc3'
elasticsearch__discovery_seed_hosts:
- 'node1.example.com'
- 'node2.example.com'
- 'node3.example.com:9301'
elasticsearch__http_cert: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/host_files/" ~ inventory_hostname ~ "/etc/elasticsearch/certs/http.crt") }}'
elasticsearch__http_key: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/host_files/" ~ inventory_hostname ~ "/etc/elasticsearch/certs/http.key") }}'
elasticsearch__log4j2_retention_days: 7
elasticsearch__network_host: '0.0.0.0'  # or '_local_' / '127.0.0.1' for a single node
elasticsearch__node_attributes:
datacenter: 'dc1'
host: 'pod01'
elasticsearch__node_name: 'node1'
elasticsearch__node_roles:
- 'master'
- 'remote_cluster_client'
elasticsearch__path_data: '/data'
elasticsearch__path_repos:
- '/mnt/backups'
- '/mnt/long_term_backups'
elasticsearch__raw: |-
http.max_content_length: 200mb
indices.recovery.max_bytes_per_sec: 100mb
elasticsearch__service_enabled: false
elasticsearch__service_state: 'stopped'
elasticsearch__transport_cert: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/host_files/" ~ inventory_hostname ~ "/etc/elasticsearch/certs/transport.crt") }}'
elasticsearch__transport_key: '{{ lookup("ansible.builtin.file", inventory_dir ~ "/host_files/" ~ inventory_hostname ~ "/etc/elasticsearch/certs/transport.key") }}'