Ansible

Ansible (created in 2012, acquired by Red Hat in 2016) is an SSH-based orchestration and configuration management tool. Ansible is agentless and performs the required actions directly on the target systems via SSH and Python >= 2.6.

Terminology

  • Control Node: = deployment host; the host that runs Ansible

  • Managed Node: the client that is configured by the control node (Salt, for example, calls this a Minion)

Links

Ansible Cheat Sheet

Control node - installation via pip:

# using pip, for the current user
# ("ansible" already depends on "ansible-core"; the obsolete "ansible-base" is not needed):
pip install --user ansible

Control node - installation via Subscription Manager:

# with full support
subscription-manager register
subscription-manager role --set="Red Hat Enterprise Linux Server"
subscription-manager list --available
subscription-manager attach --pool=<engine-subscription-pool>

# with limited support
subscription-manager refresh

# after either of the two:
subscription-manager repos --enable ansible-2-for-rhel-8-x86_64-rpms

# then:
yum install ansible

Install Red Hat's Ansible System Roles:

yum -y install rhel-system-roles
# /usr/share/ansible/roles/rhel-system-roles.*
# /usr/share/doc/rhel-system-roles/
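
A minimal play using one of these system roles could look like this (a sketch using the timesync role; the NTP server shown is just an example, the variable names follow the role's documentation under /usr/share/doc/rhel-system-roles/):

```yaml
# Sketch: configure NTP on all hosts via the timesync system role.
- name: Configure time synchronisation
  hosts: all
  become: true
  vars:
    timesync_ntp_servers:
      - hostname: 0.pool.ntp.org  # example server
        iburst: true
  roles:
    - rhel-system-roles.timesync
```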

Prerequisites on the managed hosts:

# RHEL 8
dnf module install python36
dnf install python3-libselinux

# RHEL 7
yum install python
yum install libselinux-python
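
If the interpreter is not found automatically, it can be pinned per host or group via ansible_python_interpreter - a sketch (the host name is hypothetical; /usr/libexec/platform-python is the platform interpreter on RHEL 8):

```yaml
# hosts.yml - pin the Python interpreter on RHEL 8 managed nodes
rhel8:
  hosts:
    srv.example.com:
  vars:
    ansible_python_interpreter: /usr/libexec/platform-python
```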

Doc:

# list all or specific type of modules
ansible-doc --type become|module|... --list

ansible-doc ping
ansible-doc --snippet ping

Configuration - ./ansible.cfg takes precedence over ~/.ansible.cfg, which takes precedence over /etc/ansible/ansible.cfg (Ansible only uses the first file it finds; the ANSIBLE_CONFIG environment variable beats all three). Example:

ansible.cfg
[defaults]
ask_pass = false
inventory = /path/to/inventory_dir
log_path = /path/to/ansible.log
remote_user = linuxfabrik

[privilege_escalation]
# Disable privilege escalation by default.
become = False
# If privilege escalation is enabled (from the command line), configure default settings
# to have Ansible use the sudo method to switch to the root user account
become_ask_pass = False
become_method = sudo
become_user = root

Possible directory structure:

project
+-- ansible.cfg
+-- group_vars
    +-- group_name (file or dir)
+-- host_vars
    +-- hostname
        +-- vault
+-- inventory
    +-- hosts
+-- roles
+-- playbooks

Inventory as an INI file (hosts.ini):

[ch]
zurich[1:2].linuxfabrik.ch

[de]
duesseldorf.linuxfabrik.de ansible_connection=ssh ansible_user=linus ansible_port=6666 ansible_host=192.0.2.50
stuttgart.linuxfabrik.de

[de:vars]
ansible_user=joe

[europe:children]
ch
de

Inventory as a YAML file (hosts.yml):

test:
  hosts:
    192.0.2.34:
    192.0.2.199:

# a comment
rocky8:
  hosts:
    192.0.2.249:
  children:
    cis_rocky8:
      vars:
        ansible_become: true
      hosts:
        192.0.2.249:

Interesting group_vars/host_vars:

ansible_connection: winrm
ansible_port: 5986

facts_file (variables for playbooks):

[general]
key = value
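
Such a file becomes a custom local fact when placed under /etc/ansible/facts.d with a .fact suffix; it is then available via ansible_local. A sketch (the file and key names are examples):

```yaml
- name: Deploy the custom facts file
  copy:
    src: facts_file
    dest: /etc/ansible/facts.d/custom.fact  # directory must exist; name must end in .fact

- name: Facts have to be re-gathered before first use
  setup:

- name: Access the value
  debug:
    msg: "{{ ansible_local['custom']['general']['key'] }}"
```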

Variables - override order (later entries override earlier ones):

  • role/defaults

  • inventory

  • group_vars

  • host_vars

  • Command-Line
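
Illustrated with a minimal sketch (the file names are examples): the same variable defined in several places ends up with the host_vars value - unless the command line overrides it.

```yaml
# roles/myrole/defaults/main.yml
myvar: 'from role defaults'

# group_vars/ch.yml
myvar: 'from group_vars'

# host_vars/zurich1.linuxfabrik.ch.yml
myvar: 'from host_vars'  # this value wins ...

# ... unless overridden on the command line:
# ansible-playbook --extra-vars='myvar=from_cli' playbook.yml
```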

Variables:

group_vars/myvars.yml
# Vars (key: value)
var1: value1
var2: value2

# Dictionary. Usage: `{{ users['linus']['home_dir'] }}`
users:
  linus:
    first_name: Linus
    home_dir: /users/linus

# List
wishlist:
  - item1
  - item2

The most useful Ansible-internal "magic" variables:

  • group_names (when: '"dev" in group_names')

  • groups

  • hostvars

  • inventory_hostname
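
Example tasks using these magic variables (the group names are taken from the inventory examples above):

```yaml
- name: Only run on hosts that are in the "dev" group
  debug:
    msg: 'This is a dev machine'
  when: '"dev" in group_names'

- name: Look up a fact of another host from the inventory
  debug:
    msg: "{{ hostvars['srv.linuxfabrik.ch']['ansible_facts']['fqdn'] }}"

- name: Iterate over all hosts of a group
  debug:
    msg: "{{ item }}"
  loop: "{{ groups['europe'] }}"
```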

Frequently used Ansible facts:

ansible --module-name=setup localhost
ansible_facts['date_time']
ansible_facts['default_ipv4']
ansible_facts['devices']
ansible_facts['distribution']
ansible_facts['distribution_version']
ansible_facts['dns']
ansible_facts['fqdn']
ansible_facts['hostname']
ansible_facts['interfaces']
ansible_facts['kernel']
ansible_facts['memfree_mb']
ansible_facts['memtotal_mb']
ansible_facts['mounts']
ansible_facts['os_family']
ansible_facts['processor_count']
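
Used in tasks, for example, like this (a sketch):

```yaml
- name: Show some gathered facts
  debug:
    msg: >-
      {{ ansible_facts['hostname'] }} runs
      {{ ansible_facts['distribution'] }} {{ ansible_facts['distribution_version'] }}
      with {{ ansible_facts['memtotal_mb'] }} MB RAM

- name: Branch on the OS family
  dnf:
    name: httpd
    state: present
  when: ansible_facts['os_family'] == 'RedHat'
```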

Playbook elements:

- name: My first play.
  hosts:
    - localhost
    - srv.linuxfabrik.ch
    - 192.168.109.*

  remote_user: ansible
  become: true

  vars:
    user: linus
    home: /home/linus
    facts_file: custom.fact
  vars_files:
    - vault/secret.yml

  # disable e.g. if the VM does not exist yet:
  gather_facts: false

  # static, preprocessed
  import_playbook: db.yml
  import_tasks: install.yml

  # dynamic, during the run
  include_role: ...
  include_tasks: tasks/environment.yml

  pre_tasks:
    - name: ...

  serial: 2
  tasks:
    - name: ...
      task_name:
        ...
      delegate_to: localhost
  roles:
    - role: sshd
      key1: value1

  post_tasks:
    - name: ...

Possible elements of a task:

- name: "Describe what we do here with {{ variables }}"
  hosts: localhost
  become: false

  <modulename>:
    <attr1>: "{{ the_devil }}"
    <attr2>: "{{ lookup('file', 'files/' + item.uid) }}"
    <attr3>: present

  loop: "{{ var }}"  # use "{{ item }}"
  loop_control:
    loop_var: i      # use "{{ i }}"

  register: result

  when: >
    ( a == "RedHat" and a == "7" )
    or
    ( a == "Fedora" and a == "28" )
  ignore_errors: true
  failed_when:
  changed_when:
    - false
    - not a
    - a is not defined
    - a is failed
    - "Success" not in a.stdout
    - "dev" in a
    - a == "RedHat" or a == "Fedora"
    - a < 256

  notify:
    - systemctl restart httpd
  force_handlers: true

- name: Print all facts
  debug:
    var: ansible_facts
    verbosity: 2

- name: ...
  block:
    ...
  rescue:
    # tasks to run if the tasks defined in the block clause fail
  always:
    # tasks that will always run independently of the success or failure of tasks
    # defined in the block and rescue clauses
  when:
    # also applies to rescue and always clauses if present
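
A concrete sketch of the block structure above (the package name and messages are examples):

```yaml
- name: Upgrade, and report if it goes wrong
  block:
    - name: Upgrade all packages
      dnf:
        name: '*'
        state: latest
  rescue:
    - name: Runs only if a task in the block failed
      debug:
        msg: 'Upgrade failed on {{ inventory_hostname }}'
  always:
    - name: Runs in any case
      debug:
        msg: 'Upgrade attempt finished'
```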

Create a role:

ansible-galaxy init myrole

Handlers are nothing more than tasks:

- name: systemctl restart httpd
  systemd:
    name: httpd
    state: restarted
  when: a == false and b is not defined
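
A task triggers this handler by name via notify; the handler runs once at the end of the play, and only if the task reports "changed" (the template name is an example):

```yaml
- name: Deploy the Apache config (triggers the handler on change)
  template:
    src: httpd.conf.j2  # example template
    dest: /etc/httpd/conf/httpd.conf
  notify:
    - systemctl restart httpd
```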

Dependencies (meta/main.yml):

dependencies:
  - role: apache
    port: 8080
  - name: my-role
    src: https://github.com/Linuxfabrik/lfops
    scm: git
    version: main

Frequently used core modules:

setup:

blockinfile:
lineinfile:

file: (= mkdir, chmod etc.)
stat:
sefcontext:

copy:
fetch:
synchronize: (= rsync)

assert:
that:

fail:

authorized_key:
known_hosts:

command:
shell:

at:
cron:

debug:

apt:
dnf:
gem:
package:
package_facts:
pip:
rpm_key:
yum:
yum_repository:

service:
systemd:

filesystem: (= create a filesystem)
lvg:
lvol:
mount:
parted:

firewalld:
nmcli:

get_url:
uri:

mysql_user:

group:
user: (incl. SSH keys)

redhat_subscription:
rhsm_repository:

template: (Jinja2)

timezone:

reboot:

Keep newlines:

include_newlines: |
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
    tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
    quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
    consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
    cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
    proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Convert newlines to spaces:

fold_newlines: >
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
    tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
    quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
    consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
    cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
    proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Jinja templating:

# {{ ansible_managed }}
{# Jinja comment #}
{{ ansible_facts['default_ipv4']['address'] }}

{% if finished %}
    {{ result }}
{% endif %}

{% for user in users if not user == "root" %}
    {{ loop.index }}: {{ user }}
{% endfor %}

- name: template render
  template:
    src: path/to/app.conf.j2
    dest: /path/to/app.conf

Jinja filters:

{{ output | from_json }}
{{ output | to_json }}
{{ output | to_nice_json }}
{{ output | from_yaml }}
{{ output | to_yaml }}
{{ output | to_nice_yaml }}
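
Typical use: register the output of a command and post-process it with a filter (the command is just an example):

```yaml
- name: Read DNS configuration
  command: cat /etc/resolv.conf
  register: result
  changed_when: false  # a read-only command never changes anything

- name: Show the registered result as pretty-printed JSON
  debug:
    msg: "{{ result | to_nice_json }}"
```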

Ad-hoc:

ansible all|localhost|ungrouped|mygroup|mysrv --inventory inventory --user user --one-line ...

ansible ... --list-hosts
ansible ... --module-name=setup --args='filter=ansible_devices'
ansible ... --module-name=ping
ansible ... --module-name=command|shell|raw --args='uptime'
ansible ... --module-name=user --args="name=the_devil uid=666 state=present"

# request privilege escalation from the command line
ansible ... --module-name copy --args='content="Managed by Ansible" dest=/etc/motd' --become

ansible-config (run it from within your Ansible directory):

# list all options:
ansible-config list

# cd to your working directory
ansible-config dump --verbose --only-changed

Visualize the Ansible inventory:

ansible-inventory --inventory path/to/inventory --graph

Running the Ansible playbooks:

ansible-playbook --syntax-check path/to/playbook.yml

ansible-playbook -vvvv path/to/playbook.yml
ansible-playbook --inventory path/to/hosts.yml path/to/playbook.yml

ansible-playbook --step path/to/playbook.yml
ansible-playbook --start-at-task "my task" path/to/playbook.yml

# dry-run / smoke tests without changing anything:
ansible-playbook --check path/to/playbook.yml

# report the changes made to (small) files
ansible-playbook --diff path/to/playbook.yml

ansible-playbook --extra-vars='key1=value1 key2=value2' path/to/playbook.yml

Tip

Before a production run, an ansible-playbook --check --diff playbook.yml is recommended to review the upcoming changes.

But note:

  • If tasks are controlled via conditionals, --check may not work as expected.

  • Tasks with check_mode: false are always executed for real (even during --check); tasks with check_mode: true always run as a dry run (even in a normal run).
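
In a playbook, this looks like the following sketch (host entry and command are examples):

```yaml
- name: Always executes for real, even during --check (e.g. to gather information)
  command: uptime
  check_mode: false
  register: result
  changed_when: false

- name: Always a dry run, even in a normal run (validation only)
  lineinfile:
    path: /etc/hosts
    line: '192.0.2.1 gateway.example.com'
  check_mode: true
```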

Tip

ansible-config, ansible-doc, ansible-inventory and ansible-playbook can be replaced by ansible-navigator, which also offers a TUI.

Ansible Vault (use via vars_files:):

ansible-vault create --vault-password-file=vault-pass secret.yml
ansible-vault view secret.yml
ansible-vault edit secret.yml
ansible-vault encrypt plain.yml --output=secret.yml
ansible-vault decrypt secret.yml --output=plain.yml
ansible-vault rekey secret.yml
ansible-vault rekey --new-vault-password-file=NEW_VAULT_PASSWORD_FILE secret.yml

ansible-playbook --ask-vault-pass|--vault-password-file=vault-pass
ansible-playbook --vault-id @prompt --vault-password-file=vault-pw-file playbook.yml
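
A playbook consumes the encrypted file like any other variables file (the variable name is an example):

```yaml
- name: Use vaulted variables
  hosts: all
  vars_files:
    - vault/secret.yml  # created with ansible-vault above
  tasks:
    - name: The decrypted values are available as normal variables
      debug:
        msg: "{{ some_secret_var }}"  # example variable name
```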

Ansible Galaxy:

ansible-galaxy list
ansible-galaxy search 'redis' --platforms EL
ansible-galaxy info geerlingguy.redis
ansible-galaxy install --role-file roles/requirements.yml --roles-path roles
ansible-galaxy collection install git@github.com:Linuxfabrik/lfops.git
ansible-galaxy remove nginx-acme-ssh

An Ansible Galaxy requirements.yml file:

- name: my-role
  src: https://github.com/Linuxfabrik/lfops
  scm: git
  version: main

Ansible and Windows

Enable WinRM on the Windows machines (WinRM HTTP on port 5985, or WinRM HTTPS on port 5986): https://docs.ansible.com/ansible/latest/os_guide/windows_winrm.html

Determine the WinRM configuration on the Windows servers like this:

PS C:\Windows\system32> winrm enumerate winrm/config/Listener
Listener
    Address = *
    Transport = HTTP
    Port = 5985
    Hostname
    Enabled = true
    URLPrefix = wsman
    CertificateThumbprint
    ListeningOn = 127.0.0.1, 192.168.0.62, ::1, fe80::abcd:1234:dcba:4d65%15

Via ad-hoc command, for example with the raw module (which speaks WinRM):

ansible windows --inventory myinv --module-name=raw --args='winrm enumerate winrm/config/Listener'

Ansible ad-hoc command with Windows-specific modules:

ansible --inventory myinv winsrv01 --module-name=ansible.windows.win_copy --args='src=/tmp/service.dist/ dest=C:\\ProgramData\\icinga2\\usr\\lib64\\nagios\\plugins\\'

Ansible-Lint

Are the best practices being followed?

dnf -y install ansible-lint

# list all rules
ansible-lint -L

# check my playbook
ansible-lint playbooks/playbook.yml

Ansible-Review

dnf -y install ansible-review

Mitogen - making Ansible faster

Promising. See https://mitogen.networkgenomics.com

  • mitogen v0.2.9 offers no support for Python interpreter discovery and therefore does not work on RHEL 8 (which has no /usr/bin/python).

  • mitogen v0.2.10 (pre-release) still has problems: AttributeError: module 'ansible_collections.ansible.builtin.plugins.action' has no attribute 'ActionBase'.

Timings (ansible-playbook --inventory inv playbook.yml --tags mytag):

  • default: 0:07:15.744 (h:mm:ss)

  • ssh-pipelining: 0:03:59.723

  • mitogen against CentOS 7: 0:00:11.122

  • mitogen against CentOS 7 & ssh-pipelining: 0:00:11.256

  • mitogen against CentOS 8: 0:00:17.376

Peculiarities of Ansible

Ansible has some peculiarities and shows unexpected behaviour in some places. A small collection:

Combining variables

The goal is to override role defaults from another role (__dependent_var), or from the group vars (__group_var) or host vars (__host_var).

defaults/main.yml
test__list_of_dicts__dependent_var: []
test__list_of_dicts__group_var: []
test__list_of_dicts__host_var: []
test__list_of_dicts__role_var:
  - name: 'a'
    state: 'present'
  - name: 'b'
    state: 'present'

test__list_of_dicts__combined_var: '{{ test__list_of_dicts__role_var +
  test__list_of_dicts__dependent_var +
  test__list_of_dicts__group_var +
  test__list_of_dicts__host_var
  | flatten
 }}'


test__list_of_strings__dependent_var: []
test__list_of_strings__group_var: []
test__list_of_strings__host_var: []
test__list_of_strings__role_var:
  - 'a'
  - 'b'

test__list_of_strings__combined_var: '{{ test__list_of_strings__role_var +
  test__list_of_strings__dependent_var +
  test__list_of_strings__group_var +
  test__list_of_strings__host_var
  | flatten
 }}'


test__dict__role_var:
  alias:
    enabled: true
    state: 'present'

  blias:
    enabled: true
    state: 'present'

test__dict__group_var: {}
test__dict__host_var: {}
test__dict__dependent_var: {}

test__dict__combined_var: '{{ test__dict__role_var
  | combine(test__dict__dependent_var)
  | combine(test__dict__group_var)
  | combine(test__dict__host_var)
 }}'
host_vars/myhost.yml
test__list_of_dicts__host_var:
  - name: 'c'
    state: 'absent'

  - name: 'a'
    state: 'absent'
# output
# - name: a
#   state: present
# - name: b
#   state: present
# - name: c
#   state: absent
# - name: a
#   state: absent


test__list_of_strings__host_var:
  - 'c'
# output
# - a
# - b
# - c


test__dict__host_var:
  clias:
    enabled: true
    state: 'present'

  blias:
    enabled: true
    state: 'absent'
# output
# alias:
#   enabled: true
#   state: present
# blias:
#   enabled: true
#   state: absent
# clias:
#   enabled: true
#   state: present

Result: TODO

AWX

The recommended way to install AWX is in a Kubernetes cluster. For test purposes, however, a local Kubernetes cluster can be set up quickly using minikube. First, Docker has to be installed; this can be done with our linuxfabrik.lfops.docker LFOps role.

dnf install git -y
dnf install https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm -y

useradd minikube --groups docker
sudo -u minikube -i
minikube start

mkdir -p ~/.local/bin
cat > ~/.local/bin/kubectl << 'EOF'
#!/bin/bash
minikube kubectl -- "$@"
EOF
chmod +x ~/.local/bin/kubectl

kubectl get nodes
kubectl get pods --all-namespaces

Now AWX can be set up:

git clone https://github.com/ansible/awx-operator.git
cd awx-operator
# https://github.com/ansible/awx-operator/releases
git checkout tags/2.19.1
make deploy

kubectl config set-context --current --namespace=awx
kubectl get pods
# awx-operator-controller-manager-687b856498-zjcwv   2/2     Running   0          4m6s

cat > awx-demo.yml << 'EOF'
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport
EOF
kubectl apply --filename awx-demo.yml

The first start of AWX takes a while. In the end, 4 pods and a NodePort service should be visible:

kubectl get pods --selector "app.kubernetes.io/managed-by=awx-operator"
# NAME                              READY   STATUS      RESTARTS   AGE
# awx-demo-migration-24.6.1-4wzfg   0/1     Completed   0          3h46m
# awx-demo-postgres-15-0            1/1     Running     0          3h48m
# awx-demo-task-5bc65c5867-qhnjv    4/4     Running     0          3h48m
# awx-demo-web-9fd8667cc-85nf4      3/3     Running     0          3h48m

kubectl get svc --selector "app.kubernetes.io/managed-by=awx-operator"
# NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# awx-demo-postgres-15   ClusterIP   None           <none>        5432/TCP       3h48m
# awx-demo-service       NodePort    10.101.12.92   <none>        80:32691/TCP   3h48m

minikube service -n awx awx-demo-service --url displays the URL of the NodePort service, e.g. http://192.168.49.2:32691. However, the URL is only reachable on the host running minikube - to access it via SSH, set up an SSH tunnel: ssh -L 32691:192.168.49.2:32691 minikube-host. Then http://localhost:32691 can be opened on the workstation.

To obtain the password of the admin user, read the Kubernetes secret:

kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode; echo

To be able to run a playbook, the following resources have to be created:

  1. Inventory

  2. Project

  3. Credential

  4. Template

Afterwards, the run can be started via the corresponding template.

ARA

ARA Records Ansible.

python3 -m pip install --user ansible "ara[server]"
export ANSIBLE_CALLBACK_PLUGINS="$(python3 -m ara.setup.callback_plugins)"
ara-manage runserver

Then run a playbook, open http://localhost:8000, and analyze the Ansible output in the browser.

Ansible Facts

ansible_facts:
  all_ipv4_addresses:
  - 192.0.2.95
  all_ipv6_addresses: []
  ansible_local: {}
  apparmor:
    status: disabled
  architecture: x86_64
  bios_date: 04/01/2014
  bios_vendor: SeaBIOS
  bios_version: 1.16.0-1.fc36
  board_asset_tag: NA
  board_name: NA
  board_serial: NA
  board_vendor: NA
  board_version: NA
  chassis_asset_tag: NA
  chassis_serial: NA
  chassis_vendor: QEMU
  chassis_version: pc-i440fx-6.2
  cmdline:
    BOOT_IMAGE: (hd0,msdos1)/vmlinuz-4.18.0-372.19.1.el8_6.x86_64
    biosdevname: '0'
    crashkernel: auto
    net.ifnames: '0'
    no_timer_check: true
    quiet: true
    rd.lvm.lv: rl_rocky8/swap
    resume: /dev/mapper/rl_rocky8-swap
    rhgb: true
    ro: true
    root: /dev/mapper/rl_rocky8-root
  date_time:
    date: '2022-09-19'
    day: '19'
    epoch: '1663612370'
    epoch_int: '1663612370'
    hour: '18'
    iso8601: '2022-09-19T18:32:50Z'
    iso8601_basic: 20220919T183250307002
    iso8601_basic_short: 20220919T183250
    iso8601_micro: '2022-09-19T18:32:50.307002Z'
    minute: '32'
    month: 09
    second: '50'
    time: '18:32:50'
    tz: UTC
    tz_dst: UTC
    tz_offset: '+0000'
    weekday: Monday
    weekday_number: '1'
    weeknumber: '38'
    year: '2022'
  default_ipv4:
    address: 192.0.2.95
    alias: eth0
    broadcast: 192.0.2.255
    gateway: 192.0.2.1
    interface: eth0
    macaddress: 52:54:00:55:f4:62
    mtu: 1500
    netmask: 255.255.255.0
    network: 192.0.2.0
    type: ether
  default_ipv6: {}
  device_links:
    ids:
      dm-0:
      - dm-name-rl_rocky8-root
      - dm-uuid-LVM-uIlBj9tx7F37rhGSJbOJ3KVm7dc48WnmR9gwER0reve9YXZR6LLH51wwC3q3oZ1N
      dm-1:
      - dm-name-rl_rocky8-swap
      - dm-uuid-LVM-uIlBj9tx7F37rhGSJbOJ3KVm7dc48WnmcsWSXzCnjhhgc95tUKmF5xaJJZXTPTho
      vda2:
      - lvm-pv-uuid-xm3U6V-5442-1GuQ-FMge-id3A-Jjko-GR1Y6u
    labels: {}
    masters:
      vda2:
      - dm-0
      - dm-1
    uuids:
      dm-0:
      - 79d9dea6-fc39-42ce-bea2-ae14a5623cc9
      dm-1:
      - ba088962-39f9-47c4-ba6b-6f00a6fb3c97
      vda1:
      - 35de54a8-c56e-4f38-9c23-5ec570cfbea0
  devices:
    dm-0:
      holders: []
      host: ''
      links:
        ids:
        - dm-name-rl_rocky8-root
        - dm-uuid-LVM-uIlBj9tx7F37rhGSJbOJ3KVm7dc48WnmR9gwER0reve9YXZR6LLH51wwC3q3oZ1N
        labels: []
        masters: []
        uuids:
        - 79d9dea6-fc39-42ce-bea2-ae14a5623cc9
      model: null
      partitions: {}
      removable: '0'
      rotational: '1'
      sas_address: null
      sas_device_handle: null
      scheduler_mode: ''
      sectors: '262021120'
      sectorsize: '512'
      size: 124.94 GB
      support_discard: '512'
      vendor: null
      virtual: 1
    dm-1:
      holders: []
      host: ''
      links:
        ids:
        - dm-name-rl_rocky8-swap
        - dm-uuid-LVM-uIlBj9tx7F37rhGSJbOJ3KVm7dc48WnmcsWSXzCnjhhgc95tUKmF5xaJJZXTPTho
        labels: []
        masters: []
        uuids:
        - ba088962-39f9-47c4-ba6b-6f00a6fb3c97
      model: null
      partitions: {}
      removable: '0'
      rotational: '1'
      sas_address: null
      sas_device_handle: null
      scheduler_mode: ''
      sectors: '4308992'
      sectorsize: '512'
      size: 2.05 GB
      support_discard: '512'
      vendor: null
      virtual: 1
    vda:
      holders: []
      host: ''
      links:
        ids: []
        labels: []
        masters: []
        uuids: []
      model: null
      partitions:
        vda1:
          holders: []
          links:
            ids: []
            labels: []
            masters: []
            uuids:
            - 35de54a8-c56e-4f38-9c23-5ec570cfbea0
          sectors: '2097152'
          sectorsize: 512
          size: 1.00 GB
          start: '2048'
          uuid: 35de54a8-c56e-4f38-9c23-5ec570cfbea0
        vda2:
          holders:
          - rl_rocky8-swap
          - rl_rocky8-root
          links:
            ids:
            - lvm-pv-uuid-xm3U6V-5442-1GuQ-FMge-id3A-Jjko-GR1Y6u
            labels: []
            masters:
            - dm-0
            - dm-1
            uuids: []
          sectors: '266336256'
          sectorsize: 512
          size: 127.00 GB
          start: '2099200'
          uuid: null
      removable: '0'
      rotational: '1'
      sas_address: null
      sas_device_handle: null
      scheduler_mode: none
      sectors: '268435456'
      sectorsize: '512'
      size: 128.00 GB
      support_discard: '512'
      vendor: '0x1af4'
      virtual: 1
  discovered_interpreter_python: /usr/libexec/platform-python
  distribution: Rocky
  distribution_file_parsed: true
  distribution_file_path: /etc/redhat-release
  distribution_file_variety: RedHat
  distribution_major_version: '8'
  distribution_release: Green Obsidian
  distribution_version: '8.6'
  dns:
    nameservers:
    - 192.0.2.1
    search:
    - localdomain
  domain: localdomain
  effective_group_id: 0
  effective_user_id: 0
  env:
    HISTSIZE: '100000'
    HOME: /root
    LANG: C.utf8
    LC_ALL: C.utf8
    LC_CTYPE: C.UTF-8
    LC_MEASUREMENT: de_CH.UTF-8
    LC_MESSAGES: C.utf8
    LC_MONETARY: de_CH.UTF-8
    LC_NUMERIC: de_CH.UTF-8
    LC_PAPER: de_CH.UTF-8
    LC_TIME: de_CH.UTF-8
    LOGNAME: root
    LS_COLORS: 'rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=01;05;37;41:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;40:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.zst=38;5;9:*.tzst=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.wim=38;5;9:*.swm=38;5;9:*.dwm=38;5;9:*.esd=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.mjpg=38;5;13:*.mjpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.m4a=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.oga=38;5;45:*.opus=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:'
    MAIL: /var/mail/root
    PATH: /sbin:/bin:/usr/sbin:/usr/bin
    PWD: /home/vagrant
    SHELL: /bin/bash
    SHLVL: '1'
    SUDO_COMMAND: /bin/sh -c echo BECOME-SUCCESS-qucmobtsgnccianxsbyhtflpqpcmmaza ; /usr/libexec/platform-python /home/vagrant/.ansible/tmp/ansible-tmp-1663612369.7760205-245841-188365991067997/AnsiballZ_setup.py
    SUDO_GID: '1000'
    SUDO_UID: '1000'
    SUDO_USER: vagrant
    TERM: xterm-256color
    USER: root
    _: /usr/libexec/platform-python
  eth0:
    active: true
    device: eth0
    features:
      esp_hw_offload: off [fixed]
      esp_tx_csum_hw_offload: off [fixed]
      fcoe_mtu: off [fixed]
      generic_receive_offload: 'on'
      generic_segmentation_offload: 'on'
      highdma: on [fixed]
      hw_tc_offload: off [fixed]
      l2_fwd_offload: off [fixed]
      large_receive_offload: off [fixed]
      loopback: off [fixed]
      netns_local: off [fixed]
      ntuple_filters: off [fixed]
      receive_hashing: off [fixed]
      rx_all: off [fixed]
      rx_checksumming: on [fixed]
      rx_fcs: off [fixed]
      rx_gro_hw: off [fixed]
      rx_gro_list: 'off'
      rx_udp_gro_forwarding: 'off'
      rx_udp_tunnel_port_offload: off [fixed]
      rx_vlan_filter: on [fixed]
      rx_vlan_offload: off [fixed]
      rx_vlan_stag_filter: off [fixed]
      rx_vlan_stag_hw_parse: off [fixed]
      scatter_gather: 'on'
      tcp_segmentation_offload: 'on'
      tls_hw_record: off [fixed]
      tls_hw_rx_offload: off [fixed]
      tls_hw_tx_offload: off [fixed]
      tx_checksum_fcoe_crc: off [fixed]
      tx_checksum_ip_generic: 'on'
      tx_checksum_ipv4: off [fixed]
      tx_checksum_ipv6: off [fixed]
      tx_checksum_sctp: off [fixed]
      tx_checksumming: 'on'
      tx_esp_segmentation: off [fixed]
      tx_fcoe_segmentation: off [fixed]
      tx_gre_csum_segmentation: off [fixed]
      tx_gre_segmentation: off [fixed]
      tx_gso_list: off [fixed]
      tx_gso_partial: off [fixed]
      tx_gso_robust: on [fixed]
      tx_ipxip4_segmentation: off [fixed]
      tx_ipxip6_segmentation: off [fixed]
      tx_lockless: off [fixed]
      tx_nocache_copy: 'off'
      tx_scatter_gather: 'on'
      tx_scatter_gather_fraglist: off [fixed]
      tx_sctp_segmentation: off [fixed]
      tx_tcp6_segmentation: 'on'
      tx_tcp_ecn_segmentation: 'on'
      tx_tcp_mangleid_segmentation: 'off'
      tx_tcp_segmentation: 'on'
      tx_tunnel_remcsum_segmentation: off [fixed]
      tx_udp_segmentation: off [fixed]
      tx_udp_tnl_csum_segmentation: off [fixed]
      tx_udp_tnl_segmentation: off [fixed]
      tx_vlan_offload: off [fixed]
      tx_vlan_stag_hw_insert: off [fixed]
      vlan_challenged: off [fixed]
    hw_timestamp_filters: []
    ipv4:
      address: 192.0.2.95
      broadcast: 192.0.2.255
      netmask: 255.255.255.0
      network: 192.0.2.0
    macaddress: 52:54:00:55:f4:62
    module: virtio_net
    mtu: 1500
    pciid: virtio2
    promisc: false
    speed: -1
    timestamping: []
    type: ether
  fibre_channel_wwn: []
  fips: false
  form_factor: Other
  fqdn: localhost.localdomain
  gather_subset:
  - all
  hostname: rocky8
  hostnqn: ''
  interfaces:
  - eth0
  - lo
  is_chroot: false
  iscsi_iqn: ''
  kernel: 4.18.0-372.19.1.el8_6.x86_64
  kernel_version: '#1 SMP Tue Aug 2 16:19:42 UTC 2022'
  lo:
    active: true
    device: lo
    features:
      esp_hw_offload: off [fixed]
      esp_tx_csum_hw_offload: off [fixed]
      fcoe_mtu: off [fixed]
      generic_receive_offload: 'on'
      generic_segmentation_offload: 'on'
      highdma: on [fixed]
      hw_tc_offload: off [fixed]
      l2_fwd_offload: off [fixed]
      large_receive_offload: off [fixed]
      loopback: on [fixed]
      netns_local: on [fixed]
      ntuple_filters: off [fixed]
      receive_hashing: off [fixed]
      rx_all: off [fixed]
      rx_checksumming: on [fixed]
      rx_fcs: off [fixed]
      rx_gro_hw: off [fixed]
      rx_gro_list: 'off'
      rx_udp_gro_forwarding: 'off'
      rx_udp_tunnel_port_offload: off [fixed]
      rx_vlan_filter: off [fixed]
      rx_vlan_offload: off [fixed]
      rx_vlan_stag_filter: off [fixed]
      rx_vlan_stag_hw_parse: off [fixed]
      scatter_gather: 'on'
      tcp_segmentation_offload: 'on'
      tls_hw_record: off [fixed]
      tls_hw_rx_offload: off [fixed]
      tls_hw_tx_offload: off [fixed]
      tx_checksum_fcoe_crc: off [fixed]
      tx_checksum_ip_generic: on [fixed]
      tx_checksum_ipv4: off [fixed]
      tx_checksum_ipv6: off [fixed]
      tx_checksum_sctp: on [fixed]
      tx_checksumming: 'on'
      tx_esp_segmentation: off [fixed]
      tx_fcoe_segmentation: off [fixed]
      tx_gre_csum_segmentation: off [fixed]
      tx_gre_segmentation: off [fixed]
      tx_gso_list: 'on'
      tx_gso_partial: off [fixed]
      tx_gso_robust: off [fixed]
      tx_ipxip4_segmentation: off [fixed]
      tx_ipxip6_segmentation: off [fixed]
      tx_lockless: on [fixed]
      tx_nocache_copy: off [fixed]
      tx_scatter_gather: on [fixed]
      tx_scatter_gather_fraglist: on [fixed]
      tx_sctp_segmentation: 'on'
      tx_tcp6_segmentation: 'on'
      tx_tcp_ecn_segmentation: 'on'
      tx_tcp_mangleid_segmentation: 'on'
      tx_tcp_segmentation: 'on'
      tx_tunnel_remcsum_segmentation: off [fixed]
      tx_udp_segmentation: 'on'
      tx_udp_tnl_csum_segmentation: off [fixed]
      tx_udp_tnl_segmentation: off [fixed]
      tx_vlan_offload: off [fixed]
      tx_vlan_stag_hw_insert: off [fixed]
      vlan_challenged: on [fixed]
    hw_timestamp_filters: []
    ipv4:
      address: 127.0.0.1
      broadcast: ''
      netmask: 255.0.0.0
      network: 127.0.0.0
    mtu: 65536
    promisc: false
    timestamping: []
    type: loopback
  lsb: {}
  lvm:
    lvs:
      root:
        size_g: '124.94'
        vg: rl_rocky8
      swap:
        size_g: '2.05'
        vg: rl_rocky8
    pvs:
      /dev/vda2:
        free_g: '0'
        size_g: '127.00'
        vg: rl_rocky8
    vgs:
      rl_rocky8:
        free_g: '0'
        num_lvs: '2'
        num_pvs: '1'
        size_g: '127.00'
  machine: x86_64
  machine_id: d0c58bb5ebf941b2b33f6c28dd11c25c
  memfree_mb: 2570
  memory_mb:
    nocache:
      free: 3198
      used: 533
    real:
      free: 2570
      total: 3731
      used: 1161
    swap:
      cached: 0
      free: 2103
      total: 2103
      used: 0
  memtotal_mb: 3731
  module_setup: true
  mounts:
  - block_available: 31970783
    block_size: 4096
    block_total: 32743680
    block_used: 772897
    device: /dev/mapper/rl_rocky8-root
    fstype: xfs
    inode_available: 65438971
    inode_total: 65505280
    inode_used: 66309
    mount: /
    options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
    size_available: 130952327168
    size_total: 134118113280
    uuid: 79d9dea6-fc39-42ce-bea2-ae14a5623cc9
  - block_available: 207979
    block_size: 4096
    block_total: 259584
    block_used: 51605
    device: /dev/vda1
    fstype: xfs
    inode_available: 523978
    inode_total: 524288
    inode_used: 310
    mount: /boot
    options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
    size_available: 851881984
    size_total: 1063256064
    uuid: 35de54a8-c56e-4f38-9c23-5ec570cfbea0
  nodename: rocky8.localdomain
  os_family: RedHat
  packages:
    MariaDB-client:
    - arch: x86_64
      epoch: null
      name: MariaDB-client
      release: 1.el8
      source: rpm
      version: 10.6.10
    MariaDB-common:
    - arch: x86_64
      epoch: null
      name: MariaDB-common
      release: 1.el8
      source: rpm
      version: 10.6.10
    MariaDB-server:
    - arch: x86_64
      epoch: null
      name: MariaDB-server
      release: 1.el8
      source: rpm
      version: 10.6.10
    # ...
    zlib:
    - arch: x86_64
      epoch: null
      name: zlib
      release: 18.el8_5
      source: rpm
      version: 1.2.11
  pkg_mgr: dnf
  proc_cmdline:
    BOOT_IMAGE: (hd0,msdos1)/vmlinuz-4.18.0-372.19.1.el8_6.x86_64
    biosdevname: '0'
    crashkernel: auto
    net.ifnames: '0'
    no_timer_check: true
    quiet: true
    rd.lvm.lv:
    - rl_rocky8/root
    - rl_rocky8/swap
    resume: /dev/mapper/rl_rocky8-swap
    rhgb: true
    ro: true
    root: /dev/mapper/rl_rocky8-root
  processor:
  - '0'
  - GenuineIntel
  - Intel Xeon Processor (Cooperlake)
  - '1'
  - GenuineIntel
  - Intel Xeon Processor (Cooperlake)
  processor_cores: 1
  processor_count: 2
  processor_nproc: 2
  processor_threads_per_core: 1
  processor_vcpus: 2
  product_name: Standard PC (i440FX + PIIX, 1996)
  product_serial: NA
  product_uuid: d0c58bb5-ebf9-41b2-b33f-6c28dd11c25c
  product_version: pc-i440fx-6.2
  python:
    executable: /usr/libexec/platform-python
    has_sslcontext: true
    type: cpython
    version:
      major: 3
      micro: 8
      minor: 6
      releaselevel: final
      serial: 0
    version_info:
    - 3
    - 6
    - 8
    - final
    - 0
  python_version: 3.6.8
  real_group_id: 0
  real_user_id: 0
  selinux:
    config_mode: enforcing
    mode: enforcing
    policyvers: 33
    status: enabled
    type: targeted
  selinux_python_present: true
  service_mgr: systemd
  ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMte51OUR/ZURw4fUQK1Uy0Y3kUJgOZf9/U4d5WQn0kHNV0ArEHlnmoq7FrWmr4Z3OG7hV/pVGr1d6hJ97jl+x4=
  ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256
  ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIL003rK9hTjPNRtO96xwAJysYgvKFQPauPmBvJfsu9rE
  ssh_host_key_ed25519_public_keytype: ssh-ed25519
  ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCukH7ofmYgwsevlh1W+f+Y92S0idExXGHjw/mttDOICNmJSqETJNwO8IlfigkpJQxC/XRQ8hYIhXfxZOVMoa5c5L+IYs+itgD9TN8v7nCfUVgjDlSyDqKQoo9Oom4CQy9geYYSJqbDE7Du6vJi+EEo0sxo179TdS2FjkxfvLCqS834KNVcanKzgxw11rq44AGwedWS84feI1fXTgkFNDNy3ZOglXC7H+FqjN3WUaahy81Ung9WVpEnsoWv9PyNP9xKRUmcNXRVdaOFICAsbX97p3FsNJG89j+row5kPvrkUrTHdj1BrfqRrhrsykbYxr08I/lq+LyVPGDB2POpxdXo2EtNrjTBcOzmGpKvVd73WR9kLcNfsGHsKmgtLhVGYZqA9GUm28SCzC7Z0B2wnKa9k72RB15cwErE2g0XtIJwLXLI1/yFhf6drhHao1CzRCtGuu45CsA3CtGU/mlnZcVVGn/1kkaTLdNYWKj7Rcs2cK/57645auAEfuDxYAiOA58=
  ssh_host_key_rsa_public_keytype: ssh-rsa
  swapfree_mb: 2103
  swaptotal_mb: 2103
  system: Linux
  system_capabilities: []
  system_capabilities_enforced: 'False'
  system_vendor: QEMU
  uptime_seconds: 188787
  user_dir: /root
  user_gecos: root
  user_gid: 0
  user_id: root
  user_shell: /bin/bash
  user_uid: 0
  userspace_architecture: x86_64
  userspace_bits: '64'
  virtualization_role: guest
  virtualization_tech_guest:
  - kvm
  virtualization_tech_host:
  - kvm
  virtualization_type: kvm
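
The facts shown above can be referenced in playbooks via the `ansible_facts` dictionary. A minimal sketch (the host pattern `all` and the selected fact keys are just examples taken from the output above):

```yaml
- hosts: all
  tasks:
    - name: Show a few of the gathered facts
      ansible.builtin.debug:
        msg: >-
          {{ ansible_facts['nodename'] }} ({{ ansible_facts['os_family'] }})
          has {{ ansible_facts['memtotal_mb'] }} MB RAM and uses
          {{ ansible_facts['pkg_mgr'] }} as package manager
```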

Troubleshooting

ModuleNotFoundError: No module named 'ansible.module_utils.six.moves'?

pip install --upgrade --user pip
pip install --force-reinstall --user ansible
Connection to the hosts no longer works, e.g. after a network interruption?

To force new connections, the persistent SSH connections ("ControlPaths") have to be deleted: rm ~/.ansible/cp/*.
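
If stale ControlPath sockets keep causing trouble, SSH connection multiplexing can be disabled entirely in ansible.cfg; the [ssh_connection] section and the ssh_args setting are standard Ansible options (the trade-off: every connection then pays the full SSH handshake again):

```ini
[ssh_connection]
# Disable OpenSSH connection multiplexing so no stale ControlPath
# sockets can accumulate; costs a new SSH handshake per connection.
ssh_args = -o ControlMaster=no -o ControlPath=none
```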

Proxy environment variables prevent the WinRM connection?

Explicitly disable the proxy for WinRM:

ansible_winrm_proxy: '~'
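
This variable is typically placed in the inventory or in group_vars for the Windows hosts. A minimal sketch, assuming a hypothetical group named `windows`:

```yaml
# group_vars/windows.yml -- "windows" is a placeholder group name
ansible_connection: winrm
ansible_winrm_proxy: '~'
```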

Built on 2024-11-18