1. Presentations

Get the presentations here.

2. Overview

After this lab you will have a basic understanding of how Ansible works and how to use and modify the predefined Ansible roles for installing SAP HANA. This lab uses the infrastructure of the Red Hat Partner Demo System.

The lab is pre-provisioned. Please go to the following webpage and select the following values:

  • Lab Code: P2H - SAP HANA Lab

  • Lab Key: Given in your session

  • Click Next

You will get your GUID and further access information for the lab.

After this lab you will also have a basic understanding of how to use and modify the predefined Ansible roles for installing SAP HANA together with Ansible Tower and Red Hat CloudForms.

2.1. Product Versions used in this lab:

Product                      Version
Red Hat CloudForms           4.6.4
Red Hat Enterprise Linux     7.4
Ansible Tower                3.3.0
Red Hat Satellite            6.4

2.2. Requirements to access and perform this lab

2.2.1. Base requirements

  • A computer with access to the Internet :-)

  • SSH client (for Microsoft Windows users PuTTY and WinSCP, or MobaXterm, is recommended)

  • Firefox 17 or higher, or Chromium / Chrome

2.2.2. Server Environment

A full new demo environment is deployed on every request. To make the environment unique, a 4-character identifier is assigned to it (e.g. 1e37); this identifier is referred to in this documentation as your GUID.

The demo environment consists of the following systems:

Hostname                 Internal IP   External name                       Description
tower.example.com        10.0.0.10     tower-GUID.rhpds.opentlc.com        Jump host and Ansible Tower host
cf.example.com           10.0.0.100    cf-GUID.rhpds.opentlc.com           CloudForms server
hana0.example.com        10.0.0.20     hana0-GUID.rhpds.opentlc.com        SAP HANA host
hana1.example.com        10.0.0.21     hana1-GUID.rhpds.opentlc.com        SAP HANA host
satellite.example.com    10.0.0.101    satellite-GUID.rhpds.opentlc.com    Red Hat Satellite server

3. Ansible foundation

3.1. Check that Ansible is installed

  1. Connect to the control node (tower):

    # ssh root@tower-GUID.rhpds.opentlc.com
  2. Check that Ansible is installed and usable:

    [root@tower ~]# ansible --version
    ansible 2.4.2.0
      config file = /etc/ansible/ansible.cfg
      configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
      ansible python module location = /usr/lib/python2.7/site-packages/ansible
      executable location = /usr/bin/ansible
      python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

3.2. Check the Prerequisites

Ansible keeps configuration management simple. Ansible requires no database or running daemons and can easily run on a laptop. On the managed hosts it needs no running agent.

Verify that the managed hosts accept password-less connections with key authentication from the tower-GUID node as user ansible, e.g.:

[root@tower-GUID ~]# su - ansible
[ansible@tower-GUID ~]$ ssh hana0.example.com
[ansible@hana0 ~]$ exit
[ansible@tower-GUID ~]$ ssh hana1.example.com
[ansible@hana1 ~]$ exit

To allow user ansible to execute commands on hana0.example.com and hana1.example.com as root, sudo needs to be configured on the managed hosts.

Test that the configuration allows ansible to run commands using sudo on hana0.example.com and hana1.example.com without a password, e.g.:

[ansible@tower-GUID ~]$ ssh hana0.example.com
[ansible@hana0 ~]$ sudo cat /etc/shadow
[ansible@hana0 ~]$ exit
In all subsequent exercises you should work as the ansible user on the control node if not explicitly told differently.

3.2.1. Working the Labs

You might have guessed by now that this lab is pretty command-line-centric… :-)

  • Don’t type everything manually, use copy & paste from the browser when appropriate. But don’t stop thinking about and understanding what you run… ;-)

  • All labs were prepared using Vi, but feel free to use mc (function keys can be reached via Esc-<n>) or nano.

In the lab guide, commands you are supposed to run are shown with or without the expected output, whichever makes more sense in the context.
The command line can wrap on the web page from time to time. Therefore, for better readability, the output is separated from the command line by an empty line. In any case, the line you should actually run can be recognized by the prompt. :-)

3.2.2. Challenge Labs

You will soon discover that many chapters in this lab guide come with a "Challenge Lab" section. These labs are meant to give you a small task to solve using what you have learned so far. The solution of the task is shown underneath a warning sign.

3.3. Getting Started with Ansible

3.3.1. The Inventory

To use the ansible command for host management, you need to provide an inventory file which defines a list of hosts to be managed from the control node. One way to do this is to specify the path to the inventory file with the -i option to the ansible command.

Make sure you are user ansible on tower-GUID. Create a directory for your Ansible files:

[ansible@tower-GUID ~]$ mkdir ansible-files

Now create a simple inventory file as ~/ansible-files/inventory with the following content:

hana0.example.com
hana1.example.com

To reference inventory hosts, you supply a host pattern to the ansible command. Ansible has a --list-hosts option which can be useful for clarifying which managed hosts are referenced by the host pattern in an ansible command.

The most basic host pattern is the name of a single managed host listed in the inventory file. It specifies that only this host from the inventory will be acted upon by the ansible command. Run:

[ansible@tower-GUID ~]$ ansible "hana0.example.com" -i ~/ansible-files/inventory --list-hosts

  hosts (1):
    hana0.example.com

An inventory file can contain a lot more information; it can organize your hosts in groups or define variables. You will use grouping most of the time. Change your inventory file to look like this:

[hanaserver]
hana0.example.com
hana1.example.com

[ftpserver]
hana1.example.com

Now run Ansible with these host patterns and observe the output:

[ansible@tower-GUID ~]$ ansible hanaserver -i ~/ansible-files/inventory --list-hosts
[ansible@tower-GUID ~]$ ansible ftpserver,hana0.example.com -i ~/ansible-files/inventory --list-hosts
[ansible@tower-GUID ~]$ ansible '*.example.com' -i ~/ansible-files/inventory --list-hosts
[ansible@tower-GUID ~]$ ansible all -i ~/ansible-files/inventory --list-hosts
It is OK to put systems in more than one group; for instance a server could be both an FTP server and a database server.
The inventory can contain more data. E.g. if you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Or you could define names specific to Ansible and have them point to the "real" IP or hostname.
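For illustration only, such an inventory could look like this (the non-standard port and the alias below are made-up examples, not part of this lab environment):

[hanaserver]
hana0.example.com
hana1.example.com:2222

[otherserver]
myhost ansible_host=10.0.0.99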

3.3.2. The Ansible Configuration Files

The behavior of Ansible can be customized by modifying settings in Ansible’s ini-style configuration file. Ansible will select its configuration file from one of several possible locations on the control node, please refer to the documentation.

The recommended practice is to create an ansible.cfg file in a directory from which you run Ansible commands. This directory would also contain any files used by your Ansible project, such as the inventory and Playbooks.

Make sure your inventory file is used by default when executing commands from the ~/ansible-files/ directory:

  • On tower-GUID as ansible create the file ~/ansible-files/ansible.cfg with the following content:

[defaults]
inventory=/home/ansible/ansible-files/inventory
  • Check with ansible --version, first from ansible’s home directory and then from ~/ansible-files/. You should find that, when run from ~/ansible-files/, your personal config settings override the main config file.

  • From ~/ansible-files/ run ansible all --list-hosts.

Your Ansible inventory was used without providing the -i option. To double-check, run the command again from outside ~/ansible-files/:

[ansible@tower-GUID ~]$ ansible all --list-hosts

 [WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

  hosts (0):

3.3.3. Running Ansible Ad-Hoc Commands

Ansible allows administrators to execute on-demand tasks on managed hosts. These ad hoc commands are the most basic operations that can be performed with Ansible. They are great for learning about Ansible, for trying new things or for quick non-intrusive tasks like reporting. Let’s try something straightforward:

Don’t forget to run the commands from ~/ansible-files/ where your ansible.cfg file is located, otherwise it will complain about an empty host list.

Run the examples on tower-GUID from the ~/ansible-files/ directory as user ansible.

[ansible@tower-GUID ansible-files]$ ansible all -m ping

The -m option defines which Ansible module to use. Options can be passed to the specified module using the -a option. By the way, the ping module does not run an ICMP ping but does a simple connection test.

Think of a module as a tool which is designed to accomplish a specific task.

3.3.4. Listing Modules and Getting Help

Ansible comes with a lot of modules by default. To list all modules run:

[ansible@tower-GUID ansible-files]$ ansible-doc -l
In ansible-doc use the up/down arrows to scroll through the content and leave with q.

To find a module try e.g.:

[ansible@tower-GUID ansible-files]$ ansible-doc -l | grep -i user

Get help for a specific module including usage examples:

[ansible@tower-GUID ansible-files]$ ansible-doc user
Mandatory options are marked by a "=" in ansible-doc.

3.3.5. More Ad Hoc Commands

Let’s try a simple module that just executes a command on a managed host:

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m command -a 'id'

hana0.example.com | SUCCESS | rc=0 >>
uid=1000(ansible) gid=1000(ansible) groups=1000(ansible),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

In this case the module is called command and the option passed with -a is the actual command to run. Try to run this ad hoc command on both hosts using the all host pattern.

Another example: Have a quick look at the kernel versions your hosts are running:

[ansible@tower-GUID ansible-files]$ ansible all -m command -a 'uname -r'

Sometimes it’s desirable to have the output for a host on one line:

[ansible@tower-GUID ansible-files]$ ansible all -m command -a 'uname -r' -o

Using the copy module, execute an ad hoc command on tower-GUID to change the contents of the /etc/motd file on hana0.example.com. The content is handed to the module through an option in this case.

Run:

Expect an error!
[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd'

Output:

hana0.example.com | FAILED! => {
    "changed": false,
    "checksum": "a314620457effe3a1db7e02eacd2b3fe8a8badca",
    "failed": true,
    "msg": "Destination /etc not writable"
}

The output should be all red for you, because the ad hoc command failed. Why? Because user ansible is not allowed to write the motd file.

Now this is a case for privilege escalation and the reason sudo has to be set up properly. We need to instruct Ansible to use sudo to run the command as root by using the parameter -b (think "become").

Ansible will connect to the machines using your current user name (ansible in this case), just like SSH would. To override the remote user name, you could use the -u parameter.
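For example, to connect as a different remote user you could run something like this (a sketch; the user otheruser is made up and does not exist in this lab):

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m command -a 'id' -u otheruser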

For us it’s okay to connect as ansible because sudo is set up. Change the command to use the -b parameter and run again:

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' -b

Output:

hana0.example.com | SUCCESS => {
    "changed": true,
    "checksum": "a314620457effe3a1db7e02eacd2b3fe8a8badca",
    "dest": "/etc/motd",
    "gid": 0,
    "group": "root",
    "md5sum": "7a924f6b4cbcbc7414eda7763dc0e43b",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:etc_t:s0",
    "size": 19,
    "src": "/home/ansible/.ansible/tmp/ansible-tmp-1472132609.82-261447806330276/source",
    "state": "file",
    "uid": 0
}

Check the motd file:

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m command -a 'cat /etc/motd'

hana0.example.com | SUCCESS | rc=0 >>
Managed by Ansible

Run the ansible hana0.example.com -m copy ... command from above again. Note:

  • the different output color (proper terminal config provided)

  • the change from "changed": true, to "changed": false,.

This makes it a lot easier to spot changes and what Ansible actually did.

3.3.6. Challenge Lab: Modules

  • Using ansible-doc

    • Find a module that uses Yum to manage software packages.

    • Look up the help examples for the module to learn how to install a package in the latest version

  • Run an Ansible ad hoc command to install the package "screen" in the latest version on hana0.example.com

Use the copy ad hoc command from above as a template and change the module and options.
Solution below!
[ansible@tower-GUID ansible-files]$ ansible-doc -l | grep -i yum
[ansible@tower-GUID ansible-files]$ ansible-doc yum
[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m yum -a 'name=screen state=latest' -b
Expect this command to fail if your servers are not registered properly against a Satellite or the Red Hat Network, or if they get their packages from other sources:
hana0.example.com | FAILED! => {
    "changed": false,
    "msg": "There are no enabled repos.\n Run \"yum repolist all\" to see the repos you have.\n To enable Red Hat Subscription Management repositories:\n     subscription-manager repos --enable <repo>\n To enable custom repositories:\n     yum-config-manager --enable <repo>\n",
    "rc": 1,
    "results": []

3.4. Ansible Playbooks: Introduction

While Ansible ad hoc commands are useful for simple operations, they are not suited for complex configuration management or orchestration scenarios.

Playbooks are files which describe the desired configurations or steps to implement on managed hosts. Playbooks can change lengthy, complex administrative tasks into easily repeatable routines with predictable and successful outcomes.

Here is a nice analogy: if Ansible modules are the tools in your workshop, then the inventory is the materials and the Playbooks are the instructions.

3.4.1. Playbook Basics

Playbooks are text files written in YAML format and therefore need:

  • to start with three dashes (---)

  • proper indentation using spaces and not tabs!

There are some important concepts:

  • hosts: the managed hosts to perform the tasks on

  • tasks: the operations to be performed by invoking Ansible modules and passing them the necessary options.

  • become: privilege escalation in Playbooks, same as using -b in the ad hoc command.

The ordering of the contents within a Playbook is important, because Ansible executes plays and tasks in the order they are presented.

A Playbook should be idempotent, so if a Playbook is run once to put the hosts in the correct state, it should be safe to run it a second time and it should make no further changes to the hosts.

Most Ansible modules are idempotent, so it is relatively easy to ensure this is true.
Try to avoid the command, shell, and raw modules in Playbooks. Because these take arbitrary commands, it is very easy to end up with non-idempotent Playbooks with these modules.
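If you do have to use the command module, you can often keep a task idempotent with its creates argument, which skips the command when the given file already exists. A minimal sketch (the marker file is just an example):

- name: run a command only if the marker file does not exist yet
  command: touch /tmp/ansible-marker
  args:
    creates: /tmp/ansible-marker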

3.5. Your first Playbook

Enough theory, it’s time to create your first Playbook. In this lab we follow chapter 3 of the Red Hat Enterprise Linux 7.x Configuration Guide for SAP HANA. First you create a playbook to register your servers against the local Satellite (satellite.example.com).

3.5.1. Playbook: Register Servers

The first step in this playbook makes sure the package containing the satellite credentials is installed on hana0.example.com.

You obviously need to use privilege escalation to install a package or run any other task that requires root permissions. This is done in the Playbook by become: yes.

On tower-GUID as user ansible create the file ~/ansible-files/00-register.yml with the following content:

---
- name: Satellite Registration
  hosts: hana0.example.com
  become: yes
  tasks:
  - name: ensure Satellite keys are installed
    yum:
      name: http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
      state: present

This shows one of Ansible’s strengths: the Playbook syntax is easy to read and understand. In this Playbook:

  • A name is given for the play

  • The host to run against and privilege escalation is configured

  • A task is defined and named, here it uses the module "yum" with the needed options.

3.5.2. Running Playbooks

Playbooks are executed using the ansible-playbook command on the control node. Before you run a new Playbook it’s a good idea to check for syntax errors:

[ansible@tower-GUID ansible-files]$ ansible-playbook --syntax-check 00-register.yml

Now you should be ready to run your Playbook:

[ansible@tower-GUID ansible-files]$ ansible-playbook 00-register.yml

Use SSH to make sure the key package has been installed on hana0.example.com.

[ansible@tower ansible-files]$ ssh hana0.example.com 'rpm -qi katello-ca-consumer-satellite.example.com'
Name        : katello-ca-consumer-satellite.example.com
Version     : 1.0
[...]

Or even better use an Ansible ad hoc command!

[ansible@tower ansible-files]$ ansible hana0.example.com -m command -a 'rpm -qi katello-ca-consumer-satellite.example.com'

Run the Playbook a second time.

The different colors, the "ok" and "changed" counters and the "PLAY RECAP" make it easy to spot what Ansible actually did.

3.5.3. Extend your Playbook: Register against Satellite

The next part of the playbook ensures that the server is registered against satellite and obtains a proper subscription. Satellite has a predefined activation key that is used to provide this information. On tower as user ansible edit the file ~/ansible-files/00-register.yml to add a second task using the redhat_subscription module. The Playbook should now look like this:

---
- name: Satellite Registration
  hosts: hana0.example.com
  become: yes
  tasks:
  - name: ensure latest Satellite keys are installed
    yum:
      name: http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
      state: present
  - name: ensure system is registered using known activation key
    redhat_subscription:
      activationkey: sap-hana
      org_id: RHPDS_Demo
      server_insecure: "yes"
      state: present

And again what it does is easy to understand:

  • a second task is defined

  • a module is specified (redhat_subscription)

  • options are supplied

As this is YAML take care of the correct indentation when copy/pasting!

Run your extended Playbook:

[ansible@tower-GUID ansible-files]$ ansible-playbook 00-register.yml
  • Note some tasks are shown as "ok" in green and one is shown as "changed" in yellow.

  • Use an Ansible ad hoc command again to make sure the system is now subscribed properly, e.g. with subscription-manager status (must be run as root!); see the sketch after this list.

  • Run the Playbook a second time to get used to the change in the output.
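A possible ad hoc command for this check (a sketch, with privilege escalation because subscription-manager must run as root):

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m command -a 'subscription-manager status' -b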

3.5.4. Extend your Playbook: Subscribe to the required channels

When you look now at the repositories hana0.example.com is subscribed to, you will recognize that the system is not subscribed to the EUS channels and is not pinned to a fixed minor release.

Currently there are no modules to do this (Ansible 2.7 will introduce a module for this), so you need to fall back to shell commands.

The shell commands to register the system against the Update Services (e4s repositories) and pin it to the required release are described in this KB-article.

  • subscription-manager release --set=7.4 to pin the release

  • subscription-manager repos --disable="*" --enable="rhel-7-server-e4s-rpms" --enable="rhel-sap-hana-for-rhel-7-server-e4s-rpms" to make sure only the two e4s repositories are configured

  • yum clean all to clean the cache

So we need the command or shell module to execute these commands.

On tower as user ansible edit the file ~/ansible-files/00-register.yml and add new tasks utilizing the command module. It should now look like this:

---
- name: Satellite Registration
  hosts: hana0.example.com
  become: yes
  tasks:
  - name: ensure latest Satellite keys are installed
    yum:
      name: http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
      state: present
  - name: ensure system is registered using known activation key
    redhat_subscription:
      activationkey: sap-hana
      org_id: RHPDS_Demo
      server_insecure: "yes"
      state: present
  - name: Set fixed OS release to 7.4
    command: 'subscription-manager release --set=7.4'
  - name: Enable required repositories
    command: 'subscription-manager repos --disable="*" --enable="rhel-7-server-e4s-rpms" --enable="rhel-sap-hana-for-rhel-7-server-e4s-rpms"'
  - name: Cleanup yum cache
    command: 'yum clean all'

You are getting used to the Playbook syntax, so what happens? The new tasks use the command module and run the commands that are passed as the only parameter.

Run your extended Playbook:

[ansible@tower ansible-files]$ ansible-playbook  00-register.yml
  • Have a good look at the output

  • Run the ad hoc command to verify the repository list again
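A possible ad hoc command for this verification (a sketch; the same check is used again in the next challenge lab):

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m command -a 'yum repolist' -b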

3.5.5. Challenge Lab: Playbooks

This was nice but the real power of Ansible is to apply the same set of tasks reliably to many hosts.

  • Change the 00-register.yml playbook to run on hana0.example.com and hana1.example.com.

There are multiple ways to do this; try to edit the "hanaserver" group in your inventory file to include both hosts and change your Playbook to use the group in hosts:
  • Run the playbook

  • Test using the ad hoc command to verify the repository list again

Solution below!

The changed inventory file:

[hanaserver]
hana0.example.com
hana1.example.com

The Playbook now pointing to the group "hanaserver":

---
- name: Satellite Registration
  hosts: hanaserver
  become: yes
  tasks:
  - name: ensure latest Satellite keys are installed
    yum:
      name: http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
      state: present
  - name: ensure system is registered using known activation key
    redhat_subscription:
      activationkey: sap-hana
      org_id: RHPDS_Demo
      server_insecure: "yes"
      state: present
  - name: Set fixed OS release to 7.4
    command: 'subscription-manager release --set=7.4'
  - name: Enable required repositories
    command: 'subscription-manager repos --disable="*" --enable="rhel-7-server-e4s-rpms" --enable="rhel-sap-hana-for-rhel-7-server-e4s-rpms"'
  - name: Cleanup yum cache
    command: 'yum clean all'

Run the Playbook:

[ansible@tower-GUID ansible-files]$ ansible-playbook 00-register.yml

And the command to check the repositories on both servers:

[ansible@tower ansible-files]$ ansible hanaserver -a "yum repolist" -b

Now that the servers are properly registered you can use the yum module to install the required packages as listed in the configuration guide.

As user ansible on tower create the playbook 01-packages.yml with the following content:

---
- name: Package Installation
  hosts: hanaserver
  become: yes
  tasks:

  - name: ensure base package group is installed
    yum:
      state: latest
      name: "@base"

  - name: ensure required packages are installed
    yum:
      state: latest
      name:
        - chrony
        - xfsprogs
        - libaio
        - net-tools
        - bind-utils
        - gtk2
        - libicu
        - xulrunner
        - tcsh
        - sudo
        - libssh2
        - expect
        - cairo
        - graphviz
        - iptraf-ng
        - krb5-workstation
        - krb5-libs
        - libpng12
        - nfs-utils
        - lm_sensors
        - rsyslog
        - openssl
        - PackageKit-gtk3-module
        - libcanberra-gtk2
        - libtool-ltdl
        - xorg-x11-xauth
        - numactl
        - tuned
        - tuned-profiles-sap-hana
        - compat-sap-c++-5
        - compat-sap-c++-6

3.6. Ansible Variables

3.6.1. Introduction

Ansible supports variables to store values that can be used in Playbooks. Variables can be defined in a variety of places and have a clear precedence. Ansible substitutes the variable with its value when a task is executed.

Variables are referenced in Playbooks by placing the variable name in double curly braces.

Here comes a variable {{ variable1 }}

The recommended practice is to define variables in files located in two directories named host_vars and group_vars:

  • To e.g. define variables for a group "servers", create a YAML file named group_vars/servers with the variable definitions.

  • To define variables specifically for a host "hana0.example.com", create the file host_vars/hana0.example.com with the variable definitions.

Host variables take precedence over group variables (more about precedence can be found in the docs).

3.6.2. Off to the Lab

For understanding and practice let’s do a lab. Following up on the theme "Let’s build HANA servers. Or two. Or even more…" you will change /etc/motd to show the environment (dev/prod) a server is deployed in when logging in.

On tower as user ansible create the directories to hold the variable definitions in ~/ansible-files/:

[ansible@tower-GUID ansible-files]$ mkdir host_vars group_vars

3.6.3. Create the Variable Files

Now create two files containing variable definitions. We’ll define a variable named stage which will point to different environments, dev or prod:

  • ~/ansible-files/group_vars/hanaserver with this content:

    ---
    stage: dev
  • ~/ansible-files/host_vars/hana1.example.com, content:

    ---
    stage: prod

What is this about?

  • For all servers in the hanaserver group the variable stage with value dev is defined. So as default we flag them as members of the dev environment.

  • For server "hana1.example.com" this is overriden and the host is flagged as a production server.

3.6.4. Create /etc/motd Files

Now create two files in ~/ansible-files/:

One called prod_motd with the following content:

This is a production database, take care!

And the other called dev_motd with the following content:

This is a development database, have fun!

3.6.5. Create the Playbook

Now you need a Playbook that copies the prod or dev motd file according to the "stage" variable.

Create a new Playbook called 01_motd.yml in the ~/ansible-files/ directory.

Note how the variable "stage" is used in the name of the file to copy.
---
- name: Copy motd
  hosts: hanaserver
  become: yes

  tasks:
  - name: copy motd
    copy:
      src: ~/ansible-files/{{ stage }}_motd
      dest: /etc/motd
  • Run the Playbook:

[ansible@tower-GUID ansible-files]$ ansible-playbook 01_motd.yml

3.6.6. Test the Result

The Playbook should copy different files as Message of the day to each host. Login to hana0.example.com and hana1.example.com to see the difference:

[ansible@tower ansible-files]$ ssh hana0
Last login: Tue Jan 22 10:50:51 2019 from tower.example.com
This is a development database, have fun!
[ansible@tower ansible-files]$ ssh hana1
Last login: Tue Jan 22 10:50:51 2019 from tower.example.com
This is a production database, take care!
If by now you think "There has to be a smarter way to change content in files…", you are absolutely right. This lab was done to introduce variables; you are about to learn about templates in one of the next labs.

3.7. Ansible Facts

Ansible facts are variables that are automatically discovered by Ansible from a managed host. Facts are pulled by the setup module and contain useful information stored into variables that administrators can reuse.

To get an idea what facts Ansible collects by default, on tower-GUID as user ansible from the ~/ansible-files/ directory run:

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m setup
You still remember why you have to run ansible from this directory?

This might be a bit too much. You can use filters to limit the output to certain facts; the filter expression is a shell-style wildcard:

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m setup -a 'filter=ansible_eth0'

Or what about only looking for memory related facts:

[ansible@tower-GUID ansible-files]$ ansible all -m setup -a 'filter=ansible_*_mb'

3.7.1. Challenge Lab: Facts

  • Try to find and print the distribution (Red Hat) of your managed hosts. On one line, please.

Use grep to find the fact, then apply a filter to only print this fact.
Solution below!
[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m setup | grep distribution
[ansible@tower-GUID ansible-files]$ ansible all -m setup -a 'filter=ansible_distribution' -o

3.7.2. Using Facts in Playbooks

Facts can be used in a Playbook like variables, using the proper naming, of course. Create this Playbook as facts.yml in the ~/ansible-files/ directory:

---
- name: Output facts within a playbook
  hosts: all
  tasks:
  - name: Prints Ansible facts
    debug:
      msg: The default IPv4 address of {{ ansible_fqdn }} is {{ ansible_default_ipv4.address }}
The "debug" module is handy for e.g. debugging variables or expressions.

Execute it to see how the facts are printed:

[ansible@tower-GUID ansible-files]$ ansible-playbook facts.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [hana0.example.com]
ok: [hana1.example.com]

TASK [Prints various Ansible facts] ********************************************
ok: [hana0.example.com] => {
    "msg": "The default IPv4 address of hana0.example.com is 10.0.0.20\n"
}
ok: [hana1.example.com] => {
    "msg": "The default IPv4 address of hana1.example.com is 10.0.0.21\n"
}

PLAY RECAP *********************************************************************
hana0.example.com          : ok=2    changed=0    unreachable=0    failed=0
hana1.example.com          : ok=2    changed=0    unreachable=0    failed=0

3.8. Ansible Conditionals

Ansible can use conditionals to execute tasks or plays when certain conditions are met.

To implement a conditional, the when statement must be used, followed by the condition to test. The condition is expressed using one of the available operators like e.g. for comparison:

==    Compares two objects for equality.
!=    Compares two objects for inequality.
>     True if the left-hand side is greater than the right-hand side.
>=    True if the left-hand side is greater than or equal to the right-hand side.
<     True if the left-hand side is lower than the right-hand side.
<=    True if the left-hand side is lower than or equal to the right-hand side.

For more on this, please refer to the documentation: http://jinja.pocoo.org/docs/2.9/templates/
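As a quick illustration, a task could be limited to hosts that report enough memory in their facts. This is only a sketch; the 32 GB threshold is an arbitrary example value:

- name: print a message only on hosts with at least 32 GB of RAM
  debug:
    msg: "{{ ansible_fqdn }} has enough memory"
  when: ansible_memtotal_mb >= 32768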

3.8.1. Inventory Group Membership in Conditional

The installation guide lists the required packages for SAP HANA in Appendix 7.1. As an example, let’s say you plan to use OFED on hana0.example.com; in that case you need to install additional packages.

Edit your inventory file and add hana0.example.com to the ofedserver group, so that the file looks like this:

[hanaserver]
hana0.example.com
hana1.example.com

[ofedserver]
hana0.example.com

As user ansible create this playbook on tower-GUID as 02-packages.yml in the ~/ansible-files/ directory, run it and examine the output.

you can also modify your 01-packages.yml playbook accordingly and add this task
---
- name: Install additional packages on servers that use OFED
  hosts: all
  become: yes

  vars:
    ofed_packages:
      - gcc
      - glib2
      - glibc-devel
      - glib2-devel
      - kernel-devel
      - libstdc++-devel
      - redhat-rpm-config
      - rpm-build
      - zlib-devel

  tasks:
    - name: Ensure packages for ofed are installed
      yum:
        name: "{{ ofed_packages }}"
        state: latest
      when: inventory_hostname in groups["ofedserver"]
The when statement must be placed "outside" of the module by being indented at the top level of the task.
Instead of putting the packages directly in the task, it is good practice to store them in a variable at the beginning of the playbook, or even better in the group_vars or host_vars files, in case you need to change them at a later point in time; see the sketch below.
If a module parameter starts with a variable it has to be placed in single or double quotes.
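Following that note, the package list could for instance be moved into a group variables file and the vars: block removed from the play. A sketch, assuming a file ~/ansible-files/group_vars/ofedserver (this file is not part of the prepared lab environment):

---
ofed_packages:
  - gcc
  - glib2
  - glibc-devel
  - glib2-devel
  - kernel-devel
  - libstdc++-devel
  - redhat-rpm-config
  - rpm-build
  - zlib-devel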

Expected outcome: The task is skipped on hana1.example.com because it is not in the ofedserver group in your inventory file:

[...]
TASK [Ensure packages for ofed are installed] *************************
skipping: [hana1.example.com]
changed: [hana0.example.com]
[...]

3.9. Ansible Handlers

Sometimes when a task does make a change to the system, a further task may need to be run. For example, a change to a service’s configuration file may then require that the service be reloaded so that the changed configuration takes effect.

Here Ansible’s handlers come into play. Handlers can be seen as inactive tasks that only get triggered when explicitly invoked using the "notify" statement.

As an example, let’s write a Playbook that:

  • manages the chrony time server configuration file /etc/chrony.conf on all hosts in the hanaserver group

  • restarts chronyd when the file has changed

First we need the file Ansible will deploy, let’s just take the one from tower-GUID:

[ansible@tower-GUID ansible-files]$ cp /etc/chrony.conf .

Then create the Playbook chrony_conf.yml:

---
- name: manage chrony.conf
  hosts: hanaserver
  become: yes
  tasks:
  - name: Copy chrony configuration file
    copy:
      src: chrony.conf
      dest: /etc/chrony.conf
    notify:
       - restart_chronyd
  handlers:
    - name: restart_chronyd
      service:
        name: chronyd
        state: restarted

So what’s new here?

  • The "notify" section calls the handler only when the copy task changed the file.

  • The "handlers" section defines a task that is only run on notification.

Run the Playbook. We didn’t change anything in the file yet so there should not be any changed lines in the output and of course the handler shouldn’t have fired.

  • Now change the NTP servers to North America pool servers by changing the server * lines in chrony.conf to:

server 0.north-america.pool.ntp.org iburst
server 1.north-america.pool.ntp.org iburst
server 2.north-america.pool.ntp.org iburst
server 3.north-america.pool.ntp.org iburst
  • Run the Playbook again. Now Ansible’s output should be a lot more interesting:

    • chrony.conf should have been copied over

    • The handler should have restarted chronyd

You can verify that chronyd is restarted by running systemctl status chronyd on each hana server.

You will see that the service is restarted in the Active: line:

[ansible@hana0-repl ~]$ systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-01-22 15:43:55 EST; 20s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 20175 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 20171 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 20173 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─20173 /usr/sbin/chronyd

Jan 22 15:43:55 hana0-repl systemd[1]: Starting NTP client/server...
Jan 22 15:43:55 hana0-repl chronyd[20173]: chronyd version 3.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 +DEBUG)
Jan 22 15:43:55 hana0-repl chronyd[20173]: Frequency -93.438 +/- 0.024 ppm read from /var/lib/chrony/drift
Jan 22 15:43:55 hana0-repl systemd[1]: Started NTP client/server.
Jan 22 15:44:02 hana0-repl chronyd[20173]: Selected source 198.46.223.227
Jan 22 15:44:09 hana0-repl chronyd[20173]: Received KoD RATE from 23.239.24.67
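You could also check both hosts at once from the control node with an ad hoc command (a sketch):

[ansible@tower-GUID ansible-files]$ ansible hanaserver -m command -a 'systemctl status chronyd' -b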

Feel free to change the chrony.conf file again and run the Playbook.

3.10. Ansible Templates

Ansible uses Jinja2 templating to modify files before they are distributed to managed hosts. Jinja2 is one of the most used template engines for Python (http://jinja.pocoo.org/).

3.10.1. Using Templates in Playbooks

When a template for a file has been created, it can be deployed to the managed hosts using the template module, which supports the transfer of a local file from the control node to the managed hosts.

As an example of using templates you will change the motd file to contain host-specific data.

In the ~/ansible-files/ directory on tower-GUID as user ansible create the template file motd-facts.j2:

Welcome to {{ ansible_hostname }}.
{{ ansible_distribution }} {{ ansible_distribution_version }}
deployed on {{ ansible_architecture }} architecture.
This system is running in stage {{ stage }}

In the ~/ansible-files/ directory on tower-GUID as user ansible create the Playbook motd-facts.yml:

---
- name: Fill motd file with host data
  hosts: hana0.example.com
  become: yes
  tasks:
    - template:
        src: motd-facts.j2
        dest: /etc/motd
        owner: root
        group: root
        mode: 0644

You have done this a couple of times by now:

  • Understand what the Playbook does.

  • Execute the Playbook motd-facts.yml

  • Login to hana0.example.com via SSH and check the message of the day.

  • Log out of hana0.example.com

You should see how Ansible replaces the variables with the facts it discovered from the system and the previously defined variable.

3.10.2. Challenge Lab

Change the template to use the FQDN hostname:

  • Find a fact that contains the fully qualified hostname using the commands you learned in the "Ansible Facts" chapter.

Do a grep -i for fqdn
  • Change the template to use the fact you found.

  • Run the Playbook again.

  • Check motd by logging in to hana0.example.com

Solution below!
  • Find the fact:

[ansible@tower-GUID ansible-files]$ ansible hana0.example.com -m setup | grep -i fqdn
  • Use the ansible_fqdn fact in the template motd-facts.j2.
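The adapted template could then look like this (only the first line changes compared to the original motd-facts.j2):

Welcome to {{ ansible_fqdn }}.
{{ ansible_distribution }} {{ ansible_distribution_version }}
deployed on {{ ansible_architecture }} architecture.
This system is running in stage {{ stage }}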

So you finished the first part of the training. But it doesn’t have to end here. We prepared some slightly more advanced bonus labs for you to follow through if you like.

3.11. Bonus Labs

If you are done with the labs and still have some time, here are some more labs for you:

3.11.1. Bonus Lab: Ad Hoc Commands

  • Create a new user "testuser" on servera and serverb using an ad hoc command

    • Find the parameters for the appropriate module using ansible-doc user (leave with q)

    • Use an Ansible ad hoc command to create the user with the comment "Test D User"

    • Use the "command" module with the proper invocation to find the userid

  • Delete the user and check it has been deleted

Remember privilege escalation…​
Solution below!

Your commands could look like these:

[ansible@tower-GUID ansible-files]$ ansible-doc -l | grep -i user
[ansible@tower-GUID ansible-files]$ ansible-doc user
[ansible@tower-GUID ansible-files]$ ansible all -m user -a "name=testuser comment='Test D User'" -b
[ansible@tower-GUID ansible-files]$ ansible all -m command -a " id testuser" -b
[ansible@tower-GUID ansible-files]$ ansible all -m user -a "name=testuser state=absent remove=yes" -b
[ansible@tower-GUID ansible-files]$ ansible all -m command -a " id testuser" -b

3.11.2. Bonus Lab: Change a Configuration File

This lab is about how to automate a pretty common sys admin task: Make sure a configuration file setting is configured in a certain way. As an example let’s make sure the SSH daemon is not accepting direct root logins.

You’ll need to learn about a new module: lineinfile. Here is your job:

  • Read the lineinfile doc

  • Create the Playbook no_sshd_root.yml to:

    • Use the module lineinfile with these parameters:

      • Use the dest option to specify the config file (/etc/ssh/sshd_config)

      • Use the line option to provide the proper config file value (use "PermitRootLogin no")

  • Configure a handler restart_sshd to restart sshd when the configuration was changed.

  • Test the SSH login as root, the password is the same as for everything else.

Solution below!
  • Create the Playbook no_sshd_root.yml

---
- name: no root login to sshd
  hosts: all
  become: yes
  tasks:
  - name: change sshd config file
    lineinfile:
      dest: /etc/ssh/sshd_config
      line: "PermitRootLogin no"
    notify:
       - restart_ssh
  handlers:
    - name: restart_ssh
      service:
        name: sshd
        state: restarted
  • Run it and check the SSH login as root:

[ansible@tower-GUID ansible-files]$ ansible-playbook no_sshd_root.yml
[ansible@tower-GUID ansible-files]$ ssh root@hana0.example.com
root@hana0.example.com's password:
Permission denied, please try again.

3.11.3. Bonus Lab: Continue your playbook to prepare the system for HANA

You have learned the basics about Ansible modules, templates, variables and handlers. Let’s combine all of these.

You already have implemented a couple of things from the RHEL 7 Configuration Guide for SAP HANA, so you can now continue step by step creating playbooks to automate the preparation for HANA.

To do this you need more modules, listed in the module index of the Ansible documentation. The most useful will be:

  • lineinfile: change the content of a file

  • hostname: set the hostname of a system

  • shell or command: run a bash command or script

  • selinux: use to disable selinux

  • sysctl: set kernel parameters

  • file: create files, symbolic links, directories etc.

  • service or systemd: manage services

The online documentation contains several examples of how to use them. If you don’t want to implement everything on your own, continue with the next chapter of the training.
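To give you an idea how these modules fit together, here is a rough sketch of a preparation playbook. The parameter values are illustrative only; take the real settings from the configuration guide and the relevant SAP notes:

---
- name: Prepare servers for SAP HANA (sketch)
  hosts: hanaserver
  become: yes
  tasks:
  - name: disable SELinux (example setting, check the configuration guide)
    selinux:
      state: disabled

  - name: set a kernel parameter (example value only)
    sysctl:
      name: vm.max_map_count
      value: "2147483647"
      state: present

  - name: activate the sap-hana tuned profile (not idempotent, shown for simplicity)
    command: tuned-adm profile sap-hana

  - name: ensure tuned is enabled and running
    service:
      name: tuned
      state: started
      enabled: yes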

4. SAP HANA on RHEL 7 with Ansible

In this chapter you will be introduced to the concept of Ansible roles. An Ansible role is a collection of tasks, parameterized with variables, that fulfills a certain purpose. Red Hat is developing and supporting roles to make system administration tasks easier and reproducible.

You will learn how to use these roles in your playbooks, e.g. to configure and install SAP HANA.

4.1. Register for and download HANA Express

To deploy HANA you need to use your own SAP license. The fastest and easiest way of getting access to SAP HANA is to download SAP HANA Express.

HANA Express is a reduced version of SAP HANA and requires fewer resources than the HANA Platform Edition. It lacks at least the following features:

  • Smart Data Integration (SDI)

  • Smart Data Streaming

  • System Replication (HSR)

  • Dynamic tiering

For a full list see the HANA Express FAQ

For this quickstart guide you need to use the binary installer method as described in detail on this page: https://www.sap.com/developer/tutorials/hxe-ua-installing-binary.html

As the SAP tutorial only describes the graphical interface, which is not feasible for the training server in the cloud, you need to do the following:

  1. Register for HANA Express at https://www.sap.com/sap-hana-express

  2. Download the platform-independent installer (HXEDownloadManager.jar)

  3. Copy the Installer to your workstation, e.g. from Linux or Mac:

    $ scp HXEDownloadManager.jar root@tower-e5ba.rhpds.opentlc.com:/export
    HXEDownloadManager.jar                             100%  561KB 971.1KB/s   00:00

    If you use MobaXterm on Windows, just log in to tower and drag the jar file to the dialog box on the left, or use WinSCP to upload the file

  4. Login to the tower server and switch to /export

    $ ssh root@tower-GUID.rhpds.opentlc.com
    [root@tower ~]# cd /export
    [root@tower export]# ls
    HXEDownloadManager.jar
  5. Download Hana express

    [root@tower export]# java -jar HXEDownloadManager.jar -d . linuxx86_64 installer hxe.tgz
    Connecting to download server...
    
    SAP HANA, express edition version: 2.00.033.00.20180925.2
    
    WARNING: The package(s) you chose to download require a minimum of 8 GB of memory to install.  You only have 3 GB on this system.
    Downloading "Server only installer"...
    hxe.tgz : 100%
    Concatenate download files to ./hxe.tgz...
    ./hxe.tgz created.
    Verify ./hxe.tgz file checksum...
    ./hxe.tgz file checksum is OK.
  6. unpack HANA Express

    [root@tower export]# tar xzvf hxe.tgz
    setup_hxe.sh
    HANA_EXPRESS_20/change_key.sh
    HANA_EXPRESS_20/hxe_gc.sh
    HANA_EXPRESS_20/hxe_optimize.sh
    [...]

4.2. Update the ansible environment

In the previous part of the training you learned how to create basic playbooks, set up an inventory file and get started. Update your environment like this:

  1. Make sure your inventory file is used by default when executing commands from the ~/ansible-files/ directory. On tower-GUID as ansible create the file ~/ansible-files/ansible.cfg with the following content:

    [defaults]
    inventory=/home/ansible/ansible-files/inventory
  2. Create an inventory file with one hana host in this directory

    $ rm ~ansible/ansible-files/inventory
    $ echo "[hana]" > ~ansible/ansible-files/inventory
    $ echo "hana1.example.com" >> ~ansible/ansible-files/inventory
  3. Remove existing files in the subdirectories group_vars and host_vars

    $ rm /home/ansible/ansible-files/group_vars/*
    $ rm /home/ansible/ansible-files/host_vars/*
  4. Test with an ad-hoc command that the ansible connection is working:

    [ansible@tower-GUID ~]$ ansible -m ping hana1.example.com
    hana1.example.com | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }

4.3. Install SAP HANA with ansible roles

4.3.1. Find available roles for SAP deployment

On Ansible Galaxy a lot of ready-to-use roles exist. Red Hat maintains the linux-system-roles, which are upstream of the supported RHEL System Roles.

See the following pages for more details:

Now find the roles that are useful for installing SAP HANA on http://galaxy.ansible.com

Search for "SAP" and you will find the following roles:

These and some other useful roles can be found here: Roles Overview

4.3.2. Install SAP HANA using these Roles

Check and install the above roles on tower as user ansible. Get familiar with these roles, e.g. read the documentation of each of the above roles and browse the roles themselves:

[ansible@tower ~]$ ansible-galaxy list
- linux-system-roles.kdump, (unknown version)
- linux-system-roles.network, (unknown version)
- linux-system-roles.postfix, (unknown version)
- linux-system-roles.selinux, (unknown version)
- linux-system-roles.timesync, (unknown version)
- rhel-system-roles.kdump, (unknown version)
- rhel-system-roles.network, (unknown version)
- rhel-system-roles.postfix, (unknown version)
- rhel-system-roles.selinux, (unknown version)
- rhel-system-roles.timesync, (unknown version)
- mk-ansible-roles.disk-init, master
- mk-ansible-roles.saphana-deploy, master
- mk-ansible-roles.saphana-hsr, master
- mk-ansible-roles.saphana-preconfigure, master
- mk-ansible-roles.subscribe-rhn, master
If the roles are not installed yet use the following command to install them: ansible-galaxy install mk-ansible-roles.saphana-preconfigure mk-ansible-roles.saphana-deploy mk-ansible-roles.disk-init linux-system-roles.timesync mk-ansible-roles.subscribe-rhn
Global roles can be installed (as root) to /usr/share/ansible/roles or /etc/ansible/roles using the -p option; by default roles are installed to ${HOME}/.ansible/roles. You need to set your roles_path in ansible.cfg appropriately; see the sketch below.
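For example, ~/ansible-files/ansible.cfg could be extended like this (a sketch; the roles_path below simply lists the per-user default location first):

[defaults]
inventory=/home/ansible/ansible-files/inventory
roles_path=/home/ansible/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles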

Now go ahead and read the Readme of the saphana-preconfigure role either on the web (easier to read) or on the command line:

[ansible@tower-GUID ~]$ ansible-galaxy info mk-ansible-roles.saphana-preconfigure
[ansible@tower-GUID ~]$ ansible-galaxy info mk-ansible-roles.saphana-deploy
[ansible@tower-GUID ~]$ ansible-galaxy info mk-ansible-roles.subscribe-rhn
Information you should use in your playbooks

Now write a playbook to prepare hana1.example.com for the HANA installation. The preconditions are the following:

  1. The lab environment already has a Red Hat Enterprise Linux 7.4 base server installed, but the channels for the HANA installation are not correct. Use the mk-ansible-roles.subscribe-rhn role to set the correct repositories. For this use the following variables:

    • Satellite subscription

      satellite_server: satellite.example.com
      reg_activation_key: sap-hana
      reg_organization_id: RHPDS_Demo
      reg_server_insecure: yes
    • Pin the release to RHEL 7.4

      reg_osrelease: 7.4
    • remove all previously set repositories

      repo_reset: true
    • subscribe to the following repositories for 4 years of update services (E4S)

      repositories:
                    - rhel-sap-hana-for-rhel-7-server-e4s-rpms
                    - rhel-7-server-e4s-rpms
      you will find more details on the Update Services for SAP in our Knowledgebase
  2. Use the linux-system-roles.timesync role to define your time servers. Use the following parameters:

    ntp_servers:
            - hostname: 0.rhel.pool.ntp.org
              iburst: yes
            - hostname: 1.rhel.pool.ntp.org
              iburst: yes
            - hostname: 2.rhel.pool.ntp.org
              iburst: yes
            - hostname: 3.rhel.pool.ntp.org
              iburst: yes
  3. Network setup is already done. In your environment you could use linux-system-roles.network. See RHEL System Roles for more info.

  4. If you log in to hana1.example.com you will realize that the disks are not configured. For a quick configuration use the role mk-ansible-roles.disk-init with the following parameters:

    disks:
            /dev/vdb: vg00
    logvols:
            hana_shared:
                    size: 32G
                    vol: vg00
                    mountpoint: /hana/shared
            hana_data:
                    size: 32G
                    vol: vg00
                    mountpoint: /hana/data
            hana_logs:
                    size: 16G
                    vol: vg00
                    mountpoint: /hana/logs
            usr_sap:
                    size: 50G
                    vol: vg00
                    mountpoint: /usr/sap
  5. To do all preconfiguration steps for SAP HANA which are described in the SAP Note 2292690 use mk-ansible-roles.saphana-preconfigure

    • The SAP installation media is provided on an NFS server which is mounted automatically during the installation by setting the following parameters:

      # SAP-Media Check
      install_nfs: "tower.example.com:/export"
      installroot: /install/hxe
      installversion: "HANA_EXPRESS_20"
      hana_installdir: "{{ installroot + '/' + installversion }}"
    • For the preparation of SAP users and hostagent use the following variables.

      hana_pw_hostagent_ssl: "Ab01%%bA"
      id_user_sapadm: "30200"
      id_group_shm: "30220"
      id_group_sapsys: "30200"
      pw_user_sapadm_clear: "Adm12356"
  6. To install the SAP HANA database, use the role mk-ansible-roles.saphana-deploy. For this role you need to add the instance-specific parameters to the corresponding host_vars file:

    • The first parameter to set is the hostname/interface name of the interface the SAP hostagent will use to communicate. If you just have one interface, use "{{ ansible_hostname }}", which is the default value.

      hostname: "{{ ansible_hostname }}"
    • The second parameter to set is whether you want to prepare the installation or execute the installation. Set it to true if you want to run hdblcm

      deployment_instance: true
    • Now describe your instance. These variables are similar to the unattended install file:

      instances:
        instance01:
          hdblcm_params: "--ignore=check_min_mem,check_platform"
          id_user_sidadm: "30210"
          pw_user_sidadm: "Adm12356"
          hana_pw_system_user_clear: "System123"
          hana_components: "client,server"
          hana_system_type: "Master"
          id_group_shm: "30220"
          hana_instance_hostname: "{{ ansible_hostname }}"
          hana_addhosts:
          hana_sid: "HXE"
          hana_instance_number: "90"
          hana_system_usage: custom
The backend you are using in this course is a test environment that is not officially supported by SAP; as such, depending on the HANA installer version, the installation prerequisite checks may fail. To be safe add the following parameter to your instance: hdblcm_params: "--ignore=check_platform"
In case of deploying a HANA scale-out cluster only one server must have deployment_instance: true; all others need this variable to be unset. The hosts of the scale-out cluster need to be listed in hana_addhosts.
If you want to install multiple HANA instances on one server you can add more than one instance here and the installer will loop over these instances.
The variable information should be split into appropriate group_vars and host_vars files, because some information is shared across all servers (group_vars/all) or across all SAP HANA servers (group_vars/hana), while other information is specific to the host itself (host_vars/hana1.example.com).

Now create your var files and playbook to run the installation. After the installation has finished, log into hana1.example.com and assume user hxeadm to see if SAP HANA is running:

[root@hana1-GUID ~]# su - hxeadm
Last login: Fri May 11 18:26:48 EDT 2018
hxeadm@hana1-GUID:/usr/sap/HXE/HDB90> HDB info
USER       PID  PPID %CPU    VSZ   RSS COMMAND
hxeadm   11618 11617  1.6 116308  2940 -bash
hxeadm   11680 11618  2.0 113260  1640  \_ /bin/sh /usr/sap/HXE/HDB90/HDB info
hxeadm   11711 11680  0.0 151040  1804      \_ ps fx -U hxeadm -o user,pid,ppid,pcpu,vsz,rss,args
hxeadm    6805     1  0.0  43232  1888 sapstart pf=/hana/shared/HXE/profile/HXE_HDB90_hana1-GUID
hxeadm    6814  6805  0.1 225944 31780  \_ /usr/sap/HXE/HDB90/hana1-GUID/trace/hdb.sapHXE_HDB90 -d -nw -f /usr/sap/HXE/HDB90/hana1-GUID/daemon.ini pf=/usr/sap/HXE/SYS/profile/HXE_HDB90_hana1-GUID
hxeadm    6830  6814 53.7 7641816 5200160      \_ hdbnameserver
hxeadm    7149  6814  1.3 1254272 259132      \_ hdbcompileserver
hxeadm    7151  6814 57.3 3253036 2306784      \_ hdbpreprocessor
hxeadm    7194  6814 51.7 7298972 5381920      \_ hdbindexserver -port 39003
hxeadm    7196  6814  3.3 2038712 936348      \_ hdbxsengine -port 39007
hxeadm    8293  6814  1.8 1567760 292932      \_ hdbwebdispatcher
hxeadm    6726     1  0.4 519388 23088 /usr/sap/HXE/HDB90/exe/sapstartsrv pf=/hana/shared/HXE/profile/HXE_HDB90_hana1-GUID -D -u hxeadm
Solution Below

You need to create the following files:

  1. The required playbook: ./install-hana.yml:

    ---
    - name: Install SAP HANA
      hosts: hana
      become: yes
    
      roles:
                  - mk-ansible-roles.subscribe-rhn
                  - linux-system-roles.timesync
                  - mk-ansible-roles.disk-init
                  - mk-ansible-roles.saphana-preconfigure
                  - mk-ansible-roles.saphana-deploy
  2. The required group_vars file: ./group_vars/hana

    ---
    #####################################################
    # Default Subscription Information for HANA Servers
    # used in: mk-ansible-roles.rhn-subscribe
    #
    satellite_server: satellite.example.com
    reg_activation_key: sap-hana
    reg_organization_id: RHPDS_Demo
    reg_server_insecure: yes
    reg_osrelease: 7.4
    
    # Can be set to false
    repo_reset: true
    
    repositories:
         - rhel-7-server-e4s-rpms
         - rhel-sap-hana-for-rhel-7-server-e4s-rpms
    
    #####################################################
    #
    # Default Timeserver settings
    # used in: rhel-system-roles.timeserver
    #
    ntp_servers:
            - hostname: 0.rhel.pool.ntp.org
              iburst: yes
            - hostname: 1.rhel.pool.ntp.org
              iburst: yes
            - hostname: 2.rhel.pool.ntp.org
              iburst: yes
            - hostname: 3.rhel.pool.ntp.org
              iburst: yes
    
    ######################################################
    #
    # Default settings
    # used in the hana deployment roles
    #
    
    # SAP-Media Check
    install_nfs: "tower.example.com:/export"
    installroot: /install/hxe
    installversion: "HANA_EXPRESS_20"
    hana_installdir: "{{ installroot + '/' + installversion }}"
    
    hana_pw_hostagent_ssl: "Ab01%%bA"
    id_user_sapadm: "30200"
    id_group_shm: "30220"
    id_group_sapsys: "30200"
    pw_user_sapadm_clear: "Adm12356"
  3. The required host_vars file: ./host_vars/hana1.example.com:

    ---
    #### Disk Configuration
    disks:
            /dev/vdb: vg00
    logvols:
            hana_shared:
                    size: 32G
                    vol: vg00
                    mountpoint: /hana/shared
            hana_data:
                    size: 32G
                    vol: vg00
                    mountpoint: /hana/data
            hana_logs:
                    size: 16G
                    vol: vg00
                    mountpoint: /hana/logs
            usr_sap:
                    size: 50G
                    vol: vg00
                    mountpoint: /usr/sap
    
    #### HANA Configuration
    hostname: "{{ ansible_hostname }}"
    
    deployment_instance: true
    
    instances:
      instance01:
        hdblcm_params: "--ignore=check_min_mem,check_platform"
        id_user_sidadm: "30210"
        pw_user_sidadm: "Adm12356"
        hana_pw_system_user_clear: "System123"
        hana_components: "client,server"
        hana_system_type: "Master"
        id_group_shm: "30220"
        hana_instance_hostname: "{{ ansible_hostname }}"
        hana_addhosts:
        hana_sid: "HXE"
        hana_instance_number: "90"
        hana_system_usage: custom

Now kick off the installation as user ansible on tower-GUID:

[ansible@tower-GUID ~]$ ansible-playbook install-hana.yml
Run with -vvv to increase the debug level and get more information about what is happening.

You finished your lab deploying SAP HANA fully automated. You now know the basics and should be able to integrate this with Satellite, Ansible Tower and even CloudForms. To learn about these tools join us in one of the upcoming management classes.

4.4. Bonus Labs

4.4.1. Install and configure Insights

If you have access to your own subscription, run subscription-manager unregister and register the server against your subscription. Then follow the instructions of the Getting Started guide.

Solution Below

As we are in an Ansible training, we use an Ansible playbook to set up Insights.

  1. Install the insights role from galaxy:

    # ansible-galaxy install redhataccess.redhat-access-insights-client
  2. Create a playbook install-insights.yml to install and configure Insights

    # Playbook installing Insights
    ---
    - hosts: hana1.example.com
      become: yes
      roles:
      - { role: redhataccess.redhat-access-insights-client, when: ansible_os_family == 'RedHat' }
  3. Run the playbook

    # ansible-playbook install-insights.yml
  4. Go to the Red Hat Insights portal to see the results

4.4.2. Upgrade HANA Server

Do you know how to upgrade SAP HANA servers

  1. with new RHEL patches?

  2. to a new RHEL minor release (reg_osrelease)?

Solution Below
  1. Update the system to the latest patches in the current RHEL minor release:

    • Make sure you have your release set to the current minor release:

      [root@hana1-GUID ~]# subscription-manager release
      Release: 7.4
    • update the system:

      [root@hana1-GUID ~]# yum -y update
    • stop the HANA database:

      [root@hana1-GUID ~]# su - hxeadm
      hxeadm@hana1-GUID.rhpds:/usr/sap/HXE/HDB90> HDB stop
    • reboot:

      [root@hana1-GUID ~]# reboot

    • login and start HANA again (in case it is not started automatically)

      [root@workstation-GUID ~]# ssh hana1.example.com
      [root@hana1-GUID ~]# su - hxeadm
      hxeadm@hana1-GUID.rhpds:/usr/sap/HXE/HDB90> HDB start
  2. Update the system to the latest patches in a newer RHEL minor release:

    • Check available releases:

      [root@hana1-GUID ~]# subscription-manager release --list
      +-------------------------------------------+
                Available Releases
      +-------------------------------------------+
      7.0
      7.1
      7.2
      7.3
      7.4
      7.5
      7Server
    • Set the desired release 7.5

      [root@hana1-GUID ~]# subscription-manager release --set 7.5
      Release set to: 7.5
    • update the systems and reboot as described in the previous steps

Do you really want to do this manually? If not, here is a playbook that covers both upgrade scenarios. If you change reg_osrelease, an upgrade to another RHEL release will be performed.

- name: Update Hana Server
  hosts: hana1.example.com
  become: yes

  vars:
              # Repositories setup
              reg_osrelease: 7.5
              repo_reset: false
              repositories:
                 - rhel-7-server-e4s-rpms
                 - rhel-sap-hana-for-rhel-7-server-e4s-rpms

              sid: hxe

  roles:
              ## We can use this role to change repositories if we need to, and to switch the minor release
              - { role: mk-ansible-roles.subscribe-rhn }

  tasks:
              ## update the system
              - name: ensure the system is updated
                yum: name=* state=latest

              ## stop database
              - name: ensure HANA is stopped
                command: su - "{{ sid + 'adm' }}" -c "HDB stop"

              # Reboot the server now and wait until it is back
              # inspired by https://support.ansible.com/hc/en-us/articles/201958037-Reboot-a-server-and-wait-for-it-to-come-back
              - name: restart machine if required
                shell: sleep 2 && shutdown -r now "Ansible updates triggered"
                async: 1
                poll: 0
                become: true
                ignore_errors: true

              - name: waiting for server to come back
                local_action: wait_for host={{ inventory_hostname }} port=22 state=started delay=90 sleep=2 timeout=900
                become: false

              ## start database again
              - name: ensure HANA is started
                command: su - "{{ sid + 'adm' }}" -c "HDB start"
Read the man page for needs-restarting and enhance the playbook by using needs-restarting -r; a possible sketch follows after the next list.
You could also think about splitting this playbook into separate roles that can be reused in different playbooks, such as:
  • Stop HANA instances

  • Start HANA instances

  • Update Server. In this role you could implement reboot as a handler, so that a system is only rebooted if a new kernel or other patch which requires a reboot is installed
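A possible sketch for the needs-restarting enhancement, assuming the yum-utils package (which provides needs-restarting) is installed on the HANA host; the existing reboot and wait_for tasks would get the same when condition:

- name: check whether a reboot is required
  command: needs-restarting -r
  register: reboot_required
  changed_when: false
  failed_when: reboot_required.rc not in [0, 1]

- name: restart machine if required
  shell: sleep 2 && shutdown -r now "Ansible updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
  when: reboot_required.rc == 1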

5. The End

Congratulations, you finished your labs! We hope you enjoyed your first steps using Ansible as much as we enjoyed creating the labs.