A reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing an OpenShift Origin cluster. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
While RHEL Atomic Host is supported for running containerized OpenShift Origin services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host; the installation must therefore be run from a supported version of Fedora, CentOS, or RHEL. The host initiating the installation does not need to be intended for inclusion in the OpenShift Origin cluster, but it can be.
To install OpenShift Origin as a stand-alone registry, see Installing a Stand-alone Registry.
Before installing OpenShift Origin, you must first see the Prerequisites topic to prepare your hosts, which includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 2.2.0 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
If you are interested in installing OpenShift Origin using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to ensure that you understand the differences between these methods, then return to this topic to continue.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible.
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook to use during the installation. The inventory file describes the configuration for your OpenShift Origin cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation. The examples describe various environment topologies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.
To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose |
---|---|
openshift_hostname | This variable overrides the internal cluster host name for the system. Use this when the system's default IP address does not resolve to the system host name. |
openshift_public_hostname | This variable overrides the system's public host name. Use this for cloud installations, or for hosts on networks using network address translation (NAT). |
openshift_ip | This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. |
openshift_public_ip | This variable overrides the system's public IP address. Use this for cloud installations, or for hosts on networks using network address translation (NAT). |
containerized | If set to true, containerized OpenShift Origin services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. The containerized method is required for RHEL Atomic Host and is selected automatically based on detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. |
openshift_node_labels | This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details. |
openshift_node_kubelet_args | This variable is used to configure kubeletArguments on node hosts (see the openshift_node_kubelet_args setting in the multiple masters example inventory below for an example). |
openshift_router_selector | Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
openshift_registry_selector | Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
openshift_docker_options | This variable configures additional Docker options within /etc/sysconfig/docker, such as the options used in Managing Container Logs. Example usage: "--log-driver json-file --log-opt max-size=1M --log-opt max-file=3". |
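As an illustration only, several of these host variables can be combined on a single entry in the [nodes] section; the host name and values below are placeholders reusing options shown elsewhere in this topic:

[nodes]
node1.example.com openshift_public_hostname=node1.public.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_docker_options="--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"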
To assign environment variables during the Ansible install that apply more globally to your OpenShift Origin cluster overall, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_default_subdomain=apps.test.example.com
The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:
Variable | Purpose |
---|---|
ansible_ssh_user | This variable sets the SSH user for the installer to use and defaults to root. This user should allow SSH-based authentication without requiring a password. If using SSH key-based authentication, then the key should be managed by an SSH agent. |
ansible_become | If ansible_ssh_user is not root, this variable must be set to true and the user must be configured for passwordless sudo. |
debug_level | This variable sets which INFO messages are logged to the systemd-journald log. For more information on debug log levels, see Configuring Logging Levels. |
containerized | If set to true, containerized OpenShift Origin services are run on all target master and node hosts in the cluster instead of installed using RPM packages. If set to false or unset, the default RPM method is used. The containerized method is required for RHEL Atomic Host and is selected automatically based on detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. |
openshift_master_cluster_hostname | This variable overrides the host name for the cluster, which defaults to the host name of the master. |
openshift_master_cluster_public_hostname | This variable overrides the public host name for the cluster, which defaults to the host name of the master. |
openshift_master_cluster_method | Optional. This variable defines the HA method when deploying multiple masters. Supports the native method. See Multiple Masters for more information. |
openshift_rolling_restart_mode | This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to services, which allows rolling restarts of services on the masters; it can instead be set to system, which enables rolling, full restarts of the master hosts. |
os_sdn_network_plugin_name | This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to redhat/openshift-ovs-subnet for the standard SDN plug-in. Set the variable to redhat/openshift-ovs-multitenant to use the multitenant plug-in. |
openshift_master_identity_providers | This variable overrides the identity provider, which defaults to Deny All. |
openshift_master_named_certificates, openshift_master_overwrite_named_certificates | These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
openshift_master_session_name, openshift_master_session_max_seconds, openshift_master_session_auth_secrets, openshift_master_session_encryption_secrets | These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
openshift_portal_net | This variable configures the subnet in which services will be created within the OpenShift Origin SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access, or the installation will fail. Defaults to 172.30.0.0/16, and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, which the docker0 network bridge uses by default, or modify the docker0 network. |
openshift_master_default_subdomain | This variable overrides the default subdomain to use for exposed routes. |
openshift_node_proxy_mode | This variable specifies the service proxy mode to use: either iptables for the default, pure-iptables implementation, or userspace for the user space proxy. |
osm_default_node_selector | This variable overrides the node selector that projects will use by default when placing pods. |
osm_cluster_network_cidr | This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to 10.128.0.0/14 and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in the SDN master configuration. |
osm_host_subnet_length | This variable specifies the size of the per host subnet allocated for pod IPs by the OpenShift Origin SDN. Defaults to 9, which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this allocates 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be re-configured after deployment. |
openshift_use_flannel | This variable enables flannel as an alternative networking layer instead of the default SDN. If enabling flannel, disable the default SDN with the openshift_use_openshift_sdn variable. For more information, see Using Flannel. |
openshift_docker_additional_registries | OpenShift Origin adds the specified additional registry or registries to the Docker configuration. |
openshift_docker_insecure_registries | OpenShift Origin adds the specified additional insecure registry or registries to the Docker configuration. |
openshift_docker_blocked_registries | OpenShift Origin adds the specified blocked registry or registries to the Docker configuration. |
openshift_hosted_metrics_public_url | This variable sets the host name for integration with the metrics console. The default is https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics. |
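For instance, assuming the upstream openshift-ansible variable names shown in the table above, a [OSEv3:vars] snippet that keeps the default service and cluster networks but switches to the multitenant SDN plug-in might look like the following sketch (all values are the documented defaults except the plug-in name):

[OSEv3:vars]
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_portal_net=172.30.0.0/16
osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9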
If you are using an image registry other than the default at registry.access.redhat.com, specify the desired registry within the /etc/ansible/hosts file.
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Variable | Purpose |
---|---|
oreg_url | Set to the alternate image location. Necessary if you are not using the default registry at registry.access.redhat.com. |
openshift_examples_modify_imagestreams | Set to true if pointing to a registry other than the default. Modifies the image streams to resolve images from the registry specified by oreg_url. |
If your hosts require use of an HTTP or HTTPS proxy in order to connect to external hosts, there are many components that must be configured to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, which requires no external access, and therefore do not need to be configured to use a proxy.
In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.
See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds.
Variable | Purpose |
---|---|
openshift_http_proxy | This variable specifies the HTTP_PROXY environment variable for masters and the Docker daemon. |
openshift_https_proxy | This variable specifies the HTTPS_PROXY environment variable for masters and the Docker daemon. |
openshift_no_proxy | This variable is used to set the NO_PROXY environment variable for masters and the Docker daemon. |
openshift_generate_no_proxy_hosts | This boolean variable specifies whether or not the names of all defined OpenShift hosts and *.cluster.local are automatically appended to the NO_PROXY list. Defaults to true; set it to false to override this option. |
openshift_builddefaults_http_proxy | This variable defines the HTTP_PROXY environment variable inserted into builds using the BuildDefaults admission controller. |
openshift_builddefaults_https_proxy | This variable defines the HTTPS_PROXY environment variable inserted into builds using the BuildDefaults admission controller. |
openshift_builddefaults_no_proxy | This variable defines the NO_PROXY environment variable inserted into builds using the BuildDefaults admission controller. |
openshift_builddefaults_git_http_proxy | This variable defines the HTTP proxy used by git clone operations during a build. |
openshift_builddefaults_git_https_proxy | This variable defines the HTTPS proxy used by git clone operations during a build. |
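As a sketch only, a [OSEv3:vars] snippet that routes master, Docker, and build traffic through a corporate proxy might look like the following; the proxy URL and internal domain are hypothetical placeholders:

[OSEv3:vars]
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com'
openshift_generate_no_proxy_hosts=true
openshift_builddefaults_http_proxy=http://proxy.example.com:3128
openshift_builddefaults_https_proxy=http://proxy.example.com:3128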
You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra (discussed below), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.
To assign labels to a node host during an Ansible install, use the openshift_node_labels variable with the desired labels added to the desired node host entry in the [nodes] section. In the following example, labels are set for a region called primary and a zone called east:
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
The openshift_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default:
# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'
The default router and registry will be automatically deployed if nodes exist that match the selector settings above. For example:
[nodes]
node1.example.com openshift_node_labels="{'region':'infra','zone':'default'}"
The registry and router are only able to run on node hosts with the region=infra label. Ensure that at least one node host in your OpenShift Origin environment has the region=infra label.
Any hosts you designate as masters during the installation process should also be configured as nodes by adding them to the [nodes] section so that the masters are configured as part of the OpenShift Origin SDN.
However, to ensure that your masters are not burdened with running pods, you can make them unschedulable by adding the openshift_schedulable=false option to any node host that is also a master. For example:
[nodes]
master.example.com openshift_node_labels="{'region':'infra','zone':'default'}" openshift_schedulable=false
Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.
You can set the session name and maximum number of seconds with openshift_master_session_name and openshift_master_session_max_seconds:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.
For openshift_master_session_auth_secrets, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_session_encryption_secrets, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
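Because the two lists must be of equal length, it can help to see them set together. The following sketch sets one authentication secret and one encryption secret; the 32-character values are placeholders only and must be replaced with your own generated secrets:

openshift_master_session_auth_secrets=['REPLACE+WITH+A+32+BYTE+SECRET+00']
openshift_master_session_encryption_secrets=['REPLACE+WITH+A+32+BYTE+SECRET+00']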
Custom serving certificates for the public host names of the OpenShift Origin API and web console can be deployed during an advanced installation and are configurable in the inventory file.
Custom certificates should only be configured for the host name associated with the publicMasterURL, which can be set using openshift_master_cluster_public_hostname. Using a custom serving certificate for the host name associated with the masterURL (openshift_master_cluster_hostname) results in TLS errors, because infrastructure components attempt to contact the master API using the internal masterURL host.
Certificate and key file paths can be configured using the openshift_master_named_certificates cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate's Common Name and Subject Alternative Names. Detected names can be overridden by providing the "names" key when setting openshift_master_named_certificates:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]
Certificates configured using openshift_master_named_certificates are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.
If you would like openshift_master_named_certificates to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, you could set the following:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true
Cluster metrics are not set to automatically deploy by default. Set the following to enable cluster metrics when using the advanced install:
[OSEv3:vars]
openshift_hosted_metrics_deploy=true
The openshift_metrics_cassandra_storage_type variable must be set in order to use persistent storage. If openshift_metrics_cassandra_storage_type is not set, then cluster metrics data is stored in an emptyDir volume, which will be deleted when the Cassandra pod terminates.
There are three options for enabling cluster metrics storage when using the advanced install:
Option A - NFS Host Group
When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/metrics:
[OSEv3:vars]
openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_nfs_directory=/exports
openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_metrics_storage_volume_name=metrics
openshift_hosted_metrics_storage_volume_size=10Gi
Option B - External NFS Host
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]
openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_host=nfs.example.com
openshift_hosted_metrics_storage_nfs_directory=/exports
openshift_hosted_metrics_storage_volume_name=metrics
openshift_hosted_metrics_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/metrics.
Option C - Dynamic
Use the following variable if your OpenShift Origin environment supports dynamic volume provisioning for your cloud platform:
[OSEv3:vars]
#openshift_metrics_cassandra_storage_type=dynamic
Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging when using the advanced installation method:
[OSEv3:vars]
openshift_hosted_logging_deploy=true
The openshift_hosted_logging_storage_kind variable must be set in order to use persistent storage. If openshift_hosted_logging_storage_kind is not set, then cluster logging data is stored in an emptyDir volume, which will be deleted when the Elasticsearch pod terminates.
There are three options for enabling cluster logging storage when using the advanced install:
Option A - NFS Host Group
When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/logging:
[OSEv3:vars]
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
Option B - External NFS Host
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_host=nfs.example.com
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/logging.
Option C - Dynamic
Use the following variable if your OpenShift Origin environment supports dynamic volume provisioning for your cloud platform:
[OSEv3:vars]
#openshift_hosted_logging_storage_kind=dynamic
You can configure an environment with a single master and multiple nodes, and either a single or multiple number of external etcd hosts.
Moving from a single master cluster to multiple masters after installation is not supported.
Single Master and Multiple Nodes
The following table describes an example environment for a single master (with etcd on the same host) and two nodes:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
master.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
You can see these example hosts present in the [masters] and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, and two nodes:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift Origin's embedded etcd is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Moving from a single master cluster to multiple masters after installation is not supported.
When configuring multiple masters, the advanced installation supports the following high availability (HA) method:
Method | Description |
---|---|
native | Leverages the native HA master capabilities built into OpenShift Origin and can be combined with any load balancing solution. If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured a load balancing solution of your choice to balance the master API (port 8443) on all master hosts. |
For your pre-configured load balancing solution, you must have:
A pre-created load balancer VIP configured for SSL passthrough.
A domain name for the VIP registered in DNS.
The domain name becomes the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the OpenShift Origin installer.
See External Load Balancer Integrations for more information.
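For example, if a hypothetical DNS name of openshift-cluster.example.com is registered for the VIP, the corresponding cluster variables would look like the following (this mirrors the multiple masters example inventory later in this topic):

openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com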
For more on the high availability master architecture, see Kubernetes Infrastructure.
Note the following when using the native HA method:
The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.
In an HAProxy setup, controller manager servers run as standalone processes. They elect their active leader with a lease stored in etcd. The lease expires after 30 seconds by default. If a failure happens on an active controller server, it takes up to this number of seconds to elect another leader. The interval can be configured with the osm_controller_lease_ttl variable.
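For example, the lease TTL can be set explicitly in the [OSEv3:vars] section of the inventory; the value shown simply restates the 30-second default, so adjust it to suit your failover requirements:

[OSEv3:vars]
osm_controller_lease_ttl=30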
To configure multiple masters, refer to the following section.
Multiple Masters with Multiple etcd, and Using Native HA
The following describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native HA method:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node |
master2.example.com | Master (clustered using native HA) and node |
master3.example.com | Master (clustered using native HA) and node |
lb.example.com | HAProxy to load balance API master endpoints |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift Origin's embedded etcd is not supported.
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Multiple Masters with Master and etcd on the Same Host, and Using Native HA
The following describes an example environment for three masters with etcd on each host, one HAProxy load balancer, and two nodes using the native HA method:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node, with etcd on each host |
master2.example.com | Master (clustered using native HA) and node, with etcd on each host |
master3.example.com | Master (clustered using native HA) and node, with etcd on each host |
lb.example.com | HAProxy to load balance API master endpoints |
node1.example.com | Node |
node2.example.com | Node |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you can run the advanced installation using the following playbook:
# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml
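If your inventory file is kept somewhere other than /etc/ansible/hosts, you can point the playbook at it with Ansible's standard -i option (the same option the uninstall examples later in this topic use); the path below is a placeholder:

# ansible-playbook -i /path/to/inventory \
    ~/openshift-ansible/playbooks/byo/config.yml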
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                 STATUS                     AGE
master.example.com   Ready,SchedulingDisabled   165d
node1.example.com    Ready                      165d
node2.example.com    Ready                      165d
To verify that the web console is installed correctly, use the master host name and the console port number to access the console with a web browser.
For example, for a master host with a hostname of master.openshift.com and using the default port of 8443, the web console would be found at:
https://master.openshift.com:8443/console
Now that the install has been verified, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:
# atomic-openshift-excluder exclude
The default port for the console is 8443. If the port was changed during installation, use that port instead.
Multiple etcd Hosts
If you installed multiple etcd hosts:
On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
Multiple Masters Using HAProxy
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
You can uninstall OpenShift Origin hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Origin content installed by Ansible, including:
Configuration
Containers
Default templates and image streams
Images
RPM packages
The playbook will delete content for any hosts defined in the inventory file that you specify when running the playbook. If you want to uninstall OpenShift Origin across all hosts in your cluster, run the playbook using the inventory file you used when installing OpenShift Origin initially or ran most recently:
# ansible-playbook [-i /path/to/file] \
    ~/openshift-ansible/playbooks/adhoc/uninstall.yml
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.
First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes (1)

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" (2)
1 | Only include the sections that pertain to the hosts you are interested in uninstalling. |
2 | Only include hosts that you want to uninstall. |
Specify that new inventory file using the -i option when running the uninstall.yml playbook:
# ansible-playbook -i /path/to/new/file \
    ~/openshift-ansible/playbooks/adhoc/uninstall.yml
When the playbook completes, all OpenShift Origin content should be removed from any specified hosts.
The following are known issues for specified installation configurations.
Multiple Masters
On failover, it is possible for the controller manager to overcorrect, which causes the system to run more pods than what was intended. However, this is a transient event and the system does correct itself over time. See https://github.com/kubernetes/kubernetes/issues/10030 for details.
On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, run the following on all hosts:
# yum -y remove openshift openshift-* etcd docker docker-common
# rm -rf /etc/origin /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
Now that you have a working OpenShift Origin instance, you can:
Configure authentication; by default, authentication is set to Allow All.
Deploy an integrated Docker registry.
Deploy a router.
Populate your OpenShift Origin installation with a useful set of Red Hat-provided image streams and templates.