The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Origin environment.
The system requirements vary per host type:
(Table of minimum hardware requirements per host type: masters, nodes, and external etcd nodes.)
|
OpenShift Origin only supports servers with x86_64 architecture.
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.
Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:
In a highly available OpenShift Origin cluster with external etcd, a master host should have, in addition to the defaults in the table above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Origin cluster of 2000 pods would be 2 CPU cores and 3 GB of RAM on top of the minimum requirements for a master host of 2 CPU cores and 16 GB of RAM.
When planning an environment with multiple masters, a minimum of three etcd hosts, as well as a load balancer between the master hosts, is required.
The OpenShift Origin master caches deserialized versions of resources aggressively to ease CPU load. However, in smaller clusters of less than 1000 pods, this cache can waste a lot of memory for negligible CPU load reduction. The default cache size is 50000 entries, which, depending on the size of your resources, can grow to occupy 1 to 2 GB of memory. The cache size can be reduced with the following setting in /etc/origin/master/master-config.yaml:
kubernetesMasterConfig:
  apiServerArguments:
    deserialization-cache-size:
    - "1000"
The size of a node host depends on the expected size of its workload. As an OpenShift Origin cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that node host failure does not affect your maximum capacity.
Oversubscribing the physical resources on a node affects the resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
By default, OpenShift Origin masters and nodes use all available cores in the
system they run on. You can choose the number of cores you want OpenShift Origin
to use by setting the GOMAXPROCS
environment
variable.
For example, run the following before starting the server to make OpenShift Origin only run on one core:
# export GOMAXPROCS=1
Alternatively, if you plan to
run
OpenShift in a container, add -e GOMAXPROCS=1
to the docker run
command when launching the server.
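For example, a minimal sketch of such an invocation; the openshift/origin image name and the start command are illustrative, and other docker run options your deployment needs are elided:

# docker run -e GOMAXPROCS=1 <other_options> openshift/origin start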
Security-Enhanced Linux (SELinux) must be enabled on all of the servers before
installing OpenShift Origin or the installer will fail. Also, configure
SELINUXTYPE=targeted
in the /etc/selinux/config file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
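To confirm that SELinux is enforcing the targeted policy before you install, you can check with getenforce and sestatus; the output shown is what a correctly configured host reports:

$ getenforce
Enforcing
$ sestatus | grep 'Loaded policy name'
Loaded policy name:             targeted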
You must enable Network Time Protocol (NTP) to prevent masters and nodes in the
cluster from going out of sync. Set openshift_clock_enabled
to true
in the
Ansible playbook to enable NTP on masters and nodes in the cluster during
Ansible installation.
# openshift_clock_enabled=true
OpenShift Origin runs
containers on your hosts, and in some cases, such as build operations and the
registry service, it does so using privileged containers. Furthermore, those
containers access your host’s Docker daemon and perform docker build
and
docker push
operations. As such, you should be aware of the inherent security
risks associated with performing docker run
operations on arbitrary images as
they effectively have root access.
To address these risks, OpenShift Origin uses security context constraints, which control the actions that pods can perform and what they have the ability to access.
The following section defines the requirements of the environment containing your OpenShift Origin configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.
OpenShift Origin requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software that can provide name resolution to hosts and containers running on the platform.
Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.
Key components of OpenShift Origin run themselves inside of containers and use the following process for name resolution:
By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
OpenShift Origin then inserts one DNS value into the pods
(above the node’s nameserver values). That value is defined in the
/etc/origin/node/node-config.yaml file by the dnsIP
parameter, which by
default is set to the address of the host node because the host is using
dnsmasq.
If the dnsIP
parameter is omitted from the node-config.yaml
file, then the value defaults to the kubernetes service IP, which is the first
nameserver in the pod’s /etc/resolv.conf file.
As of OpenShift Origin 1.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.
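To confirm that dnsmasq is the process listening on port 53 on a node, a quick check with ss can be used; the output below is abbreviated and the PID is illustrative:

$ ss -tulnp | grep ':53 '
udp  UNCONN  0  0  0.0.0.0:53  0.0.0.0:*  users:(("dnsmasq",pid=1234,fd=4))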
NetworkManager is required on the nodes in order to populate dnsmasq with the DNS IP addresses.
The following is an example set of DNS records for the Single Master and Multiple Nodes scenario:
master    A    10.64.33.100
node1     A    10.64.33.101
node2     A    10.64.33.102
If you do not have a properly functioning DNS environment, you could experience failure with:
Product installation via the reference Ansible-based scripts
Deployment of the infrastructure containers (registry, routers)
Access to the OpenShift Origin web console, because it is not accessible via IP address alone
Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:
Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager, as in the nmcli sketch after this list.
Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can set a value for dnsIP in the node-config.yaml file, which is prepended to the pod's resolv.conf file. The second nameserver is then defined by the host's first nameserver. By default, this is the IP address of the node host.
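A minimal sketch of the static case using nmcli; the connection name eth0, the node address, the gateway, and the DNS server address (taken from the examples below) are all illustrative:

# nmcli con mod eth0 ipv4.method manual ipv4.addresses 10.64.33.101/24 ipv4.gateway 10.64.33.254 ipv4.dns 10.64.33.1
# nmcli con up eth0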
For most configurations, do not set the openshift_dns_ip option, which overrides the default value of dnsIP. Instead, allow the installer to configure each node to use dnsmasq and forward requests to SkyDNS or the external DNS provider. If you do set the openshift_dns_ip option, it should be set either to a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
To check that hosts are correctly configured to resolve host names using your DNS server:
Check the contents of /etc/resolv.conf:
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 10.64.33.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
In this example, 10.64.33.1 is the address of our DNS server.
Test that the DNS servers listed in /etc/resolv.conf are able to resolve the addresses of all the masters and nodes in your OpenShift Origin environment:
$ dig <node_hostname> @<IP_address> +short
For example:
$ dig master.example.com @10.64.33.1 +short
10.64.33.100
$ dig node1.example.com @10.64.33.1 +short
10.64.33.101
If you want to disable dnsmasq (for example, if your /etc/resolv.conf is
managed by a configuration tool other than NetworkManager), then set
openshift_use_dnsmasq
to false in the Ansible playbook.
However, certain containers do not properly move to the next nameserver when the first one returns SERVFAIL. Red Hat Enterprise Linux (RHEL)-based containers are not affected, but certain versions of uclibc and musl are.
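To disable it, set the variable in the Ansible inventory, in the same style as the clock setting shown earlier:

# openshift_use_dnsmasq=false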
Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Origin router.
For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:
*.cloudapps.example.com. 300 IN A 192.168.133.2
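To verify the wildcard record, any name under the domain should resolve to the router's IP. The host name foo is arbitrary, and this assumes the DNS server at 10.64.33.1 from the earlier examples serves the zone:

$ dig foo.cloudapps.example.com @10.64.33.1 +short
192.168.133.2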
In almost all cases, when referencing VMs you must use host names, and the host
names that you use must match the output of the hostname -f
command on each
node.
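For example, on the host that the node1 DNS record points to, the following output would be expected:

$ hostname -f
node1.example.com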
In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Origin may fail to resolve host names properly.
A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure it using an FQDN, the FQDN should resolve on all nodes.
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required.
The OpenShift Origin installation automatically creates a set of internal
firewall rules on each host using iptables. However, if your network
configuration uses an external firewall, such as a hardware-based firewall, you
must ensure infrastructure components can communicate with each other through
specific ports that act as communication endpoints for certain processes or
services.
Ensure the following ports required by OpenShift Origin are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.
Node to Node

Port | Protocol | Purpose
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts.
Nodes to Master

Port | Protocol | Purpose
---|---|---|
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 1.2 or environments upgraded to 1.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
4789 | UDP | Required for SDN communication between pods on separate hosts.
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on.
Master to Nodes

Port | Protocol | Purpose
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts.
10250 | TCP | The master proxies to node hosts via the Kubelet for oc commands.
In the following table, (L) indicates the marked port is also used in loopback mode, enabling the master to communicate with itself. In a single-master cluster, ports marked (L) must be open, and ports not marked (L) are not required to be open. In a multiple-master cluster, all the listed ports must be open.
Master to Master

Port | Protocol | Purpose
---|---|---|
53 (L) or 8053 (L) | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 1.2 or environments upgraded to 1.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
2049 (L) | TCP/UDP | Required when provisioning an NFS host as part of the installer.
2379 | TCP | Used for standalone etcd (clustered) to accept changes in state.
2380 | TCP | etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered).
4001 (L) | TCP | Used for embedded etcd (non-clustered) to accept changes in state.
4789 (L) | UDP | Required for SDN communication between pods on separate hosts.
External to Load Balancer

Port | Protocol | Purpose
---|---|---|
9000 | TCP | If you choose the native HA method, optional to allow access to the haproxy statistics page.
External to Master

Port | Protocol | Purpose
---|---|---|
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on.
IaaS Deployments

Port | Protocol | Purpose
---|---|---|
22 | TCP | Required for SSH by the installer or system administrator.
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 1.2 or environments upgraded to 1.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts.
80 or 443 | TCP | For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router.
1936 | TCP | For router statistics use. Required to be open when running the template router to access statistics. Can be open externally or internally, depending on whether you want the statistics to be public.
4001 | TCP | For embedded etcd (non-clustered) use. Only required to be internally open on the master host. 4001 is for server-client connections.
2379 and 2380 | TCP | For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd.
4789 | UDP | For VxLAN use (OpenShift SDN). Required only internally on node hosts.
8443 | TCP | For use by the OpenShift Origin web console, shared with the API server.
10250 | TCP | For use by the Kubelet. Required to be externally open on nodes.
Notes
In the above examples, port 4789 is used for User Datagram Protocol (UDP).
When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
OpenShift Origin internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.
Aggregated Logging

Port | Protocol | Purpose
---|---|---|
9200 | TCP | For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using oc expose.
9300 | TCP | For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster may communicate with each other.
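To spot-check that a required port is reachable between hosts, a utility such as nc (from the nmap-ncat or netcat package) can be used; the host name and port here are illustrative, and a success message is printed if the port is open:

$ nc -zv master.example.com 8443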
The Kubernetes persistent volume framework allows you to provision an OpenShift Origin cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Origin installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Origin cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
There are certain aspects to take into consideration if installing OpenShift Origin on a cloud provider.
When installing on AWS or OpenStack, ensure that you set up the appropriate security groups. Without the following ports configured in your security groups, the installation will fail. You may need more depending on the cluster configuration you want to install. For more information and to adjust your security groups accordingly, see Required Ports.
Security Group | Required Ports
---|---|
All OpenShift Origin Hosts | tcp/22 from the host running the installer/Ansible
etcd Security Group | tcp/2379 from masters, tcp/2380 from etcd hosts
Master Security Group | tcp/8443 from 0.0.0.0/0, tcp/53 and udp/53 from all OpenShift Origin hosts, tcp/8053 and udp/8053 from all OpenShift Origin hosts
Node Security Group | tcp/10250 from masters, udp/4789 from nodes
Infrastructure Nodes (ones that can host the OpenShift Origin router) | tcp/80 and tcp/443 from 0.0.0.0/0
If configuring ELBs for load balancing the masters and/or routers, you also need to configure Ingress and Egress security groups for the ELBs appropriately.
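For example, a sketch of opening the master API port in an AWS security group with the AWS CLI; the group ID is a placeholder and the port and CIDR are illustrative:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8443 --cidr 0.0.0.0/0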
Some deployments require that the user override the detected host names and IP
addresses for the hosts. To see the default values, run the openshift_facts
playbook:
# ansible-playbook playbooks/byo/openshift_facts.yml
Now, verify the detected common settings. If they are not what you expect them to be, you can override them.
The Advanced Installation topic discusses the available Ansible variables in greater detail.
Variable | Usage
---|---|
openshift_hostname | Overrides the internal unique host name for the system.
openshift_ip | Overrides the internal IP address for the system.
openshift_public_hostname | Overrides the system's public host name.
openshift_public_ip | Overrides the system's public IP address.
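For example, overrides can be set per host in the Ansible inventory; the host name and values here are illustrative:

master.example.com openshift_public_hostname=openshift.example.com openshift_public_ip=203.0.113.10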
In AWS, situations that require overriding the variables include:
Variable | Usage
---|---|
openshift_hostname | The user is installing in a VPC that is not configured for both DNS host names and DNS resolution.
openshift_ip | Possibly if they have multiple network interfaces configured and they want to use one other than the default. You must first set openshift_set_node_ip to true.
openshift_public_hostname | A master instance where the VPC subnet is not configured for Auto-assign Public IP. For external access to this master, you need to have an ELB or other load balancer configured that would provide the external access needed, or you need to connect over a VPN connection to the internal name of the host.
openshift_public_ip | A master instance where the VPC subnet is not configured for Auto-assign Public IP.
If openshift_hostname is set to something other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.
EC2 hosts in particular must be deployed in a VPC that has both DNS host names and DNS resolution enabled, and openshift_hostname should not be overridden.