To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by all underlying node components (e.g., kubelet, kube-proxy, Docker) and the remaining system components (e.g., sshd, NetworkManager) on the host. Once specified, the scheduler has more information about the resources (e.g., memory, CPU) a node has allocated for pods.
Resources reserved for node components are based on two node settings:
Setting | Description |
---|---|
kube-reserved | Resources reserved for node components. Default is none. |
system-reserved | Resources reserved for the remaining system components. Default is none. |
You can set these in the kubeletArguments section of the node configuration file (the /etc/origin/node/node-config.yaml file by default) using a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=30G). Add the section if it does not already exist:
kubeletArguments:
  kube-reserved:
    - "cpu=200m,memory=30G"
  system-reserved:
    - "cpu=200m,memory=30G"
Currently, the cpu and memory resource types are supported. For cpu, the resource quantity is specified in units of cores (e.g., 200m, 0.5, 1). For memory, it is specified in units of bytes (e.g., 200Ki, 100M, 50Gi). See Compute Resources for more details.
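Different but equivalent quantity notations are accepted. The values below are illustrative only (they are not the reservations from the example above) and simply show the alternative spellings:

kubeletArguments:
  kube-reserved:
    - "cpu=500m,memory=1Gi"
  system-reserved:
    - "cpu=0.5,memory=1G"

Here cpu=500m and cpu=0.5 both reserve half a core, while memory=1Gi is a binary (1024-based) quantity and memory=1G is a decimal (1000-based) quantity.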
If a flag is not set, it defaults to 0. If none of the flags are set, the allocated resource is set to the node’s capacity as it was before the introduction of allocatable resources.
The allocatable amount of a resource is computed based on the following formula:
[Allocatable] = [Node Capacity] - [kube-reserved] - [system-reserved]
If [Allocatable] is negative, it is set to 0.
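For example, with the reservations shown above (cpu=200m for both kube-reserved and system-reserved) on a node whose capacity is 4 CPU cores (an illustrative figure), the scheduler would see:

[Allocatable cpu] = 4 - 200m - 200m = 3.6 cores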
To see a node’s current capacity and allocatable resources, you can run:
$ oc get node/<node_name> -o yaml
...
status:
...
  allocatable:
    cpu: "4"
    memory: 8010948Ki
    pods: "110"
  capacity:
    cpu: "4"
    memory: 8010948Ki
    pods: "110"
...
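If you only need the allocatable values, JSONPath output can narrow the result. This is an optional convenience, assuming your oc client supports the jsonpath output format:

$ oc get node/<node_name> -o jsonpath='{.status.allocatable}'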
Starting with OpenShift Origin 1.3, each node reports the system resources utilized by the container runtime and kubelet. To help you configure --system-reserved and --kube-reserved, you can introspect the corresponding node's resource usage using the node summary API, which is accessible at <master>/api/v1/nodes/<node>/proxy/stats/summary. For instance, to access the resources from the cluster.node22 node, you can run:
$ curl <certificate details> https://<master>/api/v1/nodes/cluster.node22/proxy/stats/summary
{
  "node": {
    "nodeName": "cluster.node22",
    "systemContainers": [
      {
        "cpu": {
          "usageCoreNanoSeconds": 929684480915,
          "usageNanoCores": 190998084
        },
        "memory": {
          "rssBytes": 176726016,
          "usageBytes": 1397895168,
          "workingSetBytes": 1050509312
        },
        "name": "kubelet"
      },
      {
        "cpu": {
          "usageCoreNanoSeconds": 128521955903,
          "usageNanoCores": 5928600
        },
        "memory": {
          "rssBytes": 35958784,
          "usageBytes": 129671168,
          "workingSetBytes": 102416384
        },
        "name": "runtime"
      }
    ]
  }
}
See REST API Overview for more information on certificate details.
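To pull out just the per-component memory usage from that response, you can pipe the output through a JSON processor such as jq. This is an optional step and assumes jq is installed on the host from which you run curl:

$ curl <certificate details> https://<master>/api/v1/nodes/cluster.node22/proxy/stats/summary \
    | jq '.node.systemContainers[] | {name: .name, workingSetBytes: .memory.workingSetBytes}'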