By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited by specifying resource limits in a project’s default container limits.
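Project default container limits are typically set with a LimitRange object. The following is only a minimal sketch, with an illustrative name and values rather than settings taken from this example:
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "build-limits" # illustrative name
spec:
  limits:
  - type: "Container"
    default: # default limits applied to containers that do not set their own, including build pods
      cpu: "500m"
      memory: "512Mi"
    defaultRequest: # default requests applied when none are specified
      cpu: "100m"
      memory: "256Mi"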
You can also limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources, cpu, and memory parameters is optional:
apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  resources:
    limits:
      cpu: "100m" (1)
      memory: "256Mi" (2)
(1) cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
(2) memory is in bytes: 256Mi represents 268435456 bytes (256 * 2^20).
However, if a quota has been defined for your project, one of the following two items is required:
A resources section set with an explicit requests:
resources:
  requests: (1)
    cpu: "100m"
    memory: "256Mi"
(1) The requests object contains the list of resources that correspond to the list of resources in the quota.
A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process.
Otherwise, build pod creation will fail, citing a failure to satisfy quota.
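For reference, a project quota that would make this requirement apply might look like the following minimal sketch (the name and values are illustrative):
apiVersion: "v1"
kind: "ResourceQuota"
metadata:
  name: "build-quota" # illustrative name
spec:
  hard:
    cpu: "2"      # total CPU requests allowed across all pods in the project
    memory: "2Gi" # total memory requests allowed across all pods in the project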
When defining a BuildConfig, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced.
The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Origin.
The following example shows the part of a BuildConfig specifying the completionDeadlineSeconds field for 30 minutes:
spec:
  completionDeadlineSeconds: 1800
Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector field of a build configuration. The nodeSelector value is a set of key/value pairs that are matched to node labels when scheduling the build pod.
apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  nodeSelector: (1)
    key1: value1
    key2: value2
(1) Builds associated with this build configuration will run only on nodes with the key1=value1 and key2=value2 labels.
The nodeSelector value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key/value pairs for the nodeSelector and also does not define an explicitly empty map value of nodeSelector: {}. Override values will replace values in the build configuration on a key-by-key basis.
See Configuring Global Build Defaults and Overrides for more information.
If the specified nodeSelector cannot be matched to a node with those labels, the build still stays in the Pending state indefinitely.
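As a rough sketch of how those cluster-wide values are configured (the field names follow the BuildDefaults and BuildOverrides admission plug-in configuration; consult Configuring Global Build Defaults and Overrides for the authoritative schema), the master configuration might contain:
admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        kind: BuildDefaultsConfig
        nodeSelector: # applied only when the build configuration defines no nodeSelector
          key1: value1
    BuildOverrides:
      configuration:
        apiVersion: v1
        kind: BuildOverridesConfig
        nodeSelector: # replaces build configuration values on a key-by-key basis
          key1: forcedvalue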
For compiled languages (Go, C, C++, Java, etc.), including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited.
To avoid these problems, two builds can be chained together: one that produces the compiled artifact, and a second build that places that artifact in a separate image that runs the artifact.
In the following example, a Source-to-Image build is combined with a Docker build to compile an artifact that is then placed in a separate runtime image.
Although this example chains a Source-to-Image build and a Docker build, the first build can use any strategy that will produce an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image.
The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image image stream. The path of the output artifact depends on the assemble script of the Source-to-Image builder used. In this case, it is output to /wildfly/standalone/deployments/ROOT.war.
apiVersion: v1
kind: BuildConfig
metadata:
  name: artifact-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: artifact-image:latest
  source:
    git:
      uri: https://github.com/openshift/openshift-jee-sample.git
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly:10.1
        namespace: openshift
    type: Source
The second build uses Image Source with a path to the WAR file inside the output image from the first build. An inline Dockerfile copies that WAR file into a runtime image.
apiVersion: v1
kind: BuildConfig
metadata:
  name: image-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: image-build:latest
  source:
    type: Dockerfile
    dockerfile: |-
      FROM jee-runtime:latest
      COPY ROOT.war /deployments/ROOT.war
    images:
    - from: (1)
        kind: ImageStreamTag
        name: artifact-image:latest
      paths: (2)
      - sourcePath: /wildfly/standalone/deployments/ROOT.war
        destinationDir: "."
  strategy:
    dockerStrategy:
      from: (3)
        kind: ImageStreamTag
        name: jee-runtime:latest
    type: Docker
  triggers:
  - imageChange: {}
    type: ImageChange
(1) from specifies that the Docker build should include the output of the image from the artifact-image image stream, which was the target of the previous build.
(2) paths specifies which paths from the target image to include in the current Docker build.
(3) The runtime image is used as the source image for the Docker build.
The result of this setup is that the output image of the second build does not need to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages.
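Both build configurations assume that the referenced image streams already exist in the project. As a minimal sketch, the artifact-image image stream could be defined as follows (image-build and jee-runtime would be defined the same way):
apiVersion: v1
kind: ImageStream
metadata:
  name: artifact-image # receives the output image of the first build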