Difficulty: beginner
Estimated Time: 30 minutes

Operator Lifecycle Manager

The Operator Lifecycle Manager project is a component of the Operator Framework, an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way.

OLM extends Kubernetes to provide a declarative way to install, manage, and upgrade operators and their dependencies in a cluster.
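
To give a sense of what "declarative" means here, the sketch below shows roughly what a Subscription manifest looks like. It is illustrative only: the operator name and channel are placeholders, and the exact API group and field names can differ between OLM releases (this scenario installs 0.7.2, while the fields shown match more recent versions).

apiVersion: operators.coreos.com/v1alpha1    # API group may differ in older OLM releases
kind: Subscription
metadata:
  name: my-operator                          # placeholder operator name
  namespace: openshift-operator-lifecycle-manager
spec:
  channel: stable                            # update channel published by the catalog
  name: my-operator                          # package name as listed in the CatalogSource
  source: rh-operators                       # the CatalogSource created later in this scenario
  sourceNamespace: openshift-operator-lifecycle-manager
  installPlanApproval: Automatic             # or Manual, to require approval before upgrades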

It also enforces some constraints on the components it manages in order to ensure a good user experience.

OLM enables users to do the following:

  • Define applications as a single Kubernetes resource that encapsulates requirements and metadata.
  • Install applications automatically with dependency resolution or manually with nothing but kubectl.
  • Upgrade applications automatically with different approval policies.

For more information, check out the links below:

GitHub

Chat

Operator Lifecycle Manager

Step 1 of 5

Setting Up OLM

The Operator Lifecycle Manager is not installed in our current Katacoda OpenShift environment. We will now install it from scratch by deploying the following objects:

  • CustomResourceDefinitions:
    • Subscription, InstallPlan, CatalogSource, ClusterServiceVersion
  • Namespace:
    • openshift-operator-lifecycle-manager
  • Service Account:
    • olm-operator-serviceaccount
  • ClusterRole:
    • system:controller:operator-lifecycle-manager
  • ClusterRoleBinding:
    • olm-operator-binding-openshift-operator-lifecycle-manager
  • CatalogSource:
    • rh-operators
  • ConfigMap:
    • rh-operators
  • Deployments:
    • olm-operator, catalog-operator, package-server

Note: The initial setup of the Operator Lifecycle Manager (OLM) is a one-time task reserved for Kubernetes administrators with cluster-admin privileges. Once OLM is properly set up, Kubernetes administrators can then delegate Operator install privileges to non-admin Kubernetes users.
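
As a rough illustration of that delegation, an administrator could grant a non-admin user access to the OLM resources in a single namespace using standard Kubernetes RBAC. The names below (operator-installer, myproject, developer) are hypothetical, and the resource and API group names depend on the CRDs your OLM version installs:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operator-installer            # hypothetical Role name
  namespace: myproject                # hypothetical target namespace
rules:
- apiGroups: ["operators.coreos.com"] # adjust to the API group of your OLM CRDs
  resources: ["subscriptions", "installplans", "clusterserviceversions"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operator-installer-binding
  namespace: myproject
subjects:
- kind: User
  name: developer                     # hypothetical non-admin user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: operator-installer
  apiGroup: rbac.authorization.k8s.io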

Let's get started by cloning the official OLM repository:

git clone https://github.com/operator-framework/operator-lifecycle-manager
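
If you would like to see exactly what will be applied, you can list the 0.7.2 manifest directory used by all of the commands in this step:

ls operator-lifecycle-manager/deploy/ocp/manifests/0.7.2/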


Create the dedicated openshift-operator-lifecycle-manager Namespace:

oc create -f operator-lifecycle-manager/deploy/ocp/manifests/0.7.2/0000_30_00-namespace.yaml


Verify the Namespace was successfully created:

oc get namespaces openshift-operator-lifecycle-manager


Create the olm-operator-serviceaccount Service Account, system:controller:operator-lifecycle-manager ClusterRole, and olm-operator-binding-openshift-operator-lifecycle-manager ClusterRoleBinding:

oc create -f operator-lifecycle-manager/deploy/ocp/manifests/0.7.2/0000_30_01-olm-operator.serviceaccount.yaml


Verify the Service Account, ClusterRole, and ClusterRoleBinding were successfully created:

oc -n openshift-operator-lifecycle-manager get serviceaccount olm-operator-serviceaccount
oc get clusterrole system:controller:operator-lifecycle-manager
oc get clusterrolebinding olm-operator-binding-openshift-operator-lifecycle-manager
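
Optionally, describe the ClusterRoleBinding to confirm that it ties the olm-operator-serviceaccount Service Account to the system:controller:operator-lifecycle-manager ClusterRole:

oc describe clusterrolebinding olm-operator-binding-openshift-operator-lifecycle-manager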

Create the OLM Custom Resource Definitions (Subscription, InstallPlan, CatalogSource, ClusterServiceVersion):

for num in {02..05}; do oc create -f operator-lifecycle-manager/deploy/ocp/manifests/0.7.2/0000_30_$num*; done


Verify all four OLM CRDs are present:

oc get crds
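
The full CRD list can be long, so you may prefer to filter it down to the four OLM types. The exact CRD names vary between OLM releases, which is why the filter below matches on the kind names rather than on full resource names:

oc get crds | grep -iE 'subscription|installplan|catalogsource|clusterserviceversion'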


Create the internal rh-operators CatalogSource and the rh-operators ConfigMap, which contains the manifests for a number of popular Operators:

for num in {06,09}; do oc create -f operator-lifecycle-manager/deploy/ocp/manifests/0.7.2/0000_30_$num*; done


Verify the CatalogSource and ConfigMap were successfully created:

oc -n openshift-operator-lifecycle-manager get catalogsource rh-operators
oc -n openshift-operator-lifecycle-manager get configmap rh-operators
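
If you are curious which Operators the catalog ships, you can peek at the ConfigMap data. It is large, so only preview the first few lines:

oc -n openshift-operator-lifecycle-manager describe configmap rh-operators | head -n 20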


Create the remaining OLM objects, including the olm-operator, catalog-operator, and package-server Deployments:

for num in {10..13}; do oc create -f operator-lifecycle-manager/deploy/ocp/manifests/0.7.2/0000_30_$num*; done


Verify all three OLM deployments were successfully created:

oc -n openshift-operator-lifecycle-manager get deployments
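
Deployments only report desired state, so it is worth confirming that the underlying Pods actually reach the Running state (they may take a moment to pull images):

oc -n openshift-operator-lifecycle-manager get pods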


We have successfully set up the Operator Lifecycle Manager in our OpenShift cluster.
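
As a final, optional sanity check, you can list several of the objects created in this step with a single command:

oc -n openshift-operator-lifecycle-manager get serviceaccount,deployment,catalogsource,configmap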
