In this chapter, we will cover administration topics such as managing nodes, configuring authentication and service accounts, setting up proxies, managing persistent storage, and administering users and roles.
In OpenShift, we use the openshift binary along with the start command to boot up a new server. While launching a new master, we use master along with the start command, whereas while starting a new node we use node along with the start command. In order to do this, we need to create configuration files for the master as well as for the nodes. We can create a basic configuration file for the master and the node using the following commands.
$ openshift start master --write-config=/openshift.local.config/master
$ oadm create-node-config --node-dir=/openshift.local.config/node-<node_hostname> --node=<node_hostname> --hostnames=<hostname>,<ip_address>
Once we run the above commands, we get the base configuration files that can be used as the starting point for configuration. Later, we can use the same files to boot the new servers. The generated master configuration file looks as follows.
apiLevels:
- v1beta3
- v1
apiVersion: v1
assetConfig:
  logoutURL: ""
  masterPublicURL: https://172.10.12.1:7449
  publicURL: https://172.10.2.2:7449/console/
  servingInfo:
    bindAddress: 0.0.0.0:7449
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    requestTimeoutSeconds: 0
controllers: '*'
corsAllowedOrigins:
- 172.10.2.2:7449
- 127.0.0.1
- localhost
dnsConfig:
  bindAddress: 0.0.0.0:53
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://10.0.2.15:4001
etcdConfig:
  address: 10.0.2.15:4001
  peerAddress: 10.0.2.15:7001
  peerServingInfo:
    bindAddress: 0.0.0.0:7001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  servingInfo:
    bindAddress: 0.0.0.0:4001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  storageDirectory: /root/openshift.local.etcd
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: MasterConfig
kubeletClientInfo:
  ca: ca.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiLevels:
  - v1beta3
  - v1
  apiServerArguments: null
  controllerArguments: null
  masterCount: 1
  masterIP: 10.0.2.15
  podEvictionTimeout: 5m
  schedulerConfigFile: ""
  servicesNodePortRange: 30000-32767
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
masterClients:
  externalKubernetesKubeConfig: ""
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://172.10.2.2:7449
networkConfig:
  clusterNetworkCIDR: 10.1.0.0/16
  hostSubnetLength: 8
  networkPluginName: ""
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  assetPublicURL: https://172.10.2.2:7449/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
  masterPublicURL: https://172.10.2.2:7449/
  masterURL: https://172.10.2.2:7449/
  sessionConfig:
    sessionMaxAgeSeconds: 300
    sessionName: ssn
    sessionSecretsFile: ""
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 300
policyConfig:
  bootstrapPolicyFile: policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: router.default.svc.cluster.local
serviceAccountConfig:
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 0
  requestTimeoutSeconds: 3600
allowDisabledDocker: true
apiVersion: v1
dnsDomain: cluster.local
dnsIP: 172.10.2.2
dockerConfig:
  execHandlerName: native
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: NodeConfig
masterKubeConfig: node.kubeconfig
networkConfig:
  mtu: 1450
  networkPluginName: ""
nodeIP: ""
nodeName: node1.example.com
podManifestConfig:
  path: "/path/to/pod-manifest-file"
  fileCheckIntervalSeconds: 30
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: node-client-ca.crt
  keyFile: server.key
volumeDirectory: /root/openshift.local.volumes
This is how the node configuration file looks. Once we have these configuration files in place, we can run the following command to start the master and the node server.
$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml
In OpenShift, we have the oc command-line utility, which is used for carrying out most of the operations in OpenShift. We can use the following commands to manage the nodes.
$ oc get nodes
NAME                LABELS
node1.example.com   kubernetes.io/hostname=vklnld1446.int.example.com
node2.example.com   kubernetes.io/hostname=vklnld1447.int.example.com
Describing the details of a node.

$ oc describe node <node name>

Deleting a node.

$ oc delete node <node name>

Listing the pods running on one or more nodes.

$ oadm manage-node <node1> <node2> --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]

Evacuating the pods from a node.

$ oadm manage-node <node1> <node2> --evacuate --dry-run [--pod-selector=<pod_selector>]
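Before evacuating pods, a node is usually marked unschedulable first. The following is a minimal sketch of a typical maintenance sequence; the node name node1.example.com is only an example.

$ oadm manage-node node1.example.com --schedulable=false
$ oadm manage-node node1.example.com --list-pods
$ oadm manage-node node1.example.com --evacuate
$ oadm manage-node node1.example.com --schedulable=true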
In the OpenShift master, there is a built-in OAuth server, which can be used for managing authentication. All OpenShift users get their tokens from this server, which helps them communicate with the OpenShift API.
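For example, a user can obtain a token by logging in and then display it with oc whoami. The master URL and user name below are only placeholders for illustration.

$ oc login https://master.example.com:8443 -u developer -p <password>
$ oc whoami -t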
There are different kinds of authentication levels (identity providers) in OpenShift, which can be configured in the master configuration file.
While defining the master configuration, we can define the identity provider policy, where we specify the type of policy we wish to use.
Allow All

This policy allows any username and password to log in.
oauthConfig:
  ...
  identityProviders:
  - name: Allow_Authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
Deny All

This policy will deny access to all usernames and passwords.
oauthConfig:
  ...
  identityProviders:
  - name: deny_Authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: DenyAllPasswordIdentityProvider
HTPasswd

HTPasswd is used to validate the username and password against a flat file containing encrypted passwords.
To generate the encrypted password file, use the following command.
$ htpasswd -c </path/to/users.htpasswd> <user_name>
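As an illustration, assuming the file is kept at /etc/origin/master/users.htpasswd and two example users, the file is created with -c for the first user and then appended to for subsequent users.

$ htpasswd -c /etc/origin/master/users.htpasswd user1
$ htpasswd /etc/origin/master/users.htpasswd user2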
The encrypted file is then referenced in the identity provider configuration.
oauthConfig:
  ...
  identityProviders:
  - name: htpasswd_authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /path/to/users.htpasswd
LDAP

This is used for LDAP authentication, wherein the LDAP server plays the key role in authentication.
oauthConfig: ... identityProviders: - name: "ldap_authontication" challenge: true login: true provider: apiVersion: v1 kind: LDAPPasswordIdentityProvider attributes: id: - dn email: - mail name: - cn preferredUsername: - uid bindDN: "" bindPassword: "" ca: my-ldap-ca-bundle.crt insecure: false url: "ldap://ldap.example.com/ou=users,dc=acme,dc=com?uid"
Basic Authentication (Remote)

This is used when the username and password are validated against a remote server using server-to-server Basic authentication. The credentials are checked against a remote URL protected by Basic authentication, and the result is returned in JSON format.
oauthConfig:
  ...
  identityProviders:
  - name: my_remote_basic_auth_provider
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: BasicAuthPasswordIdentityProvider
      url: https://www.vklnld908.int.example.com/remote-idp
      ca: /path/to/ca.file
      certFile: /path/to/client.crt
      keyFile: /path/to/client.key
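For a successful login, the remote endpoint is expected to return a JSON document identifying the user. The field values below are only illustrative; the sub field identifies the user, the other fields are optional.

{
   "sub": "userid",
   "name": "User Name",
   "preferred_username": "username",
   "email": "user@example.com"
}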
Service accounts provide a flexible way of accessing the OpenShift API without exposing a regular user's username and password for authentication.
A service account uses a key pair consisting of a public and a private key for authentication. Authentication to the API is done using the private key and validating it against the public key.
serviceAccountConfig:
  ...
  masterCA: ca.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
  - ...
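If the key pair does not exist yet, it can be generated with standard openssl commands. This is only a sketch, assuming the files are created in the master configuration directory with the names referenced above.

$ openssl genrsa -out serviceaccounts.private.key 2048
$ openssl rsa -in serviceaccounts.private.key -pubout -out serviceaccounts.public.key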
Use the following command to create a service account.
$ oc create serviceaccount <service_account_name>
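The new account can then be verified with oc. The name robot below is only an example.

$ oc get serviceaccounts
$ oc describe serviceaccount robot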
In most production environments, direct access to the Internet is restricted. The servers are either not exposed to the Internet at all or are exposed via an HTTP or HTTPS proxy. In an OpenShift environment, this proxy machine definition is set as an environment variable.
This can be done by adding a proxy definition to the master and node files located under /etc/sysconfig. This is similar to what we do for any other application.
/etc/sysconfig/openshift-master
HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld908.int.example.com
/etc/sysconfig/openshift-node
HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld908.int.example.com
Once done, we need to restart the master and node services. The Docker daemon on each host needs the same proxy definition, which is set in the following file.
/etc/sysconfig/docker
HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld1446.int.example.com
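After editing these files, the affected services have to be restarted. A sketch of the restart sequence, assuming systemd and service names matching the sysconfig file names above, is shown below.

$ systemctl restart openshift-master
$ systemctl restart openshift-node
$ systemctl restart docker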
In order to make a pod run in a proxy environment, the proxy variables can be defined in the pod specification as shown below −
containers:
- env:
  - name: "HTTP_PROXY"
    value: "http://USER:PASSWORD@10.0.1.1:8080"
The oc env command can be used to update the environment variables of an existing object.
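A short usage sketch, assuming a deployment configuration named myapp:

$ oc env dc/myapp HTTP_PROXY=http://USER:PASSWORD@10.0.1.1:8080 HTTPS_PROXY=https://USER:PASSWORD@10.0.1.1:8080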
In OpenShift, persistent storage is built on the concepts of persistent volumes and persistent volume claims. This is one of the key concepts: first a persistent volume is created, and later that volume is claimed. For this, we need to have enough capacity and disk space on the underlying hardware. The following definition creates a persistent volume backed by NFS.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-unit1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /opt
    server: 10.12.2.2
  persistentVolumeReclaimPolicy: Recycle
Next, create the persistent volume using the oc create command.
$ oc create -f storage-unit1.yaml
persistentvolume "storage-unit1" created
Next, we claim the created volume using a persistent volume claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Create the claim.
$ oc create -f storage-claim1.yaml
persistentvolumeclaim "storage-claim1" created
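The claim can then be consumed from a pod by mounting it as a volume. The following is a minimal sketch; the pod name, image, and mount path are only illustrative, while the claim name matches the claim created above.

apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: storage-pod
    image: openshift/hello-openshift
    volumeMounts:
    - mountPath: /data
      name: storage-volume
  volumes:
  - name: storage-volume
    persistentVolumeClaim:
      claimName: storage-claim1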
User and role administration is used to manage users, their access, and their controls over different projects.
Predefined templates can be used to create new users in OpenShift. The following template defines a user, an identity, and the mapping between them, with the email address supplied as a parameter.
kind: "Template" apiVersion: "v1" parameters: - name: vipin required: true objects: - kind: "User" apiVersion: "v1" metadata: name: "${email}" - kind: "Identity" apiVersion: "v1" metadata: name: "vipin:${email}" providerName: "SAML" providerUserName: "${email}" - kind: "UserIdentityMapping" apiVersion: "v1" identity: name: "vipin:${email}" user: name: "${email}"
Use oc create -f <file name> to create users.
$ oc create -f vipin.yaml
Use the following command to delete a user in OpenShift.
$ oc delete user <user name>
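Existing users and their identities can be listed before or after these operations.

$ oc get users
$ oc get identities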
ResourceQuotas and LimitRanges are used for limiting resource usage. They restrict the number of pods and containers, and the resources they consume, on the cluster. The following quota restricts a project to ten pods.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota
spec:
  hard:
    pods: "10"
$ oc create -f resource-quota.yaml -n openshift-sample
$ oc describe quota resource-quota -n openshift-sample
Name:       resource-quota
Namespace:  openshift-sample
Resource    Used  Hard
--------    ----  ----
pods        3     10
Container limits can be used to restrict the resources that deployed containers are allowed to use. They define the maximum and minimum limits for certain objects, as sketched below.
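A minimal sketch of such a container limit definition, where the name core-resource-limits and the CPU and memory values are only illustrative:

apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
  - type: Container
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 100m
      memory: 4Mi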
User project limits restrict the number of projects a user can have at any point of time. They are defined by grouping users into categories such as bronze, silver, and gold.
We need to first define an object which holds the values for how many projects a bronze, silver, and gold category can have. This needs to be done in the master-config.yaml file.
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        - selector:
            level: platinum
        - selector:
            level: gold
          maxProjects: 15
        - selector:
            level: silver
          maxProjects: 10
        - selector:
            level: bronze
          maxProjects: 5
Restart the master server.
Assigning a user to a particular level.
$ oc label user vipin level=gold
Moving the user out of the label, if required.
$ oc label user <user_name> level-
Adding roles to a user.
$ oadm policy add-role-to-user <role> <user_name>
Removing the role from a user.
$ oadm policy remove-role-from-user <role> <user_name>
Adding a cluster role to a user.
$ oadm policy add-cluster-role-to-user <role> <user_name>
Removing a cluster role from a user.
$ oadm policy remove-cluster-role-from-user <role> <user_name>
Adding a role to a group.
$ oadm policy add-role-to-group <role> <groupname>
Removing a role from a group.
$ oadm policy remove-role-from-group <role> <groupname>
Adding a cluster role to a group.
$ oadm policy add-cluster-role-to-group <role> <groupname>
Removing a cluster role from a group.
$ oadm policy remove-cluster-role-from-group <role> <groupname>
This is one of the most powerful roles, where the user has the capability to manage a complete cluster, starting from the creation up to the deletion of the cluster. Such access can be granted at the project level with the admin role or at the cluster level with the cluster-admin role.
$ oadm policy add-role-to-user admin <user_name> -n <project_name>
$ oadm policy add-cluster-role-to-user cluster-admin <user_name>