
Open Grieves

Assimilate quickly!

You must comply!

CRI-O on OpenShift Container Platform 3.9 and RHEL 7.5

Out of trouble Posted on 2018-04-21 17:38:45

In order to try out CRI-O on OpenShift Container Platform 3.9 running on Red Hat Enterprise Linux 7.5, you need to label your nodes with runtime: cri-o.

E.g., in your /etc/ansible/hosts file:

mynode openshift_node_labels="{'region': 'nodes', 'zone': 'default', 'runtime': 'cri-o'}"

This is a workaround for this bug:

Ansible Tower on Azure

Out of trouble Posted on 2017-02-26 22:33:02

You can't run the installer as root directly; run it with sudo (sudo sh ./ …) instead. That works; otherwise it complains that you are not root.

Limit project creation to different users in OpenShift 3.4

You must read Posted on 2017-02-14 00:06:38


Today: how to limit users' ability to create projects in OpenShift Container Platform 3.4.

1. Add Example 2 found here: to the top of your /etc/origin/master/master-config.yaml file.

2. Add fitting labels to the different users, like so:
$ oc label user/admin level=admin
$ oc label user/user1 level=silver
$ oc label user/user2 level=gold

3. For a single-master cluster, restart 'atomic-openshift-master'; for an HA (3-5 master) cluster, restart 'atomic-openshift-master-api' and 'atomic-openshift-master-controllers' on all masters.

4. Done!
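For reference, the admission-plugin snippet from step 1 looks roughly like the sketch below. This is an assumption based on the documented format of the ProjectRequestLimit plugin, not necessarily the exact Example 2 from the linked page; the level labels match the ones applied in step 2, and the maxProjects values are arbitrary:

```yaml
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        - selector:
            level: admin      # no maxProjects means unlimited
        - selector:
            level: gold
          maxProjects: 10
        - selector:
            level: silver
          maxProjects: 5
        - maxProjects: 1      # default for users with no matching label
```

Users whose labels match an earlier selector get that limit; everyone else falls through to the last entry.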

Azure Load Balancer json with session persistence

Out of trouble Posted on 2017-02-13 12:33:25

It took me some time to find out that to get session persistence you use the property 'loadDistribution', and that to get Client IP session persistence you use 'sourceIP' as its value.

So the load balancer rules in JSON would look something like this:

"loadBalancingRules": [{
    "name": "myLBrules",
    "properties": {
        "frontendIPConfiguration": {
            "id": "[variables('myLbFrontEndConfigId')]"
        },
        "backendAddressPool": {
            "id": "[variables('myLbBackendPoolId')]"
        },
        "protocol": "Tcp",
        "loadDistribution": "sourceIP",
        "idleTimeoutInMinutes": 30,
        "frontendPort": 8443,
        "backendPort": 8443,
        "probe": {
            "id": "[variables('myLb8443ProbeId')]"
        }
    }
}]

If you want something more cut and paste friendly, have a look here:

Troubleshooting Azure deployment template for OpenShift

Out of trouble Posted on 2017-02-07 13:14:13

Hi all,

If you are trying to deploy OpenShift on Azure using the instructions here:

…and you got errors doing that, here's how to debug the deployment process. It's easy to type the wrong thing, so try to stick to copy-and-pasting if you are using the template in the Azure Portal. Trust me 🙂

1. Click on the resource group you’ve created.

2. Click on your deployment. It should say “1 Deploying” (or 2)

3. Click on “Microsoft Template”

4. Scroll down to “Operations details”

5. Find the resource which does not state “OK” or “Created”. Should be marked in red and state something like “Conflict” or “Error”.

6. Find out what went wrong. In this example, it’s a custom script which failed to run.

7. Logon to the server affected and follow the below debug flow:

[root@ocpmaster ~]# cd /var/lib/waagent/custom-script/download/
[root@ocpmaster download]# ls
0 1
[root@ocpmaster download]# ls */
0/:
stderr  stdout

1/:
stderr  stdout
[root@ocpmaster download]# cd 1
[root@ocpmaster 1]# tail stderr
Adding password for user mglantz
error: A server URL must be specified
[root@ocpmaster 1]#

8. Issue found.
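The debug flow in step 7 can also be scripted. The sketch below recreates the waagent directory layout in a temp directory (the paths and log contents mimic the session above) and scans every run's stderr for errors:

```shell
#!/bin/sh
# Demo: scan all custom-script runs for errors in their stderr logs.
# A temp dir stands in for /var/lib/waagent/custom-script/download.
base=$(mktemp -d)
mkdir -p "$base/0" "$base/1"
: > "$base/0/stderr"                  # run 0: clean
printf 'Adding password for user mglantz\nerror: A server URL must be specified\n' \
  > "$base/1/stderr"                  # run 1: failed

for f in "$base"/*/stderr; do
  if grep -q 'error:' "$f"; then
    echo "errors in run $(basename "$(dirname "$f")"):"
    cat "$f"
  fi
done
```

On a real node you would point base at /var/lib/waagent/custom-script/download instead of the temp directory.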

Running a Git server (GOGS) on OpenShift

You must read Posted on 2017-02-07 11:11:21

Hi all,

Sometimes your OpenShift cluster may not have a connection to a Git repository such as GitHub. Then it may be a good idea to run a Git service on the OpenShift cluster itself. This may also be a good idea in general, if each development team wants its own clean environment, shared with no one.

Here’s an example on how you can do that. In my example, I’m using the GOGS (GO Git Service), as it’s tiny, and there are existing OpenShift templates for deploying it.

Here we go, step-by-step:

1. Clone Alessandro Arrichiello's GitHub repo, which contains templates and more for us to use
$ git clone

2. Create a project to deploy your GOGS server in. In my example, I’ll use the name: my-gogs-project
$ oc new-project my-gogs-project
$ oc project my-gogs-project

3. Go into the cloned ‘openshift-gogs-template’ project.
$ cd openshift-gogs-template

4. Create the gogs service account
$ oc create -f gogs-sa.yml

5. Add permissions for the gogs service account:
$ oadm policy add-scc-to-user privileged system:serviceaccount:my-gogs-project:gogs
$ oc edit scc anyuid

allowPrivilegedContainer: false
# change to:
allowPrivilegedContainer: true

# and under 'users:', add:
- system:serviceaccount:my-gogs-project:gogs
# then save (do not add these comment lines..)

6. Create the template. If the template is not created in the openshift project, it will not be available to all projects.
$ oc create -f gogs-standalone-template.yml -n openshift

7. Deploy GOGS from the template (Add to Project in the GUI).
8. Go to the GOGS URL (route) to do the final setup (don't forget to create a user).
9. Done.

DNS issue with OCP 3.3 deployment to Azure

Out of trouble Posted on 2017-01-04 01:38:15

* This was a DNS issue in Azure which has been resolved.

If you are deploying OpenShift to Azure using Harold Wong's deployment templates: I just stumbled into an issue where deployOpenShift would fail to trigger a successful Ansible run, due to problems with DNS resolution. I worked around this temporarily by adding the infra and node names to /etc/hosts together with their internal IPs (which you find in the Azure Portal if you go to a virtual machine and click on its network interface).

I'll try to find out why this happens.

More info to follow here:

Red Hat OpenShift (OCP) 3.3 in Microsoft Azure using Azure CLI

You must read Posted on 2016-12-03 20:16:28

Hi all,

If you read my previous post regarding deploying OpenShift Container Platform 3.3 in Azure, here’s an add-on to that.

To deploy an OpenShift cluster using the Azure CLI, do as follows:

$ azure group deployment create theResourceGroup -f ./azuredeploy.json

$ azure group deployment create theResourceGroup -f ./azuredeploy.ha.json


$ azure group deployment create --resource-group theResourceGroup --template-uri ""

$ azure group deployment create --resource-group theResourceGroup --template-uri


Please note that I created a downstream fork of Harold Wong's template, as it didn't work with the Azure CLI when you deploy a non-HA cluster. That fix will hopefully soon find its way into the normal upstream repo, which I recommend you use.

A non-HA cluster is up and running in approximately 20 minutes.

Red Hat OpenShift (OCP) 3.3 in Microsoft Azure

You must read Posted on 2016-11-23 10:26:16

Hi all,

Here’s a quick guide on how to get Red Hat OpenShift Container Platform, version 3.3 (latest) running in Microsoft Azure, step-by-step.

* OpenShift subscriptions
* Azure account
* Azure CLI: Download here.

Run the following Azure CLI commands:

0. Activate the Key Vault provider; this will allow you to create key vaults.
a. azure provider register Microsoft.KeyVault
Ex: [azure provider register Microsoft.KeyVault]

1. Create a key vault, in which you'll store the SSH key that will give you access.

Create the Key Vault using the Azure CLI. This must be run from a Linux machine (you can use the Azure CLI container from Docker for Windows) or a Mac.

a. Create new Resource Group: azure group create <name> <location>

Ex: [azure group create ResourceGroupName 'East US']

b. Create Key Vault: azure keyvault create -u <vault-name> -g <resource-group> -l <location>

Ex: [azure keyvault create -u KeyVaultName -g ResourceGroupName -l 'East US']

c. Create secret: azure keyvault secret set -u <vault-name> -s <secret-name> --file <private-key-file-name>

Ex: [azure keyvault secret set -u KeyVaultName -s SecretName --file ~/.ssh/id_rsa]

d. Enable the Key Vault for template deployment: azure keyvault set-policy -u <vault-name> --enabled-for-template-deployment true

Ex: [azure keyvault set-policy -u KeyVaultName --enabled-for-template-deployment true]
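Once the vault is enabled for template deployment, a template can pull the secret straight from it. A sketch of what that reference looks like in a deployment parameters file; the parameter name is hypothetical, and the subscription ID and resource names are placeholders matching the examples above:

```json
"sshPrivateKey": {
    "reference": {
        "keyVault": {
            "id": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.KeyVault/vaults/KeyVaultName"
        },
        "secretName": "SecretName"
    }
}
```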

Fill in the deployment template, deploy and configure the OpenShift cluster:

2. Deploy the OpenShift cluster in Azure

a. Go to

b. Initiate template in Azure. Click on ‘Deploy to Azure’

c. Fill in the information in the template. Hover your mouse over the little 'i' at the end of each row.

Read the additional notes below, as well:

Resource group: Select the same resource group as you used when you created the Key Vault!
OpenShift Cluster Prefix: Doesn’t matter much what you set here. Just put in something that makes sense.
OpenShift Master Public Ip Dns Label: Whatever you put here will complete the masters public DNS name, as such: <what-you-put-here>.<availability-zone>
Node Lb Public Ip Dns Label: Whatever you put here will complete the loadbalancers public DNS name, as such: <what-you-put-here>.<availability-zone> Doesn’t matter much as it won’t be exposed to end users.
Node Instance Count: Defines the number of servers that will host actual containers. I ran into issues when setting values higher than 1; I think that was a limitation on my Azure account.
Data Disk Size: Depending on what you’ll use this for, set something that makes sense. I used 75 GB for my demo environment.
Admin Username: I used the same user name as the one I have for azure. This will be the username you use to log in to OpenShift with.
OpenShift password: The password for the user above.
Cloud Access Username: This is your Red Hat Network username, which has access to the OpenShift subscriptions.
Cloud Access Password: Password to your Red Hat Network user.
Cloud Access Pool Id: This is the subscription pool ID which holds your OpenShift subscriptions. To get it, log on to a Red Hat server registered with that username and type: subscription-manager list --available. You will then see the line 'Pool ID: 8a85f98157b24ea1011234567890'. Copy and paste the pool ID.
Subscription Id: This is your Azure subscription ID. Run 'azure account show', look at the line that states 'ID', and copy and paste it.
Vault Key Resource Group: What you defined earlier in step 1, should be the same as ‘Resource Group’, if you listened to my instructions.
Vault Key Name: What you defined earlier in step 1.
Vault Key Secret: What you defined earlier in step 1.
Default Sub Domain Type: Here I selected 'custom', because I want to set my own domain for applications deployed in OpenShift. If you do not have your own domain, that doesn't really matter: you can put whatever here and then, for testing purposes, add the load balancer's hostname and IP to the hosts file on your laptop. When I selected xipio, the deployment failed.
Default Sub Domain: Your own domain, or something arbitrary such as ''
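The Pool ID lookup described above can be scripted; a minimal sketch, where $sample stands in for real subscription-manager list --available output (the ID is the made-up example from the text):

```shell
#!/bin/sh
# Sketch: pull the pool ID out of subscription-manager output.
sample='Subscription Name:   Red Hat OpenShift Container Platform
Pool ID:             8a85f98157b24ea1011234567890
Available:           10'

# The ID is the third whitespace-separated field on the "Pool ID:" line.
pool_id=$(printf '%s\n' "$sample" | awk '/^Pool ID:/ {print $3}')
echo "$pool_id"
```

On a registered server you would pipe the real command into awk instead of using $sample.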

3. Click purchase and wait until everything is deployed. For me it took an hour or so.

4. Log in to the master node, like so: https://<master-label>.<availabilityzone> with the user and password defined above.

5. (Optional) Set up custom DNS
If you want your applications to be reachable via your own domain rather than the default, follow the instructions below:

a. Create a wildcard DNS A record, something like *, that points at the load balancer's IP address. To find the IP, select 'All resources' in Azure, click on 'loadb' (Public IP address), and copy and paste it. This way you'll be able to access your applications via DNS and do not have to edit host files and such. I found no way to do this in Azure, so I used my own private DNS provider to create the wildcard A record.

b. Ensure that ‘/etc/origin/master/master-config.yaml’ states:
subdomain: “”

c. If not, edit the file and restart the OpenShift master with 'systemctl restart atomic-openshift-master'
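The wildcard record from step 5a would look something like the sketch below in a BIND-style zone file. The domain and IP address are placeholders; use your own domain and the load balancer IP you copied from the Azure Portal:

```
; hypothetical wildcard record for apps running on OpenShift
*.apps.example.com.   300   IN   A   52.0.0.10
```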


Red Hat Satellite 6.2.2 Pulp sync results in Error 500

Out of trouble Posted on 2016-11-02 10:43:09

If, when you click on Content > Sync Status, you get an error message, and you see the below error in /var/log/foreman/production.log:

2016-11-01 19:20:52 [katello/pulp_rest] [E] "https://sat6.FQDN/pulp/api/v2/repositories/search/", 1613 byte(s) length, "Accept"=>"*/*; q=0.5, application/xml", "Accept-Encoding"=>"gzip, deflate", "Content-Length"=>"1613", "accept"=>"application/json", "content_type"=>"application/json"
| \n# => 500 InternalServerError | text/html 531 bytes
2016-11-01 19:20:52 [app] [I] Completed 500 Internal Server Error in 1047ms
2016-11-01 19:20:52 [app] [F]
| RestClient::InternalServerError (500 Internal Server Error):
| katello ( app/models/katello/glue/pulp/repos.rb:53:in `prepopulate!'
| katello ( app/helpers/katello/sync_management_helper.rb:38:in `collect_repos'
| katello ( app/controllers/katello/sync_management_controller.rb:27:in `index'
| app/controllers/concerns/application_shared.rb:13:in `set_timezone’
| lib/middleware/catch_json_parse_errors.rb:9:in `call’

Then, perhaps, the pulp database did not upgrade properly when you upgraded pulp at some point. Try:

(Running as root)
# usermod -s /bin/bash apache
# su – apache

(Running as apache)
$ pulp-manage-db
$ exit

(Running as root)

# usermod -s /bin/false apache
# katello-service restart

