
Open Grieves

Assimilate quickly!

You must comply!

Limit project creation to different users in OpenShift 3.4

You must read Posted on 2017-02-14 00:06:38


Today: How to limit users' ability to create projects in OpenShift Container Platform 3.4.

1. Add Example 2, found here, to the top of your /etc/origin/master/master-config.yaml file.

2. Add fitting labels to the different users, like so:
$ oc label user/admin level=admin
$ oc label user/user1 level=silver
$ oc label user/user2 level=gold

3. For a single-master cluster, restart 'atomic-openshift-master'; for an HA (3-5 master) cluster, restart 'atomic-openshift-master-api' and 'atomic-openshift-master-controllers' on all masters.

4. Done!
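Since the linked Example 2 isn't reproduced above, here is a rough sketch of what a ProjectRequestLimit admission plugin configuration keyed on those labels might look like in master-config.yaml. The structure follows the OpenShift 3.x ProjectRequestLimit plugin, but the concrete limits are assumptions for illustration, not the contents of the actual example:

```yaml
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        # Users labeled level=admin: no maxProjects means unlimited projects
        - selector:
            level: admin
        # Users labeled level=gold may create up to 3 projects (assumed limit)
        - selector:
            level: gold
          maxProjects: 3
        # Users labeled level=silver may create 1 project (assumed limit)
        - selector:
            level: silver
          maxProjects: 1
        # Everyone else gets no self-service project creation
        - maxProjects: 0
```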

Running a Git server (GOGS) on OpenShift

Posted on 2017-02-07 11:11:21

Hi all,

Sometimes your OpenShift cluster may not have a connection to a Git repository such as GitHub. Then it may be a good idea to run a Git service on the OpenShift cluster itself. This may also be a good idea in general, if each development team wants their own clean environment, shared with no one.

Here's an example of how you can do that. In my example I'm using GOGS (Go Git Service), as it's tiny and there are existing OpenShift templates for deploying it.

Here we go, step-by-step:

1. Clone Alessandro Arrichiello's GitHub repo, which contains templates and more for us to use
$ git clone

2. Create a project to deploy your GOGS server in. In my example, I’ll use the name: my-gogs-project
$ oc new-project my-gogs-project
$ oc project my-gogs-project

3. Go into the cloned ‘openshift-gogs-template’ project.
$ cd openshift-gogs-template

4. Create the gogs service account
$ oc create -f gogs-sa.yml

5. Add permissions for the gogs service account:
$ oadm policy add-scc-to-user privileged system:serviceaccount:my-gogs-project:gogs
$ oc edit scc anyuid

allowPrivilegedContainer: false
# TO
allowPrivilegedContainer: true

# ADD:
- system:serviceaccount:my-gogs-project:gogs
# SAVE (do not add this line..)

6. Create the template. If the template is not created in the openshift project, it will not be available to all projects.
$ oc create -f gogs-standalone-template.yml -n openshift

7. Deploy GOGS from the template (Add to Project in the GUI).
8. Go to the GOGS URL (route) to do the final setup (don't forget to create a user).
9. Done.
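If you'd rather stay on the CLI for step 7, something along these lines should work too. Note that the template name 'gogs-standalone' is an assumption based on the file name; check the real name first:

```shell
# See what the template registered itself as in the openshift project
oc get templates -n openshift

# Instantiate it in the current project (template name assumed from the file name)
oc new-app --template=gogs-standalone
```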

Red Hat OpenShift (OCP) 3.3 in Microsoft Azure using Azure CLI

Posted on 2016-12-03 20:16:28

Hi all,

If you read my previous post regarding deploying OpenShift Container Platform 3.3 in Azure, here’s an add-on to that.

To deploy an OpenShift cluster using the Azure CLI, do as follows:

Non-HA cluster:
$ azure group deployment create theResourceGroup -f ./azuredeploy.json

HA cluster:
$ azure group deployment create theResourceGroup -f ./azuredeploy.ha.json

Or, deploying straight from a template URI:
$ azure group deployment create --resource-group theResourceGroup --template-uri ""


Please note that I created a downstream fork of Harold Wong's template, as it didn't work with the Azure CLI when deploying a non-HA cluster. That fix will hopefully soon find its way into the normal upstream repo, which I recommend you use.

A non-HA cluster is up and running in approximately 20 minutes.
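If you want to follow the deployment progress from the CLI while you wait, the classic 'azure' CLI can list deployments and their provisioning state. The deployment name below is an assumption (it usually defaults to the template file name), so adjust as needed:

```shell
# List deployments in the resource group with their provisioning state
azure group deployment list theResourceGroup

# Show details for one deployment (deployment name is an assumption)
azure group deployment show theResourceGroup azuredeploy
```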

Red Hat OpenShift (OCP) 3.3 in Microsoft Azure

Posted on 2016-11-23 10:26:16

Hi all,

Here’s a quick guide on how to get Red Hat OpenShift Container Platform, version 3.3 (latest) running in Microsoft Azure, step-by-step.

Prerequisites:

* OpenShift subscriptions
* Azure account
* Azure CLI: Download here.

Run the following Azure CLI commands:

0. Register the Key Vault provider; this will allow you to create key vaults.
a. azure provider register Microsoft.KeyVault
Ex: [azure provider register Microsoft.KeyVault]

1. Create a key vault, in which you'll store your SSH key, to provide you with access.

Create the Key Vault using the Azure CLI. This must be run from a Linux machine (you can use the Azure CLI container via Docker for Windows) or a Mac.

a. Create new Resource Group: azure group create <name> <location>

Ex: [azure group create ResourceGroupName 'East US']

b. Create Key Vault: azure keyvault create -u <vault-name> -g <resource-group> -l <location>

Ex: [azure keyvault create -u KeyVaultName -g ResourceGroupName -l 'East US']

c. Create Secret: azure keyvault secret set -u <vault-name> -s <secret-name> --file <private-key-file-name>

Ex: [azure keyvault secret set -u KeyVaultName -s SecretName --file ~/.ssh/id_rsa]

d. Enable the Key Vault for template deployment: azure keyvault set-policy -u <vault-name> --enabled-for-template-deployment true

Ex: [azure keyvault set-policy -u KeyVaultName --enabled-for-template-deployment true]
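Step 1c above assumes you already have an SSH key pair at ~/.ssh/id_rsa. If you don't, you can generate a throwaway pair first; the path below is purely illustrative so the example is safe to run:

```shell
# Generate an RSA key pair non-interactively (no passphrase, demo only)
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/azure_demo_id_rsa

# The private key file is what goes into the Key Vault secret in step 1c
ls -l /tmp/azure_demo_id_rsa /tmp/azure_demo_id_rsa.pub
```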

Fill in the deployment template, deploy and configure the OpenShift cluster:

2. Deploy the OpenShift cluster in Azure

a. Go to

b. Initiate template in Azure. Click on ‘Deploy to Azure’

c. Fill in the information in the template. Hover your mouse over the little 'i' at the end of each row for help.

Read the additional notes below, as well:

Resource group: Select the same resource group as you used when creating the Key Vault!
OpenShift Cluster Prefix: Doesn’t matter much what you set here. Just put in something that makes sense.
OpenShift Master Public Ip Dns Label: Whatever you put here will complete the master's public DNS name, as such: <what-you-put-here>.<availability-zone>
Node Lb Public Ip Dns Label: Whatever you put here will complete the load balancer's public DNS name, as such: <what-you-put-here>.<availability-zone> This doesn't matter much, as it won't be exposed to end users.
Node Instance Count: Defines the number of servers that will host the actual containers. I ran into issues when defining values higher than 1; I think that was a limitation set on my Azure account.
Data Disk Size: Depending on what you’ll use this for, set something that makes sense. I used 75 GB for my demo environment.
Admin Username: I used the same user name as the one I have for Azure. This will be the username you use to log in to OpenShift.
OpenShift password: The password for the user above.
Cloud Access Username: This is your Red Hat Network username, which has access to the OpenShift subscriptions.
Cloud Access Password: Password to your Red Hat Network user.
Cloud Access Pool Id: This is the subscription pool ID which holds your OpenShift subscriptions. To get it, log on to a Red Hat server registered with that username and type: subscription-manager list --available. You will then see a line like 'Pool ID: 8a85f98157b24ea1011234567890'. Copy and paste the pool ID.
Subscription Id: This is your Azure subscription ID. Run: ‘azure account show’ and look at the line that states ‘ID’ and copy and paste.
Vault Key Resource Group: What you defined earlier in step 1, should be the same as ‘Resource Group’, if you listened to my instructions.
Vault Key Name: What you defined earlier in step 1.
Vault Key Secret: What you defined earlier in step 1.
Default Sub Domain Type: Here I selected 'custom', because I want to set my own domain for applications deployed in OpenShift. If you do not have your own domain, that doesn't matter much: you can put whatever here and then, for testing purposes, add the hostname + IP of the load balancer to the host file on your laptop. When I selected xipio, the deployment failed.
Default Sub Domain: Your own domain, or something arbitrary such as ''

3. Click purchase and wait until everything is deployed. For me it took an hour or so.

4. Log in to the master node, like so: https://<master-label>.<availabilityzone> with the user and password defined above.

5. (Optional) Set up custom DNS
If you want your applications to be reachable via your own domain rather than the default one, follow the instructions below:

a. Create a wildcard DNS A record, something like *, pointing at the load balancer's IP address. To find the IP, select 'All resources' in Azure, click 'loadb' (Public IP address), and copy it. This means you'll be able to access your applications via DNS and won't have to edit host files and such. I found no way to create the record in Azure itself, so I used my own private DNS provider to create the wildcard A record.

b. Ensure that ‘/etc/origin/master/master-config.yaml’ states:
subdomain: “”

c. If not, edit the file and restart the OpenShift master, with ‘systemctl restart atomic-openshift-master’
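To sanity-check the wildcard record before pointing applications at it, resolve any hostname under it (example.com stands in for your real domain, and the load balancer IP is whatever you found in Azure):

```shell
# Any name under the wildcard should resolve to the load balancer's IP
dig +short myapp.apps.example.com

# Or bypass DNS entirely and test straight against the router
curl -H "Host: myapp.apps.example.com" http://<loadbalancer-ip>/
```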


SCAP/PCI DSS v3 compliance on RHEL 7.2 with Satellite 6.1.7

Posted on 2016-02-21 23:27:43

Here's how you get a good base for PCI DSS v3 compliance on Red Hat Enterprise Linux 7.2, using Red Hat Satellite 6.1.7 and OpenSCAP, which is currently in Technology Preview in Satellite.

Step 1. Install some RPMs.

If you only have a Red Hat Satellite server:
- Install these packages on the Satellite server:
# yum install ruby193-rubygem-foreman_openscap rubygem-smart_proxy_openscap puppet-foreman_scap_client

If you have a Red Hat Satellite and Capsule server(s):
- Install these packages on the Satellite server:
# yum install ruby193-rubygem-foreman_openscap rubygem-smart_proxy_openscap puppet-foreman_scap_client

- Install these packages on the Capsule server(s):
# yum install rubygem-smart_proxy_openscap puppet-foreman_scap_client

Step 2: Restart Satellite/Capsule services with:
# katello-service restart

Step 3. Add a cronjob for the foreman-proxy user on your Satellite and Capsule, to run the following command:

This will push the reports to the Satellite GUI.
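The command itself didn't survive the copy-paste above. On Satellite 6.1 the OpenSCAP smart proxy plugin ships a script for exactly this, so the cron entry likely looks something like the following; the schedule is arbitrary, and the script path should be verified against what rubygem-smart_proxy_openscap actually installed:

```shell
# Crontab entry for the foreman-proxy user (edit with: crontab -u foreman-proxy -e)
# Pushes queued ARF reports to the Satellite every 30 minutes
30 * * * * /usr/bin/smart-proxy-openscap-send
```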

Step 4. Add two Puppet modules available from Puppet Forge to your Content View:

Step 5: Promote Content View.

Step 6: Create a special hostgroup for PCI DSS compliance. Call it whatever you like; the parent should be your 'base SOE' hostgroup. You can call it just PCI DSS.

Step 7: Add the puppetlabs/stdlib Puppet module to your PCI DSS hostgroup.

Step 8: Download the following file from your Satellite server: /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml (owned by the package scap-security-guide)

Step 9: Go to Hosts > SCAP Contents (at the bottom of the Hosts menu).

Step 10: Click ‘New SCAP content’

Title: RHEL7.2 Security Guides
Scap file: <upload ssg-rhel7-ds.xml>
Location: Default Location?
Organisation: Default Organisation?

Step 11: Go to Hosts > Policies (at the bottom of the Hosts menu)

Step 12: Click ‘New Compliance Policy’.
Name: PCIDSSv3
Description: Payment Card Industry Data Security Standard, Version 3
(Click Next)
SCAP Content: RHEL7.2 Security Guides
XCCDF Profile: Draft PCI-DSS v3 Control Baseline for Red Hat Enterprise Linux 7
(Click Next)
Period: <Whatever you like>
(Click Next)
Locations: Default location?
(Click Next)
Organisations: Default organisation?
(Click Next)
Host groups: <Select your PCI DSS hostgroup>

Step 13: In your Kickstart Default provisioning template..
Add the following code. (Please note that if you have not cloned your Kickstart Default provisioning template, you have to do that first, to be able to edit it.)
<% if @host.hostgroup.to_s == "RHEL7 SOE Development/PCI DSS" -%>
%addon org_fedora_oscap
content-type = scap-security-guide
profile = pci-dss
%end
<% end -%>
after: skipx
before: <% subnet = @host.subnet -%>

(In above example, my PCI DSS hostgroup is called PCI DSS and it’s parent is RHEL7 SOE Development)

Step 14: In your Kickstart Default provisioning template, add the following code to be executed in %post (adjust as required):

# Put name of hostgroup into variable
HOSTGROUP="<%= @host.hostgroup.to_s %>"

# If user has selected the PCI DSS hostgroup, apply the PCI DSS v3 compliant SCAP profile.
if echo $HOSTGROUP | grep "PCI DSS" >/dev/null; then
  oscap xccdf eval --remediate --profile xccdf_org.ssgproject.content_profile_pci-dss /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml >/tmp/pcidss-hardening 2>&1

  # Send scap report at boot-up
  echo "foreman_scap_client 1" >>/etc/rc.d/rc.local
fi

Step 15: There are no more steps! Install a new system and behold how much simpler PCI DSS compliance has become 🙂

Note: Of course, just applying a best-practice guide does not make you PCI DSS compliant; there is much more to it than that, but this helps, a lot 🙂

How much diskspace does Satellite 6 with RHEL7 synced-in require?

Posted on 2016-02-18 15:22:37

If you're to do a quick demo installation of Red Hat Satellite 6.1, you may wonder how much (or how little) disk space you can get away with. The answer: running on a minimal installation of Red Hat Enterprise Linux 7.2, excluding the RHEL7 Supplementary repo but including:

* Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server
* Red Hat Enterprise Linux 7 Server – Extras RPMs x86_64
* Red Hat Enterprise Linux 7 Server – Optional RPMs x86_64 7Server
* Red Hat Satellite Tools 6.1 for RHEL 7 Server RPMs x86_64
* Red Hat Enterprise Linux 7 Server Kickstart x86_64 7.2
* Puppet Forge (~3950 modules)

It takes approximately 36 GB of total space, excluding SWAP, as of this writing 🙂

Red Hat Solution Architect in the Nordics

Posted on 2016-02-18 12:16:25

So, I'm now employed at Red Hat as a Solution Architect in the Nordics. Primarily I work in Denmark. If you are also located there and want to know more about what cool stuff can be done with Red Hat's products, let me know. My areas of specialty are in and around infrastructure, LCM, IaaS, PaaS, SaaS, migration, HA, SoE, etc.

/ sudo ‘thatthingthatgoesbeforethedomainname’ redhat dotcom..

New job

Posted on 2016-01-11 08:49:12

Starting mid-February, I begin my new job as a Solution Architect at Red Hat in the Nordics, with focus on Denmark. So… wish me luck 🙂

Simple automatic decommissioning

Posted on 2015-11-04 12:56:57

Problem: 100000000 servers that are not used in your development or test environments.
Solution: Put the script below in /etc/cron.daily/. 30 days after the installation date, the server will shut down automatically. You can make this as fancy as you want, but the solution stays fairly simple.

#!/bin/bash

# Date format is 15-01-31
DATEA=$(cat /etc/server-installation-date-gets-put-in-this-file)
DATEB=$(date +%y-%m-%d)

# Calculates number of days between today and installation date
if [ "$(( ( $(date --date="$DATEB" +%s) - $(date --date="$DATEA" +%s) ) / (60*60*24) ))" -ge 30 ]; then
    logger -t auto_decommision "Server has passed its end date. Halting server."
    shutdown -h now
fi
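The script reads its installation date from a file, so something has to write that file at provision time, for example in a kickstart %post. A minimal sketch, using /tmp instead of /etc so it's safe to try out:

```shell
# Record today's date in the two-digit-year format the script expects (e.g. 24-02-15).
# The real script reads /etc/server-installation-date-gets-put-in-this-file.
date +%y-%m-%d > /tmp/server-installation-date-gets-put-in-this-file
cat /tmp/server-installation-date-gets-put-in-this-file
```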

Cloning VMs on VMware clusters using Red Hat Satellite 6.1

Posted on 2015-09-23 10:50:28

Keep in mind that you need to put a template (in Sat 6.1, called image) on each cluster.

That was all.
