
Open Grieves

Assimilate quickly!

You must comply!

Red Hat Satellite 6.1 API integration: how to deploy and decommission servers

Posted on 2015-09-16 20:46:39


So, I spent some time integrating with Red Hat Satellite 6.1, deploying and decommissioning servers using its API. I’ve compiled all the goodies for you 🙂

# Please note
I deploy cloned VMware guests; if your compute resource or provisioning method differ, adjust accordingly.
* This requires you to have created a Linux standard in Red Hat Satellite 6.1.
* Just so it’s completely clear, you need to have defined a hostgroup that points out all the stuff required to deploy a server.

# General API information

API type: REST
Header: "Accept: application/json"
Header: "Content-Type: application/json"

# How to deploy a new server via Red Hat Satellite 6.1 API.

Step 1: Firstboot script that you put in your VM template:
curl -s -o /root/ --user user:pass -H "Content-Type:application/json" -H "Accept:application/json" -k
# Also, don't forget to do some sanity checking, retry of fetch, etc.
if [ -f /root/ ]; then
    sh /root/
fi

To make a very long story very short: this is because getting the finish script
to run in Red Hat Satellite 6.1 via the API seems very difficult and
impractical (you have to define undocumented compute_attributes during host
creation). If you find a way to do this without compute_attributes, let me know.
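For reference, the fetch the firstboot snippet performs can also be sketched with the Python 3 stdlib. The URL and credentials below are placeholders (the real ones were lost from the snippet above), and the request is only constructed here, never sent:

```python
import base64
import urllib.request

# Build the same authenticated GET the curl line issues.
# "https://satellite-fqdn/unattended/finish" is a hypothetical URL;
# substitute whatever your template actually fetches.
url = "https://satellite-fqdn/unattended/finish"
req = urllib.request.Request(url)
req.add_header("Accept", "application/json")
req.add_header("Content-Type", "application/json")

# HTTP basic auth, the equivalent of curl's --user user:pass
token = base64.b64encode(b"user:pass").decode()
req.add_header("Authorization", "Basic %s" % token)

print(req.get_full_url())
```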

Step 2: Create a new server.

POST: https://satellite-fqdn/api/v2/hosts


{ "host": { "name": "host-fqdn",
    "location_id": 2,
    "organization_id": 1,
    "managed": true,
    "compute_resource_id": 1,
    "hostgroup_id": 1,
    "compute_profile_id": 1,
    "enabled": true,
    "build": true
}}

Example integration code:
curl -k -u user:pass -X POST -d "{\"host\": { \"name\": \"host-fqdn\", \"location_id\": \"2\", \"organization_id\": \"1\", \"managed\": \"true\", \"compute_resource_id\": \"1\", \"hostgroup_id\": \"1\", \"compute_profile_id\": \"1\", \"enabled\": \"true\", \"build\": \"true\" }}" -H "Accept: application/json" -H "Content-Type: application/json" https://satellite-fqdn/api/v2/hosts
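If you'd rather build that request body programmatically than hand-escape quotes, here is a sketch in Python 3 (stdlib only) producing the same payload the curl example sends; nothing here talks to Satellite:

```python
import json

# The same host payload as in the curl example, built as a dict so the
# quoting takes care of itself. IDs are plain integers here; the API
# accepts the quoted-string form the curl example uses as well.
payload = {
    "host": {
        "name": "host-fqdn",
        "location_id": 2,
        "organization_id": 1,
        "managed": True,
        "compute_resource_id": 1,
        "hostgroup_id": 1,
        "compute_profile_id": 1,
        "enabled": True,
        "build": True,
    }
}

body = json.dumps(payload)
print(body)
```

POST `body` to https://satellite-fqdn/api/v2/hosts with the two JSON headers, exactly as the curl example does.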

Note: For purposes other than a POC:

* Dynamic values, depending on environment:

compute_resource_id: Defines what VMware vSphere server or other compute resource to contact
hostgroup_id: Defines what Linux standard to install and in what Life Cycle Environment

* Dynamic values depending on version of Linux standard:
hostgroup_id: Defines what Linux standard to install and in what Life Cycle Environment

* Dynamic values depending on application:
compute_profile_id: Defines resources in VMware such as disk, VLAN, CPU, and memory. Settings get predefined in Satellite 6.1 for each application.

Step 3: Power on server:

PUT https://satellite-fqdn/api/v2/hosts/host-fqdn/power

{"power_action": "on"}

Expected on success:


Example integration code:
curl -k -u user:pass -X PUT -d "{\"power_action\": \"on\" }" -H "Accept: application/json" -H "Content-Type: application/json" https://satellite-fqdn/api/v2/hosts/host-fqdn/power

# How to get server status via Red Hat Satellite 6.1 API:

Step 1: Get host ID:

GET https://satellite-fqdn/api/v2/hosts?search="host-fqdn"

Step 2: Get host status:

GET https://satellite-fqdn/api/v2/hosts/HOSTID/status

Expected during installation of server:

{“status”:”Pending Installation”}

Expected when installation complete (OS configured):
{“status”:”No changes”}

Example integration code (Get host ID and get status):
HOSTID=$(curl -s -k -u user:pass -X GET https://satellite-fqdn/api/v2/hosts?search="host-fqdn"|cut -d: -f4|grep ip|cut -d, -f1)
curl -k -u user:pass -X GET -H "Accept: application/json" -H "Content-Type: application/json" https://satellite-fqdn/api/v2/hosts/${HOSTID}/status
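The cut/grep pipeline above is fragile; since the API returns JSON, parsing it as JSON is safer. A Python 3 sketch against a trimmed, mocked-up response (the results/id structure is how Satellite 6 answers host searches, but verify against your version):

```python
import json

# A trimmed example of what /api/v2/hosts?search=... returns.
# Real responses carry many more fields per host.
response = '''
{"total": 1, "subtotal": 1, "page": 1, "per_page": 20,
 "results": [{"id": 42, "name": "host-fqdn", "ip": ""}]}
'''

data = json.loads(response)
host_id = data["results"][0]["id"]
print(host_id)
```

Then GET https://satellite-fqdn/api/v2/hosts/ID/status with that ID, as in Step 2.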

# How to delete a server via Red Hat Satellite 6.1 API:

DELETE https://satellite-fqdn/api/v2/hosts/host-fqdn

Example integration code:
curl -k -u user:pass -X DELETE -H "Accept: application/json" -H "Content-Type: application/json" https://satellite-fqdn/api/v2/hosts/host-fqdn

VMware and Red Hat Enterprise Linux 7

Posted on 2015-06-17 10:58:14

Heads up for Red Hat Enterprise Linux 7: Open VM Tools are now a part of RHEL 7, so you don’t have to download VMware Tools yourself. Just run ‘yum install open-vm-tools’. And no, it won’t override the vmxnet3 drivers etc.

Me at Red Hat Summit 2015 in Boston

Posted on 2015-06-10 16:40:28

So, yeah, I’m going to talk at Red Hat Summit 2015 in Boston. So, make sure to catch my session. I promise you, it will be like nothing else you’ve seen 🙂

Also, I’ll spill my guts regarding how to create a solid Red Hat Enterprise Linux SOE (standard).

In short: “We’ll give you tips about how to assemble your own reliable,
cost-efficient Red Hat Enterprise Linux SOE, including hands-on
technical solutions, processes, and blueprints, all based on real-world
challenges and solutions.”

Get all the details about the session by clicking here.

Red Hat Satellite 6.0: Automatically publish a content view using hammer

Posted on 2015-05-20 12:18:11

If you have a content view that you want to always contain the latest stuff, then you have to automate the process of publishing a new version of the content view and then promoting it to relevant lifecycle environments.

This will likely become easier in Satellite 6.1, as there are a lot of changes going into that version in regards to content views and promotion of content.

Here’s a script that I used to do this.

# Magnus Glantz, open.grieves you-know-what-goes-here, 2015
# Automatically promote a content view from Library to Production
# Assumes 4 life cycle environments, including Library
# Ugly sleep cycles based on waiting times for content view with only two simple filters, adjust as required. Expect up to 30 minutes delay for complex content views.

# Edit below
USER=user
PASS=thepass
CONTENTVIEW_ID=1
# Get contentview id with: hammer -u user -p thepass content-view list --organization Default_Organization

echo "Remove after having edited above variables." ; exit 0

# Publish a new version of the content view, promotes it to Library as well.
hammer -u $USER -p $PASS content-view publish --id $CONTENTVIEW_ID --organization Default_Organization --async

# Wait 13 minutes for content view to complete publish cycle
sleep 800

# Extract version ID of new content view version
VERSION=$(hammer -u $USER -p $PASS content-view version list --content-view-id $CONTENTVIEW_ID|head -4|tail -1|awk '{ print $1 }')

# Promote the new version to Development
hammer -u $USER -p $PASS content-view version promote --content-view-id $CONTENTVIEW_ID --id $VERSION --lifecycle-environment-id 2

# Wait 20 minutes for content view to complete publish cycle
sleep 1200

# Promote the new version to Test
hammer -u $USER -p $PASS content-view version promote --content-view-id $CONTENTVIEW_ID --id $VERSION --lifecycle-environment-id 3

# Wait 20 minutes for content view to complete publish cycle
sleep 1200

# Promote the new version to Production
hammer -u $USER -p $PASS content-view version promote --content-view-id $CONTENTVIEW_ID --id $VERSION --lifecycle-environment-id 4
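The head/tail/awk pipeline in the script simply grabs the ID column from the first data row of hammer's table output (assuming, as the script does, that the newest version is listed first). The same parse in Python 3, run against a mocked-up table (real hammer output has more columns):

```python
# Mocked-up `hammer content-view version list` output; the column
# layout is illustrative.
table = """\
---|--------------|--------
ID | NAME         | VERSION
---|--------------|--------
5  | RHEL7 CV 5.0 | 5.0
4  | RHEL7 CV 4.0 | 4.0
"""

# Line 4 (index 3) is the first data row; field 1 is its ID,
# exactly what head -4 | tail -1 | awk '{ print $1 }' extracts.
version_id = table.splitlines()[3].split()[0]
print(version_id)
```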

RHEL7 things I’ve picked up

Posted on 2015-03-29 11:09:39

Creating a standard for RHEL7 is nearing its end. So here’s some stuff I’ve picked up or been reminded of:

RHEL 6 configuration that also worked in RHEL 7

* /etc/fstab, including nosuid, noexec, nodev parameters
* cron
* sssd, nsswitch and pam to get LDAP integration
* ssh

Things that are a bit different

* Configuring networking and services in kickstart %post is not very
straightforward; at the moment I suggest you move all that into a first boot script
* firewalld.service is the service that creates the iptables rules, so if
you don’t want or need a local firewall, disable that service. The
default rules do not allow everything.
* A lot of services read the hostname of the server from /etc/hostname
* The mount command’s output is much more verbose; to get the old view use ‘mount -t xfs’ etc.
* You cannot restart auditd.service

Active Directory integration for RHEL6.5 and beyond

Posted on 2014-10-21 15:59:19

Here’s how I’ve integrated with Active Directory on RHEL 6.5 and beyond (6.5 was when sssd-ad was introduced). The goal was to get single-sign-on for SSH and to move from an openldap solution to Active Directory running on Windows 2008 R2.

This gets you single sign-on, authentication and lookups via Active Directory using SSSD. Keytab creation is done using Samba, meaning it’s easy to automate the setup during install time.

Here’s the short version for you.

# Active Directory prereqs:

Install Identity Management for UNIX Components on your Active Directory server(s).
Setup your users and groups.

# Packages used:


# yum install krb5-libs krb5-server krb5-workstation oddjob oddjob-mkhomedir openssh openssh-clients openssh-server pam_krb5 samba samba4-libs samba-client samba-common samba-winbind samba-winbind-clients sssd sssd-client

# Configuration used:

# /etc/sssd/sssd.conf:

[sssd]
config_file_version = 2
reconnection_retries = 3
sbus_timeout = 30
services = nss, pam

[nss]
filter_groups = root
filter_users = root,ldap,named,avahi,haldaemon,dbus,radvd,tomcat,radiusd,news,mailman,nscd,gdm,patrol,ctmagent,oracle,sshd,xfs,ntp,hpsmh,postfix
reconnection_retries = 3

[pam]
reconnection_retries = 3

[domain/THEDOMAIN.COM]
case_sensitive = false
id_provider = ad
access_provider = ad

# This is the default as well
ldap_krb5_keytab = /etc/krb5.keytab

# defines user/group schema type
ldap_schema = ad

# using explicit POSIX attributes in the Windows entries
ldap_id_mapping = False

# caching credentials
cache_credentials = true
enumerate = false
entry_cache_timeout = 86400

# performance
ldap_disable_referrals = true

# default settings
default_shell = /bin/bash

# /etc/nsswitch.conf:

passwd: files sss
shadow: files sss
group: files sss

hosts: files dns

bootparams: files

ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
services: files

netgroup: files

publickey: files

automount: files
aliases: files

# /etc/samba/smb.conf:

[global]
workgroup = THEDOMAIN
password server = *
security = ads
kerberos method = secrets and dedicated keytab
dedicated keytab file=/etc/krb5.keytab
client signing = yes
client use spnego = yes
log file = /var/log/samba/%m.log
server string = Samba Server Version %v

# /etc/krb5.conf:

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = THEDOMAIN.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = false

default_domain =


# /etc/pam.d/system-auth-ac:
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required
auth        sufficient nullok try_first_pass
auth        requisite uid >= 500 quiet
auth        sufficient use_first_pass
auth        required
account     required broken_shadow
account     sufficient
account     sufficient uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore]
account     required
password    requisite try_first_pass retry=3
password    sufficient sha512 shadow nullok try_first_pass use_authtok
password    sufficient use_authtok
password    required
session     optional revoke
session     required
session     optional
session     [success=1 default=ignore] service in crond quiet use_uid
session     required
session     optional

# /etc/ssh/sshd_config:
Protocol 2
AddressFamily inet
ServerKeyBits 2048
SyslogFacility AUTHPRIV
PermitRootLogin without-password
MaxAuthTries 3
PasswordAuthentication yes
ChallengeResponseAuthentication no

# Kerberos options enabling single-sign-on
KerberosAuthentication yes
KerberosOrLocalPasswd yes
KerberosTicketCleanup yes
KerberosGetAFSToken no

# GSSAPI options
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes

UsePAM yes
X11Forwarding yes
Subsystem sftp /usr/libexec/openssh/sftp-server

# Script used to join AD:


# Ensure services are running
for SERVICE in winbind smb sssd sshd oddjobd; do
    service $SERVICE restart
done

# How many AD servers do we got?
COUNT=$(host -t srv _kerberos._tcp.$(hostname -d)|awk '{ print $8 }'|sed 's/.com./.com/g'|grep [a-z]|wc -l)

# Pick one of them at random
r=$(($RANDOM % $COUNT))

# Add one, to avoid getting 0.
SELECT=$(echo $r+1|bc)

# Fetch the Nth AD server, where Nth is the random number we got above.
ADSERVER=$(host -t srv _kerberos._tcp.$(hostname -d)|awk '{ print $8 }'|sed 's/.com./.com/g'|grep [a-z]|awk '{ if(NR==n) print $0 }' n=$SELECT)

# PLEASE NOTE: You may want to add the specific location in the OU to join here.
net ads join -w $DOMAIN -U secret:stuff -S $ADSERVER

# Joining a specific location was not an option for me, so I needed to move the object created to a specific place.
# This is likely not the case for you, meaning you can remove the ldapmodify part of this script.
ldapmodify -h $ADSERVER -D "CN=AccountName,CN=Users,DC=$(hostname -d|tr a-z A-Z|sed 's/.COM//'),DC=COM" -w "secret" <<!
dn: CN=$(hostname -s),CN=IWASPUTHERE,DC=$(hostname -d|sed 's/.com//g'),DC=com
changetype: moddn
newrdn: CN=$(hostname -s)
deleteoldrdn: 1
newsuperior: ou=PutAllServersHere,ou=SITE,ou=SOME,ou=OTHER,ou=STUFF,dc=$(hostname -d|sed 's/.com//g'),dc=com
!

# Create a keytab for SSH single-sign-on
net ads keytab create -U secret:stuff

# Shutdown services we don’t need further
service winbind stop
service smb stop

# SSH client command to login using a Kerberos ticket:
$ ssh -K user@host
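The SRV lookup and random server pick in the join script can be illustrated offline. A Python 3 sketch against mocked-up `host -t srv` output (the record names are made up):

```python
import random

# Mocked-up output of: host -t srv _kerberos._tcp.thedomain.com
# Field 8 (the last field) is the target server, with a trailing dot.
host_output = """\
_kerberos._tcp.thedomain.com has SRV record 0 100 88 dc1.thedomain.com.
_kerberos._tcp.thedomain.com has SRV record 0 100 88 dc2.thedomain.com.
_kerberos._tcp.thedomain.com has SRV record 0 100 88 dc3.thedomain.com.
"""

# Take the last field of each line and strip the trailing dot,
# the equivalent of the awk/sed pipeline in the script.
servers = [line.split()[-1].rstrip('.') for line in host_output.splitlines()]

# Pick one at random, as the script does with $RANDOM.
adserver = random.choice(servers)
print(adserver)
```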


Red Hat Summit 2014

Posted on 2014-04-22 13:59:43

Last week I went to Red Hat Summit 2014 in San Francisco. First off, for those of you who couldn’t attend: Click here for the PDFs of all the presentations.

Secondly, here’s a short transcript of the sessions I attended.

* Keynote from Jim Whitehurst, Red Hat CEO
-Exec summary:
The level of disruption seen in IT right now is unprecedented,
meaning there are huge opportunities for companies that can adapt.
The level of innovation in open source communities is far beyond any single company.
If you are looking at truly innovative companies like Facebook, Google,
Twitter etc they are all basing all their infrastructure on Open Source
– because there are no other options.
The challenges in the hybrid
(private+public) cloud environment can not be solved by any single
company (this was echoed by Intel, Cisco, IBM, Dell, HP etc) – so you
need open source solutions where people come together to innovate.

* Keynote from Douglas W. Fisher, Corporate Vice President, Intel
-Exec summary:
Intel x86 CPUs far outperform Power-based CPUs.
Get moving to x86 and Red Hat Enterprise Linux. Red Hat Enterprise
Linux is by far the vendor that makes the MOST use of Intel’s x86
CPUs. Huge commitments from Intel to Linux.

* Keynote from Padmasree Warrior, CTO, Cisco
-Exec summary:
The Internet of things is coming, thanks to open source innovations.
It’s now time to take full advantage of all the connected devices and
start to create truly innovative services.

* Red Hat Enterprise Linux Roadmap I & II
-Exec summary:
This is the default session to go to, it’s where Red Hat presents the
big news for Red Hat Enterprise Linux and all the key engineering
managers are there to present.
Big news:
There will be an upgrade path from RHEL6 to RHEL7, we do not have to reinstall any more.
RHEL7 will include kpatch – enabling us to do online patching of
EVERYTHING, including the kernel.
RHEL7 will include Docker support (Linux containers). A lot of people are whispering that virtual machines
are a thing of the past. Read more below from the Linux containers sessions.
The RHEL 7 release candidate is to be released next week. That means RHEL7 is on schedule and will be out within months.
RHEL7 will feature a new default filesystem called XFS, which scales up
to 500 TB for a single filesystem. That figure, 500 TB, will increase.

* Moving from Red Hat Satellite 5 to 6: A practical guide.
-Exec summary:
There is a large array of support tools to make this happen. Moving to
Red Hat Satellite 6 will enable us to easily create and provision
virtual and physical machines out-of-the-box.
Migration should be fairly simple as we’re already using Puppet and Foreman.

* Linux containers in Red Hat Enterprise Linux 7
-Exec summary:
A new piece of technology called Docker could make virtual machines a thing of the past.
Docker helps you create isolated containers within Red Hat Enterprise Linux 7 that guarantee CPU, memory, IO and network performance.
So, Linux can now deploy applications in a container instead of in a
virtual machine. The container is MUCH less overhead and only runs when
you have something to run.
You can either run containers as stateful
mini-servers or you can start them up just to run a specific thing. When
you’re done executing your application, the container disappears into
an inactive state, meaning it takes 0 resources from your server.

This will enable us to reach completely new levels of hardware
utilization. There are examples where companies run 2000+ applications
on a SINGLE blade server.
This is very interesting stuff and by far the killer app of RHEL7.

* New networking features & tools for Red Hat Enterprise Linux 7
-Exec summary:
Performance wise, RHEL7 will be better in all aspects. Also, there are
new exciting technologies like SYNPROXY that defeat DDoS attacks. It
will be possible to manage all networking using a simple CLI, TUI or
GUI interface. Looks really good and will greatly simplify
administration of host based networking.

* Demystifying systemd: A practical guide
-Exec summary:
In the heart of RHEL7 lies systemd. It will make resource limitation
technologies like cgroups into a thing for everyone to easily consume.
Logging is taken to the next level. Sockets, filesystems, services and
more are all handled.
You can define that a service should always be
running, meaning a HUGE increase in robustness for applications on
Linux. If something dies, systemd will start it up again automatically.
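A minimal unit file of the kind described, with Restart=always giving the automatic restart behavior (all names and paths here are illustrative):

```ini
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/bin/example-daemon
Restart=always

[Install]
WantedBy=multi-user.target
```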

* Red Hat Enterprise Linux 7 file systems: New scale, speed & features
-Exec summary:
A big new feature is that a server will get a simple CLI interface to
manage enterprise SANs. A server will be able to manage its LUNs by
itself. Need more storage? In a single go you can assign a new disk,
format it, put a new filesystem on it and install your application.
The new default filesystem of Red Hat Enterprise Linux 7 is called XFS.
It scales up to 500 TB filesystems (and you can have multiple
filesystems). During the lifetime of RHEL7, XFS is likely to get
certified for even larger filesystems, making it a perfect choice for
handling Big Data.
XFS outperforms most filesystems out there (and
most definitely ext4) on basically all workloads, ranging from 1 billion
small files to 1 big file that is several billion bytes in size. The
logical volume manager now supports thinly provisioned snapshots, which
we can use to get easy rollbacks if application upgrades fail.

* The next-generation firewall in Red Hat Enterprise Linux 7
-Exec summary:
Management of firewall rules on Red Hat Linux servers is going to
become much easier. There is now a common interface called
firewalld that offers a single set of commands to create firewall rules
for all the underlying things like iptables, ip6tables, ebtables, etc.
Firewalld comes with a CLI allowing you to easily create and maintain
complex (10,000+ rules) host based firewalls. Moving into host based
firewalls is key for us being able to scale in the private, public or
hybrid cloud.

* Containers & resource management in Red Hat Enterprise Linux 7
-Exec summary:
A two-hour hands-on experience with Linux containers / Docker, cgroups (the technology that limits resources) and systemd.
Docker, the technology that you use to manage Linux containers, is a
pleasure to work with. The simple CLI interface allows you to spin up
hundreds of containers in no time. Systemd init scripts are a joy to work with, consisting of a very simple Windows-like INI file. Setting up an INI file for a new service is super simple and requires very little work compared to bash based SysV init scripts.

Fedora 18 x86_64, VirtualBox and MacBook Pro

Posted on 2013-02-07 21:06:07

I just tried out Fedora 18 on my MacBook, running my Fedora 18 desktop as a VirtualBox guest. Works great. Time between the Grub screen and the login screen is around 3 seconds (I have an SSD). No issues with horrible performance using Gnome 3’s so-called Gnome Shell as I encountered with Fedora 17 and early VirtualBox 4 releases. No issues with graphics or networking.
To get sound working, follow my simple instructions here.

Some features that raised my eyebrows:

* Storage System Manager (ssm) – All your storage needs met with one tool? I’ve just started to scratch the surface of this tool, but this seems like something I’ve been waiting for a long time. Check it out with
$ sudo yum install system-storage-manager

* Faster ‘yum install’. Yum now seems to multithread when doing downloads/installs of at least some packages.

* /tmp is now located on a tmpfs and will therefore be much, much faster for people who don’t run on SSDs.

* Support for UEFI Secure Boot

* Live snapshots of virtual KVM guests (no need to pause or stop your guests before snapshotting)

* A bunch of cloud related tools, such as Eucalyptus, OpenShift Origin and Heat.

Have a look at the Fedora 18 Release Notes yourself:

Python encryption/decryption script of arbitrary string using a secret of choice and non-interactively

Posted on 2012-11-24 17:14:31

A typical issue when you script is that you end up keeping passwords either in your script or in clear text in some file. Here’s a simple and reasonably secure solution for this. I write “reasonably secure” as the secret provided may not be the best (that depends on you) and is not passphrase protected. But it’s much more secure than using for example base64 or bzip2, as it does require a secret for the encrypted information to be read. That said, if your secret is world readable, your encrypted information is just as exposed.

This script deploys maximum 256-bit AES encrypted data, which is OK by NIST standards. It’s completely non-interactive.

A perhaps better alternative to this script would be:
1) Generate gpg key:
$ gpg --gen-key
2) Decrypt with --batch --passphrase
$ gpg --batch --passphrase "super secret passphrase" -d myfile.gpg 2>/dev/null

The script below was tested on Red Hat Enterprise Linux 6 and 5. To install needed dependencies run:
# yum install python python-crypto

Usage: rcrypt [-s /path/to/secret [-e 'string'|-d 'string']]
-s /path/to/secret File that contains a 16, 24 or 32 byte secret.
-e 'string' Encrypt provided string.
-d 'string' Decrypt provided string.

Note: if you wonder how many bytes your secret is, try: $ wc -c /path/to/file-containing-secret
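Under the hood the script pads plaintext out to the AES block size before encrypting. A quick Python 3 illustration of that padding scheme (the 32-byte block and ‘{‘ pad character are the values used by the snippet the script credits; treat them as assumptions):

```python
# Right-pad a string with '{' up to a multiple of BLOCK_SIZE, the
# same trick the script's pad lambda uses before AES encryption.
BLOCK_SIZE = 32
PADDING = '{'

pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING

padded = pad('test')
print(len(padded))  # 32
```

After decryption the padding is stripped again with rstrip(PADDING), which is why ‘{‘ can’t safely end your plaintext.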

For example:
[me@server ~]$ ./rcrypt -s ./secret -e test
[me@server ~]$ ./rcrypt -s ./secret -d iFm4k687hEWsqN4+zrBYn3RTIABmaVYCe98BsQVbiOI=
[me@server ~]$

Usage of this in a script would perhaps be:


# source username and password
. /credentials

decryptedUser=$(rcrypt -s /home/me/.secret -d $username)
decryptedPassword=$(rcrypt -s /home/me/.secret -d $password)

do_something -u $decryptedUser -p $decryptedPassword

Usage in a python script would rather be using a function, like:

from Crypto.Cipher import AES
import base64

def decrypt(ciphertext):
    # the block size for the cipher object; must be 16, 24, or 32 for AES
    BLOCK_SIZE = 32
    PADDING = '{'
    # one-liner to sufficiently pad the text to be encrypted
    pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING
    # one-liners to encrypt/encode and decrypt/decode a string
    # encrypt with AES, encode with base64
    DecodeAES = lambda c, e: c.decrypt(base64.b64decode(e)).rstrip(PADDING)
    f = open('/root/.mysecretkey', 'r')
    secret = f.read()
    test = len(secret)
    if not (test == 16 or test == 24 or test == 32):
        print "Error: Secret (AES key) must be either 16, 24, or 32 bytes long"
    cipher = AES.new(secret, AES.MODE_CFB)
    decoded = DecodeAES(cipher, ciphertext)
    return decoded

def main():
    f = open('/encrypted/info/here.conf', 'r')
    lines = f.readlines()
    encrypted_str1 = str(lines[0])
    encrypted_str2 = str(lines[1])
    decrypted_str1 = decrypt(encrypted_str1)
    decrypted_str2 = decrypt(encrypted_str2)

Download the script from here.

The script follows below as well; if the indentation gets mangled by the blog, use the download link above instead.

#!/usr/bin/env python
# Author: Magnus Glantz,
# Credits: Most of the code authored by: Code Koala at:
# Copyright (C) 2012 Magnus Glantz
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, version 3 of the
# License.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from Crypto.Cipher import AES
import base64
import os
import getopt
import sys

# the block size for the cipher object; must be 16, 24, or 32 for AES
BLOCK_SIZE = 32
PADDING = '{'

# one-liner to sufficiently pad the text to be encrypted
pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING

# one-liners to encrypt/encode and decrypt/decode a string
# encrypt with AES, encode with base64
EncodeAES = lambda c, s: base64.b64encode(c.encrypt(pad(s)))
DecodeAES = lambda c, e: c.decrypt(base64.b64decode(e)).rstrip(PADDING)

# Print usage
def usage():
    print "Usage: rcrypt [-s /path/to/secret [-e 'string'|-d 'string']]"
    print "-s /path/to/secret File that contains a 16, 24 or 32 byte secret."
    print "-e 'string' Encrypt provided string."
    print "-d 'string' Decrypt provided string."
    print ""
    print "Note: if you wonder how many bytes your secret is, try: $ wc -c /path/to/file-containing-secret"

# Process arguments, create cipher object and print encrypted or decrypted string back.
# Note: I wasn't able to use getopt to fetch the string to decrypt, not sure why. Therefore manual arg/opt handling.
def main():
    if len(sys.argv) > 1 and sys.argv[1] == "-s":
        filename = sys.argv[2]
        f = open(filename, "r")
        secret = f.read()
        test = len(secret)
        if not (test == 16 or test == 24 or test == 32):
            print "Error: Secret (AES key) must be either 16, 24, or 32 bytes long"

        # Create a cipher object, using the provided secret
        cipher = AES.new(secret, AES.MODE_CFB)

        if len(sys.argv) > 4 and sys.argv[3] == "-e":
            encoded = EncodeAES(cipher, sys.argv[4])
            print encoded
        elif len(sys.argv) > 4 and sys.argv[3] == "-d":
            decoded = DecodeAES(cipher, sys.argv[4])
            print decoded

if __name__ == "__main__":
    main()
Jenkins RPM Build Environment

Posted on 2012-11-13 21:07:25

Tired of rpmbuilding manually?
Tired of failing at setting up Koji?
Tired of the dark shell of Mock?

After having searched for years for an easy-to-setup RPM build environment, I’ve found the Hudson fork, Jenkins. Had I realized that it was so easy to set up, I would have started to use it ages ago.

Here’s a quick 1-2-3 setup guide. I’ve only been using Jenkins a couple of hours, so this can surely be done in nicer ways, but this does work.

Use the below instructions to get going. My .spec file won’t work with stuff that needs to get compiled. Go ahead and modify it to fit your purposes.

1) Install a Fedora or RHEL server
2) Download and install the Jenkins RPM from the Jenkins website.
3) Start Jenkins and make it start at boot:
# service jenkins start
# chkconfig jenkins on
4) Install an Apache webserver
# yum install httpd
5) Setup a Subversion server and a repository – Use one of the many guides online, and yes, I know I should be using Git.. 🙂

6) For each RPM that you’re going to package, create a directory / file structure as described below:

7) Cut and paste below script into /usr/bin/buildtherpm and chmod +x /usr/bin/buildtherpm
#!/bin/bash

if [ "$1" == "" ]; then
    echo "Error, missing argument. Usage: $0 <name of jenkins project>."
    exit 1
fi

THEPROJECT=$1

# Clean up and create directories
for dir in BUILD RPMS SOURCES SPECS SRPMS; do
    [[ -d $dir ]] && rm -rf $dir
    mkdir $dir
    if [ $dir == "RPMS" ]; then
        mkdir $dir/noarch
        mkdir $dir/x86_64
        mkdir $dir/i686
    fi
done

# Create a tar ball from what we checked in
rm -rf $(find $THEPROJECT|grep ".svn")

# We’ll do a lazy definition of what to package, only defining top-level directories.
# We have to add the initial / with sed.

echo >>$THEPROJECT.spec

# FIXME: allow for changelogs to be retained
echo "%changelog" >>$THEPROJECT.spec
echo "* $(date +"%a %b %d %Y") Build Service <e@mail.domain>" >>$THEPROJECT.spec
echo "- Build Service automatic build." >>$THEPROJECT.spec



# Create rpm in RPMS/noarch/
rpmbuild --define "_topdir $(pwd)" -ba SPECS/$THEPROJECT.spec

# Move build RPMS to storage area.
[[ -d /var/www/html/buildservice/$THEPROJECT ]] && rm -rf /var/www/html/buildservice/$THEPROJECT
mkdir -p /var/www/html/buildservice/$THEPROJECT
cp -Rp RPMS /var/www/html/buildservice/$THEPROJECT
cp -Rp SRPMS /var/www/html/buildservice/$THEPROJECT
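The %changelog entry the script appends boils down to a date in RPM’s “Day Mon DD YYYY” format plus the packager address. The same line built in Python 3, for reference:

```python
import time

# RPM changelog headers use the "Day Mon DD YYYY" date form,
# which is what the script's date +"%a %b %d %Y" produces.
stamp = time.strftime("%a %b %d %Y")
changelog = "* %s Build Service <e@mail.domain>\n- Build Service automatic build." % stamp
print(changelog)
```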

8) Create your .spec files like such:
Packager: Build Service
Summary: Replace with something that makes sense
Name: my-rpm
Version: 1.0.0
Release: 1
Vendor: You
Group: Development/Tools
BuildRoot: %{_tmppath}/%{name}-%{version}-root
Source0: %{name}.tar.gz
BuildArch: noarch
Requires: bash
License: GPL3
URL: http://my-jenkins-server.localdomain:8080

%description
I didn't replace this text with something that makes sense.

%prep
tar -xvzf %_topdir/SOURCES/my-rpm.tar.gz

%install
mkdir %{buildroot}
cd my-rpm
cp -Rp * %{buildroot}/

%clean
rm -rf %{buildroot}/




9) Create a build job in Jenkins (type: Build a free-style software project). The name of the project must be: my-rpm (same as the name of the RPM, same as the name of the subversion directory where it’s put). That’s just how my scripts work; change it if it doesn’t fit your purposes 🙂

10) Select “Add Build Step” and select “Execute Shell”, then enter the following into the text box:
buildtherpm my-rpm

11) Done. You can now edit your project to build at SVN commits, create nightly builds, etc.
