“It is all mine! All mine! I am DevOps” (…and related bs**t)


This week I held a training session for my teammates called Shot in the alley…WTF to do. Instead of the proverbial getting hit by a bus, I went a little more realistic: what if I get shot in a downtown alley instead? What is my team going to do now? The quick class is one of two things I do to ensure my team knows what it is that I work on and, most importantly, that they know how to do it if I’m not around. The other thing is called “WTF sessions” on a specific discipline, such as WTF is Amazon Redshift.

I am a firm believer, just like everyone at Artifact Uprising, that we should communicate and share as much as possible. As an example, the other day I did a deploy that I had never done before, thanks to one of our dev engineers.

After that session, I reminded myself of why I will never return to very large companies that have had the same folks working on the same things since the Macarena was popular. Those places are always full of folks holding on to knowledge because they fear losing their jobs. The staple phrase “Well, that’s my job security” comes to mind. My silent response always was: “And fuck off with that”. It was silent because those large companies don’t appreciate piercing, blunt statements like that. They spend a lot of money on HR training programs so you don’t say things like that.

I also encourage everyone to document. The silly idea that Agile, lean development processes should have minimal documentation is just that: silly. Take some time, slow Fridays for example, and write down what you did over the week. This will benefit you and others around you. I just checked my wiki space and there are 23 postings that I have published in about 4 months. You can document fast and document in code, but make it clear and available.

The paranoia of “I’m going to lose my job if I share what I do” is unhealthy, both for that person and for the business overall.

Share, share, share.


Back in a kitchen! Provision without a knife.


As many of you who I interact with online or over a drink (or two…three) know, I left Puppet. The company and the people are fantastic and I am very glad for my time there. I now work at Artifact Uprising in Denver, CO. Their products, culture, amazing people, and technology outlook are very attractive to me. Plus, I don’t travel anymore, which means I see my wife and daughter every day.

Working at my new outfit meant a return to a kitchen. I was a former Chef of a very busy infrastructure kitchen some years ago and now I am back with a knife. Boy, was I rusty! Oh my, oh my, have I forgotten my cookbook-making skills! I surely need to read more recipes.

One of my current big projects involves some work with our Chef infrastructure, and I couldn’t believe how stuck I was on the simple task of auto-provisioning. If you are using the open-source version of Chef, you usually use the knife tool to provision machines. While knife is a great tool for managing Chef, I simply couldn’t use it for how dynamically, quickly and, frankly, hipsterly our infrastructure grows, shrinks and moves. I need to not think about a machine coming up and getting provisioned. It has to register by itself.

There are several folks who have written about how to do this using the open-source version of Chef, but none of them worked exactly for my setup. This post will show you, very simply, what you need to do on your client to get it automatically up and running.

In another post I will detail how to build your own Chef development environment, at least the Xuxo way. For now, this applies just to the client.


In a cloud-init script, bootstrap script, etc., script the following commands:

Install the chef-client from Chef.io. It will detect your OS:

curl -L https://omnitruck.chef.io/install.sh | sudo bash

Obtain the validation pem from your server and place it somewhere on your client:

echo 'my_validation_key' > /tmp/my-validator.pem

Create a “First Boot” JSON file required by Chef (save it as /tmp/first-boot.json) and add the role(s) you want the machine to have:

{ "run_list": [ "my-role" ] }

Create the configuration folders:

mkdir -p /etc/chef/trusted_certs

Create the file /etc/chef/client.rb with the following contents (change the URL and validator info):

chef_server_url "https://chef-server/organizations/my_organization"
client_fork true
log_location "/var/log/chef/client.log"
validation_client_name "my-validator"
node_name "this-client-node"
trusted_certs_dir "/etc/chef/trusted_certs"
# Do not crash if a handler is missing / not installed yet
begin
  # require and register any report/exception handlers here
rescue NameError => e
  Chef::Log.error e
end

The trusted_certs_dir setting was key for me to get automatic provisioning going. Obtain the Chef server’s CRT file and place it in that directory.

Finally, run this command on the client to provision and register with your open-source Chef server:

sudo chef-client -j /tmp/first-boot.json --validation_key /tmp/my-validator.pem

Chef will now provision the system and the role will be applied to the node. Script that and you don’t need to be on your Chef workstation provisioning via knife!
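Putting it all together, the whole client-side sequence can be collected into one bootstrap script for cloud-init or user data. This is a minimal sketch, assuming the role name and validation key arrive as environment variables (CHEF_DIR, ROLE and VALIDATION_KEY are my names, not Chef’s); the network-dependent steps are left commented at the end:

```shell
#!/bin/sh
# Bootstrap sketch tying the steps above together.
set -e

CHEF_DIR="${CHEF_DIR:-/etc/chef}"
ROLE="${ROLE:-my-role}"

# Configuration folders, including the all-important trusted_certs dir
mkdir -p "$CHEF_DIR/trusted_certs"

# First-boot run list consumed by chef-client -j
cat > /tmp/first-boot.json <<EOF
{ "run_list": [ "$ROLE" ] }
EOF

# Validation key delivered via user data (placeholder content here)
echo "${VALIDATION_KEY:-my_validation_key}" > /tmp/my-validator.pem

echo "bootstrap files staged in $CHEF_DIR and /tmp"
# Final steps need network access and a reachable Chef server:
# curl -L https://omnitruck.chef.io/install.sh | sudo bash
# sudo chef-client -j /tmp/first-boot.json --validation_key /tmp/my-validator.pem
```

Drop the script into your instance user data and every new machine stages its own registration material before the first chef-client run.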

Thanks for reading.

Air Xuxo lands


This post contains my very personal statements. They do not represent those of Puppet or prior employers.

I write from my last flight with Puppet. It was an enjoyable year with a fantastic company. Puppet will be one of those places remaining alive within my best workplace memories. Amazing people, wonderful talent and a culture that should be an example for every business. A touch of sadness is palpable within me as I bid farewell.

Puppet is among a list of companies I have traveled for. Some I traveled extensively for and others a fair amount. When I started flying for work many years ago, I posted on Facebook as Air Xuxo with a dash of cultist pride. I illustrated postcards that I published with hints related to cities and countries. Then, after a long series of treks, I simply dropped my laptop, said goodbye and went underground. I even deleted all the pictures I had taken and removed myself from social networks. That period on the ground lasted about 3 1/2 years.

That time found me adjusting to new places to work, re-introducing myself to circles of people I saw every day and realizing that business travel can be addictive and you have to fight it. Furthermore, I also went through some changes to improve my health and heal the damage I had done to my relationship at home. The wounds had been building, but I was away too much to nurse them back to health. Time at home also saw the birth of my daughter. To be honest, I finally became an adult in my late 30s.

After that period, I felt comfortable trying a position that had a minimal, by historical comparison, amount of travel. Puppet offered that great balance and after consultation with my wife, I went for it. Mountain area was my only region to cover and that was a much better way of “air living” than in prior stages of my career. I had fun, I enjoyed the short trips and learned a ton from everyone I met. It also allowed me to build a community I am very proud of. Then, my daughter turned 3 and some time after I started hearing these phrases: “I miss you all the time” and “please come back soon”.

When you hear such statements from a small toddler that is still learning everything around her, it truly melts you. It makes you think. First you think about how she feels and then about yourself. Realizing that I am not there to teach her stuff, unable to play with her several periods in a month and not waking her up in the mornings made me feel selfish, detached and careless about the real importance of life. I also thought of all the hard days my wife has had handling everything while I am away. About 8 years ago, she drove herself to the hospital and ended up with an emergency appendectomy because I was away for work and stranded due to snow! That’s just one example.

Some of you who have done gigs like these must think: why not bring your family on trips? Because I am not there either. Work trips are a series of endless meetings with happy hours and just some downtime at night, probably still slaving away in front of a laptop screen to cram in work. Ask that question to my wife instead, who spent a week in São Paulo and barely saw me.

In between the countless boarding pass scans and walks across airport terminals I thought about this type of life (a subculture, really). I wondered how much business travel, and its associated expenses, could be trimmed back if leadership finally embraced our hyper-connected culture. Maybe one visit in person, the next 6 or 7 virtual. Everybody wins: companies, employees and families. Why do people do this? Why did I do it? Why risk being in a metal container 30,000+ ft above the ground, loaded with fuel, so often? That metal container can fail at any time, and you never see your family and friends again. It is a risk we business travelers have accepted but rarely seem to think about.

During my first grounding, I came across a LinkedIn article that resonated with me. I cannot find it now to quote, but it was written by a business traveler who said he did it for his family. That all those missed important dates were for his family. An especially striking statement claimed he did it for the future of his children. There is no worth in that when you have completely missed important moments and experiences on their path to that future. Financial reward and title seem unimportant when you have to see your child singing for the first time through a text with a link.

Farewell to the skies and reward points. It has been a great time and set of experiences. From now on I will only travel for vacations, the 3 of us together.




Puppet Enterprise Orchestrator: A Practical Guide


Oh wow! Just when you got really good at deploying VMs, configuring and installing stuff on them, someone walks into your area and asks: “Hey!, can you use Puppet to deploy multiple nodes in order and install app stacks on them?”. If you are in Denver, your answer might be: “Let me check with Xuxo”. Well, pretend you are in Denver as I will show you how to do that task by deploying a Python Flask app that needs a MongoDB database, plus some ideas to automate these deployments further.

After doing this a couple of times and reviewing the documentation, I thought about splitting this post into two parts as I combine several concepts. Then I thought a bit more and decided to give you everything in one long post. So, grab a coffee if it’s morning or a beer if evening (or lunch time if you are in Colorado)…this is a long post.

Components and knowledge requirements

  • Puppet Enterprise 2015.x.x or higher. There is an open-source guide out there, :). The author will surely hit me up on Twitter later, but it works differently.
  • Understanding of hiera. Go to Puppet’s docs on it or follow my minimalist guide.
  • Understanding of multi-tier applications. DB tier, application tier.

Flow Architecture

I will describe how to implement orchestration as close to operations as possible. This means that the request for a new stack will come from an external system and the host and stack information will be retrieved rather than hardcoded in Puppet manifests as it is expected to change. While I will use minimal values, you will see that the input data can grow and become as fine-grained as you want it.


The illustration above shows how a user would request a ‘stack’; the new host(s) information will be stored in CouchDB, standing in for a CMDB or host-information database. Once that information is provided, an API call can trigger an orchestration job in Puppet and the build-out will begin. Also in the diagram, Puppet will retrieve the values for credentials and database info from a key/value store. I use Consul and recommend Vault. Puppet will validate all objects and deploy the nodes in order. When the process completes, Puppet returns a report URL with a job ID that can be tracked elsewhere to report completion to the requester.

Now that we know what we are doing, let’s begin. Grab the second cup or second beer.

Hiera setup

I am taking this post to also show you how to extend Hiera’s capabilities. We will be retrieving values from two places, CouchDB and Consul. For that we need to add two new backends to hiera: hiera-http (for CouchDB) and hiera-consul.

Once you install them, we need to move those backend providers to a new location, as we are working with Puppet Enterprise, not open source. Copy the provider .rb files to:


Now let’s modify our hiera.yaml file (below is my actual config) so we can use CouchDB and Consul. The http and consul sections are the changes; point each :host: at your CouchDB and Consul servers:

---
:backends:
  - yaml
  - json
  - http
  - consul

:yaml:
  :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"

:json:
  :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"

:http:
  :host: couchdb.host
  :port: 5984
  :output: json
  :failure: graceful
  :paths:
    - /hiera/%{clientcert}
    - /hiera/%{environment}
    - /hiera/common

:consul:
  :host: consul.host
  :port: 8500
  :paths:
    - /v1/kv/hiera/common

:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "global"
  - "common"
  - "aws"
  - "stiglinux"
  - "etcd"

Restart pe-puppetserver to apply the new backends and configuration:

systemctl restart pe-puppetserver

CouchDB and Consul setup

You must now be on beer #1 if coffee is done or beer #3. I will not walk you through the installations of CouchDB and Consul. Follow the vendor guides as they are pretty good. BTW, I host them on separate VMs. In this step, we will add some values to those two stores.

CouchDB and Consul have great REST APIs and UIs that can be used to get our data in and out of them. On Couch we will create a document that mimics the posted stack request:

Create DB for hiera:

curl -X PUT http://couchdb.host:5984/hiera

Add document:

curl -X PUT http://couchdb.host:5984/hiera/common -H \
'Content-Type: application/json' -d \
'{
  "AppName": "pyflaskapp",
  "DBServerName": "db-node.example.com",
  "AppServerName": "app-node.example.com",
  "DBServerReady": "ready",
  "AppServerReady": "ready"
}'

I prettied the text a bit for readability (the app and host names above are placeholders for your own), but you can see how I labeled each server we will be orchestrating as DB and App units. The ‘ready‘ states are purely optional, but handy as you will see later. Also, notice how the database and document follow the paths in the hiera.yaml http backend.

Log in to the Consul server and create key/value objects for hiera:

consul kv put hiera/common/dbuser admin
consul kv put hiera/common/dbport 27017
consul kv put hiera/common/dbpass admin

As you can see, you can put as many things as you want in there. It doesn’t necessarily mean you have to use them. The paths are reflected on the consul section of our hiera.yaml.
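If you want to double-check what the hiera consul backend will see, read the values back. A quick sketch (consul.host stands in for wherever your Consul server lives): `consul kv get` prints the decoded value, while the raw HTTP API on the /v1/kv path returns JSON with the value base64-encoded.

```shell
# On the Consul server:
#   consul kv get hiera/common/dbuser         # prints: admin
#   curl -s http://consul.host:8500/v1/kv/hiera/common/dbuser
# The API's "Value" field is base64; "YWRtaW4=" is base64 for "admin":
echo "YWRtaW4=" | base64 -d   # prints: admin
```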

The Puppet manifests

On to beer #4 or #5…

Now we have set up a good portion of the infrastructure that will support a request for a stack. It is time to dive into the Puppet piece of this. We will begin by coding our application stack. There are new constructs in the Puppet 4 language to achieve this.

Create the work directory structure:

mkdir -p pyflaskapp/{manifests,templates,files,lib}
mkdir -p pyflaskapp/lib/puppet/type


First, we need to create a small capability (or interface) to share our database information with our application node. Sharing data is the core of orchestration inside Puppet. Create the file nosql.rb inside pyflaskapp/lib/puppet/type with this content:

Puppet::Type.newtype :nosql, :is_capability => true do
 newparam :name, :is_namevar => true
 newparam :user
 newparam :password
 newparam :port
 newparam :host
 newparam :database
end

Our next step is to create our database manifest that will export these values to the orchestrator. The name of the file is on the first line:

# pyflaskapp/manifests/db.pp
define pyflaskapp::db(
  $db_user,
  $db_password,
  $host     = $::fqdn,
  $port     = 27017,
  $database = $name,
) {
  class {'::mongodb::globals':
    manage_package_repo => true,
    bind_ip             => '',
  } ->
  class {'::mongodb::client': } ->
  class {'::mongodb::server': } ->

  mongodb::db {$database:
    user     => $db_user,
    password => $db_password,
  }
}

Pyflaskapp::Db produces Nosql {
  user     => $db_user,
  password => $db_password,
  host     => $host,
  database => $database,
  port     => $port,
}
To achieve orchestration, we are using a new block in our manifests. There are a few new things here that we need to understand.

define is our entry point in these manifests, and the parameters between the parentheses tell us which data it needs. It has long been available in the language and it is essential for these jobs.

The last block is new and very important. Here is where we are stating that this DB module will produce or make available the stated information: user, password, host, database, port.

Our DB tier makes this available for our app tier to know where the resources to use are.

Now we will make our app manifest. This will build our flask application:

# pyflaskapp/manifests/app.pp
define pyflaskapp::app(
  $db_name,
  $db_host,
  $db_port,
  $db_user,
  $db_password,
) {
  $pippackages = ['flask', 'pymongo']

  package {$pippackages:
    ensure   => 'installed',
    provider => 'pip',
  }

  file {'/flask_app':
    ensure => 'directory',
    mode   => '0775',
  }

  file {'/flask_app/templates':
    ensure => 'directory',
    mode   => '0775',
  }

  file {'/flask_app/index.py':
    ensure  => present,
    content => template('pyflaskapp/index.py.erb'),
  }

  file {'/flask_app/index.wsgi':
    ensure => present,
    source => 'puppet:///modules/pyflaskapp/index.wsgi',
  }

  file {'/flask_app/templates/index.html':
    ensure => present,
    source => 'puppet:///modules/pyflaskapp/index.html',
  }

  exec {'run_me':
    path    => ['/usr/bin', '/bin', '/sbin', '/usr/local/bin'],
    command => "python index.py &",
    cwd     => "/flask_app",
    unless  => "/usr/bin/test -f /flask_app/.running.txt",
  }

  file {'/flask_app/.running.txt':
    ensure  => file,
    content => "Running flask instance",
  }
}

Pyflaskapp::App consumes Nosql {
  db_name     => $database,
  db_host     => $host,
  db_port     => $port,
  db_user     => $user,
  db_password => $password,
}

Notice again the last block. This time we consume what the DB manifest produced. To use some of the values we receive from the database piece of the orchestration job, I generate the flask start file from a template. In this fashion, we can deploy as many unique instances of our application as we want:

# pyflaskapp/templates/index.py.erb
from flask import Flask, render_template, request, redirect

import os
from pymongo import MongoClient

def connect():
# Substitute the 5 pieces of information you got when creating
# the Mongo DB Database (underlined in red in the screenshots)
# Obviously, do not store your password as plaintext in practice
 connection = MongoClient("<%= @db_host -%>",27017)
 handle = connection["<%= @db_name -%>"]
 handle.authenticate("<%= @db_user -%>","<%= @db_password -%>")
 return handle

app = Flask(__name__)
handle = connect()

# Bind our index page to both www.domain.com/
#and www.domain.com/index
@app.route("/index" ,methods=['GET'])
@app.route("/", methods=['GET'])
def index():
 userinputs = [x for x in handle.mycollection.find()]
 return render_template('index.html', userinputs=userinputs)

@app.route("/write", methods=['POST'])
def write():
 userinput = request.form.get("userinput")
 oid = handle.mycollection.insert({"message":userinput})
 return redirect ("/")

@app.route("/deleteall", methods=['GET'])
def deleteall():
 return redirect ("/")

# Remove the "debug=True" for production
if __name__ == '__main__':
 # Bind to PORT if defined, otherwise default to 5000.
 port = int(os.environ.get('PORT', 5000))

app.run(host='', port=port, debug=True)

Finally, our module needs to bring this all together. We do this in our init.pp:

# pyflaskapp/manifests/init.pp
application pyflaskapp(
  String $db_user,
  String $db_password,
  String $host,
  $port = 27017,
) {
  pyflaskapp::db { $name:
    db_user     => $db_user,
    db_password => $db_password,
    host        => $host,
    port        => $port,
    export      => Nosql[$name],
  }

  pyflaskapp::app { $name:
    consume => Nosql[$name],
  }
}

The entry point here is the word application. It defines our stack and its components. Notice the export and consume relationship. We are almost ready to trigger this job.

Orchestration job

Probably this is the last beer you will have on your desk as you work through this. It is all down to site.pp now. Just as you are used to defining nodes in that main file, now we define a site, our stack building steps and which nodes get what! Add this to site.pp:


site {
     # get AppName from CouchDB's request
     $name = hiera('AppName')

     # get values from Consul and CouchDB to fulfill request
     pyflaskapp { $name:
          db_user     => hiera('dbuser'),
          db_password => hiera('dbpass'),
          host        => hiera('DBServerName'),
          nodes       => {
                Node[hiera('DBServerName')]  => [Pyflaskapp::Db[$name]],
                Node[hiera('AppServerName')] => [Pyflaskapp::App[$name]],
          },
     }
}

Let’s run the job!

Running orchestrator

Orchestrator is a tool within Puppet Enterprise to accomplish these multi node stack deployments. It is available via REST API with secure token authentication.
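As a sketch of that REST path: the same deploy can be kicked off with curl. The endpoint and port (/orchestrator/v1/command/deploy on 8143) are my reading of the PE orchestrator API docs of that era, and the host and token location are placeholders, so verify against your PE version:

```shell
# Request body for the orchestrator deploy command
PAYLOAD='{"environment": "production", "application": "Pyflaskapp"}'
echo "$PAYLOAD"

# With a live PE master and an RBAC token (obtained via puppet access login):
# TOKEN=$(cat ~/.puppetlabs/token)
# curl -k -X POST "https://puppet-master:8143/orchestrator/v1/command/deploy" \
#   -H "X-Authentication: $TOKEN" -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

This is what lets the external request system from the flow architecture trigger jobs without anyone sitting at a workstation.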

The tool has two main parts. The first I want to show is the command ‘puppet app show‘. It works as a job plan that you can review: it checks that all dependencies are met and the node information looks good, and shows the order in which things will run:


I show the image because it is actually color coded. If the plan review looks OK, we can go ahead and run the job. If one of the items does not pass validation, this tool will let you know. I added to my site.pp a conditional that would only run a job if all nodes are on a ‘ready‘ state. That way, I protect the dependencies even further:

# conditional block
 if (hiera('AppServerReady') == "not ready") or (hiera('DBServerReady') == "not ready") {
   fail("One of the servers is not ready")
 }

To run the job, the command is as follows:

puppet job run --application Pyflaskapp --environment production

As you can see, we can apply a job to a specific environment also. Output is also color coded:


Our multi-tier stack is now ready for use. It is a flask app that I took from somewhere on the web and modified along the way:

Flask node:

Mongo node:

And there you have it! A full stack deployment with Puppet!

Thanks for reading.


A Quick Hit of Hash…icorp!


Alright! This is going to be one of the quickest posts ever! Why? Because what we are going to do is ridiculously simple yet powerful! We will build a 2-node Nomad distributed scheduler to run applications on. Sure, there are Kubernetes, Mesos, etc., but…can you do it in about 10 minutes and with single binaries? Ah, the elegance of HashiCorp!

What you need

  • 2 VMs and Consul installed somewhere. I always use RHEL or Ubuntu for my VMs.
  • Consul agent
  • Nomad


My nodes are called: nomad0.puppet.xuxo and nomad1.puppet.xuxo. We will make nomad0 our server and nomad1 our client. You can scale up and cluster as much as you want! It is very quick and simple to add.

On each of the nodes, download and place Consul agent and nomad:

wget https://releases.hashicorp.com/consul/0.7.2/consul_0.7.2_linux_amd64.zip
wget https://releases.hashicorp.com/nomad/0.5.2/nomad_0.5.2_linux_amd64.zip
unzip nomad_0.5.2_linux_amd64.zip
unzip consul_0.7.2_linux_amd64.zip
cp nomad /usr/bin/
cp consul /usr/bin/

Create a config file (/etc/consul/config.json) for the Consul agent on each node. Set advertise_addr and bind_addr to that node’s IP, node_name to that node’s name, and datacenter to your Consul datacenter:

{
    "advertise_addr": "",
    "bind_addr": "",
    "datacenter": "xuxodrome-west",
    "node_name": "nomad0"
}

Create a config file (/etc/nomad.d/server.hcl) for nomad0 (our server):


# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/opt/nomad0"

# Enable the server
server {
  enabled = true

  # Self-elect
  bootstrap_expect = 1
}

Create a config file (/etc/nomad.d/client.hcl) for nomad1 (our client):

datacenter = "xuxodrome-west"

client {
 enabled = true

leave_on_terminate = true

Optionally, you can create a systemd unit file (for example, /etc/systemd/system/nomad.service) to manage the nomad service:

[Unit]
Description=Nomad agent

[Service]
ExecStart=/usr/bin/nomad agent -config /etc/nomad.d
Restart=on-failure

[Install]
WantedBy=multi-user.target

Alright, we are ready! Let’s start everything and verify:

On server node run:

consul agent -data-dir=/opt/consul -node=nomad0.puppet.xuxo \
 -bind= -config-dir=/etc/consul &


systemctl start nomad (if you did the systemd service file)

On client node run the same commands in the same order but changing the -node option to the client name.

Verify cluster memberships by running a check on any consul member:

root@nomad0:~# consul members
Node                Address  Status  Type    Build  Protocol  DC
consul0                      alive   server  0.7.2  2         xuxodrome-west
nomad0.puppet.xuxo           alive   client  0.7.2  2         xuxodrome-west
nomad1.puppet.xuxo           alive   client  0.7.2  2         xuxodrome-west

Verify nomad cluster by running this command on our nomad server (nomad0):

root@nomad0:~# nomad node-status
ID        DC              Name                Class   Drain  Status
e1be248c  xuxodrome-west  nomad1.puppet.xuxo  <none>  false  ready

We are done! But what fun is it if our scheduler is not running anything? None. Let’s create a mongo container job then.

On nomad0, our server, create a job file:


job "mongo" {
        datacenters = ["xuxodrome-west"]
        type = "service"

        update {
           stagger = "10s" 
           max_parallel = 1

        group "cache" {
           count = 1
           restart {
          attempts = 10
           interval = "5m"
           delay = "25s"
           mode = "delay"

         ephemeral_disk {
              size = 300

         task "mongo" {
               driver = "docker"

         config {
               image = "mongo"
               port_map {
                    db = 27017

          resources {
                  cpu = 500 # 500 MHz
                  memory = 256 # 256MB
         network {
                  mbits = 10
                  port "db" {}

            service {
                 name = "global-mongodb-check"
                 tags = ["global", "cache"]
                 port = "db"
                 check {
                      name = "alive"
                      type = "tcp"
                      interval = "10s"
                      timeout = "2s"


Start the job:

nomad run mongo.nomad

Verify job after some seconds:

nomad status mongo

ID          = mongo
Name        = mongo
Type        = service
Priority    = 50
Datacenters = xuxodrome-west
Status      = running
Periodic    = false

Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         1        0       0         0

ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
230d2ec2  ff77cbff  e1be248c  cache       run      running  12/27/16 21:20:32 UTC

Check Consul and now our nomad cluster is alive and the mongo service is available:


Have fun!

Planes, Trains, and Ruby for real


As I sit on a plane going to my next DevOps adventure, I write this little nugget about config management platforms. Why? One, I have time. Two, a little Ruby/Puppet DSL (Domain Specific Language) deep-dive I had some weeks ago made me think about this.

Recently, I had to explain to a good bunch of folks why Puppet uses a DSL. Not only Puppet, but any other fine config management tool like it. Explaining a DSL was a bit harder than I thought, so I decided to do a “Why Ruby?” presentation. During the presentation, I did the unthinkable: I wrote Ruby! It was a simple exercise: how many lines of Ruby vs. a DSL does it take to perform a specific task? The task was to install the Apache server. I won’t go into detail about it…but it was roughly 3 lines of DSL vs. 74+ in a general-purpose programming language, before counting a lot of the dependencies.

However, that little exercise made me think: “Hey, let me build a small tool in Ruby to show how installing a package works and have the plumbing to operate like a Puppet, Chef, or Salt platform”. Hence, Sysville, as I call it, will be explained on this post.

Sysville is a simple concept. It is a small “neighborhood” of systems that get “mail” delivered and/or returned to a neighborhood post office to act upon. A main hub for a neighborhood of servers.

For this concept and exercise, I decided to setup a simple RabbitMQ message queue and a “post office” server. If I expand on it, there will be future posts. As usual on my blog, let’s begin.

The Architecture

Below is the basic logical architecture of our application:


Yes!, we have a MongoDB piece!

The Setup

Build at least 2 CentOS VMs: one will be the post office, the other a node or client.

The RabbitMQ piece

Install and configure RabbitMQ per this guide on the main server that will be the post office.

The MongoDB piece

Install and configure MongoDB on the same server that acts as the post office per this guide.

Now, you should have a running RabbitMQ installation and a MongoDB to store our stuff, whatever that stuff happens to be. The intent is to let you be creative and come up with cool things to do. For now, we will capture install records that serve as an audit trail. This is one of the main reasons companies look for platforms like Puppet.

The Post Office

Our post office is a small 100% Ruby middleware that posts messages to RabbitMQ to be retrieved by a node.

The code

First, let’s create a configuration YAML file to store our values. This is very similar to how Puppet abstracts common values away from the actual infrastructure code.

Create a folder called sysville and inside, a folder called config:

mkdir -p sysville/config

Inside the config folder, create the file rabbit_config.yaml and populate it, replacing the values with your own:

# RabbitMQ values
mq_user: admin
mq_pass: admin
mq_server: ls0

info_channel: info
notification_channel: notify
provision_channel: provision
init_channel: init
enforce_channel: enforce
status_channel: status

# Mongo values
mongo_host: ls0
mongo_port: 27017
db: sysville

In my setup, ls0 is my main server. Replace with your targeted host server.

On both servers, install the Bunny gem (the post office server also needs the mongo gem):

gem install bunny mongo

Now, at the root of the sysville directory, we create post_office.rb. Read the comments on the code to understand what all the pieces do:

#!/usr/bin/env ruby
# encoding: utf-8

# Libraries required. Yaml is included. You will need to install bunny
# and mongo ( gem install bunny && gem install mongo )
require 'bunny'
require 'yaml'
require 'date'
require 'mongo'

class Postoffice

 # load configs to use across the methods
 fn = File.dirname(File.expand_path(__FILE__)) + '/config/rabbit_config.yaml'
 config = YAML.load_file(fn)

 # export common variables
 @@datetime = DateTime.now()

 # export the connection variables
 @@host = config['mq_server']
 @@mq_user = config['mq_user']
 @@mq_pass = config['mq_pass']

 # export the channels to be created/used
 @@info = config['info_channel']
 @@notif = config['notification_channel']
 @@provi = config['provision_channel']
 @@init = config['init_channel']
 @@enfo = config['enforce_channel']
 @@stat = config['status_channel']

 # mongo database values
 @@db = config['db']
 @@mongo_host = config['mongo_host']

 # export connection to RabbitMQ
 @@conn = Bunny.new(:hostname => @@host,
                    :user => @@mq_user,
                    :password => @@mq_pass)

 # export connection to MongoDB
 @@db_conn = Mongo::Client.new([ "#{@@mongo_host}:27017" ], :database => "#{@@db}")

 def initialize()
 end

 # define methods to use by server and clients

 def request_status(hostname)
  # open connection to MQ
  @@conn.start

  # generate a random message ID
  id = rand(0...1000000)
  type = "PING_REQUEST"
  message = type + "," + hostname + "," + String(id) + "," + String(@@datetime)

  # create channel to post messages
  ch = @@conn.create_channel
  q = ch.queue(@@stat)
  ch.default_exchange.publish(message, :routing_key => q.name)

  puts " [x] Sent Status Request to " + hostname

  # place record on database
  collection = @@db_conn[:status]
  doc = { type: type, client: hostname, msg_id: id, time: @@datetime }
  result = collection.insert_one(doc)
  puts result.n
 end

 def install_parcels(parcel, hostname)
  @@conn.start

  id = rand(0...1000000)
  type = "INSTALL_REQUEST"
  message = type + "," + parcel + "," + String(id) + "," + String(@@datetime)

  ch = @@conn.create_channel
  q = ch.queue(@@provi)
  ch.default_exchange.publish(message, :routing_key => q.name)

  puts " [x] Sent Installation Request for " + parcel + " to " + hostname

  collection = @@db_conn[:provision]
  doc = { type: type, package: parcel, client: hostname, msg_id: id, time: @@datetime }
  result = collection.insert_one(doc)
  puts result.n
 end

 def remove_parcels(parcel, hostname)
  @@conn.start

  id = rand(0...1000000)
  type = "REMOVE_REQUEST"
  message = type + "," + parcel + "," + String(id) + "," + String(@@datetime)

  ch = @@conn.create_channel
  q = ch.queue(@@provi)
  ch.default_exchange.publish(message, :routing_key => q.name)

  puts " [x] Sent Removal Request for " + parcel + " to " + hostname

  collection = @@db_conn[:provision]
  doc = { type: type, package: parcel, client: hostname, msg_id: id, time: @@datetime }
  result = collection.insert_one(doc)
  puts result.n
 end

end

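For reference, here is what config/rabbit_config.yaml might look like. The key names match what post_office.rb reads; the host names and credentials are illustrative assumptions (mirroring the ls0/admin values used later in domicile.rb), not values from the original repo:

```yaml
# Hypothetical example values -- adjust for your environment
mq_server: ls0
mq_user: admin
mq_pass: admin

info_channel: info
notification_channel: notification
provision_channel: provision
init_channel: init
enforce_channel: enforce
status_channel: status

db: sysville
mongo_host: ls0
```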
At the root of sysville, create a small script to send messages. Let's call it try.rb, since I don't have a more original name for it:

require "./post_office"

d = Postoffice.new()
d.install_parcels("httpd", "ls1")

The file above imports our Postoffice class and passes two values to the install method: httpd and ls1. This tells the post office to please send a message to host ls1 that it needs to install httpd. ls1 is your client, or node.
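Since every message the post office publishes is a plain comma-separated string of "TYPE,payload,message_id,timestamp", the wire format can be sketched in a few lines of standalone Ruby. The helper names here (build_message, parse_message) are illustrative, not part of the repo:

```ruby
require 'date'

# Build the "TYPE,payload,id,timestamp" string the post office publishes.
def build_message(type, payload, id, time)
  [type, payload, id.to_s, time.to_s].join(',')
end

# Split it back apart, the way the domicile consumer does with body.split(',')
def parse_message(body)
  type, payload, id, time = body.split(',')
  { type: type, payload: payload, msg_id: Integer(id), time: time }
end

msg = build_message('INSTALL_REQUEST', 'httpd', 316469, DateTime.now)
parsed = parse_message(msg)
puts parsed[:type]   # the consumer branches on this field
```

Because the timestamp from DateTime#to_s contains no commas, a naive split is enough here; a real system would want a structured payload such as JSON.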

The Domicile

Following the neighborhood post office analogy, we now treat our client node as a home, or domicile, which receives mail: our messages.

The code

Make a directory called house and inside create the file domicile.rb with this content:

#!/usr/bin/env ruby
# encoding: utf-8

require "bunny"

conn = Bunny.new(:hostname => "ls0", :user => "admin", :password => "admin")
conn.start

ch = conn.create_channel
q = ch.queue("provision")

puts " [*] Waiting for messages in #{q.name}. To exit press CTRL+C"
q.subscribe(:block => true) do |delivery_info, properties, body|
  res = body.split(',')
  req = res[0]
  bin = res[1]

  puts " [x] Received #{body}"

  # branch on the request type and hand the work to yum in a child process
  if req == "INSTALL_REQUEST"
    install_job = fork do
      puts "I am an install request"
      exec "yum install #{bin} -y"
    end
  elsif req == "REMOVE_REQUEST"
    remove_job = fork do
      puts "I am a removal request"
      exec "yum erase #{bin} -y"
    end
  end
end

You should now be ready to test.

Try it!

Go to your Post Office instance, ls0, navigate to our sysville folder and run the following:

ruby try.rb

The output should look something like this:

[root@ls0 sysville]# ruby try.rb
D, [2016-10-17T13:38:24.600211 #2304] DEBUG -- : MONGODB | Adding ls0:27017 to the cluster.
 [x] Sent Installation Request for httpd to ls1
D, [2016-10-17T13:38:24.619113 #2304] DEBUG -- : MONGODB | ls0:27017 | sysville.insert | STARTED | {"insert"=>"provision", "documents"=>[{:type=>"INSTALL_REQUEST", :package=>"httpd", :client=>"ls1", :msg_id=>316469, :time=>#<DateTime: 2016-10-17T13:38:24+00:00 ((2457679j,49104s,599731872n),+0s,2299161j)>, :_id=>BSON::ObjectId('5804d450ec0c7b090000d...
D, [2016-10-17T13:38:24.620682 #2304] DEBUG -- : MONGODB | ls0:27017 | sysville.insert | SUCCEEDED | 0.001470799s

Now the request to install has been posted to RabbitMQ, waiting to be picked up. We have also created an audit trail by inserting a record in the database for this install.

Now, go to our client node, ls1:

ruby domicile.rb


[root@ls1 sysville]# ruby domicile.rb
 [*] Waiting for messages in provision. To exit press CTRL+C
 [x] Received INSTALL_REQUEST,httpd,316469,2016-10-17T13:38:24+00:00
I am an install request
Loaded plugins: fastestmirror
Loaded plugins: fastestmirror

Now your node has httpd, since it was told to install it.


Platforms like Puppet use a DSL to make the process above a lot slimmer and simpler, among many other things. That way, you don't have to write all this code for a simple installation. Applications like these also come with everything integrated, such as message queues, so you don't have to spend time figuring out how to wire it all together.
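For comparison, the same install request in Puppet's DSL is just a resource declaration. This is a sketch using Puppet's standard package type, not code from the sysville repo:

```puppet
# Declare the desired state; Puppet's agent and transport replace
# all of the queueing and forking code we wrote above.
package { 'httpd':
  ensure => installed,
}
```

Everything else, including delivery, idempotence, and reporting, is handled by the platform.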

Feel free to download the repo for this exercise and play with it. Also, add stuff to it and have fun!

Nah-ah!…Shut up!…You don’t say!…OMG, you do Windows!!!


In my adventures around the Mountain area spreading the good word of DevOps and making IT shops hip, I find interesting things…surprising things. It is even better when I can be the one giving the surprise with a simple statement: "Yes, we work with Windows really well". Honestly, Windows has made up a good percentage of my conversations in recent months.

There are two key reasons, from my perspective, why that simple statement carries significant impact and why it comes up so often:

  • Microsoft has become cool again
  • Traditional automation and configuration-management platforms that cater to Windows are missing some of the extra-mile functionality that modern, faster organizations need.

I am a UNIX child. Some of my early interactions with operating systems were with Sun, SGI, and IBM gear: the big-iron monsters that took up significant space in data centers, which looked like enclosed, clean quarries churning out application data. Back then, Microsoft had DOS, Windows for Workgroups, and Windows 95. The latter, some of us PowerPC kiddies labeled Microsoft's MacOS 1.0. Microsoft was also seen as driven to take over the world under an evil, conspiratorial plan. Simply put, it wasn't cool to be into what MS was doing.

IT then spent a good number of years witnessing the rise of Linux in the data center, displacing many of the fridge-sized, expensive big-UNIX machines, along with the forceful wind of change that open-source software brought. Those events, among many others, gave rise to tools for managing the growth of systems and for providing assurance and integrity in the servers that now ran our businesses, or published cute cat pictures on the interwebs! Those tools matured and grew thanks to their openness and the strong communities that actively contributed pieces to them. Puppet, for example, quickly became a mainstay and a necessary tool across the IT landscape.

The automation and configuration-management tools born in that era quietly became associated with *nix operating systems alone, and no one noticed that they had also been building support for Microsoft Windows. Now Microsoft, under its new, excellent leadership and its embrace of open source, has become, almost overnight, a reborn cool company that younger IT professionals love. More and more, I have noticed the growing presence of MS-backed database servers, middleware, and web servers.

Data centers, on premise or in public clouds, now run Windows and Linux alongside each other in great harmony. Some even host multi-tier applications that run parts on each OS. What makes this really attractive is that the beloved tools *nix kids used, and developed into strong platforms that automate, manage, and even orchestrate application deployments, already have robust hooks to manage those Microsoft servers.

Some of my favorite meeting and presentation moments include a gigantic smile I give right before I address this statement: “But you guys only work on Linux and I just have a couple of those”. After that smile, my reply is usually: “Let’s build a domain controller, an IIS Server, and join it to the domain. C’mon it will be fun.” The surprise when the set is built with Puppet is very rewarding. One day….just one day, I will Candid Camera the moment!
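As a taste of what that demo involves, installing the IIS role with Puppet can be as small as the sketch below. This assumes the community windowsfeature module is installed; the resource type comes from that module, not core Puppet, and the feature name is the standard Windows role name:

```puppet
# Install the IIS web-server role on a Windows node
windowsfeature { 'Web-Server':
  ensure => present,
}
```

The same `package`/`service` resources used on the Linux boxes work on Windows too, which is exactly why the demo lands so well.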

So…Microsoft folks, welcome back to the cool group sitting at the back of the classroom! Join my punks, rebels, thrashers, and the like, as we help you keep those servers humming peacefully, managed alongside those Linux boxes…and those AIX and Solaris systems sitting wayyyy back there (that's for another post!).