Yes, I want that node gone now…please?

If you use any release of Puppet, open source or enterprise, you know that sometimes you have to clean up leftover certnames, or nodes, when servers are decommissioned or simply killed off. In this post, I will show you how to build a tool that constantly listens for node cleanup requests and deletes those nodes from Puppet. This is just an example; feel free to take what I did here, adapt it, and improve on it.

Use Case

An external system sends a notification when a machine has been destroyed or decommissioned. Upon receipt of that notification, Puppet should clean up the machine's SSL information and stop enforcing configuration management on it.

The Code

The tool is written in Ruby and follows the same structure as the rest of my utilities. I stick to the same style of system design and code reuse so I can work through my concepts quickly and validate them.

Message Queue

Install RabbitMQ (or reuse an existing instance) on the Puppet master or another server.
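Before going further, it is worth verifying that RabbitMQ is reachable from the machine that will run the tool. Here is a minimal sketch using the Bunny gem; the host and credentials are the same ones used in the config file below, so adjust them to your environment.

#!/usr/bin/env ruby
# Quick RabbitMQ connectivity check (sketch). Assumes the ls0 host and
# admin/admin credentials that appear in config/common.yaml; change as needed.
require 'bunny'

conn = Bunny.new(:hostname => 'ls0', :user => 'admin', :password => 'admin')
conn.start
puts 'Connected to RabbitMQ'
conn.close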

Configuration

In your working directory, create a folder called config containing the file common.yaml:

---
# RabbitMQ values
mq_user: admin
mq_pass: admin
mq_server: ls0
remove_channel: noderemoval

# Mongo values
mongo_host: ls0
mongo_port: 27017
db: removednodes

The Mongo values can be ignored; in my environment, I record each deletion in MongoDB.
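If you want that audit trail as well, a record can be inserted right after the removal request is published. Here is a minimal sketch that mirrors what I do, assuming the removednodes database from the config above and a hypothetical deletions collection; the certname is a placeholder.

require 'mongo'
require 'date'

# Sketch: record a deletion in MongoDB. The :deletions collection name and
# the certname are assumptions; host, port, and database come from common.yaml.
db_conn = Mongo::Client.new([ 'ls0:27017' ], :database => 'removednodes')

collection = db_conn[:deletions]
doc = { certname: 'node01.example.com', action: 'REMOVE', time: DateTime.now }
result = collection.insert_one(doc)
puts result.n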

The “middleware”

In your working directory, create a file named clean_node.rb. It holds your main class:

#!/usr/bin/env ruby
# encoding: utf-8

require 'bunny'
require 'yaml'
require 'date'
require 'mongo'

class Cleannode

  # load configs to use across the methods
  fn = File.dirname(File.expand_path(__FILE__)) + '/config/common.yaml'
  config = YAML.load_file(fn)

  # export common variables
  @@datetime = DateTime.now()

  # export the connection variables
  @@host = config['mq_server']
  @@mq_user = config['mq_user']
  @@mq_pass = config['mq_pass']

  # export the channels to be created/used
  @@remove_ch = config['remove_channel']

  # database values
  @@db = config['db']
  @@mongo_host = config['mongo_host']

  # export connection to RabbitMQ
  @@conn = Bunny.new(:hostname => @@host,
                     :user     => @@mq_user,
                     :password => @@mq_pass)

  def initialize()
  end

  # define methods to be used by server and clients
  # Post a message to remove a node that has been decommissioned
  def remove_node(certname)
    @@conn.start

    type = "REMOVE"
    message = type + "," + certname + "," + String(@@datetime)

    ch = @@conn.create_channel
    q = ch.queue(@@remove_ch)
    ch.default_exchange.publish(message, :routing_key => q.name)

    puts " [x] Sent Removal Request to Puppet for " + certname

    @@conn.close
  end

end

 

The listener on the master

This is the actual piece that brings it all together and performs the deletion. Create a file called node_clean_listener.rb:

#!/usr/bin/env ruby
# encoding: utf-8

require 'bunny'
require 'yaml'

fn = File.dirname(File.expand_path(__FILE__)) + '/config/common.yaml'
config = YAML.load_file(fn)

host      = config['mq_server']
mq_user   = config['mq_user']
mq_pass   = config['mq_pass']
remove_ch = config['remove_channel']

conn = Bunny.new(:hostname => host,
                 :user     => mq_user,
                 :password => mq_pass)
conn.start

ch = conn.create_channel
q = ch.queue(remove_ch)

puts " [*] Waiting for messages in #{q.name}. To exit press CTRL+C"
q.subscribe(:block => true) do |delivery_info, properties, body|
  res = body.split(',')
  typ = res[0]
  certname = res[1]

  puts " [x] Received #{body}"
  puts typ
  puts certname

  if typ == "REMOVE"
    # fork so the listener keeps running while the cert is cleaned
    remove_job = fork do
      puts "Removing node"
      exec "/opt/puppetlabs/bin/puppet cert clean #{certname}"
    end
    Process.detach(remove_job)
  end
end

The puppet cert clean command in the exec line will have to change to puppet node purge if you are using Puppet Enterprise.
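For example, the fork block in node_clean_listener.rb might look like this instead on a Puppet Enterprise master (a sketch; double-check the path to the puppet binary on your system):

# Puppet Enterprise variant (sketch): purge the node instead of only
# cleaning its certificate.
remove_job = fork do
  puts "Purging node"
  exec "/opt/puppetlabs/bin/puppet node purge #{certname}"
end
Process.detach(remove_job)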

Run this in the background so it listens for deletion requests all the time.
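There are plenty of ways to do that (nohup, screen, or a systemd unit). If you prefer to keep it in Ruby, one option is to let the script daemonize itself; a minimal sketch that could go near the top of node_clean_listener.rb, after the requires:

# Sketch: detach the listener from the terminal so it keeps running in the
# background. Passing true keeps the current working directory; note that
# standard output is discarded, so the puts statements will no longer show.
Process.daemon(true)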

Client test

Create a file called try.rb:

require "./clean_node"

# SIMPLE CLIENT TO TEST THE MIDDLEWARE
String host = "your-cert-name-to-delete"

# CREATE NEW OBJECT FROM CLEANNODE CLASS
d = Cleannode.new()
# TO CREATE A STATUS REQUEST FOR A SPECIFIC HOST
d.remove_node(host)

Replace the placeholder certname string with a node you want removed and run it! The node will be deleted from Puppet.

The client piece can be any external system that will tell Puppet to remove the node.

Download the repo here and improve it.

Xuxodrome: My infrastructure (Part 2)

The previous post explained how my virtual datacenter is set up. In this article I will show you a very simple monitoring tool that just checks that my hosts are alive. The tool is deployed on my Linux VMs. Since this is a completely throwaway environment, I don't need a full-blown Nagios or anything like that.

The monitoring tool is written in Ruby and records the checks in MongoDB.

As a little extra, this post covers deploying the utility via Puppet's vcsrepo module, which keeps the tool up to date every time the Puppet agent runs!

The Ruby Code and YAML

Create a folder somewhere on your system called monitor-my-infra:

mkdir -p monitor-my-infra

Inside it, create the file config.yaml, which holds the values our tool needs:

---
  mongo_server: ls0.puppet.xuxo
  mongo_db: monitoring
  mongo_db_collection: host_stats
  host: ls0.puppet.xuxo
  log_dir: /var/log/monitoring/
  ping_timeout: 3

Please replace the values above with your own.

Next, create a file called monitors.rb (the actions are described in the comments):

require 'mongo'
require 'yaml'
require 'date'
require 'net/ping'
require 'free_disk_space'
require 'usagewatch'

class Monitorinfra

  # load config
  fn = 'config.yaml'
  config = YAML.load_file(fn)

  class_variable_set(:@@database, config['mongo_db'])
  class_variable_set(:@@db_server, config['mongo_server'])
  class_variable_set(:@@log_locale, config['log_dir'])
  class_variable_set(:@@collection, config['mongo_db_collection'])

  @@datetime = DateTime.now()

  # Connect to Mongo for record keeping
  @@db_conn = Mongo::Client.new([ "#{@@db_server}:27017" ], :database => "#{@@database}")

  # Ping a host to check that it is reachable
  def ping_out(host)
    res = system("ping -c1 #{host} >/dev/null 2>&1")

    if res == true
      s = 'ALIVE'
    else
      s = 'DEAD'
    end

    # Insert record in database
    collection = @@db_conn[:host_stats]
    doc = { host: host, status: s, time: @@datetime }
    result = collection.insert_one(doc)
  end

  # Check available disk space in gigabytes
  def disk_space(host, disk)
    res = FreeDiskSpace.new(disk)
    val = res.gigabytes.round

    # insert record into DB
    collection = @@db_conn[:host_stats]
    doc = { host: host, disk: disk, avail_disk: val, unit: "GB", time: @@datetime }
    result = collection.insert_one(doc)
    # for debug:
    puts result.n
  end

end

And create a client, let’s say try.rb:

require './monitors'

# I am pinging myself here but just use an external hostname
hostname = `hostname`.strip
d = Monitorinfra.new()

# ping
status = d.ping_out(hostname)
puts status

# disk
diskstatus = d.disk_space(hostname, '/')
puts diskstatus

Now, you can commit this to a repo. Why? Because we are going to use Puppet to deploy it and keep it updated!

Deploy with Puppet and keep the code updated on the client

Here is where things get a bit more hip! We will deploy this monitor using Puppet and a module called vcsrepo. Our Puppet module will deploy the code on the client node and then check the git repo on every run to ensure the code is the latest! We will also create a cron job to run the checks every hour. I really don’t need to know status every 60 seconds, once an hour will do.

Create our working module structure:

mkdir -p inframonitor/{manifests,files,templates}

Create our manifest in the manifests folder, simplemon.pp (Read comments for actions):

class inframonitor::simplemon {
  
  # Array of gems to install
  $gems = ['mongo', 'net-ping', 'free_disk_space', 'usagewatch']

  $gitusername = "your git account"
  $gitrepo = "monitor-my-infra.git"

   # Install the ruby devel package
   package {'ruby-devel':
     ensure => 'installed',
   }->

   # Install the gems from the array above
   package {$gems:
     ensure => 'installed',
     provider => 'gem',
   }->

   # Install git for repo cloning
   package {'git':
     ensure => 'installed',
   }->

  file { '/simplemon':
     ensure => directory,
     mode   => '0770',
   }->

  file { '/root/run.sh':
     ensure => file,
     source => 'puppet:///modules/inframonitor/run.sh',
  }->

  # Clone repo! Notice the ensure latest!
  vcsrepo { '/simplemon':
     ensure => latest,
     provider => git,
     source => "git://github.com/${gitusername}/${gitrepo}",
     revision => 'master',
   }->

  # Create cron job
  cron::job { 'run_simplemon':
     minute => '0',
     hour => '*',
     date => '*',
     month => '*',
     weekday => '*',
     user => 'root',
     command => '/root/run.sh',
     environment => [ 'MAILTO=root', 'PATH="/usr/bin:/bin"', ],
     description => 'Run monitor',
   }
}

Now create our runner shell script, run.sh, for cron inside the files folder:

#!/bin/bash
cd /simplemon
/bin/ruby try.rb

Deploy!

Set a classification rule that groups all Linux hosts on the master and wait for Puppet agent to run.

Wait a couple of hours and query our Mongo DB via the mongo client:

> use monitoring
switched to db monitoring
> db.host_stats.find()
{ "_id" : ObjectId("5810f8bcd4e736d99bfadf17"), "host" : "ostack-master", "status" : "ALIVE", "time" : ISODate("2016-10-26T18:40:59.450Z") }
{ "_id" : ObjectId("5810f8f6d4e736da6c4d6a10"), "host" : "ostack-master", "status" : "ALIVE", "time" : ISODate("2016-10-26T18:41:58.280Z") }
Type "it" for more

Trimming some entries, but there are our ping results!

In the near future I will probably add notifications, but I can easily query my database whenever I need to check status.
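For example, a quick check for hosts that have been reported DEAD takes only a few lines of Ruby against the same collection. This is a sketch, assuming the monitoring database and host_stats collection defined in config.yaml:

require 'mongo'

# Sketch: list the most recent DEAD records from the host_stats collection.
client = Mongo::Client.new([ 'ls0.puppet.xuxo:27017' ], :database => 'monitoring')

client[:host_stats].find(status: 'DEAD').sort(time: -1).limit(10).each do |doc|
  puts "#{doc['host']} was DEAD at #{doc['time']}"
end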

Download the repo here and have fun!

 

 

Xuxodrome: My infrastructure (Part 1)

After almost a year with Puppet, I finally got to diagram and document what the infrastructure that I use for virtually everything looks like. Since I am a big fan of the movie Videodrome (as shown in the header image!), it is called Xuxodrome. I share it here just in case any of you want to replicate it!

My idea with Xuxodrome is to provide a real, mini datacenter that is always available and can take real changes and adapt. It is not only used for demos but also for the real hands-on workshops that I do.

Architecture

Xuxodrome can be represented in the following illustration. We will go through each numbered section.

[Diagram: Xuxodrome architecture with numbered sections]

Breakdown

The environment diagram includes a reference to a master in AWS, but we won't cover that here as it is external. Its only relationship to Xuxodrome is through my GitHub repository.

  1. The Puppet Enterprise Master. It manages all assets inside Xuxodrome.
  2. Shared Services. A Windows 2012 domain controller, built by Puppet, acts as DNS and Active Directory.
  3. Code Stuff. Puppet’s code-manager is actively retrieving the latest Infrastructure code from a repo in my GitHub site. Every time I commit something, Travis-CI checks it for me to determine how bad of a coder I am.
  4. Traditional Infrastructure. This area has VMs that live and die often. Systems include Windows, RHEL, and Debian.
  5. Modern Infrastructure. Puppet’s Blueshift stuff. CoreOS cluster, Docker engines and Consul backend reside here. Clusters include ELK Stack, Jenkins CI, and other things that I can use for examples. It also provides an Nginx load balancer.
  6. Other. Every now and then I might attach an AIX LPAR from Puppet’s corporate infrastructure or provision VMs using vSphere.

Todo

Convert everything into a HEAT template, since the core of Xuxodrome is all on OpenStack. That way, I can rebuild the whole thing at the push of a button!

In part two, I will show you a simple monitor that I run to keep track of some nodes. It is not great, but it is a good example of some Ruby and some Puppet.

Until then…

Planes, Trains, and Ruby for real

As I sit on a plane going to my next DevOps adventure, I write this little nugget about config management platforms. Why? One, I have time. Two, a little Ruby/Puppet DSL (Domain Specific Language) deep-dive I had some weeks ago made me think about this.

Recently, I had to explain to a good bunch of folks why Puppet uses a DSL. Not only Puppet, but any other fine config management tool like it. Explaining a DSL was a bit harder than I thought, so I decided to do a “Why Ruby?” presentation. During the presentation, I did the unthinkable: I wrote Ruby! It was a simple exercise: how many lines of Ruby versus a DSL does it take to perform a specific task? The task was to install the Apache web server. I won't go into detail about it…but it was roughly 3 lines of DSL vs. 74+ in a general-purpose programming language, without counting a lot of the dependencies.

However, that little exercise made me think: “Hey, let me build a small tool in Ruby to show how installing a package works, with the plumbing to operate like a Puppet, Chef, or Salt platform.” Hence Sysville, as I call it, which will be explained in this post.

Sysville is a simple concept. It is a small “neighborhood” of systems that get “mail” delivered and/or returned to a neighborhood post office to act upon. A main hub for a neighborhood of servers.

For this concept and exercise, I decided to set up a simple RabbitMQ message queue and a “post office” server. If I expand on it, there will be future posts. As usual on my blog, let's begin.

The Architecture

Below is the basic logical architecture of our application:

[Diagram: Sysville logical architecture]

Yes, we have a MongoDB piece!

The Setup

Build at least two CentOS VMs: one will be the post office, the other a node, or client.

The RabbitMQ piece

Install and configure RabbitMQ per this guide on the main server that will be the post office.

The MongoDB piece

Install and configure MongoDB on the same server that acts as the post office per this guide.

Now, you should have a running RabbitMQ installation and a MongoDB to store our stuff, whatever that stuff happens to be. The intent is to let you be creative and come up with cool things to do. For now, we will capture install records that serve as an audit trail. This is one of the main reasons companies look for platforms like Puppet.

The Post Office

Our post office is a small 100% Ruby middleware that posts messages to RabbitMQ to be retrieved by a node.

The code

First, let’s create a configuration YAML file to store our values. This is very similar how Puppet abstracts common values from the actual infrastructure code.

Create a folder called sysville and inside, a folder called config:

mkdir -p sysville/config

Inside the config folder, create the file rabbit_config.yaml and populate it, replacing the values with your own:

---
# RabbitMQ values
mq_user: admin
mq_pass: admin
mq_server: ls0

info_channel: info
notification_channel: notify
provision_channel: provision
init_channel: init
enforce_channel: enforce
status_channel: status

# Mongo values
mongo_host: ls0
mongo_port: 27017
db: sysville

In my setup, ls0 is my main server. Replace with your targeted host server.

On both servers, install the Bunny gem; the post office server also needs the mongo gem:

gem install bunny
gem install mongo

Now, at the root of the sysville directory, we create post_office.rb. Read the comments on the code to understand what all the pieces do:

#!/usr/bin/env ruby
# encoding: utf-8

# Libraries required. Yaml is included with Ruby. You will need to install
# bunny and mongo ( gem install bunny && gem install mongo )
require 'bunny'
require 'yaml'
require 'date'
require 'mongo'

class Postoffice

  # load configs to use across the methods
  fn = File.dirname(File.expand_path(__FILE__)) + '/config/rabbit_config.yaml'
  config = YAML.load_file(fn)

  # export common variables
  @@datetime = DateTime.now()

  # export the connection variables
  @@host = config['mq_server']
  @@mq_user = config['mq_user']
  @@mq_pass = config['mq_pass']

  # export the channels to be created/used
  @@info = config['info_channel']
  @@notif = config['notification_channel']
  @@provi = config['provision_channel']
  @@init = config['init_channel']
  @@enfo = config['enforce_channel']
  @@stat = config['status_channel']

  # mongo database values
  @@db = config['db']
  @@mongo_host = config['mongo_host']

  # export connection to RabbitMQ
  @@conn = Bunny.new(:hostname => @@host,
                     :user     => @@mq_user,
                     :password => @@mq_pass)

  # export connection to MongoDB
  @@db_conn = Mongo::Client.new([ "#{@@mongo_host}:27017" ], :database => "#{@@db}")

  def initialize()
  end

  # define methods to be used by server and clients

  # REQUEST STATUS OF A NODE, BASICALLY A PING  TODO
  def request_status(hostname)
    # open connection to MQ
    @@conn.start

    # generate a random message ID
    id = rand(0...1000000)
    type = "PING_REQUEST"
    message = type + "," + hostname + "," + String(id) + "," + String(@@datetime)

    # create channel to post messages
    ch = @@conn.create_channel
    q = ch.queue(@@stat)
    ch.default_exchange.publish(message, :routing_key => q.name)

    puts " [x] Sent Status Request to " + hostname

    @@conn.close

    # place record on database
    collection = @@db_conn[:status]
    doc = { type: type, client: hostname, msg_id: id, time: @@datetime }
    result = collection.insert_one(doc)
    puts result.n
  end

  # REQUEST A NODE TO INSTALL A PACKAGE
  def install_parcels(parcel, hostname)
    id = rand(0...1000000)
    type = "INSTALL_REQUEST"
    message = type + "," + parcel + "," + String(id) + "," + String(@@datetime)

    @@conn.start
    ch = @@conn.create_channel
    q = ch.queue(@@provi)
    ch.default_exchange.publish(message, :routing_key => q.name)

    puts " [x] Sent Installation Request for " + parcel + " to " + hostname

    @@conn.close

    # CREATE A RECORD ON THE MONGO DATABASE
    collection = @@db_conn[:provision]
    doc = { type: type, package: parcel, client: hostname, msg_id: id, time: @@datetime }
    result = collection.insert_one(doc)
    puts result.n
  end

  # REMOVE A PARCEL, PROCESS A RETURN
  def remove_parcels(parcel, hostname)
    id = rand(0...1000000)
    type = "UNINSTALL_REQUEST"
    message = type + "," + parcel + "," + String(id) + "," + String(@@datetime)

    @@conn.start
    ch = @@conn.create_channel
    q = ch.queue(@@provi)
    ch.default_exchange.publish(message, :routing_key => q.name)

    puts " [x] Sent Uninstall Request for " + parcel + " to " + hostname

    @@conn.close

    # CREATE A RECORD ON THE MONGO DATABASE
    collection = @@db_conn[:provision]
    doc = { type: type, package: parcel, client: hostname, msg_id: id, time: @@datetime }
    result = collection.insert_one(doc)
    puts result.n
  end

end

In the root of sysville, create a small script to send messages. Let's call it try.rb since I don't have an original name for it:

require "./post_office"

d = Postoffice.new()
d.install_parcels("httpd", "ls1")

The file above imports our Postoffice class and passes two values to the install_parcels method: httpd and ls1. This tells the post office to send a message to host ls1 that it needs to install httpd. ls1 is your client, or node.
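The uninstall path works the same way. For example, to ask ls1 to remove httpd again, you can call the remove_parcels method instead, which posts an UNINSTALL_REQUEST to the same queue:

require "./post_office"

# Ask the post office to send an uninstall request for httpd to host ls1
d = Postoffice.new()
d.remove_parcels("httpd", "ls1")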

The Domicile

Following the neighborhood post office analogy, we now treat our client node as a home, or domicile, which receives mail (messages).

The code

Make a directory called house and inside create the file domicile.rb with this content:

#!/usr/bin/env ruby
# encoding: utf-8

require 'bunny'

# CREATE CONNECTION TO RABBITMQ
conn = Bunny.new(:hostname => "ls0", :user => "admin", :password => "admin")
conn.start

# STATE WHICH QUEUE WE WILL BE LISTENING ON
ch = conn.create_channel
q = ch.queue("provision")

puts " [*] Waiting for messages in #{q.name}. To exit press CTRL+C"
q.subscribe(:block => true) do |delivery_info, properties, body|
  # GRAB THE MESSAGE AND SPLIT IT TO EXTRACT THE VALUES
  res = body.split(',')
  req = res[0]
  bin = res[1]

  puts " [x] Received #{body}"
  puts req
  puts bin

  # EVALUATE THE FIRST FIELD TO DETERMINE WHETHER TO INSTALL OR REMOVE.
  # THE INSTALL PROCESS IS FORKED TO KEEP THE LISTENER OPEN, WAITING FOR
  # MORE REQUESTS
  if req == "INSTALL_REQUEST"
    install_job = fork do
      puts "I am an install request"
      exec "yum install #{bin} -y"
    end
    Process.detach(install_job)
  end

  # FEEL FREE TO CHANGE THE POST OFFICE INSTALL TO UNINSTALL TO TRY
  if req == "UNINSTALL_REQUEST"
    uninstall_job = fork do
      puts "I am an uninstall request"
      exec "yum erase #{bin} -y"
    end
    Process.detach(uninstall_job)
  end
end

You should now be ready to test.

Try it!

Go to your Post Office instance, ls0, navigate to our sysville folder and run the following:

ruby try.rb

The output should be like this:

[root@ls0 sysville]# ruby try.rb
D, [2016-10-17T13:38:24.600211 #2304] DEBUG -- : MONGODB | Adding ls0:27017 to the cluster.
 [x] Sent Installation Request for httpd to ls1
D, [2016-10-17T13:38:24.619113 #2304] DEBUG -- : MONGODB | ls0:27017 | sysville.insert | STARTED | {"insert"=>"provision", "documents"=>[{:type=>"INSTALL_REQUEST", :package=>"httpd", :client=>"ls1", :msg_id=>316469, :time=>#<DateTime: 2016-10-17T13:38:24+00:00 ((2457679j,49104s,599731872n),+0s,2299161j)>, :_id=>BSON::ObjectId('5804d450ec0c7b090000d...
D, [2016-10-17T13:38:24.620682 #2304] DEBUG -- : MONGODB | ls0:27017 | sysville.insert | SUCCEEDED | 0.001470799s
1

Now the request to install has been posted to RabbitMQ, waiting to be picked up. We have also created an audit trail by inserting a record in the database for this install.
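If you want to look at that audit trail later, you can query the provision collection directly. A small sketch using the mongo gem, assuming the ls0 host and sysville database from rabbit_config.yaml:

require 'mongo'

# Sketch: print the install/uninstall requests recorded by the post office.
client = Mongo::Client.new([ 'ls0:27017' ], :database => 'sysville')

client[:provision].find.each do |doc|
  puts "#{doc['time']} #{doc['type']} #{doc['package']} -> #{doc['client']}"
end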

Now, go to our client node, ls1:

ruby domicile.rb

Output:

[root@ls1 sysville]# ruby domicile.rb
 [*] Waiting for messages in provision. To exit press CTRL+C
 [x] Received INSTALL_REQUEST,httpd,316469,2016-10-17T13:38:24+00:00
INSTALL_REQUEST
httpd
I am an install request
# HERE BEGINS THE YUM INSTALL PROCESS
Loaded plugins: fastestmirror
Loaded plugins: fastestmirror

Now, your node has httpd since it was told to install it.

So…

Platforms like Puppet and others use a DSL to make the process above a lot slimmer and simpler, among many other things. That way, you don't have to write all this code to do a simple installation. Also, platforms like these come with everything already integrated, such as message queues, so you don't have to spend time figuring out how to wire it all together.

Feel free to download the repo for this exercise and play with it. Also, add stuff to it and have fun!

Nah-ah!…Shut up!…You don’t say!…OMG, you do Windows!!!

In my adventures around the Mountain area spreading the good word of DevOps and making IT shops hip, I find interesting things…surprising things. It is even better when I can be the one giving the surprise with a simple statement: “Yes, we work with Windows really well“. Honestly, Windows has made up a good percentage of my conversations over the last few months.

There are two key reasons, from my perspective, why that simple statement carries significant impact and why it comes up so often:

  • Microsoft has become cool again
  • Traditional automation and configuration management platforms that cater to Windows are missing some of the extra-mile functionality that modern, faster organizations need.

I am a UNIX child. Some of my early interactions with operating systems were with Sun, SGI, and IBM hardware: the big iron monsters that took up significant space in data centers, which looked like enclosed, clean quarries churning out application data. Back then, Microsoft had DOS, Windows for Workgroups, and Windows 95. The latter, some of us PowerPC kiddies labeled as Microsoft's MacOS 1.0. Microsoft was also seen as driven to take over the world under an evil, conspiratorial plan. Simply put, it wasn't cool to be into what MS was doing.

IT then spent a good number of years witnessing the rise of Linux in the data center, displacing many of the fridge-sized, expensive big UNIX machines, and the forceful wind of change that open-source software brought. Such events, among many others, created the ability and flexibility for tools to be developed to manage the growth of systems and provide assurance and integrity for the servers that now ran our businesses or published cute cat pictures on the interwebs! Those tools matured and grew thanks to their openness and the strong communities that actively contributed to them. Puppet, for example, quickly became a mainstay and a necessary tool across the IT landscape.

The automation and configuration management tools born in that era quietly became associated only with *nix operating systems, and few noticed that they had also been building support for Microsoft Windows. Now Microsoft, under its new, excellent leadership and its embrace of open source, has become, almost overnight, a reborn cool company that younger IT professionals love. More and more, I have noticed the growing presence of MS-backed database servers, middleware, and web servers.

Data centers, on premises or in public clouds, now run Windows and Linux alongside each other in great harmony. Some even host multi-tier applications that run parts on each OS. It is very attractive that the beloved tools the *nix kids used and developed into strong platforms that automate, manage, and even orchestrate application deployments already have robust hooks to manage those Microsoft servers.

Some of my favorite meeting and presentation moments include a gigantic smile I give right before I address this statement: “But you guys only work on Linux and I just have a couple of those”. After that smile, my reply is usually: “Let’s build a domain controller, an IIS Server, and join it to the domain. C’mon it will be fun.” The surprise when the set is built with Puppet is very rewarding. One day….just one day, I will Candid Camera the moment!

So…Microsoft folks, welcome back to the cool group sitting at the back of the classroom! Join my punks, rebels, thrashers, and the like, as we help you keep those servers humming peacefully and managed alongside those Linux boxes…and those AIX and Solaris systems sitting wayyyy back there (that's for another post!).

Serving beer with Puppet

On August 18, 2016, Puppet held one of our Puppet Camp events in Denver. During such events, I have the opportunity to showcase what Puppet does in new, usually fun, ways. This time I went for deploying a micro-service that served virtual beer. This post describes how I did such a fun little demo.

The demo itself builds upon several things I have done over the last few months. It expands on this post. Once you complete that exercise, you can use this post to serve the beer.

Demo Architecture

Overall, the architecture of the demo looks like this:

[Diagram: demo architecture]

The barge(n) VMs are the same OpenStack builds that get created by following the previous post referenced in this writing. However, I take that build a step further by deploying a Docker engine on each “barge” and running two containers: a Flask RESTful service that serves our beer GIFs and a registrator container that notifies Consul that there is a new service out there. All these instances reside behind a load balancer.

Procedure and simple code

First, if you are using Consul, you need at least a manager server. Use this manifest and dependent module to build it:

Module

Consul module

Code

class servebeer::consul_mgr {

  class { 'consul':
    config_hash => {
      'datacenter'       => hiera('consul.datacenter'),
      'data_dir'         => '/opt/consul',
      'ui_dir'           => '/opt/consul/ui',
      'bind_addr'        => $::ipaddress,
      'client_addr'      => '0.0.0.0',
      'node_name'        => $::hostname,
      'advertise_addr'   => $::ipaddress,
      'bootstrap_expect' => '1',
      'server'           => true,
    },
  }
}

I am leveraging Hiera to get my datacenter value; you can just give a name here as a string.

Once Consul is built we can create the basic Docker + Flask service deployment.

Flask beer service container

The Flask service needs to be turned into a Docker image. For this to happen, you need to code it on a system that already has Docker installed. Go ahead and create a folder to do the work:

mkdir ~/brewery && cd ~/brewery

Now create a file called app.py and populate with this content:

from flask import Flask, jsonify, send_file

app = Flask(__name__)

@app.route('/drink')
def gimmebeer():
    filename = 'beer_pour.gif'
    return send_file(filename, mimetype='image/gif')

@app.route('/dry')
def nobeer():
    filename = 'vanderbeer.gif'
    return send_file(filename, mimetype='image/gif')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

Essentially this serves two GIF files on different routes. The GIF files are in the GitHub repo I will link at the end of this write-up. Notice that there is no port 5000 reference in the code even though it appears in the architecture; Flask's development server listens on 5000 by default, and the port mapping is defined later in the compose file. That's where things get fun!

Next, we create a file that allows us to include Flask itself. Name the file requirements.txt:

Flask==0.10.1

Finally, we make our Dockerfile (literally named Dockerfile) and populate it:

FROM ubuntu:latest
MAINTAINER Xuxo Garcia "jesus@puppet.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /brewery
WORKDIR /brewery
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]

I have my own Docker hub registry so I will use it to push my container and make it available to my deployment. Please use your public or private registry. Run these commands to build, tag, and publish the container:

docker login
docker build -t puppetcamp-pong:latest .
docker images
docker tag <image_id> xuxog/puppetcamp-pong:latest
docker push xuxog/puppetcamp-pong

Our beer micro-service is now a container out there. For this exercise we will not attach it to another service but you get the idea with this simple deployment. Let’s deploy it!

Puppet deployment procedure and code

Module

Docker module

Code

The code needs a bit of a walk-through. I will leverage Docker Compose to perform the actual deployment of the service. At its most basic, Compose lets you define containers via a YAML file and deploy them as prescribed in that file. Furthermore, you can scale and orchestrate dependencies with it.

Create a folder for this work in the path where your Puppet modules reside:

mkdir -p <modulepath>/servebeer/{manifests,files,templates}

In the files directory, create beer.yaml:

beer:
  container_name: puppetcamp2016
  image: xuxog/puppetcamp-pong
  ports:
    - "5000:5000"

Notice that it is here that I map the service port and define how the container runs.

In the manifests directory, create our Puppet manifest, compose_beer.pp:

class servebeer::compose_beer {

  class { 'docker':
    tcp_bind    => ['tcp://0.0.0.0:2375'],
    socket_bind => 'unix:///var/run/docker.sock',
    ip_forward  => false,
    ip_masq     => false,
    iptables    => false,
    dns         => '8.8.8.8',
  }->

  class { 'docker::compose':
    ensure => present,
  }->

  file { '/tmp/beer.yaml':
    ensure => file,
    source => 'puppet:///modules/servebeer/beer.yaml',
  }->

  docker_compose { '/tmp/beer.yaml':
    ensure  => present,
    require => File['/tmp/beer.yaml'],
  }
}

And that’s it! Now when you run this manifest via Puppet on any barge, or in my case all classified barges, the micro-service will be deployed.

I also did a registrator container deployment. This is optional, and only applies if you are running Consul:

class servebeer::registrator {

  exec { 'deploy_registrator':
    command => "/usr/bin/docker run -d --name=registrator --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.0.20:8500",
    unless  => "/usr/bin/test -f /root/.registrator-deployed",
  }->
  file { '/root/.registrator-deployed':
    ensure  => file,
    content => "deployed registrator container",
  }

}

Since I have deployed everything behind a load balancer, now we can access http://www.puppet.xuxo, my private site, and see these two services:

Beer pour!

[GIF: beer_pour.gif]

Sadness when dry 😦

[GIF: vanderbeer.gif]

This fun demo is an example of how you can use Puppet to deploy micro-services on a container-based infrastructure.

If you wish to spin this demo up, grab this repo.

Thanks for reading.

My Atom.io setup

Last week I had a lot of fun presenting and helping host Puppet Camp Denver 2016. It took place at the Denver Art Museum and the turnout was great with a lot of awesome presentations by our local community.

Aside from my official demo, where I served virtual beer via a micro-service deployment, some folks were very curious about my Atom.io editor setup. I even made a little demo companion to the Puppet one. It was also very cool to hear a presentation where someone said: “Thanks to Xuxo, I switched to Atom.”

Therefore, I decided to share what my setup looks like in this post. There are three essential pieces to my Atom workspace:

Git-Plus
Git-Plus is an excellent plugin for Atom that allows you to perform virtually all git operations from the editor. I can switch branches, commit, push, etc. Download and integrate it to avoid having to switch around a lot.

Terminal-Plus
Terminal-Plus is simply the best terminal plugin for Atom. You can create several terminals at the bottom of the screen and even drag to re-organize quickly. This helps me a lot on demos since I don’t have to leave my IDE to run commands and communicate with multiple servers.

Browser-Plus
Browser-Plus uses built-in navigation shortcuts from Atom to open pages in a browser pane. Again, I don't need to leave my editor to see how my JS/Flask/etc. pages look as I work.

Those are the main items that some of you saw last Thursday. Aside from those, I recommend:

Language-Puppet: Support for the Puppet language.
Seti-Icons: Icons per supported syntax.

As a reference, here’s an image from my Atom screen:

[Screenshot: my Atom workspace]