The previous post explained how my virtual datacenter is set up. In this article I will show you a very simple monitoring tool that just checks that my hosts are alive. The tool is deployed on my Linux VMs. Since this is a completely throwaway environment, I don’t need a full-blown Nagios or anything like that.

The monitoring tool is written in Ruby and records each check in MongoDB.

As a little extra, this post also covers deploying the utility via Puppet’s vcsrepo module, which keeps the tool up to date every time the Puppet agent runs!

The Ruby Code and YAML

Create a folder somewhere on your system called monitor-my-infra:

mkdir -p monitor-my-infra

Inside it, create the file config.yaml, which will hold some values that our tool will need:

  mongo_server: ls0.puppet.xuxo
  mongo_db: monitoring
  mongo_db_collection: host_stats
  host: ls0.puppet.xuxo
  log_dir: /var/log/monitoring/
  ping_timeout: 3

Please replace the values above with your own.
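A quick aside on how these values come back: YAML.load_file returns a plain Hash, with strings staying strings and numbers already parsed as integers. A hypothetical sanity check (parsing an inline string behaves the same as loading the file):

```ruby
require 'yaml'

# Same shape as config.yaml; YAML.load_file works identically on the file
config = YAML.load(<<~CONF)
  mongo_server: ls0.puppet.xuxo
  mongo_db: monitoring
  ping_timeout: 3
CONF

puts config['mongo_server']   # string values come back as String
puts config['ping_timeout']   # numeric values come back as Integer
```

So ping_timeout can be used in arithmetic directly, no conversion needed.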

Next create a file called monitors.rb (the comments describe each action):

require 'mongo'
require 'yaml'
require 'date'
require 'net/ping'
require 'free_disk_space'
require 'usagewatch'

class Monitorinfra

  # load config
  fn = 'config.yaml'
  config = YAML.load_file(fn)

  class_variable_set(:@@database, config['mongo_db'])
  class_variable_set(:@@db_server, config['mongo_server'])
  class_variable_set(:@@log_locale, config['log_dir'])
  class_variable_set(:@@collection, config['mongo_db_collection'])

  # Timestamp for this run's records
  @@datetime = DateTime.now

  # Connect to Mongo for record keeping
  @@db_conn = Mongo::Client.new([ "#{@@db_server}:27017" ], :database => "#{@@database}")

  # Ping a host to see if node can reach out
  def ping_out(host)
    res = system("ping -c1 #{host} > /dev/null 2>&1")

    if res == true
      s = 'ALIVE'
    else
      s = 'DEAD'
    end

    # Insert record in database
    collection = @@db_conn[:host_stats]
    doc = { host: host, status: s, time: @@datetime }
    result = collection.insert_one(doc)
  end

  # Check disk avail in gigabytes
  def disk_space(host, disk)
    res = FreeDiskSpace.new(disk)
    val = res.gigabytes.round

    # insert record into DB
    collection = @@db_conn[:host_stats]
    doc = { host: host, disk: disk, avail_disk: val, unit: "GB", time: @@datetime }
    result = collection.insert_one(doc)
    # for debug:
    puts result.n
  end
end
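
The ALIVE/DEAD branch in ping_out leans on Kernel#system, which returns true when the spawned command exits 0, false when it exits non-zero, and nil if the command could not be started at all. A tiny standalone illustration:

```ruby
# Kernel#system returns true for a zero exit status...
ok = system('true')
# ...and false for a non-zero one — exactly the ALIVE/DEAD split
bad = system('false')
puts ok    # => true
puts bad   # => false
```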



And create a client, let’s say try.rb:

require './monitors'

# I am pinging myself here but just use an external hostname
hostname = `hostname`.strip
d = Monitorinfra.new

# ping
status = d.ping_out(hostname)
puts status

# disk
diskstatus = d.disk_space(hostname, '/')
puts diskstatus

Now, you can commit this to a repo. Why? Because we are going to use Puppet to deploy it and keep it updated!
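Committing is the standard git routine — a hypothetical walkthrough (the remote URL is a placeholder; point it at your own repo):

```shell
# Create the project directory if needed and turn it into a git repo
mkdir -p monitor-my-infra && cd monitor-my-infra
git init
git add -A
# Commit with an explicit identity so this works on a fresh machine
git -c user.name="Monitor" -c user.email="monitor@example.com" \
    commit --allow-empty -m "Initial monitoring tool"
# Then wire up the remote and push (placeholder URL):
# git remote add origin git@github.com:YOURUSER/monitor-my-infra.git
# git push -u origin master
```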

Deploy with Puppet and keep the code updated on the client

Here is where things get a bit more hip! We will deploy this monitor using Puppet and a module called vcsrepo. Our Puppet module will deploy the code on the client node and then check the git repo on every run to ensure the code is the latest. We will also create a cron job to run the checks every hour; I really don’t need to know the status every 60 seconds, so once an hour will do.

Create our working module structure:

mkdir -p inframonitor/{manifests,files,templates}

Create our manifest in the manifests folder, simplemon.pp (Read comments for actions):

class inframonitor::simplemon {
  # Array of gems to install
  $gems = ['mongo', 'net-ping', 'free_disk_space', 'usagewatch']

  $gitusername = "your git account"
  $gitrepo = "monitor-my-infra.git"

  # Install the ruby devel package
  package { 'ruby-devel':
    ensure => 'installed',
  }

  # Install the gems from the array above
  package { $gems:
    ensure   => 'installed',
    provider => 'gem',
  }

  # Install git for repo cloning
  package { 'git':
    ensure => 'installed',
  }

  # Target directory for the checkout
  file { '/simplemon':
    ensure => directory,
    mode   => '0770',
  }

  # Runner script for cron, shipped from this module's files folder
  file { '/root/run_simplemon.sh':
    ensure => file,
    mode   => '0755',
    source => 'puppet:///modules/inframonitor/run_simplemon.sh',
  }

  # Clone repo! Notice the ensure latest!
  vcsrepo { '/simplemon':
    ensure   => latest,
    provider => git,
    source   => "https://github.com/${gitusername}/${gitrepo}",
    revision => 'master',
  }

  # Create cron job
  cron::job { 'run_simplemon':
    minute      => '0',
    hour        => '*',
    date        => '*',
    month       => '*',
    weekday     => '*',
    user        => 'root',
    command     => '/root/run_simplemon.sh',
    environment => [ 'MAILTO=root', 'PATH="/usr/bin:/bin"', ],
    description => 'Run monitor',
  }
}
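
One assumption worth calling out: vcsrepo and cron::job do not ship with Puppet itself. Before classifying nodes, the corresponding modules need to be installed on the master — puppetlabs-vcsrepo for the vcsrepo type, plus whichever Forge module provides the cron::job defined type in your setup (the second module name here is an example; substitute your own):

```
$ puppet module install puppetlabs-vcsrepo
$ puppet module install torrancew-cron
```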

Now create our runner shell script for cron, run_simplemon.sh, inside the files folder:

#!/bin/bash
cd /simplemon
/bin/ruby try.rb


Set a classification rule on the master that groups all of your Linux hosts, and wait for the Puppet agent to run.

Wait a couple of hours, then query our Mongo DB via the mongo client:

> use monitoring
switched to db monitoring
> db.host_stats.find()
{ "_id" : ObjectId("5810f8bcd4e736d99bfadf17"), "host" : "ostack-master", "status" : "ALIVE", "time" : ISODate("2016-10-26T18:40:59.450Z") }
{ "_id" : ObjectId("5810f8f6d4e736da6c4d6a10"), "host" : "ostack-master", "status" : "ALIVE", "time" : ISODate("2016-10-26T18:41:58.280Z") }
Type "it" for more

Trimming some entries, but there are our ping results!

In the near future I will probably add notifications, but for now I can easily query the database whenever I need to check status.
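
Since everything lands in one collection, ad-hoc queries cover a lot of ground. For example (hypothetical queries, using the field names that monitors.rb writes):

```
> db.host_stats.find({ "status" : "DEAD" })
> db.host_stats.find({ "avail_disk" : { "$lt" : 10 } })
```

The first lists every failed ping record; the second finds disk checks that reported less than 10 GB free.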

Download the repo here and have fun!



