Oh wow! Just when you got really good at deploying VMs and configuring and installing stuff on them, someone walks into your area and asks: “Hey! Can you use Puppet to deploy multiple nodes in order and install app stacks on them?”. If you are in Denver, your answer might be: “Let me check with Xuxo”. Well, pretend you are in Denver, as I will show you how to do that task by deploying a Python Flask app that needs a MongoDB database, plus some ideas to automate these deployments further.

After doing this a couple of times and reviewing the documentation, I thought about splitting this post into two parts, since I combine several concepts. Then I thought a bit more and decided to give you everything in one long post. So grab a coffee if it’s morning or a beer if it’s evening (or lunchtime if you are in Colorado)…this is a long post.

Components and knowledge requirements

  • Puppet Enterprise 2015.x.x or higher. There is an open-source guide out there, :). Its author will surely hit me up on Twitter later, but the open-source setup works differently.
  • Understanding of hiera. Go to Puppet’s docs on it or follow my minimalist guide.
  • Understanding of multi-tier applications. DB tier, application tier.

Flow Architecture

I will describe how to implement orchestration as close to real operations as possible. This means the request for a new stack comes from an external system, and the host and stack information is retrieved rather than hardcoded in Puppet manifests, since it is expected to change. While I will use minimal values, you will see that the input data can grow and become as fine-grained as you want.


The illustration above shows how a user would request a ‘stack’: the new host(s) information is stored in CouchDB, which stands in for a CMDB or host-information database. Once that information is in place, an API call can trigger an orchestration job in Puppet and the build-out begins. Also in the diagram, Puppet retrieves the credentials and database info from a key/value store; I use Consul here and also recommend Vault. Puppet validates all objects and deploys the nodes in order. When the process completes, Puppet returns a report URL with a job ID that can be tracked elsewhere to report completion to the requester.

Now that we know what we are doing, let’s begin. Grab the second cup or second beer.

Hiera setup

I am also using this post to show you how to extend Hiera’s capabilities. We will be retrieving values from two places, CouchDB and Consul. For that we need to add two new backends to hiera:

Once you install them, we need to move those backend providers to a new location, since we are working with Puppet Enterprise rather than open source. Copy the providers’ .rb files to:


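The gem names and destination path below are assumptions (the originals were shown in an image): the commonly used hiera-http and hiera-consul backend gems, and the vendored hiera backend directory of a typical PE install of that era. Verify both on your own master before running.

```shell
# Assumptions: gem names from the hiera-http / hiera-consul backend projects,
# and PE's vendored hiera backend path -- verify both on your own master.
/opt/puppetlabs/puppet/bin/gem install hiera-http hiera-consul || true  # PE-only binary

BACKEND_DIR="/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/hiera/backend"
# http_backend.rb and consul_backend.rb ship with those gems
cp http_backend.rb consul_backend.rb "${BACKEND_DIR}/" || true  # run from the gems' lib dir
```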
Now let’s modify our hiera.yaml file so we can use CouchDB and Consul. Below is my config, with the hostnames replaced by placeholders; the http and consul entries are the additions:

---
:backends:
  - yaml
  - json
  - http
  - consul

:yaml:
  :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"

:json:
  :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"

:http:
  :host: couchdb.host   # placeholder: your CouchDB server
  :port: 5984
  :output: json
  :failure: graceful
  :paths:
    - /hiera/%{clientcert}
    - /hiera/%{environment}
    - /hiera/common

:consul:
  :host: consul.host    # placeholder: your Consul server
  :port: 8500
  :paths:
    - /v1/kv/hiera/common

:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "global"
  - "common"
  - "aws"
  - "stiglinux"
  - "etcd"

Restart pe-puppetserver to apply the new backends and configuration:

systemctl restart pe-puppetserver

CouchDB and Consul setup

You must be on beer #1 by now if the coffee is done, or beer #3 otherwise. I will not walk you through the installations of CouchDB and Consul; follow the vendor guides, as they are pretty good. By the way, I host them on separate VMs. In this step, we will add some values to those two stores.

CouchDB and Consul have great REST APIs and UIs that can be used to get our data in and out of them. On Couch we will create a document that mimics the posted stack request:

Create DB for hiera:

curl -X PUT http://couchdb.host:5984/hiera

Add document:

curl -X PUT http://couchdb.host:5984/hiera/common -H \
'Content-Type: application/json' -d \

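The request document itself was shown as an image in the original post. As an illustration only (the hostnames and app name are made up; the keys match the hiera lookups used later in this post), it might look like:

```json
{
  "AppName": "myflaskapp",
  "DBServerName": "db01.example.com",
  "AppServerName": "app01.example.com",
  "DBServerReady": "ready",
  "AppServerReady": "ready"
}
```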
I prettied the text a bit for readability, but you can see how I labeled each server we will be orchestrating as DB and App units. The ‘ready‘ states are purely optional but handy, as you will see later. Also, notice how the database and document follow the paths defined in hiera.yaml‘s http backend.

Log in to the Consul server and create key/value objects for hiera:

consul kv put hiera/common/dbuser admin
consul kv put hiera/common/dbport 27017
consul kv put hiera/common/dbpass admin

As you can see, you can put as many things in there as you want; it doesn’t necessarily mean you have to use them all. The paths are reflected in the consul section of our hiera.yaml.
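A quick way to sanity-check those writes is Consul’s KV HTTP API, keeping in mind that Consul returns stored values base64-encoded. Here is a small Python sketch; the response string below is hand-written in the shape Consul documents, not captured from my server, and consul.host is a placeholder:

```python
import base64
import json

# Shaped like Consul's GET /v1/kv/hiera/common/dbuser response; in practice
# you would fetch it over HTTP from consul.host:8500.
sample = '[{"Key": "hiera/common/dbuser", "Flags": 0, "Value": "YWRtaW4="}]'

entries = json.loads(sample)
# Consul base64-encodes values in its KV API responses
value = base64.b64decode(entries[0]["Value"]).decode("utf-8")
print(value)  # -> admin
```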

The Puppet manifests

On to beer #4 or #5…

We have now set up a good portion of the infrastructure that will support a request for a stack. It is time to dive into the Puppet piece of this. We will begin by coding our application stack; there are new provisions in the Puppet 4 language to achieve this.

Create the work directory structure:

mkdir -p pyflaskapp/{manifests,templates,files,lib}
mkdir -p pyflaskapp/lib/puppet/type


First, we need to create a small capability (or interface) to share our database information with our application node. Sharing data is the core of orchestration inside Puppet. Create the file nosql.rb inside pyflaskapp/lib/puppet/type with this content:

Puppet::Type.newtype :nosql, :is_capability => true do
  newparam :name, :is_namevar => true
  newparam :user
  newparam :password
  newparam :port
  newparam :host
  newparam :database
end

Our next step is to create the database manifest that will export these values to the orchestrator. The name of the file is in the first line:

# pyflaskapp/manifests/db.pp
define pyflaskapp::db(
  $db_user,
  $db_password,
  $host     = $::fqdn,
  $port     = 27017,
  $database = $name,
) {
  class {'::mongodb::globals':
    manage_package_repo => true,
    bind_ip             => '0.0.0.0',  # listen on all interfaces so the app node can reach us
  } ->
  class {'::mongodb::client': } ->
  class {'::mongodb::server': } ->

  mongodb::db {$database:
    user     => $db_user,
    password => $db_password,
  }
}

Pyflaskapp::Db produces Nosql {
  user     => $db_user,
  password => $db_password,
  host     => $host,
  database => $database,
  port     => $port,
}
To achieve orchestration, we are using some new blocks in our manifests. There are a few new things here we will need to understand.

define is our entry point in these manifests, and the parameters between the parentheses tell us which data it needs. defines have long been available in the language and are essential for these jobs.

The last block is new and very important. Here is where we state that this DB module will produce, or make available, the stated information: user, password, host, database, and port.

Our DB tier makes this available for our app tier to know where the resources to use are.

Now we will make our app manifest. This will build our flask application:

# pyflaskapp/manifests/app.pp
define pyflaskapp::app(
  $db_name,
  $db_host,
  $db_port,
  $db_user,
  $db_password,
) {
  $pippackages = ['flask', 'pymongo']

  package {$pippackages:
    ensure   => 'installed',
    provider => 'pip',
  }

  file {'/flask_app':
    ensure => 'directory',
    mode   => '0775',
  }

  file {'/flask_app/templates':
    ensure => 'directory',
    mode   => '0775',
  }

  file {'/flask_app/index.py':
    ensure  => present,
    content => template('pyflaskapp/index.py.erb'),
  }

  file {'/flask_app/index.wsgi':
    ensure => present,
    source => 'puppet:///modules/pyflaskapp/index.wsgi',
  }

  file {'/flask_app/templates/index.html':
    ensure => present,
    source => 'puppet:///modules/pyflaskapp/index.html',
  }

  exec {'run_me':
    path    => ['/usr/bin', '/bin', '/sbin', '/usr/local/bin'],
    command => "python index.py &",
    cwd     => "/flask_app",
    unless  => "/usr/bin/test -f /flask_app/.running.txt",
  }

  file {'/flask_app/.running.txt':
    ensure  => file,
    content => "Running flask instance",
  }
}

Pyflaskapp::App consumes Nosql {
  db_name     => $database,
  db_host     => $host,
  db_port     => $port,
  db_user     => $user,
  db_password => $password,
}

Notice again the last block: this time we consume what the DB manifest produced. To use some of the values we receive from the database piece of the orchestration job, I generate the Flask start file from a template. In this fashion, we can deploy as many unique instances of our application as we like:

# pyflaskapp/templates/index.py.erb
import os

from flask import Flask, render_template, request, redirect
from pymongo import MongoClient


def connect():
    # The connection details below are filled in by Puppet from the values
    # the DB tier produced. Obviously, do not store your password as
    # plaintext in practice.
    connection = MongoClient("<%= @db_host -%>", 27017)
    handle = connection["<%= @db_name -%>"]
    handle.authenticate("<%= @db_user -%>", "<%= @db_password -%>")
    return handle


app = Flask(__name__)
handle = connect()


# Bind our index page to both www.domain.com/ and www.domain.com/index
@app.route("/index", methods=['GET'])
@app.route("/", methods=['GET'])
def index():
    userinputs = [x for x in handle.mycollection.find()]
    return render_template('index.html', userinputs=userinputs)


@app.route("/write", methods=['POST'])
def write():
    userinput = request.form.get("userinput")
    oid = handle.mycollection.insert({"message": userinput})
    return redirect("/")


@app.route("/deleteall", methods=['GET'])
def deleteall():
    handle.mycollection.remove({})
    return redirect("/")


# Remove the "debug=True" for production
if __name__ == '__main__':
    # Bind to PORT if defined, otherwise default to 5000.
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port, debug=True)

Finally, our module needs to bring this all together. We do this in our init.pp:

# pyflaskapp/manifests/init.pp
application pyflaskapp(
  String $db_user,
  String $db_password,
  String $host,
  $port = 27017,
) {
  pyflaskapp::db { $name:
    db_user     => $db_user,
    db_password => $db_password,
    host        => $host,
    port        => $port,
    export      => Nosql[$name],
  }

  pyflaskapp::app { $name:
    consume => Nosql[$name],
  }
}
The entry point here is the word application. It defines our stack and its components. Notice the export and consume relationship. We are almost ready to trigger this job.

Orchestration job

This is probably the last beer you will have on your desk as you work through this. It all comes down to site.pp now. Just as you are used to defining nodes in that main file, we now define a site: our stack’s building steps and which nodes get what! Add this to site.pp:


site {
  # get AppName from CouchDB's request
  $name = hiera('AppName')

  # get values from Consul and CouchDB to fulfill request
  pyflaskapp { $name:
    db_user     => hiera('dbuser'),
    db_password => hiera('dbpass'),
    host        => hiera('DBServerName'),
    nodes       => {
      Node[hiera('DBServerName')]  => [Pyflaskapp::Db[$name]],
      Node[hiera('AppServerName')] => [Pyflaskapp::App[$name]],
    },
  }
}
Let’s run the job!

Running orchestrator

Orchestrator is the tool within Puppet Enterprise that accomplishes these multi-node stack deployments. It is available via a REST API with secure token authentication.
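For the API route, a deploy can be triggered with a POST to the Orchestrator’s deploy command endpoint. The sketch below uses a placeholder hostname and assumes you already have an RBAC token from `puppet access login` (stored in ~/.puppetlabs/token by default); it cannot succeed outside a real PE install:

```shell
# Sketch only: PE_MASTER is a placeholder and TOKEN must be a real RBAC token.
PE_MASTER="puppet.example.com"
TOKEN="$(cat ~/.puppetlabs/token 2>/dev/null || echo TOKEN-PLACEHOLDER)"
PAYLOAD='{"environment": "production", "application": "Pyflaskapp"}'

curl -k -X POST "https://${PE_MASTER}:8143/orchestrator/v1/command/deploy" \
  -H "X-Authentication: ${TOKEN}" \
  -H 'Content-Type: application/json' \
  -d "${PAYLOAD}" || true  # placeholder host: fails outside a real PE install
```

The response includes the job ID and report URL mentioned earlier, which is what an external system would poll to report completion.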

The tool has two main parts. The first I want to show is the command ‘puppet app show‘. This utility works as a job plan that you can review: it checks that all dependencies are met, that node information looks good, and shows the order in which things will run:
[screenshot: color-coded ‘puppet app show‘ plan output]

If the plan review looks OK, we can go ahead and run the job. If one of the items does not pass validation, this tool will let you know. I added a conditional to my site.pp that only runs a job when all nodes are in a ‘ready‘ state. That way, I protect the dependencies even further:

# conditional block in site.pp
if (hiera('AppServerReady') == "not ready") or (hiera('DBServerReady') == "not ready") {
  fail("One of the servers is not ready")
}

To run the job, the command is as follows:

puppet job run --application Pyflaskapp --environment production

As you can see, we can also apply a job to a specific environment. The output is color coded as well:

[screenshot: ‘puppet job run‘ output]

Our multi-tier stack is now ready for use. It is a Flask app that I took from somewhere on the web and modified along the way:

Flask node:
[screenshot: the Flask app served from the new node]

Mongo node:
[screenshot: the Mongo database on the DB node]
And there you have it! A full stack deployment with Puppet!

Thanks for reading.



