In this blog post, I’ll tell you the whole story, from my zero knowledge of Windows administration to an almost fully automated construction of a Windows build machine image.
But first, let’s digress a bit to explain the context in which we operate our builds.
Our CI system is built around Jenkins, with a specific twist. We run the Jenkins master on our own infrastructure and our build slaves on AWS EC2. The reason behind this choice is out of the scope of this article (but you can still ask me, I’ll happily answer).
So, we’re using the Jenkins EC2 plugin, and a Jenkins S3 Plugin revamped by yours truly. We produce somewhat large binary artifacts when building our client software, and the bandwidth between EC2 and our master is not that great (and is expensive), so using the aforementioned patch I contributed, we host all our artifacts in S3, fully managed by our out-of-AWS Jenkins master.
The problem I faced when starting to explore the intricate world of Windows in relation to Jenkins slaves is that we wanted to keep the Linux model we had: on-demand slaves spawned by the master when scheduling a build. Unfortunately, the current state of the Jenkins EC2 plugin only supports Linux slaves.
The EC2 plugin for Linux slaves works like this: once the slave instance is up, the master runs the following on it over ssh:

java -jar slave.jar
The stdin and stdout of the slave.jar process are then connected to the Jenkins master through an ssh tunnel. I needed to replicate this behavior. In the Windows world, ssh is nonexistent. You can find some native implementations (like FreeSSHd or some commercial ones), but none of those options was easy to automate, or they simply didn’t work.
In the Windows world, remote process execution is achieved through Windows Remote Management, WinRM for short. WinRM is an implementation of the WSMAN specification. It gives access to the Windows Management Instrumentation, which exposes hardware counters (à la SNMP or IPMI in the Unix world).
One component of WinRM is WinRS: Windows Remote Shell. This is the part that allows running remote commands. Recent Windows versions (at least since Server 2003) ship with WinRM installed (but not started by default).
WinRM is an HTTP/SOAP based protocol. By default, the payload is encrypted if the protocol is used in a Domain Controller environment (in this case, it uses Kerberos), which will not be our case on EC2.
Digging further, I found two client implementations:
I started integrating Overthere into the ec2-plugin but encountered several incompatibilities, most notably that Overthere depended on more recent versions of some libraries than Jenkins itself.
I finally decided to create my own WinRM client implementation and released Windows support for the EC2 plugin. This hasn’t been merged upstream, and should still be considered experimental.
We’ve been using this version of the plugin for a couple of months and it works, but to be honest WinRM doesn’t seem to be as stable as ssh would be. There are times the slave is unable to start correctly because WinRM abruptly stops working (especially shortly after the machine boots).
So all is great: we know how to execute commands remotely from Jenkins. But that’s not enough for our sysadmin needs. In particular, we need to be able to create a Windows AMI that contains all the software required to build our own applications.
Since I’m a long-time Puppet user (which you certainly noticed if you’ve read this blog in the past), using Puppet to configure our Windows build slaves was the only possibility. So we need to run Puppet on a Windows base AMI, then create an AMI from there that will be used for our build slaves. And if we can make this process repeatable and automatic, that’d be wonderful.
In the Linux world, this task is usually devoted to tools like Packer or Veewee (which, by the way, supports provisioning Windows machines). Unfortunately, Packer, which is written in Go, doesn’t yet support Windows, and Veewee doesn’t support EC2.
That’s the reason I ported the small implementation I wrote for the Jenkins EC2 plugin to a WinRM Go library. This was the perfect pet project to learn a new language :)
So, with all those tools, we’re ready to start our project. But there’s a caveat: WinRM is not enabled by default on Windows. So before automating anything, we need to create a Windows base AMI that has the necessary tools to further automate the installation of our build tools.
There’s a service running on the AWS Windows AMIs called EC2Config that does the following at first boot:
On the first and subsequent boots, it also does:
One problematic thing with Windows on EC2 is that the Administrator password is randomly generated at first boot. That means that to do anything further on the machine (usually using Remote Desktop to administer it), you first need to ask AWS for the password (with the command line: aws ec2 get-password-data).
Next, we might also want to set a custom password instead of this dynamic one. We might also want to enable WinRM and install several utilities that will help us later.
To do that we can inject specific AMI user-data at the first boot of the Windows base AMI. This user-data can contain one or more cmd.exe or Powershell scripts that will get executed at boot.
I created this Windows bootstrap Gist (actually I forked and edited the part I needed) to prepare the slave.
First, we’ll create a Windows security group allowing incoming WinRM, SMB and RDP:
aws ec2 create-security-group --group-name "Windows" --description "Remote access to Windows instances"
# WinRM
aws ec2 authorize-security-group-ingress --group-name "Windows" --protocol tcp --port 5985 --cidr <YOURIP>/32
# Incoming SMB/TCP
aws ec2 authorize-security-group-ingress --group-name "Windows" --protocol tcp --port 445 --cidr <YOURIP>/32
# RDP
aws ec2 authorize-security-group-ingress --group-name "Windows" --protocol tcp --port 3389 --cidr <YOURIP>/32
Now, let’s start our base image with the following user-data (let’s put it into userdata.txt):
<powershell>
Set-ExecutionPolicy Unrestricted
icm $executioncontext.InvokeCommand.NewScriptBlock((New-Object Net.WebClient).DownloadString('https://gist.github.com/masterzen/6714787/raw')) -ArgumentList "VerySecret"
</powershell>
This powershell script will download the Windows bootstrap Gist and execute it, passing the desired administrator password.
Next we launch the instance:
aws ec2 run-instances --image-id ami-4524002c --instance-type m1.small --security-groups Windows --key-name <YOURKEY> --user-data "$(cat userdata.txt)"
Unlike what is written in the EC2Config documentation, the user-data must not be Base64 encoded.
Note, the first boot can be quite long :)
After that we can connect through WinRM with the “VerySecret” password. To check we’ll use the WinRM Go tool I wrote and talked about above:
./winrm -hostname <publicip> -username Administrator -password VerySecret "ipconfig /all"
We should see the output of the ipconfig command.
Note: in the next winrm commands, I’ve omitted the various credentials to increase legibility (a future version of the tool will allow reading a config file; meanwhile we can create an alias).
A few caveats:

- the bootstrap script downloads everything it needs with System.Net.WebClient, just like the user-data snippet above
- the default WinRS memory quota is too small for our needs; raise it with: winrm set winrm/config/winrs @{MaxMemoryPerShellMB="1024"}. Unfortunately, this is completely broken in Windows Server 2008 unless you install this Microsoft hotfix
The linked bootstrap code doesn’t install this hotfix, because I’m not sure I can redistribute the file; that’s an exercise left to the reader :)

Now that we have our base system with WinRM and Puppet installed by the bootstrap code, we need to create a derived AMI that will become our base image later, when we create our different Windows machines.
aws ec2 create-image --instance-id <ourid> --name 'windows-2008-base'
For a real-world example we might have defragmented and blanked the free space of the root volume before creating the image (on Windows you can use sdelete for this task).
Note that we don’t run the EC2Config sysprep prior to creating the image, which means the first boot of any instance created from this image won’t run the whole boot sequence, and our Administrator password will not be reset to a random one.
Now that we have this base image, we can start deriving it to create other images, but this time using Puppet instead of a Powershell script. Puppet was installed on the base image by virtue of the Powershell bootstrap we used as user-data.
First, let’s get rid of the current instance and run a fresh one coming from the new AMI we just created:
aws ec2 run-instances --image-id <newami> --instance-type m1.small --security-groups Windows --key-name <YOURKEY>
We’re going to run Puppet in masterless mode for this project. So we need to upload our set of manifests and modules to the target host.
One way to do this is to connect to the host with SMB over TCP (which our base image supports):
sudo mkdir -p /mnt/win
sudo mount -t cifs -o user="Administrator%VerySecret",uid="$USER",forceuid "//<instance-ip>/C\$/Users/Administrator/AppData/Local/Temp" /mnt/win
Note how we’re using an Administrative Share (the C$ above). On Windows, the Administrator user has access to the local drives through Administrative Shares without having to share them as for normal users.
The user-data script we ran in the base image opens the windows firewall to allow inbound SMB over TCP (port 445).
We can then just zip our manifests/modules, send the file over there, and unzip remotely:
zip -q -r /mnt/win/puppet-windows.zip manifests/jenkins-steam.pp modules -x .git
./winrm "7z x -y -oC:\\Users\\Administrator\\AppData\\Local\\Temp\\ C:\\Users\\Administrator\\AppData\\Local\\Temp\\puppet-windows.zip | FIND /V \"ing \""
And finally, let’s run Puppet there:
./winrm "\"C:\\Program Files (x86)\\Puppet Labs\\Puppet\\bin\\puppet.bat\" apply --debug --modulepath C:\\Users\\Administrator\\AppData\\Local\\Temp\\modules C:\\Users\\Administrator\\AppData\\Local\\Temp\\manifests\\site.pp"
And voilà, shortly we’ll have a running, configured instance. Now we can create a new image from it and use it as our Windows build slave in the EC2 plugin configuration.
Puppet on Windows is not like your regular Puppet on Unix. Let’s focus on what works or not when running Puppet on Windows.
The obvious ones known to work:
User: Puppet can create/delete/modify local users. The Security Identifier (SID) can’t be set. User names are case-insensitive on Windows. To my knowledge you can’t manage domain users.
Group: Puppet can create/delete/modify local groups. Puppet can’t manage domain groups.
Package: Puppet can install MSI or exe installers present on a local path (you need to specify the source). For a more comprehensive package system, check the paragraph about Chocolatey below.
Service: Puppet can start/stop/enable/disable services. You need to specify the short service name, not the human-readable display name.
Exec: Puppet can run executables (any .exe, .com or .bat). But unlike on Unix, there is no shell, so you might need to wrap the commands with cmd /c (see the sketch after this list). Check the Powershell exec provider module for a more comprehensive Exec system on Windows.
Host: works the same as for Unix systems.
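To make this concrete, here is a minimal sketch of a package and an exec resource on a Windows node (the package name, installer path and command are illustrative, not taken from my real manifests):

# the resource name should match the DisplayName of the installed software
package { "7-Zip 9.20 (x64 edition)":
  ensure => installed,
  source => "C:\\install\\7z920-x64.msi",
}

# del is a cmd.exe built-in, so it must be wrapped with cmd /c
exec { "clean-temp":
  command => "cmd.exe /c del /q C:\\Temp\\*.tmp",
  path    => "C:\\Windows\\System32",
}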
Of course that’s expected, mostly because of the packages being used. Most Forge modules, for instance, target Unix systems. Some Forge modules are Windows-only, but they tend to cover specific Windows aspects (like the registry, Powershell, etc…); still, make sure to check those, as they are invaluable in your module portfolio.
You certainly know that Windows paths are not like Unix paths. They use \ where Unix uses /.
The problem is that in most languages (including the Puppet DSL), \ is considered an escape character when used in double-quoted string literals, so it must be doubled: \\.
Puppet single-quoted strings don’t understand all of the escape sequences double-quoted strings know (they only parse \' and \\), so it is safe to use a lone \ as long as it is not the last character of the string.
Why is that?
Let’s take the path C:\Users\Administrator\: when enclosed in a single-quoted string, 'C:\Users\Administrator\', you will notice that the last 2 characters are \', which forms an escape sequence, and thus for Puppet the string is not correctly terminated by a single quote.
The safe way to write a single-quoted path like the above is to double the final backslash: 'C:\Users\Administrator\\', which looks a bit strange. My suggestion is to double all \ in all kinds of strings, for simplicity.
Finally, when writing a UNC path in a string literal you need to use four backslashes: \\\\host\\path.
Back to the slash/backslash problem, there’s a simple rule: if the path is directly interpreted by Puppet, then you can safely use /. If the path is destined for a Windows command (like in an Exec), use \.
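A quick illustration of that rule (the file and command here are just examples):

# this path is interpreted by Puppet itself, so forward slashes are fine
file { "C:/Temp/example.txt":
  ensure  => file,
  content => "managed by puppet",
}

# this path is passed verbatim to cmd.exe, so it must use backslashes
exec { "copy-example":
  command => "cmd.exe /c copy C:\\Temp\\example.txt C:\\Temp\\example.bak",
  path    => "C:\\Windows\\System32",
  require => File["C:/Temp/example.txt"],
}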
Here’s the gist of it for the possible types of paths in Puppet resources:

- paths interpreted by Puppet itself can use /, for coherence with your other manifests
- paths handed to Windows programs should use \, but beware that most Windows executables require \ paths (especially cmd.exe), as these are used directly by Windows

To identify a Windows client in a Puppet manifest you can use the kernel, operatingsystem and osfamily facts, which all resolve to windows.
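For example, a manifest shared between platforms could branch on one of these facts (the profile class names are placeholders):

case $osfamily {
  "windows": { include profile::windows_build }
  default:   { include profile::unix_build }
}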
Other facts, like hostname, fqdn, domain, the memory* ones, processorcount, architecture, hardwaremodel and so on work like their Unix counterparts.
Networking facts also work, but with the Windows interface name (i.e. Local_Area_Connection), so for instance the local IP address of a server will be in ipaddress_local_area_connection. The ipaddress fact also works, but on my Windows EC2 server it returns a link-local IPv6 address instead of the IPv4 Local Area Connection address (but that might be because it’s running on EC2).
We’ve seen that Puppet Package type has a Windows provider that knows how to install MSI and/or exe installers when provided with a local source. Unfortunately this model is very far from what Apt or Yum is able to do on Linux servers, allowing access to multiple repositories of software and on-demand download and installation (on the same subject, we’re still missing something like that for OSX).
Fortunately, in the Windows world there’s Chocolatey. Chocolatey is a package manager (based on NuGet) and a public repository of software (there’s no easy way to have a private repository yet). If you read the bootstrap code I used earlier, you’ve seen that it installs Chocolatey.
Chocolatey is quite straightforward to install (beware that it doesn’t work for Windows Server Core, because it is missing the shell Zip extension, which is the reason the bootstrap code installs Chocolatey manually).
Once installed, the chocolatey command allows installing/removing software that might come in several flavors: either command-line packages or install packages. The first only gives access to the software through the command line, whereas the second does a full installation of the software.
So for instance to install Git on a Windows machine, it’s as simple as:
chocolatey install git.install
To make things much more enjoyable for Puppet users, there’s a Chocolatey Package Provider Module on the Forge allowing the following:
package { "cmake":
  ensure   => installed,
  provider => "chocolatey",
}
Unfortunately at this stage it’s not easily possible to host your own chocolatey repository. But it is possible to host your own chocolatey packages and use the source parameter. In the following example we assume that I packaged cmake version 2.8.12 (which I did, by the way), and hosted this package on my own webserver:
# download_file uses powershell to emulate wget
# check here: http://forge.puppetlabs.com/opentable/download_file
download_file { "cmake":
  url                   => "http://chocolatey.domain.com/packages/cmake.2.8.12.nupkg",
  destination_directory => "C:\\Users\\Administrator\\AppData\\Local\\Temp\\",
}
->
package { "cmake":
  ensure   => installed,
  provider => "chocolatey",
  source   => "C:\\Users\\Administrator\\AppData\\Local\\Temp\\",
}
You can also decide that chocolatey will be the default provider by adding this to your site.pp:
Package {
provider => "chocolatey"
}
Finally, read how to create chocolatey packages if you wish to build your own.
There’s one final thing the Windows Puppet user must take care of: line endings and character encodings.
If you use Puppet File resources to install files on a Windows node, you must be aware that file content is transferred verbatim from the master (whether you use content or source).
That means that if the file uses Unix LF line endings, the file content on your Windows machine will use the same.
If you need Windows line endings, make sure your file on the master (or the content in the manifest) uses Windows \r\n line endings.
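For instance, here is a minimal sketch embedding CRLF line endings directly in the manifest (the file path is made up):

file { "C:/Temp/readme.txt":
  ensure  => file,
  # \r\n in a double-quoted Puppet string produces a real CRLF in the file
  content => "first line\r\nsecond line\r\n",
}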
That also means that your text files might not use a Windows character set. It’s less problematic nowadays than it could have been in the past, because of the ubiquitous UTF-8 encoding. But be aware that the default character set on western Windows systems is CP-1252, not UTF-8 or ISO-8859-15. It’s possible that cmd.exe scripts not encoded in CP-1252 won’t work as intended if they use characters outside the ASCII range.
I hope this article will help you tackle the hard task of provisioning Windows VMs and running Puppet on Windows. It is the result of several hours of hard work finding the right tools and acquiring the necessary Windows knowledge.
During this journey, I started learning a new language (Go), remembered how much I dislike Windows (and its administration), contributed to several open-source projects, discovered a whole lot about Puppet on Windows, and finally learnt a lot about WinRM/WinRS.
Stay tuned on this channel for more articles (when I have the time) about Puppet, programming and/or system administration :)
Today we’ll focus on the compiler.
The compiler is at the heart of Puppet, master/agent or masterless. Its responsibility is to transform the AST into a set of resources called the catalog that the agent can consume to perform the necessary changes on the node.
You can see the compiler as a function that takes the AST and the facts as input and returns the catalog.
The compiler lives in the lib/puppet/parser/compiler.rb file, and more specifically in the Puppet::Parser::Compiler class. When a node connects to a master to ask for a catalog, the Indirector directs the request to the compiler.
In a classic master/agent system, the agent does a REST find catalog request to the master. The master catalog indirection is configured to delegate to the compiler. This happens in the lib/puppet/indirector/catalog/compiler.rb file. Check this previous article about the Indirector if you want to know more.
The indirector request contains two things:
When we’re talking about catalog, in the Puppet system it can mean two distinct things:
The first one is the product of the compiler (which we’ll delve into in this article). The second one is formed by the transformation of the first one in the agent. It is the latter that we usually call the puppet catalog.
Here is a simple manifest and the containment catalog that I obtained after compiling:
class test {
  file { "/tmp/a":
    content => "test!",
  }
}

include test
And here is the produced catalog:
You’ll notice that as its name implies, the containment catalog is a graph of classes and resources that follows the structure of the manifest.
In a master/agent system the facts are coming from the request in a serialized form. Those facts were created by calling Facter on the remote node.
Once unserialized, the facts are cached locally as YAML (as per the default terminus for facts on a master). You can find them in the $vardir/yaml/facts/$certname.yaml file.
At the same time, the compiler catalog terminus computes some server facts that are injected into the current node instance.
In Puppet there are several possibilities to store node definitions. They can be defined by node {} blocks in the site.pp, by an ENC, in an LDAP directory, etc…
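As a reminder, a node block in site.pp looks like this (the node name and class are placeholders):

node "web01.domain.com" {
  include apache
}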
Before the compiler can start, it needs to create an instance of the Puppet::Node class and fill it with the node information.
The node indirection terminus is controlled by the node_terminus puppet setting, which by default is plain. This terminus just creates a new empty instance of a Puppet::Node.
In an ENC setup, the terminus for the node indirection will be exec. This terminus will create a Puppet::Node instance initialized with a set of classes and global parameters the compiler will be able to use.
The plain terminus for nodes calls Puppet::Node#fact_merge. This method finds the current set of facts for this node. In the plain case it involves reading the YAML facts we wrote to disk in the last chapter, and merging them into the current node instance parameters.
Back to the compiler catalog terminus: this one tries to find the node with the given request information and, if that fails, by using the node certname. Remember that the request to get a catalog from REST matches /catalog/node.domain.com, in which case the request key is node.domain.com.
After that, we really enter the compiler code, when the compiler catalog terminus calls Puppet::Parser::Compiler.compile, which creates a new Puppet::Parser::Compiler instance, giving it our node instance.
When creating this compiler instance, the following is created:

- an empty catalog (an instance of Puppet::Resource::Catalog). This one will hold the result of the compilation.
- a top scope (an instance of Puppet::Parser::Scope)

If the given node was coming from an ENC, the catalog is bootstrapped with the known node classes.
Once done, the compile method is called on the compiler instance. The first thing done is to bootstrap the top scope with the node parameters (which contain the global data coming from the ENC, if one is used, and the facts).
When we left the Parser post, we had obtained an AST. This AST is a tree of AST instances that implement the guts of the Puppet language.
In this previous article we left aside 3 types of AST:

- the Hostclass AST (Puppet classes)
- the Definition AST
- the Node AST
Those are different in the sense that we don’t strictly evaluate them during compilation (more on this step later). Instead, they are instantiated as part of the initial import of the known types. If you’re wondering why I spelled the Class AST as Hostclass, it’s because that’s how it is spelled in the Puppet code; the reason being that class is a reserved word in Ruby :)
Using a lazy evaluation scheme, Puppet keeps (per environment, actually) a list of all the parsed known types (classes, definitions and nodes that the parser encountered during parsing); this is called the known types.
When this list is first accessed, if it doesn’t exist, Puppet triggers the parser to populate it. This happens in Puppet::Node::Environment.known_resource_types, which calls the import_ast method with the result of the parsing phase.
import_ast adds to the known types an instance of every definition, hostclass and node returned by their respective instantiate methods.
Let’s have a closer look at the hostclass instantiate:
def instantiate(modname)
  new_class = Puppet::Resource::Type.new(:hostclass, @name)
  all_types = [new_class]
  code.each do |nested_ast_node|
    if nested_ast_node.respond_to? :instantiate
      all_types += nested_ast_node.instantiate(modname)
    end
  end
  return all_types
end
So instantiate returns an array of Puppet::Resource::Type of the given type. You’ll notice that the hostclass code above analyzes its current class AST children for other ‘instantiable’ AST elements that will also end up in the known types.
The known types I’ve been talking about for a while all live in the Puppet::Resource::TypeCollection object. There’s one per Puppet environment, in fact.
This object’s main responsibility is storing all known classes, nodes and definitions so they can be easily accessed by the compiler. It also watches all files loaded by the parser, so that it can trigger a re-parse when one of them is updated. It also serves as the Puppet class/module autoloader (when asked for an unknown class, it will first try to load it from disk and parse it).
Let’s open a parenthesis to explain a little bit what the scope is. The scope is an instance of Puppet::Parser::Scope and is simply a symbol table (as explained in the Dragon Book). It just keeps the values of Puppet variables.
It forms a tree, with the top scope (whose creation we saw earlier) being the root of all scopes. This tree contains one child per new namespace.
The scope supports two operations:

- looking up a variable’s value
- setting a variable
Look-up is done with the lookupvar method. If the variable is qualified it will directly ask the correct scope for its value. For instance $::hostname will directly fetch the top scope fact hostname.
Otherwise it will either return its value in the local scope if it exists, or delegate to the parent scope. This can happen up to the top scope. If the value can’t be found anywhere, the :undef ruby symbol will be returned.
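As a small illustration of those look-up rules (the variable names are invented):

class example {
  $local = "value"
  notice($local)       # found in the local scope
  notice($::hostname)  # qualified: fetched directly from the top scope (a fact)
  notice($nowhere)     # not found in any scope: resolves to :undef
}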
Note that this dynamic scope behavior will be removed in the next Puppet version, where only the local scope and the top scope will be supported. More information is available in this Scope and Puppet article.
Setting a variable is done with the setvar method. This method is called, for instance, by the AST class responsible for variable assignment (the AST::VarDef).
Along with regular variables, each scope has the notion of an ephemeral scope. An ephemeral scope is a special transient scope that only stores the regex capture variables $0 to $xy.
Each scope level maintains a stack of ephemeral scopes, which is by default empty.
In Puppet there are no scopes for language structures other than classes (and nodes and definitions), so inside the following if, an ephemeral scope is created and pushed on the stack to store the result of the regex match:
if $var =~ /test(.*)/ {
# here $0, $1... are available
}
When Puppet execution reaches the closing ‘}’, the ephemeral scope is popped from the ephemeral scope stack, removing the $0 definition.
lookupvar will also ask the ephemeral scope stack if needed.
Orthogonally, the scope instance will also store resource defaults.
And here we need to take a break from compilation to talk about AST evaluation, which I conveniently glossed over in the previous post on the Parser.
Every AST node (both branch and leaf ones) implements the evaluate method. This method takes a Puppet::Parser::Scope instance as a parameter. This is the scope instance that is valid at the moment we evaluate this AST node (usually the scope associated with the class containing the code we evaluate).
There are several outcomes possible after evaluation:

- the evaluation of children AST nodes (for instance if, case and selectors need to evaluate the code in one of their children branches)
- the creation of a Puppet::Parser::Resource when encountering a puppet resource
- the creation of a Puppet::Resource::Type (more puppet classes)

When an AST node evaluates its children it does so by calling safeevaluate on them, which in turn will call evaluate. Safeevaluate will shield the caller from exceptions, and transform them into parse errors that can specify the line and file of the puppet instruction that triggered the problem.
Let’s go back to the compiler now. We left it after we populated the top scope with the node’s facts, and we still haven’t properly started the compilation phase itself.
Here is what happens after, in order:

- evaluation of the main class
- evaluation of the node AST code
- evaluation of the node classes given by the ENC, if any
- evaluation of the generators (definitions and collections)
- evaluation of the relationships
- application of the resource overrides
- resource finishing
- final checks and transformation to a resource catalog
After that, what remains is the containment catalog. This one will be transformed into a resource containment catalog. We call resource catalog an instance of Puppet::Resource::Catalog where all Puppet::Parser::Resource have been transformed into Puppet::Resource instances.
Let’s now see in order the list of operations we outlined above and that form the compilation.
The main class is a hidden class where all code outside any definition, node or class ends up. It’s a kind of top class inside which every other class is nested. This class is special because it has an empty name.
Evaluating the main class means:

- creating a resource for this class (an instance of Puppet::Parser::Resource) whose scope is the top scope
- adding this resource to the catalog
- evaluating this resource

Let’s focus on this last step, which happens in Puppet::Parser::Resource.evaluate.
It mainly involves getting the Puppet::Resource::Type instance matching our class (its type, in fact) from the known types, and then calling the Puppet::Resource::Type.evaluate_code method.
I’m putting aside the main class evaluation to talk a little bit about code evaluation of a given class because that’s something we’ll see for every class or node during compilation.
This happens during Puppet::Resource::Type.evaluate_code, which essentially does:
We saw in the Puppet Parser post how the AST was produced. Eventually some of those AST nodes end up in the code element of a given puppet class (you can refer to the Puppet grammar and Puppet::Parser::AST::Hostclass for the code), in the form of an ASTArray (which is an array of AST nodes).
As for the main class, the current node compilation phase:

- creates a resource for the node
- adds this resource to the catalog
- evaluates this node resource
This last evaluation will execute the given node AST code.
If the node was provided by an ENC, the compiler will then evaluate those classes. This is the same process as for the main class, where for every class we create a resource, add it to the catalog and then evaluate it.
In Puppet, the generators are the entities that are able to spawn new resources:

- definitions
- collections (including the realization of virtual and exported resources)
This part of the compilation loops, calling evaluate_definitions and evaluate_collections until neither of them produces new resources.
During the AST code evaluation, if the compiler encounters a definition call, Puppet::Parser::AST::Resource.evaluate will be called (as for every resource).
Since this resource comes from a definition, a type resource will be instantiated and added to the catalog. This resource will not be evaluated at this stage.
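Here is a minimal sketch of such a definition and a call to it (the names and content are invented for the example):

define tmpfile($content) {
  file { "/tmp/${name}":
    content => $content,
  }
}

# this call adds an unevaluated resource to the catalog; the inner file
# resource only appears once evaluate_definitions evaluates it
tmpfile { "motd":
  content => "hello\n",
}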
Later, when evaluate_definitions is called, it will pick up any resource that hasn’t been evaluated yet (which is the case of our definition resources) and evaluate them.
This operation might in turn create more unevaluated resources (i.e. new definitions spawning more definition resources), which will be evaluated in a subsequent pass over evaluate_definitions.
When the parser parses a collection, which is defined like this in the Puppet language:
File <<| tag == 'key' |>>
it creates an AST node of type Puppet::Parser::AST::Collection. The same happens if you use the realize function.
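For instance, realizing a virtual resource goes through the same collection machinery (the user name is just an example):

# a virtual resource: declared, but not managed until it is realized
@user { "jenkins":
  ensure => present,
}

# realize() registers a collector that evaluate_collections will run later
realize(User["jenkins"])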
Later, when the compiler evaluates code and encounters this collection instance, it will create a Puppet::Parser::Collector and register it to the compiler.
Even later, during evaluate_collections, the evaluate method of all the registered collectors will be called. This method will fetch either exported resources from storeconfigs or virtual resources, and create Puppet::Parser::Resource instances that are registered to the compiler.
If the collector has created all its resources, it is removed from the compiler.
The current compiler holds the list of relationships defined with the -> class of relationship operators (but not the ones defined with the require or before meta-parameters).
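To make the distinction concrete (the resources are illustrative):

# this relationship is recorded in the compiler relationship list
package { "ntp":
  ensure => installed,
}
->
service { "ntpd":
  ensure => running,
}

# writing require => Package["ntp"] on the service instead would set the
# meta-parameter directly and would not go through that list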
During code evaluation, when the compiler encounters the relationship AST node (an instance of Puppet::Parser::AST::Relationship), it will register a Puppet::Parser::Relationship instance to the compiler.
During the evaluate_relationships method of the compiler, every registered relationship will be evaluated. This evaluation simply adds the destination resource reference to the source resource meta-parameter matching the operator.
The next compilation phase consists of adding all the overrides we discovered during the AST code evaluation. Normally overrides are applied as soon as they are discovered, but it can happen that an override (especially a collection override) cannot be applied yet because the resources it should apply to have not been created.
Applying an override consists of setting a given resource parameter to the overridden value.
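A classic example of an override through class inheritance (the class and file are invented):

class base {
  file { "/tmp/a":
    mode => "0600",
  }
}

class special inherits base {
  # the override resets the mode parameter of the inherited resource
  File["/tmp/a"] { mode => "0644" }
}

include special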
During this phase, the compiler will call the finish method on every created resource.
This method is responsible for:
The next step in the compilation process is to set all the meta-parameters of our created resources, starting from the main class and walking the catalog from there.
Once everything has been done, the compiler runs some checks to make sure all overrides and collections have been evaluated.
Then the catalog is transformed into a Puppet::Resource catalog (which doesn’t change its layout, just the instances of its vertices).
I hope you now have a better view of the compilation process. As you’ve seen, compilation is a complex process, which is one of the reasons it can take some time. But that’s the price to pay to produce a data-only graph tailored to one host that can be applied on that host.
Stay tuned here for the next episode of my Puppet Internals series of posts. The next installment will certainly cover the Puppet transaction system, whose role is to apply the catalog on the agent.
My idea was to check the common stacks and see which one would deliver the best concurrency. This article is a follow-up to my previous post about puppet-load and puppet master benchmarking.
I decided to try the following stacks:
The setup is the following:
To recap, m1.large instances have:

- 7.5 GiB of memory
- 2 virtual cores (4 EC2 Compute Units)
- a 64-bit platform
All the benchmarks were run on the same instance couples to prevent skew in the numbers.
The master uses my own production manifests, consisting of about 100 modules. The node for which we’ll compile a catalog contains 1902 resources exactly (which makes it a big catalog).
There is no storeconfigs involved at all (this was to reduce setup complexity).
The methodology is to set up the various stacks on the master instance and run puppet-load on the client instance. To ensure everything is hot on the master, a first run of the benchmark is performed at full concurrency. Then multiple runs of puppet-load are performed, simulating an increasing number of clients. This pre-heat phase also makes sure the manifests are already parsed and no I/O is involved.
I tuned all stacks as best as I could, and care was taken for the master instance to never swap (all the benchmarks consumed about 4 GiB of RAM or less).
Essentially, a puppet master compiling catalogs is a CPU-bound process (just because a master speaks HTTP doesn’t mean its workload is a typical webserver workload). That means that during the compilation phase of a client connection, you can be guaranteed that puppet will consume 100% of a CPU core.
Which essentially means that there is usually little benefit in using more puppet master processes than CPU cores on a server.
When we want to scale a puppet master server, there is a rough computation that allows us to see how it will work.
Here are the elements of our problem:

- 2000 nodes
- a 30 minute check-in interval
- a catalog that takes about 10s to compile
- a master with 8 CPU cores
A 30 minute interval means that every 30 minutes we must compile 2000 catalogs for our 2000 nodes. That leaves us with 2000/30 = 66 catalogs per minute.
That’s about one new client checking in every second.
Since we have 8 CPUs, we can accommodate 8 catalog compilations in parallel, but not more (because CPU time is a finite quantity).
Since 66/8 = 8.25, each core must accommodate about 8 clients per minute, which means each client must be serviced in less than 60/8.25 = 7.27s.
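Condensed into a single formula (using the exact figures rather than the rounded ones above):

max compile time per catalog = (interval × cores) / nodes = (1800 s × 8) / 2000 = 7.2 s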
Since our catalogs take about 10s to compile (in my example), we’re clearly in trouble and we would need to either add more master servers or increase our client sleep time (or not compile catalogs, but that’s another story).
Let’s first compare our favorite stacks for an increasing concurrent clients number (increasing concurrency).
For setups that require a fixed number of workers (Passenger, Mongrel), those were set up with 25 puppet master workers. This fitted in the available RAM.
For JRuby, I had to use (at the time of writing) jruby-head because of a bug in 1.6.5.1. I also had to comment out the Puppet execution system (in lib/puppet/util.rb).
Normally this sub-system is in use only on clients, but when the master loads the types it knows about for validation, it also autoloads the providers. Those check whether some support commands are available by trying to execute them (yes, I’m talking to you, rpm and yum providers).
I also had to comment out the code where puppet tries to become the puppet user, because that’s not supported under JRuby.
JRuby was run with Sun java 1.6.0_26, so it couldn’t benefit from the invokedynamic work that went into Java 1.7. I fully expect this feature to improve the performances dramatically.
The main metric I’m using to compare stacks is the TPS (transaction per seconds). This is in fact the number of catalogs a master stack can compile in one second. The higher the better. Since compiling a catalog on our server takes about 12s, we have TPS numbers less than 1.
Here are the main results:
And here is the failure rate:
First, notice that some of the stacks exhibited failures at high concurrency. The errors I could observe were client timeouts, even though I had configured a large client-side timeout (around 10 minutes). This is what happens when too many clients connect at the same time: everything slows down until the clients time out.
In this graph, I plotted the min, average, median and max time of compilation for a concurrency of 16 clients.
Of course, the best case is when min and max are almost the same.
For the stacks that support a configurable number of workers (mongrel and passenger), I wanted to check what impact that number could have. I strongly believe that there’s no reason to use a large number (compared to I/O-bound workloads).
Besides being fun, this project shows why Passenger is still the best stack to run Puppet. JRuby shows great promise, but I had to massage the Puppet codebase to make it run (I might publish the patches later).
It would be really awesome if we could settle on a corpus of manifests to allow comparing benchmark results between Puppet users. Anyone want to try to fix this?