
Benchmarking Puppet Stacks


I decided this weekend to try the most popular puppet master stacks and benchmark them with puppet-load (a tool I wrote to simulate concurrent clients).

My idea was to check the common stacks and see which one would deliver the best concurrency. This article is a follow-up to my previous post about puppet-load and puppet master benchmarking.

Methodology

I decided to try the following stacks:

  • Apache and Passenger, which is the blessed stack, with MRI 1.8.7 and 1.9.2
  • Nginx and Mongrel
  • JRuby with mizuno

The setup is the following:

  • one m1.large ec2 instance as the master
  • one m1.small ec2 instance as the client (in the same availability zone if that matters)

To recap, m1.large instances are:

  • 2 virtual cores (with 2 EC2 Compute Units each)
  • 7.5 GiB of RAM

All the benchmarks were run on the same pair of instances to prevent skew in the numbers.

The master uses my own production manifests, consisting of about 100 modules. The node for which we’ll compile a catalog contains 1902 resources exactly (which makes it a big catalog).

There is no storeconfigs involved at all (this was to reduce setup complexity).

The methodology is to set up the various stacks on the master instance and run puppet-load on the client instance. To ensure everything is hot on the master, a first benchmark run is done at full concurrency. Then multiple runs of puppet-load are performed, simulating an increasing number of clients. This pre-heat phase also makes sure the manifests are already parsed and no I/O is involved.

Tuning was done as best as I could on all stacks, and care was taken for the master instance to never swap (all the benchmarks consumed about 4 GiB of RAM or less).

Puppet Master workload

Essentially, a puppet master compiling catalogs is a CPU-bound process (just because a master speaks HTTP doesn't mean its workload is a typical webserver workload). That means that during the compilation phase of a client connection, you can be guaranteed that puppet will consume 100% of a CPU core.

Which essentially means that there is usually little benefit in running more puppet master processes than there are CPU cores on the server.

A little bit of scaling math

When we want to scale a puppet master server, there is a rough computation that allows us to see how it will work.

Here are the elements of our problem:

  • 2000 clients
  • 30 minutes sleep interval, clients evenly distributed in time
  • a master with 8 CPU cores and 8 GiB of RAM
  • our average catalog compilation is 10s

A 30 minute interval means that every 30 minutes we must compile 2000 catalogs for our 2000 nodes. That leaves us with 2000/30 ≈ 66 catalogs per minute.

That's a new client checking in about every second.

Since we have 8 CPU cores, we can accommodate 8 catalog compilations in parallel, not more (because CPU time is a finite quantity).

Since 66/8 = 8.25, each core must compile about 8.25 catalogs per minute, which means each catalog must be compiled in less than 60/8.25 = 7.27s.

Since our catalogs take about 10s to compile (in my example), we’re clearly in trouble and we would need to either add more master servers or increase our client sleep time (or not compile catalogs, but that’s another story).
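
If you want to replay this back-of-the-envelope computation with your own numbers, here is a small Ruby sketch of the same arithmetic (done with exact fractions rather than the rounded 66, hence the slightly different cut-off):

# Rough puppet master capacity estimate (hypothetical numbers from above)
clients      = 2000      # number of nodes
interval     = 30 * 60   # sleep interval in seconds
cores        = 8         # CPU cores on the master
compile_time = 10.0      # average catalog compilation time in seconds

catalogs_per_minute = clients / (interval / 60.0)   # => ~66.7
per_core_per_minute = catalogs_per_minute / cores   # => ~8.3
max_compile_time    = 60.0 / per_core_per_minute    # => ~7.2

puts "We must compile #{catalogs_per_minute.round(1)} catalogs per minute"
puts "Each catalog must be compiled in less than #{max_compile_time.round(2)}s"
puts(compile_time <= max_compile_time ? "We're fine" : "Not enough CPU: add masters or increase the sleep interval")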

Results

Comparing our stacks

Let's first compare our favorite stacks for an increasing number of concurrent clients (increasing concurrency).

For setups that require a fixed number of workers (Passenger, Mongrel), those were set up with 25 puppet master workers, which fit in the available RAM.

For JRuby, I had to use jruby-head (at the time of writing) because of a bug in 1.6.5.1. I also had to comment out the Puppet execution sub-system (in lib/puppet/util.rb).

Normally this sub-system is used only on clients, but when the master loads the types it knows for validation, it also autoloads the providers. Those check if some support commands are available by trying to execute them (yes, I'm talking to you, rpm and yum providers).

I also had to comment out the code where puppet tries to become the puppet user, because that's not supported under JRuby.

JRuby was run with Sun Java 1.6.0_26, so it couldn't benefit from the invokedynamic work that went into Java 1.7. I fully expect this feature to improve performance dramatically.

The main metric I'm using to compare stacks is TPS (transactions per second). This is in fact the number of catalogs a master stack can compile in one second. The higher, the better. Since compiling a catalog on our server takes about 12s, we get TPS numbers of less than 1.

Here are the main results:

Puppet Master Stack / Catalogs compiled per second

And, here is the failure rate:

Puppet Master Stack / Failure rate

First, notice that some of the stacks exhibited failures at high concurrency. The errors I could observe were client timeouts, even though I configured a large client-side timeout (around 10 minutes). This is what happens when too many clients connect at the same time: everything slows down until the clients time out.

Fairness

In this graph, I plotted the min, average, median and max time of compilation for a concurrency of 16 clients.

Puppet Master Stack / fairness

Of course, the best case is when min and max are almost the same.

Digging into the number of workers

For the stacks that support a configurable number of workers (Mongrel and Passenger), I wanted to check what impact the worker count has. I strongly believe that there's no reason to use a large number of workers (unlike I/O-bound workloads).

Puppet Master Stack / Worker # influence

Conclusions

Besides being fun, this project shows why Passenger is still the best stack to run Puppet. JRuby shows great promise, but I had to massage the Puppet codebase to make it run (I might publish the patches later).

It would be really awesome if we could settle on a corpus of manifests to allow comparing benchmark results between Puppet users. Anyone want to try to fix this?

Puppet Internals: the parser


As more or less promised in my series of posts about Puppet Extension Points, here is the first post about Puppet Internals.

The idea is to produce a series of blog posts, each one about a Puppet sub-system.

Before starting, I first want to present the various sub-blocks that form Puppet, or Puppet: the Big Picture:

Puppet the Big Picture

I hope to be able to cover each of those sub-blocks in various posts, but today we'll focus on the Puppet Parser.

The Puppet Parser

The Puppet Parser's responsibility is to transform the textual manifests into a computer-usable data structure that can be fed to the compiler to produce the catalog. This data structure is called an AST (Abstract Syntax Tree).

The Puppet Parser is the combination of several sub-systems:

  • the lexer
  • the racc-based parser
  • the AST model

The Lexer

The purpose of the lexer is to read manifests character by character and to produce a stream of tokens. A token is just a symbol (combined with data) that represents a valid part of the Puppet language.

For instance, the lexer is able to find things such as (but not limited to):

  • reserved keywords (like case, class, define…)
  • quoted strings
  • identifiers
  • variables
  • various operators (like left parenthesis or right curly braces…)
  • regexes

Let’s take an example and follow what comes out of the lexer when scanning this manifest:

$variable = "this is a string"

And here is the stream of tokens that is the outcome of the lexer:

:VARIABLE(VARIABLE) {:line=>1, :value=>"variable"}
:EQUALS(EQUALS) {:line=>1, :value=>"="}
:STRING(STRING) {:line=>1, :value=>"this is a string"}

As you can see, a puppet token is the combination of a symbol and a hash.

Let's see how we achieved this result. First, you must know that the Puppet lexer is a regex-based system. Each token is defined as a regex (or a plain string). When reading a character, the lexer 'just' checks if one of the strings or regexes can match. If there is a match, the lexer emits the corresponding token.

Let’s take our example manifest (the variable assignment above), and see what happens in the lexer:

  1. read $ character
  2. no regex match, let’s read some more characters
  3. read ‘variable’, still no match, our current buffer contains $variable
  4. read ‘ ‘, oh we have a match against the DOLLAR_VARIABLE token regex
  5. this token is special, it is defined with a ruby block. When one of those tokens is read and matched, the block is executed.
  6. the block just emits the VARIABLE("variable") token

The lexer's scanner doesn't try every regex or string blindly; it does this in a particular order. In short, it tries to maximize the length of the matched string; in a word, the lexer is greedy. This helps remove ambiguity.
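
To make this more concrete, here is a deliberately tiny Ruby sketch of such a regex-based, greedy lexer. The token names mirror the ones above, but the code is illustrative only; it is not the actual Puppet::Parser::Lexer API:

# Illustrative regex-based lexer (not the real Puppet lexer).
# Each entry is: token name, regex, block turning the match into an emitted token (nil means skip).
TOKENS = [
  [:DOLLAR_VARIABLE, /\$(\w+)/,   ->(m) { [:VARIABLE, m[1]] }],
  [:EQUALS,          /=/,         ->(m) { [:EQUALS, m[0]] }],
  [:STRING,          /"([^"]*)"/, ->(m) { [:STRING, m[1]] }],
  [:WHITESPACE,      /\s+/,       ->(m) { nil }],
]

def lex(input)
  line, tokens = 1, []
  until input.empty?
    # greedy: among all matching tokens, keep the one with the longest match
    _name, match, block = TOKENS.map { |n, re, b| [n, input.match(/\A#{re}/), b] }
                                .select { |_, m, _| m }
                                .max_by { |_, m, _| m[0].length }
    raise "lexing error near #{input[0, 10].inspect}" unless match
    emitted = block.call(match)
    tokens << [emitted[0], { :line => line, :value => emitted[1] }] if emitted
    line += match[0].count("\n")
    input = match.post_match
  end
  tokens
end

lex(%q{$variable = "this is a string"})
# => [[:VARIABLE, {:line=>1, :value=>"variable"}], [:EQUALS, ...], [:STRING, ...]]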

As seen in the token stream above, the lexer associates with each token a hash containing the line number where it was found. This allows error messages, in case of a parsing error, to point to the correct line. It also helps puppetdoc associate the right comment with the right language structure.

The lexer also supports lexing contexts. Some tokens are valid only in specific contexts; this is especially true when parsing quoted strings for variable interpolation.

Not all scanned tokens are emitted to the parser. For instance, comments are scanned (and stored in a stack for puppetdoc use), but they don't produce a token for the parser: they're skipped.

Finally, the lexer also maintains a stack of the class names it has crossed. This makes it possible to find the correct fully qualified name of inner classes, as seen in the following example:

class outer {
  class middle {
    class inner {
      # we're in outer::middle::inner
    }
  }
}
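
The idea behind that stack can be sketched in a few lines of Ruby (illustrative only, not the actual lexer code):

namestack = []
namestack.push("outer")    # the lexer enters "class outer {"
namestack.push("middle")   # then "class middle {"
namestack.push("inner")    # then "class inner {"
namestack.join("::")       # => "outer::middle::inner"
namestack.pop              # popped again when the matching } is reached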

If you want more information about the lexer, check the Puppet::Parser::Lexer class.

The parser

The parser is based on racc. Racc is a ruby port of the good old Yacc. Racc, like Yacc, is what we call an LALR parser generator.

The 'cc' in Racc means 'compiler of compilers'. It means in fact that the parser is generated from what we call a grammar (and for LALR parsers, a context-free grammar). The generated parser is table driven and consumes tokens one by one. These kinds of parsers are sometimes called Shift/Reduce parsers.

This grammar is written in a language that is a machine readable version of a Backus-Naur Form or “BNF”.

There are different subclasses of context-free grammars. Racc works best with LR(1) grammars, which means it must be possible to parse any portion of an input string with just a single token of lookahead. Parsers for LR(1) grammars are deterministic: we only need a fixed number of lookahead tokens (in our case 1), plus what we already parsed, to find which rule to apply next.

Roughly it does the following:

  1. read a token
  2. shift (this means putting the token on the stack), and go to 1 until we can reduce
  3. reduce the read tokens with a grammar rule (this involves looking ahead)

We'll have a deeper look in subsequent chapters. Meanwhile, if you want to learn everything about LALR parsers, or parsers in general, I highly recommend the Dragon Book.
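
To give a feel for those three steps, here is a toy Ruby version of a shift/reduce loop that knows only one rule, the assignment we'll meet below (VARIABLE EQUALS expression, with the expression already reduced to a STRING). A real LALR parser like the one racc generates is table-driven and uses lookahead; this sketch deliberately skips all of that:

# Toy shift/reduce loop for the single rule: assignment -> VARIABLE EQUALS STRING
tokens = [[:VARIABLE, "variable"], [:EQUALS, "="], [:STRING, "this is a string"]]
stack  = []

tokens.each do |type, value|
  stack << [type, value]                                          # 2. shift
  if stack.map(&:first).last(3) == [:VARIABLE, :EQUALS, :STRING]  # can we reduce?
    var, _equals, str = stack.pop(3)                              # 3. reduce
    stack << [:assignment, { :name => var[1], :value => str[1] }]
  end
end

p stack  # => [[:assignment, {:name=>"variable", :value=>"this is a string"}]]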

The Puppet Grammar

The Puppet Grammar can be found in lib/puppet/parser/grammar.ra in the sources. It is a typical racc/yacc grammar that

  • defines the known tokens (those match the lexed token names)
  • defines the precedence of operators
  • contains various recursive rules that form the definition of the Puppet language

Let's have a look at a bit of the Puppet Grammar to better understand how it works:

statement_or_declaration:    resource
  | collection
  | assignment
  | casestatement
  | ifstatement_begin
...
assignment:     VARIABLE EQUALS expression {
  variable = ast AST::Name, :value => val[0][:value], :line => val[0][:line]
  result = ast AST::VarDef, :name => variable, :value => val[2], :line => val[0][:line]
}
...
expression:   rvalue
  | hash
  ...

rvalue:       quotedtext
  | name
  ...

quotedtext: STRING  { result = ast AST::String, :value => val[0][:value], :line => val[0][:line] }

So the excerpt above shows 4 rules:

  • a non-terminal rule called statement_or_declaration, which is an alternation of sub-rules
  • a rule called assignment, with a ruby code block that will be executed when this rule is reduced
  • a non-terminal rule called expression
  • a rule called quotedtext, also with a ruby block

To understand what that means, we could translate those rules as:

  1. A statement or declaration can be either a resource, a collection, or an assignment
  2. An assignment is when the parser finds a VARIABLE token followed by an EQUALS token and an expression
  3. An expression can be an rvalue or a hash (all defined later on in the grammar file)
  4. An rvalue can be, among other things, a quotedtext
  5. And finally a quotedtext can be a STRING (among other things)

You can generate the puppet parser yourself using racc; it's as simple as:

  1. Installing racc (available as a gem)
  2. running: make -C lib/puppet/parser

This rebuilds the lib/puppet/parser/parser.rb file.

You can generate a debug parser that prints everything it does if you use the -g command-line switch to racc (check the lib/puppet/parser/makefile) and define @@yydebug = true in the parser class.

The parser itself is controlled by the Puppet::Parser::Parser class, which lives in lib/puppet/parser/parser_support.rb. This class requires the generated parser (both share the same ruby class). That means the ruby blocks in the grammar are executed in the context of an instance of the Puppet::Parser::Parser class; in other words, from the grammar you can call methods defined in the parser_support.rb file. That's the reason we refer to the ast method in the example above. This method just creates an instance of the given class and associates some context to it.
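
In spirit, it looks roughly like the following (a simplified sketch; the real method in parser_support.rb tracks a bit more context):

# Simplified sketch of the ast helper defined in parser_support.rb
def ast(klass, hash = {})
  hash[:line] ||= lexer.line   # remember where in the manifest we are
  hash[:file] ||= lexer.file
  klass.new(hash)              # e.g. Puppet::Parser::AST::VarDef.new(...)
end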

Let's go back a little bit to the reduce operation. When the parser reduces, it pops the reduced tokens from the stack and pushes the result onto the stack. The result is either what ends up in the result variable of the grammar's ruby block, or the result of the reduction of the mentioned rule (when it's a non-terminal one).

In the ruby block of a rule, the tokens and rule results currently being parsed can be accessed through the val array. Going back to the assignment rule above, val[0] is the VARIABLE token, and val[2] is the result of the reduction of the expression rule.

The AST

The AST is the computer model of the parsed manifests. It forms a tree of instances of the AST base class. There are AST classes (all inheriting from the AST base class) for every element of the language. For instance, there's one for puppet classes, one for if, one for case, and so on. You'll find all of those in the lib/puppet/parser/ast/ directory.

There are two kinds of AST classes:

  • leaves: which represent some kind of value (like an identifier or a string)
  • branches: which encompass more than one other AST instance (like if, case or class). This is what forms the tree.

All AST classes implement the evaluate method, which we'll cover in the compiler article.

For instance when parsing an if/else statement like this:

if $var {
  notice("var is true")
} else {
  notice("var is false")
}

The whole if/else, once parsed, will be an instance of Puppet::Parser::AST::IfStatement (which can be found in lib/puppet/parser/ast/ifstatement.rb).

This class defines three instance variables:

  1. @test
  2. @statements
  3. @else

The grammar rule for ifstatement is (I simplified it for the purpose of the article):

ifstatement:  IF expression LBRACE statements RBRACE else {
  args = {
    :test => val[1],
    :statements => val[3],
    :else => val[5]
  }
  result = ast AST::IfStatement, args
}

Notice how the AST::IfStatement is initialized with the args hash containing the test, statements and else results of those rules. Those rule results will themselves be AST instances, and will end up in the IfStatement fields we talked about earlier.

Thus this forms a tree. If you look at the AST::IfStatement#evaluate implementation, you'll see that depending on the result of the evaluation of @test, it will evaluate either @statements or @else.
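
In spirit, that implementation boils down to something like this (a simplified sketch, not a verbatim copy of lib/puppet/parser/ast/ifstatement.rb):

# Simplified sketch of Puppet::Parser::AST::IfStatement#evaluate
def evaluate(scope)
  if Puppet::Parser::Scope.true?(@test.safeevaluate(scope))
    @statements.safeevaluate(scope)
  elsif @else
    @else.safeevaluate(scope)
  end
end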

Calling the evaluate method on the root element of this tree will in turn trigger calling evaluate on its children, as in the IfStatement example. This process will be explained in detail in the compiler article, but that's essentially how the Puppet compiler works.

An Example Step by Step

Let’s see an end-to-end example of parsing a simple manifest:

class test {
  file {
    "/tmp/a": content => "test!"
  }
}

This will produce the following stream of tokens:

:CLASS(CLASS) {:line=>1, :value=>"class"}
:NAME(NAME) {:line=>1, :value=>"test"}
:LBRACE(LBRACE) {:line=>1, :value=>"{"}
:NAME(NAME) {:line=>2, :value=>"file"}
:LBRACE(LBRACE) {:line=>2, :value=>"{"}
:STRING(STRING) {:line=>3, :value=>"/tmp/a"}
:COLON(COLON) {:line=>3, :value=>":"}
:NAME(NAME) {:line=>3, :value=>"content"}
:FARROW(FARROW) {:line=>3, :value=>"=>"}
:STRING(STRING) {:line=>3, :value=>"test!"}
:RBRACE(RBRACE) {:line=>4, :value=>"}"}
:RBRACE(RBRACE) {:line=>5, :value=>"}"}

And now let's dive into the parser events (I simplified the outcome because the Puppet grammar is a little bit more complex than necessary for this article). The following example shows all the actions of the parser and how the parser stack looks after each operation. I elided some of the stacks when they are not strictly needed to understand what happened.

  1. receive: CLASS (our parser got the first token from the lexer)
  2. shift CLASS (there’s nothing else to do for the moment)

    the result of the shift is that we now have one token in the parser stack

    stack: [ CLASS ]

  3. receive: NAME("test") (we get one more token)
  4. shift NAME (still no rules can match so we shift it)

    stack: [ CLASS NAME("test") ]

  5. reduce NAME –> classname (oh and now we can reduce a rule)

    notice how the stack now contains a classname and not a NAME

    stack: [ CLASS (classname "test") ]

  6. receive: LBRACE
  7. shift LBRACE

    stack: [ CLASS (classname "test") LBRACE ]

  8. receive: NAME("file")
  9. shift NAME

    stack: [ CLASS (classname "test") LBRACE NAME("file") ]

  10. receive: LBRACE
  11. reduce NAME –> classname

    stack: [ CLASS (classname "test") LBRACE (classname "file") ]

  12. shift: LBRACE

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE ]

  13. receive STRING("/tmp/a")
  14. shift STRING

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE STRING("/tmp/a") ]

  15. reduce STRING –> quotedtext

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (quotedtext AST::String("/tmp/a")) ]

  16. receive COLON
  17. reduce quotedtext –> resourcename

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) ]

  18. shift COLON

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) COLON ]

  19. receive: NAME("content")
  20. shift NAME

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) COLON NAME("content") ]

  21. receive: FARROW
  22. shift FARROW

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) COLON NAME("content") FARROW ]

  23. receive: STRING("test!")
  24. shift: STRING
  25. reduce STRING –> quotedtext
  26. receive: RBRACE
  27. reduce quotedtext –> rvalue

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) COLON NAME("content") FARROW (rvalue AST::String("test!"))]

  28. reduce rvalue –> expression

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) COLON NAME("content") FARROW (expression AST::String("test!"))]

  29. reduce NAME FARROW expression –> param (we now have a resource parameter)

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourcename AST::String("/tmp/a")) COLON (param AST::ResourceParam("content"=>"test!")))]

  30. reduce param –> params (multiple parameters can form a params)
  31. reduce resourcename COLON params –> resourceinst (name: parameters form a resource)

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourceinst (AST::ResourceInstance(...)))]

  32. reduce resourceinst –> resourceinstances (more than one resourceinst can form resourceinstances)

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resourceinstances [(resourceinst (AST::ResourceInstance(...)))] )]

  33. shift RBRACE
  34. reduce classname LBRACE resourceinstances RBRACE –> resource (we’ve discovered a resource)

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resource AST::Resource(...))]

  35. receive: RBRACE
  36. reduce resource –> statement_or_declaration (a resource is one statement)
  37. reduce statement_or_declaration –> statements_and_declarations
  38. shift RBRACE

    stack: [ CLASS (classname "test") LBRACE (classname "file") LBRACE (resource AST::Resource(...)) RBRACE ]

  39. reduce CLASS classname LBRACE statements_and_declarations RBRACE –> hostclass (we’ve discovered a puppet class)

    stack: [ (hostclass AST::Hostclass(...)) ]

  40. reduce hostclass –> statement_or_declaration
  41. reduce statement_or_declaration –> statements_and_declarations
  42. receive: end of file
  43. reduce statements_and_declarations –> program
  44. shift end of file

    stack: [ (program (AST::ASTArray [AST::Hostclass(...))])) ]

And the parsing is now over. What is returned is this program, which is in fact an instance of AST::ASTArray.

If we now analyze the produced AST, we find:

  • AST::ASTArray - array of AST instances, this is our program
    • AST::Hostclass - an instance of a class
      • AST::Resource - contains an array of resource instances
        • AST::ResourceInstance
          • AST::ResourceParam - contains the “content” parameter
            • AST::String("content")
            • AST::String("test!")

What's important to understand is that the AST depends only on the manifests. Thus the Puppet master needs to reparse the manifests only when they change.

What’s next?

The next episode will follow up after the parser: the compilation. The Puppet compiler takes the AST, injects the facts into it, and produces what we call a catalog; that's exactly what we'll learn in the next article (sorry, no ETA yet).

Do not hesitate to comment or ask questions on this article with the comment system below :)

And happy new year all!

Protobuf, Maven, M2E and Eclipse are on a boat


At Days of Wonder we develop several Java projects (for instance our online game servers). Those are built with Maven, and most if not all are using Google Protocol Buffers for data interchange.

Development happens mostly in Eclipse, and until a couple of months ago we used m2eclipse. With the release of m2e (m2eclipse's successor), our builds no longer work as-is in Eclipse.

The reason is that we run the maven-protoc-plugin (the David Trott fork, which is more or less the only one still seeing development). This maven plugin allows the protoc Protocol Buffers compiler to be run at the generate-sources phase of the Maven lifecycle. Under m2eclipse, this phase was happening outside Eclipse and the builds were running fine.

Unfortunately m2e is not able to solve this correctly: it requires using a connector. Those connectors are Eclipse plugins that tie a maven plugin to an m2e build lifecycle phase. This way, when m2e needs to execute this phase of the build, it can do so through the connector.

Until now, there wasn't any lifecycle connector for the maven-protoc-plugin. It wasn't possible for our development team to continue without one in the long term, so I took a stab at building it.

In fact it was way simpler than what I first thought. I used the m2e Extension Development Guide as a bootstrap (and especially the EGit extension).

The result of these few hours of development is now open source and available in the m2e-protoc-connector GitHub repository.

Installation

I didn't release an Eclipse p2 update repository (mainly because I don't really know how to do that), so you'll have to build the project yourself (but it's easy).

  1. Clone the repository:
git clone git://github.com/masterzen/m2e-protoc-connector.git
  2. Build with maven 3:
mvn package

Once built, you’ll find the feature packaged in com.daysofwonder.tools.m2e-protoc-connector.feature/target/com.daysofwonder.tools.m2e-protoc-connector.feature-1.0.0.20111130-1035-site.zip.

To install in Eclipse Indigo:

  1. Open the Install New Software window from the Help menu.
  2. Click on the Add button.
  3. Select the Archive button and point it to the com.daysofwonder.tools.m2e-protoc-connector.feature/target/com.daysofwonder.tools.m2e-protoc-connector.feature-1.0.0.20111130-1035-site.zip file.
  4. Accept the license terms and restart Eclipse.

Usage

To use it, there is nothing specific to do, as long as your pom.xml conforms roughly to what we use:

    <plugin>
        <groupId>com.google.protobuf.tools</groupId>
        <artifactId>maven-protoc-plugin</artifactId>
        <executions>
            <execution>
                <id>generate proto sources</id>
                <goals>
                    <goal>compile</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                    <protoSourceRoot>${basedir}/src/main/proto/</protoSourceRoot>
                    <includes>
                        <param>**/*.proto</param>
                    </includes>
                </configuration>
            </execution>
        </executions>
    </plugin>
...
  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>2.4.1</version>
  </dependency>
...
    <pluginRepositories>
        <pluginRepository>
            <id>dtrott-public</id>
            <name>David Trott's Public Repository</name>
            <url>http://maven.davidtrott.com/repository</url>
        </pluginRepository>
    </pluginRepositories>

If you find any problem, do not hesitate to open an issue on the GitHub repository.