The 10 commandments of logging

Welcome to my blog for this new year. After writing an answer to a thread about monitoring and log monitoring on the Paris devops mailing list, I came back to a blog post project I had been planning for a long time.

I wrote this blog post while wearing my Ops hat, and it is mostly addressed to developers.

Knowing how and what to log is, to me, one of the hardest tasks a software engineer will have to solve, mostly because this task is akin to divination. It’s very hard to know what information you’ll need during troubleshooting… That’s why I hope these 10 commandments will help you improve your application logging, to the great benefit of the ops engineers :)

1. Thou shalt not write log by yourself

Never, ever use printf, write your log entries to files by yourself, or handle log rotation by yourself. Please do your ops folks a favor and use a standard library or system API call for this.

This way, you’re sure the running application will play nicely with the other system components and will log to the right place or network services without special system configuration.

So, if you just use the system API, then this means logging with syslog(3). Learn how to use it.

If you instead prefer to use a logging library, there are plenty of those, especially in the Java world, like Log4j, JCL, slf4j and logback. My favorite is the combination of slf4j and logback, because it is very powerful and relatively easy to configure (it even allows JMX configuration and reloading of the configuration file).

The best thing about slf4j is that you can change the logging backend when you see fit. It is especially important if you’re coding a library, because it allows anyone to use your library with their own logging backend without any modification to your library.
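To make this concrete, here is a minimal sketch of what logging through the slf4j facade looks like (the RankingService class and its message are placeholders of my own, not from any real project); the concrete backend is only chosen when the application is deployed:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class RankingService {
    // The logger is obtained through the slf4j facade; the actual backend
    // (logback, log4j, java.util.logging…) is picked at deployment time by
    // putting the corresponding binding on the classpath.
    private static final Logger log = LoggerFactory.getLogger(RankingService.class);

    public void computeRanking(int userId) {
      // Parameterized messages avoid the cost of string concatenation
      // when the level is disabled.
      log.info("Computing ranking for user {}", userId);
    }
  }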

There are also several logging libraries for other languages, like, for Ruby: Log4r, the stdlib Logger, or Jordan Sissel’s almost perfect ruby-cabin.

And if your argument for not using a logging library is CPU consumption, then you have my permission to skip this blog post. Sure you should not put log statements in tight inner loops, but otherwise you’ll never see the difference.

2. Thou shalt log at the proper level

If you followed the 1st commandment, then you can use a different log level per log statement in your application. One of the most difficult tasks is deciding at which level a given entry should be logged.

Here is some advice:

  • TRACE level: this is a code smell if used in production. This should be used during development to track bugs, but never committed to your VCS.

  • DEBUG level: log at this level about anything that happens in the program. This is mostly used during debugging, and I’d advocate trimming down the number of debug statements before entering production, so that only the most meaningful entries are left and can be activated during troubleshooting.

  • INFO level: log at this level all actions that are user-driven or system-specific (i.e. regularly scheduled operations…)

  • NOTICE level: this will certainly be the level at which the program will run when in production. Log at this level all notable events that are not considered errors.

  • WARN level: log at this level all events that could potentially become errors, for instance if a database call takes longer than a predefined threshold, or if an in-memory cache is close to capacity. This allows proper automated alerting, and during troubleshooting it helps you understand how the system was behaving before the failure.

  • ERROR level: log every error condition at this level. That can be API calls that return errors or internal error conditions.

  • FATAL level: too bad, it’s doomsday. Use this very scarcely; it shouldn’t happen often in a real program. Usually logging at this level signifies the end of the program: for instance, if a network daemon can’t bind its socket, logging at this level and exiting is the only sensible thing to do.

Note that the default running level of your program or service might vary widely. For instance, I usually run my server code at the INFO level, but my desktop programs run at the DEBUG level, because it’s very hard to troubleshoot an issue on a computer you don’t have access to, and it’s far easier when doing support or customer service to ask the user to send you the log than to teach her to change the log level and then send it to you. Of course YMMV :)
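Putting a few of those levels together, here is a minimal sketch of what this might look like with slf4j (which only exposes the TRACE, DEBUG, INFO, WARN and ERROR levels; NOTICE and FATAL come from the syslog(3) tradition). The CacheWarmer class and its numbers are invented for the example:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class CacheWarmer {
    private static final Logger log = LoggerFactory.getLogger(CacheWarmer.class);

    public void warm(String cacheName, int loaded, int capacity) {
      // debugging detail, normally disabled in production
      log.debug("Loading entries into cache {}", cacheName);

      // notable, system-driven event
      log.info("Cache {} warmed up with {} entries", cacheName, loaded);

      if (loaded > 0.9 * capacity) {
        // not an error yet, but a likely precursor of one
        log.warn("Cache {} is at {}% of its capacity", cacheName, (100 * loaded) / capacity);
      }
    }
  }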

3. Honor thy log category

Most of the logging libraries I cited in the 1st commandment allow you to specify a logging category. This category classifies the log message and will ultimately, based on the logging framework configuration, be logged in a distinct way or not logged at all.

Most of the time, Java developers use as the category the fully qualified name of the class in which the log statement appears. This scheme works relatively well if your program respects the single responsibility principle.

Log categories in Java logging libraries are hierarchical, so for instance logging with the category com.daysofwonder.ranking.ELORankingComputation would match the top-level category com.daysofwonder.ranking. This allows the ops engineer to set up a logging configuration that works for the whole ranking subsystem by specifying configuration for this category alone, while still being able to define configuration for child categories if needed.

We can extend this paradigm a little bit further to help troubleshoot specific situations. Imagine that you are dealing with server software that responds to user-based requests (like a REST API, for instance). If your server logs with the category my.service.api.<apitoken> (where apitoken is specific to a given user), then you can either capture all the API logs by enabling my.service.api, or single out a misbehaving API user by logging at a more detailed level under the category my.service.api.<bad-user-api-token>. Of course, this requires a system where you can change the logging configuration on the fly.
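Here is a hedged sketch of what that per-token category could look like with slf4j; the handler class, the endpoint parameter and the category prefix are made up for the example:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class ApiRequestHandler {
    public void handle(String apiToken, String endpoint) {
      // The category embeds the caller's API token: enabling my.service.api at DEBUG
      // captures every caller, while my.service.api.<one-token> targets a single user.
      Logger apiLog = LoggerFactory.getLogger("my.service.api." + apiToken);
      apiLog.debug("Handling call to {}", endpoint);
    }
  }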

4. Thou shalt write meaningful logs

This is probably the most important commandment. There’s nothing worse than cryptic log entries that assume the reader has a deep understanding of the program internals.

When writing your log entry messages, always anticipate the emergency situations in which the only thing you have is the log file, and from it you have to understand what happened. Doing it right might be the subtle difference between getting fired and getting promoted :)

When a developer writes a log message, it is in the context of the code in which the log directive is inserted. Under those conditions, we tend to write messages that rely on that context. Unfortunately, when reading the log itself, this context is absent and those messages might not be understandable.

One way to overcome this situation (and it’s particularly important when writing at the warn or error level) is to add remediation information to the log message or, if that’s not possible, to state what the purpose of the operation was, and its outcome.
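As a contrived sketch of the difference (the class, URL and messages are purely illustrative, not from a real system):

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class RateRefresher {
    private static final Logger log = LoggerFactory.getLogger(RateRefresher.class);

    public void reportFailure(String ratesUrl, String lastRefreshTime) {
      // Bad: no purpose, no outcome, no hint of what to do next.
      // log.warn("Connection refused");

      // Better: purpose of the operation, its outcome, and a remediation hint.
      log.warn("Could not refresh exchange rates from {}: connection refused; "
             + "falling back to rates cached at {}. Check connectivity to the rates provider.",
               ratesUrl, lastRefreshTime);
    }
  }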

Also, do not log messages that depend on the content of previous messages. Those previous messages might not appear if they were logged in a different category or at a different level, or worse, they can appear in a different place (or way before) in a multi-threaded or asynchronous context.

5. Thy log shalt be written in English

This might seem a strange commandment, especially coming from a French guy. First, I still think English is much more concise than French and better suited to technical language. Why would you want to log in French if the message already contains more than 50% English words?

That aside, here are the essential reasons behind this commandment:

  • English means your messages will be logged as ASCII characters. This is particularly important because you can’t really know what will happen to the log message, nor what software layer or medium it will cross before being archived somewhere. If your message uses a special charset or even UTF-8, it might not render correctly at the other end; worse, it could be corrupted in transit and become unreadable. There still remains the question of logging user input, which might come in diverse charsets and/or encodings.

  • If your program is to be used by a wide audience and you don’t have the resources for full localization, then English is probably your best alternative. And if you do have to localize one thing, localize the interface that is closest to the end-user (which is usually not the log entries).

  • If you localize your log entries (for instance all the warning and error levels), make sure you prefix them with a specific, meaningful error code. This way people can do language-independent Internet searches and find information. Such a scheme was used a while ago in the VMS operating system, and I must admit it is very effective. If you were to design such a scheme, you can adopt this layout: APP-S-CODE or APP-S-SUB-CODE (see the small sketch after this list), with respectively:

    • APP: your application name in 3 letters
    • S: severity in 1 letter (i.e. D: debug, I: info…)
    • SUB: the sub-part of the application this code pertains to
    • CODE: a numeric code specific to the error in question
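For instance, here is a small sketch of such a prefixed entry; the RNK-E-DB-0042 code, the class and the message are invented purely to illustrate the APP-S-SUB-CODE layout:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class RankingStore {
    private static final Logger log = LoggerFactory.getLogger(RankingStore.class);

    public void reportDatabaseTimeout(int userId) {
      // RNK = application, E = error severity, DB = sub-system, 0042 = specific error.
      // Users can search for "RNK-E-DB-0042" whatever language the rest of the message is in.
      log.error("RNK-E-DB-0042: timeout while storing the ranking of user {}", userId);
    }
  }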

6. Thou shalt log with context

So, there’s nothing worse than this kind of log message:

 Transaction failed

or

User operation succeeds

or in case of API exceptions:

java.lang.IndexOutOfBoundsException

Without proper context, those messages are only noise: they don’t add value, and they consume space that could have been useful during troubleshooting.

Messages are much more valuable with added context, like:

Transaction 2346432 failed: cc number checksum incorrect

or

User 54543 successfully registered e-mail user@domain.com

or

IndexOutOfBoundsException: index 12 is greater than collection size 10

Since we’re talking about exceptions in this last example: if you happen to propagate exceptions up, make sure to enhance them with context appropriate to the current level, to ease troubleshooting, as in this Java example:

  public void storeUserRank(int userId, int rank, String game) {
    try {
      ... deal database ...
    } catch(DatabaseException de) {
      throw new RankingException("Can't store ranking for user "+userId+" in game "+ game + " because " + de.getMessage() );
    }
  }

So the upper-layer client of the ranking API will be able to log the error with enough context information. It’s even better if the context becomes a parameter of the exception itself instead of being baked into the message; this way the upper layer can attempt remediation if needed, as in the sketch below.
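Here is a minimal sketch of that idea: a variant of the hypothetical RankingException above that carries its context as fields (and chains the original cause):

  public class RankingException extends Exception {
    private final int userId;
    private final String game;

    public RankingException(String message, int userId, String game, Throwable cause) {
      super(message, cause);
      this.userId = userId;
      this.game = game;
    }

    // The upper layer can read these fields to retry, alert, or log with full context.
    public int getUserId() { return userId; }
    public String getGame() { return game; }
  }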

An easy way to keep context is to use the MDC that some of the Java logging libraries implement. The MDC is a per-thread associative array. The logger configuration can be modified to always print the MDC content for every log line. If your program uses a per-thread paradigm, this can help solve the issue of keeping the context. For instance, this Java example uses the MDC to log per-user information for a given request:

  class UserRequest {
    ...
    public void execute(int userid) {
      MDC.put("user", String.valueOf(userid)); // the slf4j MDC stores string values
      
      // ... all logged message now will display the user=<userid> for this thread context ...
      log.info("Successful execution of request");
      
      // user request processing is now finished, no need to log our current user anymore
      MDC.remove("user");
    }
  }

Note that the MDC system doesn’t play nicely with asynchronous logging schemes, like Akka’s logging system, because the MDC is kept in a per-thread storage area and in asynchronous systems you have no guarantee that the thread doing the log write is the one that holds the MDC. In such situations, you need to log the context manually with every log statement.

7. Thou shalt log in machine parseable format

Log entries are really good for humans but very poor for machines. Sometimes it is not enough to manually read log files; you need to perform some automated processing (for instance for alerting or auditing). Or you want to store your logs centrally and be able to run search queries against them.

So what happens when you embed the log context in the message string, as in this hypothetical logging statement:

log.info("User {} plays {} in game {}", userId, card, gameId);

This will produce this kind of text:

2013-01-12 17:49:37,656 [T1] INFO  c.d.g.UserRequest  User 1334563 plays 4 of spades in game 23425656

Now, if you want to parse this, you’d need the following (untested) regex:

  /User (\d+) plays (.+?) in game (\d+)$/

Well, this is not easy and very error-prone, just to get access to string parameters your code already knows natively.

So what about this idea, which I believe Jordan Sissel first introduced in his ruby-cabin library: let’s add the context to the log entry in a machine-parseable format. Our aforementioned example could use JSON like this:

2013-01-12 17:49:37,656 [T1] INFO  c.d.g.UserRequest  User plays {'user':1334563, 'card':'4 of spade', 'game':23425656}

Now your log parsers are much easier to write, indexing becomes straightforward, and you can unleash the full power of logstash.
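Here is a minimal hand-rolled sketch of that idea in Java, naively rendering a context map as JSON at the end of the message; in a real project you would more likely plug a structured encoder into your backend (logstash-logback-encoder is a common choice for logback) rather than format JSON by hand. The class and field names are invented for the example:

  import java.util.LinkedHashMap;
  import java.util.Map;
  import java.util.stream.Collectors;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class GameEventLogger {
    private static final Logger log = LoggerFactory.getLogger(GameEventLogger.class);

    // Naive JSON rendering of the context; values are quoted as strings for simplicity.
    private static String json(Map<String, Object> context) {
      return context.entrySet().stream()
          .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
          .collect(Collectors.joining(",", "{", "}"));
    }

    public void logPlay(long userId, String card, long gameId) {
      Map<String, Object> context = new LinkedHashMap<>();
      context.put("user", userId);
      context.put("card", card);
      context.put("game", gameId);
      log.info("User plays {}", json(context));
    }
  }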

8. Thou shalt not log too much or too little

That might sound stupid, but there is a right balance for the amount of logging.

Too much logging, and it becomes really hard to extract any value from it. When manually browsing such logs, there is too much clutter, which is not a good thing when you’re trying to troubleshoot a production issue at 3 AM.

Too little logging, and you risk not being able to troubleshoot problems: troubleshooting is like solving a difficult puzzle; you need enough material to work with.

Unfortunately, there is no magic rule when coding to know what to log. It is thus very important to strictly respect the 1st and 2nd commandments, so that once the application is live it is easy to increase or decrease the log verbosity.

One way to overcome this issue is to log as much as possible during development (do not confuse this with logging added to debug the program). Then, when the application enters production, analyze the produced logs and reduce or increase the logging statements according to the problems found. Especially during troubleshooting, note the parts of the application where you wished you had more context or logging, and make sure to add those log statements in the next version (if possible at the same time you fix the issue, to keep the problem fresh in memory). Of course, that requires a fair amount of communication between ops and devs.

This can be a complex task, but I recommend refactoring logging statements as much as you refactor the code. The idea is to have a tight feedback loop between the production logs and the modification of such logging statements. It’s even more efficient if your organization has a continuous delivery process in place, as the refactoring can be constant.

Logging statements are a kind of code metadata, at the same level as code comments. It’s really important to keep the logging statements in sync with the code: there’s nothing worse when troubleshooting than getting messages that have no relation to the code being executed.

9. Thou shalt think of the reader

Why add logging to an application at all?

The only answer is that someone will have to read it one day (otherwise, what’s the point?). More importantly, it is worth thinking about who will read those lines. Depending on the person you expect to read the log messages you’re about to write, the content, context, category and level of the message will be quite different.

Those readers can be:

  • an end-user trying to troubleshoot a problem herself (imagine a client or desktop program)
  • a system administrator or operations engineer troubleshooting a production issue
  • a developer, either debugging during development or solving a production issue

Of course, the developer knows the internals of the program, so her log messages can be much more complex than messages addressed to an end-user. So adapt your language to the intended audience; you can even dedicate separate categories to each of them.

10. Thou shalt not log only for troubleshooting

As log messages are written for different audiences, they will be used for different purposes. Even though troubleshooting is certainly the most obvious use of log messages, you can also use them very efficiently for:

  • Auditing: this is sometimes a business requirement. The idea is to capture significant events that matter to management or legal people. These are usually statements describing what the users of the system are doing (like who signed in, who edited what, etc.).

  • Profiling: as logs are timestamped (sometimes to the millisecond), they can become a good tool to profile sections of a program. For instance, by logging the start and end of an operation, you can either automatically (by parsing the log) or during troubleshooting infer some performance metrics without adding those metrics to the program itself (see the small sketch after this list).

  • Statistics: if you log each time a certain event happens (like a certain kind of error), you can compute interesting statistics about the running program (or about user behavior). It’s also possible to hook this to an alerting system that detects too many errors in a row.
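As a small sketch of the profiling idea mentioned above (the ReportGenerator class and its messages are invented for the example):

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class ReportGenerator {
    private static final Logger log = LoggerFactory.getLogger(ReportGenerator.class);

    public void generate(String reportId) {
      long start = System.currentTimeMillis();
      log.info("Report {} generation started", reportId);

      // ... the actual report generation would happen here ...

      // The elapsed time can later be extracted by parsing the log,
      // without instrumenting the program with a metrics library.
      log.info("Report {} generation finished in {} ms",
               reportId, System.currentTimeMillis() - start);
    }
  }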

Conclusion

I hope this will help you produce more useful logs, and bear with me if I forgot a commandment that is essential to you. Oh, and I can’t be held responsible if your logs don’t get better after reading this blog :)

It’s possible that these 10 commandments are not enough, so feel free to use the comment section (or Twitter, or your own blog) to add more useful tips.

Thanks for reading.

Ticket to Devopsdays Roma?

After seeing @KrisBuytaert tweet a couple of days ago that people blogging about devops could get priority registration for the upcoming Devopsdays Roma next October, I thought: why not me?

I had already wanted to attend the last European Devopsdays in Goteborg last year, but its proximity to PuppetConf and my work schedule prevented me from joining the party.

This year, for various reasons, I won’t show up in San Francisco for the yearly big Puppet event (which, at least from here, sounds quite terrific), so attending a Devopsdays conference had an appeal I really couldn’t resist (and choosing one in Europe is much easier for me).

Qualifying

I believe (feel free to speak up if you think otherwise) that I can qualify as part of the devops movement (and as such can attend Devopsdays Roma) :)

One reason is that my day job is mostly programming. But I also have the fine responsibility of bringing what I write to production systems, along with creating and managing the infrastructure.

This puts me in a situation that not a lot of my fellow programmers have experienced. For a long time, producing software meant, for software engineers, compiling a binary artifact, handing it over to other people, and moving on to new software as soon as possible.

Unfortunately, in the real world this doesn’t really work. The ops people in charge of the application (and the same goes for support engineers with client-side products) will struggle to deploy it, analyze it and simply maintain it.

When you have experienced both sides of the fence, you know that what matters to software engineers (i.e. clean code, reusable components, TDD and unit testing…) doesn’t really matter to operations engineers. The latter currently want:

  • logs
  • ease of administration (like changing configuration gracefully)
  • ease of introspection (like usable logs, integrated consoles, useful metrics, a trace mechanism à la Dapper)
  • ease of deployment (like a packaging system, or artifact assembly with comprehensible dependencies)
  • external (and internal) monitoring of all aspects
  • wisely chosen hard dependencies (like databases, message queues or cache systems)

Well, if you’re a developer you’ll recognize, like me, that the above list is really not trivial to deliver, and that it usually tends to be overlooked.

Unfortunately, an application that doesn’t provide these will be hard or impossible to properly maintain.

The first time I tried to deploy software I wrote, I discovered the hard way that those requirements are, well, requirements. Now, I make sure that the projects I work on have user stories encompassing those essential facets.

Ops already know it

If you’re on the operations side, then you already know the value of those software requirements. Our role as operations engineers is to evangelize and teach software engineers these good practices. I strongly believe that devops is all about that.

But it’s not only that. It’s also about being agile on the infrastructure side…

Puppet opened my eyes

I have always been interested in managing server infrastructures. I started managing Linux servers about 12 years ago (about 5 years after starting professionally as a software developer). Managing servers has always been a side activity for me, my main job being producing software (either client- or server-side).

I lurked around the configuration management space and community for a long time before I adopted Puppet. On the small infrastructure I was maintaining, I thought it would be overkill.

Oh, how I was wrong at that time :)

Back in 2007, I started using Puppet and began writing modules for the software we were using in production, but I was still too shy to run it in production. Then in 2008, I really started using Puppet (and, BTW, contributing some features I thought interesting, which you already know if you read this infrequently updated blog).

Puppet helped me to:

  • have all my configuration centralized in git
  • deploy servers with a repeatable process (i.e. recreate servers from scratch)
  • parallelize (clusters can be spawned much more easily)
  • prevent configuration drift
  • orchestrate multi-node configurations (i.e. publish monitoring or backup information to other nodes)

Puppet helped me understand that tools are real life savers.

I also learnt (and actually enforced) good administration practices:

  • deploy native packages instead of source installs
  • set up monitoring for every critical piece of installed software
  • no more manual configuration on servers
  • configuration testing on VMs (thanks Vagrant, BTW)
  • automate as much as you can

Puppet also helped me join sysadmin communities (like the fine folks at ##infra-talk), which in turn helped me discover other life-saving tools.

This, I think, is part of the devops culture and community.

It’s not reserved to server-side

Indeed, it isn’t. Support or customer service staff share the same responsibilities as the operations teams, but for client-side applications. And with the number of smartphones out there, there is now much more client software than ever.

The same devops benefits I talked about earlier can and should also be applied to client-side software. Logs are invaluable when trying to understand why some software your dev team wrote doesn’t work in the hands of your clients.

It’s even more complex than analyzing server-side issues, because even when you’re lucky enough to manage client applications that produce logs, it’s most of the time impossible to get access to them…

Is devops the future?

My own small goal (my stone added to the devops edifice) is to start this cultural mindset shift in as many developers as I can (starting with my fellow co-workers). And I think it’s our responsibility, as part of the devops movement (if we can use this word), to initiate such a shift.

I always smile when I see “devops engineer” job positions. Devops is not a role; it’s a mindset that everybody in a given dev and ops team should share. Maybe recruiters use this word as a synonym for “help us use automation tools”, as if that were the only solution to what is a human problem (well, obviously, if you don’t use any configuration management you have more problems to solve).

In the same way that Olympic athletes practice hard every day to reach their level, I strongly believe that the devops practices I described should be adopted by all software developers.

Now it’s our job to spread the word and help engineers get there.

Finally, in my humble opinion, devops is all about common sense. I think it’s easier to implement such practices in small companies/teams than in larger, already established ones (people are usually reluctant to change, whether for good or bad). Nevertheless, if developers and operations unite and walk in the same direction, big things can be achieved.

What happens at Devopsdays…

… should not stay at Devopsdays (well, except maybe for the outcome of the drinks)

What do I want to get from attending Devopsdays Roma?

I really want to:

  • learn new things
  • learn about other people’s experiences in the field
  • share about fixing the problems that plague us all, like:
    • monitoring sucks
    • log processing/centralization (and developer access)
  • network and learn from wise and knowledgeable peers

And now I hope I’ll be there and that I’ll meet you :)

Puppet Internals: the compiler

And I’m now proud to present the second installment in my series of posts about Puppet internals:

Today we’ll focus on the compiler.

The Compiler

The compiler is at the heart of Puppet, whether master/agent or masterless. Its responsibility is to transform the AST into a set of resources called the catalog, which the agent can consume to perform the necessary changes on the node.

You can see the compiler as a function that takes the AST and the facts as input and returns the catalog.

The compiler lives in the lib/puppet/parser/compiler.rb file and more specifically in the Puppet::Parser::Compiler class. When a node connects to a master to ask for a catalog, the Indirector directs the request to the compiler.

In a classic master/agent system, the agent does a REST find catalog request to the master. The master catalog indirection is configured to delegate to the compiler. This happens in the lib/puppet/indirector/catalog/compiler.rb file. Check this previous article about the Indirector if you want to know more.

The indirector request contains two things:

  • what node we should compile
  • the node’s facts

Produced Catalog

When we talk about the catalog, in the Puppet system it can mean two distinct things:

  • a containment catalog
  • a relationship resource catalog

The first one is the product of the compiler (which we’ll delve into in this article). The second one is formed by transforming the first one in the agent. The latter is what we usually call the Puppet catalog.

Here is a simple manifest and the containment catalog that I obtained after compiling:

class test {
  file {
    "/tmp/a": content => "test!"
  }
}

include test

And here is the produced catalog:

[Figure: the containment catalog produced by the compiler]

You’ll notice that as its name implies, the containment catalog is a graph of classes and resources that follows the structure of the manifest.

When Facts matter

In a master/agent system the facts are coming from the request in a serialized form. Those facts were created by calling Facter on the remote node.

Once unserialized, the facts are cached locally as YAML (as per the default terminus for facts on a master). You can find them in the $vardir/yaml/facts/$certname.yaml file.

At the same time, the compiler catalog terminus computes some server facts that are injected into the current node instance.

Looking for the node

In Puppet there are several ways to store node definitions. They can be defined by node {} blocks in site.pp, by an ENC, in an LDAP directory, etc.

Before the compiler can start, it needs to create an instance of the Puppet::Node class and fill it with the node information.

The node indirection terminus is controlled by the node_terminus Puppet setting, which defaults to plain. This terminus just creates a new, empty instance of Puppet::Node.

In an ENC setup, the terminus for the node indirection will be exec. This terminus will create a Puppet::Node instance initialized with a set of classes and global parameters the compiler will be able to use.

The plain terminus for nodes calls Puppet::Node#fact_merge. This method finds the current set of facts for this node. In the plain case, it involves reading the YAML facts we wrote to disk in the last chapter and merging them into the current node instance’s parameters.

Back to the compiler catalog terminus: it tries to find the node with the given request information and, failing that, by using the node certname. Remember that the REST request to get a catalog matches /catalog/node.domain.com, in which case the request key is node.domain.com.

Let’s compile

After that, we really enter the compiler code, when the compiler catalog terminus calls Puppet::Parser::Compiler.compile, which creates a new Puppet::Parser::Compiler instance giving it our node instance.

When creating this compiler instance, the following is created:

  • an empty catalog (an instance of Puppet::Resource::Catalog). This one will hold the result of the compilation.
  • a companion top scope (an instance of Puppet::Parser::Scope)
  • some other internal data structures helping the compilation

If the given node was coming from an ENC, the catalog is bootstrapped with the known node classes.

Once done, the compile method is called on the compiler instance. The first thing it does is bootstrap the top scope with the node parameters (which contain the global data coming from the ENC, if one is used, and the facts).

Remember the AST

When we left the Parser post, we had obtained an AST. This AST is a tree of AST instances that implement the guts of the Puppet language.

In this previous article we left aside 3 types of AST:

  • Node AST
  • Hostclass AST
  • Definition AST

Those are different in the sense that we don’t strictly evaluate them during compilation (more on this step later). Instead, they are instantiated as part of the initial import of the known types. If you’re wondering why I spelled the class AST Hostclass, it’s because that’s how it is spelled in the Puppet code, the reason being that class is a reserved word in Ruby :)

Using a lazy evaluation scheme, Puppet keeps (per environment, actually) a list of all the parsed known types (the classes, definitions and nodes that the parser encountered during parsing); this is called the known types.

When this list is first accessed, if it doesn’t exist, Puppet triggers the parser to populate it. This happens in Puppet::Node::Environment.known_resource_types which calls the import_ast method with the result of the parsing phase.

import_ast adds to the known types an instance of every definition, hostclass and node returned by their respective instantiate methods.

Let’s have a closer look at the hostclass instantiate:

def instantiate(modname)
  new_class = Puppet::Resource::Type.new(:hostclass, @name)
  all_types = [new_class]
  code.each do |nested_ast_node|
    if nested_ast_node.respond_to? :instantiate
      all_types += nested_ast_node.instantiate(modname)
    end
  end
  return all_types
end

So instantiate returns an array of Puppet::Resource::Type of the given type. You’ll notice that the hostclass code above analyzes its current class AST children for other ‘instantiable’ AST elements, which will also end up in the known types.

Known Types

The known types I’ve been talking about for a while all live in the Puppet::Resource::TypeCollection object. There is in fact one per Puppet environment.

This object’s main responsibility is storing all known classes, nodes and definitions so that they can be easily accessed by the compiler. It also watches all the files loaded by the parser, so that it can trigger a re-parse when one of them is updated. It also serves as the Puppet class/module autoloader (when asked for an unknown class, it will first try to load it from disk and parse it).

Scopes

Let’s open a parenthesis to explain a little bit what the scope is. The scope is an instance of Puppet::Parser::Scope and is simply a symbol table (as explained in the Dragon Book). It just keeps the values of Puppet variables.

It forms a tree, with the top scope (whose creation we saw earlier) being the root of all scopes. This tree contains one child per new namespace.

The scope supports two operations:

  1. Looking up a variable value
  2. Setting a variable value

Lookup is done with the lookupvar method. If the variable is qualified, it will directly ask the correct scope for its value. For instance, $::hostname will fetch the hostname fact directly from the top scope.

Otherwise, it will either return the value from the local scope if it exists there, or delegate to the parent scope. This can continue up to the top scope. If the value can’t be found anywhere, the :undef Ruby symbol is returned.

Note that this dynamic scope behavior will be removed in the next Puppet version, where only the local scope and the top scope will be supported. More information is available in this Scope and Puppet article.

Setting a variable is done with the setvar method. This method is called, for instance, by the AST class responsible for variable assignment (AST::VarDef).

Along with regular variables, each scope has the notion of an ephemeral scope. An ephemeral scope is a special transient scope that stores only the regex capture variables ($0, $1 and so on).

Each scope level maintains a stack of ephemeral scopes, which is by default empty.

In Puppet there are no scopes for language structures other than classes (and nodes and definitions), so inside the following if, an ephemeral scope is created and pushed on the stack to store the result of the regex match:

if $var =~ /test(.*)/ {
  # here $0, $1... are available
}

When Puppet execution reaches the closing ‘}’, the ephemeral scope is popped from the ephemeral scope stack, removing the $0 definition.

lookupvar will also ask the ephemeral scope stack if needed.

Orthogonally, the scope instance will also store resource defaults.

Talking about AST Evaluation

And here we need to take a break from compilation to talk about AST evaluation, which I elegantly eluded in the previous post on the Parser.

Every AST node (both branch and leaf ones) implements the evaluate method. This method takes a Puppet::Parser::Scope instance as a parameter: the scope that is valid at the moment we evaluate this AST node (usually the scope associated with the class containing the code being evaluated).

There are several outcomes possible after evaluation:

  • Manipulation of the scope (like variable assignment, variable lookup, parser function call)
  • Evaluation of the AST children of this node (for instance if, case and selectors need to evaluate the code in one of their child branches)
  • Creation of Puppet::Parser::Resource when encountering a puppet resource
  • Creation of Puppet::Resource::Type (more puppet classes)

When an AST node evaluates its children, it does so by calling safeevaluate on them, which in turn calls evaluate. Safeevaluate shields the caller from exceptions, transforming them into parse errors that can report the line and file of the Puppet instruction that triggered the problem.

Shouldn’t we talk about compilation?

Let’s go back to the compiler now. We left it after populating the top scope with the node’s facts, and we still hadn’t properly started the compilation phase itself.

Here is what happens after:

  1. Main class evaluation
  2. Node AST evaluation
  3. Evaluation of the node classes if any
  4. Recursive evaluation of definitions and collections (called generators)
  5. Evaluation of relationships
  6. Resource overrides evaluation
  7. Resource finish
  8. Ship the catalog

After that, what remains is the containment catalog. It will be transformed into a resource containment catalog. We call a resource catalog an instance of Puppet::Resource::Catalog in which all the Puppet::Parser::Resource instances have been transformed into Puppet::Resource instances.

Let’s now walk, in order, through the list of operations we outlined above, which together form the compilation.

Main class evaluation

The main class is a hidden class in which all code outside any definition, node or class ends up. It’s a kind of top class of which every other class is an inner class. This class is special because it has an empty name.

Evaluating the main class means:

  1. Creating a companion resource (an instance of Puppet::Parser::Resource) whose scope is the top scope.
  2. Adding this resource to the catalog
  3. Evaluating the class code of this resource

Let’s focus on this last step, which happens in Puppet::Parser::Resource.evaluate. It mainly involves getting, from the known types, the Puppet::Resource::Type instance matching our class (its type, in fact), and then calling the Puppet::Resource::Type.evaluate_code method.

Evaluating code of a class

I’m putting aside the main class evaluation to talk a little bit about code evaluation of a given class because that’s something we’ll see for every class or node during compilation.

This happens during Puppet::Resource::Type.evaluate_code which essentially does:

  1. Create a scope for this class (unless we’re compiling the main class which already uses the top scope)
  2. Ask the class AST children to evaluate with this scope

We saw in the Puppet Parser post how the AST was produced. Eventually some of those AST nodes end up in the code element of a given Puppet class (you can refer to the Puppet grammar and Puppet::Parser::AST::Hostclass for the code), in the form of an ASTArray (which is an array of AST nodes).

Node Evaluation

As for the main class, the current node compilation phase:

  • asks the known types about the current node and, if none is found, asks for a default node
  • creates a resource for this node and adds it to the catalog
  • evaluates this node resource

This last evaluation will execute the given node AST code.

Node class evaluation

If the node was provided by an ENC, the compiler will then evaluate its classes. This is the same process as for the main class: for every class we create a resource, add it to the catalog and then evaluate it.

Evaluation of Generators

In Puppet the generators are the entities that are able to spawn new resources:

  • collections, including storeconfig exported resources
  • definitions

This part of the compilation loops, calling evaluate_definitions and evaluate_collections until neither of them produces new resources.

Definitions

During the AST code evaluation, if the compiler encounters a definition call, Puppet::Parser::AST::Resource.evaluate will be called (as for every resource).

Since this resource comes from a definition, a type resource will be instantiated and added to the catalog. This resource will not be evaluated at this stage.

Later, when evaluate_definitions is called, it will pick up any resource that hasn’t been evaluated yet (which is the case for our definition resources) and evaluate them.

This operation might in turn create more unevaluated resources (i.e. new definitions spawning more definition resources), which will be evaluated in a subsequent pass of evaluate_definitions.

Collections

When the parser parses a collection, which is defined like this in the Puppet language:

File <<| tag == 'key' |>>

it creates an AST node of type Puppet::Parser::AST::Collection. The same happens if you use the realize function.

Later, when the compiler evaluates code and encounters this collection instance, it will create a Puppet::Parser::Collector and register it with the compiler.

Even later, during evaluate_collections, the evaluate method of every registered collector is called. This method will either fetch exported resources from storeconfigs or virtual resources, and create Puppet::Parser::Resource instances that are registered with the compiler.

If the collector has created all its resources, it is removed from the compiler.

Relationship evaluation

The current compiler holds the list of relationships defined with the -> class of relationship operators (but not the ones defined with the require or before meta-parameters).

During code evaluation, when the compiler encounters the relationship AST node (an instance of Puppet::Parser::AST::Relationship), it will register a Puppet::Parser::Relationship instance to the compiler.

During the evaluate_relationships method of the compiler, every registered relationship will be evaluated. This evaluation simply adds the destination resource reference to the source resource meta-parameter matching the operator.

Resource overrides

The next compilation phase consists in applying all the overrides we discovered during the AST code evaluation. Normally overrides are applied as soon as they are discovered, but it can happen that an override (especially a collection override) cannot be applied yet because the resources it should apply to have not been created yet.

Applying an override consists in setting a given resource parameter to the overridden value.

Resource finishing

During this phase, the compiler calls the finish method on every created resource. This method is responsible for:

  • adding resource defaults to the resource parameters
  • tagging the resource with the current scope tags
  • checking that resource parameters are valid

Resource meta-parameters

The next step in the compilation process is to set all the meta-parameters of our created resources, starting from the main class and walking the catalog from there.

Finish

Once everything has been done, the compiler runs some checks to make sure all overrides and collections have been evaluated. Then the catalog is transformed into a Puppet::Resource catalog (which doesn’t change its layout, just the class of its vertices).

Conclusion

I hope you now have a better view of the compilation process. As you’ve seen, compilation is a complex process, which is one of the reasons it can take some time. But that’s the price to pay to produce a data-only graph tailored to one host that can be applied on that host.

Stay tuned here for the next episode of my Puppet Internals series of posts. The next installment will certainly cover the Puppet transaction system, whose role is to apply the catalog on the agent.