Thursday, November 28, 2013

On liblognorm1

Liblognorm is a fast, sample-based log normalization library. Its brand-new version 1.0.0 will be released today. It is a major improvement over previous versions, but unfortunately we needed to change the API. So some notes are due.

Liblognorm has been evolving for several years and was initially meant to be used primarily with the Mitre CEE effort. Consequently, the initial version of liblognorm (0.x) uses the libee CEE support library in its API.

As time evolved, the initial CEE schema underwent considerable change. Even worse, Mitre lost funding for CEE. While the CEE ideas survived as part of the Red Hat-driven "Project Lumberjack", the data structures became greatly simplified and JSON-based. That effectively made libee obsolete (and also, in part, libestr, which was specifically written to support CEE's initial requirement of embedded NUL characters in strings).

Recently, Pavel Levshin converted liblognorm to native JSON, which helped improve performance and simplicity for many client applications. Unfortunately, this change broke interface compatibility (and there was no way to avoid that, obviously...).

The current library is the result of that effort. Application developers are encouraged to switch to this version, as it provides the benefit of a simpler API. This version is now being tracked by the git master branch.
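To give an impression of what "sample-based" means in practice, here is a minimal, hypothetical rulebase line (the field names and the message shape are invented for illustration):

```
rule=:%host:word% sent %bytes:number% bytes
```

Feeding a matching log line such as "srv1 sent 42 bytes" through the library then yields a JSON object containing the extracted "host" and "bytes" fields, which is exactly the simplification the native-JSON API brings to client applications.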

However, if you need to stick to the old API, there is a git branch liblognorm0, which contains the previous version of the library. This branch is also maintained for important bug fixes, so it is safe to use.

We recommend that packagers create packages for both liblognorm0 and liblognorm1. Note that the two liblognorm development packages cannot coexist on the same system, as the pkg-config system would get into trouble. Adiscon's own packages follow this scheme.

Note that rsyslog will soon begin to enjoy the benefits of liblognorm1. This results in a notable performance improvement for mmnormalize. Support will initially become available in v8.
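For rsyslog users, the normalization library stays hidden behind mmnormalize, so switching remains a matter of configuration; a minimal sketch in current config syntax (the rulebase path is a made-up example):

```
module(load="mmnormalize")
action(type="mmnormalize" rulebase="/etc/rsyslog.d/sample.rb")
```

The rulebase file referenced here would contain sample lines as understood by liblognorm.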

Tuesday, November 26, 2013

rsyslog impstats analyzer reloaded

My co-worker Andre had a little time and extended the rsyslog impstats analyzer to support generating graphs. IMHO this gives you fantastic insight into how the system operates. While I know that some folks already push this data to their internal health monitoring system, the beauty of the online rsyslog impstats analyzer is that you do not need to install anything -- a log file with stats is all you need to get you going. Let's look at a quick sample. This is a page returned by the analyzer's check phase:

If you look closely, you notice that there are active links to the problem areas. Let's follow the one to action 3 queue:

Here, we see the problem in action: the queue initially behaves well, but relatively soon keeps its size at close to 1k messages. At the same time, the enqueue rate (green line) is much higher. Consequently, the discard rate (blue line) gets pretty high. The delta between the discard and enqueue lines is what is actually processed: obviously far too few messages to keep up.
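For reference, the counters behind such a queue graph come from impstats log lines along these lines (the values here are invented, and the exact set of counters varies by rsyslog version):

```
rsyslogd-pstats: action 3 queue: size=996 enqueued=1020 full=0 discarded.full=0 discarded.nf=0 maxqsize=1000
```

The analyzer parses these lines, so a plain log file containing them is all the input it needs.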

BTW: this chart is from a real-world case. One problem here was that the queue's discard mark was set too low (close to 1k), so that the queue could never fill up past the 1k mark even though it had a much larger max size. When we fixed this, we saw that the queue consumer (a script) actually could not keep up with the message volume (not shown here). So this hint from the graph was also pointing to a real problem (but you need to fix one problem after another and then look at new stats).

Note that graphics can also be generated for non-problem counters - you can select from a menu on top of the pages (see first screenshot). The web app supports cumulated stats and can create delta values out of them. It also offers the ability to use logarithmic y axis scaling, which is useful in some cases. The app does not handle imudp traffic well. The reason is that imudp reports both IPv4 and IPv6 listeners with the same counter name, so we have no chance to differentiate between them. An update for imudp is planned to address this.

More enhancements for the statistics analyzer are planned. We are actively looking for feedback.

Friday, November 22, 2013

rsyslog statistics online analyzer

Based on recent discussions on the rsyslog mailing list, we have started building an online tool to "analyze" impstats logs. So far, it detects only "obvious" problems, but it is probably a good starting point for beginners. We plan to extend this tool.

It can be found here:

I hope it is useful.

Friday, November 15, 2013

release window for rsyslog 7.6

There have been a couple of questions when rsyslog's next stable release (7.6) will be released.

Originally, the idea was to have this done by the end of this year, which essentially means end of November. Unfortunately, this needs to be pushed into early January 2014. The reason is code quality.

In October, we discovered a larger regression and inconsistency in regard to the new variable support in rsyslog. As it turned out, the only good choice was to seriously refactor the code handling variables and message properties. Unfortunately, such a large refactoring also brings the potential for bugs, so I don't feel comfortable releasing 7.6 stable with just two weeks of testing on this refactored code. Looking forward, a release around mid-December sounds doable, but that means we would hit the holiday break with a brand-new stable release. This is something I definitely want to avoid. As such, the current plan is to do the release shortly after the holiday break.

In order to keep this release window, I will also avoid larger modifications to the code base. That doesn't rule out things like better impstats support in omelasticsearch, but it may mean that the updated imfile code will probably not be merged into v7, at least not now.

Depending on how things work out with rsyslog v8, I will either do another 7.7 devel series for less intrusive enhancements or move directly to v8 only. This will also be decided around January 2014, when we hopefully have enough feedback on v8.

Thursday, November 14, 2013

Samples for v8 module conversion

This is just a quick blog post on where to find samples of converting modules to the v8 output module interface. Additional information will be upcoming within the next days. Stay tuned.

Please bear in mind that the v8 output module interface is not stable at this time. It will very likely change within the next weeks.

Right now, the rsyslog v8 compatibility doc has some information on the new interface. It is expected that it will remain the best source to get at least an overview of the required changes.

Actual samples are best found in git. As I have various levels of updates (from very minimal to full), you'll probably find something for your liking. Very minimal change is required for message modification modules (assuming they were thread-safe in the first place), e.g. mmfields:;a=commitdiff;h=1c5a17241131129e9a1c184688500ad5b1383c28
It's actually just adding the new plumbing. That's all that's usually required for message modification modules. Still, note the fundamental difference that modules run concurrently in v8 and as such must be reentrant and thread-safe (this requirement now applies to all output modules). Usually this is the case, but it may differ depending on the processing done by the module in question (e.g. lookups to backend systems, libraries, global data, to name some potential trouble spots).
omrelp is an example of a conversion to v8 that makes use of the new features:

If you browse git, the commit comments should help find more examples on a module-by-module basis.

Wednesday, November 13, 2013

On to the Lucky Number...

Do you remember last October? The rsyslog mailing list got very busy with things like output load balancing, global variables, and a lot of technical details of the rsyslog engine. I don't intend to reproduce all of this here. The interested reader may simply review the rsyslog mailing list archives, starting at October 2013.

Most of these discussions focused on the evolution of the rsyslog engine. Essentially, the v7 engine brought many new things, but was constrained by a lot of legacy, for example the output module interface, which was originally (roughly 8 years ago) designed to be very simple to use, especially for developers who did not know about threading (which was uncommon knowledge at that time). Also, the engine gained a lot of speed from a special SIMD-like execution mode (details in the mailing list archive), but that mode was counter-productive for some of the newer features. The ultimate problem was global variables, which were very hard to implement with the v7 engine. I even proposed a complete shadow variable system to make them fit into the framework, but this was -thankfully- not well received by the community at large.

Note that I had wanted to re-write at least part of the core engine for quite a while, but it sounded like a task so large that I deferred it time after time. However, seeing all the time going into the discussion, the potential work-arounds (like shadow variables) and the still-suboptimal state, I very seriously considered whether it might be time for a radical change. Better to spend the time on cool new things than invest it in "work-around" type implementations.

With that on my mind, I had a couple of days of drafting and some talks with my peers. It turned out that we wanted to at least give this a try and see how (and if ;)) things would evolve. So I did a week of test implementation, mostly focused on a new, highly concurrent output module interface. That worked out rather well, so we finally decided to go the full length. I started rewriting the complete core rule execution engine.

And this brings us to the lucky number: let's welcome rsyslog v8, as the new version will be named. I thought for a short moment to stick with v7, but the changes are really, really big and quite a bit of legacy and upward compatibility has been lost. For example, all output modules need to be rewritten. So while v8 may not be a great marketing tool to get it quickly into the distros, it correctly flags that some bigger changes are to be anticipated (but as usual we were very conservative with breaking existing configurations ;)).

I did not announce this effort any sooner because until now I wasn't sure it would really work out. Obviously, I now think things look sufficiently good ;-). It's not yet deployable, but we are getting closer.

Expect an 8.1.0 release within the next couple of days. We are working on the final finishing touches, many of which involve testing. Note that 8.1.0 will be highly experimental and really is for folks who would like to get an impression of its new capabilities ... and those trying to help find bugs.

There are some restrictions on supported modules and features, and some upgrades may run into trouble with that (details will be available together with the release). Also note that the rewrite is not yet 100% complete. Some of the finer details still need to be advanced, e.g. a new, higher-performance and easier-to-use interface for output transactions, or some aspects of message error processing ... and so on. This follows our usual pattern that in a new development branch we bring in new features gradually, until we have reached the final goal. In any case, we are now pretty confident that we will reach this goal. If all goes exceptionally well, a v8-stable may come up in March 2014 (and free support for v7 will be available for another month or two so that there is time to migrate).

To facilitate this move, git branches will change. Yesterday's master branch will become available as v7-devel (now in place) and the master branch will hold the new v8 code. Follow this blog post and the rsyslog mailing list to be sure to see exciting new versions when they become available.

Version 7-devel will get kind of a beta status, where mostly bug fixes are applied. However, some non-bugfix changes will also happen, but only after careful consideration. The focus obviously is now on v8 and this is where we will do the "cool" stuff in the future.

Note that the average user will possibly not notice much difference between v7 and v8 initially. The new engine will be most useful for power users and installations with complex rules and/or a high log volume.

I am very excited about this new branch of development and hope you'll also be excited as it unrolls. Also my sincere thanks to everyone who waits patiently for v7 related work. I was too busy to look at that the past few weeks and will probably be occupied with v8 for quite a bit longer.

Monday, November 04, 2013

On Executing a Ruleset in non-SIMD Mode

This is probably inconsistent, not thought out, maybe even wrong. Use this information with care.

I wanted to share a complexity of executing an rsyslog ruleset in non-SIMD mode, which means one message is processed after the other. This posting is kind of a think tank and may also later be useful for others to understand the design and design decisions. Note, however, that I do not guarantee that anything will be implemented as described here.

As you probably know, rsyslog supports transactions and, among other things, uses them to gain speed. Transactions are supported by many modules, e.g. database modules or the tcp forwarder. In essence, what happens is
  • transaction is begun
  • messages are fed to the action
  • transaction is stopped
For most operations, success/failure information is only available after step 3 (because step 2 mainly buffers to-be-processed messages).

This plays very well with the current SIMD engine, which advances processing one statement after the other and processes the batch as a whole. That means all three steps are done in one place when the relevant statement is processed. This permits the engine to read the final result before advancing state to the next action.
Now envision a non-SIMD engine. It needs to start the transaction as soon as the first message is ready for processing, and then submit messages as they are processed. However, in non-SIMD mode, each action will only carry out step 2 and then immediately advance to the next statement. This happens for each message. Only after ruleset processing is completed can the final commit occur.
This means that we do not have the actual outcome of the operation until ruleset processing has finished. Most importantly, the result of statement n is no longer available in statements s with s > n. Some features (like failover processing -- execIfPreviousIsSuspended) depend on the previous statement's result, so this is a problem. Even more so, retrying an action is far from trivial, as we no longer necessarily have the exact same information that was gathered during processing available when processing is done (later statements may have changed message state).
There are some solutions to these problems:
  1. if execIfPreviousIsSuspended is specified, the statement in front of it must be forced to commit after each message (which costs quite a lot of performance, but at least only if the user turns on that feature)
  2. To mitigate the performance loss of those auto-commits, we could add a new syntax which explicitly permits specifying failover actions. These could be linked to the primary statement and NOT be executed during regular rule engine execution. We would still need to buffer (message pointers) in case they are to be executed, but that's probably better. Most importantly, no other conditional logic would be supported, which makes processing them rather quick.
  3. The message object must support a kind of "copy on write" (which would even be very useful with the v7 engine, which already permits more updates than any earlier engine was ever designed for...). This could be done by splitting the (traditionally) immutable part of the message structure from things like variables. Message modification modules modifying the "immutable" part would need to do a full copy, others not (the usual case). Of course, this copy on update can make variable operations rather costly.
  4. Output modules could be forced to perform a kind of "retry commit" -- but this is a bad option because a) it puts a (repetitive) burden on the output (in essence, the output faces exactly the same problems as the core engine, EXCEPT that it probably knows better which exact data items it needs -- easy for the traditional template-based interface), and b) it removes from the engine the ability to retry parts of the transaction. So this is not very appealing.
  5. In any case, the actual "action retry handling" should probably be applied at the commit interface rather than at the usual submit interface.
What complicates things even more is that we do not know what modules that use the message passing interface actually do with the messages. In current code, most of them are message modification modules. This means that in order for them to work correctly, they actually need to execute on the message "right in time". And, of course, there is even more complexity in that each output may do partial commits at any time. The most popular case is probably when it runs out of some buffer space.

To solve these issues, a two-step execution inside the rule system seems desirable:
  • execution phase
  • commit phase
Note that this two-phase approach is already very similar to what action queues do. However, this also means that action queues in pre-v8 can fall victim to race conditions if variables are heavily used.

In any case, using action queues to perform these two steps seems very natural and desirable. Unfortunately, there is still considerable overhead attached to this (mutex operations, required context switches), which makes it very unattractive. The end result of taking this path would probably be reduced overall processing speed, something we really don't like to see. Also, failover processing would not work if we followed that path.

Execution Phase
  1. advance message state - message modification (mm) modules must be called immediately
  2. "shuffle" msgs to actions - the main concern here is to make sure that the action sees an immutable action object, at least in regard to what it needs from it (we may need to add an entry point to ask the action to extract whatever it actually needs and return a pointer to that - not necessary for simple strings, for a prominent example).
    Note that doAction is never called for non mm-modules.
At the end of execution phase, for each action we have an array of to-be-processed data.

Commit Phase
For each action with data, we submit the data to the action, performing all three steps. This way, we can easily keep track of state advancement and action errors. It would be easy to implement dedicated failover processing at this stage (but this probably requires larger state info if the failover action is different from the primary one).

This two-phase approach somewhat resembles the original batching/SIMD idea of the pre-v8 engine. So it looks like that design was well up to the point of what we need to do. I am still a bit undecided whether these engine changes are really worth it, but so far code clarity seems to be much better. Performance possibly as well, as the SIMD mode needed to carry a lot of state around.

I will now probably do a test implementation of the two-phase approach, albeit only for the traditional string interface.

Some ideas/results from the test implementation:
  • The structure used to store messages could -LATER- be made the structure that is actually queued in action queues, enabling faster (in-memory) performance and unified code.
  • Note: storing the result string template instead of the message object between phases has the advantage that we do not need to keep the message immutable for this reason. It remains to be seen, though, whether that really is an advantage in the overall picture (the question is whether we can get to a point where we actually do NOT need copy-on-write; obviously this would be the case if only string templates are used).