Monday, December 16, 2013

How I maintain multiple rsyslog versions

Rsyslog is an enterprise-class project. Among other things, this means we need to provide different versions for quite a while (enterprises don't like to update every six months, and they have good reasons not to).

As such, we support multiple major versions. As of this writing, the versions used in practice are v5, v7, and the upcoming v8. There are also v0,...,v4 versions out there, and what I write applies to them equally. If there is development going on for a version, there is a vX-devel branch. This is for those who want the new features, obviously at the expense of a somewhat higher chance of instability. And then there is vX-stable, which is the stable branch. Once something is named vX-stable, it NEVER receives new features, just bug fixes. There is one exception to the naming rules: the most current development version is held inside the git master branch -- because that's where people expect it. So, currently, there is no v8-devel branch; it's named "master" instead.

Maintaining multiple versions sounds scary and cumbersome. Indeed, this fear came up when talking about maintaining multiple doc versions inside the rsyslog-doc project. Thanks to git, this is not the case. It's actually dead easy to do, once you get a little used to the workflow.

The key thing is to make changes in the oldest branch where they belong (with "oldest" being the one that the project really wants to support). So if there is a bug that's present in v7-stable, v7-devel and master, do NOT fix it in master. Check out v7-stable and fix it there. The same applies to doc updates. Once you have fixed it, all you need to do is merge the changes up. Especially for smaller changes, git does most of the hard plumbing. Occasionally, there may be some merge conflicts, but these can usually be solved quickly. There are some rare exceptions, usually as the result of big refactoring. For doc, merge conflicts are almost always trivial to solve (at least they have been in the past). It's advisable to merge up early. The longer you wait, the more work you need to do if a merge conflict occurs (because you probably don't remember well enough what you did). The drawback is that the git history becomes a little cluttered with merge entries, but that's how it is (nothing in life is free ;)).

So the workflow is as follows (I use v7-stable as the "oldest" version in this workflow sample; that's also the typical case):
  1. git checkout v7-stable
  2. update whatever you need to update
  3. git commit
  4. git checkout v7-devel
  5. git pull . v7-stable
  6. git checkout master
  7. git pull . v7-devel
  8. DONE
In essence, you make the change once and take 30 seconds to merge it up. Merge conflicts are rare and quite unlikely for doc changes. Those that happen are usually just additions to related areas, and so it takes another 30 seconds to fix them. All in all very painless.

If you check the git logs, you'll often find occurrences of the workflow above (just look for the merges, they really stand out).

Now there remains the question of multiple versions, not branches. For example, we have v7.4.0, 7.4.1, 7.4.2,... How do we handle updates to them? Well, easy: first of all, when 7.4.x is out, 7.4.(x-1) is NEVER changed again (why should it be? that's what the next version is for). So we actually do NOT do any updates to already released versions (again, think about the resulting mess). So it just boils down to being able to fetch the exact same version later. And this is extremely easy with git: just use a tag, that's what tags are meant for.

So whenever I do a release, the last thing I do after it is built and announced is a "git tag", e.g. "git tag v7.4.7". That way, I can later do "git checkout v7.4.7" and have exactly the version I am looking for. Note, though, that you need to do "git push --tags" in order to push the tags to your published repository.
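Put together, the release-related git steps look roughly like this (the version number is just an example):

# tag the released commit and publish the tag
git tag v7.4.7
git push --tags

# later, check out exactly that release again
git checkout v7.4.7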

Conclusion: using git branches and release tags makes it extremely easy to maintain a multitude of versions. Without git, this would be an almost undoable task (we had a really hard time in those CVS days...).

Saturday, December 14, 2013

Why is the rsyslog project running its own infrastructure?

Currently, there is a very valuable discussion going on on the rsyslog mailing list on how we can attract more contributors and how moving things to github can help with this. I was writing a longer reply, and then it occurred to me that it is probably better to blog about this topic, as it may be of future interest to have the current thinking (relatively) easily accessible.

The core question being asked is "Would it make more sense to leave all that sort of info in one place instead of pulling people from the *official* rsyslog repo on *github* over to *rsyslog.com*?" Then, it had some examples of how logstash uses github README files for that [full text here in the rsyslog mailing list archives].

These are my thoughts:
 
I don't object to placing a bit more into readme and related files. However, while this is one of the *official repos*, it's not the official *project site*, so I think it makes sense to ask people to go to the project site for anything that's non-trivial. Looking at the logstash things, to me it pretty much looks like they do the same thing. I have absolutely no problem putting the same information into README-like files inside the repository, as long as the authoritative pages reside on the official project web site (and yesterday I began to do so).

In essence, the question boils down to "why don't I want to be locked in into github?" It's (bad) experience. When rsyslog started, we used sourceforge.net intensively. At that time, it was as popular and as much "the place to be on" as github is today. Then, they got some crazy biz ideas, got technical and other problems and ... it really turned out to be a mess and bad for the project.

Also, we used Freshmeat for most of our public announcements around the same time (a bit later, and partly together with sf.net). All went extremely well, until all of a sudden they had the bright idea of a "redesign" that made the site unusable. Again, a bad hit for us.
 
I really don't want to see this again. If I use github exclusively, I have no chance to redirect things if they go crazy, and I have a hard time moving on to the next hot spot when it is born (because all the Google juice is with github).
 
I admit that I would like to have the buzz of "I tell you this is an active project, I can judge by the number of issues opened and closed (and so on)". But at the same time, I remember that this type of entanglement always turned against us after some time.

For example, I tend to file the bug number (actually a link to it) in both commit comments and the change log. If I now link to sf.net, github, etc. and they "go away" (some way or the other), all of these links become invalid and I don't have a chance to fix this (again, I've actually experienced that, so it's not pure theory). And such things happen. For a non-rsyslog example, think about the bitkeeper debacle, where they all of a sudden thought it was time to charge kernel developers for so-far free services.

Again, I would really like to have some of the cool things. With git, it's relatively safe, as I can move the repo around quickly, and as long as the rsyslog.com site contains the main pointer to where the current official git is, the risk is very limited. But other than that, my experience is that the short-term benefits come at the risk of severe long-term problems.

I am open to really good arguments why I am wrong. One good argument would probably be a complete OSS hosting service that has been in place for 15+ years without any interruption in user base or broken URLs - and that is still a hot spot. I guess sf.net mostly qualifies in that category, but as I said, we had our own story with them...

I frankly admit that I am very conservative in this regard. After all, I even post things like this on my personal blog, and not on a site directly owned by Adiscon. But, you may say, you use blogger, so don't you take a risk here? Well, you probably have noticed that I use blogger under a domain that I have full control over. So whenever they go crazy, I can move on to some other place (not totally effortlessly, but I am in control of those things).

Friday, December 13, 2013

rsyslog on github

In some pretty long discussions, it turned out that most (if not all) users were not aware that rsyslog has had an official repository on github for over six months now. So it probably is a good idea to tell the world.

While it probably is obvious, I would also like to say that I accept pull requests via github. Actually, this was always the case, even when I had no repository on github. If someone sends me a pull request, I fetch from wherever that someone's git is located, merge it into my git and push the final result. So there is nothing special about a repository that's on github.

Please note that we currently continue to provide rsyslog via the Adiscon git. With git, it's irrelevant where a repository is hosted -- git is not a server-based system. While I like github's interface, I do not like to lock rsyslog into github. We almost got locked into sourceforge.net in the early days of rsyslog development, and I was extremely happy that we resisted doing much more than just the CVS repository on SF when they had their really bad time. Github currently for sure is the place to be, but I like the ability to move away when the next cool thing pops up. As such, I have set up a new page that describes where the rsyslog repositories can currently be found. This page will be updated as need arises.

Please note that official repositories are peers, so all are equal (minus maybe a one-minute delta during push operations). Pick whichever you like most.
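If you want to track both official repositories from a single working copy, a quick sketch (the clone URLs are illustrative -- take the current ones from the repositories page):

# clone from github and add the Adiscon repository as a second remote
git clone https://github.com/rsyslog/rsyslog.git
cd rsyslog
git remote add adiscon git://git.adiscon.com/git/rsyslog.git
git fetch --all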

Thursday, November 28, 2013

On liblognorm1

Liblognorm is a fast, sample-based normalization library. Its brand-new version 1.0.0 will be released today. It is a major improvement over previous versions, but unfortunately we needed to change the API. So some notes are due.

Liblognorm has evolved over several years and was initially meant to be used primarily with the Mitre CEE effort. Consequently, the initial version of liblognorm (0.x) uses the libee CEE support library in its API.

As time evolved, the initial CEE schema underwent considerable change. Even worse, Mitre lost funding for CEE. While the CEE ideas survived as part of Red Hat-driven "Project Lumberjack", the data structures became greatly simplified and JSON based. That effectively made libee obsolete (and also in parts libestr, which was specifically written to support CEE's initial requirement of embedded NUL chars in strings).

Recently, Pavel Levshin converted liblognorm to native JSON, which helped improve performance and simplicity for many client applications. Unfortunately, this change broke interface compatibility (and there was no way to avoid that, obviously...).

The current library is the result of that effort. Application developers are encouraged to switch to this version, as it provides the benefit of a simpler API. This version is now being tracked by the git master branch.

However, if you need to stick to the old API, there is a git branch liblognorm0, which contains the previous version of the library. This branch is also maintained for important bug fixes, so it is safe to use.

We recommend that packagers create packages for both liblognorm0 and liblognorm1. Note that the two liblognorm development packages cannot coexist on the same system, as the pkg-config system would get into trouble. Adiscon's own packages follow this scheme.
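To check which development version is installed, a quick pkg-config query can help (a sketch, assuming the pkg-config module is registered as "lognorm"):

# 0.x indicates the old libee-based API, 1.x the new JSON-based one
pkg-config --modversion lognorm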

Note that rsyslog will soon begin to enjoy the benefits of liblognorm1. This results in a notable performance improvement for mmnormalize. Support will initially become available in v8.

Tuesday, November 26, 2013

rsyslog impstats analyzer reloaded

My co-worker Andre had a little time and extended the rsyslog impstats analyzer to support generating graphs. IMHO this gives you fantastic insight into how the system operates. While I know that some folks already push this data to their internal health monitoring system, the beauty of the online rsyslog impstats analyzer is that you do not need to install anything -- a log file with stats is all you need to get you going. Let's look at a quick sample. This is a page returned by the analyzer's check phase:

If you look closely, you notice that there are active links to the problem areas. Let's follow the one to action 3 queue:



Here, we see the problem in action: the queue initially behaves well, but relatively soon keeps its size at close to 1k messages. At the same time, the enqueue rate (green line) is much higher. Consequently, the discard rate (blue line) is getting pretty high. The delta between the discard and enqueue lines is what is actually processed: obviously far too few messages to keep up.

BTW: this chart is from a real-world case. One problem here was that the queue's discard mark was set too low (close to 1k), so the queue could never fill up beyond the 1k mark even though it had a much larger max size. When we fixed this, we saw that the queue consumer (a script) could actually not keep up with the message volume (not shown here). So this hint from the graph was also pointing to a real problem (but you need to fix one problem after another and then look at new stats).

Note that graphics can also be generated for non-problem counters - you can select from a menu on top of the pages (see first screenshot). The web app supports cumulated stats and can create delta values out of them. It also offers the ability to use logarithmic y-axis scaling, which is useful in some cases. The app does not handle imudp traffic well. The reason is that imudp reports both ipv4 and ipv6 listeners with the same counter name, so we don't have any chance to differentiate between them. An update for imudp is planned to address this.
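If you do not yet produce such a stats log, a minimal configuration sketch may look like this (values are just examples; with these settings, impstats emits its counters as syslog facility, debug severity messages):

module(load="impstats" interval="60" severity="7")
# write the stats messages to a dedicated file for the analyzer
syslog.=debug action(type="omfile" file="/var/log/rsyslog-stats.log")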

More enhancements for the statistics analyzer are planned. We are actively looking for feedback.

Friday, November 22, 2013

rsyslog statistics online analyzer

Based on recent discussions on the rsyslog mailing list, we have started an online tool to "analyze" impstats logs. So far, it detects "obvious" problems, but it is probably a good starting point for beginners. We plan to extend this tool.

It can be found here:
    http://www.rsyslog.com/impstats-analyzer/

I hope it is useful.

Friday, November 15, 2013

release window for rsyslog 7.6

There have been a couple of questions when rsyslog's next stable release (7.6) will be released.

Originally, the idea was to have this done by the end of this year, which essentially means end of November. Unfortunately, this needs to be pushed into early January 2014. The reason is code quality.

In October, we discovered a larger regression and inconsistency in regard to the new variable support in rsyslog. As it turned out, the only good choice was to seriously refactor the code handling variables and message properties. Unfortunately, such a larger refactoring also means bug potential, so I don't feel comfortable releasing 7.6 stable with just two weeks of testing on this refactored code. Looking forward, a release around mid-December sounds doable, but this means we'd hit the holiday break with a brand-new stable release. This is something that I definitely want to avoid. As such, the current plan is to do the release shortly after the holiday break.

In order to keep this release window, I will also avoid larger modifications to the code base. That doesn't rule out things like better impstats support in omelasticsearch, but it may mean that the updated imfile code will probably not be merged into v7, at least not now.

Depending on how things work out with rsyslog v8, I will either do another 7.7 devel series for less intrusive enhancements or move directly to v8 only. This will also be decided around January 2014, when we hopefully have enough feedback on v8.

Thursday, November 14, 2013

Samples for v8 module conversion

This is just a blog post on where to find samples of converting modules to the v8 output module interface. Additional information will be upcoming within the next days. Stay tuned.

Please bear in mind that the v8 output module interface is not stable at this time. It will very likely change within the next weeks.

Right now, the rsyslog v8 compatibility doc has some information on the new interface. It is expected that it will remain the best source to get at least an overview of the required changes.

Actual samples are best found in git. As I have various levels of updates (from very minimal to full), you'll probably find something to your liking. Very minimal change is required for message modification modules (assuming they were thread-safe in the first place), e.g. mmfields:

http://git.adiscon.com/?p=rsyslog.git;a=commitdiff;h=1c5a17241131129e9a1c184688500ad5b1383c28
It's actually just adding the new plumbing. That's all that's usually required for message modification modules. Still, note the fundamental difference that modules are run concurrently in v8, and as such must be reentrant and thread-safe (this requirement now applies to all output modules). Usually this is the case, but it may be different depending on the processing done by the module in question (e.g. lookups to backend systems, libraries, global data - to name some potential trouble spots).
omrelp is an example of a conversion to v8 that makes use of the new features:

If you browse git, the commit comments should help find more examples on a module-by-module basis.

Wednesday, November 13, 2013

On to the Lucky Number...

Do you remember last October? The rsyslog mailing list got very busy with things like output load balancing, global variables, and a lot of technical details of the rsyslog engine. I don't intend to reproduce all of this here. The interested reader may simply review the rsyslog mailing list archives, starting at October 2013.

Most of these discussions focused on the evolution of the rsyslog engine. Essentially, the v7 engine brought many new things, but was constrained by a lot of legacy, for example the output module interface, which originally (roughly 8 years ago) was designed to be very simple to use, especially for developers who did not know about threading (which was uncommon knowledge at that time). Also, the engine gained a lot of speed from a special SIMD-like execution mode (details in the mailing list archive), but that mode was counter-productive for some of the newer features. The ultimate problem was global variables, which were very hard to implement with the v7 engine. I even proposed a complete shadow variable system to make them fit into the framework, but this was -thankfully- not well received by the community at large.

Note that I had wanted to re-write at least part of the core engine for quite some while, but it sounded like such a large task that I deferred it time and again. However, seeing all the time going into the discussion, potential work-arounds (like shadow variables) and the still non-optimal state, I very strongly considered whether it might be time for a radical change. Better to spend the time on cool new things than to invest it in some "work-around" type of implementations.

With that on my mind, I had a couple of days of drafting and some talks with my peers. It turned out that we wanted to at least give this a try and see how (and if;)) things would evolve. So I did a week of test implementation, mostly focused on a new, highly concurrent output module interface. That did work out rather well, so we finally decided to go for the full length. I started rewriting the complete core rule execution engine.

And this brings us to the lucky number: let's welcome rsyslog v8, as the new version will be named. I thought for a short moment about sticking with v7, but the changes are really, really big and quite a bit of legacy and upward compatibility has been lost. For example, all output modules need to be rewritten. So while v8 may not be a great marketing tool to get it quickly into the distros, it correctly flags that some bigger changes are to be anticipated (but as usual we were very conservative with breaking existing configurations ;)).

I did not announce this effort any sooner because until now I wasn't sure it would really work out. Obviously, I now think things look sufficiently good ;-). It's not yet deployable, but we are getting closer.

Expect an 8.1.0 release within the next couple of days. We are working on the final finishing touches, many of which involve testing. Note that 8.1.0 will be highly experimental and really is for folks who would like to get an impression of its new capabilities ... and those trying to help find bugs.

There are some restrictions on supported modules and features, and some upgrades may run into trouble with that (details will be available together with the release). Also note that the rewrite is not yet 100% complete. Some of the finer details still need to be advanced, e.g. a new, higher performance and easier to use interface for output transactions. Or some aspects of message error processing ... and so on. This follows our usual pattern that in a new development branch we bring in new features gradually, until we have reached the final goal. In any case, we are now pretty confident that we will reach this goal. If all goes exceptionally well, a v8-stable may come up in March 2014 (and free support for v7 will be available for another month or two so that there is time to migrate).

To facilitate this move, git branches will change. Yesterday's master branch will become available as v7-devel (now in place) and the master branch will hold the new v8 code. Follow this blog post and the rsyslog mailing list to be sure to see exciting new versions when they become available.

Version 7-devel will get kind of a beta status, where mostly bug fixes are applied. However, some non-bugfix changes will also happen, but only after careful consideration. The focus obviously is now on v8 and this is where we will do the "cool" stuff in the future.

Note that the average user will possibly not notice much difference between v7 and v8 initially. The new engine will be most useful for power users and installations with complex rules and/or a high log volume.

I am very excited about this new branch of development and hope you'll also be excited as it unrolls. Also my sincere thanks to everyone who is waiting patiently for v7-related work. I was too busy to look at that the past few weeks and will probably be occupied with v8 for quite a bit longer.

Monday, November 04, 2013

On Executing a Ruleset in non-SIMD Mode

WARNING - WORK IN PROGRESS
This is probably inconsistent, not thought out, maybe even wrong. Use this information with care.

I wanted to share a complexity of executing an rsyslog ruleset in non-SIMD mode, which means one message is processed after the other. This posting is kind of a think tank and may also later be useful for others to understand the design and design decisions. Note, however, that I do not guarantee that anything will be implemented as described here.

As you probably know rsyslog supports transactions and, among other things, does this to gain speed from it. This is supported by many modules, e.g. database modules or the tcp forwarder. In essence, what happens is
  • transaction is begun
  • messages are fed to the action
  • transaction is stopped
For most operations, success/failure information is only available after step 3 (because step 2 mainly buffers to-be-processed messages).

This plays very well with the current SIMD engine, which advances processing one statement after the other and then processes the batch as a whole. That means all three steps are done at one instance when the relevant statement is processed. This permits the engine to read the final result before advancing state to the next action.
Now envision a non-SIMD engine. It needs to start the transaction as soon as the first message is ready for processing, and then submit messages as they are processed. However, in non-SIMD mode, each action will only carry out step 2, and then immediately advance to the next statement. This happens for each message. Only after ruleset processing is completed can the final commit occur.
This means that we do not have the actual outcome of the operation until ruleset processing has finished. Most importantly, the result of statement n is no longer available in statements s with s > n. Some features (like failover processing -- execIfPreviousIsSuspended) depend on the previous statement's result, so this is a problem. Even more so, retrying an action is far from trivial, as we no longer necessarily have the exact same information that was gathered during processing available when processing is done (later statements may have changed message state).
There are some solutions to these problems:
  1. if execIfPreviousIsSuspended is specified, the statement in front of it must be forced to commit after each message (which costs quite a lot of performance, but at least only if the user turns on that feature)
  2. To mitigate the performance loss of those auto-commits, we could add a new syntax which explicitly permits specifying failover actions. These could be linked to the primary statement and NOT be executed during regular rule engine execution. We would still need to buffer (message pointers) in case they are to be executed, but that's probably better. Most importantly, there would be no other conditional logic supported, which makes processing them rather quick.
  3. The message object must support a kind of "copy on write" (which would even be very useful with the v7 engine, which also permits more updates than any pre-v8 engine ever was designed for...). This could be done by splitting the (traditional) immutable part of the message structure from things like variables. Message modification modules modifying the "immutable" part would need to do a full copy, others not (the usual case). Of course, this copy on update can make variable operations rather costly.
  4. Output modules could be forced to perform a kind of "retry commit" -- but this is a bad option because it a) puts (repetitive) burden on the output (in essence, the output faces exactly the same problems like the core engine, EXCEPT probably that it knows better which exact data items it needs -- easy for traditional template based interface). b) it removes from the engine the ability to re-try parts of the transaction. So this is not very appealing.
  5. In any case, the actual "action retry handling" should probably be applied to the commit interface, far less than the usual submit interface.
What complicates things even more is that we do not know what modules that use the message passing interface actually do with the messages. In current code, most of them are message modification modules. This means that in order for them to work correctly, they actually need to execute on the message "right in time". And, of course, there is even more complexity in that each output may do partial commits at any time. The most popular case probably is when it runs out of some buffer space.

To solve these issues, a two-step execution inside the rule system seems desirable:
  • execution phase
  • commit phase
Note that this two-phase approach is already very similar to what action queues do. However, this also means that action queues in pre-v8 can be victim to race conditions if variables are heavily used.

In any case, using action queues to perform these two steps seems very natural and desirable. Unfortunately, there is still considerable overhead attached to this (mutex operations, required context switches), which makes this very unattractive. The end result of taking this path would probably be a reduced overall processing speed, something we really don't like to see. Also, failover processing would not work if following that path.

Execution Phase
  1. advance message state - message modification (mm) modules must be called immediately
  2. "shuffle" msgs to actions - the main concern here is to make sure that the action sees an immutable action object, at least in regard to what it needs from it (we may need to add an entry point to ask the action to extract whatever it actually needs and return a pointer to that - not necessary for simple strings, for a prominent example).
    Note that doAction is never called for non mm-modules.
At the end of execution phase, for each action we have an array of to-be-processed data.

Commit Phase
For each action with data, we submit the data to its action, performing all three steps. This way, we can easily keep track of the state advancement and action errors. It would be easy to implement dedicated failover processing at this stage (but this probably requires larger state info if the failover action is different from the primary one).

This two-phase approach somewhat resembles the original batching/SIMD idea of the pre-v8 engine. So it looks like this design was well up to the point of what we need to do. I am still a bit undecided if doing these engine changes is really worth it, but so far code clarity seems to be much better. Performance possibly as well, as the SIMD mode needed to carry a lot of state around.

I will now probably do a test implementation of the two-phase approach, albeit only for the traditional string interface.

Some ideas/results from the test implementation:
  • The structure used to store messages could -LATER- be made the structure that is actually queued in action queues, enabling faster performance (in-memory) and unified code.
  • Note: storing the result string template instead of the message object between phases has the advantage that we do not need to keep the message immutable for this reason. It needs to be seen, though, whether that really is an advantage in the overall picture (the question is: can we get to a point where we actually do NOT need to do copy-on-write -- obviously this would be the case if only string templates are used).

Friday, October 25, 2013

A Proposal for Rsyslog State Variables

As was discussed at great length on the rsyslog mailing list in October 2013, global variables as implemented in 7.5.4 and 7.5.5 were not a valid solution and have been removed from rsyslog. At least in this post I do not want to summarize all of this - so for the details please read the mailing list archive.

Bottom line: we need some new facility to handle global state and this will be done via state variables. This posting contains a proposal which is meant as basis for discussion. At the time of this writing, nothing has been finalized and the ultimate solution may be totally different. I just keep it as blog posting so that, if necessary and useful, I can update the spec towards the final solution.

Base Assumptions
There are a couple of things that we assume for this design:
  • writing state variables will be a very infrequent operation during config execution
  • the total number of state variables inside a ruleset is very low
  • reading occurs more often, but still is not a high number (except for read-only ones)
Syntax
State variables look like the previous global variables (e.g. "$/var" or "$/p!var"), but have different semantics based on the special restrictions given below.

Restrictions
Restrictions provide the correctness predicate under which state vars are to be evaluated. Due to the way the rsyslog engine works (SIMD-like execution, multiple threads), we need to place such restrictions in order to gain good performance and to avoid a prohibitive amount of development work. Note that we could remove some of these restrictions, but that would require a lot of development effort and considerable sponsorship.

  1. state variables are read-only after they have been accessed
    This restriction applies on a source statement level. For example,
    set $/v = $/v + 1;
    is considered as one access, because both the read and the write is done on the same source statement (but it is not atomic, see restriction #3). However,
    if $/v == 10 then
        set $/v = 0;
    is not considered as one access, because there are two statements involved (the if-condition checker and the assignment).
    This rule requires some understanding of the underlying RainerScript grammar and as such may be a bit hard to interpret. As a general rule of thumb, one can say that using a state variable on the left-hand side of a set statement is the only safe case where this is to be considered "one access".
  2. state variables do not support subtree access
    This means you can only access individual variables, not full trees. For example, accessing $/p!var is possible, but accessing $/p will lead to a runtime error.
  3. state variables are racy between different threads except if special update functions are used
    State variable operations are not done atomically by default. Do not mistake the one-time source-level access with atomicity:
    set $/v = $/v + 1;
    is one access, but it is NOT done atomically. For example, if two threads are running and $/v has the value 1 before this statement, the outcome after executing both threads can be either 2 or 3. If this is unacceptable, special functions which guarantee atomic updates are required (those are spec'ed under "new functions" below).
  4. Read-only state variables can only be set once during rsyslog lifetime
    They exist as an optimization to support the common use case of setting some config values via state vars (like server or email addresses).
Basic Implementation Idea
State variables are implemented by both a global state as well as message-local shadow variables. The global state holds all state variables, whereas the shadow variables contain only those variables that were already accessed for the message in question. Shadow variables, if they exist, always take precedence in variable access and are automatically created on first access.

As a fine detail, note that we do not implement state vars via the "usual" JSON method, as this would both require more complex code and may cause performance problems. This is the reason for the "no-subtree" restriction.

Data Structure
State vars will be held in a hash table. They need some decoration that is not required for the other vars. Roughly, a state var will be described by
  • name
  • value
  • attributes
    • read-only
    • modifiable
    • updated (temporary work flag, may be solved differently)
Implementation
This is pseudo-code for the rule engine. The engine executes statement by statement.

for each statement:
   for each state var read:
      if is read-only:
         return var from global pool
      else:
         if is in shadow:
            return var from shadow pool
         else:
            read from global pool
            add to shadow pool, flag as modifiable
            return value
   for each state var written:
      if is read-only or already modified:
         emit error message, done
      if is not in shadow:
         read from global pool
         add to shadow pool, flag as modifiable
      if not modifiable:
         emit error message, done
      modify shadow value, flag as modified


at end of each statement:
   scan shadow var pool for updated ones:
      propagate update to global var space
      reset modified flag, set to non-modifiable
   for all shadow vars:
      reset modifiable flag

Note: the "scanning" can probably done much easier if we assume that the var can only be updated once per statement. But this may not be the case due to new (atomic) functions, so I just kept the generic idea, which may be refined in the actual implementation (probably a single pointer access may be sufficient to do the "scan").

The lifetime of the shadow variable pool equals the message lifetime. It is destructed only when the message is destructed.

New Statements
In order to provide some optimizations, new statements are useful. They are not strictly required, so this functionality may be left out (initially).
  • setonce $/v = value;
    sets read-only variable; fails if var already exists.
  • eval expr;
    evaluates an expression without the need to assign the result value. This is useful to make atomic functions (see "New Functions") more performant (as they don't necessarily need to store their result).
New Functions
Again, these offer desirable new functionality, but can be left out without breaking the rest of the system. But they are necessary to avoid cross-thread races.
  • atomic_add(var, value)
    atomically adds "value" to "var". Works with state variables only (as it makes no sense for any others).
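To illustrate how the proposed statements and functions could fit together, here is a purely hypothetical usage sketch (none of this is implemented at the time of writing):

# set a read-only state variable exactly once, e.g. a target address
setonce $/target = "server.example.net";

# atomically increment a counter without needing the result value
eval atomic_add($/msgcount, 1);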
This concludes my current state of thinking. Comments, both here or on the rsyslog mailing list, are appreciated. This posting will be updated as need arises and no history kept (except where vitally important).

Special thanks to Pavel Levshin and David Lang for their contributions to this work!

Friday, October 11, 2013

rsyslog's imudp now multithreaded

Rsyslog is heavily threaded to fully utilize modern multi-core processors. However, the imudp module has so far worked on a single thread. We always considered this appropriate and no problem, because the module basically pulls data off the OS receive buffers and injects it into rsyslog's internal queues. However, some folks expressed the desire to have multiple receiver threads, and there were also some reports that imudp ran close to 100% CPU in some installations.

So starting with 7.5.5, imudp itself supports multiple receiver threads. The default is to use a single thread as usual, but via the "threads" module parameter, up to 32 receiver threads can be configured. We introduced this limit to prevent naive users from totally overrunning their system capability - spawning a myriad of threads usually is quite counter-productive (especially when they outnumber the available processor cores). For the same reason, I would strongly suggest that the number of threads only be increased if there is some evidence for this to be useful -- which usually means the imudp thread should require considerable CPU time. In order to aid the decision, I have also added new rsyslog statistics counters which permit monitoring of the worker thread activity.
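A minimal configuration sketch (the values are just examples):

# use two receiver threads for all imudp inputs
module(load="imudp" threads="2")
input(type="imudp" port="514")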

We will now evaluate practical feedback from the new feature. One of the goals of this enhancement is to reduce the risk of UDP message loss due to buffer overruns, which we hope to have achieved even without the need to select realtime priority.

Please note that 7.5.5 is at the time of this writing not yet released, so for the next couple of days the new feature is only available via building from the git master branch.

Friday, October 04, 2013

New Queue Defaults in rsyslog 7.5

As regular readers of my blog know, we are moving towards preferring enterprise needs over low-end system needs in rsyslog. This is part of the changes in the logging world induced by the systemd journal (the full story can be found here).

Many of the main queue and ruleset queue default parameters were a compromise, and much more in favor of low-end systems than enterprises. Most importantly, the queue sizes were very small, done so in an effort to save virtual memory space. With the 7.5.4 release, this will change. While the default size was 10,000 msgs so far, it has been increased 10-fold to 100,000. The main reason is that the inputs nowadays batch together quite some messages, which gives us very good performance on busy systems. It is not uncommon that e.g. the tcp input submits 2,000 messages at once. With the previous defaults, that meant the main queue could hold 5 such submissions. Now we have much more head room.

Note that some other changes were made alongside. The dequeue batch size has been increased to 256 from the previous value of 32. The max number of worker threads has been increased to two, removing our previous conservative setting of one. At the same time, we now require at least 40,000 messages to be inside the queue before the second worker is activated. So this will only happen on very busy systems. Note that the previous value of 100 messages was really an artifact of long-gone times and usually meant immediate activation of the maximum number of workers, which was quite contrary to the intention of that parameter.

Of course, these are just changed defaults. They can always be overridden by explicit settings. For those configs that already did this, nothing changes at all.
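For example, a config that prefers the old, more conservative values can simply set them explicitly; a sketch using the main_queue() object (assumed to be available in your version):

# restore the previous, smaller main queue defaults
main_queue(queue.size="10000" queue.dequeueBatchSize="32" queue.workerThreads="1")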

Tuesday, September 10, 2013

imfile multi-line messages

As most of you know, rsyslog permits pulling multiple lines from a text file and combining them into a single message. This is done with the imfile module. Up until version 7.5.3, this led to a message which always had the LF characters embedded. That usually posed little problem when the same rsyslog instance wrote the message immediately to another file or database, but caused trouble with a number of other actions. The most important example of the latter is plain tcp syslog.

That industry standard protocol uses LF as a frame delimiter. This means a syslog message is considered finished when a LF is seen and everything after the LF is a new message. Unfortunately, the protocol does not provide a special escape mechanism for embedded LFs. This makes it simply impossible to correctly transmit messages with embedded LFs via plain tcp syslog (for more information, see RFC6587, section 3.4).

To solve this situation, rsyslog provides so-called "octet counted" framing, which permits transmission of any characters. While this is a great solution for rsyslog-to-rsyslog transmission, there are few other programs capable of working in that mode. So interoperability is limited.

Even worse, most log processing tools (primarily those working on files) do not expect multi-line messages. Usually they get very confused if LFs are included.

In short, embedded LFs are evil in the logging world. It was probably not a great idea to generate them when imfile processes multi-line messages.

Starting with rsyslog version 7.5.3, this problem has now been solved. Now, imfile escapes LF to the four-character sequence "#012", which is rsyslog's standard (octal) control character escape sequence. With this escaping in place, there will be problems neither at the protocol layer nor with other log processing applications. If for some reason embedded LFs are needed, there is a new imfile input() parameter called "escapeLF". If set to "off", embedded LFs will be generated. We assume that when a user does this, he also knows what he does and how to handle those embedded LFs.

This behaviour could obviously break existing configurations. So we have decided not to turn on LF escaping for file monitors defined via legacy statements. These are most probably the ones that do not want it and have probably long dealt with the resulting problems.

As always, it is highly suggested that new configurations use the much easier to handle input() statement, which also has LF escaping turned on by default. Note that you cannot use LF escaping together with imfile legacy config statements. In that case, you must switch to the new style.

So this construct:

$InputFileName /tmp/imfile.in
$InputFileTag imfile.in
$InputFileStateFile imfile.in
$InputFileReadMode 1
$InputRunFileMonitor


Needs to become that one:

input(type="imfile" file="/tmp/imfile.in"
      statefile="imfile.in" readMode="2" tag="imfile.in")

Again, keep in mind that in new style LF escaping is turned on by default, so the above config statement is equivalent to:

input(type="imfile" file="/tmp/imfile.in" escapelf="on"
      statefile="imfile.in" readMode="2" tag="imfile.in")

This latter sample is obviously also correct.
To turn off LF escaping in new style, use:

input(type="imfile" file="/tmp/imfile.in" escapelf="off"
      statefile="imfile.in" readMode="2" tag="imfile.in")

I hope this clarifies reasons, usefulness and how to handle the new imfile LF escaping modes.

Tuesday, July 16, 2013

rsyslog: why disk-assisted queues keep a file open

From time to time, someone asks why rsyslog disk-assisted queues keep one file open until shutdown. So it probably is time to elaborate a bit about it.

Let's start with explaining what can be seen: if a disk-assisted queue is configured, rsyslog will normally use the in-memory queue. It will open a disk queue only if there is good reason to do so (because going to disk severely hurts performance). The prime reason to go to disk is when the in-memory queue's configured size has been exhausted. Once this happens, rsyslog begins to create spool files, numbered consecutively. This should normally happen in cases where e.g. a remote target is temporarily not reachable or a database engine is responding extremely slowly. Once the situation has been cleared, rsyslog clears out the spool files. However, one spool file is always kept until you shut rsyslog down. Note well that when the disk queue is idle, all messages are processed even though the physical spool file still contains some already-processed data (impstats will show you the exact details).

This is expected behaviour, and there is good reason for it. This is what happens technically:

A DA queue is actually "two queues in one": it is a regular in-memory queue, which has a (second) helper disk queue in case it needs it. As long as operations run smoothly, the disk queue is never used. When the system starts up, only the in-memory queue is started. Startup of the disk queue is deferred until it is actually needed (as in most cases it will never be needed).

When the in-memory queue runs out of space, it starts that disk queue, which then allocates its queue file. In order to reclaim space, not a single file is written but a series of files, where old files are deleted when they are processed and new files are created on an as-needed basis. Initially, there is only one file, which is read and written. And if the queue is empty, this single file still exists, because it is the representation of a disk queue (just like the in-memory mapping tables for memory queues always exist, they just cannot be seen by the user).

So what happens in the above case is that the disk queue is created, put into action (the part where it writes and deletes files) and is then becoming idle and empty. At that stage, it will keep its single queue file, which holds the queue's necessary mappings.

One may now ask "why not shut down the disk queue if it is no longer needed"? The short answer is that we anticipate that it will be re-used and thus do not make the effort to shut it down and restart it when the need arises again. Let me elaborate: experience tells us that when a system needs the disk queue once, it is highly likely to need it again in the future. The reasons for disk queues to kick into action are often cyclic, like scheduled restarts of remote systems or database engines (as part of a backup process, for example). So we assume that if we used it once, we will most probably need it again and so keep it ready. This also helps reduce potential message loss during the switchover process to disk - in extreme cases this can happen if there is high traffic load and slim in-memory queues (remember that starting up a disk queue takes comparatively long).

The longer answer is that in the past rsyslog tried to shut down disk queues to "save" resources. Experience told us that this "saving" often resulted in wasted resources. Also, the need to synchronize disk queue shutdown with the main queue was highly complex (just think about cases where we shut it down at the very moment the in-memory queue begins to need it again!). This caused quite some overhead even when the disk queue was not used (again, this is the case most of the time - if properly sized). An indication of this effect probably is that we could remove roughly 20% of rsyslog's queue code when we removed the "disk queue shutdown" feature!

Bottom line: keeping the disk queue active once it has been activated is highly desirable, and as such seeing a queue file around until shutdown is absolutely correct. Many users will never even see this spool file, because there are no exceptional circumstances that force the queue to go to disk. So they may be surprised if it happens every now and then.

Side-Note: if disk queue files are created without a target going offline, one should consider increasing the in-memory queue size, as disk queues are obviously much less efficient than memory queues.
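For reference, a disk-assisted action queue is typically set up along these lines (a sketch only; the target, file name and sizes are illustrative):

# queue.filename enables the disk-assisted part of the in-memory queue
action(type="omfwd" target="central.example.net" protocol="tcp"
       queue.type="LinkedList" queue.size="200000"
       queue.filename="fwd_da_queue" queue.maxDiskSpace="1g"
       queue.saveOnShutdown="on")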

Friday, June 28, 2013

rsyslog on AIX: who wants to help porting?

Thanks to the recent IBM contribution of a partial rsyslog 5.8.6 port to AIX, we have come across that platform again. Unfortunately, some license issues prevent me from merging the IBM contribution into the current rsyslog release (7.4+). I tried to work with IBM on resolving these issues, but it just occurred to me that actually doing the port ourselves is probably easier than wrangling with license issues. I could set some time aside to make changes (as a by-activity, aka "may be sluggish at times"), but I have neither the time, machines nor know-how to do it all. David Lang has already said he has some machines to try to compile on, but we still have know-how issues.

So the question now is: would some more folks be interested in rsyslog on AIX? If so, could you join a porting effort and provide us with advice? The most pressing thing currently is setting up an environment where we can compile at all. Possibly, once this is done, we may run into issues that require some more in-depth programming know-how, but that's to be seen (and must not necessarily happen).

Anyone interested in helping with this please either comment to this blog posting or make yourself heard on the rsyslog mailing list or send me a private mail.

PS: all code derived from this work needs to be placed under ASL 2.0 - just so that you know.

Thursday, June 27, 2013

rsyslog under ASL 2.0: why I can't simply do that

The ASL 2.0 topic boiled up again due to a much-appreciated IBM contribution to make rsyslog 5.8.6 work on AIX. Unfortunately, this contribution was done under GPLv3+. I tried to work with IBM to have it released under ASL 2.0, but their legal department is of the opinion that this is not possible. This resulted in some restrictions, which can be found in the git branch's README file. Most importantly, it's a dead-end branch which cannot be merged up to new versions.

As an option, IBM said if I would release rsyslog 5.8.6 under ASL 2.0, they could release their patch under ASL 2.0 as well. Unfortunately, I cannot do this by just declaring so.

You need to keep in mind that I do not own the complete copyright! Actually, there are a couple of hundred contributors that I can find in git history ... and there are even more in the original sysklogd, whom I can't even all identify. As such, it is simply impossible for me to change the license "on my own".
To reach the current state, I did quite intense research on who contributed what some time (two years maybe?) ago. I concentrated on "easy" files (new ones without sysklogd code) and contacted all contributors to ask if they agree on ASL 2.0. Those files that I got agreement on are now under ASL 2.0. For the "hard" files, I even did some research on which of them were still from the original syslogd, which was released under BSD. I replaced these files/code sequences with the BSD version (which is identical), and so could put that under ASL 2.0. But still there is a notable body of code left that needs to be under GPLv3+. Some can probably be changed with more author contacting (especially Red Hat contributions, where I have a general "go" for most things), but some code definitely needs to be rewritten. I guess the additional research takes at least some weeks, with the rewrite taking maybe another month or so.
Bottom line: it's far from easy and there is no pressing need. But I don't want to fall back on the effort just because of the IBM contribution. I would need to rewrite it in any case, so there is no point in merging it mainstream.

Wednesday, June 26, 2013

Joining the Guardtime Technical Advisory Board

I just wanted to let everyone know that I will be joining the Guardtime technical advisory board. The board's prime mission is to make sure that Guardtime implements technology that users need.

While implementing the rsyslog log signature interface and its first signature provider, I worked very closely and productively with the Guardtime folks. Over time, we exchanged more and more ideas, and the whole thing culminated when I visited their office in Tallinn, Estonia, where we had even better discussions and hacking sessions.

Guardtime seems to have enjoyed this work as well, and so I was recently asked if I would like to join their to-be-formed technical advisory board (TAB). At first, I was a bit hesitant because I wondered if there is any conflict of interest between being on the Guardtime TAB and driving rsyslog forward. So I talked a bit with the Guardtime folks to clarify my position. As it turns out, the whole point of the TAB is to assemble experts from different fields and let them discuss and provide information on what end users really need. The rationale is very obvious: Guardtime as a company can only succeed if they provide what the market really needs. There is no point in trying to make the market pick up some product or technology that has no real value. Too many companies tried that in the past and failed miserably. So, in that sense, I can act as the logging users' agent inside the TAB and help push Guardtime in a direction that really is of value for logging. As I already think their current technology is a really good fit for what we need in logging, this is an exciting opportunity.

I also admit that it is very appealing that I will get ready access to Guardtime's team of qualified cryptographers (including chief scientist Ahto Buldas, who helped build the Estonian national PKI), which is a really big win in regard to secure cryptography in rsyslog. I never before had the chance to work with such top-notch crypto people. In fact, they already provided feedback on the non-Guardtime encryption bits in rsyslog, and I am sure that working closer with them will help us get very solid cryptography into rsyslog. If in doubt, just look at the rsyslog git tree and commit history for when all this new crypto stuff got in :-)

As a side-note, I am also talking with some other party about a custom module using an HMAC-based tamper-proofing system for log records. While this is not as strong as the Guardtime signature provider, there are obviously some use cases for it as well. If that project materializes, it will of course be merged into the main rsyslog git tree. So, for the concerned, rsyslog will definitely not get a Guardtime bias. After all, my main interest is driving rsyslog forward and, again, you can do that against what "the market" demands.

One of the things I discussed with the Guardtime cryptographers while I was in Tallinn was how we could extend log integrity proofs to a full network relay case, where relays may even drop messages. This is a very hard and as-yet unsolved problem (for details see my paper on log signing in the relay case). While we got no ready solution from the initial discussion, we had some very promising ideas. This for sure is nothing we'll see within the next quarter or so, but I am happy I now have the opportunity to discuss this and other hard problems with the knowledgeable (and very friendly, btw ;)) folks from Guardtime.

So me getting on the TAB is, I think, a win-win-win situation for rsyslog, its user base and of course also Guardtime. I hope a lot of good stuff will come out of it.

Sunday, June 16, 2013

systemd journal endlessly sends data to imjournal

When Fedora updated to rsyslog 7.4.0 in Fedora 19, they changed the default way local log messages are obtained. In all previous releases, imuxsock, the traditional Unix/Linux input, was used. That also works in conjunction with the journal, at least as far as traditional syslog is involved, because the journal provides these messages via the log socket it passes to rsyslog. Unfortunately, the journal hides information if structured content is available. In order to obtain that, Red Hat has contributed a new module, imjournal, which reads the journal database via native journal APIs and as such gets hold of all data. With 7.4.0, the Fedora maintainers switched the default input in the desire to get full logging information into the system.

Unfortunately, many users rapidly began to report serious problems with duplicate messages exhausting their system resources. As can be seen in the systemd journal database corruption entry in the Red Hat bug tracker, the root cause of this problem is that the journal database is corrupted and the native journal API thus endlessly returns the same messages over and over again.

The authors of imjournal did not use rsyslog's rate-limiting capability in this module. Actually, there was good reason not to do so, as the journal itself provides some rate-limiting, so doing it twice doesn't sound like a good idea.

If you dig deeper into the journal bug, you'll notice that this is not a new problem. It occurs with the journal tools as well, and has obviously done so for quite a while. In my opinion, it now surfaced as a big issue simply because imjournal was the first application ever to make serious use of the journal database.

After I received reports on this problem, I acted immediately. First and foremost, I put a warning message into imjournal telling about the risk of relying on the journal database. I can obviously not solve that problem at its root, but I mitigated it by adding rate-limiting to the module. The current git version by default does not permit more than 20,000 messages within a 10-minute interval, so even with a broken journal, no more than 120,000 messages per hour should be emitted. That's as far as I can go in handling this situation. Note that there is already a fix for the current problem in the journal; it is also included in the bug tracker entry I mentioned above. However, in the past couple of weeks I talked to quite a few folks about rsyslog/journal integration, and journal database corruption was frequently mentioned. Nobody told me about loops, but as I guard against future problems, I will keep the rate-limiting default active in imjournal.
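
As a minimal sketch, these defaults can be tuned via imjournal's rate-limiting parameters (the values below simply restate the defaults mentioned above; adjust them for your environment):

module(load="imjournal"
       ratelimit.interval="600"   # 10-minute window, in seconds
       ratelimit.burst="20000")   # max messages permitted per window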

Note that the rsyslog project has "officially" always suggested using imjournal only in cases where there is a hard need for it (this was a result of talking to the end users). Thankfully, I published a presentation on rsyslog and systemd journal integration including this fact just last week, so I can even prove that recommendation ;)

How will we go forward? If you need a quick patch, you should rebuild rsyslog from the git v7-stable branch. For a single patch that addresses the ratelimiting, you can use this one. The full patch set will be included in 7.4.1. We are working with Red Hat to release it ASAP, hopefully tomorrow.

Another viable solution is to simply disable imjournal in rsyslog.conf and replace it with the former imuxsock configuration. Please note that this problem does not affect any version of rsyslog installed from Adiscon's repositories, because we never use imjournal by default there.
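
As a rough, hedged sketch of what that replacement might look like (the exact surrounding directives vary by distribution, so check your distro's original rsyslog.conf):

# comment out or remove the imjournal input, then load the traditional
# local log socket input instead
module(load="imuxsock")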

I hope this clarifies the root cause of this problem, how to mitigate it and how the rsyslog team intends to prevent further harm from potential systemd journal database corruption.

Saturday, June 15, 2013

rsyslog performance: main and action queue workers

Rsyslog has both "main" message queues and action queues. [Actually, "main message queues" are queues created for a ruleset; "main message queue" is an old-time term that was preserved even though it is no longer accurate.]

By default, both queue types are limited to one worker at most. The reason is that this is sufficient for many systems and it cannot lead to message reordering. If multiple workers are concurrently active, messages will obviously be reordered, as the order then, among other things, depends on thread scheduling.

So for now let's assume that you want to utilize a multi-core machine. Then you most probably want to increase the maximum number of main message queue workers. The reason is that main queue workers process all filters for all rules inside the rule set, as well as full action processing for all actions that are not run on an asynchronous (action) queue. In typical setups, this offers ample opportunity for concurrency. Intense tests on the v5 engine have shown near-linear scalability up to 8 cores, with still good improvements for higher core counts (but increasing overhead). Later engines most probably do even better, but have not been rigorously performance tested (doing that right is a big effort in itself).
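
As a minimal sketch, the main queue worker limit can be raised with the legacy directive below (the value 4 is purely illustrative; tune it to your core count and workload):

# allow up to 4 concurrent main message queue worker threads
$MainMsgQueueWorkerThreads 4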

Action queues have a much more limited concurrency potential because they do only a subset of the work (no filtering, for example; just generating the strings and executing the actual plugin action). The output module interface traditionally permits only one thread at a time to be present within the actual doAction() call of the plugin. This simplifies writing output plugins, but would be needed in any case, as most of them cannot properly handle truly concurrent operation (think for example about writing to sequential files or a TCP stream...). For most plugins, the doAction() part is what takes most of the processing time. HOWEVER, multiple threads on action queues can build strings concurrently, which can be a time-consuming operation (especially when regexes are involved). But even then it is hard to envision that more than two action queue worker threads can make much sense.

So the bottom line is that you need to increase the main queue worker threads to get much better performance. If you need to go further, you can fine-tune action queue worker threads, but that's just that: fine-tuning.

Note that putting "fast" actions (like omfile) on an async action queue just to be able to specify action threads is the wrong thing to do in almost all situations: there is some inherent overhead with scheduling the action queue, and that overhead will most probably eat up any additional performance gain you get from the action queue (even more so, I'd expect that usually it will slow things down).

Action queues are meant to be used with slow (database, network) and/or unreliable outputs.

For all other types of actions, even long-running ones, increasing the main queue worker threads makes much more sense, because this is where most concurrency is possible. So for "fast" actions, use direct action queues (aka "no queue") and increase the main queue workers.
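
To illustrate, here is a hedged configuration sketch (host name and values are illustrative, not prescriptive): the fast file output stays on the default direct queue, while a slow network forwarding action gets its own small asynchronous queue:

# fast action: leave it on the direct queue (the default, no queue.* parameters)
action(type="omfile" file="/var/log/messages")

# slow/unreliable action: give it an async queue with a couple of workers
action(type="omfwd" target="central.example.com" port="514" protocol="tcp"
       queue.type="LinkedList" queue.workerThreads="2")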

Finally, a special note on low-concurrency rulesets. Such rulesets have limited inherent concurrency. A typical example is a ruleset that consists of a single action. For obvious reasons, the number of things that can be done concurrently is very limited. If it is a fast action, and there is little effort involved in producing the strings (most importantly no regex), it is very hard to gain extra concurrency, especially as high overhead is involved with such fine-grained concurrency. In some cases, the output plugin may help. For example, omfile can do background writes, which will definitely help in such situations.

You are in somewhat better shape if the string builder is CPU intense, e.g. because it contains many and complex regexes. Remember that strings can be built in parallel to executing an action (if run on multiple threads). So in this case, it makes sense to increase the max number of threads. It makes even more sense to increase the default batch size. That is because the strings for the whole batch are built first, and then the action plugin is called. So the larger the batch, the larger these two partitions of work are. For a busy system, a batch size of 10,000 messages does not sound unreasonable in such cases. When it comes to which worker threads to increase, again increase the main queue workers, not the action ones. It must be reiterated that this gives the rsyslog core more concurrency to work with (larger chunks of CPU-bound activity) and avoids the extra overhead (though relatively small) of an async action queue.
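
A minimal sketch of the corresponding legacy directive for the batch size (the value only mirrors the example above):

# dequeue up to 10,000 messages per batch from the main message queue
$MainMsgQueueDequeueBatchSize 10000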

I hope this clarifies worker thread settings a bit more.

Friday, June 14, 2013

recent librelp development

I thought I'd share some news on what I have been busy with and what I intend to do in the future. In the past days, I have added more config options to librelp, which now supports GnuTLS compression methods, provides the ability to set the Diffie-Hellman key strength (number of bits) and - for experts - allows setting the GnuTLS priorities, which select the cipher methods and other important aspects of TLS handling.

This is done now, and I have also added rsyslog facilities to use these new features. Some of this stuff is not yet released, but will be soon.
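
On the rsyslog side, these knobs surface as additional tls.* parameters on the RELP modules. As a hedged sketch (parameter names as they appear in the later imrelp documentation; port and values are illustrative assumptions):

module(load="imrelp")
input(type="imrelp" port="20514"
      tls="on"
      tls.compression="on"          # GnuTLS compression
      tls.dhbits="2048"             # Diffie-Hellman key strength
      tls.prioritystring="NORMAL")  # expert: GnuTLS priority string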

The next big step is preventing man-in-the-middle attacks. I will most probably use SSH-type fingerprint authentication, so that no full PKI is necessary to make this work. I guess implementing this feature set will take a couple of days; I will keep you posted on how things progress.

Wednesday, June 12, 2013

What is a rsyslog signature provider?

Some folks asked what a (rsyslog) signature provider is. In essence, it is just a piece of software written to a specific interface.

There are few functional requirements for signature providers. Most obviously, we expect that a provider offers some useful way to sign logs, which means it should be able to provide an integrity proof of either a complete log file or individual log records some time after the logs were recorded.

There is no "official signature provider spec" available. As usual in open source, the provider interface is well defined inside its actual source header file. Take a look for example at rsyslog's git to see the definition.

If you look at that code, the interface is fairly simple. The primary entry points a programmer needs to define are

  • OnFileOpen
  • OnRecordWrite
  • OnFileClose
As the names imply, they are called whenever a log file is opened, written to, and closed. Again, what the provider does at these events is up to the provider.

The initial provider that ships directly with rsyslog uses a secure remote signing service provided by Guardtime. There are various reasons why I selected this provider, and I elaborated at length on them in my blog posting on why I selected Guardtime as the first rsyslog signature provider. In a nutshell, it is a method that does not rely on local secrets and as such cannot be circumvented (again: details in my other posting). Obviously, Guardtime offers paid services, but they also operate free services for everyone to use (a key point when I evaluated that technology).

If you have privacy concerns about the Guardtime provider, you may want to read my blog posting on just that question.

Note that signature provider implementations can use the above entry points in any way they like. For example, they may ignore file open and close and do all work in record write. So this small interface is highly flexible. If you look at the full interface description, you'll also see some more events which primarily provide housekeeping functionality. If you intend to write your own signature provider, I suggest starting with a copy of the Guardtime provider and changing it according to your needs. That way, you have a great example of how the housekeeping calls are handled.
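
Just to illustrate the shape of such a provider, here is a purely hypothetical C skeleton. It is not the actual rsyslog interface (see the source header in git for that); the function names below only mirror the three entry points mentioned above, and the context type is an assumption:

#include <stdio.h>
#include <stddef.h>

/* hypothetical per-file signing context */
typedef struct sig_ctx { FILE *sigfile; } sig_ctx_t;

/* called when a log file is opened: e.g. open a companion ".sig" file
 * and initialize whatever proof structure the provider maintains */
int OnFileOpen(sig_ctx_t *ctx, const char *filename) { return 0; }

/* called for every record written: e.g. hash the record and chain it
 * into the proof structure */
int OnRecordWrite(sig_ctx_t *ctx, const unsigned char *rec, size_t len) { return 0; }

/* called when the log file is closed: e.g. finalize and store the proof */
int OnFileClose(sig_ctx_t *ctx) { return 0; }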

Tuesday, June 11, 2013

new rsyslog devel branch: 7.5

After the successful and important release of 7.4 stable, we are working hard on further improving rsyslog. Today, we release rsyslog 7.5.0, which opens up the new development branch.

With that branch, we'll further focus on security. Right with the first release, we provide an often-demanded feature: TLS support for the Reliable Event Logging Protocol (RELP). Even better, we also support the compression feature that is included in GnuTLS, so you can use im/omrelp not only with TLS, but also turn on compression if you like.

There is already a guide for TLS-secured RELP with rsyslog available. It was written for the experimental 7.3.16 release, which was never officially announced. So the guide contains some (now unnecessary) build steps, but also a full example of client and server configurations.

Note that the current TLS support offers anonymous authentication only (via Diffie-Hellman key exchange). This is due to the current librelp implementation. However, librelp is scheduled to become more feature-rich, and rsyslog will support the new capabilities once they are available. In general, you can expect some more enhancements (and more fine-grained config control for those in need) for rsyslog's RELP subsystem.
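
As a hedged sketch of what the basic setup looks like (host name and port are illustrative; see the guide above for the authoritative configuration):

# server side: accept TLS-protected RELP, with GnuTLS compression enabled
module(load="imrelp")
input(type="imrelp" port="20514" tls="on" tls.compression="on")

# client side: forward via RELP over TLS
module(load="omrelp")
action(type="omrelp" target="server.example.com" port="20514" tls="on")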

There are more things to come, especially in the security context. So stay tuned to what the next version of rsyslog will provide!

Monday, June 10, 2013

rsyslog: How can I see which field contains which data item?

A topic that comes up on the rsyslog mailing list or support forum very often is that folks do not know exactly which values are contained in which fields (or properties, as they are called in rsyslog, e.g. TAG, MSG and so on).

So I thought I'd write a quick blog post on how to find that out. I admit, I do this mostly to save myself some typing and to have it at hand for reference in the future.

This is such a common case that rsyslog contains a template which will output all fields. The basic idea is to use that template and write all messages into a special file. A user can then look up that file, find the message in question, and see exactly which field/property is populated with which data.

It's very easy to do. All you need to do is place this at the top of your rsyslog.conf:


*.* /var/log/debugfmt;RSYSLOG_DebugFormat


and restart rsyslog. After that, all messages will be written to /var/log/debugfmt. Note that each message is written across multiple lines, listing each of its properties. This is a very rough example of how this may look:


Debug line with all properties:
FROMHOST: 'linux', fromhost-ip: '127.0.0.1', HOSTNAME: 'linux', PRI: 46,
syslogtag 'rsyslogd:', programname: 'rsyslogd', APP-NAME: 'rsyslogd', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Jun 10 18:56:18', STRUCTURED-DATA: '-',
msg: ' [origin software="rsyslogd" swVersion="7.4.0" x-pid="4800" x-info="http://www.rsyslog.com"] start'
escaped msg: ' [origin software="rsyslogd" swVersion="7.4.0" x-pid="4800" x-info="http://www.rsyslog.com"] start'
inputname: rsyslogd rawmsg: ' [origin software="rsyslogd" swVersion="7.4.0" x-pid="4800" x-info="http://www.rsyslog.com"] start'


Note that I lied a bit: it's not actually all properties, but the most important ones (among others, the JSON properties are missing, as they can be a large amount of data - but I may add them in later versions).

If you want to know what, for example, APP-NAME is, you can simply look it up (here it is "rsyslogd").

Note that I have deliberately given this example in the legacy configuration language, so that it can be used by all users of rsyslog. You can of course also make use of the more advanced configuration formats.
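
For reference, a sketch of the equivalent in the newer RainerScript syntax (available in v6+):

action(type="omfile" file="/var/log/debugfmt" template="RSYSLOG_DebugFormat")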

Saturday, June 08, 2013

PRISM - and Google and Facebook Intelligence

While almost everyone (including me) is talking about the PRISM system, it may be worth stepping a little back and looking at one of the underlying problems.

Looking closely, PRISM drew its data from just a couple of large sources, including Google and Facebook. But, again, it's very few sources. While I guess they combine this data with intelligence they have gathered elsewhere, the fact remains that only two handfuls of sites seem to store quite a bit of information about everyone on the Internet.

So the question probably is: what do privately owned corporations like Google and Facebook do with our data? Privacy advocates never tire of warning about privacy concerns in regard to the Internet giants' services. In fact, the big guys probably have much more insight into their users' habits than intelligence agencies had in the past about high-profile people.

And it's not only their users: think about all those Facebook "Like" buttons, Google "+1" buttons or even Google Analytics, which is used on an extremely large number of sites. Not to mention Blogger (which I am using right now ;)) or YouTube, or... Even if you are not logged in to your G+ or FB account - or don't even have one - they gather information about you based on your IP address or geographical location. With IPv6, this problem will become even worse, as it is much easier to continuously identify someone surfing the web, especially if the ISP isn't looking too much into privacy enhancements.

And it's undeniable that those giants not only have but actively use this wealth of personal profile information: just look at how targeted ads are nowadays and how customized search results are, based on what the companies know about each and every one of us. Actually, this is what makes up the (astronomical) market value of these companies: not their services per se, but rather their intelligence capabilities over the user base - and how they make use of them.

The best example in my opinion is probably Google Now: it is absolutely astonishing and fascinating how well Google predicts what is of interest to me. Of course it is also pretty frightening, if you just think about it for a couple of seconds. Plus, users even give away their roaming profile to use this service, which simply is too good to turn off over privacy concerns. I have to admit I use it, too, at least every now and then - knowing that only continuous use makes it really valuable to the user - and to Google.

Of course, there are good reasons to use services like Facebook and Google - we all do it routinely. In some situations it's even unavoidable to use them. But all of that is probably a story for a different blog posting.

Let me conclude that PRISM should also be a wake-up call about how much power over our personal lives we put into the hands of a very small set of large commercial companies. After all, this is what the government is after, trying to even better understand our actions by combining various data sources. In that light, does one really wonder why governments are not so eager about new and better privacy laws and enforcing them?

Bottom line? There is not only Web 2.0, there is also Privacy 2.0. While the former got great features, the latter got pretty feature-less, at least from the personal privacy point of view.

Thursday, June 06, 2013

new rsyslog 7.4 stable branch released

We just released rsyslog 7.4.0, a new stable release which replaces the 7.2 branch. After nine months of hard work, there are many exciting enhancements compared to 7.2, and I thought I'd give you a quick rundown of the more important new features. Note that while I list some "minor features" at the end of this posting, the list is not complete. I left out those things that are primarily of interest for smaller groups of users. So if you are looking for a specific feature not mentioned here, it may pay to look at the ChangeLog or post a question to the rsyslog mailing list.

With this release, the rsyslog project officially supports version 7.4. Support for 7.2 will gradually be phased out. If support for older versions is required, utilizing Adiscon's professional services for rsyslog is recommended.

Note that I only list main headlines for each of the features. Follow links (where provided) to gain more in-depth information.

Security Package
Improved Rate-Limiters
  • introduction of Linux-like rate-limiting for all inputs
  • "Last message repeated n times" now done on a per-input basis; makes it much more useful AND increases processing speed.
Systemd Journal support
  • omjournal writes messages to the journal
  • imjournal obtains messages including journal-specific meta data items from the journal
Performance Improvements
  • Disk Queue Performance greatly improved
  • DNS cache greatly improved and optimized
  • output file writer (omfile) now fully supports background writing
  • script optimizer now changes frequently-used long-running filters into equivalent quick ones when possible (this even affects some distros' default configs and is a great performance saver)

Minor Features
  • various plugins now support the new RainerScript based config language
  • omlibdbi improvements, among them support for transactions
  • ommysql now supports transactions
  • improved omfile zip writing speed
  • performance optimizations
  • omelasticsearch improvements (bulk processing mode, local error file)
  • omudpspoof now supports packets larger than 1472 bytes by using fragmentation
  • omrelp now supports session timeout
  • contributed module (not project-supported) omrabbitmq was added

Tuesday, June 04, 2013

rsyslog journal support

We expect that rsyslog and the systemd journal will be found together in quite a few scenarios (if you are curious what exactly we mean by that, check the "rsyslog vs. journal?" posting).

As such, it makes a lot of sense to think about providing integration facilities. Thanks to rsyslog's modular architecture, it wasn't very hard to provide the necessary building blocks. In the 7.3 experimental branch, two new modules (omjournal and imjournal) have been developed. They provide the capability to write to the journal as well as to pull data out of it. Usually, the latter is not really necessary, as journald still provides log messages to the system log socket. But unfortunately, the journal developers have decided to pass on only a subset of the logging information; they exclude the structured data content. However, such data is only available if applications use the journal's own logging API, which is currently not really the case. So right now, using just the regular system log socket input should be sufficient in almost all cases. However, should structured data become more prominent in the journal, using imjournal gains rsyslog access to it.
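
As a minimal, hedged sketch of how the two modules are loaded (defaults only; both modules accept further parameters):

# write messages to the systemd journal
module(load="omjournal")
action(type="omjournal")

# read messages (including structured data) from the journal database
module(load="imjournal")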

For some more background information on the integration, you can also watch a quick presentation that I recorded:


Monday, June 03, 2013

rsyslog logfile encryption

Starting with version 7.3.11, rsyslog provides native support for log file encryption. In order to keep things highly flexible, both an encryption provider interface as well as an initial provider have been designed and implemented.

The provider interface enables the use of different encryption methods and tools. All that needs to be done is to write the actual provider to that interface. The encryption provider (called a "crypto provider") then implements everything necessary for the actual encryption. Note that decryption is not part of the rsyslog core. However, a new user tool - rscryutil - is provided by the project to handle this task. This tool is currently considered to be part of the crypto provider; consequently, there is no specific interface for it. The reasoning behind that decision is that there is very little provider-independent plumbing in this tool, so abstracting things looks a bit like over-engineering (we may change that view in the future based on how things evolve).

The initial crypto provider is based on libgcrypt, which looks like an excellent choice for (almost?) all use cases. Note that we support symmetric cryptography only inside the crypto provider. This is simply due to the fact that public/private key cryptography is too slow to be used for mass encryption (and that is what we do with log files!). Keep in mind that even TLS uses symmetric cryptography for the actual session data (for the same reason), and uses public/private key cryptography only for the exchange of the symmetric key. In any case, folks using this functionality in high-security environments are advised to check their exact security requirements. Periodic re-keying of the encrypted log files may be necessary.

Now on to "how does it work?": The encryption functionality, at the action level, is enabled by specifying a crypto provider, for example as follows:

action(type="omfile" file="/var/log/logfile"
       cry.provider="gcry" 
       )

This tells the action to load the crypto provider. What then happens is up to the crypto provider. For obvious reasons, just loading it is not sufficient; you need at least to provide a crypto key. How this is done depends on the crypto provider. It is assumed that all crypto providers use the "cry." config parameter name space. With the gcry provider, a full action may look like this:

action(type="omfile" file="/var/log/logfile"
       cry.provider="gcry" cry.key="testtesttesttest"
       )

This is the bare minimal set of parameters for gcry - it'll use defaults for the algorithm and use the key directly as specified in the configuration file.

Specifying the key directly in the configuration is usually a bad idea: in most setups, many or even all users can read the config file, so the key is far from being secret. We even thought of not permitting this option, as it is so insecure. We left it in, however, as it is a great help in initial testing.

For production use, there are two other modes: one is to read the key from a side-file (which needs to be sufficiently secured!), the other is to obtain it via a special program that can acquire the key by any method it likes. The latter is meant for higher-security use cases. We assume that the (user-written) program can do all those "interesting things" that you can do to manage crypto keys in a secure manner. Access to the key program, of course, needs to be properly secured in that case.
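
As a hedged sketch of these two modes (parameter names per the gcry provider; paths are purely illustrative):

# mode 1: read the key from a side-file (secure its permissions!)
action(type="omfile" file="/var/log/logfile"
       cry.provider="gcry" cry.keyfile="/etc/rsyslog.d/log.key")

# mode 2: obtain the key from a user-supplied key program
action(type="omfile" file="/var/log/logfile"
       cry.provider="gcry" cry.keyprogram="/usr/local/bin/getlogkey")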

I hope this gives you a glimpse of how rsyslog log file encryption works. For details on the crypto provider parameters, see the official rsyslog gcry crypto provider documentation.

TLS for librelp

If you followed librelp's git, you have probably already noticed that there is increased activity. This is due to the fact that TLS support is finally being added! Thanks to some unnamed sponsor, we could invest "a bit" of time to make this happen.

We have decided to base TLS support on GnuTLS, which has matured very much, is preferred by Debian, is fully supported by Red Hat, and has no GPL licensing issues like openssl has (plus the sponsor also preferred it). We build TLS support directly into librelp, as we assume it will become very popular, so an abstraction layer would not make that much sense, especially given the fact that GnuTLS nowadays is almost always installed by default. And remember that an abstraction layer always adds code complexity and an (albeit limited) runtime overhead.

Librelp 1.1.0 will be the first version with basic TLS support. With "basic", we mean that this is a full TLS implementation, but some useful additional features are not yet present. Most importantly, this version will not support certificates but rather work with anonymous Diffie-Hellman key exchange. This means that while the integrity and privacy of the session can be guaranteed as far as the network is concerned, this version does not guard against man-in-the-middle attacks. The reason simply is that there is no way to mutually authenticate peers without certificates. We still think it makes a lot of sense to release that version, as it greatly improves the situation.

Obviously, we have plans to add certificate support in the very near future. And this also means we will add ways for mutual authentication, much like in rsyslog's RFC 5425 implementation. Whether we will support all authentication options RFC 5425 offers (some may not be very relevant in practice) is so far undecided. We currently strongly consider starting with fingerprint-based authentication, as this permits mutual authentication without the need to set up a full-blown PKI. Also, most folks know fingerprint authentication: this is what ssh does when it connects to a remote machine.

So stay tuned to librelp development, many more exciting things are coming up. Please note that rsyslog 7.5.0 will be the first version to utilize the new librelp features - but that's something for a different blog posting.

[This is also cross-posted to the librelp site]

Tuesday, May 28, 2013

rsyslog vs. systemd journal?

I gave an invited talk on this topic at LinuxTag 2013 in Berlin. I was originally asked to talk about "rsyslog vs. journal", but requested that a question mark be added: "rsyslog vs. journal?". This title much better reflects our current thinking in regard to the journal project.

Rather than elaborating on our position, I thought it would be easier to just share the slide deck - and the full paper I have written on it. In a nutshell, both answer the question of what we currently think of the journal, where we see which technology deployed, and which cool things rsyslog can do to enhance enterprise logging. There is also a very interesting history lesson included. But enough of that, on to the real things:


The paper should definitely have all the details you ever want to know (well... ;)) and is a good read if you want to dig deeper:


Rsyslog vs Systemd Journal (Paper) from Rainer Gerhards

Note: the PDF can be downloaded directly from slideshare (use the "Save" button right on top of the paper).