Wednesday, December 09, 2009

rsyslog - feature "schedule"

Every now and then somebody asks what "release schedule" I have in mind for future rsyslog releases. Today, it happened again, so I'll do my usual blog post ;). Long-time readers of this blog will know that this is a snapshot of what I have on my mind - open source development is quite dynamic, and so what I actually implement can be quite different - and has been so in the past. It may be a good time to read my blog post describing how I assign priorities to rsyslog work.

For the foreseeable future, I have two primary things on my mind: one is a set of tools to gain insight into rsyslog's inner workings while it is running. This includes statistics, but goes beyond them (still, a recent forum post on rsyslog statistics may be an interesting read). This feature is of interest to the community at large, but it is also something I need in order to do some in-depth performance analysis, plus it is a really great debugging helper. As such, I intend not only to provide the glue inside rsyslog, but also to create a full-blown GUI so that the power can actually be used. If nothing comes in the way, this is my top priority for new work (I intended to begin with it during summer time, but then more important things came in its way - but now it is becoming really pressing...).

The next feature I have on my mind is a change to the configuration language, which may also include some core changes. The community rightfully complains that rsyslog's configuration is a real pain. It is extremely hard to configure some of the most advanced features - even I need to think hard about how to create some desired results. This is a result of the growth of rsyslog. When the current config system was invented (some three years ago?), we had a handful of low-power commands. This has dramatically changed. For some time, I intended to replace the config language solely with a scripting language. I no longer believe in that. A full-blown scripting language would be a very desirable enhancement, but the base configuration must be possible without it (this is also a performance issue). Redoing the config language includes untangling some of the inner workings and adding more flexibility. I have been working towards that goal for roughly two and a half months now, and that part went well. Now I need to do the next step. I expect that a new config format requires at least a month, more realistically two, to materialize. But adding more features with the current config system is of limited use, because only "expert experts" could configure them. So while the config is important, it is in the second spot on my todo list, right after the GUI and diagnostics tools.

For the GUI and diagnostics, I expect at least another two months to get to something decent. Adding these numbers, I really cannot tell yet what the next larger features I intend to implement could be. If all goes well, I can think about this in spring.

Also, I am currently quite busy with some other, paid, projects. So the time I can spend on rsyslog at the moment is limited. I devote much of this time to fixing bugs, with a primary goal being to get v5 finally ready for prime time (it looks good, but we are not yet fully there).

Also, I notice that the adoption rate increases. I notice that by a large growth in support requests, both on the mailing list as well as in the forum. This is good news, but the bad news is that there are only a few frequent contributors. So there are a lot of things that I need to take care of myself, and this needs increasingly more time - time that I obviously do not have for bug-fixing or developing new features. To keep things a bit balanced, I have stopped responding to some questions: those where I think a little Google search can turn up results, or those that obviously have a primarily commercial background. I'd like to respond to everything - but unfortunately, I simply do not have the time (if I did, rsyslog development would be totally stalled).

As I said, this is just a snapshot of how things look. Maybe tomorrow a sponsor shows up who changes my todo list considerably (we have had only very few such occurrences, but thankfully we have had some ;)). Even with a sponsor, I am tied up with work for the rest of this year, then I have a little vacation, then some more paid work, so I think I can begin working on larger features mid-January, maybe a bit later. Bottom line: don't take any "schedule" for granted, but I hope you get an overall idea of how things evolve. And: please continue to send in bug reports and feature requests. Feature requests are very important - I use them (and their occurrence rate!) to judge how much demand for a feature there is in the community.

Happy syslogging!

Tuesday, November 24, 2009

rsyslog multithreading

From time to time, I receive questions on how many cores rsyslog can run on in a highly parallel system. Rsyslog is massively multi-threaded, but that does not necessarily mean that each configuration, and even each use case, can actually benefit from it.

The most important thing needed to gain a speedup from parallelism is the ability to break down the workload (this is called "partitioning") and distribute it to a set of threads, which can then work in parallel on each part.

For the partitioning to work well, the workload, and configuration, must be "partitionable". Let me give a counter-example. If you have a single sender and a single action (yes, this sometimes is the case!), there cannot be much parallelism. Such a config looks like this (using imtcp as an example here):

$InputTCPServerRun 10514
*.* /path/to/file

This cannot gain much, because we have one thread for the TCP receiver, one thread for the filtering and one for the output. With the queue engine, we can increase the number of threads that will work on filters in parallel, but these have almost nothing to do in any case. We cannot, however, walk in parallel into the output action, because a) the output plugin interface guarantees that only one thread hits a plugin at one time and b) it wouldn't make much sense here in any case: what would it help if we hit the output twice and then needed to synchronize the file access? Not much...

So the bottom line is that a configuration like the one above is highly sequential in nature, and consequently there is almost no gain from running some of the tasks concurrently. So, out of the box, rsyslog gains speedup from parallel processing in more complex cases, with more complex rules, and many of them.

We are working to provide excellent speedup even for sequential configurations. But this is a long and complex road. For example, in v5 we have now de-coupled message parsing from the receiver thread, resulting in somewhat improved speedup for sequential configs like the one above. Also, we have added batching support in v5, which reduces some overhead involved with multiple threads (and thus reduces the gain we could potentially have). And in late v4 builds we introduced the ability to do double-buffered block i/o for output files, which can considerably reduce i/o overhead for high-end systems and also runs in pipeline mode, squeezing a bit more parallelism out of the sequential job.

So with the newer engines, we have been able to apply a basic processing pipeline that looks like

input -> parse & filter -> generate file data -> write

which can be done in parallel. Of course, the file write is action-specific, but I guess you get the idea. What you need to do, however, is configure all that. And even then, you cannot expect a 4-fold speedup on a quad-core system. I'd say you can be happy if the speedup is around 2, depending on a lot of factors.

To get to higher speedups, the job must be made more parallel. One idea is to spread the input, e.g. run it on four ports, then create four rulesets with ruleset queues, one for each of the inputs. Ideally, to solve the file bottleneck, these should write into four different files (a rough config sketch is shown below). While I did not have the opportunity to test this out in an actual deployment, it should gain a much larger speedup, because now we have four of these pipelines running in parallel, on partitioned data, where there is no need to synchronize between them.
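
Such a setup could look roughly like the following sketch in (legacy) v5 config syntax - ports, ruleset names and file paths are just examples, and I show only two of the four partitions to keep it short:

$ModLoad imtcp

$RuleSet tcp1
$RulesetCreateMainQueue on
*.* /var/log/tcp1.log

$RuleSet tcp2
$RulesetCreateMainQueue on
*.* /var/log/tcp2.log

$InputTCPServerBindRuleset tcp1
$InputTCPServerRun 10514
$InputTCPServerBindRuleset tcp2
$InputTCPServerRun 10515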

Well, almost... The bad news is that the current code base (5.5.0 as of this writing) unfortunately does not yet provide the ability to run the input on more than one thread. So if you have 1000 tcp connections, all of these need to be processed by a single thread (even though they may use different ports, that doesn't matter...). It is not as bad as it sounds, because the input now is *very* quick (remember that the parsing is done concurrently in a different thread [pool!]). But still, it causes some loss of parallel processing where it is not strictly needed. My thinking is that we should either do a "one thread per connection" server (no longer such a big problem on 64-bit machines) or (better, but even more effort) do a thread pool for pulling data from the connections. Unfortunately, I do not have time to tackle that beast, but maybe someone is interested in sponsoring that work (that would be *really* useful)?

As you can see, full speedup by using multiple cores is perfectly doable, but getting the maximum requires a lot of careful thinking. And, of course, I have to admit that the best features are present in the newest releases (somewhat naturally...). Obviously, there is some stability risk involved with them, but on the other hand I have had some very good success reports from some high-end sites; at least one of them has v5 already deployed in large-scale production.

I could only touch on the issue here, but I hope the information is useful. For further reading, I recommend both the doc on queues, as well as my explanation of how messages are processed in rsyslog. These documents are somewhat older and do not cover all details of pipeline processing (which simply did not exist at that time), but I think they will be very useful to read. And, yes, updating them is another thing on my too-long todo list...

Friday, November 20, 2009

rsyslog internal messages

I had an interesting conversation with someone who runs multiple instances of rsyslog on a machine for remote reception only and, for some reason, another syslogd for local messages. The question arose where rsyslog error messages are emitted to.

It was expected that they showed up in the other syslogd. However, that is not the case, and for good reason. So I thought it is good to provide some general advice on how internal messages are emitted.

First of all, internal messages are messages generated by rsyslog itself. The vast majority of them are error messages (like config errors, resource errors, unauthorized connects etc...), but there are also some status-like messages (like rsyslogd startup and shutdown, unexpectedly dropped tcp connections, ...). Traditionally, rsyslog does not make a distinction between status and error messages (we could change that over time, but so far nobody has asked, which means it is not worth the hassle).

Rsyslogd is a syslogd, so all messages it emits internally are syslog messages. For obvious reasons, they use the "syslog" facility. And as all are flagged as error messages, the full priority is "syslog.err". The internal message source is implicitly bound to the default ruleset.

It now depends on how that ruleset is defined where these messages show up. I strongly encourage everyone to include a rule that logs these messages. If there are, e.g., config issues, they can be easily solved by looking at the emitted error message. But if you do not have them, it can take you ages to sort out what is wrong.

So you should always make sure that "syslog.err" (or probably better "syslog.*") is logged somewhere.
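
A minimal rule that does this could look as follows (the file path, of course, is just an example):

syslog.* /var/log/rsyslog-internal.log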

If you now would like to use another syslogd to log these messages, but not rsyslog itself, you do what you usually do in this situation: first of all, make sure that no local rule logs syslog.* messages. Then, include a rule that forwards syslog.* to the recipient that you want to receive it. You have the full flexibility of the rule engine at hand to limit or reformat those messages. Note that an elegant solution that does both is to include the following two lines at the top of rsyslog.conf (I assume you use UDP forwarding to another syslogd running on the same host machine):


syslog.* @127.0.0.1
& ~

Note that the tilde character is the discard action, and the ampersand means that the previous selector is applied again - so these two lines forward the internal messages and then discard them, preventing any further local processing.

Thursday, November 19, 2009

disk assisted mode performance in rsyslog

I would just like to clarify one thing: rsyslog supports disk-assisted mode, in which case messages are written to disk if the in-memory queue becomes full.

However, it is generally bad if the system needs to go to the disk during normal operations. Disk assistance is primarily meant for things like output targets going offline. If the disk is needed during normal operations, one is probably lost. In the v3 and v4 engines, when disk mode is enabled, the in-memory worker threads are shut down. So all processing then takes place via the disk. That means processing will be slower than before. So if the system was incapable of handling the workload when running on a pure in-memory queue, it will definitely be incapable of handling it in disk mode.

Note that things are different in recent v5 engines: starting with 5.3.5, the disk worker runs concurrently with the in-memory workers, and as such performance is similar to what it was in non-disk mode. Still, overall processing is probably slower, so going to disk is not a cure for a system that cannot handle the overall workload. In v5, however, it may be a way to handle excess bursts.
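
For reference, a disk-assisted main message queue is configured by giving an in-memory queue a file name, which enables disk assistance once the watermarks are hit. A rough sketch (the values are just examples, not recommendations):

$MainMsgQueueType LinkedList
$MainMsgQueueFileName mainq
$MainMsgQueueHighWatermark 8000
$MainMsgQueueLowWatermark 2000
$MainMsgQueueSaveOnShutdown on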

Priorities for rsyslog Work

I receive requests for support and code additions to rsyslog every day and I am grateful so many people express their interest and see rsyslog as a useful tool.

The bottom line, unfortunately, is that I can not do everything and I also can not do many things as quickly as I would like to. Also, I have to admit, there are some things that I do not like to do, at least as a cost-free activity. The typical example is work that benefits only a single or small subset of commercial organizations.

I suggest that you read a bit about my philosophy on how open source projects are "paid". Note that "payment" includes far more things than money, for example good suggestions and bug reports.

I tend to follow this priority scheme, with some variations:

  1. security-related issues
  2. serious problems in the current release
  3. serious problems in previous releases
  4. paid work
  5. things useful to the community at large
  6. things useful to smaller parts of the community (with descending priority)
  7. support for non-commercial folks
  8. bugs in older releases already fixed in newer ones
  9. activities aiding only commercial organizations

The term "things useful" is deliberately vague. Among others, it involves fixing bugs, adding new features and following support requests. However, support requests usually fall only in that category if either a bug is involved or I can gain some more insight into things that need to be changed (like better doc, general user needs, etc...).

Note that, in line with my philosophy, I try to avoid doing work for free that benefits only a commercial party, but neither me personally nor the project. If you find this harsh, read my in-depth explanation of that philosophy.

Paying for Open Source Projects...

I selected the word "paying" in this post's title deliberately. Of course, open source software usually is (and should be) cost-free to all interested parties, but that does not mean it comes without any price tag whatsoever.

As an open source author I need to admit that it is virtually impossible to give away everything without any price. "Price", in my perception, does not necessarily mean "money". There are many benefits you may gain from working on software, and money is only one of them.

But first of all, let me re-iterate the FSF's "freedom vs. free beer" argument, in which I fully believe:
"Free software" is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech," not as in "free beer."

This is very true. In my personal mind, I would really love to give away any work I create to those that need it. But that thought involves some subtle issues. One prominent problem is that other people may think differently. For example, my landlord doesn't like this idea. Nor does my bakery. Not even the computer manufacturer on whose systems I develop my software! What a shame! So if I gave away everything for free, I would, thanks to the social security system, probably not die, but I would definitely not have a machine to create those things I would like to provide for free.

So it looks like I need to make a compromise, give away certain things and charge for others. One approach would be to quit computing as a profession and become a gardener instead. In my spare time I could then program and give away everything for free. The bottom line is that I could program much less than I can currently. Also, I prefer programming over gardening. So this does not look like a good approach - neither for me personally (the then-unhappy gardener) nor for society at large (which can no longer gain the full benefit of my work: believe me, I am far more productive as a programmer than as a gardener...).

So this seems to be the wrong approach. It naturally follows that I need to charge for some of the computing work I do.

Then, look at my motivation as an open source developer. I'd like to make the world a little bit of a better place by providing useful tools. And, if I am honest, I may even like to get a little bit of fame as a recognized open source developer. I guess that motivates many, but few admit to it ;). This hits a sweet spot of "payment": being recognized feels good and thus it keeps me motivated. Seeing the project grow and spread also motivates me. Projects where there is no feedback and which do not grow are usually quickly abandoned. Why? Because not even the most basic "payment" is provided in exchange for the work done.

So a very important form of "payment" to open source authors, at least in my point of view, are contributions to the project, help in spreading news about it, and (very, very valuable) good bug reports - everything that helps push a project and make it evolve. Of course, contributions in any form are also happily accepted (be it software, hardware, books, ..., and of course money). Money is not evil. It pays the electricity to run my machine, among other things.

Taking all these arguments together, there is no ideal world where I can give away everything and receive in exchange whatever I need (and, as I barely remember, experiments in trying this have failed miserably...).

With that on my mind, I begin to divide the world into "friends" and "foes". Friends are those that provide me with some form of "payment", that is, anything that is useful for me. Good examples are the folks that write the open source software *I* use (aha, this is cyclic!), folks that provide good bug reports and try out development versions, etc. Any activity that I can also use to my benefit makes you my friend.

Then, there are "foes". That word probably is too harsh and maybe should be re-phrased as "non-friends". But the term and the idea are well known.

If you are not my friend, you do not contribute anything that I can use for my benefit. This doesn't mean you are a bad guy. If you and I do not have anything in common, why should you do something that benefits me? There are far more people that I never provided any benefit to than there are people where I did. I guess that is true for almost all of us, except a few outstanding people (who then usually receive admiration as a sort of "payment").

But if you are not my friend, you should not expect me to do anything for free for you. Envision a stranger coming to your home and asking if you would like to help him build his house. I guess you will be astonished and probably ask "Why should I do that?". Now assume the sole answer is "Because that is good for me, the stranger - but you need to bring your own clothes and tools and need to pay for the gas to come to my home". Would you be willing to help that guy out? I guess the answer would be "no" in almost all cases.

So why should I as an open source developer create software for, or otherwise help, a non-friend? Why am I supposed to say "yes, of course" if a stranger asks me "Can you implement this and that, but you need to pay for your own hardware and the electricity used and also for..."? The answer is: I am not! So don't silently expect me to do that.

Of course, the question itself may have made you my friend. How come? Simple: the idea you propose may be a very useful idea for my project. If it gets implemented, it will help many of my currently existing friends and it will eventually help spread the project. So by providing the idea itself, you did me a big favor, which one may consider a form of "payment". Consequently, I often implement things asked for by complete strangers. And I often try to help out complete strangers on the mailing list and on other support channels. Here, I often learn a real lot about what is good and bad about my projects. This is a very valuable form of "payment" for me.

HOWEVER, and this is my personal limit, whenever I am asked to do something for free, I evaluate *my* benefit in doing so. Of course, this includes the benefit to the project and the benefit to the community at large, but this all goes into the picture of "my" benefit as the sum of all that.

So if a complete stranger asks me to do something, I check for immediate benefits in doing that. Sometimes, there are cases where I can see a benefit, but only to that stranger. Usually, these are things corporate guys need, and they are very special and non-generic. If there is no benefit at all, I simply do not look any further. Of course, the proper solution here is that those folks can actually pay money to make me implement whatever they need. The logic behind this is that when they pay money, they help fund activities that also benefit the project at large. But if they are corporate guys and they do not get any money approved for what they (think they) need, they don't really need it at all! Because if it were really useful for their corporation, they would have received the money grant (corporations are very good at making these trade-offs, though they occasionally fail ;)). So in short, the money even acts as a filter that prevents me from spending time on things that nobody really needs!

If a friend comes along and asks me to do something, I still need to evaluate the request. But I am much more likely to implement the functionality requested (it's a game of "give and take"). Of course, I need to evaluate the overall priority for my project here, too. But friends definitely receive a priority boost if at all possible. And I think this is only fair.

In general, I favor requests that are useful to the community at large over those that are only useful to a small subset of it. I tend not to implement, without any form of "hard" payment (hardware, money, a nice vacation on Hawaii... ;)), anything that is only useful to a single commercial organization. For example, why should I provide free services to a company that in turn expects me to pay, e.g., my utility bill? If you do not give your services to me for free, don't expect me to give my time for free to just your benefit (think about the "stranger asking for my help building his house" analogy).

My thoughts may sound very materialistic, but in fact they just describe what I think is fair in the non-perfect world we live in. Remember that most non-profit organizations are my friends, because they offer useful services to "me" (as part of the community). And think about my thoughts in the intro of this blog post about my inability to do any useful work at all if I did NOT have a somewhat material point of view. So, honestly, I think my philosophy here is not actually "materialistic" but rather a result of how life is...

Edit: it may also be useful to have a look at my blog post "work, friends and personality", which looks at a very similar issue from a slightly different angle.

The philosophy also influences priority decisions in my open source projects, as outlined for example in "rsyslog work priorities".

Monday, November 16, 2009

ACLs, imudp and accepting messages

I am working again on moving the DNS name resolution outside of the input thread of those sources where this is potentially time-consuming and affecting message acceptance rates. As it turned out, currently imudp seems to be the only case.

While this is potentially easy to do, a problem is ACLs ($AllowedSender) which use host names rather than IP addresses. In order to check these ACLs, we need to do a DNS lookup. Especially in the case of UDP, such a lookup may actually cause message loss and thus may be abused by an attacker to cause a certain degree of denial of service (which also points out that these types of ACLs are not really a good idea, even though practice demands them).

In the light of this, I will now do something that sounds strange at first: I will always accept messages that require DNS lookups, enqueue them into the main queue, and do the name resolution AND the final name-based ACL check only on the queue consumer side. Please note that this will be done BEFORE the message content is parsed, so there is no chance that buffer overflow attacks can be carried out from non-authenticated hosts. The core idea is to move the lengthy, potentially message-loss-causing code away from the input thread. The only questionable effect I can currently see is that queue space is potentially taken up by messages which will immediately be discarded and should not be there in the first place. At the extreme end, that could lead to loss of valid messages. But on the other hand, valid messages are more likely to be lost due to the DNS query overhead if I do the ACL check directly in the input thread.

If anyone has an argument against this approach please let me know.

Friday, November 06, 2009

A solution for invalid syslog message formats...

In syslog, we traditionally have a myriad of message formats, causing lots of trouble in real-world deployments. There are a number of industry efforts underway trying to find a common format. To me, it currently does not look like any of them has received the necessary momentum to become "the" dominating standard, so it looks like we need to live with various representations of the same information for some more time.

The past two weeks, I have begun to make additions to rsyslog that hopefully will help solve this unfortunate situation. I know that I have no real cure to offer, but at least baby steps towards one. I have introduced so-called message parsers, which can be utilized to convert malformed messages into rsyslog's well-formed internal structure.

Why is it not a solution? Because what I really introduced was actually an interface, which permits writing different parsers for the myriad of devices. I have not provided a generic solution to do that, so the individual parsers need to be written. And secondly, I have not yet defined any more standard properties than those specified in the recent IETF syslog RFC series, most importantly RFC5424.

So why do I hope this will lead to a long-term solution?
First of all, there are some hopes that the IETF effort will bring more standard items. Also, we could embed other specifications within the RFC5424 framework, so this could become the lingua franca of syslog message content over time. And secondly, I hope that rsyslog's popularity will help in getting parsers at least for core RFC5424 information objects, which would be the basis for everything else. Now we have the capability to add custom parsers, and we have an interface that third parties can develop against (and do so with relative ease).
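
To illustrate the general idea - and really only the idea, as this hypothetical C sketch is NOT rsyslog's actual parser module interface - a chain of message parsers could conceptually work like this: each parser looks at the raw message and either claims it or passes it on to the next one.

#include <stdio.h>
#include <string.h>

/* hypothetical parser chain sketch - NOT the actual rsyslog API */
typedef struct {
    const char *name;
    int (*parse)(const char *raw); /* 0 = message parsed, 1 = "not mine" */
} parser_t;

static int parseRFC5424(const char *raw)
{
    if(raw[0] != '<')
        return 1; /* no PRI part --> not well-formed, pass it on */
    printf("RFC5424 parser handled: %s\n", raw);
    return 0;
}

static int parseVendor(const char *raw)
{
    if(strstr(raw, "%SYS-") == NULL)
        return 1; /* does not look like this device's format */
    printf("vendor parser normalized: %s\n", raw);
    return 0;
}

int main(void)
{
    /* device-specific parsers first, the generic one as fallback */
    parser_t chain[] = { { "vendor", parseVendor },
                         { "rfc5424", parseRFC5424 } };
    const char *msgs[] = {
        "<34>1 2009-11-06T12:00:00Z host app - - - an RFC5424 message",
        "00:01: %SYS-5-CONFIG_I: Configured from console" };
    size_t m, p;

    for(m = 0 ; m < 2 ; ++m)
        for(p = 0 ; p < 2 ; ++p)
            if(chain[p].parse(msgs[m]) == 0)
                break; /* first parser that claims the message wins */
    return 0;
}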

All in all, I think this development is a huge step in the right direction. The rest will be seen by history ;). To me, the probably most interesting question is whether we will actually attract third-party developers. If there are any, I'll definitely help get them going with the rsyslog API.

Tuesday, October 27, 2009

next round of performance enhancement in rsyslog

Today, I made a very important change to rsyslog: rulesets can now have their own "main" queue. This doesn't sound too exciting, but it can offer dramatic performance improvements.

When rsyslog was initially created, it followed the idea that messages must be processed in the order they were received. To facilitate that, all inputs submitted messages to a single main message queue, off which the processing took place. So messages stayed in reception order. ... Well, actually they stayed in "enqueued order", because it depended on the OS scheduler whether input modules could really enqueue in the order they received messages. If, for example, input A received two messages, but was preempted by module B's message reception, B's data could hit the queue earlier than A's. As rsyslog supported more and more concurrency, the order of messages became ever less important. The real cure for ordered delivery is to look at high-precision timestamps and build the sort order based on them (in the external log analyzer/viewer).

So, in essence, reception order never has worked well, and the requirement to try to keep it has long been dropped. That also removed one important reason for the single main message queue. Still, it is convenient to have a single queue, as its parameters can be set once and for all.

But a single queue limits concurrency. In the parallel processing world, we try to partition the input data as much as possible so that the processing elements can independently work on the data partitions. All data received by a single input is a natural data partition. But the single main queue merged all these partitions again, and caused performance bottlenecks via lock contention. "Lock contention", in simple words, means that threads needed to wait for exclusive access to the queue.

This has now been solved. Today, I created the ability to define ruleset-specific queues. In rsyslog, the user can decide which ruleset is bound to which inputs. For a highly parallel setup, each input should have its own ruleset, and each ruleset should define its own "main" queue (a minimal sketch is shown below). In that setting, inputs no longer block each other during queue access. On a busy system with many inputs, the results can be dramatic. And, more as a side-effect, each ruleset is now processed by its dedicated rule processing thread, totally independent from the others.
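
In (legacy) config statements, the basic building block looks roughly like this - ruleset name, file path and port are just examples:

$ModLoad imtcp

$RuleSet remote
$RulesetCreateMainQueue on
*.* /var/log/remote.log

$InputTCPServerBindRuleset remote
$InputTCPServerRun 10514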

This design offers a lot of flexibility. But that is not enough. The next step I plan to do is to create the ability to submit a message to a different ruleset during processing. That way, hierarchies of rulesets can be created, and these rulesets can even be executed via separate thread pools, with different queue parameters and in full concurrency. And the best thing is that I currently think it will not be very hard to create the missing glue.

The only really bad thing is that the current configuration language is really not well suited to handle that complexity ("really not" is not a typo for "not really"...). But I have no alternative but to take this route again, until I finally find time to create a new config language. The only good thing is that I get a better and better understanding of what this new language must be able to do, and it looks like my initial thoughts were not up to what is now required...

Monday, October 12, 2009

Canonical Paper on RSyslog

I just found out that Canonical (the Company behind Ubuntu) did a nice paper on rsyslog, which also explains why Ubuntu chooses rsyslog as its default syslogd.

It is interesting to see that the paper is well-written and well-researched, but rsyslog has also evolved while the paper was being written. So in fact, rsyslog offers even more features than described in the paper.

And, obviously, I am glad to see Ubuntu move to rsyslog as well.

Tuesday, October 06, 2009

Will Microsoft remove the Windows Software RAID?

These days, hardware RAIDs are quite inexpensive. So everybody is moving towards them. However, all mainstream operating systems still support software RAIDs, maybe even for a good reason: an OS-controlled software RAID may be a bit better to optimize under some circumstances. Anyhow, Microsoft seems to be moving away from that feature set:

As you probably know, Adiscon provides premier Windows event log processing solutions. Some of our customers use the products for example to monitor if their RAIDs break. And some of them use software RAIDs. So we wrote a nice article on how to monitor RAID health using the Windows Event Log.

Since the days of Windows NT 3.1 (or was it 3.5?), Windows has logged an error message if the RAID failed. Actually, I'd consider this a necessary functionality for any working RAID solution. Why? Well, if the RAID solution works, you will not notice that a disk has died. So if nobody tells you, you'll continue to use the system as usual, not suspecting anything bad. So guess what - at some time the next disk fails, and then (assuming the usual setup) you'll be "notified" by the disk system, with those nice unrecoverable i/o errors. So without any health alerts, a RAID system is virtually useless.

We learned that Windows Server 2008's RAID system no longer issues these alerts! (aka "is useless" ;)). So a long while ago, we reported this to Microsoft. The bug went through several stages of escalation. A few minutes ago, my co-worker got a call from the frontline Microsoft tech, who told him that, regrettably, Microsoft won't fix this issue. According to his words, Microsoft has confirmed this to be a bug, and the group responsible for ftdisk has confirmed that it should be fixed, but someone more powerful up in the hierarchy has opted not to do that. Boom. The tech tried to persuade us to switch to a hardware RAID, but actually that was not the point of the support call ;)

What does that mean? To me, it looks like Microsoft is actually moving away from providing software RAID. How else can one explain that there is no interest in providing any error message at all when something goes wrong with the RAID? Given the wide availability of hardware RAIDs (which, btw, provide proper alerting), this step does not look illogical. But do they really want to leave Linux as the only widely deployed mainstream operating system that provides software RAID? Or do they intend to keep it on the feature sheet, but provide a dysfunctional solution like in Windows Server 2008?

Let's stay tuned and see what the future brings...

Monday, October 05, 2009

Another note on hard-to-find-bugs...

Before I began to write this blog post, I realized how long it had been since I last wrote anything! I promise to begin to write in a more timely manner, but the past weeks were merely a consolidation phase, ironing out bugs from the new releases.

I'd like to elaborate on one of these, one that really drove me crazy the past days. The problem was that omfile's directory creation mode was sometimes set to zero (well, almost always in some configurations). What began as a minor nit turned into a real nightmare ;)

The issue was that the variable fDirCreateMode was always set to zero, except if it was written to, or simply displayed, at the start of module initialization. That sounded strange, but even stranger seemed that by moving the variable definition around in the source code (and thus presumably changing its memory location), nothing changed. So I came to a point where I used this code as a patch:

omfile.c from rsyslog git

Look at line 769. With that seemingly unrelated write, the variable stayed as expected. However, if I moved the write to a function, nothing worked again. Strange... After committing the patch, testing showed that the directory permissions now worked well, BUT the file create mode now misbehaved in the same way.

I was stunned - and speechless. What followed, were intense debugging sessions. I ended up finding the commit that introduced the problem, but still could not see why that commit would affect anything. After hours of debugging, I ended up with a stripped-down and almost codeless omfile, which still had the same problem. And it appeared and disappeared almost at random as code lines were moved in and out.

I once again checked the git history, and then I noticed that a few commits down the line, I had introduced another config variable for the i/o buffer size. Now I finally got the idea. The size-type config directives were introduced for file size restrictions. Thus, a regular 32-bit integer is not sufficiently large for them. Consequently, they take pointers to 64-bit integers! But, of course, I had provided only a pointer to a 32-bit int, and thus the config handler overwrote another 32 bits that happened to lie right next to the address I provided.
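
A stripped-down illustration of this class of bug (this is NOT the actual rsyslog code, just a self-contained sketch) looks like this:

#include <stdio.h>
#include <stdint.h>

/* a "config handler" that stores a size value - it expects to write
 * through a pointer to a 64-bit integer */
static void setSizeValue(int64_t *pVal, int64_t iNew)
{
    *pVal = iNew; /* writes 8 bytes - but the caller may only own 4! */
}

int main(void)
{
    /* two adjacent 32-bit "config variables", as in omfile */
    struct {
        int32_t iIOBufSize;     /* registered with the size handler */
        int32_t fDirCreateMode; /* innocent neighbor */
    } cfg = { 0, 0700 };

    /* the bug: a pointer to a 32-bit int is passed where a pointer to
     * a 64-bit int is expected (undefined behavior!) */
    setSizeValue((int64_t*) &cfg.iIOBufSize, 4096);

    /* on a typical little-endian build, the upper 32 bits of that
     * write have just zeroed fDirCreateMode */
    printf("fDirCreateMode is now 0%o\n", (unsigned) cfg.fDirCreateMode);
    return 0;
}

Exactly which neighboring variable gets clobbered depends on how the compiler lays out (and re-orders) the variables - which is why the symptom moved around with seemingly unrelated code changes.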

This was clearly an error. But could it explain the behavior I saw? Not really... But the problem went away once I had corrected the issue. So I began to suspect that the compiler had re-ordered variable memory assignments in order to optimize access to them (maybe to get a better cache hit rate or whatever else). But hadn't I compiled with -O0, so that no optimization should take place? I checked, and I realized that due to a glitch in the lab setup, optimization actually was on, and not turned off! So now I think I can explain the behavior, and theory as well as practice go hand in hand.

Really? What about the write and the debug print that made everything work? I guess these changes triggered some compiler optimization, and thus the memory assignment was changed, so the "extra 32 bits" pointed to some other variable. That also explains why the file creation mode was affected by my change, as well as why the bug reacted quite randomly to my test code changes.

So it looks like I finally resolved this issue.

Lessons learned? Re-check your lab environment, even if it always worked well before. Be careful with assumptions about memory layout, as the optimizer seems to heavily reorder variables, and even single statements and statement sequences seem to make a big difference. I knew the compiler reorders things, but I did not have this clearly enough on my mind to become skeptical about my lab setup.

And, as always, some assumptions limit your ability to really diagnose what goes on... May this be a reminder not only for me (I wonder how long it will last) but for others as well (thus I thought a blog post makes sense ;)).

Friday, July 24, 2009

The code that put people onto the moon...

... was just recently published by NASA and is now available via Google code. Google has a nice blog post on it.

Of course, reading the "old" assembly code is probably a bit hard even for today's programmers used to high-level languages, and even more so for non-programmers. I still think these are excellent documents, and at least the comments speak to folks with technical interest (and some are really explicit ;)).

While digging through this material, I found a very interesting and insightful article on the Lunar Module Guidance Computer by Don Eyles, who was deeply involved with its programming. This is a long article, but it is a rewarding read. It not only offers a lot of insight into how challenging it was to fly with those days' hardware (every cell phone has *far* more capability today, maybe even washing machines...). The article also explains, in plain words, some concepts that were created for Apollo and influence today's programs as well.

Most importantly, I think that the Apollo program not only showed that mankind can leave earth. It also is probably the first instance where computing machinery was absolutely vital to achieve a goal. In the Apollo days, some overrides were possible, and obviously needed. Today, we are betting our lives more and more on technology, and often without a real alternative. Having overrides would sometimes be useful, too, but we seem to partially forget that ;)

But enough said: enjoy these documents!

Monday, July 13, 2009

rsyslog - what's next?

I've not blogged so much the recent weeks. I have had my nose deep down inside rsyslog code, adding new features (like an automatic zip file writer or the ability to spoof/forge UDP sender addresses) and enhancing performance (where I think I scored some major points ;)).

So it is time for an update. Where is rsyslog heading? With the many changes I made in the past two to three months, I think it is very important to let the code base stabilize. So I would prefer not to touch too much existing code for a while. Also, it is summer time and my summer vacation is not so far away. Another good argument that it is probably not the best time for big code changes (or do I like to break things before I go away...? ;)).

So looking at what to do next, I would like to center myself on improving the tool set that helps create rsyslog. That doesn't mean direct improvements to the actual syslogd, but rather to tools that help build and maintain it. The first major effort in this regard was adding an automated testbench. If you look at v3, I think it has around four automated tests (previous versions had none). With v5, we have over 20 subtests, each of which tests various cases, so in total we currently have around one hundred test cases automatically covered.

When I started with this in v3, it was a major effort, even though the number of covered cases was rather small. But getting started with a testbench meant I needed to evaluate ways to automate the tests and create them in the best possible way for rsyslog (which also means convenient during the development process). At that point, I tried a lot of things and finally came up with the current set of tests. The initial testbench covered only a very limited set of use cases.

Since then, it has greatly improved, but there are still a lot of uncovered areas. But I now regularly add new tests, most often when I implement new features, change existing ones or hunt bugs. The process is now well understood and many tests can be added with relative ease (but others not, I have some testcases in the queue that require notable extensions to my current system plus a bit ... of the different toolset I will be talking about soon...).

Initially, I was rather skeptical whether the testbench would really pay off, especially after I saw the initial effort required (which I by far underestimated). But in the meantime I am convinced. Especially the past couple of months have shown that the automated tests both increase development productivity (by reducing the number of manual tests that need to be done and spotting regressions early) as well as code quality (detecting regressions that otherwise would have been overlooked).

Now I am in a similar situation in regard to performance testing as I was in regard to correctness testing: everything is done manually and with very low-level tools. Still, I was able to make good progress without tools. But I hope that tools will be as useful for performance testing as they were for bug hunting. Most importantly, my current performance improvement testing covers only limited (though highly relevant) scenarios: those where getting sufficiently reliable numbers is possible with the limited capabilities I have. Most importantly, this means that almost all testing so far has been done with plain tcp syslog. While this still enables checking the core engine's performance, it does not offer a clear view of e.g. UDP performance (which I really do not have now). Also, the examples are artificial, and it would be useful to get more of a real-world performance benchmark.

Finally, performance benchmarks stress the engine, especially its multi-threading capabilities. So performance testing is also a good way to uncover those nasty threading bugs that one otherwise only detects when systems fail in production (and nobody then knows why...). So I consider decent performance tests also to be a plus for code quality. I even consider them very important for stabilizing e.g. the v5 engine, which so far has received only limited attention in practice. It looks like almost nobody ever tried it. I know because the initial v5 release had such a big memory leak that any serious tester would have needed to come up and complain very quickly. A lack of test deployments makes it harder to mature the engine. I think that good stress tests (which all have a performance connotation) will help to somewhat mitigate this problem. As a side-note, I have uncovered many of those bugs that I fixed during my manual performance testing. This seems to prove the point.

So I am more or less convinced (if nothing more urgent shows up) to spend some time implementing performance tools and tests for rsyslog. I would also like to include a somewhat older idea of a "diagnostic front-end" that would be able to pull (and maybe modify) some of rsyslog's parameters. I'd expect that as a side-activity I'll also measure (at a minimum) or improve (preferably) performance in a couple of areas (UDP reception performance is on top of the list). But improvements will only come after the basic tools have been written.

As with the testbench, that will mean that new features and enhancements will probably stall a bit in the coming weeks. This even more so as I do not intend to write the front-end in C (I personally do not consider C to be the language of choice for non-performance-critical interactive programs, especially looking at some of the portability issues - but YMMV...). So I will try to approach this with a Java app. I have to admit that I learned Java 8 to 10 years ago and never programmed much in it, so that will probably mean I'll need to re-learn the language. But as I don't consider this GUI to be something extremely critical, I don't see any issues with me as a Java freshman doing it.

As a side-note, I should probably mention that I am also involved in the phpLogCon project. So far, I am only part of the design team, but I have a number of really cool visualization features on my personal wish-list. If I ever get time to work on that (I hope for next year), I will probably need to do that in Java, so it doesn't hurt to practice on a less demanding project. In that sense, I also hope to be able to set the stage for some future cool technology while I work on a current demand ;). It would also be interesting to visualize some of the performance counters, but that's another story ;)

All in all, getting an interactive troubleshooting and analysis front-end has big potential, not only for testing but also for deployments and finding configuration bugs (which become more and more of an issue with the increasing complexity of the configuration). One could also envision that it could include a graphical configuration builder ... as well as tools for setting up all those TLS certs. I don't think I can do all of this now or in the next quarter. But I think it is the right time now to begin working on a foundation that offers yet more big potential. Especially as it also serves the urgent need to get better testing for the engine, plus the desire to further improve its performance (my goal is no less than to provide the by-far fastest AND most reliable syslogd on this planet ;)).

Well, that's it for now. I hope you like the idea of an additional performance-centric toolset (which of course also requires engine enhancements) and a GUI as much as I do. If you have comments or concerns, please let me know. I sincerely hope to begin a new round of capability enhancements with this move.

Monday, June 15, 2009

high-performance, low-precision time API under linux?

This time, I raise a question in my blog. Suggestions, tips and full answers are very welcome.

In rsyslog, there are various situations where I only need low-resolution timestamps. By low resolution, I mean precision within a second. Of course, this is provided by the time() API. However, time() is very slow - far too slow for many things I do in rsyslog. So far, I have been able to work around this problem by doing a time() call only every n-th time in places where I run in tight loops and know that this will not bring me outside of my 1-second window (well, to be precise, this is at least very unlikely and thus acceptable).
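
In code, that workaround looks roughly like this minimal sketch (names and the refresh interval are made up for illustration; this is not the actual rsyslog code):

#include <stdio.h>
#include <time.h>

#define TIME_REFRESH_INTERVAL 100

int main(void)
{
	time_t cachedTime;
	long i;

	time(&cachedTime); /* one full-cost call up front */
	for(i = 0 ; i < 1000000 ; ++i) {
		if(i % TIME_REFRESH_INTERVAL == 0)
			time(&cachedTime); /* costly call, amortized over N iterations */
		/* ... process one message, stamping it with cachedTime; the error
		 * stays within the 1-second window as long as messages arrive
		 * sufficiently quickly ... */
	}
	printf("last cached time: %ld\n", (long) cachedTime);
	return 0;
}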

However, this approach does not work for all the work I am doing. Now I am facing the challenge once again, but this time in an area where the "query only every n-th time" approach does not work. I need the time in order to schedule asynchronous activities (like writing so-far unwritten buffers to disk). With them, there is no tight loop that provides me with some sense of timing, and so I simply do not know if half a second or half an hour has elapsed between calls - except when I do one of these costly time() calls.

A good work-around would be to define my own interval timer, waking me up e.g. every second. So I would not need absolute time, but could do things based on these timer ticks. However, there is a lot of evil in this approach, too. Most importantly: this means rsyslogd will be active whenever the system is up, and running on a tick will prevent the operating system from switching the CPU to power-saving modes. So this option looks very dirty, too.

So what to do now? Is there any (decently portable) way to get a second-resolution current timestamp (or a tick counter) without actually running on a tick?

If I don't find a better solution, I'll probably be forced to run rsyslogd on a tick, which would not be a good thing from a power consumption point of view.

As I already said, feedback is greatly appreciated...

Edit: in case my description was a bit unclear: it is not so important that the timestamp is of low resolution. Of course, I prefer higher resolution, but I would be OK with lower resolution if that is faster.

The problem with time() and gettimeofday() is that they are quite slow. As an example, I can only do around 250,000 time()/gettimeofday() calls per second on my current development system. So each API call takes around 4 microseconds on that system (1s / 250,000 = 4µs). While this may not sound like much, it adds considerable runtime to each message being processed - especially if multiple calls are required thanks to the modular structure.

I have also thought about a single "lowres system time getter" inside rsyslog. However, that brings up problems with multi-threading. If one would like to be on the safe side, its entry points need to be guarded by mutexes, another inherently slow operation (depending on circumstances, the overhead can be even worse than time()). With atomic operations, things may improve. But even then, we run into the issue that we do not know if the last call was half a second or half an hour ago...

Another edit:
This is a recording from a basic test I did on one lab system:

[rgerhards@rf10up tests]# cat timecaller.c
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>

int main(int argc, char* argv[])
{
	time_t tt;
	struct timeval tp;
	int i;

	for(i = 0 ; i < atoi(argv[1]) ; ++i) {
		// time(&tt);
		gettimeofday(&tp, NULL);
	}
}
[rgerhards@rf10up tests]# cc timecaller.c
[rgerhards@rf10up tests]# time ./a.out 100000

real 0m0.309s
user 0m0.004s
sys 0m0.294s

The runtime for the time() call is roughly equivalent (especially given the limited precision of the instrumentation). Please also note that we identified the slowness of the time() calls in autumn 2008, when doing performance optimization with the help of David Lang. David was the first to point to the time-consuming time() calls in strace. Reducing them made quite a difference.

Since then, I try to avoid time() calls at all costs.

Friday, May 29, 2009

introducing rsyslog v5

A new v5 version of rsyslog will be released today. Originally, I did not plan to start the v5 version before the end of the year (2009). But then we received sponsorship to enhance queue performance. And then we saw that an audit-grade queue subsystem was needed (audit-grade means that no message is ever lost, not even in fatal failure cases like sudden power loss).

Especially the audit-grade queue subsystem resulted in very large design changes to the queue engine. Their magnitude is so large that I assume we need some time to stabilize it. Thus, I have decided to start a new v5 branch, which will feature the redesigned queue engine.

When we introduced the queue engine in early 2008 (in rsyslog v3), it took roughly three to five months until it got decently stable. With the magnitude of changes we have done now, it will probably take some time again. It depends a bit on the actual feedback we receive from practice. Also, this time I have added lots of automated tests, so a lot of bugs should already have been caught. Also, during the next weeks I will focus on actual deployment scenarios, rather than things that theoretically may happen (the testbench covers many of those). So, all in all, I expect that the new queue engine will become production-ready faster than the v3 engine did.

Still, I think it is desirable to create a new major version branch for this change. So here we are, at v5. I will continue to develop functionality that does not necessarily need the new queue engine inside v4-devel. That way, we will have this functionality available both with the proven queue engine as well as with the new experimental one. Note that I cannot do this with a stable branch: by definition, stable branches never receive enhancements (as that would potentially destabilize the branch). So, for the time being and probably a couple of months, we will have two development branches: the v4 as well as the v5 branch. With that, v5 will focus on the new queue engine, plus all the other additions that are done in v4.

Tuesday, May 19, 2009

rsyslog queue enhancements - status report

I thought I post a few thoughts about how far the rsyslog queue enhancements have evolved.

We started with the goal to increase performance, especially for database outputs. As part of that endeavor, we designed and implemented message batches as the new processing entity. This approach was suggested by David Lang, who also offered very valuable feedback, suggestions and reviews of the relevant papers (not to mention actual testing) during the process. Then, we came to the conclusion that we need to have a truly ultra-reliable queue - one that does not even lose messages in case of a sudden fatal failure (like a power failure without a UPS - or with a failing UPS!). That led to further redesign and a lot of design work. All of this is very exciting.

Since last Friday, I have now worked on the actual code. I have now updated the queue, the queue storage drivers and action processing. Most importantly, the rsyslog testbench does once again run successfully, even in DA queue mode. There are still a couple of things that need to be looked at, but I think most of the bulk work is done. What now follows is careful examination of the open issues plus a LOT more testing.

The testbench has improved much in the past three months, but it is still far from covering even the most important code areas. Especially the various queueing scenarios are not very well covered by it, mostly because it is rather complex to do so. Anyhow, I will now try not to do so many ad-hoc manual tests, but rather see that I can create more automated tests. While this is a lot more work, even the current testbench has proven to be extremely valuable during this major code change effort (which, let me re-iterate, is far from being fully completed). Without it, it would have been much harder to find those bugs that came up during the testbench run. I think that the time I invested in it has already paid off.

Let me end with a list of things I need to look at. That will at least help me keep focused and let you know what is extremely weak right now:
  • more tests
  • so far, the last batch is not freed until at least one more message comes in (permit DeleteProcessedBatch() to be called de-coupled)
  • cancel processing cleanup, decision if we should still support cancel processing entry points
  • configured discarding of messages on queue-full condition [at least add extra nElem counter]
  • make output actions support message-permanent failures (at least PostgreSQL output plugin) [also think about test cases for this]
  • double-check of action and action unit state processing
  • persisting of messages from memory queues during shutdown (testing)
  • Think about a new way of handling iDeqSlowdown (maybe during batch processing?)

Wednesday, May 13, 2009

ultra-reliable queueing in rsyslog

As part of the ongoing mailing list discussion on ultra-reliable queueing in rsyslog, I'd like to create another blog post from the discussion's content (again, I hope this reference will be handy in the future).

The key point with ultra-reliable queues is that no message can be lost once it has been enqueued. In the current (v2, v3, v4 <= 4.1.2) releases of rsyslog, this is ensured as long as the system is guarded against a sudden loss of power (or a similar disaster) - and even then, all but the last messages dequeued are safe.

To make queue operations ultra-reliable in that case, the queue needs to be run as a pure disk queue and a checkpoint interval of one needs to be used. This makes the queue reliable at the expense of performance. Note also that with a disk queue only a single queue worker is permitted.
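
In configuration terms, such an ultra-reliable setup looks roughly like this (a sketch using the legacy directives; the file and directory names are just examples):

$WorkDirectory /var/spool/rsyslog            # where the queue spool files live
$MainMsgQueueType Disk                       # pure disk queue, no in-memory part
$MainMsgQueueFileName mainq                  # prefix for the queue's spool files
$MainMsgQueueCheckpointInterval 1            # persist queue state after every message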

Now let's look at a simplified scenario:

input -> queue -> output

This is not quite correct in that inputs never connect directly to outputs, but this detail is irrelevant to the point I intend to make (replace "input" with "producer" and "output" with "consumer" if you prefer a fully consistent version).

Let's say processing time is the cost we incur. If we look at it, the queue's cost dominates the combined cost of input and output by far. In most cases, it dominates the input+output cost so much that you can express the total cost as just the cost of the queue operation, without looking at anything else.

So the input needs to wait until the queue is ready to accept a new message. Once it has done so, the output is notified; it immediately acquires the queue lock and begins the dequeue operation. At the same time, the input has already finished input processing (as I said, this happens in virtually "no time" compared to the queue operation), so it waits for the queue lock. Once the dequeue operation is finished, the output releases the lock and processes the message, also in virtually no time. The input acquires the queue lock, and the whole story begins again from the start.

A small queue may build up depending on the OS scheduler, but I think most often, input and output will just wait for the queue to complete. In that sense, this mode is similar to DIRECT mode, except that a queue can build up when the action needs to be retried.

So to optimize such a scenario, the best thing to do is a totally new queue storage driver for such cases. Sequential files do not really work well if we have multiple producers running.

This is a major effort and even then we need to think about the implications I raised in regard to processing cost above.

First of all, rsyslog was never designed for this use case (preserving every message EVEN in case of a sudden power failure). When I introduced purely disk-based queues, this was done to support disk-assisted mode: I needed a queue type that permits me to store things on disk if we run out of memory. As a "side-effect", a pure disk mode became available too (I'd never have implemented it for its own sake). As it was there, I decided to expose this mode and make it user-configurable. I thought (probably correctly) that it could solve some need - a need that I'd consider "very exotic" (think about the reliance on an audit-grade protocol for this to really make sense). And I added the checkpoint capability because it seemed useful even with disk-based queues, which could be guarded against total message loss by using a reasonable checkpoint interval. Again, a checkpoint interval of one is permitted just because this capability came "for free" and could be handy in some use cases.

The kiosk example we discussed in 2008 (?) on the mailing list looked like a good match for such an exotic environment: sudden power loss was an option, and we had low traffic volume. Bingo - perfect match.

However, I'd never thought about a reasonably high-volume system using disk-only queues. Thinking about the cost functions, such a system boils down to a DIRECT mode queue which just takes an exceptionally long time to process messages.

So probably the best approach for this situation would be to run the queue actually in direct mode. That removes the overwhelming cost of queue operations. Direct mode also ensures that the input receives an ack from the output [but there may be subtle issues which I need to check to make sure this is always the case, so do not take this for granted - but if it is not yet so, this should not be too complex to change]. With this approach, we have two issues left:

a) the output action may be so slow that it actually is the dominating cost factor, and not the disk queue operation

b) the output action may block for an extended period of time (e.g. during a retry)

In case a), a disk queue makes sense because its cost is irrelevant in this scenario - indeed, it is irrelevant under all circumstances. As such, we can configure a disk-only action queue in that case. Note that this implies a *very* slow output.
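
Configuration-wise, such a disk-only action queue could be set up roughly as follows (again a sketch; the database parameters are made-up examples):

$ModLoad ommysql
$WorkDirectory /var/spool/rsyslog
$ActionQueueType Disk                  # disk-only queue in front of this action
$ActionQueueFileName dbq               # spool file prefix for this action's queue
$ActionQueueCheckpointInterval 1       # ultra-reliable: checkpoint each message
*.* :ommysql:dbhost,Syslog,dbuser,dbpass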

Case b) is more complicated. We do NOT have any proper way to address it with the current code. The solution, IMHO, is to introduce a new queue mode "Disk Queue on Delay" which starts an ultra-reliable disk queue (preferably with a faster queue storage driver) if and only if the action indicates that it will need extended processing time. This requires some changes to action processing, but the action state machine should be capable of handling that with relatively slight modifications [again, an educated guess, not a guarantee].

In that scenario, we run the action immediately whenever possible. Only if that is not possible do we take the (considerable) extra effort of buffering messages into a much slower on-disk queue. Note that such a mode makes sense only with audit-grade protocols and senders (which hold processing until the ACK has been received). As such, a busy system automatically slows down to the rate that the queue writer can handle. In this sense, the overall system (e.g. a financial trading system!) may be slowed down by the unavailability of a failing output (which in turn causes the extra and very high cost of disk queue operations). It needs to be considered whether that is an acceptable price.

The faster an ultra-reliable disk queue storage driver performs, the more cases we can handle in the spirit of a) above. In theory, this can lead to the elimination of the b) cases.

Nevertheless, I hope I have shown that re-designing the queue (drivers) to support high throughput AND ultra-reliable operations AT THE SAME TIME is far from being a trivial task. To do it right, it involves some other changes too.

Monday, May 11, 2009

rsyslog configuration graphs

I worked today on adding a configuration graphing capability to rsyslog. This was inspired by many discussions about how the rule engine works. From a high-level perspective, rsyslog is "just" a configurable message router that routes messages from a set of inputs to a set of outputs, potentially applying transformations to the messages along the way. Rsyslog does so via the rule set, which is the most important part of the configuration file. In that sense, rsyslog is a configurable state machine and the rule set is its configuration.

While typical syslog configurations are rather simple and easy to understand, complex ones can be challenging. The graphing capability we now have provides a high-level, human-readable representation of rsyslogd's internal control structures. The beauty of it is that every user can create an exact diagram of his own configuration.

I hope this is a useful tool for documenting a system setup, but I also think it is a very valuable tool for learning to understand rsyslog, as well as for troubleshooting problems with message processing.

With that said, I now send you to the new graphing feature manual page, which I hope provides sufficient insight into how this feature is used.
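
In short - and if I recall the directive name correctly - usage boils down to telling rsyslogd where to write the graph and then rendering it with Graphviz (file names are examples):

$GenerateConfigGraph /tmp/rsyslog-conf.dot

and then, on the shell:

dot -Tpng /tmp/rsyslog-conf.dot > rsyslog-conf.png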

But... here is a sample graph to whet your appetite:

Friday, May 08, 2009

Can "more reliable" actually mean "less reliable"?

On the rsyslog mailing list, we currently have a discussion about how reliable rsyslog should be. It circles around a small potential window of message loss in the case of a sudden power failure. Rsyslog can be configured to put all messages into a disk queue (instead of main memory), so these messages survive such a power-fail condition. However, messages dequeued and scheduled for processing during the power outage may be lost.

Now consider a case where we have bursty UDP traffic and rsyslog is configured to use a disk-only queue (which obviously is much slower than an in-memory queue). Looking at processing speeds, the maximum burst rate is limited by using an ultra-reliable queue. To avoid losing UDP messages, a second instance could be run that uses an in-memory queue and forwards received messages to the one in ultra-reliable mode (that is, with the disk-only queue). That second instance queues in memory until the (slower) reliable rsyslogd can accept the message and put it into the reliable queue. Let's say you have a burst of r messages and that from this burst only r/2 can be enqueued (because the ultra-reliable queue is so slow). So you lose r/2 messages.

Now consider the case where you run rsyslog with just a reliable queue - one that is kept in memory and not able to cover the power failure scenario. Obviously, all messages in that queue are lost when power fails (or almost all, to be precise). However, that system has a much broader bandwidth, so with it there would never have been r messages inside the queue, because that system has a much higher sustained message rate (and thus the burst causes much less trouble). Let's say the system is just twice as fast in this setup (I guess it usually would be *much* faster). Then it would be able to process all r records.

In that scenario, the ultra-reliable system loses r/2 messages, whereas the somewhat more "unreliable" system loses none - by virtue of being able to process messages as they arrive.
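
To put some (made-up, purely illustrative) numbers on this: say a burst of r = 1,000 messages arrives within one second. The disk-only instance can enqueue 500 messages per second, so 500 messages overflow the UDP buffers and are gone for good. The in-memory instance sustains 1,000 messages per second, so all 1,000 are enqueued - and only the handful being processed at the very instant of a (rare!) power failure would be at risk.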

Now extend that picture to messages residing inside the OS buffers or even those that are still queued in their sources because a stream transport blocked sending them.

I know that each detail of this picture can be argued about at length.

However, my opinion is that there is no "ultra-reliable" system in real life, only various probabilities of losing messages. These probabilities often depend on each other, which makes calculating them very hard, if not impossible. Still, to a first approximation, the probability that a message survives the system at large is the product of the survival probabilities of each of its components - and the overall probability of message loss is one minus that product.

This is where *I* conclude that it can make sense to permit a system to lose some messages under certain circumstances, if that influences the overall probability calculation towards the desired end result. In that sense, I tend to think that a fast, memory-queuing rsyslogd instance can be much more reliable compared to one that is configured as being ultra-reliable, where the rest of the system at large is badly influenced by this (the scenario above).

However, I also know that for regulatory requirements, you often seem to need to prove that a system may not lose messages once it has received them, even at the cost of an overall increased probability of message loss.

My view of reliability is much the same as my view of security: there is no such thing as "being totally secure", you can just reduce the probability that something bad happens. The worst thing in security is someone who thinks he is "totally secure" and as such is no longer actively looking at potential issues.

I see reliability the same way. There is no such thing as "being totally reliable", and it is a really bad idea to think you could ever be. Knowing this, one may begin to think about how to decrease the overall probability of message loss AND about what loss rate is acceptable (and what to do with those cases, e.g. "how much can they hurt").

Thursday, April 30, 2009

A batch output handling algorithm

With this post, I'd like to reproduce a posting from David Lang on the rsyslog mailing list. I consider this to be important information and would like to have it available for easy reference.

Here we go:


the company that I work for has decided to sponsor multi-message queue
output capability; they have chosen to remain anonymous (I am posting from
my personal account)

there are two parts to this.

1. the interaction between the output module and the queue

2. the configuration of the output module for its interaction with the
database

Now for the first part (how the output module interacts with the queue), the
criteria are that

1. it needs to be able to maintain guaranteed delivery (even in the face
of crashes, assuming rsyslog is configured appropriately)

2. at low-volume times it must not wait for 'enough' messages to
accumulate, messages should be processed with as little latency as
possible



to meet these criteria, what is being proposed is the following

a configuration option to define the max number of messages to be
processed at once.

the output module goes through the following loop

X = max_messages


if (messages in queue)
    mark that it is going to process the next X messages
    grab the messages
    format them for output
    attempt to deliver the messages
    if (messages delivered successfully)
        mark messages in the queue as delivered
        X = max_messages   (reset X in case it was reduced due to delivery errors)
    else (delivering this batch failed, reset and try to deliver the first half)
        unmark the messages that it tried to deliver (putting them back into the
        status where no delivery has been attempted)
        X = int(# messages attempted / 2)
        if (X == 0)
            unable to deliver a single message, do existing message error processing



this approach is more complex than a simple 'wait for X messages, then
insert them all', but it has some significant advantages

1. no waiting for 'enough' things to happen before something gets written

2. if you have one bad message, it will transmit all the good messages
before the bad one, then error out only on the bad one before picking up
with the ones after the bad one.

3. nothing is marked as delivered before delivery is confirmed.



an example of how this would work

max_messages=15

messages arrive 1/sec

it takes 2+(# messages/2) seconds to process each batch (in reality the
time to insert things into a database is more like 10+(# messages/100),
or even more drastic)

with the traditional rsyslog output, this would require multiple output
threads to keep up (processing a single message takes 2.5 seconds with
messages arriving 1/sec)

with the new approach and a cold start you would see

message arrives (Q=1) at T=0
om starts processing message at T=0 (expected to take 2.5)
message arrives (Q=2) at T=1
message arrives (Q=3) at T=2
om finishes processing message (Q=2) at T=2.5
om starts processing 2 messages at T=2.5 (expected to take 3)
message arrives (Q=4) at T=3
message arrives (Q=5) at T=4
message arrives (Q=6) at T=5
om finishes processing 2 messages (Q=4) at T=5.5
om starts processing 4 messages at T=5.5 (expected to take 4)
message arrives (Q=5) at T=6
message arrives (Q=6) at T=7
message arrives (Q=7) at T=8
message arrives (Q=8) at T=9
om finishes processing 4 messages (Q=4) at T=9.5
om starts processing 4 messages at T=9.5 (expected to take 4)

the system is now in a steady state

message arrives (Q=5) at T=10
message arrives (Q=6) at T=11
message arrives (Q=7) at T=12
message arrives (Q=8) at T=13
om finishes processing 4 messages (Q=4) at T=13.5
om starts processing 4 messages at T=13.5 (expected to take 4)

if a burst of 10 extra messages arrived at T=13.5, this last item would
become

10 messages arrive (Q=14) at T=13.5
om starts processing 14 messages at T=13.5 (expected to take 9)
message arrives (Q=15) at T=14
message arrives (Q=16) at T=15
message arrives (Q=17) at T=16
message arrives (Q=18) at T=17
message arrives (Q=19) at T=18
message arrives (Q=20) at T=19
message arrives (Q=21) at T=20
message arrives (Q=22) at T=21
message arrives (Q=23) at T=22
om finishes processing 14 messages (Q=9) at T=22.5
om starts processing 9 messages at T=22.5 (expected to take 6.5)
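
To make the proposed loop a bit more tangible, here is a minimal, self-contained sketch in C (rsyslog's own implementation language). Note that this is my illustration, not the sponsored implementation: try_deliver() is a made-up stand-in for an output module's delivery call, and the single "bad" message is simulated.

/*
 * Sketch of the batch-halving delivery loop described above.
 * try_deliver() is a hypothetical stand-in for an output module's
 * delivery call - it is NOT actual rsyslog API.
 */
#include <stdio.h>

#define MAX_BATCH 15   /* the "max_messages" config option */
#define NUM_MSGS  40
#define BAD_MSG   23   /* simulate a single undeliverable message */

/* pretend delivery: the whole batch fails if it contains the bad message */
static int try_deliver(const int *msgs, int from, int n)
{
    for (int i = from; i < from + n; ++i)
        if (msgs[i] == BAD_MSG)
            return 0; /* batch delivery failed */
    printf("delivered %d message(s), starting with msg %d\n", n, msgs[from]);
    return 1;
}

int main(void)
{
    int msgs[NUM_MSGS];
    for (int i = 0; i < NUM_MSGS; ++i)
        msgs[i] = i;

    int next = 0;          /* index of the first not-yet-delivered message */
    int x = MAX_BATCH;     /* current batch size limit */

    while (next < NUM_MSGS) {
        int n = (NUM_MSGS - next < x) ? NUM_MSGS - next : x;
        if (try_deliver(msgs, next, n)) {
            next += n;        /* confirmed: mark the batch as delivered */
            x = MAX_BATCH;    /* reset the limit after a success */
        } else {
            x = n / 2;        /* failed: retry with the first half */
            if (x == 0) {     /* a single message failed: error processing */
                printf("msg %d undeliverable, running error processing\n",
                       msgs[next]);
                ++next;
                x = MAX_BATCH;
            }
        }
    }
    return 0;
}

If you follow the trace it prints, you can watch the batch size being halved down to the single bad message, the error processing firing exactly once, and the batch size resetting to the maximum after each successful delivery - exactly the three advantages listed above.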

Monday, April 27, 2009

Levels of reliability

We had a good discussion about reliability in rsyslog this morning. On the mailing list, it started with a question about the dynafile cache, but quickly morphed into something else. As the mailing list thread is rather long, I'll try to give a quick excerpt of the things I consider vital.

First a note on RELP, which is a reliable transport protocol. This was the relevant thought from the discussion:

I've got relp set up for transfer - but apparently I discovered
that relp doesn't take care of a "disk full" situation on the receiver
end? I would have expected my old entries to come in once I had cleared the disk space, but no... I'm not complaining btw - just remarking that this was an unexpected behaviour for me.


That has nothing to do with RELP. The issue here is that the file output writer (in v3) uses the sysklogd concept of "if I can't write it, I'll throw it away". This is another issue that was "fixed" in v4 (not really a fix, but a conceptual change).

If RELP gets an ack from the receiver, the message is delivered from the RELP point of view. The receiving end acked it, so everything is done as far as RELP is concerned. The same thing happens if you queue at the receiver and for some reason lose the queue.

RELP is reliable transport, but not more than that. However, if you need reliable end-to-end, you can do that by running the receiver totally synchronous, that is all queues (including the main message queue!) in direct mode. You'll have awful performance and will lose messages if you use anything other than RELP for message reception (well, plain tcp works mostly correct, too), but you'll have synchronous end-to-end. Usually, reliable queuing is sufficient, but then the sender does NOT know when the message was actually processed (just that the receiver enqueued it, think about the difference!).
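
To illustrate, a fully synchronous RELP receiver might be configured roughly like this (a sketch with the legacy directives; port and file name are examples):

$ModLoad imrelp
$InputRELPServerRun 20514        # listen for RELP on port 20514
$MainMsgQueueType Direct         # no queuing: a message is processed before it is acked
*.* /var/log/central.log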

This explanation triggered further questions about the difference in end-to-end reliability between direct queue mode and disk-based queues:

The core idea is that a disk-based queue should provide sufficient reliability for most use cases. One may even question if there is a reliability difference at all. However, there is a subtle difference:

If you don't use direct mode, then processing is no longer synchronous. Think about the street analogy:


http://www.rsyslog.com/doc-queues_analogy.html


For synchronous operation, you need the U-turn-like structure.

If you use a disk-based queue, I'd say it is sufficiently reliable, but it is no longer an end-to-end acknowledgement. If I had this scenario, I'd go for the disk queue, but it is not the same level of reliability.

Wild sample: sender and receiver at two different geographical locations. Receiver writes to database, database is down.

Direct queue case: sender blocks because it does not receive ultimate ack (until database is back online and records are committed).

Disk queue case: sender spools to the receiver's disk, then considers the records committed. The receiver ensures that records are actually committed when the database is back up again. (You would, of course, use ultra-reliable hardware for the disk queues.)

The level of reliability is the same under almost all circumstances (and I'd expect "good enough" for almost all cases). But now consider that we have a disaster at the receiver's side (let's say a flood) that causes physical loss of the receiver.

Now, in the disk queue case, messages are lost without the sender knowing. In the direct queue case, we have no message loss.

And then David Lang provided a perfect explanation (with which I fully agree) of why, in practice, a disk-based queue can be considered mostly as reliable as direct mode:


> Level of reliability is the same under almost all circumstances (and I'd
> expect "good enough" for almost all cases). But now consider we have a
> disaster at the receiver's side (let's say a flood) that causes physical loss
> of the receiver.

no worse than a disaster on the sender side that causes physical loss of the sender.

you are just picking which end to have the vulnerability on, not whether you will have the vulnerability at all (although it's probably cheaper to put reliable hardware on the upstream receiver than it is to do so on all senders)

> Now, in the disk queue case, messages are lost without the sender knowing. In
> the direct queue case we have no message loss.

true, but you then also need to have the sender wait until all hops have been completed. that can add a _lot_ of delay without necessarily adding noticeably to the reliability. the difference between getting the message stored in a disk-based queue (assuming it's on redundant disks with fsync) one hop away vs the message going a couple more hops and then being stored in its final destination (again assuming it's on redundant disks with fsync) is really not much in terms of reliability, but it can be a huge difference in terms of latency (and unless you have configured many worker threads to allow you to have many messages in flight at the same time, throughput also drops)

besides which, this would also assume that the ultimate destination is somehow less likely to be affected by the disaster on the receiving side than the rsyslog box. this can be the case, but usually isn't.


That leaves me with nothing more to say ;)

Wednesday, April 08, 2009

what is "nextmaster" good for?

People who have looked at rsyslog's git may have wondered what the branch "nextmaster" is good for. It actually is an indication that the next rsyslog stable/beta/devel rollover will happen soon. With it, the current beta becomes the next v3-stable. At the same time, the current (v4) devel becomes the next beta (which means there won't be any beta in v3 any longer). To facilitate this, I have branched off "nextmaster", which I will work on for now. The "master" branch will no longer be touched and will soon become the beta. Then I will merge "nextmaster" back into "master" and continue to work with it.

The bottom line is that you currently need to pull nextmaster if you would like to stay on the edge of development. Sorry for any inconvenience this causes, but it is the best approach I see for going through the migration (and I have done the same in the past with good success - just that back then, nobody noticed ;)).

Wednesday, April 01, 2009

rsyslog going to outer space

Rsyslog was designed to be a flexible and ultra-reliable platform for demanding applications. Among others, it is designed to work very well in occasionally connected systems.

There are some systems that are inherently occasionally connected - space ships. And while we are still a bit away from the Star Trek way of doing things, current space technology needs a "captain's star log". Even for spacecraft, it is important to know when and why systems were powered up, over- or under-utilized, or malfunctioned (for example, due to an "attack" not by a Klingon, but by a cosmic ray). And all of this information needs to be communicated back to earth, where it can be reviewed and analyzed. For all of this, systems capable of reliable transmission in a disconnected environment are needed.

Inspired by NASA's needs, the Internet Research Task Force (the research branch of the IETF) is working on a protocol named DTN, usually called the interplanetary Internet.

As we probably all know, Microsoft Windows flies on the Space Shuttle. And, more importantly, Linux also did. With the growing robustness of Open Source, future space missions will most probably contain more Linux components.

This overall trend will also be present in NASA's and ESA's future Jupiter missions. There is a lot of information technology on the upcoming spacecraft, and so there are a lot of things worth logging. While specialized software is usually required for spacecraft operations, it is conceivable that rsyslog, as the leading provider of reliable occasionally-connected logging infrastructures, may extend its range into the solar system. It only sounds logical to use all the technology we already have in place for reliable logging even under strange conditions (see "reliable forwarding"). Of importance are also rsyslog's speed and robustness.

As a consequence, we have today begun to implement the DTN protocol for the interplanetary Internet. That will be "omdtn" and is available as part of the rsyslog spaceship git branch. This branch is available as of now from the public git repository.

We could also envision that mission controllers will utilize phpLogCon to help analyze space craft malfunction. A very interesting feature is also rsyslog's modular architecture, which could be used to radiate a new communication plugin up to the space ship, in case this is required to support some alien format. This also enables the rsyslog team to provide an upgrade to the Interstellar Internet, should this finally be standardized in the IETF. If so, and provided the probe has enough consumables, it may be in the best spot to work as a stellar relay between us and whoever else.

Friday, March 27, 2009

is freshmeat now dead?

I used freshmeat.net - both as a user and as a project author - for several years and liked its clean and efficient interface. Now they have revamped the whole thing, and I have to admit I personally think they screwed up while doing so.

First of all, a project has a structure that consists of various branches, each of them coming in different versions (see my post on the rsyslog family tree). In the old interface, you had branches and versions, and everyone could clearly see what belonged where. In the new interface (as I understand it), you have a bunch of links that you can label. So I now have to deal with a flat structure and labels. This is NOT how software grows. And as this no longer is a real-world abstraction, it has become quite complicated to assign meaningful values. Not to mention that the big bunch of links is probably quite confusing to users.

I'll probably deal with that by removing all but the development branches. Better to have consistent information than to have everything...

I also miss the statistics counters. They provided some good insight into what users were interested in and what effect releases had - very valuable for me as an author, but also valuable for me as a user, for example when I want to judge how active a project is. Freshmeat promised (on March 15th) to bring back statistics "in a few days", but today (March 27th) they are still missing. And if they eventually appear and follow the rest of the design paradigm, I am skeptical whether there is really value in them.

All in all, I am very dissatisfied. I am sad to have lost a valuable open source resource. So what to do now? SourceForge again - I don't like that either. Ohloh? Maybe. Probably it is best to concentrate on our own web resources... But first of all, I'll wait a couple of days or weeks and hope that freshmeat will become usable again. Don't expect too many announcements on freshmeat from me for the time being, though.

There is also an interesting discussion thread on the new freshmeat design; I suggest giving it a read (you'll also find that others like it!).

Monday, March 23, 2009

rsyslog "family tree"

I have created an rsyslog "family tree" showcasing how the various branches and versions go together. It is a condensed graph of the git DAG and shows a few feature branches as examples. I personally think it provides a good overview of how rsyslog work progresses (click the picture for a larger version).


In red is the git master branch; blue are the currently supported stable branches. Branch head "v1-stable" is dotted because it is no longer officially supported. Dashed nodes are versions on feature branches, solid nodes are versions on main branches. Solid lines are direct ancestors, dashed lines indicate that there are some versions in between. Lots of feature branches have not been shown. Bug fixes are typically applied to the oldest code experiencing the problem and then merged into the more recent versions; thus the code flow for bug fixes is kind of reversed. This bug-fixing code flow is not shown inside the graph.

Note that you can use gitk to create the full DAG from the git archive. The purpose of my effort is to show the relationships that are well-hidden in gitk's detailed view.
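
For reference, the full DAG can be displayed with standard git commands (nothing rsyslog-specific here):

gitk --all                          # graphical view of all branches
git log --graph --oneline --all     # the same as ASCII art in the terminal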

I have written a much more elaborate post about the "evolution of software", unfortunately, it is available currently only in German (with very questionable results by Google Translate).

Tuesday, March 17, 2009

Why is there still PRI in a syslog message?

This is the first of a couple of blog posts I intend to write in response to Raffy's post on syslog-protocol. I am very late, but better now than never. Raffy raised some good points. With some I agree, with some I do not, and for others it is probably interesting to see why things are as they are.

The bottom line is that this standard - as probably every standard - is a compromise of what could be agreed upon by a larger group of people and corporate interests. Reading the IETF mailing list archives will teach you much about this process, but I will dig out the interesting entry points into the mass of posts for you.

I originally thought I'd reply to Raffy with a single blog post. However, this turned out to be undoable - every time I intended to start, something bigger and more important came my way. So I am now resorting to more granular answers - hopefully this works.

Enough said, on to the meat. Raffy said:
  • Syslog message facility: Why still keeping this? The only reason that I see people using the facility is to filter messages. There are better ways to do that. Some of the pre-assigned groups are fairly arbitrary and not even really implemented in most OSs. UUCP subsystem? Who is still using that? I guess the reason for keeping it is backwards compatibility? If possible, I would really like this to be gone.
  • Priority calculation: The whole priority field is funky. The priority does not really have any meaning. The order does not imply importance. Why having this at all?


And I couldn't agree more with this. In my personal view, keeping the old-style facility is a large debt, but it was necessary to make the standard happen. Over time, I have to admit, I even tend to think it was a good idea to stick with this format - it actually eases the transition.
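
For readers who have never decoded it: the PRI value at the very start of each message packs facility and severity into a single number:

PRI = facility * 8 + severity

For example, facility 4 (auth) at severity 2 (critical) gives 4 * 8 + 2 = 34, transmitted as "<34>" right at the start of the message.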

Syslog-protocol has a long history. We thought several times that we were done; the first time this happened was in November 2005. Everything was finalized, and then there was a quite unfortunate (or fortunate, as you may say now ;)) IETF meeting. I couldn't attend (too much effort to travel around the world for a 30-minute meeting...), and many other WG participants could not either.

It took us by surprise that the meeting concluded the standard was far from ready for publishing (read the meeting minutes). The objection sparked a very long (and productive, I have to admit) WG mailing list discussion. To really understand the spirit of what happened later, it is useful to read the mailing list archives starting with November 14th.

However, that is a lot of material, so let me pick out some posts that I find important. The most important fact is that backward compatibility became the WG charter's top priority (one more post to prove the point). Among others, it was strongly suggested that both the PRI and the RFC 3164 timestamp be preserved. Thankfully, I was able to prove that there was no common understanding of the date part among different syslog servers (actually, the research showed that nothing but PRI is common among syslogds...). So we decided that PRI must be kept as is - to favor compatibility.

As I said, I did not like the decision at that time, and I still do not like the very limited number of facilities it provides us (actually, I think facility is mostly useless). However, I have accepted that there is wisdom in trying to remain compatible with existing receivers - they will be with us for a long time.

So I have to admit that I think it was a good decision to demand that PRI remain compatible. With structured data and the other header fields, we do have ways of specifying different "facilities", that is, originating processes. Take this approach: look at facility as a down-level filtering capability. If you have a new syslogd (or write one!), make sure you can filter on all the other rich properties, not just facility.

In essence, I think this is the story why, in 2009, we still have the old-style PRI inside syslog messages...

Thursday, March 12, 2009

How Software gets stable...

I have received a couple of questions in the past days asking whether this or that rsyslog feature can be introduced into the stable branch soon. So I thought it is time to blog about what makes software stable - and what does not...

But let me first start with something apparently unrelated: let me confess that, from time to time, I like to enjoy some good wine (Californian Merlot and Cabernet especially - ask me for my mailing address if you would like to contribute some! ;)). And on some special occasions, I spend way too much money just to get the "old stuff": those nice wines that have aged in oak barriques. To cut a long story short, those wines are stored in barrels not just for storage, but because the exposure to the oak, as well as some properties of the storage container, interact with the wine and make it taste better. Wikipedia has the full story, and also this interesting quote:
The length of time that a wine spends in the barrel is dependent on the varietal and style of wine that the winemaker wishes to make. The majority of oak flavoring is imparted in the first few months that the wine is in contact with oak but a longer term exposure can affect the wine through the light aeration that the barrel allows which helps to precipitate the phenolic compounds and quickens the aging process of the wine.[8] New World Pinot noir may spend less than a year in oak. Premium Cabernet Sauvignon may spend two years. The very tannic Nebbiolo grape may spend four or more years in oak. High end Rioja producers will sometimes age their wines up to ten years in American oak to get a desired earthy, vanilla character.
Read it again: "High end Rioja producers will sometimes age their wines up to ten years in American oak to get a desired earthy, vanilla character."

So what would the Riojan winemaker probably say if you asked him for a great 2008 wine (we are in early 2009 currently, just for the record)? How about "Be patient, my friend - wait another nine years, and you can enjoy it!" And what if you begged him, saying you need it now, immediately? "I am sorry, but I can't accelerate time...". And if you told him you really, really need it because otherwise you cannot close an important business deal? Maybe he says: "Listen, my friend. Some things simply need time. You can't hurry them. But if you need to have something that can't really exist, I can get you a bottle of that wine and label it as 'Famous Riojan 10-year aged Wine from 2008' - but we both know what is in the bottle!". Technically speaking, the winemaker is not even cheating - he claims that the wine is from 2008, so how could it be aged ten years? If anyone buys that (today), the buyer is probably very much at fault.

As a side-note, all too often our society works that way: someone requests something that is impossible to do, someone begs long enough until someone else cheats, everybody knows - and we are all happy (at least up to the point where the cheat gets us into real trouble... - pick your favorite economic crisis to elaborate).
The moral of the story? Some things need time. And you cannot replace time with anything else. If you want the real taste of a wine aged ten years in oak... you need ten years.

By now you probably wonder what all of this has to do with software. A lot! Have you ever thought about what makes software stable? In closed source, you hopefully have a large testing department that helps you nail down bugs. In open source, you usually do not have many of these folks, but you have something much better: a community of loyal users eager to break their systems with the latest and greatest of what you happen to have thrown together ;)

In either case, you start with a relatively unstable program, and with each bug report (assuming you fix it), the software gets more stable. While fixing bugs, however, you may introduce new instabilities. The larger the fix, the larger the risk. So the more you change, the greater the need to re-test and the greater the probability that, while one issue is fixed, one (or more!) others have been newly created. For very large fixes, you may even end up with a much worse version of the software than you had before.

Thankfully, a patch to fix a bug is usually much smaller than the thing it fixes. Often it is just a few lines of code, so the risk of worsening things is low. Why is the patch usually just a few lines long? Simply because you are fixing a detail of some larger thing that otherwise works quite well. You need to change only those details which were not properly thought out and thus resulted in wrong behavior (if you made a design error, that's a different story...).

So the more bug reports you get, and the more of them you fix, the more stable the software gets. You may have seen some formal verification in computer science, but in practice, for most applications, this is the simple truth of how things work.

Now to new features: a feature is usually the opposite of a bugfix: introducing a new feature tends to be a larger effort, touching much more code and adding code where code has never been ;). If you add new features, chances are high that you introduce new bugs. So with each feature added, you should expect the stability of your code to decrease (and, oh boy, it does!). So how do you iron out these newly introduced bugs? Simple: wait for bug reports, fix them, wait for more - until you have reached at least a decent level of stability (aka "no new/serious bug reports received for a period of n days", however you have defined n).

And what if you then introduce a new feature? I guess by now you know: that will decrease stability, so you need to iterate through the bug-fixing process... and so on.

But, hey, we are doing open source. I *love* to add features every day! Umm... I guess my program will never reach a decent level of stability. Bad...

What to do? Taking a long vacation (tempting...) is not a real solution. Who would fix bugs while I am away (shame on me for mentioning this...)? But a pattern appears if you follow this thought: what you need to do to make a program stable is fix bugs for a period of time while refraining from adding new features!

Thanks to git, this can easily be done: you simply create one code branch for the version that shall become stable, and another branch for the version where you create new features (the development branch). With a bit of git voodoo, you can even import fixes from your stabilizing branch into the development branch. Once you are happy with the stability of your code (in the stabilizing branch), you are ready to declare it stable! For that, you'll probably have a separate branch. Then you can start the game again: copy the state of your development branch to the stabilizing branch, do not touch that branch except for bug fixes, and continue adding new features to the development branch. Iterate this for as long as you are interested in your project.
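
In git terms, such a cycle boils down to just a few commands (a sketch; the branch names are chosen to match the description above):

git checkout -b beta master      # freeze the current state for stabilizing
git checkout master              # feature work continues here only

git checkout beta                # a bug fix lands on the stabilizing branch...
git commit -a -m "bugfix"
git checkout master
git merge beta                   # ...and is imported into development

git branch -f stable beta        # beta has proven itself: declare it stable
git branch -f beta master        # and restart the cycle from current development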

This, in short form, is how rsyslog is created. Currently, there are four main branches, plus a number of utility branches that aid the development of specific features (let's ignore them in this context): we have the development (also called "master") branch, which equates to the... yes... development branch from the sample above ;). The stabilizing branch is called "beta" in rsyslog terms. Then we have a v2-stable and a v3-stable branch. Both are actually stable, but v2 is probably even more stable because it has - except for bug fixes - not been touched for many more months. It also has the fewest features, so it is probably the best choice if you are primarily interested in stability and do not need any of the new features. As rsyslog is further developed, we will add extra stable branches (e.g. there will probably be a v4- and a v5-stable branch - but we may also no longer maintain v2-stable at that point, because nobody uses it any longer [just like dinosaurs are no longer maintained ;)]).

Did you read carefully? Did you get the message? So let me ask:
What makes software stable?

Bug fixes? Testing? Money (yes, yes, please throw at me!)?

REALLY? Let me repeat:
WHAT MAKES SOFTWARE STABLE?

There is only one real ingredient, and that is: TIME! Just like good wine, software needs to age. Thankfully, age, for software, is defined by the number of different test cases it has been through. So money can accelerate the aging of software (as some chemistry guru may be able to do for wine, probably with the same side-effects...). But for the typical open source project, stability simply goes along with the rate at which the community adopts new releases, tests them AND submits bugs, so that the authors can work on fixing broken things.

And what is the moral of the story? Finally, I am coming back to the opening questions: there is nothing but time that makes rsyslog stable. So if you ask me to add a feature today, and I do, you cannot expect it to be immediately stable - simply because this is not how things work (thanks, btw, for trusting so much in my programming abilities ;)). The new feature needs to go through all the stages: it must first be applied to the current development build (otherwise we would destabilize the current beta, which is not desirable). Then it migrates to the beta build over time, where it can finally fully stabilize and, whenever the bug rate seems to justify it, move on to the stable build. For rsyslog, this typically means three to four, sometimes more, months before a new feature hits the stable branches. And there is little you can do about that.

"But... hey, I need a stable version of that cool feature now! My manager demands it. Hey, I'll also pay you for it..." Guess what? I can do the same the winemaker did. Of course, and if you ask really nicely, I can create a v3-stable-cool version for you, which is a version with the cool feature that I have declared immediately stable (btw, it's mostly the same thing that all others just cal l "the beta"). If that satisfies your boss, I'll happy to do. But we both know what you have gotten... ;)

Of course, I am exaggerating a bit here: in software, we can somewhat increase the speed of stabilizing by adding testers. Money (and even more, motivation) can do that. We can also backport single new features to so-far stable branches (note the fine print!). This reduces the stability a bit, but obviously not as much as running the development version. However, this requires effort (read: time and/or money), and it may be impractical for many features. Some features simply rely on others that were newly introduced in that development version, and if you backport the whole bunch of them, you'll have something that has changed as much as the development version, but in an environment where the component integration is not as well tested and understood. Of course, some company policies (seem to) force you to do that. If so, the end result is that you have a system that is much less stable than the development version, but carries a seemingly "stable" label. Wow, how cool! As the saying goes: "everyone gets what one asks for" ;)

So what is the bottom line? Good software and good wine have something in common: time to ripen! Think about this the next time you ask me to offer a new feature as part of a stable branch. It is simply impossible. But, of course, you can bribe me to stick that "stable" label onto a tampered-with version...