syslog fragmentation

Hi all, I was asked about the current (and past) state of syslog splitting a single syslog message into multiple smaller messages. The question came up because of a broken syslogd implementation which can receive at most 256 bytes (octets). I thought I would re-post my reply here on the blog, as it may be of interest to you, too. Here we go:


Hi all,

I have now reviewed all of the discussion.

Let me start with the broken receiver. With 255 octets, any (generic)
fragmentation method would need to be ultra-compact, which of course is
doable, but not (with reasonable overhead) within the context we set up in
the version of syslog-protocol that will become a normative RFC.
Note also that RFC 3164 will be superseded by that RFC once it is out.

As has been described here, fragmentation can either be done at the
protocol layer or at the application layer. In the latter case, the
application needs to settle on a sensible maximum size and needs to emit
message sequences that are somewhat atomic. sendmail seems to do that,
and I know of some other examples, too. Some database servers seem to do
verbose logging in a similar way, in that they log parts of a
statement across different log messages. However, it is often quite
complicated to reassemble those application logs into the original message
(much of the complexity of log analysis stems from that).
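To make the application-layer idea a bit more concrete, here is a minimal sketch. It is purely hypothetical: the "id=… part=n/m" tagging, the function name and the size limit are made up for illustration and are not taken from sendmail or any real product.

```python
# Hypothetical sketch of application-level splitting: the application
# itself breaks a long log text into numbered parts and tags each part
# so that an analysis step can later put them back together.

def split_message(msg_id, text, max_part_len=800):
    """Split 'text' into parts of at most 'max_part_len' characters,
    each prefixed with the message id and a part counter."""
    chunks = [text[i:i + max_part_len]
              for i in range(0, len(text), max_part_len)] or [""]
    total = len(chunks)
    return ["id={} part={}/{} {}".format(msg_id, n, total, chunk)
            for n, chunk in enumerate(chunks, start=1)]

# Example: a ~2000-character statement becomes three tagged messages.
for part in split_message(42, "SELECT ..." + "x" * 1990):
    print(part[:50])
```

The hard part, as noted above, is not the splitting but reuniting the parts on the analysis side, especially when the messages do not arrive in the order they were created.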

A protocol-based approach solves these issues. But as it has been
rejected by the syslog-sec WG, it is not considered useful by the IETF
syslog community. We may want to give it another shot, but that
should be done inside the framework laid out in the new IETF series, and
as such it would not be a good solution for a broken receiver. Actually,
the new RFC series requires a minimum maximum length of 480 octets
(stemming from the available IPv4 UDP payload size). The recommended
minimum maximum length is 2K, and more is permitted if sender and
receiver support it. There is no upper limit per se, but a receiver may
either truncate the message or even discard it as a whole. If truncation
happens, it must truncate at the end, without paying any attention to the
syntax and semantics of the message. This is specified in section 6.1 of
[1] and was the result of very elaborate discussions. Most
importantly, the syntax- and semantics-agnostic truncation was a
requirement that came out of those discussions.
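Expressed as code, that truncation rule is deliberately trivial. A rough sketch only; the names and the 2048/480 figures are used here for illustration and are not taken from any particular implementation:

```python
# Rough sketch of the truncation behaviour described above: a receiver
# with a configured maximum of 'max_len' octets either truncates at the
# end or discards the whole message - it never inspects syntax/semantics.

MAX_LEN = 2048        # recommended minimum maximum; 480 octets is the floor

def accept(raw_msg, max_len=MAX_LEN, discard_oversize=False):
    """Return the message as the receiver would process it, or None."""
    if len(raw_msg) <= max_len:
        return raw_msg
    if discard_oversize:
        return None               # discarding as a whole is also permitted
    return raw_msg[:max_len]      # otherwise: cut at the end, nothing more

oversize = b"<34>1 2009-08-27T12:00:00Z host app - - - " + b"x" * 4000
print(len(accept(oversize)))      # -> 2048
```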

As Tina mentioned, my company Adiscon and I personally have been doing
Windows event log to syslog conversion for quite a while. Windows event
messages can be large and keep growing larger. We have been converting them to
syslog for over 10 years now, and at the time we started, the only common
ground was the 1K limit already mentioned. Note that at that time some
implementations could experience serious malfunction if messages over
this size arrived (I remember the Solaris syslogd immediately
segfaulting, as one example of several). We thought about how best to address
the issue. We were tempted to do an app-level split, much as sendmail
does, but decided against it for two reasons:

1) the messages to be logged did not originate from us (unlike
in the sendmail case). This implies that we do not know exactly
what makes sense to put together. While there is a potentially large set
where this can be properly concluded from context, there is also a set
where this is not the case. The latter would have required a more
protocol-like, generic approach and thus specialised parsers on the end
systems – something we did not really like.

2) this is somewhat similar to the parser problem. In general, log
analysis is even harder if a single logical log entry is distributed
over several physical records, especially if you take into account that
the order of appearance does not necessarily (in practice almost never)
reflect the order of creation. So processing such a log requires a
consolidation phase. It is especially hard for a human reviewer to do this
while reviewing logs, which we considered a big disadvantage.

So we looked at what was available at that time. While the 1K size limit
was universally accepted, most syslog receivers either supported larger
sizes by default, could be configured to do so, or could be recompiled to
handle them (sysklogd, the then-omnipresent syslogd on Linux, is a prime
example of the latter – #define MAXLINE 1024 needed to be changed and
you were basically done [within UDP constraints]). The real limit
turned out to be the UDP maximum size, 64K in theory but with different
default/hard-coded limits in various stacks. In 2005 I did a bit of
research [2] and found that 4K seems to be the typical real-world limit.
But even many years before, almost every Windows event record fit into
4K (the exception being records with dumps in them…). Also, we already
had plain TCP-based syslog at that time, which did not experience any
size issues.
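If you want to find out where a particular receiver's practical limit lies, a quick probe along the following lines can help. This is only a sketch under assumptions: the address, port and sizes are placeholders, and you have to check on the receiving side which messages actually arrived and whether they were truncated.

```python
# Quick sketch for probing a receiver's practical UDP size limit:
# send test messages of increasing size and check at the receiver
# which of them arrive intact. Address and sizes are placeholders.

import socket

RECEIVER = ("192.0.2.10", 514)       # syslogd under test (placeholder)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for size in (1024, 2048, 4096, 8192, 16384, 65000):
    payload = ("<13>size-test {} ".format(size) + "x" * size).encode("ascii")
    try:
        sock.sendto(payload, RECEIVER)
        print("sent datagram of", len(payload), "octets")
    except OSError as exc:           # the local stack may refuse large datagrams
        print("send failed at", len(payload), "octets:", exc)
sock.close()
```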

So the practical solution for our Windows-to-syslog size problem was
simply to ignore it and tell customers how to configure/recompile their
syslogd. That worked on Windows at least for WinSyslog and Kiwi Syslog,
and on the *nix side at least for sysklogd, syslog-ng, some variants I
don’t remember by name, and now rsyslog. We recommended switching to a
product that supported larger sizes where the stock solution did not.
Or we used interim, specialised receivers that logged data into
separate databases or files.

When I was unable to convince the syslog-sec WG to specify fragmentation
as part of the syslog protocol itself, I was at least able to put that
spirit into the I-D: so we now have the ability to use large sizes if
everybody configures their systems correctly. Part of that spirit, funny
as it may sound, is to place important information early in the packet,
as this improves the chance that it will actually be delivered.
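To illustrate that point with a hypothetical example: since any truncation removes the tail of the message, it pays to compose messages so that the essential facts precede the verbose detail. The field names and values below are made up.

```python
# Hypothetical illustration of "important information first": the
# essential facts are placed before expendable detail, so that
# end-truncation at a receiver's limit hurts as little as possible.

def compose(pri, timestamp, host, app, summary, detail=""):
    # 'summary' carries the key facts; 'detail' is expendable tail data
    return "<{}>{} {} {}: {} {}".format(
        pri, timestamp, host, app, summary, detail).rstrip()

msg = compose(13, "2009-08-27T12:00:00Z", "host1", "myapp",
              "login failed for user root from 192.0.2.99",
              "verbose-diagnostic-dump=" + "x" * 5000)
print(msg[:2048])   # even if cut at 2K, the key facts survive
```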

I have to admit this is not a perfect way to do it, but at least it
works if everything is set up correctly. The current main “problem” is
that RFC 3195 (somewhat vaguely) sets an upper limit of 1K for messages
and also does not talk about truncation. So if there is an RFC 3195 system
inside a relay chain, the maximum size for the whole chain goes down to
1K – there is nothing we can do about that. This is also the reason why
a new revision of 3195 is needed; this is underway, as far as I know.
One should also note that this limitation is of no practical importance
for the time being (thus no real “problem”), because 3195 did not find
widespread support. To the best of my knowledge, the only commercially
available implementations are Cisco’s and ours, with us also providing
the only (more or less, due to low priority) fully supported 3195
implementation inside an open source syslogd. There was SDSC syslog, but
that project is, to the best of my knowledge, no longer alive. It also
never spread to become the default syslogd on any important Linux
distribution and can be considered “exotic” at best.

I hope this description is useful for you. The bottom line is that there
is no standard, and there is – or at least was – no support for specifying
one. Even if we change that, the end result will most probably not
support down-level receivers below the 480-octet limit set forth in the
upcoming RFC series.

Best regards,
Rainer

[1] http://tools.ietf.org/html/draft-ietf-syslog-protocol-23
[2] http://www.monitorware.com/Common/en/articles/ihe-syslog.php

back to work…

You know how it is: the more you like something, the “faster” time elapses. So it turns out to be Thursday of my first week back at work from my summer vacation now ;) This time, I was really lazy and had extremely limited Internet connectivity while I was away. While a bit unusual for me (I was never disconnected for more than 2 days in the past 10 years or so…), it turned out to be a good experience (well, some email via PDA flowed, though). As a side note, it was good to see rsyslog well alive while I was out of town! Many thanks to all contributors.

As you probably expect, there was a bunch of work waiting for me when I returned. I am still suffering a bit from it. However, I managed to do some work on rsyslog. So I finally managed to get rid of the hardcoded syslog message size limit. This, of course, caused a lot of code to be touched. I did a pre-release on the mailing list, but I do not have the feeling that many tried it. Well, now it is the official devel release and we’ll see if we run into interesting kinds of trouble.

The next thing on my agenda is the new documentation generation system. I got a lot of help from my friends at Red Hat Japan. Actually, I now need to fully understand how docbook and the generation process work. I guess that will keep me occupied for a while. So please keep watching this blog, even though I may not have many new posts for the time being.