Friday, July 25, 2008
I am very happy with this development. However, it also shows a downside. Rsyslog, for obvious reasons, "offers" much more chance for misconfiguration than its feature-bare ancestor sysklogd. While I tried hard to make configuration simple and intuitive, I managed to succeed only in the simple cases. If you configure rsyslog for complex needs, I have provided ample ways to screw up ;) That situation will hopefully get better with the new scripting engine, but even that will not be able to totally resolve the issue. To help people get over this phase, rsyslog offers a myriad of diagnostics. Whenever something is wrong, it logs at least one message telling you so (and often a couple of them, helping to identify the culprit).
With some frustration, however, I begin to see that many people never see these diagnostics. Many people try a configuration and just notice that it does not work. They never look at rsyslog's own detailed error messages (and some do not even write them anywhere, so they actually have nothing to look at ;)). So it has become quite common that for questions raised on the forum, we go to great lengths through the process of obtaining (large) debug logs, just to see that the debug log contains the otherwise-ignored error message that explains it all.
A (too) simple approach would be to blame users: why the heck don't they pay attention to their system setup? Besides being lame to blame users, it doesn't help solve the issue. The root cause seems to be that people tend not to know where to look for help. One approach is to improve documentation. I've gone some length on that, but what gets in our way is that people (including me ;)) tend to ignore documentation until there is no way around it (at which point, almost always too late, the foolishness of that approach is proven ;)).
An additional complexity is that people need to have at least some working configuration, including permissions, to make rsyslog record its error messages. A while ago, I thought about an internal message buffer, for error messages and maybe others, that can be viewed via a web browser (this implies a simple http server inside rsyslogd). So folks with problems could simply point their web browser at rsyslogd and see which diagnostics exist. This solves at least the problem of recording and finding the messages. It does not solve the issue of people not knowing to do that, but it improves the situation: it is easier to tell someone "just point your web browser to" than to instruct people on how to review log files or create and send debug logs.
Up until now, I have stayed back from this approach because of security concerns I have. Such an http server, especially if it also enabled viewing some of the live log data and live state information, could be an ideal tool for attackers. Of course, I can disable it by default and limit its features. But that is counter-productive in regard to the simple troubleshooting case. So I stayed back from it.
Now, seeing that the support problem (and the pain people experience) is becoming worse, I have taken another look at this approach. I now think there is a compromise: I can create a separate plugin, one that can be loaded via a single "$ModLoad", but is not loaded by default. So the system is secure by default, but it is easy to instruct people to activate advanced diagnostics capability. It is obviously also easy to disable it, but what if people forget (and we tend to, don't we?)? That would leave the attack vector intact once it was enabled. I think I now have a cure: what if the plugin automatically disables itself after a given period of time? Looking at the use case, it would be sufficient for most cases if diagnostics are available for 10 minutes after rsyslogd restart, but not any longer. I think I will now take this route. While it leaves the attack vector open, it limits the risk to a time frame, which usually is very short compared to overall runtime. And, after all, this is just a second safety measure. In the first instance, people should disable diagnostics once they no longer need them. Enabled diagnostics probably warrant a (startup) log message in themselves, so someone who cares to get an error-free startup should not forget about that (and those who don't care don't deserve any better, but still have a safety belt...).
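The time-limited idea boils down to a simple check at every diagnostics request: serve only within a grace window after startup. Here is a minimal sketch of just that concept (names and the 10-minute default are illustrative; this is not rsyslog's actual plugin interface):

```python
import time

class DiagnosticsWindow:
    """Serve diagnostics only for a limited time after startup."""

    def __init__(self, lifetime_s=600.0, now=None):
        # Record startup time; a monotonic clock avoids wall-clock jumps.
        self.started = time.monotonic() if now is None else now
        self.lifetime_s = lifetime_s   # 10 minutes by default

    def is_open(self, now=None):
        """True while diagnostics may still be served."""
        now = time.monotonic() if now is None else now
        return now - self.started <= self.lifetime_s

# Simulated clock values make the behavior easy to see:
win = DiagnosticsWindow(lifetime_s=600.0, now=0.0)
print(win.is_open(now=300.0))   # True: within the 10-minute window
print(win.is_open(now=601.0))   # False: window closed, plugin disabled
```

A request handler would simply refuse to answer once `is_open()` turns false, which mitigates the forgotten-to-disable case.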
I think I will try to implement a testing plugin in the current devel branch. This was unplanned work, and obviously pushes away other work. In the long term, however, I think it is very important to help folks get up and running easily, so changing the schedule to solve that need is justified. For my testing purposes, though, I will not start with an http server but with something much simpler - just to grasp the essence of this approach.
I will reproduce my posts/definitions here in this blog entry. To save me some time, I more or less copy and paste my text to this page. I think it should be sufficiently self-contained to grasp the idea. I reproduce my thoughts for two reasons: first of all, I would like to have a handy reference to my view of the logging world (so that I can refer anyone who asks to this page). Secondly, I think it is useful for all those of you who dig a bit deeper into rsyslog, phpLogCon and the rest of the MonitorWare line. All of this software is built around the definitions I am giving here.
Please note that I may update this page when I write new content. This helps to keep things tidy together. For you, unfortunately, this means you need to check this page from time to time...
As always, feedback is highly appreciated. You may either comment on this post or mail me.
So here now comes the "meat":
I actually tend to agree that an event is *always* *associated* with a state change. The state change, however, may be subtle and not always obvious.
Let's take this post excerpt as an example:
some logs which don't indicate a state change such as login failures,
At first, a login failure seems not to have changed any state. After all, nobody was logged in before the event and nobody is logged in after it. HOWEVER, here our view is just not granular enough.
Let me take the position of the login machinery. That machine obviously has states for "awaiting login", "authentication requested", "auth success" and "auth failure" (among many others). What happens in the failed login case is that the machine transitions between these states:
"awaiting login" --> "auth req" --> "auth failure" --> "awaiting login"
Obviously, state in this subsystem has changed three times. An upper level (e.g. a user process creator) may not have seen a state change, because the transitions did not request one. So at the upper level, we have no state change.
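The transitions above can be sketched as a tiny state machine in which every transition emits an event (the state names come from the text; the code itself is just an illustration, not any real login subsystem):

```python
# Minimal sketch of the login state machine described above.
# Each state transition emits an event; a failed login therefore
# yields three events even though the "upper level" sees no change.

events = []

def transition(machine, new_state):
    """Record the state change as an event and update the machine."""
    events.append((machine["state"], new_state))
    machine["state"] = new_state

login = {"state": "awaiting login"}

# A failed login attempt:
transition(login, "auth req")
transition(login, "auth failure")
transition(login, "awaiting login")

print(len(events))  # three state changes -> three reportable events
```

Note that after the three transitions the machine is back in "awaiting login", which is exactly why the upper level sees no change.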
Depending on the granularity of logging we would like to have, we need to look at state changes at different levels. If I am interested in security logging, I most probably should look at the login machinery. And there, I see three transitions. As such, I may report three different events.
In a somewhat more abstract form, I even think an event is *always* bound to a state change. If we look closely enough at an entity, and this entity does not change state, what can we then report? Is it noteworthy to report that entity x remains in the same state as it had a while ago? And even if so, couldn't one say that the state of x has changed in a subtle way - time needs to be considered part of the entity's "state set".
With the "state set" I mean anything that makes up and influences the state of the entity. It includes all properties that differentiate between different states. The state set is obviously defined by the entity, but also by the needs of the observer (so the same entity may have different state sets if different observers look at it). For a given observer (use case) and a given entity, there is exactly one state set, consisting of mutually exclusive states and a set of properties that identify these states.
In this point of view, an event is actually another word for a state transition. If there is no state transition, there is no event. And why should there be one? Let's assume the use case requires tracking of (identical) states over time. As such, elapsed time is part of the state set. Now let's assume all other properties of the state set remain identical, but time elapses. We still have a state transition, because the state set changes between time "then" and time "now". This justifies an event. If, however, the use case does not require tracking of continuity, elapsed time is not part of the state set. In the same example as above, we now do not have a state change (all other properties remain identical) and consequently no event happens. This may sound like loss of information, but we defined that the observer is not interested in elapsed time. As such, from the POV of this observer, nothing happened. So we are correct in not raising an event.
From this I conclude:
a) there is no such thing like an "absolute event" - an event is always relative to the needs of the observer
b) thus different observers may have different perception of what justifies raising an event
c) what the observer is interested in is defined in the "state set"
d) an event is generated when a change in the "state set" is detected
Trying (and probably failing) to get a short grip on this, I would say "An event is a change in observer-relevant properties of an entity". Which, of course, leads to the need to define observer, observer-relevant, property and entity...
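One way to make points a) through d) concrete: an observer defines which properties form its state set, and an event is raised only when one of those properties changes. A sketch, with property names I made up for illustration:

```python
# Sketch: the same entity, two observers with different state sets.
# An event is raised only when an observer-relevant property changes.

def raise_events(old, new, state_set):
    """Return events for properties in the observer's state set that changed."""
    return [(p, old[p], new[p]) for p in state_set if old[p] != new[p]]

# Two snapshots of the same (hypothetical) entity:
old = {"logged_in": False, "failed_attempts": 2, "elapsed_s": 10}
new = {"logged_in": False, "failed_attempts": 3, "elapsed_s": 20}

# A security observer cares about failed attempts (but not elapsed time):
security_set = ["logged_in", "failed_attempts"]
print(raise_events(old, new, security_set))  # one event

# An availability observer cares only about login state:
availability_set = ["logged_in"]
print(raise_events(old, new, availability_set))  # no event at all
```

The same pair of snapshots produces one event for one observer and none for the other, which is exactly point b).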
I haven't looked at CEE enough to know if there are definitions of these (or similar) terms. Actually, when I looked at CEE a very long time ago, I have to admit, it didn't look very appealing. But I have now subscribed to the mailing list and will see if I can contribute to (or use ;)) the work.
An interesting side-note of a) is that one may (try to) define a superset of state sets of all possible observers. Actually, I think this is what auditing people try to define when they design audit points. The knobs that trigger specific events provide the ability to limit the actual (running) state set to the subset that the configuring observer is interested in. And if you ever wanted to audit anything but did not find an audit point in the software for it, auditing designers made a wrong assumption on the required state superset ;)
> -----Original Message-----
> From: email@example.com [mailto:loganalysis-
> firstname.lastname@example.org] On Behalf Of Anton Chuvakin
> Sent: Friday, July 25, 2008 6:14 AM
> To: email@example.com
> Cc: firstname.lastname@example.org
> Subject: Re: [logs] How to define Log, Event, and Alert?
> Good point. So:
> Event = something that happened on a system.
> Log = a TIMED record of the above occurence.
I'd replace TIMED by "sequence identifier", with sequence identifier defined as something that is monotonically increasing. A timestamp is an object that we think of as a natural example of a monotonically increasing function. HOWEVER, if we look at existing technology, this is not always the case. In fact, it is more often NOT the case than it is... If we have two systems a and b, and these systems do not have time synchronized, and a system c which is the event collector (and collects only events from a and b), then c may record timestamps inside its log that do not monotonically increase. For example, it may record:
02:00:00 event a1
01:00:00 event b1
02:01:00 event a2
01:01:00 event b2
Of course, this is still a TIMED record of occurrences. However, in the sense it is used here, "TIMED" includes a sense of temporal order (at least to me). In the above log, we may not have the correct temporal order. We may be able to reconstruct it by sorting on the timestamp. That would be a valid approach if the timestamps are indeed correct (compared to universal time). But if a and/or b has incorrect time, we would create a wrong temporal order. Indeed, in this sense the monotonically increasing identifier of the log in question would actually not be the timestamp but rather the *sequence of recording*, kind of a meta-property not directly contained in the property set of the individual event record (but rather obtained from its relationship to its predecessor in the log file).
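The example above can be reproduced in a few lines: sorting the collector's log by timestamp interleaves the records, but if a or b has a wrong clock, that "repaired" order is wrong too. The records below are the made-up ones from the example:

```python
# Collector c records events from a and b in arrival (recording) order.
# a's and b's clocks are not synchronized.
log = [
    ("02:00:00", "event a1"),
    ("01:00:00", "event b1"),
    ("02:01:00", "event a2"),
    ("01:01:00", "event b2"),
]

# Timestamps in the log do not increase monotonically:
stamps = [ts for ts, _ in log]
print(stamps == sorted(stamps))  # False

# Sorting by timestamp yields *a* temporal order -- but only the correct
# one if a and b had correct clocks, which we cannot know from the log:
by_time = [msg for _, msg in sorted(log)]
print(by_time)  # ['event b1', 'event b2', 'event a1', 'event a2']

# The only property we can trust is the sequence of recording itself,
# a meta-property given by each record's position in the log:
sequenced = list(enumerate(log))
print(sequenced[0])  # (0, ('02:00:00', 'event a1'))
```

The `enumerate` step makes the implicit meta-property (position in the file) explicit, which is exactly the "sequence identifier" the text argues for.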
Now let's assume a log without a timestamp. These things happen, e.g. in debug logs (and, all too often, in others I have seen).
If we define
> Log = a TIMED record of the above occurence.
such a "log" would obviously not be a log, because it does not fulfill the requirement to be timed.
If a log instead is "a record of events with a sequence identifier", that problem does not exist. The sequence identifier in that case would be the derived property I mentioned above.
The question remains whether such a definition is actually useful. The sequence identifier is obviously something with very vague semantics. Its semantics depend on the observer as well as on the correctness of the "sequence identifier generating function" on all systems in question.
Let's get back to the simple case of timestamps: as outlined above, the semantics of timestamps depend on time sync. Even if there is NTP time sync, timestamps (with reasonable precision) are always questionable. They are approximate, even on the same system. With standard syslog timestamps (second precision!) the problem is easy to see: one may receive hundreds of events within the same second. So even if time is correct, an observer is unable to detect any order of events. If looking just at the timestamps, one must conclude that all events happened at once. Looking at the semantics of the messages, one can most often also conclude that this is impossible (e.g. how can a file be deleted before it is created?). Obviously, the timestamp alone is never sufficient to detect the order of events, even on a single system. Granted, for practical purposes a high-resolution timestamp (with good time synchronization) is most often a sufficiently good approximation of the time an event happened. But do you really trust it? ...always? Have a look at your own correlation engines: do they work on pure timestamps - or do they include some other properties, like the order of event log records inside the log?
Now let me try to define what I think a log actually is:
An EVENT is a set of properties that describe a state change (in the sense I described state change yesterday). The contents of this set depend on the entity whose state changes as well as on the observer. [so it may actually be a set of two sets: entity-related properties and observer-related properties]
An event is generated when a state changes.
An EVENT RECORD is the physical representation of an event inside an event log. It is a set of (at least) two sets: the event's set itself as well as a set of meta-properties which describe the event record (e.g. its size, its order inside the event log, its format, ...). I think this has a tendency to contain observer-related properties, too. Some meta-properties are obviously event-log related (like the sequence of recording).
Finally, an (EVENT) LOG is a set of event records. Please note that according to this definition, the stream of syslog messages flowing from system a to system b is also an event log - not just log files or database tables. So an event log does not necessarily need to be persistent.
There is no inherent order inside event logs. In practice we experience a "natural order" (one record begins before another), but that actually is a meta-property of the event record. So we can order event records based on their meta-properties. It just happens that a text log is physically ordered by the sequence meta property.
[side-note: if we look at a UDP based transmission of an event log, we notice that the sender's perception of the event log is different from the receiver's: the physical order of event log records may be different, as well as their absolute number (if UDP discards messages)]
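The three definitions can be rendered as nested data structures. The field names below are my own, purely illustrative ones:

```python
# EVENT: properties describing a state change, split into the
# entity-related and observer-related sub-sets described above.
event = {
    "entity": {"subsystem": "login", "from": "auth req", "to": "auth failure"},
    "observer": {"severity": "warning"},
}

# EVENT RECORD: the event's set plus meta-properties of the record
# itself (size, sequence of recording, format, ...).
record = {
    "event": event,
    "meta": {"size": 128, "sequence": 1, "format": "syslog"},
}

# EVENT LOG: a set of event records -- a file, a database table, or a
# message stream in flight. It has no inherent order; any order comes
# from meta-properties such as the recording sequence.
event_log = [record]
print(event_log[0]["meta"]["sequence"])  # 1
```

A UDP transmission would simply be an `event_log` where the receiver's copy may lack some records and carry different `sequence` values than the sender's, which is the side-note's point.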
Finally, different log files, database tables, and log stores in general are just partitions of the event log set. This also explains why we need to merge different log files if we would like to have a broader view of the events that happened inside our system: we need to re-unite the partitions that contain things of interest for our needs, so that we build a sufficiently large set for our analysis algorithm (that set being a complete partition of the log set in regard to what we are interested in).
[side-note: if we need to unite such sets, we often experience the problem that we do not have a truly identifying function for event log records, which makes it often really hard to create real supersets, where no two elements are the same]
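Set-wise, merging log partitions is a union; the side-note's problem is that without a true identity function for event records, a plain union can collapse distinct events, while plain concatenation keeps duplicates. A sketch with made-up records:

```python
# Two log partitions (e.g. two files) covering the same system.
log_a = [("02:00:00", "sshd", "login failed")]
log_b = [("02:00:00", "sshd", "login failed"),
         ("02:00:01", "sshd", "session opened")]

# Treating records as a set unites the partitions, but two *distinct*
# events that happen to serialize identically are collapsed into one:
united = set(log_a) | set(log_b)
print(len(united))  # 2 -- possibly wrong if the identical records
                    # were really two separate events

# Keeping everything preserves possible duplicates instead:
concatenated = log_a + log_b
print(len(concatenated))  # 3
```

Neither answer is "the" correct one; only a genuine identifying function per record (which syslog data usually lacks) would let us build a true superset.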
Again, I see that my definitions are not really one-liners, but I think if we try to use too-simple definitions, we end up incorrectly describing the objects we are talking about. That will most probably lead us to wrong conclusions, and, in a very practical sense, to a) wrong implementations and b) wrong actions taken on the conclusions we think we have drawn (but were invalid).
Tuesday, July 22, 2008
Special thanks go to Michael Biebl. He maintains the rsyslog package on Debian. Without his hard work and very solid understanding of the Debian platform, rsyslog would never have been able to be ready for prime time on that platform.
For those who enjoy the good news as much as I do, here are two relevant links:
While it is a joy to read about these developments, it is also an obligation for the rsyslog project to live up to the trust that is put in it. We are well aware of this and will keep rsyslog in good shape for all who depend on it.
Monday, July 21, 2008
To me, it is an interesting experiment. It took me quite some effort to (try to) phrase things in a very generic way, (mostly) without any technical vocabulary. I am now looking forward to the feedback I receive (if any). It will be most interesting to see if I can capture some momentum on this blog. If you understand German, you may want to have a look yourself - if only to see if I messed up ;)
Thursday, July 17, 2008
Thinking about it, this is actually a buffer that performs a delayed write, but while the data is already accessible to the user. Delayed writing as such may be useful in its own right. Given that rsyslog already has queues, it may be worth extending them with a mode where data is kept in the queue until either a configured number of messages is queued or a configured period of time has elapsed. That could be used to generate more efficient burst disk writes. For example, we could say that log messages be flushed only if there are at least 10 queued or at least one is older than one minute.
Note that this is a use case quite different from what ramlog addresses. Actually, they could even be combined. If you like this idea, please tell me, so that I can consider implementing it.
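The flush rule just described (at least 10 messages queued, or the oldest entry older than one minute) could be sketched like this. This is only an illustration of the idea, not rsyslog's actual queue code:

```python
import time

class DelayedWriteBuffer:
    """Batch messages and flush on size or age, as described above."""

    def __init__(self, writer, max_msgs=10, max_age_s=60.0):
        self.writer = writer          # callable performing the burst write
        self.max_msgs = max_msgs
        self.max_age_s = max_age_s
        self.buffer = []              # list of (enqueue_time, message)

    def enqueue(self, msg, now=None):
        now = time.monotonic() if now is None else now
        self.buffer.append((now, msg))
        self.maybe_flush(now)

    def maybe_flush(self, now):
        if not self.buffer:
            return
        oldest = self.buffer[0][0]
        if len(self.buffer) >= self.max_msgs or now - oldest > self.max_age_s:
            self.writer([m for _, m in self.buffer])  # one burst disk write
            self.buffer.clear()

# Simulated clock values; small max_msgs to keep the demo short.
batches = []
buf = DelayedWriteBuffer(batches.append, max_msgs=3, max_age_s=60.0)
buf.enqueue("msg1", now=0.0)
buf.enqueue("msg2", now=1.0)
buf.enqueue("msg3", now=2.0)   # third message triggers a size flush
print(batches)                 # [['msg1', 'msg2', 'msg3']]
buf.enqueue("msg4", now=3.0)
buf.enqueue("msg5", now=70.0)  # oldest (msg4) is now > 60s old -> age flush
print(batches[-1])             # ['msg4', 'msg5']
```

In a real daemon the age check would also run from a timer, so a lone message is not stuck waiting for a successor to arrive.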
Wednesday, July 02, 2008
I have not posted any work log in a long time. Even worse, I have not fully kept up with the log. So what's going on? Is rsyslog development pace going down?
No, definitely not! I am just currently concerned with helping users, doing some labs myself, working a bit on phpLogCon and thinking about how to best do the scripting engine.
Oh yes, the scripting engine... I am promising and promising it. The reason it takes so long is that I want to have a perfect solution on the first shot (well, as close to perfect as possible...). And I want to keep it backwards-compatible to what we currently have. This involves a lot of things under the hood, and it requires very careful planning to do it right. In fact, I am even re-evaluating some theoretical concepts for the scripting language, maybe it makes sense to adopt a few more thoughts. But, even though you do not see it, I am still very focussed on it. It is the next big thing to happen.
Just to give you an example of the subtle problems: how to *best* handle the "*.*" PRI selectors that appear at the start of almost all filter rules in current configs. If that is not mapped properly, it can have an enormous performance hit. To support it, I may need to do a constant folding phase in my interpretative language, something that I so far did not consider necessary. And I am still trying to avoid this (it offers little benefit for almost all other cases).
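Constant folding here would mean recognizing at config-parse time that a "*.*" selector always matches, so no per-message evaluation is needed. A toy illustration (not rsyslog's actual engine, and simplified to exact matching rather than syslog's priority-or-higher semantics):

```python
# Toy filter "compiler": a "*.*" PRI selector matches every message,
# so it can be folded into a constant at config-parse time instead of
# being evaluated once per message.

def compile_filter(selector):
    if selector == "*.*":
        return lambda msg: True          # folded: no runtime work at all
    facility, severity = selector.split(".")
    return lambda msg: ((facility == "*" or msg["facility"] == facility)
                        and (severity == "*" or msg["severity"] == severity))

match_all = compile_filter("*.*")        # the folded fast path
mail_info = compile_filter("mail.info")  # a normal, evaluated filter

msg = {"facility": "mail", "severity": "info"}
print(match_all(msg))   # True
print(mail_info(msg))   # True
print(mail_info({"facility": "cron", "severity": "err"}))  # False
```

The payoff is that the most common selector costs nothing per message, which is why the decision matters so much for performance.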
With that said, on to the (partial) work log:
Past day's rsyslog work log:
- created autoconf support for a rsyslog testbench
- created the first very rough beginnings of a RainerScript testbench
- disabled warnings caused by third-party libraries
- enhanced gtls driver to support fully async operation (there was
  a bug before when gtls issued EAGAIN without delivering any data)
- bugfix: comments after actions were not properly treated. For some
actions (e.g. forwarding), this could also lead to invalid configuration
applied to beta branch
... something missing, see git log ;) ...
- released 3.19.8
But a tag is only as good as the information that it helps to find. Consequently, I have started to describe error cases inside the knowledge base's event repository:
So far, there is only a limited set of messages available (to phrase it politely ;)), but I plan to increase it over time. Note that there is an interactive feature where questions about a message can be posted directly to the forum. I hope this is useful.
If you run into an error message that is not yet described, let us know and we'll add an entry. In the long term, the new knowledge base section should be able to solve most problems.