Saturday, November 18, 2017

Automating Coverity Scan with a complex TravisCI build matrix

This is how you can automate Coverity Scan using Travis CI - especially if you have a complex build matrix:
  • create an additional matrix entry that you use exclusively for submission to Coverity
  • make sure you use your regular git master branch for the scans (so you can be sure you scan the real thing!)
  • schedule a Travis CI cron job (daily, if permitted by project size and Coverity submission allowance)
  • in that cron job, on the dedicated Coverity matrix entry (see the sketch after this list):
    • cache Coverity's analysis tools on your own web site and download them from there during Travis CI VM preparation (Coverity doesn't like too-frequent downloads)
    • prepare your project for compilation as usual (autoreconf, configure, ...) - ensure that you build all source units, as you want a full scan
    • run Coverity's cov-build tool according to the scan instructions; it captures the build into the cov-int directory
    • tar the resulting cov-int directory
    • use the "manual" wget upload capability (documented on the Coverity submission web form); make sure you keep your Coverity token in a secure Travis environment variable
  • you will receive scan results via email as usual - if you like, automate email-to-issue creation for newly found defects
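
To make the flow concrete, here is a minimal sketch of such a dedicated cron-job step as a shell script. It is illustrative only, not rsyslog's actual scripts: the DO_COVERITY variable, mirror URL, email address and project name are placeholders, and COVERITY_SCAN_TOKEN is assumed to be a secure Travis environment variable. The upload is shown with curl; the Coverity submission form documents the exact command for your project.

#!/bin/bash
set -e

# run only in the daily cron job, and only on the dedicated matrix entry
[ "$TRAVIS_EVENT_TYPE" = "cron" ] || exit 0
[ "$DO_COVERITY" = "YES" ] || exit 0

# fetch the analysis tools from your own mirror, not from Coverity
wget -q https://example.com/mirror/cov-analysis-linux64.tar.gz
tar xzf cov-analysis-linux64.tar.gz
export PATH="$PWD/cov-analysis-linux64/bin:$PATH"

# prepare the project and capture a full build into the cov-int directory
autoreconf -fvi
./configure    # enable all options you want scanned
cov-build --dir cov-int make -j2

# package the capture and submit it to Coverity Scan
tar czf project.tgz cov-int
curl --form token="$COVERITY_SCAN_TOKEN" \
     --form email=maintainer@example.com \
     --form file=@project.tgz \
     --form version="$(git rev-parse --short HEAD)" \
     --form description="daily cron scan" \
     "https://scan.coverity.com/builds?project=your-project"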
Actual sample code: rsyslog uses this method. Have a look at its .travis.yml (search for "DO_COVERITY"). The scan run can be found here (search for "DO_CRON"): a script that calls another one, which ultimately performs the steps described above. These two scripts are called from .travis.yml.


Why so complicated? After all, there is a Coverity addon for Travis CI. That's right, and it probably works great for simple build matrices. Rsyslog uses Travis quite intensely. At the time of this writing, we spawn 8 VMs for different tests and test environments plus one for cron jobs (so 9 in total). With the Coverity addon, this would result in 9 submissions per run, because the addon runs once per requested VM. Given that rsyslog currently has an allowance of 3 scans per day, a single run would use up our entire allowance and attempt 6 more submissions, which would not succeed. Even if it worked, it would be a big waste of Coverity resources and pretty impolite, both to Coverity and to fellow users of the free scan service.

With the above method, we have full control over what we submit to Coverity and how we do it. With our current scan allowance, we use exactly one scan per day, which leaves two for cases where it is useful to run interim scans during development.

Why do you use the master branch for the scan? Coverity recommends a dedicated branch.
In my opinion, software quality assurance works well only if it is automated and applied to what is actually being "delivered". In that sense, real CI with one Coverity run per pull request (before we merge!) would be best. We can't do this due to Coverity allowance limitations. The next best thing is to do it as soon as possible after merge (daily in our case) and make sure it is actually run against the then-current state. Introducing a different branch just for scan purposes sounds counter-productive:

  • you either update it to the full master branch immediately before the scan, in which case you can just as well use master directly
  • or you update the "Coverity branch" only on "special events" (or even manually), which runs counter to the idea of having things checked as early as possible. Why would you withhold scanning something that you merged to master, aka "this is good to go"? Sounds like an approach to self-cheating to me...
In conclusion, scanning master is the most appropriate way to automatically check your project. Be bold enough to do that.

Does your approach work for non-Travis, e.g. Buildbot?
Of course. It works perfectly with any solution that permits you to run scheduled "builds". We also use Buildbot for rsyslog and could have placed the scan there; we decided to use Travis for automating the scan simply because of its ease of use. So the approach described here is universal. The only thing you really need is that the scan is initiated automatically at scheduled intervals - otherwise you could simply use the manual submission.

Wednesday, October 25, 2017

Time for a better Version Numbering Scheme!

The traditional major.minor.patchlevel versioning scheme is no longer of real use:

  • users want new features when they are ready, not when a new major version is crafted
  • there is a major-version-number-increase fear in open source development, thus major version bumps sometimes happen rather randomly (see Linux, for example)
  • distros fuel the major-version-number-increase fear because they are much more hesitant to accept new packages with an increased major version
In conclusion, the major version number has become cosmetic for many projects. Take rsyslog as an example: when we switched rsyslog to scheduled releases, we also switched to just incrementing the minor version component, which now actually increments with each release - but nothing else does. We still use the 8.x.0 scheme, where x is more or less the real version number. We keep that old scheme for cosmetic reasons, aka "people are used to it".

Some projects, most notably systemd, which probably invented that scheme, are bolder: they have switched to a single ever-increasing integer as the version (so you talk of e.g. version 221 vs. 235 of systemd). This needs some adaptation on the user side, but seems to be accepted by now.

A different approach, known from Ubuntu, is "wine-versioning": just use the date. So we have 14.04, 16.04, 17.04 representing year and month of release. This also works well, but is still unusual for individual projects.

What made me think about this problem? When Jan started his log anonymizer project, we thought about how it should be versioned. We concluded that a single ever-incrementing version number is the right path to take. For this project, and most probably for all future ones. Maybe even rsyslog will at some point make the switch to a single version number...

Thursday, August 24, 2017

Busy at the moment...

Some might have noticed that I am not as active as usual on the rsyslog project. As this looks set to continue at least for the upcoming couple of weeks, I'd like to give a short explanation of what is going on. Around the beginning of June, I got involved in a political topic in my local village. It's related to civil rights, and it really is a local thingy, so there is little point in explaining the complex story. What is important is that the originally small thing grew larger and larger, and we now have to win a kind of election - which means rallies and the like. To make matters a little worse (in regard to my time...), I am one of the movement's speakers and also serve as subject matter expert to our group (I have been following this topic for over 20 years now). To cut a long story short, that issue has increasingly been eating up my spare time, and we are currently at a point where little of it is left.

Usually, a large part of my spare time goes into rsyslog and related projects. Thankfully, Adiscon funds rsyslog development, so I can work on it during my office hours. However, during these office hours I am obliged to work on paid support cases and also a limited number of things not directly related to rsyslog. Unfortunately, August (and early September) is the main holiday season in our region. As such, I have few co-workers available with whom I could share rsyslog work. And to make matters "worse", I need to train new folks to get started with rsyslog work - one of them is doing a summer internship, so I need to work with him now. While new folks are always a good thing to have on a project (and I really appreciate it), this means a further reduction of my rsyslog time.

The bottom line is that, due to all these things together, I am not able to react to issues as quickly as I would like to. The political topic is expected to come to a conclusion -one way or the other- by the end of September. Due to personal reasons, I will not be able to work at all in early October (a long-planned out-of-office period), but I hope to be fully available again by mid-October. And the good news is that we will have a somewhat larger team by then, because Jan, who does the internship, will continue to work part-time on the project. Even better: Pascal will be with Adiscon for the next few months on a full-time basis and will be able to work considerable hours on rsyslog.

So while we have a temporary glitch in availability, I am confident we'll recover from it in autumn, and we have very exciting work upcoming (for example, the TLS work Pascal has just announced). I also have a couple of very interesting suggestions which are currently being discussed with support contract customers.

All in all, I beg for your patience. And I am really thankful to all of our great community members who do excellent work on the rsyslog mailing list, GitHub and other places. Not to forget the great contributions we increasingly receive. Looking forward to many more years of productive syslogging!

Monday, August 21, 2017

Introducing new team member

Good news: we have some new folks working on the rsyslog project. In a mini-series of two blog postings, I'd like to introduce them. I'll start with Jan Gerhards, who already has some rsyslog-related material online.

Jan studies computer science at Stuttgart University. He has worked on rsyslog occasionally for a while. This summer, he is doing a two-month internship at Adiscon. He also plans to continue working on rsyslog and logging-related projects in the future. Right now, he is looking into improving the mmanon plugin. He is also trying to improve debug logging, which I admit is my personal favorite (albeit it looks like this is the lower-priority project ;-)).

Tuesday, November 22, 2016

Would creating a simple Linux log file shipper make sense?

I am currently thinking about creating a very basic shipper for log files, but wonder if it really makes sense. I am especially unsure whether good tools already exist. Being lazy, I thought I'd ask for some wisdom from those in the know before investing more time in searching for solutions and weighing their quality.

I've more than once read that logstash is far too heavy for a simple shipper, and I've heard that rsyslog, too, is sometimes a bit heavy (albeit much lighter) for the purpose. I think that with reasonable effort we could create a tool that

  • monitors text files (much like imfile does) and pulls new entries from them
  • does NOT further process or transform these logs
  • sends the resulting data to a very limited number of destinations (for starters, I'd say syslog protocol only)
  • focuses on being very lightweight, intentionally not implementing anything complex.
Would this be useful for you? What would be the minimal feature set you need in order to make it useful? Does something like this already exist? Is it really needed, or is a stripped-down rsyslog config (like the sketch below) sufficient?
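
For comparison, here is a minimal sketch of what such a stripped-down rsyslog configuration could look like, covering roughly the feature set above - the file name, tag and target are placeholders I made up for illustration:

# create a minimal "shipper-only" config (placeholders throughout)
cat > /etc/rsyslog-shipper.conf <<'EOF'
module(load="imfile")
input(type="imfile" file="/var/log/myapp.log" tag="myapp:")
action(type="omfwd" target="central.example.com" port="514" protocol="tcp")
EOF

# run a dedicated instance in the foreground with its own pid file
rsyslogd -n -f /etc/rsyslog-shipper.conf -i /var/run/rsyslog-shipper.pid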

I'd be grateful for any thoughts in this direction.

Tuesday, August 23, 2016

rsyslog error reporting improved

Rsyslog provides many up-to-the-point error messages for config file and operational problems. These help immensely when troubleshooting issues. Unfortunately, many users never see them. The prime reason is that most distros never log syslog.* messages, so they are just thrown away and invisible to the user. While we have been trying to make distros change their defaults, this has not been very successful. The result is a lot of user frustration and fruitless support work for the community -- many things could be resolved very simply if only the error message were seen and acted on.

We have now changed our approach to this. Starting with v8.21, rsyslog by default logs its messages via the syslog API instead of processing them internally. This is a big plus, especially on systems running the systemd journal: messages from rsyslogd will now show up when you run

$ systemctl status rsyslog.service

This is the place where error messages are expected nowadays, and it is definitely a place where the typical administrator will see them. So while this change requires some config adjustment on a few exotic installations (more below), we expect it to generally improve the rsyslog user experience.

Along the same lines, we will also work on better error reporting, especially for TLS and queue-related issues, which rank high in rsyslog support discussions.

Some fine details on the change of behaviour:

Note: you can usually skip reading the rest of this post if you run only a single instance of rsyslog and do so with more or less default configuration.

The new behaviour was actually available for quite a while; it just needed to be explicitly turned on in rsyslog.conf via

global(processInternalMessages="off")

Of course, distros didn't do that by default. Also, it required rsyslog to be built with liblogging-stdlog, which many distros do not do. While our intent when we introduced this capability was to provide the better error logging we now have, that simply did not happen in practice. The original approach had the advantage of being less intrusive. The new method uses the native syslog() API if liblogging-stdlog is not available, so the setting always works (we even consider moving away from liblogging-stdlog, as we see it wasn't really adopted). In essence, we have primarily changed the default setting of the "processInternalMessages" parameter. This means that by default, internal messages are no longer logged via the internal bridge to rsyslog but via the syslog() API call (either directly or via liblogging-stdlog). For the typical single-rsyslogd-instance installation this is mostly unnoticeable (except for some additional latency). If multiple instances are run, only the "main" instance (the one processing system log messages) will see all messages. To return to the old behaviour, do either of these two things:

  1. add in rsyslog.conf:
    global(processInternalMessages="on")
  2. export the environment variable RSYSLOG_DFLT_LOG_INTERNAL=1. This sets a new default; the value can still be overridden via rsyslog.conf (method 1). Note that the environment variable must be set in your startup script (which one depends on your init system or systemd configuration); see the sketch below for one way to do it.
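
For example, on systemd-based systems a unit drop-in file is one way to set the variable. This is just a sketch, not official packaging guidance; the drop-in file name is arbitrary:

# create a drop-in that sets the variable for the rsyslog service
mkdir -p /etc/systemd/system/rsyslog.service.d
cat > /etc/systemd/system/rsyslog.service.d/internal-messages.conf <<'EOF'
[Service]
Environment="RSYSLOG_DFLT_LOG_INTERNAL=1"
EOF

# reload systemd and restart rsyslog so the setting takes effect
systemctl daemon-reload
systemctl restart rsyslog.service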

Note that in the past, even in multiple-instance setups, rsyslog error messages were mostly thrown away. So even in this case the new behaviour is superior to the previous state - at least errors are now properly recorded. This also means that even in multiple-instance setups it often makes sense to keep the new default!

Friday, May 13, 2016

rsyslog's master-candidate branch gone away

Thanks to the new, improved CI workflow, we no longer need to manually do a final check of pull requests. I have used the new system for roughly two weeks now without any problems. Consequently, I have just removed the master-candidate branch from our git (with a backup "just in case" currently remaining in the adiscon git repository).

Anyone contributing, please check the CI status of your PRs, as we can only merge things that pass the CI run. Note, though, that there is still a very limited set of tests which may falsely fail. Their number is shrinking, and I usually catch these failures relatively quickly and restart them. If in doubt, please add a comment to the PR and I'll investigate.
