Make Maven treat warnings as errors

I like compiler warnings and I think a nicely linted code base really improves code hygiene and keeps standards high. Rather than relying on some post-compile script or a tool like Sonar, I prefer the compiler to throw an error when it encounters something that we've defined as a code smell. In other words, I want it to treat warnings as errors.

javac can do exactly that with the -Werror flag, but figuring out how to use it in Maven took me a little while. To enable it in your Maven builds, use the following XML:

...
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.1</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
        <compilerArgs>
          <arg>-Werror</arg>
          <arg>-Xlint:all</arg>
        </compilerArgs>
      </configuration>
    </plugin>
  </plugins>
</build>
...
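To see the kind of code this catches, take a raw-type assignment: javac flags it with an [unchecked] warning, and with -Werror in place that warning fails the build. The class below is a made-up example:

```java
import java.util.ArrayList;
import java.util.List;

public class Unchecked {
    public static void main(String[] args) {
        // Raw ArrayList assigned to List<String>: javac reports an
        // [unchecked] conversion warning here, which -Werror turns
        // into a compile error.
        List<String> names = new ArrayList();
        names.add("maven");
        System.out.println(names.get(0));
    }
}
```

Without the flags this compiles (with a warning) and prints "maven"; with the configuration above, mvn compile refuses to build it.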



Find out who the maintainer of a Debian/Ubuntu package is

Run the following command:

dpkg-query -W -f='${Maintainer}' coreutils

Obviously, replace coreutils with the package you're interested in.



Removing 200s from an Apache access log

At work we use Splunk to do log analysis of our frontend Apache, which acts as a simple proxy to the application servers. I quite like Splunk, but we were hitting our quota quite frequently once we started including our access log in the indexed files.

We noticed that the vast majority of the entries in the access log had a 200 response status. We're not actually that interested in all these 200s, and filtering them out would greatly reduce our Splunk usage. We could have kept the normal access log and had a cronjob grep out all the 400s and 500s, but that didn't seem very elegant. I wanted a solution without an intermediate step.

Apache has a feature called conditional logging; however, it can't be used to filter by response code.

I then found out about piped logs. The idea is that you pipe the log to another process, which in turn can do any processing and filtering you want.

I chose a combination of grep and Apache's rotatelogs. To use it, put the following in your Apache configuration:

LogFormat "%s %h %l %u %t \"%r\" %b" splunk
CustomLog "|stdbuf -o0 /bin/grep --invert-match '^200' | /usr/sbin/rotatelogs /var/log/apache2/splunk-access.log 86400" splunk

The first line defines a log format with the nickname splunk. In this format we put the response code (%s) at the beginning of each log line so we can grep for it easily.

The second one is where the action starts. The pipe (|) indicates that it is a piped log. Next we use stdbuf -o0 to disable grep's output buffering, which otherwise makes this setup a pain to test. You can skip this in production if you want to.

Next we hand over to grep and remove all lines that start with the string 200.
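You can sanity-check the grep stage in isolation by piping a couple of fake log lines (fabricated for this test) through it:

```shell
# Two fabricated lines in the splunk format above: status code first.
printf '200 10.0.0.1 - - "GET / HTTP/1.1" 512\n404 10.0.0.2 - - "GET /missing HTTP/1.1" 196\n' \
  | grep --invert-match '^200'
# only the 404 line survives
```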

Finally we pass the data on to rotatelogs, which rotates the log once a day. Read the rotatelogs manual for many more configuration settings.



ack2 package for Ubuntu 12.04

Ubuntu 12.04 Precise Pangolin ships with version 1.96 of ack, which has been superseded by the much-improved version 2. Read the ack homepage if you want to find out what is new in ack 2.

Since I'm now using Ubuntu on my work computer I have built a Debian/Ubuntu package and uploaded it to a PPA. This makes installing ack2 really easy on Ubuntu 12.04.

If you want to install it do the following:

sudo add-apt-repository ppa:leonard-ehrenfried/ack2
sudo apt-get update
sudo apt-get install ack-grep

Afterwards, ack will be available as ack-grep (there is another program in the Ubuntu repositories using the name ack).

Since I share my .bashrc between OS X and Linux I've aliased it as follows:

if [ "$(uname)" = "Linux" ]
then
  alias ack="ack-grep"
fi



Unit testing Javascript UIs

NB: Javascript is quite popular on the server-side now as well. This article however concerns itself exclusively with JS in the browser.

In the last 5 years Javascript has come a very long way. When I started out as a (browser) Javascript developer people smiled when I called myself that. I was patronised and thought of as a lesser programmer, a pixel pusher. To be fair there was some truth to it as widgets often broke.

Times have changed. Interactive websites are at the heart of a lot of businesses and no longer an afterthought tacked on after the "real" development has been completed.

But still, there are many holdouts who think of browser programming with Javascript as a foolish activity and not a serious developer's job.

Today I chatted to a friend about this and have started to develop a theory as to why that might be. I think it partly has something to do with how Javascript testing used to be done in the browser.

JS and HTML

In traditional web apps it worked mostly like this: the server rendered the HTML, and the Javascript was written separately and attached to that markup once the page had loaded.

This meant that the widgets the frontend developers built were really hard to test, because HTML and JS were largely separated. The widget expected a certain kind of HTML to operate on, and if that structure wasn't there, the widget would simply not work.

Activating one of these widgets would often look something like this:

$(".my-widget").datepicker("activate");

and this would assume a DOM structure looking something like this:

<div class="my-widget">
  <input type="text">
  <div class="dropdown"></div>
</div>

Components

But a new style of programming the DOM, fuelled by the rise of client-side MVC frameworks like Backbone, came into being: the UI became parcelled up into components, views, or whatever you want to call them. This meant that a frontend developer no longer had to build a certain HTML structure and then call some jQuery plugin on it.

Instead, the HTML isn't rendered on the server; the component you are trying to build brings its own HTML. Rather than the above, you would do something like this:

var view = new DatepickerView();
var rendered = view.render();
$(".my-widget").append(rendered);

Can you spot the difference here? The component itself brings its own HTML to the table instead of manipulating some globally available DOM.

How is this different?

If we write our UI widgets in this style, testing becomes rather easy. Before, if you wanted to really test your JS you had to test both the HTML the server produced and the widget code that operated on it. That meant somehow building up the entire context of the server-side templating language and rendering a full response.

Whilst before you were never quite sure that the HTML you produced matched the expectations of the JS, it is now rather easy to assert against the widget:

rendered.find("button").click();
expect(view.clickCount).toBe(1);

This new-found engineering rigour means that JS code can be tested as easily as server-side code, if not more easily. What made the difference is the coupling of the JS with the HTML it operates on (and produces). In my opinion, that's the qualitative distinction.

In fact, if all your server produces is JSON, asserting against that becomes easy too. You no longer have to do DOM gymnastics to find out whether your server generated the correct response: just parse the JSON and assert against it.
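With a made-up JSON body, for example, the assertion is a one-liner per field:

```javascript
// A fabricated JSON response from the server.
var body = '{"user": {"name": "Ada", "admin": false}}';

// No DOM gymnastics: parse the JSON and assert against the data.
var response = JSON.parse(body);
console.assert(response.user.name === "Ada");
console.assert(response.user.admin === false);
```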

Conclusion

Nowadays, JS-heavy projects are thought of as API clients to a server that produces JSON. This has enabled a style of frontend development that takes raw data as its input, not data mixed with a sprinkling of presentation layer. I think this is a good thing. Frontend developers are now writing programs, apps, and no longer just spicing up your <select> elements.
