Tobe expressed

Friday, February 23, 2018

WTF-coding: the case of the obscure configuration

Ever stared at a piece of code without understanding why in the world it does not work? Or why it actually works at all? I'd like to call this phenomenon WTF-coding and I've been remarkably free of it since I've been doing framework-free backend java. But now I have a new job and we use all the popular frameworks, for better and for worse. Well, really only for worse, IMO, but I will write more on that when I understand my aversion better. So far, I have discovered that the tendency to want to write frameworks is very strong because it is the ultimate intellectual masturbation. The tendency to want to use frameworks is more puzzling, but I think we are all attracted to magic to some degree and there is a powerful illusion that frameworks provide a lot of value, automagically.

We are working with JSON in Java and using Jackson, and I had a little problem where one of the fields of the main object could be a different type depending on what the object represented, as indicated by a type name in another field. So I had to work out how to configure Jackson to handle it, which turned out to be a little challenging. After an hour or so I hit upon a fruitful phrasing of the search terms and found a solution.

import java.util.ArrayList;
import java.util.List;

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.JsonTypeInfo.Id;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;

@JsonDeserialize(builder = Attachment.AttachmentBuilder.class)
public class Attachment {

    private final long id;
    private final String attachmentType;
    private final long reportId;
    private final String filename;

    public interface ExcelSheets extends List<ExcelSheet> {}

    private static class ExcelSheetsImpl extends ArrayList<ExcelSheet> implements ExcelSheets {}

    @JsonTypeInfo(use = Id.NAME, property = "attachmentType")
    @JsonSubTypes({
            @JsonSubTypes.Type(value = ExcelSheetsImpl.class, name = "excel")
    })
    private final Object typeSpecificData;

    Attachment(long id, String attachmentType, long reportId, String filename,
               Object typeSpecificData) { = id;
        this.attachmentType = attachmentType;
        this.reportId = reportId;
        this.filename = filename;
        this.typeSpecificData = typeSpecificData;

    public long getId() { return id; }

    public String getAttachmentType() { return attachmentType; }

    public long getReportId() { return reportId; }

    public String getFilename() { return filename; }

    public Object getTypeSpecificData() { return typeSpecificData; }

    public AttachmentBuilder builder() { return new AttachmentBuilder(this); }

    public static class AttachmentBuilder {
        private long id;
        private String attachmentType;
        private long reportId;
        private String filename;
        private Object typeSpecificData;

        public AttachmentBuilder() {}

        AttachmentBuilder(Attachment attachment) {
   =;
            this.attachmentType = attachment.attachmentType;
            this.reportId = attachment.reportId;
            this.filename = attachment.filename;
            this.typeSpecificData = attachment.typeSpecificData;

        public AttachmentBuilder withId(long id) {
   = id;
            return this;

        public AttachmentBuilder withAttachmentType(String attachmentType) {
            this.attachmentType = attachmentType;
            return this;

        public AttachmentBuilder withReportId(long reportId) {
            this.reportId = reportId;
            return this;

        public AttachmentBuilder withFilename(String filename) {
            this.filename = filename;
            return this;

        public AttachmentBuilder withTypeSpecificData(Object typeSpecificData) {
            this.typeSpecificData = typeSpecificData;
            return this;

        public Attachment build() {
            return new Attachment(id, attachmentType, reportId, filename, typeSpecificData);

Now I could be happy with that and sing the praises of Jackson and "look how elegantly it got configured". But should I?

Even when I have this solution before me, I still can't quite figure it out from the documentation (WTF?), so what will happen in six months time when I have to modify this code?

And here comes an even bigger WTF: change the type "Object" for typeSpecificData to "ExcelSheets" and deserialization no longer works! (What I really wanted to do was to introduce a marker interface, TypeSpecificData, but, as you can surmise, that didn't work either.)

Even though Jackson is (sadly) perhaps the easiest way to handle JSON in Java, I think there may be good reasons besides the above to avoid using it. I won't go into that in detail in this post, but I will leave with this thought: Jackson or any other automagic data-binding framework entices you to create Java objects that match the serialized JSON, but your serialization format is not your internal model, at least not forever, because the two have different reasons to change. So even after you get a deserialized object from Jackson, you should probably write lots of code to transfer the data into your internal representation. Then what did you gain?

Wednesday, January 5, 2011

Prove your assumptions, but remember Murphy was an optimist

Continuing on my unremarkable coding task, having gotten it to work, it was now time to clean up the code.

Given the large load expected, I couldn't allocate a new direct ByteBuffer for every piece of data I wanted to handle. But code that accepts a ByteBuffer is equivalent to code that accepts a byte array, a starting offset and a length (or a limit), right? So I just set the position and the limit on the boundaries of the data and I'm good to go.
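That equivalence can be sketched as a tiny helper (the class and method names here are mine, not from the original code): wrap the reused array once, then let position and limit mark out the window of data the consumer is allowed to see.

```java
import java.nio.ByteBuffer;

public class BufferWindowDemo {

    // Expose bytes [offset, offset + length) of a reused array as a
    // ByteBuffer, instead of allocating a fresh direct buffer per message.
    static ByteBuffer window(byte[] data, int offset, int length) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        buf.position(offset);
        buf.limit(offset + length);
        return buf;
    }

    public static void main(String[] args) {
        byte[] data = {10, 20, 30, 40, 50};
        ByteBuffer buf = window(data, 1, 3);
        System.out.println(buf.remaining()); // 3
        System.out.println(buf.get());       // 20
    }
}
```

Of course, this only works if the downstream code honours position and limit and never rewinds to the start of the backing array, which is exactly the kind of assumption discussed below.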

Now I ran into some of my own previous assumptions. Luckily, the failure resulted in a log message that made it clear where something was going wrong. It was one of those places where a deviation from the happy path either cannot happen, should not happen or we didn't really care too much. A place where I used to code an empty block "{}", or when I was feeling diligent, with a comment "{/*cannot happen*/}". Now I'll code it as "{throw new AssertionError("Cannot happen.");}" or at least "{LOG.fine("Don't care.");}".

Anyway, a quick analysis showed a likely cause. I say likely, because it's an assumption until proven. I wrote a quick unit test that failed, fixed the code and the test passed. But the program still failed, with the same log message as before. Imagine what I might have been doing if I hadn't proven the failure and the fix with the unit test: I'd still be staring at the same place, possibly ripping my code to shreds in desperation. Instead I could immediately move on and fix the second error.

OK, on a roll now, all assumptions proven by assertions, I'm just about done. Except that the code still doesn't work and no sign of why. Murphy's law in full action.

Finally I find it. There's a trap in ExecutorService: what's the difference between calling "execute(Runnable)" versus "submit(Runnable)"? Nothing much, when the code works. But "submit(Runnable)" should have a big red warning sticker. It returns a Future with no result, so you never bother to call "get()". The devastating side-effect is that any exception is preserved until "get()" is called, which makes this a hidden equivalent of "catch(Exception e){}". Next task: change this everywhere and add a rule to FindBugs.
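A minimal sketch of the trap (the class and method names are hypothetical): the task throws, but the exception only resurfaces when someone calls get() on the Future.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitTrapDemo {

    // With execute(Runnable) the exception would reach the thread's
    // UncaughtExceptionHandler; with submit(Runnable) it is silently stored
    // in the Future until someone calls get().
    static boolean exceptionSurfaced() throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(
                (Runnable) () -> { throw new RuntimeException("boom"); });
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        // The task has long since failed, but nothing has been reported.
        try {
            future.get(); // only here does the exception reappear
            return false;
        } catch (ExecutionException e) {
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exceptionSurfaced()); // true
    }
}
```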

Saturday, January 1, 2011

Your assumptions are dangerous, you know too much.

I have just completed a rather unremarkable piece of code. The system was designed to allow this type of addition, so it just took a couple of hours or three to write the code and touch up the parts to selectively enable the functionality by user.

Then a colleague and I spent two days debugging. We hacked around issue after issue, just to prove basic workability before trying to solve the issues.

The final obstacle was that our socket channel was not being selected for reading on the selector. So I hacked around that by creating a new thread to loop infinitely around reading the channel. Which made things work fine up to the point where actual data was coming in and the server blew up with a segmentation fault.

After reflecting while travelling on the bus home, I remembered that perhaps our JNI code needed a direct byte buffer, with data starting at position 0 and the whole backing array available for use. Inconvenient in this case, but another little hack and everything worked like a charm.

Back to the previous question: why no reads? Perhaps my server socket in non-blocking mode actually returned a socket configured for blocking mode instead of for non-blocking mode? It did, and explicitly setting it to non-blocking fixed it.
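This behaviour is easy to demonstrate against a loopback socket (the class and method names are mine): the javadoc for ServerSocketChannel.accept() even states that the returned channel is in blocking mode regardless of the blocking mode of the server channel.

```java

import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptBlockingDemo {

    // The channel returned by accept() is in blocking mode regardless of the
    // server channel's mode, so it must be reconfigured explicitly before it
    // can be registered with a Selector.
    static boolean acceptedChannelIsBlocking() throws Exception {
        try (ServerSocketChannel server = {
            server.bind(new InetSocketAddress("", 0));
            server.configureBlocking(false);
            try (SocketChannel client = {
                SocketChannel accepted = null;
                while (accepted == null) { // non-blocking accept may return null
                    accepted = server.accept();
                }
                boolean blocking = accepted.isBlocking();
                accepted.configureBlocking(false); // the fix from the story
                accepted.close();
                return blocking;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptedChannelIsBlocking()); // true
    }
}
```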

As I backtracked through the issues, it struck me that every single problem we'd run into had to do with assumptions.

I could of course have checked my assumption about the blocking mode. But I also have another assumption, which is more valid: incorrect usage of an API should not fail silently. This turns out to be correct, because a SelectableChannel throws an IllegalBlockingModeException.

Unfortunately, the "helper" framework that we have in place inadvertently masked that by running the register call in a FutureTask that had a boolean "success" return value that nobody had found the need to check, because "true" was the only possibility. Well, that is, assuming no exceptions are thrown.

Perhaps there is also a flaw in the assumption that a helper framework that obscures the standard API is actually helpful.

Certainly, the direct byte buffer assumption mentioned above should probably have been asserted somewhere, it's easy enough to throw an IllegalArgumentException if buffer.isDirect() returns false. Obviously, the programmer who created the JNI call was not assuming we had a direct byte buffer, he knew we had one. But that's the trick of maintainable and re-usable object oriented code: you cannot rely on any knowledge outside the class you are currently in. From the point of view of the class, such knowledge is an assumption.

Another issue I had hit on the way concerned the UserIdentifier class. It is really just a wrapper around a string, but because it has a specific semantic meaning it was correctly exposed as a separate value class. To limit the new functionality by groups of users, I found it convenient to construct the user identifier slightly differently. The code did not work as expected.

At another point in the code, a programmer had used his knowledge of how the user identifier was constructed, which introduced a hidden assumption about the structure of the user identifier. The correct code would have been to obtain the user identifier from a single authoritative place.

The root cause of the user identifier hack was a design assumption that two things were independent. When they were not, a co-dependency web was created. In a sense, this is also a case of classes having too much knowledge: they knew about too many different other classes that they needed to collaborate with. To keep your code clean and honest, such things need to be refactored (re-designed).

Consider the small amount of care and time it would have taken to avoid the assumptions in the first place and compare it to the four man-days of lost productivity that was caused. We are always under time pressure, but that will be the case next week, month or year as well. If you don't pay the price now, you pay it with interest later.

The more things your classes know about your system, the harder it is to change or re-use the code. Make sure that the knowledge you put into a class is appropriate and doesn't create a dangerous web of assumptions.

Friday, August 28, 2009

Clarity of code is clarity of thought

I remember at my first job when we introduced the concept of code reviews. I don't think anybody really looked at anybody else's code before pressing the "approve" button, I know I didn't. Reading code is boring and it can be hard and it felt like a waste of time. I had my own code to write and why shouldn't the author of the code have written it right in the first place?

When I moved to another job, they talked about how, in theory, code reviews were the number one most effective way to increase code quality. But they had given up on it, because they felt it came down to a discussion of where to put the dots and commas (or, rather, semi-colons and parentheses).

Quite aside from the issue of code reviews, I had come to realize that I spent much more time reading my code than I spent writing it. Every debugging session is spent reading code over and over. Every time you have to add a feature or change some functionality you have to read the code, and re-read it to avoid breaking stuff. Don't tell me tests will do it for you. Now don't get me wrong, tests are great and I strongly advocate test-first coding, it's a great way to achieve focus and clarity of thought. But when a test fails, you're thrown into debugging mode, which means reading code.

So I concluded it was worth spending a little extra time typing longer variable names, and taking the time to find descriptive names. It was worth spending a little more time breaking down those long methods and simplifying those complex structures. Whenever I was reading code that made me stop and think, I would usually refactor it to be clearer (although the term refactoring hadn't been invented yet). I would also change existing code to make a new feature fit in better, in a more readable and more logical way.

In "The Pragmatic Programmer" the distinction is made between "programming by coincidence" and "programming by intention". We all have to do it occasionally, a little trial-and-error programming, because we're not quite sure how things work. That's "programming by coincidence". Before you check your code in, you want to make sure you understand what each statement does and that all unnecessary statements are removed. Then you've transformed the code from coincidental to intentional.

But that's not enough. Your very functional and intentional code is going to lose you valuable time unless you also transform it to readable code, which clearly displays your intent.

A much-touted wisdom is that you should document and comment your code. Fair enough, that works, but it has many weaknesses. Your energy is far better spent making the code explain itself. Comments often lie, but the code is always pure truth, so prove it in code. Only comment on "why" you are doing something, and that only if you cannot make it evident in the code.

I like the following sound-bite from "Refactoring": "Any fool can write code that a machine can understand, it takes a good programmer to write code that a human can understand."

Test-first "anything" is efficient and focused because it sets up the criteria for success and the means to measure it up front. So what's the best way to test if your code is readable? Get another person to read it, i.e. a code review.

I'm very grateful to those who review my code carefully and pick on every detail, it makes the code better and it helps assert that my thinking was clear. That gratitude gives me the energy to return the favour by reviewing their code equally mercilessly.

You will sometimes, but rarely, find bugs by just reading code (only because everybody has a brain-fart now and then). But the real value of the reviews is in the "dot and comma" discussions and especially in picking good names. In addition to making sure that the code is easy to read, it will sometimes bring a real little nasty bug to the surface.

An example: An index into an array of values is stored into a variable called "value". When the reviewer makes you change the name to "valueIndex" instead, some parts of your code may start to look weird (the bug was exposed).
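To make the example concrete, here is a hypothetical version of that situation. With the variable named "value", the buggy line reads plausibly; rename it to "valueIndex" and the same line looks immediately wrong.

```java
public class NamingDemo {

    // Before the review: "value" sounds like the thing being summed, so the
    // bug hides in plain sight.
    static int sumMisleading(int[] values) {
        int sum = 0;
        for (int value = 0; value < values.length; value++) {
            sum += value; // bug: adds the index, not values[value]
        }
        return sum;
    }

    // After the rename: "sum += valueIndex" would look absurd, forcing the
    // correct expression.
    static int sumFixed(int[] values) {
        int sum = 0;
        for (int valueIndex = 0; valueIndex < values.length; valueIndex++) {
            sum += values[valueIndex];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] values = {5, 5, 5};
        System.out.println(sumMisleading(values)); // 3 (0 + 1 + 2)
        System.out.println(sumFixed(values));      // 15
    }
}
```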

Clarity of code really is clarity of thought.

Thursday, December 25, 2008

Using Java concurrency utilities

The inspiration for this post comes from Jacob Hookom's blog and I can only second the recommendations he gives. Although, as always, I would caution to test any such implementation properly, that it works well and actually provides a benefit. There are lots of pitfalls and concurrency is tricky even with the excellent utilities provided in Java.

To summarize the interesting problem: parallelize the execution of lengthy tasks in a web request, without creating many threads for each request, but also ensuring that the thread pool is not starved by one request. The idea is to have a reasonably sized thread pool and to limit the number of tasks executing in parallel to a number small enough to allow the expected amount of concurrent requests to share the pooled threads.

Essentially, limiting the number of tasks executing in parallel can be done in two ways: limit the number of tasks submitted at one time, or limit the number of workers that execute a set of tasks. Jacob takes the first approach; I will take the second, which seems to make it simpler to manage time-out issues.

Here's some code:

<V> Queue<Future<V>> submit(int numberOfWorkers, Queue<Callable<V>> tasks,
        long timeout, TimeUnit unit)
        throws InterruptedException, TimeoutException {
    Queue<Future<V>> result = new ConcurrentLinkedQueue<Future<V>>();
    List<WorkerTask<V>> workers = new ArrayList<WorkerTask<V>>(numberOfWorkers);
    for (int i = 0; i < numberOfWorkers; i++) {
        workers.add(new WorkerTask<V>(result, tasks));
    }
    List<Future<Object>> deadWorkers = executor.invokeAll(workers, timeout, unit);
    for (Future<Object> obituary : deadWorkers) {
        if (obituary.isCancelled()) {
            throw new TimeoutException();
        }
    }
    return result;
}

And the code for a WorkerTask:

private static class WorkerTask<V> implements Callable<Object> {

    private final Queue<Callable<V>> tasks;
    private final Queue<Future<V>> result;

    public WorkerTask(Queue<Future<V>> result, Queue<Callable<V>> tasks) {
        this.result = result;
        this.tasks = tasks;

    public Object call() {
        for (Callable<V> task = tasks.poll(); task != null; task = tasks.poll()) {
            FutureTask<V> future = new FutureTask<V>(task);
            if (Thread.interrupted()) {
                Thread.currentThread().interrupt(); // Restore interrupt.
                return null;
            }
        }
        return null;
    }
}
Note that it is important to have thread-safe collections for tasks and result; strictly, we should verify that the tasks arrive in a thread-safe collection, but I'll ignore that for now. Note also the check of whether the thread has been interrupted in the call() method of WorkerTask. That is vital for being able to cancel the task when you don't want to wait for it any longer (i.e. on time-out). If possible, the submitted tasks should also handle interrupts. Note the careful restoration of the interrupt status so that the caller of the method may also be notified.
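The idea can be exercised with a small self-contained sketch (all names and numbers here are mine): a fixed number of workers drain a shared task queue via invokeAll, so parallelism is capped at the worker count rather than the task count.

```java
import java.util.Collections;
import java.util.Queue;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WorkerPoolDemo {

    // Run taskCount tasks with at most numberOfWorkers executing in parallel.
    static int runWithWorkers(int numberOfWorkers, int taskCount)
            throws InterruptedException {
        Queue<Callable<Integer>> tasks = new ConcurrentLinkedQueue<>();
        for (int i = 1; i <= taskCount; i++) {
            final int n = i;
            tasks.add(() -> n * n);
        }
        Queue<Integer> results = new ConcurrentLinkedQueue<>();
        // Each worker pulls tasks until the shared queue is empty, so at most
        // numberOfWorkers tasks ever run at once, however many were queued.
        Callable<Object> worker = () -> {
            for (Callable<Integer> t = tasks.poll(); t != null; t = tasks.poll()) {
            return null;
        };
        ExecutorService executor = Executors.newFixedThreadPool(numberOfWorkers);
        executor.invokeAll(Collections.nCopies(numberOfWorkers, worker),
                5, TimeUnit.SECONDS);
        executor.shutdown();
        return results.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWithWorkers(3, 10)); // 10
    }
}
```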

Monday, November 24, 2008

GC is for Goodstuff Collector

I have over the past few months noticed that there is a fairly common fear of creating objects in Java. When I query the authors, it always seems to boil down to an attempt to create more performant code through avoiding garbage collection.

So why would one want to create more objects, anyway? Well, one good reason would be to get more readable code, e.g. through encapsulating an algorithm or using appropriate helper objects to express an algorithm more clearly.

Even when the code does get put into a helper object, there seems to be a tendency to keep that instance around and reuse it, to avoid garbage collection so that the code performs better. My first comment is always "You should measure that". If I wanted to put it more sharply I should say "If you can't prove it's a performance benefit, then it probably isn't". I have worked with enough optimization to know that what's true for one language will not be true for another. Even using the same language, something that gives a performance benefit on one machine may be a detriment on another kind of machine (or even the same kind of machine with different "tweaks" like page sizes and such).

If you create a temporary object that lives only as long as you need it you gain the following benefits:
  1. Your object is always in a pristine state when you want to use it.
  2. Your code is a big step closer to being thread safe.
  3. Your code is more readable and easier to analyze.
  4. Your garbage collector gets to do less work.
"What?", you say, "The garbage collector does less work by collecting more garbage?".

Indeed it does. When we hear "garbage collector" we tend to think of the work involved in clearing out an overfilled store room, all that work to haul all the garbage out. But the garbage collector doesn't even know the garbage exists, the very definition of garbage is that it can no longer be reached from anywhere. What the garbage collector really does is create a whole new store room and move the stuff you want to keep over to it and then it lets the old store room and all the garbage disappear in a puff of smoke. So all the work done by the garbage collector is really done to keep objects alive, i.e. the GC is really the "goodstuff" collector.

This is obviously a somewhat simplified view and I don't think it holds completely for objects with finalizers (which is probably why finalizers are really bad for performance). Every single measurement and microbenchmark I've done confirms that creating and destroying objects is never worse and often much better than trying to keep objects around. I've done a few, first because I didn't trust it, then because others were so convinced of the opposite that I had to research it. That isn't an absolute proof that it will be true for all scenarios, but I think it's a pretty good indication of where you should be pointing when shooting from the hip.

From an IBM developerWorks article, we get the numbers that allocating an object in Java takes about 10 machine instructions (an order of magnitude better than the best C/C++ allocation) and that throwing an object away costs, you guessed it, absolutely zero. So just remember that GC stands for "Goodstuff Collector" and you'll be on the right track.

Saturday, August 16, 2008

The legacy of inheritance

Is inheritance really useful or is it a feature that causes more problems than it solves? Certainly I can't think of a case where I've been really grateful that I've been able to inherit from a superclass but I can think of several instances where it has caused friction and where the extension mechanism of inheritance tended to lead the programmer the wrong way.

Consider the following code (public fields for brevity):

public class Square {
  public int base;
  public int getArea() {
    return base * base;

public class Rectangle extends Square {
  public int height;
  @Override public int getArea() {
    return base * height;

public class Parallelogram extends Rectangle {
  public double angle;
}

Now how do we implement a Rhombus? Is it a Square extended with an angle (and overridden area calculation) or a Parallelogram with conditions on setting the properties so that the invariant is preserved (which is why we should have accessors, by the way)?

Well, the correct answer is neither, even though we have a nice sequence of extensions. The problem is that we have been led astray by the extension mechanism and violated the "is a" rule for subclassing and ended up with a corrupt type system. Clearly a Parallelogram is not a Rectangle which equally clearly is not a Square so a subclass instance may not safely be used in place of a superclass instance. Reversing the class hierarchy solves the issue, however it creates subclasses that are restrictions of the superclass rather than extensions.

An acquaintance of mine who is highly experienced in creating standardized components for information interchange once claimed that it only works to inherit properties by restriction. This is well worth considering, although I'm not entirely convinced because it causes a different set of problems when restriction means "that particular property is not used", as it so often does in information interchange scenarios. In that case it is usually not interesting nor useful to know what the superclass is. In our case at hand it will work because the subclasses actually have a meaningful and useful value of the property but it is restricted by an invariant, so it is also useful to be able to view them as instances of the superclass.

Let's try coding again:

public class Parallelogram {
  protected int base;
  protected int height;
  protected double angle;

  public void setAngle(double angle) {
    this.angle = angle;

public class Rectangle extends Parallelogram {
  @Override public void setAngle(double angle) {
    // What should we do here?
  }
}

Oops, this doesn't seem to work well, either. Obviously we can't just inherit the setAngle method or we could end up with a corrupt Rectangle, nor is there any reasonable action we can take. We can get out of our pickle, however, because constructors are not inherited! Simply make our instances immutable:
public class Parallelogram {
  public final int base;
  public final int height;
  public final double angle;

  public Parallelogram(int base, int height, double angle) {
    this.base = base;
    this.height = height;
    this.angle = angle;

  public int getArea() {
    return base * height;

public class Rectangle extends Parallelogram {
  public Rectangle(int base, int height) {
    super(base, height, Math.PI / 2);

public class Square extends Rectangle {
  public Square(int side) {
    super(side, side);
  }
}
That seems to work, and the implementation of getArea() is actually profitably inherited. But we still have a problem in Java with a Rhombus, since a Square is both a Rectangle and a Rhombus but a Rhombus is not a Rectangle. To solve that would require defining the types as interfaces and have separate implementation classes (now I understand why that is often recommended :-) ).
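A sketch of what that might look like (all names here are hypothetical): the types become interfaces, so Square can be both a Rectangle and a Rhombus, while a single immutable implementation class still gets the profitable getArea() reuse.

```java
interface Parallelogram { int getArea(); }
interface Rectangle extends Parallelogram {}
interface Rhombus extends Parallelogram {}
// Square "is a" Rectangle and "is a" Rhombus, which a single-inheritance
// class hierarchy cannot express.
interface Square extends Rectangle, Rhombus {}

public class ShapeDemo {

    static class ParallelogramImpl implements Parallelogram {
        final int base;
        final int height;

        ParallelogramImpl(int base, int height) {
            this.base = base;
            this.height = height;
        }

        public int getArea() { return base * height; }
    }

    static class SquareImpl extends ParallelogramImpl implements Square {
        SquareImpl(int side) { super(side, side); }
    }

    public static void main(String[] args) {
        Square square = new SquareImpl(4);
        Rectangle rectangle = square; // usable as a Rectangle...
        Rhombus rhombus = square;     // ...and as a Rhombus
        System.out.println(rectangle.getArea()); // 16
        System.out.println(rhombus.getArea());   // 16
    }
}
```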

If we had chosen to implement Parallelogram with a field for the length of the side instead of the height, the formula for the area would have been base*Math.sin(angle)*side. Even if this would have worked for our Rectangle, it would have been inefficient (although when the fields are immutable it could probably be optimized by the compiler and/or JVM).

On the whole, I believe it is seldom profitable to inherit the implementation of a method, if you have a good example to the contrary, please share it.

I think that overriding all public methods would be preferable, even if you just decide to call super's version, at least it would show that you gave some thought to whether it was correct or not. Without having done an exhaustive study, I believe this is especially true for interfaces that indicate that a class "is" something rather than that it "is a" something, i.e. those interfaces whose name usually end in "able". Try Cloneable, Externalizable, Runnable and Printable.