Source code optimization

For some reason, I wound up reading about compilers’ use of vectorization in optimizing code yesterday, and I came across Felix von Leitner’s pretty interesting presentation from 2009, “Source Code Optimization”.

While I knew that compilers are generally good at optimizing these days, I had no clue just how good they’ve become. It also means that a lot of the “general wisdom” I picked up when writing C and C++ no longer really applies; more importantly, readable code often produces output that is just as well optimized as, or better than, most attempts at being clever. For C and C++ code, then, we not only can but almost have to shift mentally from being “clever” to being concise when writing code – a development that’s been helped greatly by C11 and C++11.
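
As a purely illustrative sketch (the example is mine, not von Leitner’s), this is the kind of code where conciseness wins: build the plain loop below with optimizations enabled (say, gcc -O3 or clang -O3) and the compiler will typically auto-vectorize it on its own, so hand-unrolling or pointer tricks just add noise without adding speed.

    #include <stddef.h>

    /* Plain, readable summation. Modern compilers will usually
     * auto-vectorize this loop at -O3 without any help from us. */
    long sum_ints(const int *values, size_t count)
    {
        long total = 0;
        for (size_t i = 0; i < count; i++)
            total += values[i];
        return total;
    }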

If you write any C or C++ on a regular basis, I strongly suggest you add this presentation to your list of annual reminders.

October 19, 2014 |

Smart working

Startup culture is awash with the heroisation of burning the mythical midnight oil, with the 60 or 70 hour work week worn as a badge of honor — the road to success. Slowly, however, that trend is changing — at least among people who’ve actually done those stupid hours — and while I’m embarrassingly still working way more hours than is any kind of good, the evidence keeps indicating that I really need to stop. Not just evidence in terms of feeling unproductive at times, but actual, more or less scientific evidence.

I came across one such piece of evidence today in CamMi Pham’s post “7 Things You Need To Stop Doing To Be More Productive, Backed By Science”. For the post, she has put together a couple of very tangible charts (which seem to correlate well with other data I’ve found) showing the effective productivity of 50 and 60 hour work weeks respectively. Truth be told, the numbers are scary — if the charts are to be taken at face value, the productivity gain from a 50 or 60 hour work week evaporates after only 9 and 8 weeks respectively.

I probably need to be smart and go home “early.”

July 30, 2014 |

Please admit defeat

A year and a half ago, I wrote about just how unimpressive and uninteresting HTTP 2.0 is. At the time, I called out the IETF on the decision to just repackage SPDY, and while I got a bit of flak from a few of the people involved, not much has happened since, to be honest. The working group is mostly still bickering about exactly what HTTP 2.0 is supposed to be, rather than coming up with any concrete solutions.

However, it seems like the working group is slowly starting to feel the pressure of releasing something, as Mark Nottingham today posted a very interesting entry to the mailing list:

The overwhelming preference expressed in the WG so far has been to work to a tight schedule. HTTP/3 has already been discussed a bit, because we acknowledge that we may not get everything right in HTTP/2, and there are some things we haven’t been able to do yet. As long as the negotiation mechanisms work out OK, that should be fine.

In other words, the working group seems to be realising that they’ve gotten nowhere in years. But rather than admitting that they’re stuck and need to start from scratch, they’re simply going to push on through with a new HTTP standard that’s subpar at best, and then fix it in a later version. While I generally applaud people taking incremental steps, HTTP 2.0 is not only nowhere near incremental, but HTTP is also no laughing matter. HTTP 1.0 has been in active use since 1996 – its superset, HTTP 1.1, since 1999 – so to think that we can just push through and adopt a crummy version 2.0 and then fix it later is absurdly naïve at best. I’m rendered virtually speechless by the fact that the supposedly best people in the industry to undertake this task can have such a short-sighted stance on HTTP – they, of all people, should know just how bad technical debt is for the industry to be lugging around.

Luckily, there’s at least one person on the mailing list who maintains an actual implementation of HTTP: Poul-Henning Kamp. While Kamp has been a general opponent of large parts of HTTP 2.0 for the last couple of years, Nottingham’s post finally prompted him to call the working group out on their crummy job:

So what exactly do we gain by continuing?

Wouldn’t we get a better result from taking a much deeper look at the current cryptographic and privacy situation, rather than publish a protocol with a cryptographic band-aid which doesn’t solve the problems and gets in the way in many applications?

Isn’t publishing HTTP/2.0 as a “place-holder” is just a waste of everybody’s time, and a needless code churn, leading to increased risk of security exposures and failure for no significant gains?

The rhetorical nature of the wording aside, Kamp hits the nail on the head. Going down the path that Nottingham seems to be indicating would mean nothing but pain for the industry as a whole. So, I can only echo Kamp’s closing words:

Please admit defeat, and Do The Right Thing.

May 26, 2014 |

Your website should stop doing this right now

Goran Peuc has written a pretty interesting post on what web developers should stop doing now that we’ve hit 2014. While the bullet point format is a bit tired (“5 things you shouldn’t do on your website”), it gives a good set of pointers to the people furthest removed from the non-technical user. The really interesting part of the post, however, is Peuc’s point about the relationship between horrible user experiences and developers, made while discussing PayPal’s ridiculous credit card input field:

Yes, you got that right, developers of the site force the user to understand how the backend logic of the website works.

If we extrapolate this a little, it reveals one of the main causes of bad user experiences: developers forcing users to do the developers’ job, as my colleague Casper Lemming so gracefully puts it. I know that communication is not always the average developer’s strongest suit, and of course all abstractions are leaky to some degree, but it’s 2014. The .com days of “you know HTML? You’re hired!” are over — if you’re building front-facing stuff, as most web developers are these days, your job is as much a communications job as it is a development job.
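
To make the point concrete with a hypothetical example (mine, not Peuc’s, and in C purely for illustration): if the backend can only cope with a bare string of digits, the fix is a few lines of normalization on the developer’s side, not an error message telling the user to retype their card number without spaces and dashes.

    #include <ctype.h>
    #include <stddef.h>

    /* Keep only the digits of whatever the user typed, so
     * "4111 1111-1111 1111" and "4111111111111111" are treated alike.
     * Returns the number of digits written (excluding the terminator). */
    size_t normalize_card_number(const char *input, char *out, size_t out_size)
    {
        if (out_size == 0)
            return 0;

        size_t n = 0;
        for (; *input != '\0' && n + 1 < out_size; input++) {
            if (isdigit((unsigned char)*input))
                out[n++] = *input;
        }
        out[n] = '\0';
        return n;
    }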

Really, then, screw the bullet points — your developers should stop making users do their job.

January 7, 2014 |

Linux, meet SO_REUSEPORT

The Linux 3.9 release finally introduced a — at least for a networking geek like me — long-awaited extension to the socket model: the SO_REUSEPORT socket option. Not to be confused with the virtually default SO_REUSEADDR POSIX socket option, SO_REUSEPORT has its roots in BSD 4.4 and gives multiple, independent processes the ability to listen on the same port at the same time.

This basically means that, from Linux 3.9 onwards, we no longer need to build our own fork(2)-hell master/slave watchdog contraptions to have multiple processes handle incoming connections efficiently. Instead, we can leave this to the kernel and just spin up the listening processes we need. As pointed out on the Free Programmer’s Blog, this is especially exciting for programming languages that are, mostly due to implementation-specific issues, inherently shitty at or incapable of any kind of parallel execution — like Node.js, Python and Ruby.
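
For the curious, here is a minimal sketch of what this looks like in C (the port number and the rest of the details are mine, not from the kernel documentation): every process sets the option before bind(2), and the kernel then spreads incoming connections across all the processes bound to that port. Run a handful of copies of this and watch them share the load.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

        /* The crucial part: every process sets SO_REUSEPORT before bind(2),
         * so they can all bind to the same address and port. */
        int one = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
            perror("setsockopt(SO_REUSEPORT)");
            return EXIT_FAILURE;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);   /* arbitrary example port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return EXIT_FAILURE;
        }
        if (listen(fd, SOMAXCONN) < 0) { perror("listen"); return EXIT_FAILURE; }

        for (;;) {
            int client = accept(fd, NULL, NULL);
            if (client < 0)
                continue;
            /* Handle the connection; here we just note which process got it. */
            printf("pid %d accepted a connection\n", (int)getpid());
            close(client);
        }
    }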

However, it should probably be kept in mind that while this is, at least in the long run, a godsend in terms of reduced complexity for simple applications that just need some kind of parallel execution, it is not necessarily the most optimal solution performance-wise as we approach the extremes. This could probably do with a round of benchmarks, but for now I’m just glad that my days of being dragged kicking and screaming through people’s optimistic implementations, written with complete disregard for the documentation on the exact consequences of the forking process model, might slowly be coming to an end.

September 1, 2013 |