Drew Crawford has written an extremely well-researched analysis of exactly why code written in dynamic languages, even with a well-supported JIT and especially on mobile devices, falls short of the equivalent compiled code, and basically always will, no matter how many script-kid fallacies you throw at the problem. For people not used to reading well-researched work: it's quite long, but if you read only one such piece this year, make it this one. Essentially, Drew Crawford does a good job of hammering home why compiled languages like C, C++ and their derivatives will always beat your favourite JIT-compiled dynamic language. And, guess what? It's not religion. It's something as simple as design trade-offs made by the language designers, offering you productivity niceties at a cost.
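To make one such trade-off concrete, here's a toy sketch of my own (not from Crawford's article, which centres largely on garbage collection and memory pressure): in a dynamic language, even a trivial loop pays for run-time type checks and integer boxing on every iteration, work a C compiler does once, at compile time.

```python
import timeit

def py_sum(n: int) -> int:
    """Sum 0..n-1 the naive way."""
    total = 0
    for i in range(n):
        # Each += dispatches on the runtime types of its operands and
        # allocates a fresh boxed integer for the result; the equivalent
        # C loop compiles down to a register add.
        total += i
    return total

# Time 100 runs; a compiled equivalent is typically orders of magnitude faster.
print(timeit.timeit(lambda: py_sum(100_000), number=100))
```

None of this is news to the language designers; it's the price of the flexibility they deliberately chose.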
However, interesting as the article is in itself, the best part is that it practises what it preaches: it's a call for developers to start employing at least a modicum of scientific methodology in their work:
If we are going to make any progress on the mobile web, or on native apps, or really on anything at all–we need to have conversations that at least appear to have a plausible basis in facts of some kind–benchmarks, journals, quotes from compiler authors, whatever. There have been enough HN comments about “I wrote a web app one time and it was fine”. There has been enough bikeshedding about whether Facebook was right or wrong to choose HTML5 or native apps knowing what they would have known then or what they could have known now.
The task that remains for us is to quantify specifically how both the mobile web and the native ecosystem can get better, and then, you know, do something about it. You know–what software developers do.
I couldn't agree more.
July 22, 2013 | Permalink →
As a kid, very few things fascinated me more than space flight (I clearly remember yapping my dad's ears off about the different stages of the Saturn V launch vehicle's flight cycle). To this day, that fascination has stayed with me, and it has possibly only grown as I've started to grasp the true extent of the space programmes. Young me would have loved it, and videos like this one by Spacecraft Films, shot from the launch umbilical tower of Apollo 11 at 500 frames per second starting 5 seconds before liftoff, still leave me in absolute awe:
July 21, 2013 | Permalink →
Unsurprisingly, the only thing from the recent Worldwide Partner Conference that's been worth anyone's attention is a little tidbit that the ever-accurate Steve Ballmer let slip during his keynote: apparently, Microsoft now has more than 1 million servers in its data centres. I simply shrugged the number off as typical Ballmer bullshit, but James Hamilton decided to do the math. The infrastructure required to run a million servers is expectedly vast, yet the figure isn't completely out of the ballpark:
How many datacenters would be implied by “more than one million servers?” Ignoring the small points of presence since they don’t move the needle, and focusing on the big centers, let’s assume 50,000 servers in each facility. That assumption would lead to 20 major facilities. As a cross check, if we instead focus on power consumption as a way to compute facility count and assume a total datacenter power consumption of 20MW each and the previously computed 300MW total power consumption, we would have roughly 15 large facilities. Not an unreasonable number in this context.
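The arithmetic is easy enough to sanity-check yourself; a quick sketch (Hamilton's assumptions, my code):

```python
# Hamilton's back-of-the-envelope figures for "more than one million servers".
servers = 1_000_000
servers_per_facility = 50_000          # assumed size of one big datacenter
facilities_by_count = servers / servers_per_facility
print(facilities_by_count)             # -> 20.0 major facilities

total_power_mw = 300                   # his previously computed total draw
power_per_facility_mw = 20             # assumed draw per large facility
facilities_by_power = total_power_mw / power_per_facility_mw
print(facilities_by_power)             # -> 15.0 large facilities
```

The two estimates land in the same neighbourhood, which is exactly why he concludes the number isn't unreasonable.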
July 17, 2013 | Permalink →
It's with great sadness that I have learned that a personal hero of mine, Douglas Carl Engelbart, has passed away, albeit at the respectable age of 88. However, I'm even more saddened by the way the general media is presenting him in obituaries. Lacking even a rudimentary understanding of Engelbart's approach and goals, they reduce his intellect, achievements and work to those of a simple inventor of the early personal computing era, scoping him as merely "the inventor of the mouse."
But, as I tried to sit down and figure out how to describe who Engelbart really was, I realised that the general media is probably well excused. Hell, even though I've been heads-down in computers since I was 8, it wasn't until a few years ago, when I started to look beyond modern computing as a mere tool, that the seminal nature of Engelbart's work started to dawn on me. Worse still, it wasn't until I read Bret Victor's wonderful piece that the essence of Engelbart became clear to me (despite the somewhat fuzzy conclusion):
The least important question you can ask about Engelbart is, "What did he build?" By asking that question, you put yourself in a position to admire him, to stand in awe of his achievements, to worship him as a hero. But worship isn't useful to anyone. Not you, not him.
The most important question you can ask about Engelbart is, "What world was he trying to create?" By asking that question, you put yourself in a position to create that world yourself.
Engelbart wasn't just another gold-rush inventor. He doesn't fit inside the same boxes as even people as great as Steve Jobs. No, he was nothing less than a formidable intellect who, as one of the first, saw the mainstream availability of computing not as the amazing achievement in itself but rather as a means to augment the capabilities of humans. There is no denying that computers truly have transformed the lives of humans and our ability to collectively solve problems, probably far more than even young Engelbart could ever have imagined. But the popular, naive deduction that this was simply the natural result of ongoing technological development is nothing short of wrong.
As Bret Victor so concisely puts it: "Engelbart devoted his life to a human problem, with technology falling out as part of a solution." But it wasn't just Engelbart's technology that "fell out." His role as a pioneering thought leader fell out just as naturally, an effect of the beautiful purity of the intent behind his work, and he was and remains both aspirational and inspirational to the technology world as a whole.
His legacy isn't hypertext, the mouse or any other single piece of work; rather, it's the total permeation of the technology world with the intent that led to those almost insignificant pieces. He truly was one of the very few pioneers of augmentation history has ever known.
July 4, 2013 | Permalink →
Marco Arment's excellent post "Lockdown" reminded me of an old draft I've had lying around since 2006 on what the next major version of the Web would be, at that time semi-jokingly called "Web 3.0." With social networks in their infancy and the popularity of semantic standards like RSS booming, it seemed that the next Web would truly be a beautiful orchestration of easily interchangeable, open semantic data (give or take people's complete lack of respect for standards specifications).
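That interchangeability was the whole point: any client could consume any publisher's data with nothing but a URL and a standard parser. A minimal sketch, with an illustrative feed URL:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Any RSS 2.0 feed will do; this URL is a placeholder, not a real feed.
FEED_URL = "https://example.com/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS is plain, openly specified XML:
# <rss><channel><item>...</item></channel></rss>
for item in tree.iterfind("./channel/item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

No API keys, no terms of service, no walled garden; that was the promise.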
On July 1st, the original "Web 3.0" idea and the Internet as a whole suffered a huge blow with the shutdown of Google Reader. Marco Arment provides the single best and probably most valid argument as to why the shutdown happened: Facebook and Twitter got big enough:
Google resisted this trend [to move away from interoperability and open standards] admirably for a long time and was very geek- and standards-friendly, but not since Facebook got huge enough to effectively redefine the internet and refocus Google’s plans to be all-Google+, all the time. The escalating three-way war between Google, Facebook, and Twitter — by far the three most important web players today — is accumulating new casualties every day at our expense.
More broadly, this is also the reason Web 3.0, as otherwise predicted by people as prominent as Sir Tim Berners-Lee, isn't an interoperable network of semantic data and open standards but rather the winner-takes-all silos commonly referred to as "social networks" and "clouds." However, as these silos become ever more proprietary, with big "fuck you" signs being put up in front of independent outsiders (the people who have created much of the foundation of the Internet), I suspect that we may see a move back to a much more distributed and independent Internet. Oh yeah, and as if the ever increasing proprietary nature of especially social networks wasn't bad enough, recent security concerns are unsettling even to the most ignorant.
For the Internet as a whole, I think the future is much less gloomy than is being suggested, but it requires us to start altering our mindset — especially towards understanding the difference between being the user and being the product. Maybe the semantic Web can still happen?
July 3, 2013 | Permalink →