One of my favourite memories is an email I received when I was about twelve years old. I’d made a website in Microsoft FrontPage using one of the templates; it was full of cool images and I’d uploaded a few video game demos. In the email, another young man, whose name I do not recall, was complaining that he didn’t like my website because it took too long to download said demos.
I don’t recall whether I replied. But it is one of my very first memories of web design, and I like it partially because it’s such a clear point of reference for how much it and I have changed since then.
Yes, we invented cold fusion back in ’95
Starting out with FrontPage Express slightly before the full version, it didn’t take long for me to discover Macromedia, the developers of Flash and Dreamweaver. (Their rival Adobe would later acquire them in 2005.) I played around with their server-side environment, ColdFusion, which Macromedia had picked up when it acquired Allaire in 2001. As best as I can recall, it was an attempt at a simplified language for quickly deploying dynamic websites.
This was in the still relatively early days, before PHP exploded and took over the world. I didn’t fully start working in PHP until several years later, in my late teens. The first version of danielran.com was a small custom CMS I’d developed using the dependable PHP/MySQL combo. I’d had two domains previously, both of which used similarly simple setups.
Smash the state
Unlike typical computer applications, PHP does not maintain runtime state between requests, very much in the spirit of stateless HTTP. What this means is that every request sent to the server is handled in isolation; nothing is kept in memory and nothing is resumed. In the case of something like WordPress, the entire application is bootstrapped on every pageview.
One of the most comforting features of this — a big part of the reason why I think PHP is such a popular language for beginners — is that the code is executed synchronously. In other words, line by line, one thing at a time. If a particular statement takes a long time to finish, the rest of the code waits dutifully.
For this reason, any PHP application can be thought of as linear. Every request takes a certain path through the logic and comes out the other end with some HTML (or some other kind of output). Not unlike a maze drawn on a piece of paper.
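To make the linear model concrete, here is a rough sketch of it in JavaScript (the language we are about to switch to anyway). Every name here is invented for illustration; the point is simply that one function runs top to bottom for each request and then everything is thrown away:

```javascript
// A sketch of PHP's per-request model: the whole "application" runs
// top to bottom for every request, then all of it is discarded.
function handleRequest(path) {
  const config = { siteName: 'danielran.com' }; // "bootstrapped" fresh each time
  let body;
  if (path === '/') {
    body = '<h1>Welcome to ' + config.siteName + '</h1>';
  } else if (path.startsWith('/blog/')) {
    body = '<article>' + path.slice('/blog/'.length) + '</article>';
  } else {
    body = '<p>Not found</p>';
  }
  return body; // one way in, one way out, like a maze on paper
}

console.log(handleRequest('/blog/hello-world'));
```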
The NSA is listening for your call(back)s
In an event-driven environment like Node.js, applications cannot be written the way they are in PHP. This has led to the adoption of the event-based model. In other words, when you request /blog/some-article-name from the server, that request is an event, which is listened for by a callback function. That callback calls further functions, all of which eventually result in a typical HTTP response.
At a glance, this seems sensible. After all, why execute the entire application every time a user navigates to a different page?
The web server formerly known as no JS
To be clear, Node.js isn’t a web server; it’s a runtime environment. But it can act as a web server, just like nginx or Apache. This is interesting because, traditionally, the server-side code is executed by a different process than the web server itself.
Take a typical LEMP setup (Linux, nginx — pronounced engine x — MySQL and PHP). Linux is just the operating system. nginx is the web server, listening for connections from clients; those requests are passed to PHP, which executes the application (say, WordPress or Drupal), which in turn fetches data from MySQL and returns the output to nginx, which sends it on to the client.
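That hand-off shows up directly in a typical nginx configuration. A rough fragment (paths and socket names are illustrative, not canonical):

```nginx
server {
    listen 80;
    server_name example.com;

    location ~ \.php$ {
        # nginx doesn't run PHP itself; it hands the request to a
        # separate PHP-FPM process over FastCGI and relays the output.
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```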
Don’t rock the wraith
As someone who has worked extensively with the classic PHP/MySQL model, I relish the ability to encapsulate everything save for the database in a single application. You have control. There are no barriers, no upstream servers. Also, nothing is made for you; if you want Node.js to behave like a web server, you must tell it to, in exactly the way you want it.
Guiding each client request through its path is a new challenge: you must pass only what’s needed into each function and save nothing past its lifetime (no more stashing the current user in a static class property, and so on), so that the garbage collector can do its job. If you screw it up, Node.js will behave erratically or slowly eat up all the RAM and die.
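The "saving the current user" trap can be sketched like this — all the names are invented, and the point is only that module-level state outlives a request in a long-running process:

```javascript
// BAD: module-level state survives between requests in a long-running
// Node.js process, so one visitor's data can leak into another's request.
let currentUser = null;

function badHandler(user) {
  currentUser = user; // lives on past this request's lifetime
  return 'Hello, ' + currentUser.name;
}

// GOOD: pass request-scoped data down through function arguments, so it
// becomes garbage-collectable the moment the request is finished.
function render(user) {
  return 'Hello, ' + user.name; // no reference survives the request
}

function goodHandler(user) {
  return render(user);
}

badHandler({ name: 'Alice' });
console.log(currentUser); // { name: 'Alice' } -- still in memory!
console.log(goodHandler({ name: 'Bob' }));
```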
And that means it will be dead. Unlike PHP, which can fail on one request and still work for subsequent clients, if Node.js suffers a critical error, it will die, and your website will be down for everybody until it’s restarted. This puts additional pressure on developers for better error handling and more stable code — no getting lazy and relying on statelessness to absorb your mistakes. One typo can bring the whole thing down.
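Defensive handling might look like this sketch (function names invented): one bad request gets an error response instead of taking the whole process down with it.

```javascript
// Without the try/catch, a single throwing handler would crash the
// process and take the site down for every client, not just this one.
function safeHandler(path) {
  try {
    return { status: 200, body: renderPage(path) };
  } catch (err) {
    // Log it, answer this one client with an error, keep the process alive.
    console.error('request failed:', err.message);
    return { status: 500, body: '<p>Something went wrong</p>' };
  }
}

function renderPage(path) {
  if (path === '/boom') {
    throw new Error('the typo that would have killed the server');
  }
  return '<h1>' + path + '</h1>';
}

console.log(safeHandler('/hello').status); // 200
console.log(safeHandler('/boom').status);  // 500, and the process lives on
```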
Sounds like fighting the rock wraith in Dragon Age II on Nightmare. Sign me up.
All the best,