
Thursday, 22 September 2011

JavaScript Build Systems

Whew, it’s been a while since I last posted here! In that time I have moved to Malaysia and got married :)

When I originally started working with JavaScript (ad products in particular), I wrote my code in one big JavaScript file, tested everything manually and used jsmin to compress it. This was not an ideal solution, and I wasted a lot of time chasing down bugs caused by missing semicolons, mismatched braces and various typos. Another problem with this approach was that as the code grew, things became harder to find, and I found myself adding large comments at the start and end of sections: //START TOUCH EVENT HANDLERS and //END TOUCH EVENT HANDLERS.

Later on I discovered JSLint, which saved me from a large number of “stupid” bugs, although I was still manually copying and pasting my code into the online linter :-O

Nowadays I use my own custom build system: my code is separated into files that each deal with a specific piece of functionality, and these are concatenated together, checked for errors and minified with a single command.

jQuery Build System

I based my system on jQuery’s build system (which can be found in the jQuery GitHub repository), and it works as follows (a rough sketch follows the list):
  1. make jquery concatenates all of the source files together and outputs the complete JavaScript as one file (looking at the Makefile you can see the source files listed under BASE_FILES, which are then wrapped with an intro and outro script in MODULES)
  2. make lint checks the built file against JSLint and outputs any errors to the console
  3. make min minifies the code using uglify.js
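As a rough sketch of that flow (the file names and tool commands here are made up, not jQuery’s actual Makefile; recipe lines must be indented with tabs):

  BASE_FILES = src/core.js src/event.js src/ajax.js

  jquery: $(BASE_FILES)
      cat src/intro.js $(BASE_FILES) src/outro.js > dist/jquery.js

  lint: jquery
      # run the built file through a local JSLint runner (placeholder command)
      ./run-jslint dist/jquery.js

  min: jquery
      # uglify-js reads a file and writes the minified source to stdout
      uglifyjs dist/jquery.js > dist/jquery.min.js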

My Build System

As I am building a number of different ad products, which often share similar functionality, I have my own build scripts that work in a similar way but can build different combinations of files.

Firstly, I use the product name as the target (e.g. make product.js), with rules in my Makefile for the specific set of files required to build that product (see the sketch below). This lets me share files between products easily, something that used to involve copying and pasting blocks of code!
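A minimal sketch of the idea (the product and file names are invented for illustration):

  SHARED = src/core.js src/utils.js

  banner.js: $(SHARED) src/banner.js
      cat $^ > build/$@

  expandable.js: $(SHARED) src/touch.js src/expandable.js
      cat $^ > build/$@

Here $^ expands to all of a target’s prerequisites and $@ to the target name, so adding a shared module to another product is a one-line change.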

Also, instead of using JSLint I prefer JSHint, as I find its rules more suited to my coding style. I pull in the JSHint repository as a git submodule so that I can update it easily and track the version I am using.
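For anyone unfamiliar with submodules, the setup is just this (the vendor path here is an example, not my actual layout):

  git submodule add git://github.com/jshint/jshint.git vendor/jshint
  git submodule update --init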

Finally, I use the Google Closure Compiler instead of uglify.js (or the various other alternatives). I like this as it compresses my code a great deal (Advanced mode, with the code written to support it), and it provides additional checks such as type checking (via JSDoc comments) and missing-property checks. I use the downloadable JAR file to run it locally; although it takes a few seconds on larger code bases, I feel it is worth it.
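The invocation looks something like this (the input and output names are placeholders):

  java -jar compiler.jar \
       --compilation_level ADVANCED_OPTIMIZATIONS \
       --js build/product.js \
       --js_output_file build/product.min.js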

Other Things

I also have a little Python script that runs a local webserver for me to test builds on. It can configure the JavaScript files from XML config files and includes a basic templating engine so I can test on desktop or mobile sites.
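My script is custom, but if you just need a quick local server to test a build against, the Python 2 standard library already provides one:

  python -m SimpleHTTPServer 8000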

My scripts will also push versioned builds to the live servers, to ease that step too.
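In spirit (the host, paths and versioning scheme below are invented; my real scripts differ), the deploy step is little more than:

  VERSION := $(shell date +%Y%m%d%H%M)

  deploy: min
      scp build/product.min.js user@liveserver:/var/www/js/product-$(VERSION).min.js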

Finally

Although it takes a little while to set up these automated scripts, it really is worth it in the long run: so many bugs are picked up immediately (without opening a browser or a test suite), and a change to a shared file means every product using it can be rebuilt and pick up the improvement in a much shorter time.

I would advise anyone to automate as much of their manual process as possible, as it will make your life easier.

Also, if you are building not just JavaScript but a whole site, then I recommend the HTML5 Boilerplate, as it will manage this kind of process for you along with a lot of other nifty things (CSS compression, server optimisation, etc.).

Monday, 7 March 2011

Testing the performance of WordPress on nginx with loadimpact.com


After I set this site live I decided to test its performance under a bit of load; I wanted to check that my assumptions about nginx (and the fastcgi cache) were correct.

The results: very promising. For 50 simultaneous users (the maximum for a free test) I had a load time of ~530ms for the homepage. Now, I realise this caching setup can only cache a limited number of pages, but what are the chances of more than a few pages ever driving a lot of traffic (what is the chance of any of this site doing so!).

So, the full results from loadimpact.com:

  [loadimpact.com chart: load time holds flat at roughly 530ms as simultaneous users ramp up to 50]
As the line is flat, the server is not under any stress. So unless I start getting considerably more than 50 simultaneous users, or a large number of popular pages, I have nothing to worry about! All of this scalability relies on nginx caching the majority of requests and serving them up instantly, keeping the load off PHP and MySQL.

That said, I did have to tweak the php-fpm config (/etc/php5/fpm/pool.d/www.conf on default Ubuntu) to lower the number of start and idle servers, as it was eating up what little RAM this server has!
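These are the directives to look at; the values below are illustrative for a low-RAM box, not my exact config:

  ; /etc/php5/fpm/pool.d/www.conf
  pm = dynamic
  pm.max_children = 4
  pm.start_servers = 1
  pm.min_spare_servers = 1
  pm.max_spare_servers = 2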

Side note: I have just heard about Django, a Python web framework. It looks really cool, and it would have been really useful for some of the internal tools I have written. Hopefully I will find a project to play with it on soon!

Saturday, 5 March 2011

New Site! php-fpm, nginx and a sprinkle of MySQL

I have had this domain name for over a year now, meaning to grab some hosting and set up a blog. I kept putting it off, mostly because I could not find a good hosting provider (great quality and a great price; I could only ever find one or the other!). I was quite fussy about the hosting I wanted: a virtual machine, so I can play with software as it is released rather than waiting months for a provider to update (that is, IF they offer the software in the first place).

As part of my job I run a lot of servers on Amazon EC2 anyway, and I love how you get your own server to install whatever you like on. When I heard they recently started offering a free tier for a year, I decided to actually set this up. After that year is up, with a reserved instance it comes to $9.62 a month, which is currently a little under £6: cheap! The other benefit is that I can easily scale it up if this site, strangely, becomes popular.


My Setup

  • EC2 micro instance running Ubuntu 10.10 (ami-e59ca991) in EU-West
  • nginx 0.8.54
  • php 5.3.5 (running it via php-fpm)
  • MySQL 5.1.49 (thought about trying 5.5 but have not heard good reports yet)

I went with the micro instance because it was free (duh!) and is more than enough for now. I recently started using nginx for glam.co.uk based on numerous reports of it being more efficient than Apache, and out of the box it noticeably is! I have also enabled the fastcgi cache, so pages that have been viewed in the last 5 minutes are served up immediately to the user. My cache config:

In the http block (/etc/nginx/nginx.conf):

  fastcgi_cache_path /var/cache/fastcgi_cache levels=1:2 keys_zone=ahme:16m inactive=5m max_size=500m;

In the server block (/etc/nginx/sites-available/mysite.conf):

  location ~ \.php$ {
        # initialise $nocache so nginx does not warn about an uninitialized variable
        set $nocache '';
        # skip the cache for commenters and logged-in WordPress users
        if ( $http_cookie ~* "comment_author_|wordpress_logged|wp-postpass_" ) {
            set $nocache 'Y';
        }
        fastcgi_cache ahme;
        fastcgi_cache_key $request_uri$request_method$request_body;
        fastcgi_cache_valid 200 5m;
        fastcgi_pass_header Set-Cookie;
        fastcgi_cache_use_stale error timeout invalid_header;
        fastcgi_no_cache $nocache $query_string;
        fastcgi_cache_bypass $nocache $query_string;

        fastcgi_pass   localhost:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

Basically this just caches any 200 response from a PHP file for 5 minutes, keyed on the requested URL, method (you don’t want to cache a HEAD request and serve the result to a GET request; see the nginx docs) and body. Logged-in users and requests with a query string bypass the cache entirely. I have found this removes the need for a WordPress caching plugin (such as W3 Total Cache or WP Super Cache), and I currently see no need for opcode caching or similar, even on glam.co.uk.

So, if you are looking for a great-value, reliable, powerful host and are not afraid of some command-line work, then EC2 with nginx is the way to go!