Tuesday, 21 July 2015

Web Components with Polymer... it's magic

Lately I've been experimenting with Web Components, using the Polymer library, both to simplify my workflow and to replace internal Flash 'components' that could just be dropped onto the stage.

When I built my proof of concept and it ran for the first time, it felt like witchcraft! All I needed to type was:

<my-video src=""></my-video>

That one tag pulls in my player skin, preroll support (via vast-client-js) and analytics tracking, all hidden inside my custom element in the Shadow DOM (or Shady DOM in most non-Chrome browsers for now).
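For anyone curious, the skeleton of an element like this in Polymer 1.0 looks roughly like the following (the element and property names here are illustrative; the real component also wires in vast-client-js and the analytics):

```html
<!-- Sketch of a minimal Polymer 1.0 element, not my actual component. -->
<dom-module id="my-video">
  <template>
    <style>
      /* Styles here are scoped to the element's (shady) shadow root. */
      :host { display: block; }
      video { width: 100%; }
    </style>
    <video src$="[[src]]" controls></video>
  </template>
  <script>
    Polymer({
      is: 'my-video',
      properties: {
        src: String
      }
    });
  </script>
</dom-module>
```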

I have since built a few more components, and composed together they are so simple yet so powerful. This certainly feels like the future: it will be great to deploy widgets without the iframe overhead, and it makes UI frameworks very powerful and easy to use (see the paper elements as an example).

At the moment I'm using vulcanize to compile all my components into one compressed file, so the overhead is minimal. I plan to migrate to HTTP/2 soon, though, which should remove the need for this step and make things simpler, faster and better cached.
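For reference, the vulcanize step is essentially a one-liner (file names here are hypothetical, and the flags may differ between vulcanize versions):

```shell
# Inline all HTML imports, scripts and styles into a single build file.
vulcanize --inline-scripts --inline-css elements.html > build/elements.html
```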

Thursday, 6 February 2014

Remote Work

I have been working remotely from Malaysia for a US company for the last 3 years. Before that I worked for them from the London office, where I was the only developer.

The main personal benefit is being able to control my environment completely: I can have complete silence, loud music, a standing desk or the sofa. I also spend zero time commuting, so when I finish work I don't have to waste hours of my life stuck in traffic. My work often benefits too, as there is nobody around to distract me (Skype and email are easily ignored) and I can really get in "the zone", as they say.

The downside is that interaction with colleagues is limited and usually has a purpose (e.g. "can you help with this", "can you do that") rather than being informal or spontaneous, which is the kind of chat that can produce great ideas or save hours (e.g. "oh, try it this way"). It's also fun to work with others at times.

Overall I am not sure I have a preference for either. Ideally I would have the flexibility to work in an office amongst colleagues or to work remotely, depending on my mood and the project I am working on.

Sadly being in Malaysia makes this tricky as it's a long way to commute to the US (and very difficult to get a visa!).

Monday, 29 October 2012

Introducing cv.js

A while ago I was reading a post on Lifehacker about a PHP-based CV maker script which separates the content of the CV from its layout. Although PHP works well enough for this, I thought I could build something a little more configurable in Javascript, with a little Markdown thrown in. So I created cv.js.

Basically the whole user config comes from one JSON file, with optional Markdown files for sections that get a little long or need more complex formatting, e.g.

  {
    "personal": {
      "name": "John Smith",
      "phone": "(+44) 012 3456 789",
      "email": "",
      "location": "Delight, UK"
    },
    "intro": { "file": "" }
  }

So the format is really simple, and the "intro" section is just loaded from the referenced Markdown file in the same directory.

This is then loaded into a handlebars template, e.g.

{{#if intro}}
  <section class="intro">
    {{markdown intro}}
  </section>
{{/if}}

I have only created one "basic" template so far, but it is responsive, so it works on both desktop and mobile devices, and it has a slightly modified print template. It is simple, but it works quite nicely.

From this it generates a static HTML file which can then be uploaded to any web server (or Dropbox) and passed around.
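The data flow is easy to sketch. Here a tiny placeholder renderer stands in for handlebars (the function and field names are invented for illustration), just to show the config → template → HTML step:

```javascript
// Toy stand-in for the handlebars step: replace {{key}} placeholders
// with values from the JSON config.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] !== undefined ? data[key] : '';
  });
}

var config = { name: 'John Smith', location: 'Delight, UK' };
var html = render('<h1>{{name}}</h1><p>{{location}}</p>', config);
console.log(html); // → <h1>John Smith</h1><p>Delight, UK</p>
```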

Each template lives in its own subdirectory, so it can have external assets such as images and CSS which will be copied over to the output directory along with the static HTML.

Also included is a local testing server, so changes can be previewed locally without manually rebuilding each time.

Go and see a demo or the source on GitHub.

Thursday, 22 September 2011

Javascript Build Systems

Whew it’s been a while since I last posted here! I have moved to Malaysia and got married in that time :)

When I originally started working in Javascript, on ad products in particular, I wrote my code in one big Javascript file, tested everything manually and used jsmin to compress it. This was not an ideal solution, and I wasted a lot of time chasing down bugs due to missing semicolons, mismatched braces and various typos. Another problem with this approach was that as the code got longer, things became harder to find, and I found myself adding large comments at the start and end of sections: //START TOUCH EVENT HANDLERS and //END TOUCH EVENT HANDLERS.

Later on I discovered JSLint which saved me from a large number of “stupid” bugs, although I was still manually copying and pasting my code into the online linter :-O

These days I use my own custom build system: my code is separated into files that each deal with one piece of functionality, and these are concatenated together, checked for errors and minified with a single command.

jQuery Build System

I based my system on jQuery's (which can be found in the jQuery GitHub repository), which works as follows:
  1. make jquery concatenates all of the source files together and outputs the complete Javascript as one file (looking at the Makefile, you can see the source files listed under BASE_FILES, wrapped with an intro and outro script in MODULES)
  2. make lint checks the built file against JSLint and outputs any errors to the console
  3. make min minifies the code using uglify.js

My Build System

As I am building a number of different Ad products, which often share similar functionality, I have my own build scripts which work in a similar way but with the ability to build different combinations of files.

Firstly, I use the product name as the target (e.g. make product.js), with rules in my makefile for the specific set of files required to build that product. This lets me share files between products easily, something that used to involve copying and pasting blocks of code!
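As a sketch (the file and product names are hypothetical), the makefile rules look something like this — each product target lists just the files it needs, so shared files appear in several lists instead of being copy-pasted around:

```make
SHARED = src/core.js src/events.js

# $^ expands to all prerequisites, $@ to the target file.
product-a.js: $(SHARED) src/product-a.js
	cat $^ > $@

product-b.js: $(SHARED) src/product-b.js src/touch.js
	cat $^ > $@
```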

Also, instead of JSLint I prefer to use JSHint, as I find its rules more suited to my coding style. I pull in the JSHint repository as a submodule so that I can update it easily and track the version I am using.

Finally, I use the Google Closure Compiler instead of uglify.js (or the various other alternatives). I like it because it compresses my code a lot (Advanced mode, with the code written to support it) and it adds extra checks such as type checking (via JSDoc comments) and missing-property checks. I use the downloadable JAR file to run it locally; although it takes a few seconds on larger code bases, I feel it is worth it.
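To illustrate (the function here is a made-up example, not from my code), this is the sort of JSDoc annotation Closure checks — it will warn if a caller passes, say, a string where a number is declared:

```javascript
/**
 * Computes the area of a rectangle.
 * @param {number} width
 * @param {number} height
 * @return {number}
 */
function area(width, height) {
  return width * height;
}

var result = area(3, 4);
console.log(result); // → 12
```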

Other Things

I also have a little python script which will run a local webserver for me to test builds on. This includes the ability to configure the Javascript files from XML config files and a basic templating engine so I can test on desktop or mobile sites.
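The real script does more (the XML configs and templating), but the core of a local test server is only a few lines of stdlib Python — a sketch in modern Python 3 syntax, with port 0 used so the OS picks a free port:

```python
# Minimal local test server sketch (stdlib only); my real script used a
# fixed port and layered config/templating on top of the handler.
import http.server
import socketserver

handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]  # the port the OS actually assigned
print("Serving on http://127.0.0.1:%d" % port)
# httpd.serve_forever()  # blocks; commented out so the sketch exits
httpd.server_close()
```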

My scripts will also push versioned builds to the live servers, easing that step too.


Although it takes a little while to set up these automated scripts, it really is worth it in the long run: so many bugs are picked up immediately (without even opening a browser or a test suite), and a change to a shared file can be rolled out by simply rebuilding every product that uses it, in a much shorter time.

I would advise anyone to automate as much of their manual process as possible, as it will make your life easier.

Also, if you are not just building Javascript but a whole site, then I recommend the HTML5 Boilerplate, as it will manage this kind of process for you along with a lot of other nifty things (CSS compression, server optimisation, etc.).

Monday, 7 March 2011

Testing the performance of WordPress on nginx

After I set this site live, I decided to test its performance under a bit of load; I wanted to check that my assumptions about nginx (and the fastcgi cache) were correct.

The results were very promising: for 50 simultaneous users (the maximum for a free test) I had a ~530ms load time for the homepage. Now, I realise this caching setup can only hold a limited number of pages, but what are the chances of more than a few of them ever driving a lot of traffic (or of the site getting much at all!).

So, the full results:

As the line is flat, that means the server is not under any stress. So unless I start getting considerably more than 50 simultaneous users, or a large number of popular pages, I have nothing to worry about! All of this scalability relies on the fact that nginx will cache the majority of requests and serve them up instantly, therefore keeping the load off PHP and MySQL.

I have had to tweak the php-fpm config (/etc/php5/fpm/pool.d/www.conf on a default Ubuntu install) to lower the number of start and idle servers, though, as it was eating up the little RAM this server has!
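For reference, these are the pool directives involved (the values below are illustrative examples for a low-RAM box, not my exact numbers):

```ini
; /etc/php5/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 4
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
```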

Side note: I have just heard about the Django project which is a Python web-framework. It looks really cool, and would have been really good for some of the internal tools I have written. Hopefully I will find a project soon to play with it on!

Saturday, 5 March 2011

New Site! php-fpm, nginx and a sprinkle of MySQL

I have had this domain name for over a year now, meaning to grab some hosting and set up a blog. I kept putting it off, mostly because I could not find a good hosting provider (great quality and a great price – I could only find one or the other!). I was quite fussy about the hosting I wanted: a virtual machine, so I can play with software as it is released rather than waiting months for a provider to update (that is IF they provide the software in the first place).

As part of my job I run a lot of servers on Amazon EC2 anyway, and I love how you get your own server to install whatever you like on. So when I recently heard they offered a free tier for a year, I decided to actually set this up. After that year is up, with a reserved instance, it comes to $9.62 a month, which is currently a little under £6 – cheap! The other benefit is that I can easily scale it if this site, strangely, becomes popular.

I don’t have a picture of the server, and I didn’t think a picture of a cloud would help. So I found this cool pic of Tux on Flickr

My Setup

  • EC2 micro instance running Ubuntu 10.10 (ami-e59ca991) in EU-West
  • nginx 0.8.54
  • php 5.3.5 (running it via php-fpm)
  • MySQL 5.1.49 (thought about trying 5.5 but have not heard good reports yet)
I went with the micro instance because it was free (duh!) and is more than enough for now. I recently started using nginx based on numerous reports of it being more efficient than Apache – and it noticeably is, straight out of the box! I have also enabled the fastcgi cache, so pages that have been viewed in the last 5 minutes are served immediately to the user. My cache config:

In http (/etc/nginx/nginx.conf):

  fastcgi_cache_path /var/cache/fastcgi_cache levels=1:2 keys_zone=ahme:16m inactive=5m max_size=500m;

In server (/etc/nginx/sites-available/mysite.conf):

  location ~ \.php$ {
        if ( $http_cookie ~* "comment_author_|wordpress_logged|wp-postpass_" ) {
            set $nocache 'Y';
        }

        fastcgi_cache ahme;
        fastcgi_cache_key $request_uri$request_method$request_body;
        fastcgi_cache_valid 200 5m;
        fastcgi_pass_header Set-Cookie;
        fastcgi_cache_use_stale error timeout invalid_header;
        fastcgi_no_cache $nocache $query_string;
        fastcgi_cache_bypass $nocache $query_string;

        fastcgi_pass   localhost:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
  }
Basically this just caches any 200 response from a PHP file for 5 minutes, keyed on the requested URL, method (you don't want to cache a HEAD request and serve that to a GET request – see the nginx docs) and body. Logged-in users and requests with a query string are not subject to the cache. I have found this removes the need for a WordPress caching plugin (such as W3 Total Cache or WP Super Cache), and I currently see no need for opcode caching or similar.

So, if you are looking for a great value, reliable, powerful host and are not afraid of some command line work, then EC2 with nginx is the way to go!