Creating a directions panel with the Google Maps API

Contact and location pages on websites usually fall into the category of ‘useful but boring’. While doing some work recently for a local first aid training company, Help First Aid Training, I wanted to make their new location pages for their regular County Durham and Newcastle area venues just a little more interactive than an address and a static map, so I decided to take a look at what the Google Maps API offered in terms of directions-finding capability. The idea was to offer visitors a panel of customised directions to the training venues, rather than simply a fixed set of instructions.

Help First Aid Directions Darlington to North Shields

You can see the implementation for Help First Aid in action on their venues pages such as this North Shields Training page (to try it out, pop your address or postcode in the panel on the right). It turned out to be pretty easy to build and I think it makes a great enhancement to any page where you want to give your visitors guidance on how to find a location. For instance it makes a great addition to any ordinary contact page where you need visitors to be able to find you in real life.

Here’s a step by step guide on how to build your own version:

1. Sign up for a Google Maps API key

First of all, go to the Google Maps documentation and follow the instructions under ‘Obtaining an API Key’. You’ll need this key to be able to use the API on your site.

2. Create placeholders for your map and directions panels

You’ll need somewhere for your directions and map to go on the page. I’m not aiming to teach css here but there’s nothing special/unusual about laying these out. I’m putting my directions in a simple div…

<div id="directions"></div>

…and my map in another…

<div id="map"></div>

3. Load the maps API

To load the Maps API, include the Google Maps script, passing it the API key you created in step 1. A good place to put this is just before your closing </body> tag. I also pulled in jQuery to help with responding to user actions later.

<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=[YOUR_KEY_HERE]&amp;sensor=false"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>


4. Put your location on the map

Now it’s time to use the Google Maps API to set up a map showing your location. Pop this script after the Google Maps and jQuery scripts you included in step 3.

<script type="text/javascript">

$(function() {

  var directionsService;
  var directionsDisplay;
  var destination = new google.maps.LatLng(55.002327, -1.468102);
  var mapCentre = new google.maps.LatLng(54.99, -1.53);

  var mapOptions = {
    center: mapCentre,
    zoom: 11,
    mapTypeId: google.maps.MapTypeId.HYBRID
  };

  var map = new google.maps.Map(document.getElementById("map"), mapOptions);
  var marker = new google.maps.Marker({
    position: destination,
    map: map,
    title: "Help First Aid Training courses in North Shields"
  });

});

</script>

To explain a little about what’s going on here:

  • We start by defining a couple of locations, one for the directions destination and one for the map centre, with new google.maps.LatLng(). You could just centre the map on your location but sometimes it’s useful to have the map offset a little, e.g. in Help First Aid’s case, centring on their North Shields venue would have meant half the map was out in the sea, so I’ve shifted the centre west a little to include more of Newcastle.
  • Next we define some simple map options including the zoom level. It’s worth experimenting with zoom level to find one which best suits your location and map size.
  • Finally, we use new google.maps.Map() to initialise our map and associate it with our map div, then set a marker at our destination.

The script should execute at this point so it’s worth giving it a try to make sure the map’s as expected before moving on.

5. Set up the directions panel

Now it’s time to set up the directions panel. Add the following to the script from step 4, inside the jQuery ready block, before the closing });

directionsService = new google.maps.DirectionsService();
directionsDisplay = new google.maps.DirectionsRenderer();
directionsDisplay.setMap(map);
directionsDisplay.setPanel(document.getElementById("directions"));

This code initialises the directions service and gives the directions renderer a map and directions panel to work with.

6. Create somewhere for your customer to enter a starting point

You’ll need to collect details of your customer’s starting point for the journey. A simple, one input form will do, such as this one:

<form id="directionsForm" action="">
  <label for="from"><em>Enter your address or postcode for driving directions:</em></label><br/>
  <input name="from" id="from" type="text"/>
  <input type="submit" value="Go" id="directionsSubmit" />
</form>

7. Wire it all together

And finally, when the ‘Go’ button is clicked, we want to get the directions from Google, and show those in the directions panel and on the map.

$("#directionsSubmit").click(function() {
  var request = {
    origin: $("#from").val(),
    destination: destination,
    travelMode: google.maps.TravelMode.DRIVING
  };

  directionsService.route(request, function(result, status) {
    if (status == google.maps.DirectionsStatus.OK) {
      directionsDisplay.setDirections(result);
    }
    else {
      $("#directions").text("Sorry, this address was not found or maybe wasn't specific enough. Please try an alternative.");
    }
  });

  return false;
});

And that’s it. To re-cap, we’ve created divs to hold a map and a directions panel, set our location on the map, created a simple input for the starting point of the journey and looked up directions via the Google Maps API when the ‘Go’ button is clicked.

If you want to double check what bits of the above code go where on the page in the complete solution, take a look at the source of the page over on Help First Aid Training.

Getting Started with Content Marketing

Content based marketing can be a very effective way for businesses to share their expertise and reach online customers. It’s a big subject but the essentials can be easy to get going with and once you master the basics it becomes easier to practice and hone your content creation skills. Natalie from OCOCO Media shares some tips aimed at small businesses on how to get started.

Content marketing has become something of a digital buzzword, used by social media marketers and digital marketing agencies alike – but what does it really mean for you and your business?

In its simplest form, content marketing refers to building an online strategy based around creating high quality original content and sharing it. There’s no getting around the fact that it’s a strong online strategy that’s proven to work – because put simply, content IS king. By creating and sharing content online, you can build authority in your chosen sector, helping to strengthen your brand and gain trust amongst potential customers.

Within the business world, it can be difficult to be heard over all the ‘noise’ of competitors offering the world to customers, with no real evidence to show that their product or service can deliver what it claims. A content marketing approach allows you to demonstrate expertise and support potential customers from their initial contact through to end result sales.

Content marketing as an online strategy can seem so comprehensive that it can be difficult to know where to start. By breaking down your strategy into achievable steps it becomes easier, so here are my top tips for getting started with content marketing:

1. Start a blog – Okay, so this sounds like a big step, but it isn’t! You can set up your own free blog using WordPress – it’s quick and easy, and there are lots of templates for you to use to create a page that looks professional. It’s even possible to customise your WordPress blog so it matches the rest of your website. When it comes to writing, a weekly blog sharing your business thoughts will help you build authority online and engage with customers. Your blog posts can be as short or as detailed as you like, but try to focus on key issues that may be affecting your customers and share knowledge to help overcome them. This will help establish you as an expert in your sector – but it’s not about trying to sell your products immediately. Instead it’s about building trust and playing a slower game by creating content that will drive potential customers to your website and can be shared online.

2. Start reading other people’s blogs – By keeping up with other individuals and organisations sharing their thoughts on your industry, you’re able to stay up to date with news and ahead of the curve. You might also like to comment on blogs and get involved with some online discussions, helping you to network online and develop contacts. Once you have established good relationships, you might like to request expert guest blogs for your website. Most bloggers would be happy to submit something in return for a shout out and backlink, and you’ll reap the content benefits of their work.

3. Start to share – So you’re finding awesome content on other people’s blogs and producing your own high quality, expert blog… Now it’s time to share with the world. You can choose to do this through any social network that works for you, but my social media of choice for businesses is always Twitter. It’s easy to get set up and start Tweeting about relevant industry news – whether it’s from your own website or a great resource you’ve found (and always remember to tag the original author in the post as this will help you build contacts again). By using key #hashtags for your business sector, others looking for information or also sharing expertise will be able to find your tweets, and you’ll build a relevant, engaged following online.

When it comes to a strong content marketing strategy, focus on what you can do to get great quality, shareable content onto your website and make it happen. Share and share alike, and you’ll be on your way to ensuring your brand is seen as customer focused, helpful and an industry leader. Content marketing helps drive traffic and customers to your website, and boosts your SEO and social media. It’s an all-round winner, and well worth investing a little time to explore.

I’d love to hear your thoughts on and experiences of content marketing in the comments below.

Arduino Camera Control and the Power of Simple Programming

I’ve been learning Arduino recently (as a hobby venture rather than a commercial programming one) and from a programmer’s perspective, I’ve been very impressed with what can be done with such a simple bit of kit. Working within its relatively constrained environment has forced me to think in different ways from the usual big enterprise system world that I’m used to spending my days in, and that’s been a refreshing change.

Arduino Uno with camera control prototype

Some early prototyping

My first project has been some hardware for camera shutter control plus a simple state machine implementation to fire the shutter at regular, precise intervals for time lapse photography. Earlier this month I wrote up some basic getting started details in an arduino camera control post over on my photography blog, aimed at photographers interested in going down a similar route. In the week or so since I wrote the original blog post, the little piece of software has grown steadily via half hour coding bursts here and there to allow me to shoot all sorts of increasingly complex scenarios which would have been both error prone and time consuming to do manually.

I think this illustrates quite neatly one of the joys of being able to program even a simple device like this. It shows that the value of being able to code isn’t just to earn a living from it (although that’s nice). Often the more immediate benefit is that you can simply build stuff which helps you out, whether that’s a few scripts to automate tedious stuff in your day to day job or to help you get something that needs to be done accurately right every time.

The end result of this is that more of your human brainpower can be used for what it’s really great at – dealing with more creative and unexpected tasks rather than having to repeat the same thinking time after time. While obviously different people have different aptitudes and coding’s not for everyone, there seems to be increased interest at the moment in basic-level programming as a useful general business skill and I think that’s a good thing. Personally I wouldn’t want to be without it.

Cross Platform Software as a Selling Point

A recent TV ad for Amazon’s Kindle was a bit of a ground breaker as far as I’m concerned in that it actively pushed the cross platform nature of the company’s software as a mainstream selling point. While those of us in the techy community have long seen the benefits of software that doesn’t just run on one OS, I don’t think I’ve seen that used in a TV marketing campaign in this way before. Of course, in this case there’s still the catch that you’re tied to one organisation’s DRM but Amazon aren’t alone at the moment in recognising that there’s a growing demand for software that isn’t limited to just one Operating System. I’m guessing that’s particularly true in the mobile market but it’s becoming more significant for desktop software too.

Looking round the office I’m in at the moment, there are:

  • 2 x Apple Macs running Mac OSX 10.6
  • 2 x Windows 7
  • 1 x Windows XP
  • 2 x Ubuntu Linux
  • 2 x Blackberries
  • 2 x Android phones
  • 1 x iPhone

That diversity would have been really unusual even 5 years ago, but it’s getting more and more common. I think that’s a good thing for end users.

I regularly get asked if one OS is better than another and while I’m happy to advise and don’t tend to sit on the fence, the only honest answer I can give is that they all have different strengths and weaknesses. Knowing what those are and making an informed choice can sometimes bring big productivity benefits. It’s about gaining awareness of the options available and using the best tool for the job, and I think more and more people are becoming aware of those choices.

Of course to make that choice, we need to be not tied to just one operating system in the first place and that’s where cross platform software really helps. As a simple example, I recently helped a couple of users who’d decided to move to Macs (one from Windows Vista and one from a Linux desktop) and wanted to move their mail. Fortunately, both were using a cross-platform email client (in this case Mozilla Thunderbird) and a quick copy of a few files meant they were set up in seconds, with mail settings, passwords, email and inbox totally intact as if nothing had changed.

I do understand though why some folks think that restricting choice makes the world a simpler place and that’s true to an extent, just as only allowing one brand of beer to be sold would make choices simpler on a night out or only building one type of car might make third party spare parts cheaper. But the reality is, different products suit different people and situations. In the software world, cross platform applications have a big part to play in supporting that choice and the productivity benefits that can follow. At the moment I increasingly bump into situations that suggest to me that everyday technology users are realising those benefits and factoring them into their decisions.

Training Course Early Booking Discounts

I’m working with Help Training Courses again at the moment on the next phase of the web site. We have some interesting new features lined up to make it even easier to find the right training at the right price.

Included in the first batch of these changes is a new facility to allow training providers to add early booking discounts in addition to the current last minute training deals. The last minute discounts option has been very popular, with some great deals appearing on courses such as ITIL, First Aid, Safety Training and Presentation Skills. Feedback so far suggests the new early booking discount feature will be just as popular and creates even more opportunities for training providers to fill courses and for students to find a bargain.

A simple servlet filter for Java performance logging

There are plenty of fancy tools for performance testing Java web applications but in addition to thorough load and volume testing before an application goes live, sometimes it’s useful just to leave simple performance monitoring running full time in a production environment. Monitoring the time the server takes to execute requests on your site can help you spot trends that let you anticipate and deal with problems caused by load, volume, unanticipated usage patterns or simply a database in need of some tuning. A proactive approach to performance tuning can bring big wins in customer satisfaction too, allowing you to pre-empt and avoid issues and keep your site running smoothly for your users.

Fortunately Java Enterprise Edition provides an easy mechanism for intercepting every request on the way in and out of the application in the form of a Servlet filter and it’s really simple to use that to create a performance logger. The basic concept here is to capture the time as a request enters an application, then again as it leaves and log the difference between the two. We can do this in the servlet filter’s doFilter method as follows. I’m using a log4j logger to handle the output but obviously any other logging mechanism would work just as well if you prefer something else:

    long startTime;
    long endTime;
    String path = ((HttpServletRequest) request).getServletPath();

    startTime = System.currentTimeMillis();
    chain.doFilter(request, response);
    endTime = System.currentTimeMillis();

    //Log the servlet path and time taken
    perfLogger.info(path + "," + (endTime - startTime));

To finish off the code, I’ve added a couple of configuration options for more flexibility. For starters, I prefer to have a little more control over which URLs get logged than the standard servlet url-pattern mechanism allows. For instance, I may want to only log requests containing the word ’search’ anywhere in the path. The standard servlet url-pattern mechanism doesn’t allow this so I’ve added a parameter ‘url-filter’ to allow full regular expression matching. A second parameter ‘log-category’ sets the log4j logging category to use. Wrapping this into a full example servlet filter we get:

package com.actuanceconsulting.perflog;

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.apache.log4j.Logger;

/**
 * A simple filter to log the time taken to execute a request. Logging is carried
 * out via log4j to give the flexibility to add other data (such as current time)
 * and format the log as required.
 *
 * Copyright John Patrick, Actuance Consulting Limited 2010
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published
 * by the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 * For a copy of the GNU Lesser General Public License,
 * see <http://www.gnu.org/licenses/>.
 */
public class PerformanceLogFilter implements Filter {

 /**
  * An optional regular expression to use as a filter for the servlet path.
  * Gives more flexibility than the standard servlet url-pattern.
  * All requests are logged if not specified.
  */
 private static final String URL_FILTER_PARAM = "url-filter";

 /**
  * An optional log4j category to use. The fully qualified class name
  * of the filter will be used if not specified.
  */
 private static final String LOG_CATEGORY_PARAM = "log-category";

 private Logger perfLogger;
 private String urlFilter;

 public void init(FilterConfig config) throws ServletException {
    String logCategory = config.getInitParameter(LOG_CATEGORY_PARAM);
    if (logCategory == null) {
       logCategory = PerformanceLogFilter.class.getName();
    }
    perfLogger = Logger.getLogger(logCategory);
    urlFilter = config.getInitParameter(URL_FILTER_PARAM);
 }

 public void doFilter(ServletRequest request, ServletResponse response,
 FilterChain chain) throws IOException, ServletException {

    long startTime;
    long endTime;
    String path = ((HttpServletRequest) request).getServletPath();

    if (urlFilter == null || path.matches(urlFilter)) {
       startTime = System.currentTimeMillis();
       chain.doFilter(request, response);
       endTime = System.currentTimeMillis();

       //Log the servlet path and time taken
       perfLogger.info(path + "," + (endTime - startTime));
    }
    else {
       chain.doFilter(request, response);
    }
 }

 public void destroy() {
    //Nothing to see here
 }
}

To get the filter to run, we need to configure it in the web.xml file as follows:

<filter>
  <filter-name>PerformanceLogFilter</filter-name>
  <filter-class>com.actuanceconsulting.perflog.PerformanceLogFilter</filter-class>
  <init-param>
    <param-name>url-filter</param-name>
    <param-value>.*search.*</param-value>
  </init-param>
  <init-param>
    <param-name>log-category</param-name>
    <param-value>perflog</param-value>
  </init-param>
</filter>

<filter-mapping>
  <filter-name>PerformanceLogFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

In this case I’ve set my regular expression for the ‘url-filter’ parameter to only match paths containing ‘search’. For completeness, here’s the log4j config I used too…

<appender name="PerformanceFileAppender" class="org.apache.log4j.FileAppender">
  <param name="Threshold" value="DEBUG"/>
  <param name="File" value="../logs/performance.log"/>
  <param name="Append" value="true"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss},%m%n"/>
  </layout>
</appender>

<category name="perflog" additivity="false">
  <priority value="INFO"/>
  <appender-ref ref="PerformanceFileAppender"/>
</category>

…and that produces output as shown below. The CSV format of this output allows for easy analysis of the results but obviously you can choose whatever format suits you best.

2010-06-19 11:26:20,/search,81
2010-06-19 11:26:25,/advanced-search-input,66
2010-06-19 11:26:30,/advanced-search,62
2010-06-19 11:26:39,/search,56
2010-06-19 11:26:43,/search,38
2010-06-19 11:26:43,/search,39

And that’s all there is to it. It’s a simple bit of code but it can really add value when run over a period of live operation. Being able to see historical performance and spot trends can be a powerful diagnostic tool.
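As an illustration of how little work that analysis needs, here’s a sketch (in JavaScript rather than Java, purely for brevity — the function name is my own invention) that averages the logged times per path from the CSV format above:

```javascript
// Compute the average request time per path from log lines in the
// 'timestamp,path,milliseconds' format shown above.
function averageByPath(lines) {
  var totals = {};
  var counts = {};
  lines.forEach(function(line) {
    var fields = line.split(",");
    var path = fields[1];
    var millis = parseInt(fields[2], 10);
    totals[path] = (totals[path] || 0) + millis;
    counts[path] = (counts[path] || 0) + 1;
  });
  var averages = {};
  for (var path in totals) {
    averages[path] = totals[path] / counts[path];
  }
  return averages;
}

var sample = [
  "2010-06-19 11:26:20,/search,81",
  "2010-06-19 11:26:39,/search,56"
];
console.log(averageByPath(sample)); // { '/search': 68.5 }
```

The same one-liner-per-path idea works just as well in a spreadsheet or with awk; the point is simply that the comma-separated format makes the log trivially machine-readable.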

Search that (nearly) passes the Turing test

After spending a fair bit of time over the past few months developing a search facility that was a bit more ‘human’ in its responses than a simple text-based search would be, I was interested to stumble on this discussion on Applying Turing’s Ideas to Search by John Ferrara. In my case, by working in just one niche (training courses), the problem of creating more intelligent responses that the article discusses became more tractable. Even relatively simple implementations of some of the ideas gave some pretty powerful usability advantages and a far less frustrating experience.

As a simple example, a search containing the word ‘free’ such as ‘free training’ clearly means to us humans that we want to see a price of zero, so matching ‘free climbing’ and ignoring £0 in the results clearly seems completely mental to the average user. The fix is pretty simple though. By enriching our search index with extra ‘implied’ data (like including the word ‘free’ when the price is zero) and choosing careful weightings for this data, the results quickly become far less ‘dumb’ and even at times appear surprisingly human. Obviously not always 100% perfect, but definitely better.
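As a rough sketch of the idea (in JavaScript for illustration only — the field and function names here are my own, and a real search engine would also let you weight the implied terms differently from the title terms):

```javascript
// Build the list of terms to index for a course, including 'implied'
// terms derived from structured data rather than the visible text.
// The course object shape here is a hypothetical example.
function buildIndexTerms(course) {
  var terms = course.title.toLowerCase().split(/\s+/);

  // A price of zero implies the word 'free' even if the title never
  // says so, letting a search for 'free training' match it.
  if (course.price === 0) {
    terms.push("free");
  }

  return terms;
}

var course = { title: "Introduction to First Aid", price: 0 };
console.log(buildIndexTerms(course)); // [ 'introduction', 'to', 'first', 'aid', 'free' ]
```

The ‘free climbing’ problem is then handled at the weighting stage: if the implied term carries more weight than an incidental title match, the £0 courses rise to the top where a human would expect them.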

Automating our understanding of phrases (rather than simply isolated words) fascinates me too. I did a search today on a stock photo library for ‘Mac keyboard and mouse’. The intent of that search is clear enough to us humans but the starkly Turing-failing response was “By ‘Mac’ do you mean Apple Macintosh or Waterproof Clothing; By ‘keyboard’ do you mean computer input device, synthesizer or piano; By ‘mouse’ do you mean rodent or computer mouse?” Er, yes, I want a picture of a small rodent in a raincoat playing piano please.

That challenge for clarification is a reasonable enough question for a machine to ask but us humans know that when ‘keyboard and mouse’ are mentioned together in the same phrase, we’re pretty certain it doesn’t mean a piano-playing rodent. And what’s more, when keyboard and mouse are mentioned together with ‘Mac’, we know we’re not going to be wearing the mac.
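That intuition can be approximated surprisingly cheaply. Here’s a toy sketch (the sense lists and names are invented purely for illustration) that scores each possible meaning of an ambiguous word by how many of its ‘clue’ words appear elsewhere in the same query:

```javascript
// Toy word-sense table: each sense of an ambiguous word lists
// other words whose presence in the query suggests that sense.
var senses = {
  mouse: [
    { meaning: "computer mouse", clues: ["keyboard", "mac", "usb", "click"] },
    { meaning: "rodent",         clues: ["cheese", "trap", "pet"] }
  ]
};

// Pick the sense whose clue words co-occur most often in the query,
// or null if nothing in the query helps disambiguate.
function disambiguate(word, query) {
  var words = query.toLowerCase().split(/\s+/);
  var options = senses[word] || [];
  var best = null;
  var bestScore = 0;
  options.forEach(function(option) {
    var score = option.clues.filter(function(clue) {
      return words.indexOf(clue) !== -1;
    }).length;
    if (score > bestScore) {
      best = option.meaning;
      bestScore = score;
    }
  });
  return best;
}

console.log(disambiguate("mouse", "Mac keyboard and mouse")); // "computer mouse"
```

Only when no sense scores at all does the interface need to fall back to asking the user — which is exactly the behaviour a human assistant would show.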

What really fascinates me about all this isn’t really the deep, deep theory of it all (I’m a pragmatist at heart), but simply that a recognition and awareness of the weaknesses of ‘pure’ computer logic when we’re designing computer interfaces can quickly lead to huge steps forward in the usability of a system. By putting some thought into covering predictable and reasonable human expectations in a user interface, it’s relatively easy to avoid forcing your users to think like a computer, and even create a surprisingly positive experience.

From a personal perspective, what is often surprising when I start down this line of thinking is that relatively small changes and modest increases in effort can bring big usability results. Even attempting (and inevitably failing) to make your user interface pass the Turing test still gives some real advantages to your users.

New Help Training Courses site goes live

Help Training Courses

For those of you following the progress of Help Training Courses, the web site went into public beta earlier this week. The site started as a collaboration between Actuance and successful local training startup Help First Aid Training back in November but quickly grew into a full new venture in its own right. It’s been a fun project to work on, with the main technical focus being a highly tailored search engine to pull the right results from the course listings and automation of travel-industry-style last minute discounts.

Our motto quickly became ‘more bums on seats’ and that’s exactly what the site aims to achieve, i.e. filling part-full courses and finding the training and deals that people actually want.

If you’re a training provider, please feel free to register and give us some early feedback.


Browser javascript speed comparison – IE, Firefox, Chrome, Safari and Opera

I did some work recently on an interactive mapping interface which included optimising some processor-intensive Javascript for increased performance. Along the way I was surprised to learn just how much Javascript performance varies between current-generation browsers. I was expecting some difference, and definitely expected older browsers to be slower, but the gap between even modern Javascript engines turned out to be huge. As part of the work, the clients asked for recommendations on a fast browser so I decided to run some quick controlled tests to compare browser speed in this particular application. Here’s what I found…

Running the tests

To remove external influences such as network time from the tests, I ran everything locally and added some timing code around a particularly heavy bit of Javascript processing which wasn’t downloading data. The code was filtering and processing thousands of geographical points and adding them to an OpenLayers driven map, but the detail isn’t too important. The important point is that exactly the same code was run in each browser.
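The timing wrapper itself needs nothing fancy; something along these lines is enough (a sketch, with an invented stand-in for the heavy point-processing being measured):

```javascript
// Time a function call using simple wall-clock timestamps and
// return the elapsed milliseconds.
function timeIt(fn) {
  var start = new Date().getTime();
  fn();
  return new Date().getTime() - start;
}

// Stand-in for the real point-filtering and map-plotting work.
var elapsed = timeIt(function() {
  var total = 0;
  for (var i = 0; i < 1000000; i++) {
    total += Math.sqrt(i);
  }
});
console.log("Processing took " + elapsed + "ms");
```

Wall-clock timing like this is coarse, but for multi-second workloads run several times on a near-idle machine it’s plenty accurate enough to compare browsers.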

I used recent builds of the following browsers:

  • Internet Explorer 8
  • Firefox 3.6
  • Chrome 4.1
  • Safari 4.0
  • Opera 10.5

Each browser was tested on the same machine under Windows XP 32 bit with a 2.4GHz dual core CPU and 4GB RAM (slightly less available due to the 32 bit limit). Minimal other processes were running and the CPU was near idle between tests. For each browser, an untimed run was carried out first to allow the browser a fair chance to cache static resources, then three timed runs were carried out, monitoring CPU to ensure it was ‘idle’ at no more than a few percent before and after each test and checking that memory didn’t max out (in reality there was at least 2GB of memory free during the tests).


The raw timings in seconds were:

Browser        Run 1   Run 2   Run 3
IE 8           24.34   22.17   28.36
Firefox 3.6     3.39    3.44    3.38
Chrome 4.1      1.68    1.69    1.86
Safari 4.0      1.08    1.41    1.37
Opera 10.5      3.21    3.69    3.65

And showing that graphically in speed order from fastest to slowest…

Average timings for each browser in seconds (shortest bar is fastest)

While Javascript performance isn’t the only reason to choose a browser, it is an increasing factor when running modern interactive web applications and you can see that in my application at least, there’s a huge difference, with Internet Explorer being an order of magnitude behind the competition. Safari takes the performance lead in my case, followed closely by Google Chrome.

Every application’s different though so if you have any timings of your own that agree with or differ from these, please feel free to comment with your own results.