Backbone SEO

Your app needs to speak the machines' language.

Published / By Travis Wimer

Single-page web apps created with frameworks like Backbone.js can offer a very pleasant user experience, but that is only true for humans. If your site needs to be machine-readable, your app needs to speak the machines' language.

There are many SEO strategies that can be used, but this post will focus specifically on Backbone SEO.

Here are the topics that will be covered:

  1. Using relevant URLs with pushState
  2. Using PhantomJS to generate HTML snapshots
  3. Using functional hyperlinks
  4. Providing meta resources for social media

Using relevant URLs with pushState

One major benefit of using a framework like Backbone is its routing system. Combined with the HTML5 pushState() method, it allows you to easily implement semantic URLs that are both easy on the eyes and relevant to search engine crawlers.

The pushState() method is simply a way to update the browser URL address without actually loading the page. This is built into Backbone, so all you have to do is include the line below:

Javascript
Backbone.history.start({ pushState: true });
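Behind the scenes, Backbone's router turns route strings like `posts/:id` into regular expressions and matches them against the URL. As a rough illustration of that idea, here is a simplified sketch (not Backbone's actual implementation, and only handling `:param` placeholders):

```javascript
// Convert a Backbone-style route string (e.g. "posts/:id") into a
// regular expression, then extract named parameters from a URL path.
function matchRoute(routePattern, path) {
	var paramNames = [];
	// Replace each ":param" placeholder with a capturing group
	var regexSource = routePattern.replace(/:(\w+)/g, function (_, name) {
		paramNames.push(name);
		return "([^/]+)";
	});
	var match = path.match(new RegExp("^" + regexSource + "$"));
	if (!match) return null;
	// Map captured values back to their parameter names
	var params = {};
	paramNames.forEach(function (name, i) {
		params[name] = match[i + 1];
	});
	return params;
}
```

With real Backbone you simply declare something like `routes: { "posts/:id": "showPost" }` in your router and it does this matching for you.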

Using PhantomJS to generate HTML snapshots

Search engines and Javascript are like oil and water. A Backbone app by itself cannot be rendered by search engine crawlers. This means you will need to serve the fully rendered version of the page when a URL is requested.

Basically, if you were to turn off Javascript in your browser, the site should still be functional enough to view content and links.

First, you will need to download the PhantomJS binary.

Remember, if your development machine and server don't use the same OS, you will need to download the corresponding one for each.

To utilize PhantomJS you need to create a Javascript file that will send commands to the PhantomJS API. Here is the simplest example of a script you might use:

Javascript
// Retrieve command line arguments specifying
// the URL to be loaded and the full path where
// the snapshot file should be saved
var args = require("system").args;
var myURL = args[1];
var snapshotFilePath = args[2];

// PhantomJS API for loading pages
var page = require("webpage").create();

// Handle loading errors
page.onError = function (msg, trace) {
	console.log("Error loading page:");
	console.log(msg);
	phantom.exit(); // Terminate the process
};

// Load the necessary page
page.open(myURL, function (status) {
	if (status === "success") {
		// Retrieve the URL's HTML content
		var pageHTML = page.evaluate(function () {
			// If you need to make any DOM manipulations, you can
			// do that here using the `document` object.
			return document.documentElement.outerHTML;
		});

		// Save file to system
		var fs = require("fs");
		fs.write(snapshotFilePath, pageHTML, "w");

		phantom.exit(); // Terminate the process
	} else {
		// Also exit on failure, so the process doesn't hang
		console.log("Failed to load page: " + myURL);
		phantom.exit(1);
	}
});

Now you need to create a script that will make calls on the command line to run this script through PhantomJS. You can do this in any language you want, but for this example I'm going to use PHP:

PHP
<?php
function createSnapshot( $urlToLoad, $snapshotFilePath ){
	// Escape the arguments so unexpected characters can't break
	// (or inject into) the shell command
	$phantom = exec( 'path/to/phantomjs path/to/PhantomJsScriptFile.js '
		. escapeshellarg( $urlToLoad ) . ' ' . escapeshellarg( $snapshotFilePath ) );
}
?>

You want your HTML snapshots to stay up to date, so make sure this script runs any time content changes. You could also create a cron job that runs periodically to keep things current. The specifics of how you implement this system are up to you.

As an example, let's say your website has a blog. When you add a new blog post, you want an HTML snapshot to be created for the new page. A snippet of your backend code might look something like this:

PHP
function addNewBlogPost( $postTitle, $postContent ){
	// Remove spaces from title to use as URL (this isn't actually secure)
	$postURL = preg_replace( '/\s+/', '-', trim( $postTitle ) );

	// Send CREATE query to database
	$db_result = $blog->create( $postTitle, $postContent, $postURL );
	if( $db_result ){
		createSnapshot( $postURL, 'path/to/snapshot/directory/'.$postURL );
	}
}
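As the comment above notes, the space-to-hyphen replacement is intentionally naive. If you want slugs that are genuinely URL-safe, something like the following is closer to what you'd want (a sketch, shown here in JavaScript; adapt it to your backend language):

```javascript
// Turn an arbitrary post title into a URL-safe slug:
// lowercase, strip punctuation, collapse whitespace to single hyphens.
function slugify(title) {
	return title
		.toLowerCase()
		.trim()
		.replace(/[^a-z0-9\s-]/g, "") // drop punctuation and symbols
		.replace(/[\s-]+/g, "-")      // collapse spaces/hyphens to one hyphen
		.replace(/^-+|-+$/g, "");     // trim leading/trailing hyphens
}
```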

It should be noted that your Google rank can be negatively affected if your site serves different content to crawlers than to average visitors (a practice known as cloaking). I don't have any first-hand experience with this, but it is a good reason to keep your static HTML snapshots accurate.

Now that you have snapshots created, you need to start serving them up. The easiest way to handle various URLs for a single-page-app is to include URL rewrites in an .htaccess file. This enables you to always load a single script regardless of what URL was accessed on your domain. Since you're making a Backbone app, you are probably already doing this, but now you will need to make changes so your snapshots can work.

You want to pass the requested URL to your script, so you need a rule like this (in a real setup you would also add RewriteCond rules so existing files and assets are served directly):

RewriteRule ^(.*)$ main.php?url=$1 [QSA]

Now, anytime a user accesses a URL, the server will instead load main.php, passing in the full URL as the url GET parameter.

In the main.php file, you will load the snapshot for the provided URL, which should then run the code to initialize Backbone. Here is how that might look:

PHP
$url = $_GET['url'];
// Strip traversal segments so a crafted URL can't escape the snapshot directory
$url = str_replace( '..', '', $url );
$snapshot_path = "path/to/snapshots/" . $url . ".html";
echo file_get_contents( $snapshot_path );

This assumes that you saved your snapshot files in a hierarchy identical to your site's actual URLs. In production, you will want to have a default snapshot that loads if the URL is not the location of a route (basically a 404 page).
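The URL-to-snapshot-path mapping can be expressed as a small pure function. Here is a sketch in JavaScript (the `404.html` fallback name is just an assumption; use whatever your default snapshot is called), with a guard so traversal segments like `..` can't escape the snapshot directory:

```javascript
// Map a requested URL path to a snapshot file path, falling back to a
// default snapshot when the path is empty after cleaning.
function snapshotPathFor(urlPath, snapshotDir) {
	// Drop empty, ".", and ".." segments so the path can't walk
	// out of the snapshot directory
	var clean = urlPath.split("/").filter(function (segment) {
		return segment !== "" && segment !== "." && segment !== "..";
	}).join("/");
	if (clean === "") return snapshotDir + "/404.html";
	return snapshotDir + "/" + clean + ".html";
}
```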

Using functional hyperlinks

Making sure your Backbone route links work even without Javascript is actually fairly easy.

All you have to do is add normal hyperlinks to your templates, then add in some code to intercept all the hyperlink clicks.

So, in the script where you initialize your backbone app, you will need to add something like this:

Javascript
// Intercept click on any link without the data-bypass attribute
$(document).on("click", "a:not([data-bypass])", function (evt) {
	// extract link info
	var href = {
		prop: $(this).prop("href"),
		attr: $(this).attr("href"),
	};
	var root = location.protocol + "//" + location.host;

	// Only use Backbone route if it is an internal URL
	if (href.prop && href.prop.slice(0, root.length) === root) {
		// Prevent normal link behavior
		evt.preventDefault();

		// Trigger the backbone route
		Backbone.history.navigate(href.attr, true);
	}
});
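The same-origin check in that handler is easy to pull out into a helper you can unit-test. A minimal sketch (it mirrors the prefix comparison above, so it shares that comparison's limitations):

```javascript
// Return true when an absolute href points at the same origin as `root`
// (e.g. "http://example.com"), meaning Backbone should handle the route.
function isInternalLink(href, root) {
	return Boolean(href) && href.slice(0, root.length) === root;
}
```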

This code was adapted from a Stack Overflow answer, originally attributed to Tim Branyen; the original source has since been removed from GitHub.

Providing meta resources for social media

In recent years, social media has become extremely relevant to SEO. Getting your site shared on social media can drive a lot of traffic and can even improve your search engine ranking.

When your site is shared on a social media platform, the service will scrape your page to create a relevant title, description, image, and other information. By adding meta tags that specify this information, you can optimize the content that is displayed.

Each social media service has its own meta tag implementation, but this example will only use Facebook's tags. The process is the same for Twitter, Google+, and other platforms; they just require different tag attributes.

For a full understanding of Facebook's Open Graph meta tags, see Facebook's Open Graph documentation, but here is a basic example of what you might use:

HTML
<meta property="og:type" content="website" />
<meta property="og:title" content="My fancy website" />
<meta property="og:description" content="A place for all things fancy" />
<meta property="og:image" content="http://myfancywebsitedomain.com/img/fancy.png" />

The difficulty of using these with Backbone is that, like Google's crawlers, Facebook won't actually render your web page. Unless you want the same meta tags for every one of your URLs, this is a problem.

Luckily, the snapshot implementation discussed earlier will solve this problem as well, but now you will need to make sure unique tags are included.

This could be achieved dynamically on the server-side, but that will likely require additional database queries and will slow down your initial page load.

If your Backbone models/collections are already retrieving the information you want to use for these tags, then the simplest solution is to just dynamically update the meta tags with Javascript. This is a basic example of how you might do that:

Javascript
var setMetaTags = function (title, description, image) {
	// Note: Open Graph tags use the `property` attribute, not `name`
	$('meta[property="og:title"]').remove();
	$("head").append($("<meta>", { property: "og:title", content: title }));

	$('meta[property="og:description"]').remove();
	$("head").append($("<meta>", { property: "og:description", content: description }));

	$('meta[property="og:image"]').remove();
	$("head").append($("<meta>", { property: "og:image", content: image }));
};

Then in each of your routes' main views, you would include something like this:

Javascript
var myView = Backbone.View.extend({
	initialize: function (options) {
		// Backbone passes the constructor's options hash to initialize
		this.model = new FancyModel({
			id: options.modelId,
		});
	},
	render: function () {
		// Do all your important view stuff
		// ...

		setMetaTags(this.model.get("title"), this.model.get("description"), this.model.get("image"));
	},
});

Updating the meta tags this way doesn't generally accomplish anything by itself. This method works because these dynamic changes will show up in your HTML snapshots, which are the only data social media scrapers will read.
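One last pitfall: if meta tag markup is ever built by string concatenation, quotes or angle brackets in a model's title can break out of the attribute. A small escaping helper (a sketch) avoids that:

```javascript
// Escape characters that would break out of an HTML attribute value.
function escapeAttr(value) {
	return String(value)
		.replace(/&/g, "&amp;")
		.replace(/"/g, "&quot;")
		.replace(/</g, "&lt;")
		.replace(/>/g, "&gt;");
}

// Build a complete Open Graph meta tag as an HTML string.
function ogMetaTag(property, content) {
	return '<meta property="og:' + property + '" content="' + escapeAttr(content) + '" />';
}
```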

Anyway, I hope you enjoyed reading about Backbone SEO. This is my first attempt at a blog post, so I'd love some feedback.

If you have any suggestions or noticed anything wrong in my post, please leave a message in the comments. Thanks for reading.
