Intelligent State Handling
Hashbangs (#!) and hashes (#) are not ideal; with a little bit of educated simplicity you can achieve better results: a better experience for your users, as well as better compatibility, accessibility and maintainability. This article covers the problems and their solution.
- Your website will require tedious routing, mapping and escaping on your application's side, which breaks the traditional web architecture 1, 2:
  - Have the traditional URL that we are used to, http://mywebsite.com/page1, redirect to http://mywebsite.com/#!page1
  - Code an onhashchange event which hooks into http://mywebsite.com/#!page1 and sends an AJAX request off to some custom server-side code made to handle that AJAX (see the sketch after this list)
  - Ensure that http://mywebsite.com?_escaped_fragment_=page1 returns exactly what we would have traditionally expected to find at http://mywebsite.com/page1, and have it accessible to search engines
- Your website will no longer work for js-disabled users, and is no longer crawlable by search engines other than Google (a sitemap will have to be provided to them).
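To make that extra plumbing concrete, here is a rough sketch of the kind of client-side glue a hashbang site ends up writing; the /ajax/ endpoint and response format below are purely illustrative, not part of any real site:
// Illustrative sketch only - the /ajax/ endpoint and response shape are made up
window.onhashchange = function(){
	// e.g. "#!page1" -> "page1"
	var fragment = document.location.hash.replace(/^#!/,'');
	// Fetch that fragment's content from some custom server-side endpoint
	$.get('/ajax/'+fragment,function(data){
		$('#content').html(data);
	});
};
// The server must additionally serve the same content at
// http://mywebsite.com?_escaped_fragment_=page1 so that Google's crawler can index it.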
These issues are unavoidable if hashes are used:
- There are now two URLs for the same content:
  - http://twitter.com/balupton and http://twitter.com/#!/balupton
  - http://mywebsite.com/page1 and http://mywebsite.com/#/page1
- URLs get polluted if we did not start on the home page:
  - http://www.facebook.com/balupton#!/balupton?sk=info
  - http://mywebsite.com/page1#/page2
- If a user shares a hashed URL with a js-disabled user, that user will not get the right page.
- Performance and experience issues when a hashed URL is loaded:
  - When a user accesses http://mywebsite.com/page1#/page2 the browser starts on http://mywebsite.com/page1, then does the AJAX request and loads in page2 - causing two loads for the initial access instead of just one.
  - This is an experience issue: the user is either stuck on a "loading" page, or they start scrolling the initial page only for it to disappear and change to another page.
These issues are generally coupled with the use of hashes, despite just being a result of over-engineering, and can be simply avoided:
- Using the hashbang and inheriting its problems.
- Having no support for the traditional URL at all; users are forced to use the hashed URL, disabling the site for non-js users and search engines:
  - http://twitter.com/balupton forces a redirect to http://twitter.com/#!/balupton
- Coding custom and separate AJAX controller actions on the client and server side, breaking DRY and graceful degradation best practices.
There is absolutely no need for the hashbang; it can be put down to over-engineering on Google's behalf. The following snippet of code is all that your traditional website needs in order to use hashes and provide rich AJAX experiences, while supporting search engines, js-disabled users and even Google Analytics:
// Prepare our Variables
var
	$content = $('#content'),
	rootUrl = document.location.protocol+'//'+(document.location.hostname||document.location.host);

// Ajaxify our Internal Links
$('a[href^="/"],a[href^="'+rootUrl+'"]').bind('click',function(event){
	var relativeUrl = $(this).attr('href').replace(rootUrl,'');
	document.location.hash = relativeUrl;
	event.preventDefault();
	return false;
});

// Hook into Hash Changes: e.g. http://mywebsite.com/#/page1
$(window).bind('hashchange',function(){
	var
		relativeUrl = '/'+document.location.hash.replace(/^#\/?/,''),
		fullUrl = rootUrl+relativeUrl;
	// Ajax Request the Traditional Page: e.g. http://mywebsite.com/page1
	$.get(fullUrl,function(data){
		// Find the content in the page's html, and apply it to our current page's content
		$content.html($(data).find('#content').html());
		// Inform Google Analytics of the change
		if ( typeof pageTracker !== 'undefined' ) {
			pageTracker._trackPageview(relativeUrl);
		}
	});
});
What does this code do?
- When http://mywebsite.com/page1 is accessed, it works just as it would traditionally - so search engines and js-disabled users are naturally supported. This is without any tedious server-side routing, mapping or escaping; you've coded your website just as you would normally.
- When http://mywebsite.com/#/page1 is accessed, it performs an AJAX request to our traditional URL http://mywebsite.com/page1, fetches the HTML of that page, and loads that page's content into our existing page.
So already we have a crawlable AJAX solution accessible by search engines and js-disabled users without any server-side code. Take that, Google!
So the above is great, but it still fetches the entire HTML of each page it makes an AJAX request for - when really we just need the content of the page we want (the template without the layout). Let's utilise the following server-side code in our page action:
<?php
// Our Page Action
public function pageAction ( ) {
	// Prepare our variables for our view
	// ...
	// Handle our view
	return $this->awesomeRender('page.html');
}

// Render Helper
public function awesomeRender ( $template ) {
	// Get the full template path
	$template_path = ...;
	// Render the template
	$template_html = $this->view->render($template_path);
	// Check for the XHR header
	if ( IS_XHR ) {
		// We are an AJAX request, return just the rendered template as JSON
		$this->sendJson(array('content' => $template_html));
	}
	else {
		// Wrap the template HTML with the layout and proceed as normal
		// ...
	}
	// Done
}
What we do here is: if http://mywebsite.com/page1 is requested normally, we treat it just as normal, rendering it with the layout; if it is requested via AJAX, we return just the rendered template in a JSON response. This can easily be extended so we can send JSON data variables along with the rendered content. In fact, jQuery Ajaxy has supported these solutions out of the box since July 2008, as well as having a Zend Framework Action Helper to make these server-side optimisations easier and more powerful (supporting sub-pages/sub-templates, data attaching, caching, etc.).
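For completeness, here is a sketch of how the hash-change handler shown earlier could consume that JSON response instead of parsing the full page HTML; it assumes the {'content': ...} response shape from the awesomeRender example and reuses the $content, fullUrl and relativeUrl variables from that earlier snippet:
// Sketch only - assumes the server replies with {"content":"..."} for XHR requests,
// as in the awesomeRender example above
$.ajax({
	url: fullUrl,
	dataType: 'json', // jQuery sends the X-Requested-With: XMLHttpRequest header for us
	success: function(response){
		// Apply just the rendered template to our content area
		$content.html(response.content);
		// Inform Google Analytics of the change
		if ( typeof pageTracker !== 'undefined' ) {
			pageTracker._trackPageview(relativeUrl);
		}
	}
});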
So right now we have a crawlable AJAX solution which is also incredibly optimised. Though it still suffers from the problems coupled with hashes - which are unavoidable as long as we still use hashes.
Recently the HTML5 History API came out, which is literally our saviour - it solves the issues coupled with hashes once and for all. The HTML5 History API allows us to modify the URL directly, and attach data and titles to it, all without changing the page! Yay! So let's look at what our updated code example looks like:
// Prepare our Variables
var
	$content = $('#content'),
	rootUrl = document.location.protocol+'//'+(document.location.hostname||document.location.host);

// Ajaxify our Internal Links
$('a[href^="/"],a[href^="'+rootUrl+'"]').bind('click',function(event){
	var $this = $(this), url = $this.attr('href'), title = $this.attr('title')||null;
	window.History.pushState(null,title,url);
	event.preventDefault();
	return false;
});

// Hook into State Changes
$(window).bind('statechange',function(){
	var
		State = window.History.getState(),
		url = State.url,
		relativeUrl = url.replace(rootUrl,'');
	// Ajax Request the Traditional Page
	$.get(url,function(data){
		// Find the content in the page's html, and apply it to our current page's content
		$content.html($(data).find('#content').html());
		// Inform Google Analytics of the change
		if ( typeof pageTracker !== 'undefined' ) {
			pageTracker._trackPageview(relativeUrl);
		}
	});
});
So far, though, the HTML5 browsers each handle the HTML5 History API a little bit differently; a pessimist could view this as a blocker and a call for defeat, though an optimist could come along and create a project called History.js, which provides a cross-compatible experience between HTML5 and (optionally) HTML4 browsers, fixing all the bugs and inconsistencies between them. In fact, the code above already works perfectly with History.js - so bye bye learning curve, you're all set to go already.
Okay okay... so what about HTML4 browsers - wouldn't they miss out on all this awesome HTML5 History API awesomeness? Well, no and yes - it depends. This is where you need to make a serious decision and give it a lot of consideration. The question you have to ask yourself is: what is more important to me - supporting the rich web 2.0 AJAX experience in both HTML5 and HTML4 browsers while incurring the issues coupled with hashes when the site is accessed by an HTML4 user, or avoiding those issues by not supporting a rich web 2.0 AJAX experience in HTML4 browsers? That is a decision only you can make, based on your website's use cases and audience.
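For example, if you decide HTML4 browsers should simply fall back to traditional full page loads, you can detect this at runtime via History.js's enabled flag; roughly (the bail-out behaviour shown here is just one reasonable choice):
// Rough sketch: skip the ajaxification entirely when History.js reports it is not enabled
// (e.g. an HTML4 browser when you have chosen not to include the HTML4 fallback)
(function(window){
	var History = window.History;
	if ( !History || !History.enabled ) {
		// Links keep working as traditional full page loads
		return;
	}
	// Otherwise bind our statechange handler as in the example above
	History.Adapter.bind(window,'statechange',function(){
		var State = History.getState();
		// ... AJAX in the new content, as before
	});
})(window);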
Great, so all I need to do is use History.js with the code above and I've solved life? Yep. And if the issues coupled with hashes aren't a biggie for me, I can support HTML4 browsers too? Yep. And if I want to further optimise the AJAX responses, I now know how? Yep. Well blimey, that's awesome. Thanks :)
History.js is as stable as it gets right now. The focus going forward is on CMS and framework plugins (like the Zend Framework Action Helper mentioned before) to make the process of server-side optimisation easier, as well as javascript helpers to let you do the javascript side in one line of code while still supporting advanced use cases such as sub-pages. These are all under active development by Benjamin Lupton (his contact details are in the footer). If you'd like to speed up development of the server-side plugins, get in contact with him and he'll be sure to help you out :)
Any comments, concerns, feedback, want to get in touch? Here are my details.
- Website: http://balupton.com
- Email: contact@balupton.com
- Skype: balupton
- Twitter: balupton
Copyright 2011 Benjamin Arthur Lupton. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Australia licence (CC BY-SA 3.0).