
Not Taking Friends With You – or how Facebook and Other Social Networks Ignore Redirects


From a developer's perspective, Facebook really p**ses me off. Things seem to change quite a bit, and as my time isn't focused 100% on Facebook work (I would go so far as to say I'm a casual developer these days), I get surprised when something changes. I mean, how hard can it be to email registered Facebook developers (or just those who have created apps) to keep us informed as things change?

Anyhow, the above was just a little rant to give you a feel for my mood when it comes to Facebook. It is however a different Facebook annoyance that I’d like to forewarn people about as well as ask Facebook whether they would consider a change that I (and probably many others before me) propose.

Shares Do Not Come Across in a Site Migration

I recently worked on one of my weekend projects (a website) where, due to a restructuring of the site's navigation, a large number of pages changed their URLs to reflect the new structure. Naturally, it goes without saying that I set up the 301 redirects – job done, I thought.
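For completeness, the redirects themselves are the easy part. With Apache, for example, either mod_alias directive below does the job (the paths here are hypothetical, not the actual site's):

```apache
# A single page that moved (hypothetical paths)
Redirect 301 /old-section/about /new-section/about

# A whole section that moved, preserving the rest of the path
RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1
```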

Not so fast, I should have told myself, but that is the power of hindsight, as they say. What is lost in such a transition, where the URL changes, are the likes, tweets, and shares in general. This is particularly frustrating when, as I did, you only realise it some time later.

The current solution to this problem is to keep a note of the historical URL and provide the share buttons with that information instead of the current URL. Annoyingly, as in my situation, you'll need to sync this logic with the date the URL change was made. This means you may have pages that collate share counts on the legacy URL, while newer pages using the same layout/templates collate share counts on the new URL.

You can provide the Twitter Tweet button with a separate URL against which the tweet count is aggregated. This is the purpose of the data-counturl attribute, which is useful although it doesn't truly address my issue (keep reading).
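As a sketch (the URLs are placeholders), the Tweet button markup carries both attributes – data-url is what gets shared, data-counturl is where the count aggregates:

```html
<a href="https://twitter.com/share"
   class="twitter-share-button"
   data-url="http://example.com/new-page"
   data-counturl="http://example.com/legacy-page">Tweet</a>
```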

Share buttons from Facebook, LinkedIn, and Xing all provide a means to specify a single URL to which the share counts will be added. Therefore, the solution here is simply to use that legacy URL again instead of the current or new one.

At the time of writing, this solution worked for all but Facebook – I've implemented the change but am waiting to see if those historical counts come back. I'm rapidly losing hope, however, as Facebook seem to sometimes pick things up but often drop the ball.

Doesn’t Feel Right

Either way, the solution to this problem simply doesn't sit right with me. I was relatively fortunate in that, for the site in question, I was using a CMS that allowed me to define a date for when the URL change occurred and then, using a little PHP logic, define which pages should use the legacy URL for sharing and which should simply use the current one. Many people do not have this luxury, especially if they are using hosted services and then change to using their own domain (disclaimer – this may actually be a bad example; I can imagine that if anyone offers a service that tracks this domain change and ensures social sharing plugins honour the legacy URL, it would be the WordPress guys).
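A minimal sketch of that date-based choice (the cut-over date, names, and URLs are hypothetical; the original was PHP inside the CMS, Python is used here purely for illustration):

```python
from datetime import date

# Hypothetical date the URL restructuring went live
URL_CHANGE_DATE = date(2013, 6, 1)

def share_url(published, legacy_url, current_url):
    """Return the URL that share buttons should count against."""
    if published < URL_CHANGE_DATE:
        # Page pre-dates the migration: its shares accumulated
        # against the legacy URL, so keep counting there
        return legacy_url
    # Newer pages only ever existed at the current URL
    return current_url
```

Crude, but it keeps the legacy counts visible on old pages while new pages behave normally.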

Praise to Google+

Now, I'm no particular 'fanboy' of any device or service (aside: it is actually the fanboy culture around MacBook Pros that has hindered me from buying what is clearly a very good product), but in this instance Google+ didn't need any corrective action – it just worked after the change. How refreshing, I thought, but why only Google?

Let us imagine the scenario where a user clicks the relevant share button on a page that formerly had another URL. At that point in time, the sharing service has no way of knowing that the new URL being shared is related to the older one and that it should therefore aggregate the share counts. This leads to the 'solution' above, where we as site owners have to provide this information ourselves: either as a link between the legacy and new URLs, as with the Tweet button where you can provide both the data-url and data-counturl attributes, or by simply continuing to use the legacy link regardless, as per the other examples above.

That may all seem fair enough as those poor (read ironically as ‘huge and powerful monoliths’) Social Networks simply don’t have access to other information to tell them otherwise unless we spoon feed it to them… Or do they?

To Index or Not Index – That is the Question

If Google are the only ones where no corrective action was needed, it would make sense to assume this is because they have put two things together. They know about a page that appears to be at a fresh URL for which they have not accumulated any shares before, but through their continuous indexing of the web's content they also discover the legacy URL, whose 301 redirect to the new URL creates the connection. Therefore, although not immediately, Google will piece things together and aggregate the count onto one object, presumably associated with the new URL.

The question therefore is, if Google can do it, why can’t the others? You would think that indexing all this socially shared content is definitely in their interests.

Twitter, LinkedIn, and Xing have a slightly different model and I could understand it if they didn’t index this content although I would also be surprised if they truly didn’t.

Focusing on Facebook specifically, you are able to search public objects in the Open Graph, which implies that they already index the content. In fact, when you check your legacy URL using the Facebook Open Graph Debugger, it clearly shows that the redirect is followed.

Given all this, I would invite Facebook to comment on why they don’t periodically index in the same vein as Google – it would save Open Graph users a whole lot of hassle.

At the very least, if this post doesn't get any of these services to change, then please don't fall into the same trap as I did; instead, pro-actively plan for this if social shares are important to your site.


Responsive Images – Web Design with Device Optimized Images

Responsive web design continues to sweep across the web industry, and for good reason. However, challenges still exist, which in part help to shape standards (e.g. changes to the CSS box model due to border placement) as well as adapt design and development approaches moving forwards.

One such challenge is around images: delivering an optimal image for a given device. It's on this particular topic that I'd like to share my approach to solving this challenge.


The motivation for solving this problem stems from a couple of things:

First of all, I've been keen to ensure that images added to pages by non-technical authors and editors are not overly large for the consuming device. Whether that device is a smartphone or a desktop machine with a large hi-res monitor, applying the context of the device's attributes simply makes sense. In other words, there is no point delivering a 1600px x 1000px image to a mobile device with a physical screen resolution of 480px x 800px and simply letting CSS manage the scaling. Whilst this would work, it of course wastes bandwidth.

Secondly, although I'm not very talented in the design department and would tend to favour a partnership with an agency or individual who is, I've grown a deep love for good typographic best practice, and it simply disturbs me when images throw the vertical rhythm out of sync. I know that to some, vertical rhythm is not so important, but hey, I'm a Brit who now lives in Germany, so please forgive me this adopted solid German trait of attention to such detail.

There are client-side solutions to these challenges, coupled with an approach of pre-preparing images in several sizes for replacement, but this just feels like too much effort to me and not truly responsive. I was keen to solve this challenge server-side, where I could rely on the technology available and optimise where necessary.

The Approach

The first thing I did was decide what I define the optimal delivery of an image to be. To me, this simply meant delivering images with dimensions no greater than the physical dimensions of the viewing device. The thinking here is the saving of bandwidth and the improvement of page load times, which is something any mobile developer worth their salt should keep in mind.

Device Detection

I've been using services like WURFL and, more lately, DeviceAtlas for some time now and knew that additional information such as the physical screen dimensions could be extracted from the service. This provided the constraints for my device-specific maximum image size.

As there are a few device detection services out there, I’m not going to go into the specifics of any one and would encourage further reading in the device detection service of your choice for how to extract such additional details.
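Whichever service you use, the detected screen dimensions simply become a bounding box the image must fit within. A hedged sketch of that core calculation (Python for illustration, function name is mine): preserve the aspect ratio, and never scale up:

```python
def constrain_to_device(orig_w, orig_h, max_w, max_h):
    """Fit (orig_w, orig_h) within (max_w, max_h), keeping aspect ratio.

    Images already smaller than the device are left untouched -
    there is no point upscaling and wasting bandwidth.
    """
    if orig_w <= max_w and orig_h <= max_h:
        return orig_w, orig_h
    # Scale by whichever axis overflows the most
    scale = min(max_w / orig_w, max_h / orig_h)
    return int(orig_w * scale), int(orig_h * scale)

# The 1600x1000 image from earlier, sent to a 480x800 device:
print(constrain_to_device(1600, 1000, 480, 800))  # (480, 300)
```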

Ticking this sub-challenge off then got me thinking about how image processing software allows you to re-sample images and led me to the next stage of my quest.

Applying an Old Pattern

As with most things in this world of ours, if you abstract your challenge, you’ll be able to find a solution that’s been applied to a similar if not the same problem before. In this case, most Content Management Systems provide a way to generate thumbnails, which gave me the clue to simply re-purpose this existing logic.

For my "weekend projects", I utilise a very strong PHP-based Open Source Content Management System called Concrete5, so I decided to check out how it allows you to create thumbnails.

The CMS provides an ImageHelper class for such tasks, which takes a path to an existing image and re-samples it to a maximum constraining width and height. For completeness, and because this is the real engine room of the solution, the method can be seen below:

    /**
     * Creates a new image given an original path, a new
     * path, a target width and height.
     * @param string $originalPath
     * @param string $newPath
     * @param int $width
     * @param int $height
     * @return void
     */
    public function create($originalPath, $newPath,
                           $width, $height) {
        // first, we grab the original image. We
        // shouldn't ever get to this function unless
        // the image is valid
        $imageSize = @getimagesize($originalPath);
        $oWidth = $imageSize[0];
        $oHeight = $imageSize[1];
        $finalWidth = 0;
        $finalHeight = 0;

        // first, if what we're uploading is actually
        // smaller than width and height, we do nothing
        if ($oWidth < $width && $oHeight < $height) {
            $finalWidth = $oWidth;
            $finalHeight = $oHeight;
        } else {
            // otherwise, we divide original width and
            // height by new width and height, and
            // find which difference is greater
            $wDiff = $oWidth / $width;
            $hDiff = $oHeight / $height;
            if ($wDiff > $hDiff) {
                // there's more of a difference in
                // width than height, so if we constrain
                // to width, we should be safe
                $finalWidth = $width;
                $finalHeight = $oHeight / $wDiff;
            } else {
                // more of a difference in height,
                // so we do the opposite
                $finalWidth = $oWidth / $hDiff;
                $finalHeight = $height;
            }
        }

        $image = @imageCreateTrueColor($finalWidth,
                                       $finalHeight);
        switch ($imageSize[2]) {
            case IMAGETYPE_GIF:
                $im = @imageCreateFromGIF($originalPath);
                break;
            case IMAGETYPE_JPEG:
                $im = @imageCreateFromJPEG($originalPath);
                break;
            case IMAGETYPE_PNG:
                $im = @imageCreateFromPNG($originalPath);
                break;
        }

        if ($im) {
            // Better transparency - thanks for the ideas
            // and some code from
            if (($imageSize[2] == IMAGETYPE_GIF) ||
                    ($imageSize[2] == IMAGETYPE_PNG)) {
                $trnprt_indx = imagecolortransparent($im);

                // If we have a specific transparent color
                if ($trnprt_indx >= 0) {
                    // Get the original image's
                    // transparent color's RGB values
                    $trnprt_color =
                        imagecolorsforindex($im, $trnprt_indx);

                    // Allocate the same color in the
                    // new image resource
                    $trnprt_indx = imagecolorallocate($image,
                        $trnprt_color['red'],
                        $trnprt_color['green'],
                        $trnprt_color['blue']);

                    // Completely fill the background of
                    // the new image with allocated color.
                    imagefill($image, 0, 0, $trnprt_indx);

                    // Set the background color for new
                    // image to transparent
                    imagecolortransparent($image, $trnprt_indx);

                } else if ($imageSize[2] == IMAGETYPE_PNG) {

                    // Turn off transparency
                    // blending (temporarily)
                    imagealphablending($image, false);

                    // Create a new transparent color
                    // for image
                    $color = imagecolorallocatealpha($image,
                                            0, 0, 0, 127);

                    // Completely fill the background
                    // of the new image with allocated
                    // color.
                    imagefill($image, 0, 0, $color);

                    // Restore transparency blending
                    imagesavealpha($image, true);
                }
            }

            $res = @imageCopyResampled($image, $im, 0, 0,
                         0, 0, $finalWidth, $finalHeight,
                                      $oWidth, $oHeight);
            if ($res) {
                switch ($imageSize[2]) {
                    case IMAGETYPE_GIF:
                        $res2 = imageGIF($image, $newPath);
                        break;
                    case IMAGETYPE_JPEG:
                        $res2 = imageJPEG($image, $newPath,
                            AL_THUMBNAIL_JPEG_COMPRESSION);
                        break;
                    case IMAGETYPE_PNG:
                        $res2 = imagePNG($image, $newPath);
                        break;
                }
            }
        }
    }
As you can see from the above code, should you choose to glance through it, the function relies heavily on PHP's GD library and its ability to extract image info and resample images.

Credit goes to the guys at Concrete5 for the code above.

Actual Usage Scenario #1 – Mobile Devices

I built a desktop website for a good friend and additionally built a mobile site for him as well. I wanted to empower him to manage all editorial changes via the desktop site's CMS and to re-use image assets where possible for the mobile site. Before many of you responsive evangelists scream "Why did you not just build a responsive site?", the answer is in part that I wasn't too aware of the Responsive Web Design movement at the time of doing this particular favour, and my friend and I genuinely thought a mobile-focused site was the right option anyhow. It's a debate that many still have, and there are many varied factors behind going one way or the other that I'm not going to cover here.

Using 'responsive' principles, you may typically assign an image element a specific percentage width of the device screen, either directly or indirectly via it taking up 100% of the space given to it by its parent element. Crudely, you could use the above function to re-sample the original image to be no wider than the physical device width given by the device detection service. This would ensure no over-bloated images are sent down the connection. This is particularly key when CSS constrains the visible width of the same image on the desktop site and the author/editor has no understanding of the image's true size and its impact.

This could be taken one step further: if you knew that the image is constrained to, say, 25% of the device width, you could calculate this server-side and again use the above function to re-sample a more optimal image.

Actual Usage Scenario #2 – A Constrained Page Element

In another weekend project, a less responsive but pixel-perfect design was desired. It was important for the "client" (another friend) that the image elements on any given page aligned with the baseline rhythm of the page. It was also likely that images would be uploaded into the CMS by the author or editor, and unlikely that these assets would be meticulously prepared. In other words, the dimensions of the images that were to form a rotating slideshow would vary, but the challenge was to maintain consistency in the slideshow and make all images appear the same size.

The above method for re-sampling images in collaboration with another trick solved this particular challenge.

In this case, instead of the device defining the image size constraints, the page element is.

To visually optimise the available space within the image element, I chose to additionally implement a little logic that wraps the image inside an element whose overflow CSS property is set to hidden and centres the image within that element using negative margins. This provides a "poor man's crop" for the image. The logic that decides whether to constrain the width or height of the original image is slightly outside the focus of this post, so I will leave it out, but suffice it to say that I simply used PHP's 'getimagesize' function to understand the original dimensions and did some calculations within the context of the target (containing) dimensions.
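A minimal sketch of that crop (class names and dimensions are illustrative, not from the actual site): the wrapper fixes the visible box and clips the overflow, while a negative margin computed server-side centres the oversized image within it:

```css
/* Wrapper fixes the visible area and clips anything outside it */
.crop {
    width: 300px;
    height: 200px;
    overflow: hidden;
}

/* Image is resampled server-side to cover the box on one axis;
   a negative margin centres the overshoot on the other axis,
   e.g. (imageWidth - 300) / -2 for a horizontally larger image */
.crop img {
    margin-left: -50px;
}
```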

The Poor Man's Crop

Although this second approach is not necessarily related to Responsive Web Design directly, it does allow for a level of control over those images that CMS authors and editors may upload without understanding the impact of image size.


It goes without saying that when images are being processed and prepared server-side for optimal delivery, you don't want to be doing such image processing for every single image on every single request. Therefore, in both the scenarios above, I've made great use of a server-side cache as well as following best practice to encourage browser-based caching. For scenario #1 above, many devices tend to have similar-resolution screens, so grouping devices can further improve things (i.e. serving a 500px-wide image to a 480px-wide device and allowing CSS to manage the on-screen size is not exactly terrible).
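One way to implement that grouping (bucket sizes are illustrative; Python for brevity) is to round each detected width up to the nearest of a small set of cached widths, so the cache only ever holds a handful of variants per image:

```python
# Illustrative set of image widths worth caching
BUCKET_WIDTHS = [320, 480, 640, 768, 1024, 1280, 1600]

def bucket_width(device_width):
    """Round a detected device width up to the nearest cached width."""
    for width in BUCKET_WIDTHS:
        if width >= device_width:
            return width
    # Wider than anything we cache: serve the largest variant
    return BUCKET_WIDTHS[-1]
```

Similar devices then share cache entries, at the cost of occasionally sending a slightly larger image than strictly necessary.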


There may be other ways to crack this nut, and I'm certainly interested to see how CMSs start to address this issue moving forwards, but this has worked well for me. I'm looking to further the approach so that I can create a solution for that niggle around images that break the vertical rhythm.

I would love to hear the thoughts of others, as well as other approaches to the same challenge, so feel free to leave a comment.


Canonical URLs and SEO

As I recently made a foolish mistake, I thought I would share it to help others avoid it in the future. It was to do with my quest to get certain pages of the Solution Exchange community platform – specifically, the valuable forum threads – indexed in Google, Bing, Yahoo, etc.

First of all, it is worth mentioning how these threads are delivered.  The forum itself is an object of the OpenText Social Communities (OTSC) product, which interacts with the Delivery Server through the OTSC XML API.

Therefore, the forum thread pages are dynamically delivered, with the shell of the page being the same physical page and the content influenced by parameters. In this case, I've chosen to utilise sensible URL structures that contain the parameters, for simplification and SEO. I mention more about this in this forum post. The use of rewrite rules in this way for SEO is one of the key values of a front-controlling web server.
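As a sketch of the idea (the paths and parameter name are illustrative, not the actual Solution Exchange rules), an Apache mod_rewrite rule can map a clean, parameter-free URL onto the dynamic delivery page:

```apache
# Map a friendly thread URL onto the dynamic page that renders it
# (paths and parameter names are illustrative)
RewriteEngine On
RewriteRule ^forum/thread/([0-9]+)$ /forum/thread.html?threadId=$1 [L,QSA]
```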

As the shell of the page is the same, I initially had the same <title> tag for all threads and thought that this was the problem. After adapting the <title> value to the title of each forum thread (and waiting for re-indexing to happen), there was no change.

Finally, by checking the index of Solution Exchange on Bing with a "site:" search, I noticed to my surprise that one of the threads was indexed but was associated with the wrong URL! This was strange given that, externally, the forum thread was only accessible through a URL like{ID}, meaning that I must be explicitly telling the search engines the wrong URL.

This was the clue I needed to realise that my problem was due to something I had implemented many months before.

To address the potential SEO penalty arising from the home page of the community being reachable through two different URLs, I introduced the following HTML header link tag – the example below is the home page value, but I included this across the whole site:

<link rel="canonical" href="" />

You can read more about this on the official Google Webmaster Central Blog. In summary, it tells the search engines that this page is to be associated with the given URL, and that page ranking (or "Google juice") is to be attributed to that URL rather than the entry URL the crawler bot used. This avoids page ranking for the same page being split across two or more URLs, or the page being penalised for duplicating content across multiple URLs.

With this knowledge, I was able to update the page template that houses this dynamic content to form the correct URL within the canonical link. Now it's back to the waiting game to see if the indexes will pick up the content and forgive me for positioning different pages as one.

Although a small detail, the end goal and potential gain is huge as it opens up the rich content that continues to grow within the forum for discovery via the big search engines.  This in turn will only help those within the wider community who are not aware of Solution Exchange discover the content, which may help them resolve an issue or encourage them to take part in the community platform moving forwards.

As always, leave a comment or get in touch if you have any questions.
