Uploading a single changed file with Grunt


I have recently embraced the Node.js task runner Grunt, and because I was grunting a little myself to get it to do what I wanted, I thought I’d share my experience to help others.

What Did I Want to Achieve?

My environment is pretty simple – I have a development LAMP stack running inside a local virtual machine and I wanted Grunt to upload files to that VM web server as they change. Sounds straightforward, right?

Started Well

I got to grips with the basics of Grunt relatively quickly and set up a watch task (grunt-contrib-watch) to monitor changes within my local development folder and trigger things like minification or linting. I then wanted to upload the changed file.

There are many different file upload Grunt plugins, ranging from rsync (grunt-rsync) to FTP (grunt-ftp-deploy), and I selected the sftp task from the grunt-ssh package. Why I chose this over the others is not actually relevant to the core challenge, so you can choose whichever you wish.

The problem was that the sftp task (like any other upload-related task) takes a file pattern within the config of the gruntfile.js (see example below). This is commonly a pattern like **/*.js, for instance, rather than a specific file.

sftp:{
  upload:{
    options: {
      host: '<%= sshlogininfo.host %>',
      username: '<%= sshlogininfo.username %>',
      password: '<%= sshlogininfo.password %>',
      path:'/remote-path'
    },
    files:{'./':'**/*.js'} 
  }
}

Therefore, when the watch task is triggered, I wanted to ensure that the downstream tasks (the upload in my case) only need to deal with that specific file and not a large set, so that they run as fast as possible.

Events to the Rescue

My saviour came in the guise of events. The popular watch task emits a watch event, which allows you to do something simple like create a notification or, in my case, change the configuration of a downstream Grunt task. It is important to note at this point that the watch event is not the right place to run further Grunt tasks, although you could. The documentation of the watch task does emphasise this, but it is worth stressing as you may want to go down that path in a moment of weakness (read: this is exactly what I was thinking at one point). In my case, I needed to use the event listener to change the configuration of the files parameter for the sftp task, and I did this with the following simple listener code in my gruntfile.js:

grunt.event.on('watch', function(action, filepath, target) {
  var files = {"./":filepath};
  grunt.config('sftp.upload.files', files);
});

With other upload tasks it may be a src parameter that needs changing instead, but the principle is the same.

There are possibly other ways to crack this nut and I would absolutely welcome feedback and constructive critique on this as it would help me and others learn.

I hope this helps.

I’m a Jack of all Trades and it’s OK

I thought I’d get back into writing and fulfil a promise made at an enthusiastic Paul Boag workshop back in April (yes, Paul, I know…) by writing a short post about being a Jack of all Trades and how I feel this is sometimes wrongly devalued within the web industry.

I’ve had many different roles in my career, from Junior Developer to Lead System Designer or Solutions Consultant to Community Manager. I’ve not got the typical experience of a “web guy” as I’ve spent the best part of 15 years in large enterprises and have not been one of the cool kids in one of the many fantastic agencies that exist – yes, that is envy. Each role that I’ve held has required a different set of skills at a given point in time, meaning that I’ve had to be somewhat of a “Knowledge Chameleon” (I’m bagsying that!) and adapt to my environment. Whether it be knowledge of a proprietary system or understanding of the latest web standard, I’ve always had a “Just In Time (JIT)” approach to education – JIT being a term I picked up from my days as a Java developer. Whilst this sounds a little fly-by-the-seat-of-your-pants, I’ve always made it work and made sure that I’ve understood what the experts are doing in a given space to guide my own practices and standards for the benefit of the customer.

So why am I writing this almost-on-the-verge-of-a-rant post? Well, I read all too often that Jack-of-all-Trades types aren’t as valuable as specialists. The most recent netmag contained an interview with the well-respected Sarah Parmenter, and the advice given once again for those starting out was along the lines of ‘be the best [insert specific skill here] in [insert a geographic area]’. Whilst I have tonnes of respect for Sarah and others who have said similar things in the past, and am certainly not looking for a fight, I don’t completely agree.

I personally enjoy the fact that I have a good appreciation of design and typographic principles, good knowledge of HTML, CSS, and UX best practices, solid JavaScript knowledge, sound ability with several server-side languages along with very good knowledge of enterprise back-end systems like Salesforce, and I know what makes a good CMS.

“Big wow” [with rolling eyes] you may think to what may come across as a bit of an Ego-w*nk… and I would tend to agree. The value is not the skills themselves but rather how knowledge across domains allows for a ‘just right’ and appropriate solution to be proposed to a client regardless of the project size. In one project, I may have the luxury to oversee how the proposed solution is executed and brought together using what knowledge I have to liaise and discuss challenges with experts in each discipline. In another, I may be Mr. Hands On and Chief Get Things Done Officer who has to execute against his own plan. This flexibility is something I absolutely love and I feel it provides great value to my clients and gives me an edge over the competition for a given size of customer.

All that said, I am my biggest critic and would be the first to recognise where I am weaker in some areas and stronger in others, and I am very open and honest about handing things over if they can be done better. Whilst I selfishly find closing the knowledge gap exciting because I sense an improvement in those weaker areas with every project I undertake, I do agree in part with the web gurus whose views I’ve taken exception to – focus is important. That focus or speciality level within a discipline, however, has a context.

A case in point may be around my weaker design skills. A customer may have a limited budget initially and may need to have a site built out for a specific business need. I could come along and do a good job with the view that if things take off for the customer, there may be budget in the future to get a design rock-star to come in and take things to the next level to help increase conversions or another goal. Therefore, my job would be to future-proof and facilitate this potential future task by creating the site using well structured semantic HTML and well organised CSS (I like the SMACSS approach personally) and decisions within the front and back-end should be made with this context in mind.

This surely has value for customers who may want to spend incrementally, which also fits perfectly with the open and agile approach that many of us enjoy as standard nowadays.

Given this, whilst I completely agree that you should use the best tool for the job and the best person for a given job where the various constraints permit, I also think that the Jack-of-all-Trades types should be given a little more encouragement, as it is most often those types of people who get things off the ground and get things done.

I have to admit to an exception here that may in some way contradict what I’m saying: although I’m a passionate JOAT (brilliant, I’m bagsying that too), I do suffer from the Cobbler’s Children Syndrome, which is why I’ve still not ‘got things done’ with my new site to show the projects I’ve worked on since recently leaving the slow and politics-filled corporate world, so you’ll have to forgive that indiscretion.

Oh well, I’m still a Jack of all Trades for my clients and get things done, so I think it’s OK.

What do you think? Do you agree or disagree? What is your experience?

Let me know in the comments.


A Short Goodbye and Thank-you (again)

As my time with OpenText comes to an end tomorrow, I leave the company with gratitude for the 7 years I’ve been with RedDot and then OpenText and I leave with a level of excitement for the future that I haven’t felt in a long time. So whilst it is sad to be leaving, all things indicate that I’m ready for the next challenge.

I’ve written a couple of goodbye emails to my colleagues today and within those I emphasised how much I enjoyed my time establishing and building the SolutionExchange community platform. This certainly feels – in hindsight – like one of my greater achievements and armed me with greater knowledge around agile/lean concepts, community management, and the wise utilisation of social channels.

Therefore, to conclude my goodbyes before I get buzzed about the future, I would like to (once again) thank those Customers, Partners, and colleagues who participated in the SolutionExchange community and made it what it was, but also, more selfishly, to thank them for giving me such an enjoyable and constructive phase of my career.

As I shall now be entering the scary world of freelancing, I’m sure I shall meet or engage on-line with many of you still and I look forward to it, which is why this is only a short goodbye.

Those of you who want to get in touch will be able to find me in the usual on-line hangouts – LinkedIn & Twitter being my preferred choices. Failing that, just Google me and you’ll find me. 🙂

Dan


A Belated Thank-you to SolutionExchange Users


This post is actually a very overdue thank-you to the user community of SolutionExchange.

The Early Days

Since first raising the idea of an online community platform back at the tail end of 2009, I was fortunate enough to be part of an organisation under Jens Rabe that actively supported the idea, and that kick-started a very enjoyable and passion-filled couple of years as the Community Leader for the SolutionExchange platform.

There are many aspects of the SolutionExchange of which I am particularly proud, from the pure open nature of the platform to the way in which the platform was adopted by you, the users, all the way through to some little features which we pioneered on the platform within the enterprise context. On top of this, we did it all whilst showcasing our own products.

A special thanks goes to Markus Giesen for those early days of planning, as Markus had already spent years building up an engaged community around the blog he started – The Unofficial RedDot CMS Blog. Markus had every right to be skeptical about what I had planned given that previous community attempts had ended in failure, but we struck up a great relationship and he provided some very valuable early guidance that helped me justify some of the decisions of our approach to my management team.

I learnt a lot in the following couple of years. I learnt and understood quickly the philosophy of many start-ups: get to the customer as quickly as you can and iterate based on what they say and what they do. We did this.

The Launch

We launched a beta quietly back in the spring of 2010 with a simple Solution repository/App store concept but quickly iterated based on feedback to include an aggregated feed of blog posts from the broader community. I was particularly pleased with the effectiveness of this simple feature as it put users in control: they could blog externally of the platform, contribute their blog URL in their SolutionExchange profile, and, by simply tagging their posts with the word ‘solex’, contribute that specific post to the “Community Feed” feature. At the time, this was big, as not many were doing this elsewhere on the web; the preference was to encourage users to blog within a given platform, which added a small but significant barrier. We discovered that this approach worked very well and what followed was a relatively great success story. Beyond early adopters like Markus Giesen, other partners started to contribute, then some customers, and then many of my colleagues created blogs with WordPress and Blogger and started sharing knowledge this way for the first time, specifically with the intention to share via the SolutionExchange. Kudos should once again be given to you, the user community, for leading the way here, but you can now also consume great knowledge from within OpenText that is being shared by the likes of Tim Davis, Jian Huang, Manuel Schnitger, and Dennis Reil. I cannot emphasise enough how big a deal this was and I thank each and every contributor for making the Community Feed such a success.

Further to this, we introduced the forum into the platform some months later, with thanks going to netmedia for their support. This has once again been a tremendous success and I must thank those early contributors of feedback, which meant that we applied tweaks until the user experience was just how we wanted it, which in turn helped adoption. Well over 1000 posts later, the forum is still going strong and is a valuable ‘go to’ resource along with the likes of the RedDot Google Group. This forum, along with the open nature of the platform, brought with it more and more visits from search, which helped the discovery of the platform by new visitors.

We also made other subtle but significant additions to the site over time. I could name the introduction of Twitter Anywhere to help our users engage and connect with others, the Ideas feature, Group pages, the integration of OTSN, the rating features, or the external authentication proof of concept to name but a few. I could also talk about how significant some of the stats are, like the almost 600-strong user base the platform has, which is relatively fantastic for a platform you do not need to sign up to in order to consume content. Although I could, I shall not, as my role has changed.

New Challenge

As of July 2012, SolutionExchange was no longer ‘my baby’. In fact, I had been re-assigned and given different responsibilities a year earlier in July 2011 and felt I had already been too ineffective with my commitments to the community platform. That guilty feeling was, however, put a little into perspective when I had a telephone call with Uli Weiss not that long back, in which he stated that he felt the platform was getting better and better. This is quite simply once more down to you, the users, as your time and contributions are what make the platform gain value day by day.

I am now part of the corporate web team at OpenText and have a very grand title as a Web Business Architect. It is very important for me (albeit somewhat overdue) to acknowledge that I have gained this opportunity in part due to the relative success of SolutionExchange, and it is for this reason that I pass my thanks on to you in the community. Without your passion, engagement, and desire to share in the ways in which you have, it would not have been the success it continues to be, and I may not have had the opportunity I am now working to fulfil: bringing some of those valuable concepts learnt during the past few years into the broader OpenText web experience.

Safe Hands

All that said, I’m not leaving a vacant hole. I am very enthused and pleased that Manuel Schnitger has decided to step into my shoes and take on the Community Management role, as he is absolutely a perfect fit in my eyes. Manuel has many years’ worth of experience in various roles and, importantly, understands the challenges from a community perspective. On top of that, he is always willing to share the knowledge he has (as proven through his blog) and to connect others when he is not the right person. Therefore, the community leadership is in good hands and it is pleasing to see the community still progress and build momentum over time.

Simply Thanks

I may not have named everybody who deserves a special mention but hopefully you know who you are. Let me once again thank you all as I look forward to a different challenge where I shall look to apply the lessons I’ve learnt over an enjoyable period in my career that was only made possible with your support.

Regards,

Dan


Not Taking Friends With You – or how Facebook and Other Social Networks Ignore Redirects


From a developer perspective, Facebook really p**ses me off. It seems that things change quite a bit and, as my time isn’t focused 100% on Facebook stuff (I would go as far as to say that I’m a casual developer these days), I get surprised when something changes – I mean, how hard can it be to email registered Facebook developers (or just those who have created apps) to keep us informed as things change?

Anyhow, the above was just a little rant to give you a feel for my mood when it comes to Facebook. It is however a different Facebook annoyance that I’d like to forewarn people about as well as ask Facebook whether they would consider a change that I (and probably many others before me) propose.

Shares Do Not Come Across in a Site Migration

I recently worked on one of my weekend projects (a website) where, due to a navigation change in the site, a large number of pages changed their URLs to reflect the new navigation. Naturally, it goes without saying that I set up the 301 redirects – job done, I thought.
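
Setting up a 301 is nothing special, but for completeness, here is a minimal sketch of what a permanent redirect can look like at the PHP level. My site actually handled this in the CMS, so treat the URL and the approach as purely illustrative:

<?php
// Hypothetical example: the page at the old URL issues a permanent (301)
// redirect so that browsers and crawlers end up at the new location.
header('Location: http://www.example.com/new-section/my-page.htm', true, 301);
exit;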

Not so fast is what I should have told myself, but that is the power of hindsight, as they say. What gets lost in such a transition, where the URL changes, are the likes, tweets, and shares in general. This is particularly frustrating when you realise some time later, as I did.

The current solution to this problem is that you need to keep note of that historical URL and give the share buttons this information instead of the current URL. Annoyingly, as in my situation, you’ll need to sync this logic with the date that the URL change was made. This means you may have older pages that collate share counts against the legacy URL, while newer pages using the same layout/templates collate share counts against the new URL.

You can give the Twitter Tweet button a separate URL against which the tweet count is aggregated. This is the purpose of the data-counturl attribute, which is useful although it doesn’t truly address my issue (keep reading).

Share buttons from Facebook, LinkedIn, and Xing all provide a means to specify a single URL to which the share counts will be added. Therefore, the solution here is simply to use that legacy URL again instead of the current or new one.

At the time of writing, this solution worked for all but Facebook – I’ve implemented the change but am waiting to see if those historical counts come back. I’m, however, rapidly losing hope, as Facebook seems to sometimes pick things up but often also drops the ball.

Doesn’t Feel Right

Either way, the solution to this problem simply doesn’t sit right with me. I was relatively fortunate in that, for the site in question, I was using a CMS that allowed me to define a date for when the URL change occurred and then, using a little PHP logic, define which pages should use the legacy URL for sharing and which should simply use the current one. Many people do not have this luxury, especially if they are using online services such as wordpress.com and then change to using their own domain etc. (disclaimer – this may actually be a bad example. I can imagine that if anyone offers a service that tracks this domain change and ensures social sharing plugins honour the legacy URL, it would be the WordPress guys).
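
To make that a little more concrete, here is a rough sketch of the kind of PHP logic I mean. The cut-over date, the legacy URL map, and the share_url() helper are all stand-ins for whatever your CMS actually stores – the point is simply that pages created before the URL change keep sharing against their legacy URL, while newer pages share against the current one:

<?php
// Hypothetical sketch: decide which URL the share buttons should count against.
// The cut-over date and the legacy URL map stand in for what your CMS stores.
define('URL_CHANGE_DATE', '2013-06-01');

$legacyUrls = array(
    '/products/widgets.htm' => '/old-section/widgets.htm',
    // ... one entry per page whose URL changed
);

function share_url($currentPath, $pagePublished, array $legacyUrls) {
    if (strtotime($pagePublished) < strtotime(URL_CHANGE_DATE)
            && isset($legacyUrls[$currentPath])) {
        // Older pages accumulated their likes/shares against the legacy URL,
        // so keep feeding that URL to the share buttons.
        return 'http://www.example.com' . $legacyUrls[$currentPath];
    }
    // Newer (or unchanged) pages simply share against the current URL.
    return 'http://www.example.com' . $currentPath;
}

// The Tweet button can then be given both URLs via its data-url and
// data-counturl attributes, with data-counturl set to the output of share_url().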

Praise to Google+

Now, I’m no particular ‘fanboy’ of any device or service (aside: it is actually the fanboy culture around MacBook Pros that has hindered me buying what is clearly a very good product), but in this instance Google+ didn’t need any corrective action – it just worked after the change. How refreshing, I thought, but why only Google?

Let us imagine the scenario where a user clicks on the relevant share button on a given page, which formerly had another URL. At that point in time, the sharing service has no way of knowing that the new URL being shared is related to the older one and that it should therefore aggregate the share counts. This leads to the ‘solution’ above, where we as site owners have to provide this information either as a form of link between the legacy and new URLs, such as in the case of the Tweet button where you can provide both the data-url and data-counturl attributes, or by simply maintaining the choice of using the legacy link regardless, as per the other examples above.

That may all seem fair enough, as those poor (read ironically as ‘huge and powerful monoliths’) Social Networks simply don’t have access to other information to tell them otherwise unless we spoon-feed it to them… Or do they?

To Index or Not Index – That is the Question

If Google is the only one where no corrective action was needed, then it makes sense to assume that this is because they’ve put the two things together. They know about a page that appears to be at a fresh URL for which they have not accumulated any shares before, but they also continuously index the web’s content and so discover the legacy URL, whose 301 redirect to the new URL creates the connection. Therefore, although not immediate, Google will piece things together and aggregate the count onto one object, presumably associated with the new URL.

The question therefore is, if Google can do it, why can’t the others? You would think that indexing all this socially shared content is definitely in their interests.

Twitter, LinkedIn, and Xing have a slightly different model and I could understand it if they didn’t index this content although I would also be surprised if they truly didn’t.

Focusing on Facebook specifically, you are able to search public objects in the Open Graph, which implies that they already index the content. In fact, when you check your legacy URL using the Facebook Open Graph Debugger, it shows clearly that it follows the redirect.

Given all this, I would invite Facebook to comment on why they don’t periodically index in the same vein as Google – it would save Open Graph users a whole lot of hassle.

At the very least, if this post doesn’t get any of these services to change, then please don’t fall into the same trap as I did; instead, pro-actively plan for this if social shares are important to your site.


Responsive Images – Web Design with Device Optimized Images

Responsive web design continues to sweep across the web industry, for good reason. However, challenges still exist, which in part help to shape standards (e.g. changes to the CSS box model around border placement) and in part shape design and development approaches moving forwards.

One such challenge is around images: delivering an optimal image for a given device. It’s on this particular topic that I’d like to share my approach to solving this challenge.

Motivation

The motivation for solving this problem stems from a couple of things:

First of all, I’ve been keen to ensure that images added to pages by non-technical page authors and editors are not overly large for the consuming device. Whether that device is a smartphone or a desktop machine with a large hi-res monitor, applying the context of that device’s attributes simply makes sense. In other words, there is no point delivering a 1600px x 1000px image to a mobile device with a physical screen resolution of 480px x 800px and simply letting CSS manage the scaling. Whilst this would work, it of course wastes bandwidth.

Secondly, although I’m not very talented in the design department and would tend to favour a partnership with an agency or individual who is, I’ve grown a deep love for good typographic best-practice and it simply disturbs me when images can throw the vertical rhythm out of sync. I know to some, vertical rhythm is not so important but hey, I’m a Brit who now lives in Germany, so please forgive me this adopted solid German trait of attention to such detail.

There are client-side solutions to these challenges, coupled with an approach of pre-preparing images in several sizes for replacement, but preparing all those image sizes just feels like too much effort to me and not truly responsive. I was keen to solve this challenge server-side, where I could rely on the technology available and optimise where necessary.

The Approach

The first thing I did was make a decision about what I define the optimal delivery of an image to be. To me, this simply meant delivering images with dimensions no greater than the physical dimensions of the viewing device. The thinking here is the saving of bandwidth and the improvement of page load times, which is something any mobile developer worth their salt should keep in mind.

Device Detection

I’ve been using services like WURFL and, more lately, DeviceAtlas for some time now and knew that additional information like the physical screen dimensions could be extracted from such a service. Therefore, this provided the constraints for my device-specific maximum image size.

As there are a few device detection services out there, I’m not going to go into the specifics of any one and would encourage further reading in the device detection service of your choice for how to extract such additional details.
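
As a rough illustration of the idea (the capability names below follow the WURFL naming; check the documentation of whichever service you use), the output of this step is simply a maximum width and height for the requesting device:

<?php
// Hypothetical sketch: read the physical screen dimensions out of whatever
// capability data your device detection service returned for the User-Agent,
// falling back to a sensible desktop default when nothing is known.
$capabilities = array(                 // example result from a detection lookup
    'resolution_width'  => 480,
    'resolution_height' => 800,
);

$maxWidth  = isset($capabilities['resolution_width'])
    ? (int) $capabilities['resolution_width']  : 1600;
$maxHeight = isset($capabilities['resolution_height'])
    ? (int) $capabilities['resolution_height'] : 1200;
// $maxWidth and $maxHeight now act as the constraints for image resampling.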

Ticking this sub-challenge off then got me thinking about how image processing software allows you to re-sample images and led me to the next stage of my quest.

Applying an Old Pattern

As with most things in this world of ours, if you abstract your challenge, you’ll be able to find a solution that’s been applied to a similar if not the same problem before. In this case, most Content Management Systems provide a way to generate thumbnails, which gave me the clue to simply re-purpose this existing logic.

For my “weekend projects”, I utilise a very strong PHP-based open source Content Management System called Concrete5. So I decided to check out how it allows you to create thumbnails.

The CMS provides an ImageHelper class for such tasks, which takes a path to an existing image and re-samples the image to a maximum constraining height and width. For completeness, and because this is the real engine room of the solution, the method can be seen below:

/**
 * Creates a new image given an original path, a new
 * path, a target width and height.
 * @params string $originalPath, string $newpath,
 * int $width, int $height
 * @return void
 */
 public function create($originalPath, $newPath,
                               $width, $height) {
     // first, we grab the original image. We
     // shouldn't ever get to this function unless 
     // the image is valid
     $imageSize = @getimagesize($originalPath);
     $oWidth = $imageSize[0];
     $oHeight = $imageSize[1];
     $finalWidth = 0;
     $finalHeight = 0;

     // first, if what we're uploading is actually
     // smaller than width and height, we do nothing
     if ($oWidth < $width && $oHeight < $height) {
         $finalWidth = $oWidth;
         $finalHeight = $oHeight;
     } else {
         // otherwise, we do some complicated stuff
         // first, we divide original width and 
         // height by new width and height, and
         // find which difference is greater
         $wDiff = $oWidth / $width;
         $hDiff = $oHeight / $height;
         if ($wDiff > $hDiff) {
             // there's more of a difference between
             // width than height, so if we constrain
             // to width, we should be safe
             $finalWidth = $width;
             $finalHeight=$oHeight/($oWidth/$width);
         } else {
             // more of a difference in height,
             // so we do the opposite
             $finalWidth=$oWidth/($oHeight/$height);
             $finalHeight = $height;
         }
     }

     $image = @imageCreateTrueColor($finalWidth,
                                      $finalHeight);
     switch($imageSize[2]) {
         case IMAGETYPE_GIF:
             $im = @imageCreateFromGIF($originalPath);
             break;
         case IMAGETYPE_JPEG:
             $im = @imageCreateFromJPEG($originalPath);
             break;
         case IMAGETYPE_PNG:
             $im = @imageCreateFromPNG($originalPath);
             break;
     }

     if ($im) {
         // Better transparency - thanks for the ideas
         // and some code from mediumexposure.com
         if (($imageSize[2] == IMAGETYPE_GIF) ||
                   ($imageSize[2] == IMAGETYPE_PNG)) {
            $trnprt_indx = imagecolortransparent($im);

            // If we have a specific transparent color
            if ($trnprt_indx >= 0) {
                // Get the original image's
                // transparent color's RGB values
                $trnprt_color =
                 imagecolorsforindex($im, $trnprt_indx);

                // Allocate the same color in the
                // new image resource
                $trnprt_indx=imagecolorallocate($image,
                                  $trnprt_color['red'],
                                $trnprt_color['green'],
                                $trnprt_color['blue']);

                  // Completely fill the background of
                  // the new image with allocated color.
                  imagefill($image, 0, 0, $trnprt_indx);

                  // Set the background color for new
                  // image to transparent
                  imagecolortransparent($image,
                                          $trnprt_indx);

             } else if($imageSize[2] == IMAGETYPE_PNG){

                  // Turn off transparency
                  // blending (temporarily)
                  imagealphablending($image, false);

                  // Create a new transparent color
                  // for image
                  $color=imagecolorallocatealpha($image,
                                          0, 0, 0, 127);

                  // Completely fill the background
                  // of the new image with allocated
                  // color.
                  imagefill($image, 0, 0, $color);

                  // Restore transparency blending
                  imagesavealpha($image, true);
             }
        }

        $res = @imageCopyResampled($image, $im, 0, 0,
                     0, 0, $finalWidth, $finalHeight,
                                  $oWidth, $oHeight);
        if ($res) {
            switch($imageSize[2]) {
                case IMAGETYPE_GIF:
                    $res2 = imageGIF($image,
                                   $newPath);
                    break;
                case IMAGETYPE_JPEG:
                    $res2 = imageJPEG($image,$newPath,
                      AL_THUMBNAIL_JPEG_COMPRESSION);
                    break;
                case IMAGETYPE_PNG:
                    $res2 = imagePNG($image, $newPath);
                    break;
            }
        }
    }
}

As you can see from the above code, if you choose to glance through it, the function relies heavily on PHP’s GD library and its ability to extract image info and resample images.

Credit goes to the guys at Concrete5 for the code above.

Actual Usage Scenario #1 – Mobile Devices

I built a desktop website for a good friend and additionally built a mobile site for him as well. I wanted to empower him to manage all editorial changes via the desktop site CMS and re-use image assets where possible for the mobile site. Before many of you responsive evangelists scream “Why did you not just build a responsive site?”, the answer is in part because I wasn’t too aware of the Responsive Web Design movement at the time of doing this particular favour, and my friend and I genuinely thought a mobile-focused site was the right option anyhow. It’s a debate that many still have, and there are many varied factors as to why you would go one way or another that I’m not going to cover here.

Using ‘responsive’ principles, you may typically assign an image element a specific percentage width of the device screen, either directly or indirectly via it taking up 100% of the space given to it by its parent element. Crudely, you could use the above function to re-sample the original image to be no wider than the physical device width reported by the device detection service. This would ensure no bloated images are sent down the connection. This is particularly key when CSS constrains the visible width of the same image on the desktop site and the author/editor has no understanding of the image’s true size and its impact.

This could be taken one step further in that if you knew that the image is constrained to say 25% of the device width, you can calculate this server-side and again use the above function to re-sample a more optimal image for use.
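
As a sketch of what that wiring could look like (the paths are examples, and I’m assuming you are inside a Concrete5 request so that Loader and the ImageHelper shown above are available):

<?php
// Illustrative only: resample the original image to at most 25% of the
// detected device width using the create() method shown above.
$deviceWidth  = 480;                       // from the device detection step
$targetWidth  = (int) floor($deviceWidth * 0.25);
$targetHeight = 10000;                     // effectively unconstrained; width is the limit

$original = $_SERVER['DOCUMENT_ROOT'] . '/files/original/header.jpg';
$resized  = $_SERVER['DOCUMENT_ROOT'] . '/files/cache/header-' . $targetWidth . '.jpg';

$ih = Loader::helper('image');             // Concrete5's ImageHelper
$ih->create($original, $resized, $targetWidth, $targetHeight);
// The <img> tag served to this device can now reference the resized file.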

Actual Usage Scenario #2 – A Constrained Page Element

In another weekend project, a less responsive but pixel perfect design was desired. It was important for the “client” (another friend), that the image elements on any given page of the site, aligned with the baseline rhythm of the page. It was also likely that the images would be uploaded by the author and editor into the CMS and unlikely that these assets were meticulously prepared. In other words, the dimensions of the images that were to form a rotating slideshow would vary but the challenge was to maintain consistency in the slideshow and make all images appear the same size.

The above method for re-sampling images in collaboration with another trick solved this particular challenge.

In this case, instead of the device defining the image size constraints, the page element does.

To visually optimise the available space within the image element, I chose to additionally implement a little logic that wraps the image inside an element whose overflow CSS property is set to hidden and centralises the image within that element using negative margins. This provides a “poor man’s crop” for the image. The logic that decides whether to constrain the width or the height of the original image is slightly outside the focus of this post, so I will leave it out; suffice to say that I simply used PHP’s getimagesize function to understand the original dimensions and do some calculations within the context of the target (containing) dimensions.
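
To illustrate the idea (paths and target dimensions are examples), the wrapper clips the image to the slideshow’s box and negative margins centre it within that box:

<?php
// Illustrative "poor man's crop": scale the image so it covers the target box,
// then clip it with overflow:hidden and centre it using negative margins.
$src       = '/files/slideshow/photo.jpg';
$boxWidth  = 600;   // target slideshow width in px
$boxHeight = 360;   // target slideshow height, aligned to the baseline rhythm

list($w, $h) = getimagesize($_SERVER['DOCUMENT_ROOT'] . $src);

// Constrain whichever dimension needs it less so the image fully covers the box.
$scale = max($boxWidth / $w, $boxHeight / $h);
$dispW = (int) round($w * $scale);
$dispH = (int) round($h * $scale);

// Negative margins pull the overflowing dimension back so the image is centred.
$marginLeft = (int) floor(($boxWidth  - $dispW) / 2);
$marginTop  = (int) floor(($boxHeight - $dispH) / 2);
?>
<div style="width:<?php echo $boxWidth; ?>px; height:<?php echo $boxHeight; ?>px; overflow:hidden;">
  <img src="<?php echo $src; ?>" width="<?php echo $dispW; ?>" height="<?php echo $dispH; ?>"
       style="margin-left:<?php echo $marginLeft; ?>px; margin-top:<?php echo $marginTop; ?>px;" alt="" />
</div>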

The Poor Man's Crop

Although this second approach is not necessarily related to Responsive Web Design directly, it does allow for a level of control over those images that CMS authors and editors may upload without understanding the impact of image size.

Optimising

It goes without saying that when images are being processed and prepared server-side for optimal delivery over the connection between the client and the server, you don’t want to be doing such image processing for every single image every single time. Therefore, in both the scenarios above, I’ve made great use of a server-side cache as well as following best practice to encourage browser-based caching. For scenario #1 above, many devices tend to have similar resolution screens, so grouping devices can further improve things (i.e. serving a 500px wide image to a 480px wide device and allowing the CSS to manage the on-screen size is not exactly terrible).
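
As a small sketch of the server-side part (the helper call matches the Concrete5 ImageHelper from earlier; the rest is illustrative), the resampling is simply skipped whenever a cached copy for that source image and target size already exists:

<?php
// Illustrative caching wrapper: only resample when no cached copy exists for
// this source image at this target size.
function cached_resample($originalPath, $maxWidth, $maxHeight, $cacheDir) {
    // Group devices into width buckets so near-identical screens share one file.
    $maxWidth = (int) (ceil($maxWidth / 160) * 160);

    $key  = md5($originalPath . filemtime($originalPath) . $maxWidth . 'x' . $maxHeight);
    $dest = rtrim($cacheDir, '/') . '/' . $key . '.'
          . pathinfo($originalPath, PATHINFO_EXTENSION);

    if (!file_exists($dest)) {
        $ih = Loader::helper('image');     // Concrete5's ImageHelper again
        $ih->create($originalPath, $dest, $maxWidth, $maxHeight);
    }
    return $dest;  // serve this path, ideally with far-future browser cache headers
}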

Conclusion

There may be other ways to crack this nut and I’m certainly interested to see how CMSs start to address this issue moving forwards but this has worked well for me. I’m looking to further the approach so that I can create a solution for that niggle around images that break the vertical rhythm.

Would love to hear the thoughts of others as well as other approaches to the same challenge. Therefore, feel free to leave a comment.


Doing Good in the Community (and raising Brand Awareness)

“We have become digital crack addicts!”

Within my role as Community Manager for the OpenText Web Site Management (WSM) product, I’m close to celebrating a great milestone – 500 registered users. This is truly significant as the platform is open, meaning you don’t need to register to read the content and use the platform. You do, however, need to register if you would like to contribute to the aggregated feed of external blog posts – called “Community Feed” – contribute to the “Tweet Exchange” Twitter feed, or post to the Forum or Ideas feature. The fact that a vast majority of registered users do not provide their details to contribute to the Community Feed or Tweet Exchange is not necessarily surprising, but a significant number also have not posted to the Forum, which begs the question: why register?

I mention this as it could be inferred as another piece of positive qualitative data – perhaps people simply register to ‘belong’ and affiliate themselves with the community even if they are not participating pro-actively straight away. This qualitative feedback is a great complement to its sibling, quantitative data, a.k.a. metrics.

In some cases it even feels like such qualitative feedback has greater value and context. For instance, the open approach to the community platform was endorsed by some praise provided by a prospect (now a customer) who saw that open, honest, and sometimes critical discussion was ongoing in the platform’s forum. This sounds like it should have been a risk as the dreaded variant of the word criticism was used. What turned it into something positive however, was the fact that this prospect could openly see that there was activity in such discussions, and that any such criticism was used constructively and that the engaged members of the community pulled together on many occasions to share experiences or knowledge around a given point of criticism. Internal OpenText employees along with Partners and Customers have jointly played a role here. This subject of openness and transparency is perhaps a subject for another day.

What does particularly interest me in this space, is how digital marketeers and businesses with online assets in general, have become obsessed with metrics. We have become digital crack addicts!

In many cases this is completely understandable as there can often be a very tangible and clearly measurable route from visitor to lead to opportunity to closed deal within a traditional marketing focused website. But what about community platforms?

How does a conversation between peers in a community platform or a blog post by a customer sharing best-practice knowledge tangibly influence that bottom line? Let’s face it, it is that same ROI challenge around “Social Media” that has been floating around for a few years now and we all know there are no magic rules that provide the answer as the context is all so important.

As I tend to be someone who sees the application of repeatable patterns in everything I do, from Software Development code idioms to Marketing Strategies, I thought there was sure to be a parallel to this challenge, and indeed there is – in the traditional marketing world.

The Traditional Approach

This realisation came to me as I recently visited my home town in the UK and noticed, as I drove to see a friend, that a local roundabout with perfectly trimmed grass and beautiful flower beds also had a sponsor — the local Sports Centre.

This really got me thinking about why the Sports Centre decided to invest in this way and, because of my (digital) crack addiction, about how they can measure the return on that investment.

It is not exactly like the UK’s Health and Safety department would allow the placement of an all-so-trendy QR code on the sponsorship sign situated in the middle of a roundabout on a busy junction — although that wouldn’t surprise me nowadays, as I have seen a few on the back of lorries! I can see the future: “Is this van driven safely? No? Then take a picture with your mobile device whilst driving and let us know!” — I digress.

Maybe the Sports Centre simply wanted to raise a positive profile within the community where many of its clients or potential clients pass through. After all, it was a beautifully kept roundabout that many a competitive gardener would be proud of and perhaps it is that association with something well kept and maintained which inferred a well run Sports Centre.

Why Invest?

Whilst looking for an image to accompany this post, I found the image above, which was a stroke of luck. The sponsors on this road sign happen to be CDS, a long-term well-respected Partner of OpenText based in Leeds, UK and one that I’ve had the pleasure of working with on a number of occasions. Given this coincidence, I decided to reach out to Mike Collier who is CDS’ Technical Director to ask directly about this investment. Here is what he said:

“The advertising on the road sign was all about raising brand awareness and coincided with a branding refresh we undertook a few years ago. This was also coupled with advertising on the back of a bus!

The location of the sign and the bus advertising was significant as it was on one of the main routes travelled by business people, into Leeds. The bus advertising was on a route which circled Leeds and in particular the town centre and the main train station.

I am not sure that we generated any real measurable business from it but it did raise awareness of the brand with a number of our existing customers commenting on it in a good way.

We did have an unexpected piece of good fortune when the bus crashed! (no injuries thankfully) and it was featured on Look North – the local news channel!”

I found this feedback from Mike very interesting as it helps reinforce the question I’m trying to raise in this post.

The Question

Community platforms such as the Solution Exchange are platforms that in the first instance, are there to help serve the community better. Whether that is the aggregation of related articles on a shared context or the sharing and dissemination of best-practice knowledge, the focus is on generating genuine value for end users to help them get their job done without a hidden agenda of lead generation.

Given this thread of thought, is lead generation a feasible goal for such a community of tech-savvy users, who are often abstracted a level or two away from key decision makers? You could track activity at an Account/Company level instead of the individual, but my feeling is that such tracking could come at the cost of user trust — a commodity that is hard to establish but so easy to lose.

What this boils down to, is something very simple — should such community platforms where the intention is to do something good for end users be a Brand Awareness initiative or a Lead Generation/Customer Acquisition initiative?

This question depends on many factors and in particular the context, as many “social” communities can certainly facilitate nurturing prospects towards a conversion goal. A retail brand using Facebook to promote to potential customers presents a contrasting context to that of a multi-product/service enterprise providing value to an existing customer base in an open and transparent way.

Conclusion – Lay off the (digital) crack!

For me, as the Community Manager for Solution Exchange, my focus is on generating genuine value for Users (Customers, Partners, along with internal staff). It is therefore unbelievably clear to me that I am undertaking a Brand Awareness initiative primarily. Yes, lead generation through referrals and soft promotions is and will be possible but it should not take centre stage.

So maybe it is time for us to lay off the digital crack, as it clouds our decision making. Balanced use of quantitative and qualitative data is what is needed here to make educated business decisions. This may not be appropriate for every “community” initiative, but it is one that makes a whole lot of sense to me.

What do you think?


An Open MVC Approach to OT WSM (RedDot) Delivery Server Functionality

This topic has been on my mind for some time now and, inspired by a chat with Dennis Reil, I thought I would get something written down with the aim of harvesting some of the views out there in the community.

The main context for this post is the enablement of Social Media features within an OT WSM project but the pattern described can be equally applied to other forms of integration through the use of the OT WSM Delivery Server.

I’ve long desired a way in which editors can be better empowered, within the constraints of what the site builder/developer has allowed them to do, with regard to features like commenting and tagging etc.  It turns out that the flexibility within the Management Server product provides us this very possibility.

With version 10 of the product came the possibility for a SmartEdit user to drag and drop templates into containers from the panels available within the SmartEdit view.  This was initially focused on the scenario where a SmartEdit user can build up the various content parts of a page, but I’d question why this cannot allow the same user to enable some functional elements within a page too.  Even without the drag and drop, the point of enabling that business user was something that I was interested in looking into.

Therefore, before I detail my proposed strawman, I think it is worthwhile to detail some of the guiding principles that have helped me shape this idea:

All Content in Management Server

This for me is a no-brainer and something that I often pass off as “best practice”.  What I mean here is that everything, as much as possible, should live in the Management Server.  This means the content that is typically unique to Delivery Server should be within Management Server (e.g. XSLT and XML files) and published into Delivery Server.  More specifically still, those XML and XSLT files are set up as Content Classes and instantiated within the project tree structure. This provides the following benefits:

  • Keeps all assets together in a single repository
  • Allows the utilisation of version control within the Content Classes of this content
  • Allows for the possibility to parametrise elements within the templates through placeholders
  • Allows for the ability to permeate the setting of certain placeholder values through to SmartEdit users
  • A single project that can be published to have all set up within Delivery Server (although Delivery Server project and system config needs to be managed directly within Delivery Server)

Utilise the Existing Skillsets of Site Administrators and Developers

This is another important one for me: ensuring that those wishing to adopt new features don’t suffer from the fear of learning another skill, by facilitating the rollout of these features through the knowledge they already have.

Adopt an MVC Approach

Why is this important? Well, this well-established, tried and tested pattern is there for a reason, and you’ll be able to search for it easily if you haven’t come across it before.  It nicely separates the responsibilities within the feature “module”, and you’ll see how this separates out into different CMS pages or elements in the solution, allowing access to the various parts to be constrained if need be.

An Open Approach

This one should be obvious and is actually related to the point about skills above.  Encapsulation is a good thing when used right, but when something that can and often needs to be customised is shut away behind what appears to the user as a black box, then that task has just got harder.  Therefore, an open approach of providing access to the various parts if needed is important.

The Provisional Proposal

The essence of this proposal is the creation of a feature module made up of different Management Server components:

  • A Configuration/Controller Content Class
  • A Controller/Model Content Class
  • A number of View Content Classes

It looks like I’ve sat on the fence with the “Controller/Model” part above so I’ll explain the purpose of each of the above Content Classes:

Configuration/Controller Content Class

From a SmartEdit user’s perspective, this is the main Content Class that contains the relevant enabling code/content for the feature.  It is this Content Class that can be dragged into a container on the page for instance.

Within this Content Class, several placeholders can be exposed to the SmartEdit interface allowing control of various feature parameters to a relevant level of user.  For instance, if the feature shows a list of comments and a comment form then a parameter may allow the user to set a “refresh time” for the comments, which translates in the technical world to how long the resultant calls under the covers are cached.

In principle, this Content Class refers to two other Content Class instances (actual CMS pages) – the XML Model and the XSLT defining the View.  In the simplest case, this may just contain a single include DynaMent:

<rde-dm:include content="anc_linkToXML" stylesheet="opt_listOfViews" 
                cachingtime="stf_refreshTime" />

It can be imagined that the option list and the standard field could be exposed through SmartEdit to allow user control.  If the option of the view is not to be given to the user, then an anchor placeholder can be used and a pre-assigned reference to the chosen view instance utilised.

Controller/Model Content Class

OK, so this Content Class is part Controller and part Model, and the reason is that it contains the controlling code to invoke a given feature while the resultant XML provides us the model, which is the input to the view.

Typically, it is this Content Class that encapsulates the Delivery Server DynaMent language functionality, and with OpenText’s Social Communities product, this will be using the HTTP DynaMent, which I have to say is a refreshing and strong addition to the product.

View Content Class

This is simply the XSLT that transforms the output XML from the feature into your resultant format.  Let’s keep it simple and assume we are generating an HTML result here.  One or more views can be created if you want to provide different ways of using the model data. Of course, if it is just look-and-feel changes you want to let your users control, then this may be better implemented in CSS.  The various XSLT Content Classes are for cases where the results are used in fundamentally different ways.  An example that I’ve often used is when a feature should return XML or JSON – that’s simply a different XSLT file achieving this.

The Value

The value of such an approach is that it enables those with the relevant knowledge to encapsulate examples for others to use.  It therefore empowers business (SmartEdit) users to be able to choose functionality within certain sections of a page – for instance, a user can drag and drop comments or ratings onto an article page.  Finally, it shows an open approach for how such features can be enabled using elements that admins are familiar with – Management Server Content Class templates.

The Next Step: Your ideas!

In the first instance, I would like to understand people’s views on this with the intention to conclude the proposal by making a suggestion to how such a module can be packaged.  I would like to somehow make it possible that an admin can import the module into the Management Server and from there, complete a couple of minor configuration steps and then the module’s feature is available to the business user wherever the admin enables it.

Therefore, leave a comment or join the conversation at http://www.solutionexchange.info/forum.


Canonical URLs and SEO

As I recently made a foolish mistake, I thought I would share it to help others avoid it in the future.  It was to do with my quest to get certain pages of the Solution Exchange Community platform indexed in Google, Bing, and Yahoo etc.  Specifically, the valuable forum threads.

First of all, it is worth mentioning how these threads are delivered.  The forum itself is an object of the OpenText Social Communities (OTSC) product, which interacts with the Delivery Server through the OTSC XML API.

Therefore, the forum thread pages are dynamically delivered with the shell of the page being the same physical page with the content influenced by parameters.  In this case, I’ve chosen to utilise sensible URL structures that contain the parameters for simplification and SEO.  I mention more about this in this forum post.  The use of rewrite rules in this way for SEO is one of the key values of a Front Controlling Web Server.

As the shell of the page is the same, I initially had the same <title> tag for all threads and thought that this was the problem.  After changing to adapt the <title> value to the title of the forum thread (along with waiting for re-indexing to happen) there was no change.

Finally, through checking the index of Solution Exchange on Bing with a “site:” search, I noticed to my surprise that one of the threads was indexed but was associated with the URL http://www.solutionexchange.info/forum.htm!!!  This was strange due to the fact that externally, the forum thread was only accessible through a URL like http://www.solutionexchange.info/forum/thread/{ID} meaning that I must be explicitly telling the search engines the wrong URL.  

This was the clue I needed to realise that my problem was due to something I had implemented many months before.

To address the potential SEO penalty of the home page of the community being reachable through both http://www.solutionexchange.info/ and http://www.solutionexchange.info/index.htm, I introduced the use of the following HTML header link tag – the example below is the home page value, but I included this across the whole site:

<link rel="canonical" href="http://www.solutionexchange.info/index.htm" />

You can read more about this on the Official Google Webmaster Central Blog.  In summary, it tells the search engines that this page is to be associated with the given URL and page ranking (or “Google juice”) is to be associated with that and not the entry URL that the crawler bot used.  This avoids the possibility of page ranking for the same page being split across two or more URLs or being penalised for duplicating content across multiple URLs.

With this knowledge, I was able to update the page template that houses this dynamic content to form the correct URL within this canonical link. Now it’s back to the waiting game to see if the indexes will pick up the content and forgive me for positioning different pages as one.
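
The real template builds this inside Delivery Server rather than in PHP, but as a plain illustration of the principle, the canonical tag just needs to be generated from whatever identifies the content being displayed – in this case the thread ID that the rewrite rule passes in:

<?php
// Plain-PHP illustration only (the actual site does this in a Delivery Server
// template): build the canonical URL from the thread ID, not the physical page.
$threadId = isset($_GET['threadId']) ? (int) $_GET['threadId'] : 0;

$canonical = $threadId > 0
    ? 'http://www.solutionexchange.info/forum/thread/' . $threadId
    : 'http://www.solutionexchange.info/forum.htm';
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />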

Although a small detail, the end goal and potential gain is huge as it opens up the rich content that continues to grow within the forum for discovery via the big search engines.  This in turn will only help those within the wider community who are not aware of Solution Exchange discover the content, which may help them resolve an issue or encourage them to take part in the community platform moving forwards.

As always, leave a comment or get in touch if you have any questions.


A Mobile Approach in Concrete5

To compliment my short post on Multi-Site deployment of Concrete5, I thought I would add to it with a candidate approach for generating a mobile site.

Assumptions

With this approach, I’m assuming a setup of a mobile site on a subdomain as opposed to sub folder.  I would consider this best practice as URLs stay consistent between the desktop and mobile site, which is encouraged in the Mobile Web Best Practices Guidelines from the W3C (http://bag.gs/gReORJ).

I’m also assuming that the desktop site has been created already.  I appreciate that this may not always be the case and that in some circles, it is encouraged to consider mobile first.

1. Set up another site on your mobile domain

Follow the steps in Multi-Site deployment of Concrete5 for your mobile domain.  In my case, I’ll use the example m.example.com but do not go back to the basic install steps at the end and do not set up your site by providing the name, URL, and database details.

2. Share the Desktop Site Content

With thanks to the great MVC design of the Concrete5 product, all the content resides in the database away from the templates and the controls, which affect the view or presentation.  Therefore, sharing the database between sites is like sharing the content.

So, open up your site.php (e.g. /var/www/m.example.com/config/site.php) and simply copy into it the contents of the site.php from your desktop site.  This means you bypass the latter install steps.
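
For reference, a Concrete5 site.php of that era is little more than the database connection settings and the salt, so sharing it is what points both installs at the same content. The values below are placeholders:

<?php
// /var/www/m.example.com/config/site.php - copied from the desktop site so that
// both installs read the same database. All values are placeholders.
define('DB_SERVER',     'localhost');
define('DB_USERNAME',   'c5user');
define('DB_PASSWORD',   'secret');
define('DB_DATABASE',   'example_site');
define('PASSWORD_SALT', 'same-salt-as-the-desktop-site');
// If your desktop site.php also defines BASE_URL, point it at the mobile domain
// here (e.g. http://m.example.com) so that generated links stay on this site.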

3. Themes, Controllers, and Templates

Now that we share the content, we have a blank canvas for how we would like it to appear.  I actually take a copy of the desktop site files as a starting point, as I can then build the various mobile optimisations into them and at least start from a point where all the theme templates and blocks exist so nothing breaks.

So at this point, the site will be effectively a copy of the desktop site but on a different domain.

4. Device Detection

There are different services available for device detection, but they are all oriented around detection through reading the User-Agent HTTP header.

I ended up using Tera WURFL (http://bag.gs/ewCsoc) as I didn’t want to pay for something and thought that I could manage the manual updates OK.  This is working out fine for me, but your needs may be different.

Based on the simple example available on the Tera WURFL site (http://bag.gs/fKsG2X), you can extract various properties about the device and make a decision about how you want to serve that device.

I chose to categorise devices based on certain screen resolutions and other capabilities, meaning that I ended up with 3 different device profiles to serve: Basic, Intermediate, and Advanced.  From this, I can make decisions based on the profile within my templates, as in the sketch below.
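
To give a feel for this step, here is a rough sketch based on the simple Tera WURFL example linked above (method and capability names may differ slightly between versions, and the thresholds are just examples):

<?php
// Rough sketch of the profiling step using Tera WURFL.
require_once 'TeraWurfl.php';

$wurfl = new TeraWurfl();
$wurfl->getDeviceCapabilitiesFromAgent($_SERVER['HTTP_USER_AGENT']);

$width  = (int) $wurfl->getDeviceCapability('resolution_width');
$ajaxOk = (bool) $wurfl->getDeviceCapability('ajax_support_javascript');

// My three buckets - pick thresholds that suit your own designs.
if ($width >= 480 && $ajaxOk) {
    $profile = 'advanced';
} elseif ($width >= 240) {
    $profile = 'intermediate';
} else {
    $profile = 'basic';
}
// $profile can now drive decisions inside the theme and block templates.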

Summary

I don’t make any claims for this being the best approach, but I sure would like to get the conversation going about it.  I know there are other smart ways where people change the theme based on the device (in comparison to changing the domain based on the device, as shown in this post).

As mentioned earlier in the post, I like this approach as it keeps URLs consistent between the main and mobile sites.  Additionally, block templates are not tied to a theme, so the view.php or custom template for a block on the desktop version could be totally different from the mobile version.  This also means that you can use Concrete5’s great image helper in such templates to help optimise image sizes for mobile, as in the sketch below.
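
As a quick sketch of what that can look like in a mobile custom template (assuming $f is the File object the block already provides and $profile comes from the device detection step above):

<?php
// Sketch of a mobile block template: ask Concrete5's image helper for a
// device-sized thumbnail instead of outputting the original file.
$maxWidth = ($profile == 'basic') ? 160 : 480;    // example widths per profile

$ih    = Loader::helper('image');
$thumb = $ih->getThumbnail($f, $maxWidth, 10000); // height effectively unconstrained
?>
<img src="<?php echo $thumb->src; ?>" width="<?php echo $thumb->width; ?>"
     height="<?php echo $thumb->height; ?>" alt="" />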

Please feel free to leave comments and questions and I’ll do my best to answer or improve the post.