A Multi-Site Approach in Concrete5

After seeing that many people were having trouble understanding how a single core Concrete5 installation can run multiple sites, I thought I would write this post as a strawman to help facilitate the discussion of how it can be done.

I run a server in the cloud using Amazon’s EC2 service and utilise an Ubuntu server with the usual LAMP setup.

1. Core Installation Preparation

To start, I have my core Concrete5 installation files in my default web site folder:

/var/www/default/concrete5.4.0.5

I keep the version number in the folder name as this allows different sites to use different cores if you wish.  How this is achieved will become clear later in the post.

This basic setup of copying files is covered on the concrete5 site (http://bag.gs/hEWOT9) so I'm not going to repeat it here, but it is important to say: just copy the files for now.

N.B. Don’t go to the installed location in a browser yet – just copy the files.

2. Set up the domain that you want to host

This means setting up a separate virtual host in Apache, or however you'd prefer to do it.  A search (http://bag.gs/heaC2L) will surely find you some assistance here.

The end result will be a virtual host that corresponds to your domain (e.g. http://www.exampledomain.com) and a separate directory on your server from which that site is served.  In my case, this directory is:

/var/www/example.com
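
A minimal Apache virtual host for this might look like the following (domain and paths as per my example; adjust to suit your setup):

<VirtualHost *:80>
    ServerName www.exampledomain.com
    DocumentRoot /var/www/example.com
</VirtualHost>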

N.B. I’m assuming that you’ve purchased your domain and have all the right DNS entries set to point to your server.

3. Set up site specific structure

This step is reasonably simple.  We are going to copy the entire contents of the core installation folder (concrete5.4.0.5 in my case) over to the new site directory so that we have the following:

/var/www/example.com/blocks
/var/www/example.com/concrete
/var/www/example.com/config
...
/var/www/example.com/updates
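
For reference, the copy itself is a one-liner on my setup (the trailing /. ensures the folder's contents, including any hidden files, are copied rather than the folder itself):

...:/var/www$ cp -R default/concrete5.4.0.5/. example.com/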

The copy brings across what are, for the most part, empty directories, along with the inner concrete directory.  The root index.php is important, as are the couple of files in the config directory, so make sure you take those across.

As concrete5 is well designed, the guys who created the product have nicely permitted the extension of almost everything through the directory hierarchy.  This means that, as site developers, we should never touch the contents of the internal concrete directory; in the above case, the /var/www/example.com/concrete directory.  Consider it sacred!

If we obey this principle of not messing with anything in the core concrete folder (which helps us all upgrade), you might notice that there is no value in the …/example.com/concrete folder being a copy of the …/default/concrete5.4.0.5/concrete folder, and you'd be correct.

Therefore, we are going to delete the /var/www/example.com/concrete directory and replace it with a symbolic link.  It is the symbolic link that allows us to point to different cores for different sites.

The syntax for the symbolic link when in the /var/www/example.com directory is:

...:/var/www/example.com$ ln -s ../default/concrete5.4.0.5/concrete concrete
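
As an aside, this is also how a site can later be moved to a newer core.  If a hypothetical concrete5.4.1.0 were copied alongside the existing core, switching this one site over would just be a matter of re-pointing the link:

...:/var/www/example.com$ rm concrete
...:/var/www/example.com$ ln -s ../default/concrete5.4.1.0/concrete concrete

Each site's link can target whichever core version suits it, which is why keeping the version number in the core folder name pays off.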

You're Done!

Seriously, that’s all.  You can then pick up the basic setup (http://bag.gs/hEWOT9) at the point where you navigate to the new site, enter the site name, URL, and Database settings and away you go.

This small amount of preparation allows you to scale a single core instance of Concrete5 across multiple sites, each with its own database.

It would be great to get people’s views on this as I’m sure there are areas to improve or things that I’ve not considered.  Therefore, please leave a comment with your feedback.


Why I’m not a Community Manager (although sometimes I say I am)


Whilst away on holiday in Morocco, in between a little food poisoning and camel riding, I managed to complete another book that had been on my 'to read' list for some time.

Seth Godin's book – Tribes (http://amzn.to/cX9JpK) – is a worthwhile quick read.  Although I didn't find the content as valuable as Clay Shirky's Here Comes Everybody (http://amzn.to/8ZyTXi), motivationally it was great, as it helped me verify some thoughts and ideas that had been lingering for some time.  The premise is that groups of people come together all the time around a common idea, and that the drop in transaction costs to form groups or stay connected (i.e. through the use of Social Media) has made the effort of management largely unnecessary.

Instead of 'Management', what is needed to give a particular "tribe" direction and guidance is leadership, and it is this that I prefer to think of as my role within the Solution Exchange platform (as well as Jack of all trades, master of none).  This is relatively easy when the tribe is full of talented and gifted individuals and companies who innovate and lead every day – and no, this is not some form of cringeworthy ass-kissing.  In this case, "leadership" tasks are mostly listening tasks.  That is of course a slight simplification, but for the most part it is true.  Listening is the consumption of audible information; observing the industry in which we all work is also a form of inward consumption, one where many input sources are used.  Choosing to implement ideas within the Solution Exchange, or simply facilitating more meaningful discussion for our customers (sometimes one leading to the other), is very valuable, and it is this that I shall continue to try to take the lead on.

So, given all this, what am I saying? Maybe a manifesto is required? I would like to continue to encourage discussion with those in the community, which the Solution Exchange platform is attempting to unify and connect, in order to shed light on examples of how customers are using, and can better use, Open Text products.  This in turn will help raise the profile of leaders within the community who are already doing great work and have done for years.  I've already established some great connections with some colourfully talented people in the last few months and I'd like to start putting some of these people (and companies) on a pedestal.  Lastly, and most importantly to me, I would like to continue to lead by example and listen to the community to hear how improvements can be made, and to connect the right people to take part in these discussions.  These conversations are so valuable because people inherently like to be listened to, especially when they see that someone has taken action as a result.  In my opinion, some of this is already happening and will continue to happen more and more, adding value to the community initiative.

One final question: what is your part in this? Simple – contribute, discuss, and engage.  Please feel free to reach out to me to discuss what you think is right or wrong.

Twitter: DannyBaggs
Solution Exchange Feedback: www.solutionexchange.info/feedback

Thanks for listening!

Danny Baggs
Community Leader


Moving Open Text Delivery Server to Common Search

As part of the small team behind the Solution Exchange, I was somewhat dreading the day when I had to change the internal search engine over to Open Text Common Search on the Web Site Management Delivery Server.

However, in all honesty, it was not the complex configuration exercise I was expecting, and I will explain the steps I took.

The Delivery Server Common Search Connector

  1. The first step is to install the Open Text Common Search product.  I was fortunate enough to have this already in our infrastructure so didn’t need to do this step.
  2. Assuming Common Search is installed, log into Delivery Server, navigate to Connectors > Search Engines > Administer and click the import button.  Version 10.1 of Delivery Server has a pre-configured connector that you can use.  Click the OTCommonSearch link to import the connector.
  3. Change the URL of the Common Search Server to the IP of your Common Search machine.
  4. Change the “Incoming directory of indexing jobs” to a shared folder.  This is a path as seen by the Delivery Server.  I’ve chosen to place this on the local machine of the Delivery Server and share that with the Common Search machine.
  5. Change the “Incoming directory of Common Search server” to point to the same directory as above, but from the perspective of the Common Search machine.  I initially had problems here as the Delivery Server and Common Search were in different Windows domains.  We changed this anyhow to reflect better practice in our setup.
  6. Create the shared folder if you haven’t already and make sure both the Delivery Server and Common Search have read/write access.
  7. You’re done!

It was really that easy! (well, if I discount the delays caused by not being able to share directories effectively across Windows domains at first).

Finally, it is worth pointing out the tweaks I made to my queries for the new Search Engine.

When searching specific groups, you can now use the syntax:

group:<ds_group_name>

and

attributeName:'[#request:attributeExample#]'

for attributes.
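
To illustrate how the two slot together, a single query combining both might look something like this (the group and attribute names are entirely hypothetical, and I'm assuming the engine's boolean syntax here):

group:solutions AND title:'[#request:searchTerm#]'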

Admittedly, I didn't need to do anything more complex than this, so there were not a lot of queries to change.

There may be more complex examples out there, but the key message is to start planning your changeover now as it might just be easier than you think!

As always, please leave your questions or comments.


What the **** is Social Media?

The following slide deck from Marta Kagan is, in my opinion, one of the best I've seen to date on the subject of Social Media: partially because of its engaging format and eye-catching messages, but also because it is well researched.

After reading through this slide deck, and only imagining how great the presentation would have been live, it got me thinking about a very relevant point made in Clay Shirky's book, Here Comes Everybody.  He observes how Social Media, and the communities formed through these 'new' tools, actually lowers the cost of failure.  This is particularly relevant in the context of an example Social Media campaign where those in the community are empowered to create short videos of, say, themselves using a product.  From tens, hundreds, or even thousands of cheaply created contributions, many are going to be poor, some OK, and a minority fantastically engaging.  This power law (reverse exponential/long tail) shows how Social Media lowers the cost of participation to increase contributions, and therefore increases the likelihood of discovering that golden piece of content that casts a large shadow over the others and does more good than all the rest put together.

Naturally, there are also risks involved, as the potential negativity is large too.  However, I don't fear this as I've come to think that the nature of Social Media is a leveller, a regulator of behaviour.  If you are seen to be pushing your brand unethically, or are self-obsessed without desiring to understand the true value of your offering, then you'll be found out, and Social Media will provide a platform for people to call you out and damage your brand.  If you're honest about the mistakes you make and open about what you are trying to achieve, you'll be supported, and in ways you never thought you would be.

The main takeaway from all this for me is the affirmation that we should all be empowering the communities that exist around our brands.  Whether large or small, it is the community that contains a brand's most powerful "brand ambassadors".  Giving them a voice and listening to what they have to say is far more powerful than making isolated decisions.  I remember someone once saying to me, "never assume you know more than your audience".  In today's social online world, never has that been so true.

Enjoy the presentation!


Automatic Translation with Google Language API

Language can be a huge barrier when you need to help a multi-lingual community interact better without intimidating one region or another.

I've recently started to investigate what can be done about this challenge, as it is a very real problem for me and our http://www.SolutionExchange.info community platform.  Aggregation of user-driven content can be a great thing, but common publication processes like editing and translation are bypassed.  The availability of tools that help an individual publish his or her thoughts and opinions is, for the most part, a good thing: it allows people to interact more quickly and easily, removing the barriers that once prevented any kind of sharing or interaction (e.g. you were never able to publicly comment on a newspaper article or spread a story without significant effort and cost).

With a wide and varied community, I investigated the use of the Google Translate API, accessible via the Google AJAX Language API, to start a trial to see how this automated process can help our users gain some context about content that may not be written in their mother tongue.  What is particularly useful is that the API can detect the source language automatically, which is great when you have many languages within many sources.
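
For the curious, a minimal sketch of the kind of call involved is below (the function and variable names are illustrative; passing an empty source language is what triggers the auto-detection):

google.load("language", "1");

function translateItem(text, targetLang, element)
{
    // an empty source language asks the API to auto-detect it
    google.language.translate(text, "", targetLang, function(result) {
        if (!result.error) {
            // mark the text as auto-translated (see the asterisk note below)
            element.innerHTML = result.translation + ' *';
        }
    });
}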

The trial starts on the 6th August 2010 and I would like to run it over the course of a month to see whether this prototype evolves into something valuable for some of our users.  The feature can be seen in the footer of the site http://www.solutionexchange.info and must be invoked manually, as no choices are currently remembered.  This was a deliberate design choice, as I was keen to ensure users pro-actively decided to try out the feature rather than become confused by auto-translated content that they had not expected.  Auto-translated content shows up appended with a green asterisk to indicate that the related text has gone through automatic translation.  Currently, Tweets, Solution Descriptions, and Community Feed items are just some of the sections under trial, but this can easily be extended or refined depending on feedback.

I'd like to extend and improve this trial, so I'd happily take feedback here or through the feedback form on the site at http://www.solutionexchange.info/feedback.htm.

If you have any questions then feel free to pop them in a comment below.


When jQuery callbacks don’t appear to work in IE7

I recently encountered a frustrating issue where a callback function from a jQuery .get() call was not being fired in IE7 but was working in other browsers, including IE8.  I had a hunch that it was related to the data object being used to transport the returned JSON, but wasn't sure how.

After hitting my head against a brick wall and following a number of dead-end forum threads, I worked it out the old-fashioned way, which is why I thought I'd write up my findings in this short article.

I was making a request using the jQuery .get() function that was returning JSON, which was perfectly valid (I checked with JSONLint.com).  However, upon inspecting the response in the awesome Chrome Developer Tools, I saw a nicely formatted JSON string in the response content… and that was the problem.  It was a nice, human-readable structure… and IE7 doesn't like that!

Removing all redundant whitespace on the server side when forming my JSON string resolved my issue and my callbacks were once again being called within IE7 along with all other browsers.
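
If, like me, you are assembling the JSON on the server, the simplest cure is to let a library do the formatting.  In PHP, for example, json_encode() emits a compact, whitespace-free string (a sketch with hypothetical data):

<?php
header('Content-Type: application/json');
// json_encode() produces a single-line JSON string with no redundant whitespace
echo json_encode(array('status' => 'ok', 'count' => 3));
?>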

Leave a comment if you have any questions.


IIS7, Tomcat & Application Request Routing

Further Update: 27th June 2011

Another update on this topic. If you were making use of custom error pages in IIS7 and you implemented the update below, you may have noticed that the custom error commands are no longer being adhered to. To fix this, you need to set up custom error pages at a site level by choosing your site, selecting “Error Pages”, then “Edit Feature Settings” from the action menu, and then “Custom error pages”.

Important Update: 22nd June 2011

On page 2 of this article (How To Configure IIS 7.0 and Tomcat with the IIS ARR Module), there is a key step that I failed to observe when I wrote the original post below.  The step in question is the enablement of the (reverse) proxy server after the ARR install.  By doing this, you are able to apply rewrite rules at the site level — something I wasn’t able to achieve originally, which meant that the routing rules within my server farm were somewhat overloaded.

With this setting enabled, I can leave a single delegation rewrite rule at the server farm level, telling IIS to delegate HTTP requests of a certain pattern but leave the rewrite rules that are there for beautification at the desired site level.  This is a much tidier and more scalable approach.

One gotcha to be aware of is that the rewrites at the site level need to be absolute URLs.  You could be tempted to put the host of the single Tomcat instance behind IIS directly in here, and it would work fine.  But why not allow for a little future proofing and use localhost within all absolute URL site-level rewrites?  This isolates the rewrites used for masking ugly application URLs and leaves the job of request delegation to the server farm.  That way, the server farm config can be used to bring other Tomcat instances online, or take them offline for maintenance etc., without having to change the site-level configuration.  In other words, it keeps the various areas of the IIS7 interface focused on the job in hand, allowing for easier administration.
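
As an illustration, a site-level beautification rule of the kind described might look something like this in the site's web.config (the pattern and path are examples only, and <project> is a placeholder as elsewhere in this post):

<rewrite>
  <rules>
    <rule name="Beautify HTML requests" stopProcessing="true">
      <match url="^([^/]+\.html?)$" />
      <action type="Rewrite" url="http://localhost/cps/rde/xchg/<project>/default.xsl/{R:1}" />
    </rule>
  </rules>
</rewrite>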

Please keep this update in mind as you read the otherwise unchanged original post below.

Regards,

Dan

After many years of using the Tomcat Connector (http://tomcat.apache.org/connectors-doc/) when setting up Tomcat behind IIS, it is now time to say goodbye.

This is the conclusion that I’ve come to after having some particularly significant challenges using IIS7 on a 64bit Windows 2008 machine.

The traditional approach I’ve used in the past has been to utilise the Tomcat Connector, which is implemented as an ISAPI Filter, to delegate requests from IIS through to Tomcat.  This has worked great for me in the past and was the subject of a previous article (http://bit.ly/lp6zW) but the 64bit system threw in a couple of additional challenges that weren’t so easy to get around.

The problems faced led me to discover Application Request Routing (ARR), an official extension for IIS7, which allows you to define the delegation of requests to servers sitting behind the IIS instance.

What is particularly nice about this extension is the way it surfaces the former approach within the GUI, making it easier to understand what is being delegated.  The approach itself, however, is similar to the ISAPI filter approach – delegating based on URL path patterns.

The following takes you through an overview of how to set this up:

1. Install ARR

You can obtain the appropriate install for the ARR IIS7 extension at http://www.iis.net/download/applicationrequestrouting

Once installed, the ‘Server Farms’ node indicates that ARR has installed correctly, as shown in the picture below.

[Image: ARR Install – the Server Farms node is seen if ARR is installed correctly]

A number of modules are added as part of this extension.  You can find the details of these at the same ARR link (http://www.iis.net/download/applicationrequestrouting).

2. Create Server Farm

Although the concept of a ‘farm’ of servers may be overkill for our needs of delegating HTTP requests through IIS7 to Tomcat, we shall nevertheless set up a farm containing one server – our Tomcat instance.

To do this:

  1. Highlight the ‘Server Farms’ node in the left panel of the IIS7 Management Console.
  2. Choose ‘Create Server Farm’ from the right hand side action menu.
  3. You will be prompted for a name for the farm.  For my needs in setting up the Open Text Delivery Server behind IIS7, I gave the farm the name ‘Tomcat – Delivery Server’.

     [Image: ARR Server Farm Name]
  4. You will then be prompted to set up a server in the farm.  In our case, we are just going to select the localhost instance of Tomcat running on port 8080. To specify the port, open the ‘Advanced settings’.  Strangely, there appears to be no easy way to edit a server's port once set up, so make sure you get it right, otherwise you will have to delete the server and add a new one.

    [Image: ARR Add Server – make sure you open the Advanced settings to edit the port number]

3. Configure the Routing Rules

Now that we have informed IIS7 about the server that sits behind it, we need to let it know how we wish to delegate HTTP requests to it.  To do this, we choose the newly created Server Farm in the left-hand panel and select the Routing Rules feature.

[Image: ARR Routing Rules]

Within here, we have a few options.  I've chosen to keep the defaults of having both checkboxes checked and have no exclusions set, as I am delegating this responsibility to the URL Rewrite rules.

From here, you can add and modify the rewrite rules defining how requests are delegated using the ‘URL Rewrite’ link in the right-hand action panel.

In my case, I chose to change the default rule that was set up for me to use a regular expression as opposed to the wildcard default.  However, this was purely personal preference.  The pattern I used for this rule is:

cps(.+)

and I ignore the case.

Finally, I have no Conditions or Server Variables to take note of in my scenario, although they can easily be added here, so I conclude the rule by setting the action to ‘Route to Server Farm’ and choosing my ‘Tomcat – Delivery Server’ farm with a path setting of

/{R:0}

This passes all URL path info through to Tomcat.  I also choose to stop processing of subsequent rules.
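
For reference, behind the scenes this farm-level rule is persisted by the URL Rewrite module in applicationHost.config as something roughly like the following (the rule name is auto-generated from the farm name):

<rule name="ARR_Tomcat - Delivery Server_loadbalance" patternSyntax="ECMAScript" stopProcessing="true">
  <match url="cps(.+)" ignoreCase="true" />
  <action type="Rewrite" url="http://Tomcat - Delivery Server/{R:0}" />
</rule>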

4. Refine Rules for your Environment

Lastly, in my setup, I’ve added the following further rules to refine how my site is served through IIS7:

Delegate .htm and .html requests:

Pattern - ([^/]+\.html?)
Action path - /cps/rde/xchg/<project>/default.xsl/{R:1}

Delegate .xml requests:

Pattern - ([^/]+\.xml?)
Action path - /cps/rde/xchg/<project>/default.xsl/{R:1}

Delegate default home page

Pattern - ^/?$
Action path - /cps/rde/xchg/<project>/default.xsl/index.htm
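
Putting these together: a request for a hypothetical http://<host>/about.htm matches the first pattern, {R:1} captures about.htm, and the request is routed to the farm as /cps/rde/xchg/<project>/default.xsl/about.htm, while images, CSS, and other static assets fall through to be served by IIS directly.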

Summary

Although this approach of using IIS7 in a reverse proxy capacity may not benefit from the efficiencies of the AJP protocol used by the Tomcat Connector, the impact on most sites will be negligible.  In exchange, you have Tomcat and IIS7 working together in a way where the GUI of the IIS7 Management Console helps admins define and understand what is happening.  The ISAPI Filter approach is often not so visible, partly because of the broad nature of what ISAPI modules can provide, but also due to the configuration required outside of the IIS7 Management Console.

As always, if you have any questions, leave a comment.


Progressive Enhancement via AJAX

I have recently been curious about how a normal web site, with various posts and page reloads, could be improved by AJAXifying it (yes, I did mean to say that – you get what I mean), i.e. introducing AJAX calls to improve the smoothness of the user experience and minimise page reloads in key areas of a site.

In particular, my understanding of JS frameworks such as jQuery, based on the examples I'd seen, was that form submissions were tied to specific knowledge of the form that the event was bound to.  For instance, when you bind the onSubmit event to a particular form, the many examples out there show functions that then pull content from the form through something like the following selector:

var inputVal = $('form input[name=user]').val();

This is OK for specific cases like a registration or contact form that tends to be unique on a page, but what if there were multiple similar forms on a given page and you simply wanted to submit each form's data to the same URL as the standard form submission, just through an AJAX HTTP request instead?

Somehow, having this sort of specific knowledge from within the handler function didn't feel right to me, so I set off with the goal of finding out how to get the related form data from the information passed to the event handler function by default, without any extra manual passing of data.

This led me to the jQuery Event Object (api.jquery.com/category/events/event-object/), which, through its target property, provides a reference to the DOM element that initiated the event, a.k.a. the element that I bound the event to.  This provided the key piece of information I was missing.

Let's take the following HTML code snippet, which has three similar forms on a single page, as an example:

<form name="form1" action="/getSomething" method="post">
  <input type="text" name="input1" />
</form>
<form name="form2" action="/getSomethingElse" method="post">
  <input type="text" name="input2" />
</form>
<form name="form3" action="/getAnotherThing" method="post">
  <input type="text" name="input3" />
</form>

Taking the above example, we can bind the event to all three forms in one go with the following:

$('form').submit(getSomethingFunction);

As the event object is passed to the handler function, we can extract the specifics of the form in question from within the function:

function getSomethingFunction(eventObject)
{
    var formName = eventObject.target.getAttribute('name');
    var formAction = eventObject.target.getAttribute('action');
    ...  
    return false;
}

We can then initiate our post request using the jQuery serialize() function (api.jquery.com/serialize/):

function getSomethingFunction(eventObject)
{
    var formName = eventObject.target.getAttribute('name');
    var formAction = eventObject.target.getAttribute('action');
    $.post(formAction, $('form[name=' + formName + ']').serialize(), callbackFunction, 'json');
    return false;
}
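
The callbackFunction referenced above is simply whatever you want to do with the JSON response.  As a minimal sketch (the success and message fields are hypothetical and depend on what your server returns):

function callbackFunction(data)
{
    // data arrives already parsed thanks to the 'json' type in $.post
    if (data.success) {
        alert(data.message);
    } else {
        alert('Request failed: ' + data.message);
    }
}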

As you can see, jQuery simplifies this type of challenge with only a small amount of easy-to-follow code, allowing you to re-use the same (pre-AJAXified) server-side code.

In my real case, I appended ?format=json to the post URL when calling via AJAX so that my server-side PHP script knew it didn't need to send a full HTML page back as a response, and instead sent a JSON-formatted success/failure message.
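
On the server side, that flag only needs a simple branch; a sketch of the idea in PHP (the response fields are hypothetical):

<?php
// when called via AJAX, return a JSON status instead of the full page
if (isset($_GET['format']) && $_GET['format'] === 'json') {
    header('Content-Type: application/json');
    echo json_encode(array('success' => true, 'message' => 'Thanks, got it!'));
    exit;
}
// ...otherwise fall through and render the full HTML page as before...
?>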

From this small investigation, I’m now interested in understanding what frameworks are out there that facilitate this type of progressive enhancement approach and utilise a widely adopted JS library such as jQuery.  Please leave a comment if you have any tips or advice.


Open Text Delivery Server with a Front Controlling Web Server

Overview

This post discusses best practice for deploying Open Text Delivery Server optimally alongside a front controlling web server.

Delivery Server is a dynamic web server component whose strengths are coarse-grained personalisation, dynamic behaviour, and system integration.  As it is housed within a Servlet Container, it is not the ideal location from which to serve static content (unless you wish to maintain a level of access control over that content).

Leveraging a front controlling web server facilitates an optimised site deployment, as web servers such as Microsoft's IIS or Apache's HTTP Server can deliver static content in an optimised way.  For example, it is easy to configure a far-future ‘Expires’ header on a given folder (and therefore its content) within either Apache or IIS, which promotes the caching of content in a user's browser and reduces page load times.  Another example is the use of the mature compression features within such web servers.  Although these examples can be achieved with some Servlet Containers, it is certainly not straightforward and doesn't necessarily make sense from an architectural perspective.
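
As an example of the former, a far-future Expires header takes only a few lines in Apache, assuming mod_expires is enabled (the directory path is purely illustrative):

<Directory "/var/www/static">
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
</Directory>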

It is for this architectural reason that best practice dictates we delegate only the relevant HTTP requests to Delivery Server.  In most cases, this means that Delivery Server is delegated requests for .htm and .xml resources.  The rest can be served from the front controlling web server (or better still, a CDN).

This article provides a high-level overview of what to set up.  Depending on feedback, I may post further posts on the details of each step.

Delegating Requests from the Web Server to Delivery Server

This step can be easily achieved using the Tomcat Connector for both IIS and Apache. To find out more see the Tomcat Connector documentation here: http://bit.ly/at1w8G.

This connector uses the Apache JServ Protocol (AJP), which connects to port 8009 on Tomcat by default and is optimised to reuse a single connection between the web server and the Delivery Server for many HTTP requests.  This makes it a better option than using reverse proxy functionality within the web server.

If we take a typical Delivery Server install (i.e. the reference install using Tomcat), a page can be accessed with something like the following URL:

http://<host>:8080/cps/rde/xchg/<project>/<xsl_stylesheet>/<resource>

where the resource could be any text-based file like index.html or action.xml.

The result of correctly installing the Tomcat Connector is that we can access that same resource through the web server on port 80, rather than going direct to the Tomcat instance on port 8080:

http://<host>/cps/rde/xchg/<project>/<xsl_stylesheet>/<resource>

Many confuse this step with URL rewriting or redirecting, as the Tomcat Connector is often called the Jakarta Redirector.  I therefore choose to differentiate by saying that this step delegates HTTP requests between the two systems and nothing more.

In every install, I have always used the defaults in the workers.properties file and just used the following rule in the uriworkermap.properties file:

/cps/*=wlb
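
For context, the sample workers.properties shipped with the connector defines wlb as a load-balancing worker backed by a single AJP worker, along these lines:

worker.list=wlb,jkstatus
worker.ajp13w.type=ajp13
worker.ajp13w.host=localhost
worker.ajp13w.port=8009
worker.wlb.type=lb
worker.wlb.balance_workers=ajp13w
worker.jkstatus.type=status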

URL Rewriting

With delegation in place, deciding which HTTP requests should be forwarded to Delivery Server is a simple matter of performing some URL rewrites.

As we have decided to use a mature web server, there are best-practice ways to achieve this.  For IIS6, HeliconTech (http://bit.ly/bgJEF6) created a very useful ISAPI filter that ports the widely adopted Apache mod_rewrite (http://bit.ly/cfvuLD) functionality, so the same rewrite rules can be used for both.  The following provides a couple of typical examples:

# Default landing page redirect
RewriteRule ^/$ /cps/rde/xchg/<project>/<xsl_stylesheet>/index.htm [L]
# Rewrite to delegate all *.html or *.htm HTTP requests to Delivery Server
RewriteRule ^/?.*/(.+\.html?)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]
# Rewrite to delegate all *.xml HTTP requests to Delivery Server
RewriteRule ^/?.*/(.+\.xml)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]

Those of you who are well versed in regular expressions will see that the last two rules could be combined, but I tend to leave them separate to aid readability.

The beauty of using regular expressions in this way is that you can also create useful SEO benefits for your site. Take, for example, the following rule:

RewriteRule ^/?.*/([0-9a-zA-Z_]+)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1.htm [L]

This rule maps a URL with many apparent subdirectories to the Delivery Server file.  This means that you can publish a page with a “virtual” path within the Management Server that appears to a browser (and to search engines) as something like the following:

http://<host>/this/is/a/descriptive/directory/structure/page.htm

and yet this maps to:

/cps/rde/xchg/<project>/<xsl_stylesheet>/page.htm

IIS7

Being a Microsoft product, IIS7 has some quirks with regard to rewriting (of course), which I explained in a previous post: http://bit.ly/lp6zW.

Summary

This approach has led to many successful installations where sites could additionally be optimised for SEO and page load.


The Integrity of Football

I, like many millions of viewers, watched the World Cup play-off second leg between France and Ireland, hoping for an exciting and entertaining game.  That it certainly was, with the web providing a medium for football fans worldwide to vent their anger about the way the French team qualified for the final 32.

In this game, I was fairly neutral, but I would be lying if I said I didn't cling to the romantic hope of Ireland making it to South Africa.  It had all looked against them after the first leg, but they dug out a great result in Paris over 90 minutes to force the game into extra time.

I don't want to discuss the handball itself as I think we need to accept that these occurrences will happen from time to time.  I'm more concerned with how individuals, teams, and countries will be incentivised to stop this happening.  I've been an admirer of Thierry Henry for many years as he has possessed exceptional talent that has been a joy to watch, which sometimes permits me to forgive his occasionally arrogant demeanour.  There have, however, been occasions where I felt he has been close to crossing the line, almost feigning fair play in retrospect.  This is unfortunately nothing more than an inkling, as opposed to something I can cite specific instances of, but as an avid follower of football over the years, you do come to form a perception of some players' personalities.  I have therefore lost much respect for the man since Wednesday, as his running exuberantly to the goal scorer Gallas said a lot about his level of true remorse.  I'm afraid that, from my view, the public show of apology by sitting with Richard Dunne at the end of the game came across as nothing more than a public relations exercise, made once he knew he had done wrong.  To say “It was handball but I'm not the referee” is nothing more than a cop out.

So, it's happened, and these things have happened for years.  What are FIFA going to do about it?

Well, although the Football Association of Ireland have requested a replay, it is unlikely to happen.  If FIFA refuse a replay in a situation where TV footage shows a wrongdoing, the player admits the infringement, and French fans are openly ashamed of the way in which their team progressed, then I would like to know what positives the world's leading governing body will take from this.  Will they stick their heads in the sand again?

I am critical of bad referees who simply don't man-manage players well, but I do not class referees who make mistakes as bad referees.  To me, these are referees who need better support.  For instance, would the extra official on the goal line being trialled in the Europa League have been able to spot the offence?  Most likely, as they stand on that side of the goal, only metres away from where Henry handled!

This will also raise questions about the use of video technology in the game, something that has been used very successfully in international rugby.  I recall having the same conversation with my parents and family friends as a 9-year-old in 1986 after the “Hand of God” incident, so why have we not progressed in 23 years?  Why would FIFA not want to use this?  Wasn't it rumoured that it was through video technology that Zidane got sent off in the last World Cup final?

It is all well and good stating that a replay would cause chaos, but not using this high-profile game as a reason to introduce greater support for referees is ignorant and negligent of their responsibility as a governing body.  They are scared that a precedent will be set, but I would argue that one needs to be set to avoid a repeat of such unsporting behaviour.  As mentioned before, rugby has introduced improvements to support referees, and cricket has too for many years, so why hasn't football?  We cannot simply go on placing more and more unfair responsibility on referees.  I strongly believe that this inactivity will only promote thick-skinned officials like Steve Bennett (see my earlier comment about poor man-management, as I believe Mr. Bennett is a first-class example of this) and fewer of the communicative and bold referees we see in international rugby, who gain belief and faith in their own judgement from the support they receive.

To finish off, what realistic options did the referee have on Wednesday?  Yes, he could have spoken with his assistant, but let's assume the assistant didn't notice the handball either (he may have still been considering whether he had failed to signal offside – a skill that sometimes requires chameleon eyes), and it did all happen rather quickly.  What options remain for referees in the modern game? Listen to the players or make an educated blind guess? Referees in the English Premier League who have taken players' body language and reactions into account in the past, and boldly (and correctly) changed decisions, have been punished, so what could a referee on an international stage do?!

In dead-ball situations, i.e. when the ball is resting in the back of the net, why oh why can we not use a video official to support referees?
