Life after ThoughtWorks Part Deux

Last year I wrote a blog post, Life After ThoughtWorks, and this year I was reminded of it by a (now ex) ThoughtWorker, Chris Read.

I re-read the old post and marvelled at how far I had come in twelve months, and at how much my attitude towards TW has changed.

Now, twelve months on from that post, I’m older for sure, and more knowledgeable too, but I’m still passionate about Agile software delivery. So passionate, in fact, that it often leaks out into my daily life; I have jokingly called myself evangelical about it. I’m often challenged by people who have had a less-than-optimal experience with Agile as a software delivery methodology, and sometimes I can feel their hate and anger for this stupid thing called Agile that everyone is talking about.

So, first up, an apology to a good friend of mine, Keith Henry. Keith was talking to me about this weird and wonderful stuff called Agile at least six years ago, and I dismissed him as a nutter, while he just nodded and grinned; he knew I would succumb eventually, and then he could (rightfully) say “I told you so”. Keith, I’m sorry; it was I who was the nutter.

Next up, a thank you to several ThoughtWorkers who have changed my perceptions, shared their wealth of knowledge with me, and listened to my many, many criticisms, my scepticism and my pessimism. They taught me many things (often through observation rather than directly), so, in no particular order:

Luke Barrett – for many things, the consummate consultant and voice of reason.
Graham “Wookie” Brooks – for teaching me that you can teach an old dog new tricks.
Chris “Dude” Read – for the filth and the energy.
Tom “Boobies” Scott – for teaching me to get my game face on, and never high 5 in public.
Richard “ffs” Fillippi – for the “Victor Kiam” moment, classic.
Sam Newman – don’t over-promise and under-deliver.
Jo Cranford – for making nice story cards and making Coke come out of Chris Read’s nose.
Pat Kua – Agile and performance testing can work.

I worked with many, many more ThoughtWorkers, but those above are the few who have shaped who I am twelve months on.

The Big Question – Are we coping?

So, TW have rolled off; how’s it all hanging, are we still Agile? I think we mostly are. There have been a few times where the edges became a little frayed, but in the past we might have stood by and watched as the fabric fell apart. This time we could tell that we needed to act, and we were able to put in place a fix that was secure enough to have the required longevity, yet light enough not to deplete us of time and resources. And of course, we left the repair until the last responsible moment.

Our big challenge is not staying Agile, but keeping one team that is distributed across two locations in sync. More than that, keeping the synergy. That is not a software delivery problem; that’s a people problem.

Painting over the rust – Preparation, preparation, preparation.

I thought I would follow on from my post Painting Over the Rust.
It would be too easy for me just to say “hey, our business people don’t get Agile, boo hoo”. So I thought I would try to address the issue by thinking about how I felt when I was first introduced to Agile. Thinking about it this way allowed me to take a step back and understand why they don’t get it.

1. Not all requirements are needed up front.
This one’s a biggie. I wrestled with it for ages, but then I would: I’m a tester and I need those requirements to do my job; after all, the V-Model is nothing without up-front requirements, right?
How did I overcome this? Easily (for me at least): in our old way of working, even when the requirements were available early, they often didn’t reflect what was actually delivered. This is software delivery, after all; we expect the customer to change their mind during the life cycle. So if I’m used to finding that the requirements documentation doesn’t match what’s delivered into test, why am I so worked up now? Oh wait, I’m not after all. It just felt uncomfortable because it was new to me.

2. Story points are a guess, but the deadline is still firm (if not impossible).
This one is a bit of a paradox. Both are true. During estimation we size the stories with T-shirt sizes or points. The point or size at that stage is no more than an informed, educated guess. Again, I had a hard time with this, until I realised that it is just a device that allows us humans to gauge the relative size of the stories. Wrap your head around the simplicity of that device, and you’re home free. Sure, it’s a guesstimate, but we can still meet the deadline, knowing how much work is in there. Once you get the fact that it’s just about relative sizing, and not a unit of currency for measuring performance, it becomes second nature. It’s all about making sure the team gets the same amount of work passed down the conveyor belt, and that they don’t get under- or over-utilised. Once you know they can knock out two 4-pointers and a 6 per iteration, you know your capacity (well, mostly).
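
A tiny illustration of that arithmetic (a sketch of my own; the numbers are made up, and points only have to be consistent relative to each other for this to work):

// Known historical velocity: e.g. two 4-pointers and a 6 per iteration.
const historicalVelocity = 14;
const proposedStories = [4, 6, 2, 4]; // relative sizes agreed at estimation

const committed = proposedStories.reduce((sum, points) => sum + points, 0);
console.log(committed <= historicalVelocity
  ? 'Fits what the team has historically delivered'
  : 'Over capacity: defer something to a later iteration');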

3. Tasks do not get assigned at iteration kick-off.
I didn’t struggle with this too much, given that previously a task could sit with someone who didn’t have the bandwidth to complete it for a very long time. One important principle is leaving things until “the last responsible moment”. By not assigning tasks we can exploit the flexibility that the last responsible moment affords. Also, during iteration planning, the team can agree on what tasks make up a story and perhaps identify the required SMEs. The whole team takes responsibility for the promises the team made. We will deliver value.

4. An increase in speed comes from maximising quality.
It took me a while to fully understand the power of this. If the team gets it right first time, it spends less time fixing broken code or misunderstood/poorly captured requirements. I’m sure some Kanbaners out there will be nodding sagely. This is one of the core principles that Toyota follow; they claim to produce one brand-new, fully tested, inspected and passed vehicle in the time other manufacturers spend snag-fixing at the end of their lines.
Traditionally in software delivery, code fixes are not time-boxed, and a defect can loop through QA and DEV several times before it’s fixed. One pair can sink a load of time into fixing the defect while QA source the effort around regression. It’s a massive waste. Enforce a zero-defect policy; get the business to sign up to it, get buy-in from the operational functions (helpdesk, system administrators etc.) and say goodbye to the defect backlog. Get it right first time and watch throughput soar. Write your tests, then your code, then apply your design.
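
That last sentence is the rhythm of test-driven development. A minimal, generic sketch of it (the example and its names are purely illustrative, not anything from the project):

import assert from 'node:assert';

// 1. The test is written first, and fails, because totalPoints doesn't exist yet.
function testTotalPoints(): void {
  assert.strictEqual(totalPoints([4, 4, 6]), 14);
}

// 2. Then just enough code is written to make the test pass.
function totalPoints(sizes: number[]): number {
  return sizes.reduce((sum, size) => sum + size, 0);
}

// 3. Only once the test is green do you refactor and apply the design.
testTotalPoints();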

In Summary

It was difficult for me to adjust to being a tester in an Agile world, and it took me time. I learnt through an apprenticeship with ThoughtWorks. They may have their own flavour of Agile, but regardless, it’s not something you can learn from a book or on a course. Much of what I learnt about Agile is really about mindset and self-discipline.

Several times along the way I thought: this is rubbish, this is causing pain, this doesn’t work. But hey, that’s Agile, right? If it doesn’t work for you, you can stop, change, and monitor for improvement. Once you appreciate that, you become empowered. You can leverage the power of one.

But we had left our business behind. We (the software delivery team) decided we had to be Agile to deliver a project, without considering the impact on the business. Worse than that, we undertook an apprenticeship with ThoughtWorks and expected the business to understand story points, velocity, waste, Lean, eXtreme Programming… the list goes on.

If you haven’t taken the time to bring your business along for the ride, you can’t expect them just to get it.
If you use enough filler, you can paint over the rust.

Painting Over the Rust

Having Story Cards and Stand-Ups doesn’t make you Agile.

It is with a heavy heart that I watch the team descend into the latest form of chaos. Looking down on it now, I can see how we arrived here. One of the developers used the phrase “painting over the rust” to describe his dissatisfaction with the architectural decisions being made early on in the project. He was right; what he didn’t know was that he was right on a number of levels.

I’ve struggled for some time with not being able to intelligently articulate exactly what Agile software delivery is. I hear lots of people repeating the same old mantras, but for me they lack depth. Reciting the Agile manifesto to my grandmother isn’t going to help her understand what Agile software delivery is. I reason that if I don’t feel able to describe it to my grandmother, how could I ever manage to explain it to the upper echelons of the executive?

It’s not Agile, it’s fragile.

So the team has a story wall, and swim lanes and cards. Pretty standard stuff (actually the wall is further segmented by geographical location, because the team is split across two sites). Estimation is done planning-poker style, and points are points, pink elephants if you like, not days or elapsed time (although we have worked out how much a point costs).

The team knows how many points we can get through the production line, and is comfortable managing the loss in throughput from context switching, working on new technologies, and so on.

We have a team made up of DEV pairs, QAs and BAs, and they understand what Agile software delivery is. They understand why we pair program. The developers are using TDD, and we automate as much as we can using Continuous Integration. We have stand-ups, retrospectives, iteration kick-off meetings and showcases. We have brown bags and we use collaboration tools. We, as a software delivery team, have made the paradigm shift. But it’s still all buggered. Why? Effectively, all we have done is paint over the rust.
The rust, in our case, is (unfortunately) the executive. They don’t get it. Worse yet, I don’t think the product owners get it either, and in some cases I even wonder if the project managers get it. So what can we do?

We could shield the business from our internal workings. Let them treat us like a black box: they request function x, and we return a cost and an estimated delivery date. That way, they don’t have to get it, they don’t have to understand it, nothing changes for them, and the delivery team carries on regardless. They then don’t need to learn about points or swim lanes or story cards or any other aspect of Agile. But the rust is still there, under the shiny new Agile paint; it’s not going to go away, and left untreated it could get worse.

In the real world, there are only two real ways to treat rust: cut it out, or blast it away. You can then replace the resultant hole with shiny new metal. Once the new metal is in place, it’s a good idea to apply a liberal coating of rust inhibitor.

DNS pre-fetching

DNS pre-fetching is something I have been looking into, but as far as in-browser DNS pre-fetch tags go, it’s still a dream…

<link rel="dns-prefetch" href="http://working-thought.blogger.com">

Mozilla- and Chromium-based browsers should be able to perform DNS pre-fetching to reduce the overall page load time; however, testing this out doesn’t match the published behaviour.

  • MSIE simply does not pre-fetch, as expected; perhaps IE9 will?
  • Google Chrome does pre-fetch, and it does seem to do it asynchronously (as per the published behaviour); not surprising, see below.
  • Safari does not pre-fetch.
  • Firefox should be able to pre-fetch (see here) but I couldn’t make it go.
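
You don’t have to hand-write the tags, either. A small sketch (my own illustration; the host names are hypothetical) of adding the same hint from script, so supporting browsers can resolve third-party names before the page actually requests anything from them:

// Inject a <link rel="dns-prefetch"> element for each third-party host the page will use.
const thirdPartyHosts = ['ads.example.net', 'cdn.example.com']; // hypothetical hosts

for (const host of thirdPartyHosts) {
  const link = document.createElement('link');
  link.rel = 'dns-prefetch';
  link.href = '//' + host;
  document.head.appendChild(link);
}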

Meanwhile, Google have reinvented the DNS server with one that incorporates DNS pre-fetching.

Normally, when the ISP’s DNS server goes back to the authoritative host to get a DNS record, it just caches it without inspecting the TTL on the record. For our site the TTL is 300 seconds, which is deliberately quite short. The next time a user requests the record from their ISP’s DNS server, only then does it inspect the TTL, and if it has expired it goes back to the authoritative host.

Google found that their Googlebot was spending more time looking up DNS records than it was spidering pages. So they made their DNS server inspect the TTL of the record and re-fetch it before it expires, so the cache is kept hot.
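
The idea itself is simple enough to sketch. This is not Google’s implementation, just a minimal illustration (assuming Node.js and its dns/promises module) of keeping one record hot by re-resolving it shortly before its TTL runs out:

import { Resolver } from 'node:dns/promises';

const resolver = new Resolver();
const cache = new Map<string, string[]>();

async function keepHot(hostname: string): Promise<void> {
  // Ask for the TTL along with the addresses.
  const records = await resolver.resolve4(hostname, { ttl: true });
  cache.set(hostname, records.map((r) => r.address));

  // The shortest TTL in the answer decides when to refresh; go back a little early.
  const ttlSeconds = Math.min(...records.map((r) => r.ttl));
  setTimeout(() => keepHot(hostname), Math.max(ttlSeconds - 5, 1) * 1000);
}

keepHot('working-thought.blogger.com').catch(console.error); // hostname from the tag above, used purely as an example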

If you give yourself a tight target like a “1 second homepage”, you may find 700ms being wasted on DNS resolution. If that’s the case, give the Google DNS server a go: it’s available on 8.8.8.8 and 8.8.4.4, and it’s quite shocking the difference it can make.

Stuart

No liars please, we’re testers.

I’ve been prompted to write this after sifting through another load of dross CVs. I say another because I was recruiting heavily last year: a recruitment drive in which it took me twelve months to find just two test analysts. Hold that thought.

It’s not that I didn’t get many applicants the first time round; I did, hundreds in fact (quite literally), and I duly read, scored and gave feedback on every CV personally. Not only that, I also had two of my peers review the CVs. You can’t ask for more than that. If two out of three agreed, the decision was final.

The reason it took a year was down to the quality of the CVs. At least half of the CVs we read didn’t state the daily tasks that you would expect a career tester to be doing. Only if you were playing at being a tester would you make that mistake. Unfortunately, sometimes the CV would tick all the boxes and we would invite the applicant in for interview. Imagine their horror when they were faced with a simple exercise to test their SQL skills, even though on their CV they had waxed lyrical about how they practically spoke to their friends in SQL because they had used it so much they were fluent.

“So, I have a table called customer, with the fields ID, Name and Address. How would I get all of the records from the table?” You should see some of the answers. It’s often hard for me not to shout “just give up, I know you haven’t got a bloody clue, despite what it says on your CV” as they muddle on: “GET ALL RECORDS FROM TABLE WHERE NAME == CUSTOMERS AND ID….”

Imagine, also, them recoiling in their seats when I ask them to draw out the V-model and label it.
“What’s that you say, you don’t know how? But here on your CV you state that you are a senior test analyst and have an ISEB Foundation certificate; how do you not know the V-model?” They shuffle their feet and mumble about how they studied at home. Well, not actually studied so much as bought a how-to-pass guide full of example questions.

I watch in wonder as their faces contort when I ask them, “so what is the difference between white-box testing and black-box testing?” I let them fumble through telling me how they have used both of those “methodologies”, and I follow up with “can you give me some examples of where you white-box tested?”

Then come the questions on web testing (it’s what we do, after all). “So, what’s a cookie?” I ask. They smile; easy, they think: “it’s a virus you get from visiting sex sites”. Oh my! What should I do as a tester? “Never ever accept cookies, they track all your movements, like little spies in your computer.”
I follow with a simple exercise about shopping carts and sessions, to see if the candidate understands why a cookie may be important here. “The system gets all that info from the cookie.” But how did it get into the cookie? “From the internet.” Can I see it? I want to see my cookie. “Oh no, you can’t see them, they are secret.”

I have even had to terminate several interviews because it became apparent very quickly that the applicant in the chair didn’t actually know what was on their CV because they had just copied it off a (delete as appropriate) friend / colleague / LinkedIn profile.

It was so bad in the past that we set up an online quiz. It was very simple: multiple choice, some questions around testing, some around our domain. Some were very easy questions, “Which of the following is a search engine?”, with an obvious answer, “Google”. We discovered a side effect of an easy question like that: we could see how fast the candidate answered a question they knew straight off the cuff (about 9 seconds for that one) and compare it to a testing-related question that took them 3 minutes to answer (did they have to search for the answer?). The test was very easy for a career web tester, but not so easy for an IT support person or BA, or even a developer who fancied a move into testing. Its only real purpose was to filter out the complete time-wasters.

So here I am again: I’m hiring, I’m inundated with CVs, and again 50% are pure wasted bandwidth (I don’t give them the luxury of printing them out). But this time I don’t have the online test, and I’m gnashing my teeth at some of the unbelievable stuff in these CVs. Some of them read like horrible blog postings: “on this job we had this challenge and so we had to X because of Y but then Z happened and so we used plan A…” blah blah blah “then the business wanted B but I wrote the very detailed spec of C” bletch grrr spit pfftt. It’s all I can do to stop myself posting these fetid monologues online for no other reason than ridicule, and I hate myself for it.

So, being faced with the prospect of interviewing a load of (let’s be blunt here) bullshit artists, only to show them the door at the end of it, isn’t something I’m overjoyed about. I don’t want to spend two hours of my life demonstrating why the candidate is a liar. I don’t want to be associated with these bottom feeders in a professional sense either. I loathe them, and I loathe the arseholes that gave them a “consultant” badge at Logica (or any other faceless body shop), because now they think they are God’s gift and we should roll out the red carpet for them.

I will continue to sift through the dross; the cream always floats, and that’s what I’m after: the cream, the crème de la crème.

So if you are interviewing a tester who tells you that you gave them a much easier ride than a previous interview they attended, you know I rejected them, and you may want to make use of that probationary period.

But if you’re a tester whose CV isn’t straight up and down, you may want to rethink applying for a job with me.

Oh, and by the way, don’t put “I have a keen eye for attention to detail” and then litter your CV with spelling mistakes, poor grammar and mixed styling!

Hokey Cokey or Hocus Pocus

Back in September 2007 we released a new version of our search application.
The new version was a step change for us. At that time we were powering the core of our search offering with an Oracle database and a Java application that returned flat HTML. It was all very Web 1.0, and we had begun to see issues with the performance of the site; we discovered that throwing 8 more servers into the Oracle grid didn’t give 8x more power. We took the Oracle database out of the mix and brought in Endeca search.

The Endeca API allowed us to show visitors how many of the things they were searching for were available before they submitted the search form. For example, if you were searching for a BMW 5-Series, the fuel-type drop-down on the search form would list the number available next to each option [Petrol (5), LPG (2)]. So, a big change from the “build your search, submit it and hope it returns results” model we had previously used. To make this feature work we had to use Ajax, or more specifically JSON, so that as the user changed their criteria the relevant drop-downs were updated without refreshing the form. So, like I said, a step change for the front end, the back end and user behaviour.
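
To make the mechanics concrete, here is a hypothetical sketch of the idea (this is not Endeca’s actual API or our real response format; the endpoint and field names are made up): as a criterion changes, fetch the counts for the remaining options and rewrite the drop-down labels in place.

interface FacetCounts {
  fuelType: Record<string, number>; // e.g. { Petrol: 5, LPG: 2 }
}

async function refreshFuelTypeCounts(model: string): Promise<void> {
  const response = await fetch('/search/counts?model=' + encodeURIComponent(model)); // illustrative URL
  const counts: FacetCounts = await response.json();

  const dropDown = document.querySelector<HTMLSelectElement>('#fuelType');
  if (!dropDown) return;

  for (const option of Array.from(dropDown.options)) {
    const count = counts.fuelType[option.value] ?? 0;
    option.text = option.value + ' (' + count + ')';
    option.disabled = count === 0; // grey out options the back end has contracted away
  }
}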

The new version was released in stages, inviting visitors to the site to try the new version. This tactic has its own associated problems (for example, only a certain type of person will follow a “try our new X” link, so your new application doesn’t get exposure to a good representation of your audience). Once the visitors had interacted with the new search form, we invited them to give us some feedback, so that we could improve on what we had done. Below is a selection of that feedback:

It crashed 5 times and slow.
Takes longer, too complicated, should be kept simple
Too slow!!
Not as easy to use
Very slow to bring up menus, Spent time waiting.
It doesn’t work – my searches keep spontaneously disapearing (Cars)
is slow. maybe is my broadband problem.
I don’t want to know how many cars in the uk, I just want to know how many in my local area
It’s silly to have to click ‘show your results’ it was better on the previous version where it showed the results.
Too slow in uploads.
More criteria = more time.
Too many details to put in .
More options, as not 100% encyclopaedic knowledge of cars, the sub model option was difficult .


So, pretty damning stuff. But something didn’t make any sense. We had rigorously tested the performance of the system and were confident that it was faster than the old system. The market-leading browser back then was IE6, and given that we had engineered and built it for IE6, it positively flew in Opera or Firefox. So we were perplexed. That is, until we did some usability testing (I won’t discuss the fact that the usability testing was too late in the project to be really beneficial).

The usability testing did allow us to understand why we got so many “slow” comments in the feedback. Faced with all the feedback the new search form gave them as they refined their search, users believed two things: 1. that they had to fill in all of the options, and 2. that they couldn’t interact with the form until the animated counters stopped moving.

Manufacturer, Model, Variant, Trim, Colour, Fuel Type, Mileage, Age, Min Price, Max Price, Distance from the visitor. As the user slowly changed each of the drop-down controls on the search form, some options would become unavailable (greyed out). This was because the back end had contracted them out of the possible results. If no red BMWs were available, the colour Red would not be available for choice on the drop-down control. So the user would change, say, model to 3-Series and find there wasn’t any Red available on the drop-down, so they would back up and change 3-Series to 5-Series, and so on. They didn’t realise you could just search for all the red cars within 20 miles of their house and drill down from there. To some extent they still don’t, two years on.

It reminds me a little of when I was working on a project with BT and the then-new System-X exchanges. The switches could support loads of (then) new features (things we take for granted today, like 1471 in the UK). Being a geek, I was amazed at what I could do with a DTMF (touch-tone) phone, and went out immediately and bought one. The next day I asked why BT hadn’t publicised any of the features and capabilities. Their response was immediate, dry and serious: “Our users won’t understand them”. I can still remember how I felt, almost like I had stumbled into some great conspiracy. BT wanted to keep people in the dark, and protect them from the nasty technology that might confuse them.
It was several years later that I received a booklet with my phone bill explaining the features and how to access them. Having used the features for some time by that point, I had great difficulty understanding the booklet. Maybe BT were right; maybe it was all too confusing.

Fast forward to now, and my current project. Again, another release and another step change. This time the look and feel of the site has been overhauled. The back end is still Endeca-powered, but the Java app has been completely rewritten. And in rewriting the application we have taken the opportunity to bake testing in from the start. The JavaScript, cascading style sheets and HTML are all tested automatically. Regression should be a thing of the past (but that’s another blog post); the application has unit testing, functional and non-functional testing applied at every check-in. The functional testing has been expanded into “User Journey” testing, in which likely user scenarios are played out. All of this happens automatically before the application reaches QA. Then the QA team goes to town, with time to exercise their real skill: exploratory testing. So there you have it: never in the history of our company has a product been so well inspected. So we felt pretty confident when we were ready for Beta.
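
To give a flavour, a user-journey check can be scripted something like this minimal sketch (selenium-webdriver is used here purely for illustration, and the URL, field names and selectors are all hypothetical; it is not necessarily what we used):

import { Builder, By, until } from 'selenium-webdriver';

async function usedCarSearchJourney(): Promise<void> {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    // Play out a likely visitor scenario end to end, exactly as a user would.
    await driver.get('https://beta.example.com/used-cars'); // hypothetical beta URL
    await driver.findElement(By.name('postcode')).sendKeys('SW1A 1AA');
    await driver.findElement(By.name('keywords')).sendKeys('BMW 5-Series');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // The journey passes if results appear within a reasonable time.
    await driver.wait(until.elementLocated(By.css('.search-results')), 10000);
  } finally {
    await driver.quit();
  }
}

usedCarSearchJourney().catch(console.error);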

This time round, instead of inviting users to try out our new site, we employed A/B testing. 5% of our traffic was diverted to the new site and, once again, users were invited to leave feedback. I took the opportunity to set up a Google Alert to spot the use of the beta URL in forum or blog posts, so I could keep track of what the community was saying.
Once again the feedback came in…

The used car search, the old one is much clearer to use and a lot better, . The new “improved” one is poor.
Preffered old site looked more proffesional and was easier to use.
The search criteria should be your main focus and keep that in a clear box format like your old site and allow people to search quickly but also as specifically as they want.
The old site is much better the new site is more complicated to use in the end I shut it down and went on to ebay please change back.
It looks much better than the previous website, but since I dont live in UK, I usually have to copy and paste the London postcode from the FAQ page. Unfortunately, I cannot find the page.
Bad design. Not as easy to use and selct options, not as clear and concise. the old one was perfect.

Erm, what? The old one was better? Perfect? Now we are confused.

So again we tackle the perceived issues of our users. We keep seeing comments about missing images, and we start pulling apart the application, the infrastructure and the network. It turns out it was an ad blocker that has decided that the way we format our image URLs (cache busting) makes them look like adverts, and blocks them.
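
For illustration, a cache-busting image URL is usually just the image path with a release token appended (the parameter name and format below are hypothetical, not our actual scheme); it forces a fresh fetch after each deploy, but pattern-matching ad blockers can mistake it for an ad-serving URL:

function cacheBustedUrl(path: string, releaseId: string): string {
  // e.g. '/images/stock/12345.jpg' -> '/images/stock/12345.jpg?rel=2009.11.3'
  const separator = path.includes('?') ? '&' : '?';
  return path + separator + 'rel=' + encodeURIComponent(releaseId);
}

console.log(cacheBustedUrl('/images/stock/12345.jpg', '2009.11.3'));
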
People complain of slow loading times, so I begin to conduct some testing around that. I conclude they may be right, so we engage with Gomez to find out for sure. Gomez shows something alarming: users on a half-decent (2Mb and above) broadband connection will get a decent experience; people on anything less are going to be pulling their hair out. The Digital Britain report suggests that most of the UK has 3Mb broadband, so do our users just have slow connections? Regardless, I have begun some work on improving the perceived page load times, and will roll those requirements into cross-cutting requirements in the same way as we do for SEO and DDA compliance. We are going to lighten the page weight and strip out the heavy jQuery that is only used to titillate. We are going to build our own analytics into the front end that will allow us to see in real time what the users experience (current render times etc.), and we are moving some of the content so that it resides under a new host, allowing it to be fetched in parallel by the browser. All of this should help the users with slow connections.
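
As a sketch of the in-page timing idea (not our actual implementation; it assumes a browser with the Navigation Timing API and a hypothetical reporting endpoint):

window.addEventListener('load', () => {
  // loadEventEnd is only populated after the load handler finishes, so defer the read.
  setTimeout(() => {
    const t = performance.timing;
    const renderMs = t.domContentLoadedEventEnd - t.navigationStart;
    const loadMs = t.loadEventEnd - t.navigationStart;

    // Fire-and-forget image beacon back to our own analytics endpoint (hypothetical path).
    new Image().src = '/analytics/timing?render=' + renderMs + '&load=' + loadMs;
  }, 0);
});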

But what about the “it crashes my browser” comment? Our in-page analytics trap JavaScript errors and report them. And while our users suffer at the hands of errant JavaScript squirted into the page by third-party advert traffickers, our own code is solid. So what’s this crash?
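
Trapping those errors is typically done with window.onerror; a minimal sketch (the reporting endpoint is hypothetical):

window.onerror = (message, source, lineno) => {
  const details = [message, source ?? 'unknown', lineno ?? 0]
    .map((part) => encodeURIComponent(String(part)))
    .join('|');
  new Image().src = '/analytics/jserror?e=' + details; // hypothetical endpoint
  return false; // let the browser's default error handling carry on
};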

We contacted a customer who had left his details and asked him if he could walk us through the “crash” while we followed along, step by step, in the office. At the point where he claimed his browser had crashed, we were watching the light box “melt away”, something we had designed in. His expectation was that the light box would work like a tab, and that he could tab between the photos and the detailed specification of the vehicle, not melt away to the bottom of the screen. So now we will remove the animations on the light boxes (and other objects).

What have I learnt?

Three things:

1. Next project, I’m running the usability testing, with real scenarios and everything.
2. Perceived performance is more damaging than actual performance.
3. BT may have been right…