How will we preserve Twitter, Facebook or LinkedIn?

On Tuesday I attended a talk by Doron Swade at MOSI (the Museum of Science and Industry in Manchester) for the Computer Conservation Society.

Doron Swade is an engineer, historian and museum professional, internationally recognised as the authority on the life and work of Charles Babbage, the 19th-century English mathematician and computer pioneer. He was Senior Curator of Computing at the Science Museum, London, for fourteen years, during which time he masterminded the eighteen-year construction of the first Babbage calculating engine built to original 19th-century designs. The Engine was completed in 2002.

Doron was talking about the historical and cultural issues in the history of computing that he faced at the Computer History Museum in Silicon Valley, California, when something struck me.

The big kids on the block today are not the computers but the programs they run: the software. There hasn’t been any significant advancement in computing hardware for some time. However, the internet is changing the way we communicate and socialise, and somehow we will need to preserve it for historical interest.

But how on earth will we preserve software like Google, Twitter, Facebook or MySpace? The software that powers these sites is only part of the puzzle; it’s the content that makes them what they are. Terabytes of user-generated content. How can we preserve that, so that in 60 years we can look back with the same fondness with which we look back at the Manchester Baby, the UNIVAC or the IBM 360?

Once you have wrapped your head round that task: who will test such a system, and how will they ensure that it’s a true representation of what those sites look like today?

While I sat there among some of the early pioneers of British computing, who were gently dozing off, I wondered if one day I would be sat in that room while tomorrow’s Doron tells me about the problems faced in ranking the websites: Facebook before Twitter, or LinkedIn before MySpace?

If you are not thinking about performance, you are not thinking

Performance testing is a funny old thing, in that whenever the subject comes up, people get all hot and bothered about it. The thing that really tickles my fancy is when developers suddenly get righteous about testing!

Testers and developers have a totally different view of the world. The best testers I have worked with have a real need to dig into systems. Even with black-box testing they find a way to work out what a system does, way beyond its simple inputs and outputs. They can’t help themselves. It is almost as if they can’t pass Go if they don’t break the system; almost an addiction (or is that an affliction?).

Now that the developers find themselves writing unit tests, integration tests and acceptance tests they think that overnight they have learnt everything there is to know about testing, right? Wrong!

Yes, sure, a developer can write a test, but they often struggle with the intent of the test, and more so with non-functional testing like performance testing. Let me shake it down.

OK, so the business wants to monetise their existing data by presenting it in a new way, for example “Email Alerts”; you know the sort of thing. You create a search, and when your criteria are met you get sent an email.

The developer sits down to plan the performance testing and thinks about how the system works. In our example here, the system will fire the searches every night, when the database is relatively quiet, so that we don’t overload the system during peak hours.

So the developer thinks, OK, I’ll create a load of these “alerts” using SQL inserts, fire up the system and see how fast it can work through them.

They do just that and get back some statistics: the number of threads, the amount of memory the JVM consumed, how many connections to the DB were needed, how many searches were executed, how long it took to execute a search, that sort of thing. They call meetings and stroke their chins in a sage-like way. The figures look about right.
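The naive approach can be sketched in a few lines. This is only an illustration: it uses an in-memory SQLite database and an invented `alerts` table in place of the real schema, and a counter where the real search would run.

```python
import sqlite3
import time

# A minimal sketch of the naive load test: bulk-insert alerts directly,
# then time how fast the batch chews through them. The schema is invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE alerts (id INTEGER PRIMARY KEY, criteria TEXT)")

# Step 1: insert a pile of alerts in one go, the way a developer might.
db.executemany(
    "INSERT INTO alerts (criteria) VALUES (?)",
    [(f"query-{i}",) for i in range(10_000)],
)

# Step 2: fire the batch and measure throughput.
start = time.perf_counter()
processed = 0
for alert_id, criteria in db.execute("SELECT id, criteria FROM alerts"):
    # run_search(criteria) would go here; we just count.
    processed += 1
elapsed = time.perf_counter() - start

print(f"{processed} alerts in {elapsed:.3f}s")
```

In isolation these figures will always look about right, which is precisely the trap.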

But in real life the database would never have the alerts inserted into it in that way. It’s probable that users would be inserting data at the same time as it was being read out. Nor is the product likely to go live and have 100% take-up overnight. It’s more probable that take-up would be slower, perhaps taking weeks or months and never reaching 100%. Old alerts would be expiring, and some users would renew those while new ones are being created and others are being edited (change of email address and so on).
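That ramp-up can be modelled explicitly rather than hand-waved. A toy sketch follows; every number in it (target users, ramp length, weekly rates) is invented purely for illustration:

```python
# A rough model of realistic take-up: adoption ramps over weeks and never
# hits 100%, and each week mixes creates, renewals, edits and expiries
# instead of one big up-front insert. All rates here are illustrative.
def weekly_events(week, target_users=10_000, ramp_weeks=26, plateau=0.8):
    take_up = min(plateau, plateau * week / ramp_weeks)  # capped below 100%
    active = int(target_users * take_up)
    return {
        "active": active,
        "creates": int(active * 0.05),   # new alerts this week
        "renewals": int(active * 0.02),  # expiring alerts renewed
        "edits": int(active * 0.01),     # e.g. change of email address
        "expiries": int(active * 0.03),  # alerts left to lapse
    }

for week in (1, 13, 26, 52):
    print(week, weekly_events(week))
```

Feeding the batch job from a schedule like this, alongside concurrent user traffic, gets far closer to what production will actually see.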

The crux of the matter is mindset. The tester sits down and thinks: what could go wrong? What happens if the DB is unavailable at the time the batch job runs? What happens if the DB needs to be taken down for maintenance during a batch run; will the batch pick up where it left off? Can the batch job complete before the off-peak period comes to an end? Can the mail server handle the number of emails to be sent? What happens to email that bounces? In other words, the tester takes a step back and looks at the system holistically. Because a user doesn’t give a damn if your search engine can execute a query in 33 ms if they don’t get the email until 12 hours after it was relevant.
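The “pick up where it left off” question in particular is easy to state as code. Here is a minimal sketch of a checkpointed batch; `run_batch`, the checkpoint file and the simulated failure are all hypothetical stand-ins for the real job:

```python
import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "alert_batch.ckpt")
if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)  # start the demo from a clean slate

def load_checkpoint():
    """Return the id of the last successfully processed alert (0 if none)."""
    try:
        with open(CHECKPOINT) as f:
            return json.load(f)["last_id"]
    except FileNotFoundError:
        return 0

def save_checkpoint(last_id):
    with open(CHECKPOINT, "w") as f:
        json.dump({"last_id": last_id}, f)

def run_batch(alert_ids, fail_after=None):
    """Process alerts in id order, checkpointing after each one.

    fail_after simulates the DB being taken down mid-run."""
    done = []
    last = load_checkpoint()
    for i, aid in enumerate(a for a in sorted(alert_ids) if a > last):
        if fail_after is not None and i >= fail_after:
            raise RuntimeError("DB taken down for maintenance")
        done.append(aid)        # send_email(aid) would happen here
        save_checkpoint(aid)
    return done

ids = list(range(1, 11))
try:
    run_batch(ids, fail_after=4)   # first run dies after 4 alerts
except RuntimeError:
    pass
resumed = run_batch(ids)           # second run resumes where it left off
print(resumed)                     # only the remaining alerts, not all ten
```

A tester asks whether the real batch behaves like this, or whether an interrupted run re-sends four thousand emails on Monday morning.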

Now, on the current project, we have completely rewritten the platform: new version of Java, new styles of writing code, new infrastructure and so on. The search engine technology is the same; however, during the life of the project the API has been updated and a particular feature enabled. This feature allows us to search in a related way. Generally speaking it lets us do the Amazon-type thing (“People who search for X also search for Y”), but it comes at a cost: it takes longer to return the result set (of course it does; it’s got to do another search under the hood).

Again, during “testing” the developers just threw as much load as they could muster at the application. But guess what: now it’s live, the figures they got during testing don’t match, not even close.

It isn’t as if I hadn’t been bleating on about performance for months. I even printed out posters that read “If you are not thinking about performance, you are not thinking” and stuck them above the urinals in the toilets.

It’s only now that the team are in the spotlight (it’s so bright it burns) that they have come to us to ask for our help. Once again we are trying to polish a rough product instead of building the quality in from the start. Once again we can’t.

It doesn’t matter a damn that we went all-out Agile, TDD and XP if the thing doesn’t perform. The end user doesn’t care that we have continuous integration; they know it’s damn slow and they vote with their feet (or keyboards and mice).

Skills not Roles – Communities not Teams.

When I decided to put this blog together, part of the impetus was using it as a historical repository. A great many of my posts on the internet today date back to 1999. Ten years is a fairly long time in internet years, and looking back over those posts I can see how my understanding of different topics has evolved and how much I’ve learnt and grown. That said, it’s not going to work as a historical record if I don’t post anything, is it!

The reason I haven’t posted for quite some time is twofold: simply put, I’ve been too busy at work and too busy at home. That is to say, the blog has had to take a back seat. I’m sorry; I’ll try harder.

OK, on to the post proper.

So this post will differ in that I’m not going to discuss technologies like Twist, Selenium or WebDriver.

I want to talk about something that is crippling the project I’m currently working on.

For a project to be truly Agile and Lean it needs to be able to respond to the challenges we face daily in IT, and to overcome them without waste. So why then do we have to hand off tasks like deployments to another, non-Agile team? Moreover, the handover is done via an abhorrent “work-flow” tool that absolves whoever hands the work over of any responsibility for quality: “I’ve done my bit mate, it’s with team X now”.

I want to get rid of teams as we know and recognise them today and usher in communities. Yeah, sure, the name is a bit hippy-ish, but then so is the ideal. Skills, not roles. If someone within our delivery community has the necessary skills to deploy some code to a database or server, then why do we have to interface with an external team? If we have the capability, and we are responsible, what’s the problem?

OK, sure, the guys who look after the production systems want to achieve 99.999% uptime (26 seconds of downtime a month), and they are often targeted on this, so they become averse to change. After all, any change increases the risk of a failure. However, if we have tested the code not once, not twice, but umpteen times, and, more importantly, compiled it only once, and all previous deployments have gone without incident, you could be forgiven for thinking the deployment could be considered safe. A non-event. We should be able to deploy the code at 17:30 on a Friday afternoon and skip off home, safe in the knowledge that the site is up and running, humming along like a well-oiled machine.

However, those teams have become so averse to change, or to risk as they perceive it, that they actually start to display behaviours reminiscent of the 1970s trade-union shenanigans that plagued British industry: “You can’t do that mate, not your job. Not anyone can deploy code you know, oh no. Where would we be if just any old Tom, Dick or Harry could deploy code willy-nilly?”

As an Agile commune focused on delivering our project, we would share skills and socialise ideas. We need to create innovative environments that encourage people to try new things. This helps the members of the community, which in turn benefits the business. So not just anyone could deploy the code: only those people who had the skills and were responsible in the execution of their duties.

The more I think about this, the clearer it all becomes. I suddenly find myself questioning my own role as a “people manager” within such a community. After all, my role in its current shape would be wasteful. I should not manage the team (to be honest, that’s not my natural style); I should coach and mentor, not preach and set targets. I should lead by example, not by autocratic rule. As Alan Keith of Genentech said, “Leadership is ultimately about creating a way for people to contribute to making something extraordinary happen.”

I’ve run this idea past my peers. Older peers agree with me; they see it as a way to empower individuals, and therefore the communities they reside in. But younger, less wise peers are worried. “How will we administer pay grades?” they ask. “How will we hire people if we don’t have recognised roles?”

It’s really quite simple. Individuals are rewarded for the skills they have, not their ranking within a role. Why should an experienced tester with polyglot skills and several years’ domain knowledge be paid less than a BA with flaky knowledge of your technology platform? What, because business analysts traditionally earn more than testers? For that matter, why should a developer be paid more than a business analyst if the BA can also test? Two skills versus one. The current game is rigged, and it is demotivating.

Hiring is also easy. You want titles for your people? Call them analysts. Then all you need to do is hire analysts with the appropriate skills for your domain and your platform. Other companies call their staff “consultants”, and they hire consultants with the appropriate skills for the client they engage with.

Once you have a pool of multi-skilled analysts, it becomes easier to build a community with the right skills to deliver the project, instead of worrying about interfaces to external teams or a shortfall of a particular discipline within your community. You can select your community members based on their proven experience, their skills, their domain knowledge and the feedback they receive from the communities they have previously worked in. A community is unlikely to carry a lazy person who knows little about the domain and has few or poor skills.

Now, I’m not saying we don’t need sysadmins, DBAs, network engineers and so on. We need all those teams, and what they do is invaluable to the delivery of the project. But do those external teams need to perform what amount to mundane tasks for us? Shouldn’t they concentrate on what’s important to them: the stability and performance of their own area? Because, as it stands today, at the point we interface with those external teams for the execution of a task that could be carried out in-team by an appropriately skilled and responsible analyst, they become a blocker and they become wasteful. We sit twiddling our thumbs while we wait for those teams to follow their internal processes and use the infernal work-flow tool (whose only real purpose is to provide the business with yet more meaningless statistics).

When I have approached the external teams, I have found that they harbour a fear of “Agile”, and I think this fear is the real problem. They feel uncomfortable, anxious or inadequate. They worry that by allowing us to be responsible for our own actions they will be allowing themselves to be exploited, and that denying us helps protect their rights as individuals.

The business feels the same way. Despite the corporate line being “we are Agile, lean, innovative…”, they fear change to the point that they have implemented a change-management process and a change manager, and recently said we have to use SharePoint (ffs). But what the business hasn’t realised is that, as a tester, I am risk-averse (no, really, it’s a curse), so we are actively baking quality into our products and continuously inspecting them for that quality, through unit testing, integration testing and acceptance testing.

It will be incredibly hard for the external teams to let go, especially while the business is frozen with fear, but in the future they will have to. They will have to, or we may as well pack up and go back to PRINCE2. Not while there is breath left in my body…
