Paperless bathrooms, completely automated software testing, and other myths

In the 1970s much attention was given to the concept of the coming ‘paperless office’. Experts predicted that with the emergence of the computer, people would simply use digital displays and paper would be a thing of the past.

Instead, between 1980 and 2000, the amount of paper used commercially doubled, facilitated by the rise of the personal computer and associated printers. Observations were made that the paperless office was about as likely as the paperless bathroom.

Our thirst for technology is often driven by a desire to automate and streamline so we can make things easier and cheaper. Pressure on CIOs and other business managers to streamline existing processes and introduce innovation is intense, and that pressure flows through to the speed and flexibility expected of software testing and the release of new technology.

Automated software testing, where tests are run by software tools without manual intervention, can fall prey to the same instinct. There can be a headlong rush to automate the testing process, with too little attention paid to achieving quality outcomes for software projects.

The complete testing solution?

The attraction of automated testing is obvious.

Successfully implemented, automated testing can reduce the testing effort and therefore the cost of this part of the software development process. It also contributes to increased speed of delivery on projects. Quality can be improved, as software tests can be reused and repeated in exactly the same manner over and over again.

Return on investment in automated testing can be significant, but unfortunately it has been oversold, like so many other technologies, as a ‘paradigm shift’, something that can be applied to almost any testing task. Tools for automation are often sold without the recognition that they only play a small part in the overall testing process.

There is a flawed ‘manual testing versus automated testing’ argument, when in most projects both techniques need to be applied to get the right result.

Creating chaos automatically

Like many areas of business, automated testing works well when very predictable, repeatable tasks are involved. Once there is variation, in terms of human interaction with a system, multiple platforms, interfaces to other systems, different vendors and so on, it becomes more problematic. Applied incorrectly, test automation can be an expensive mistake. It has a failure rate similar to that of software development projects.

Automated tests can report failures constantly, actually slowing down the testing process and testing the faith of the project team. The time saved in running tests is then lost to reviewing and resolving error reports.

The problem is exacerbated by commercial tools providers who claim more than they can deliver. There are some excellent tools available, but they have to be applied judiciously. These tools are not as easy to use as often claimed, requiring training, consulting and expensive contracting staff to use effectively, let alone significant licence fees to begin with.

Organisations can become ‘locked-in’ to a tool in which they have invested considerable time and effort, and become focussed on using it no matter what. A better approach is to look at the whole project and make objective judgements about what testing approaches are best applied.

When to apply automated testing?

Instead of considering automated testing against manual approaches, it is best to understand where it can fit into a much bigger picture. Automated testing should be seen as complementing the human intelligence inherent in manual testing, not replacing it.

Organisations can be too focussed on the labour cost savings of automated testing, rather than its total cost of ownership. Automated testing is not a cheaper alternative; it is better seen as one weapon in the testing armoury. It can deliver return on investment when used in the right area.

When appropriately controlled, automated testing can be usefully applied in areas like:

  • Regression testing: in a well-structured development process automation can improve productivity significantly. It is especially valuable in agile development projects.
  • Randomised testing: if you want to ‘attack’ a system with a large number of random interactions, or big datasets.
  • Capacity assessment: simulating large numbers of users concurrently accessing a system.
  • Structural testing: checking whether messages between different parts of a system are being sent accurately and producing the right results.
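To make the first of these areas concrete, a regression suite at its simplest is a set of recorded input/output pairs replayed against the system after every change. Here is a minimal sketch in Python; the `calculate_discount` function and its cases are hypothetical stand-ins for real application logic and known-good behaviour.

```python
# Hypothetical function standing in for real application logic under test.
def calculate_discount(order_total: float, is_member: bool) -> float:
    rate = 0.10 if is_member else 0.0
    if order_total >= 100:
        rate += 0.05
    return round(order_total * rate, 2)

# Each case is (inputs, expected result), captured from known-good behaviour.
REGRESSION_CASES = [
    ((50.0, False), 0.0),
    ((50.0, True), 5.0),
    ((200.0, False), 10.0),
    ((200.0, True), 30.0),
]

def run_regression_suite():
    """Re-run every recorded case and collect any behaviour drift."""
    failures = []
    for (total, member), expected in REGRESSION_CASES:
        actual = calculate_discount(total, member)
        if actual != expected:
            failures.append((total, member, expected, actual))
    return failures
```

Because the cases are data, the suite can be re-run identically after every code change, which is exactly the repeatability that makes regression testing such a good fit for automation.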

An example

Qual IT was involved in a mid-sized software development project at a New Zealand organisation. We automated the testing of how different parts of the system communicated with each other, that is, the very precise messages different parts of a computer system exchange in order to function properly. We could run 2,500 tests in 6 hours using an automated approach, giving us a high level of productivity and a high-quality result. The user interface for this application was tested manually, as it is much harder to automate user interface testing with all the variations that human involvement with a system creates.
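The kind of inter-component check described above can be sketched as a contract test: one side builds a message, and the test verifies it matches the structure the receiving side expects. The message format and field names below are hypothetical, purely for illustration.

```python
import json

# Hypothetical stand-in for the sending component's serialiser.
def build_order_message(order_id: int, amount: float) -> str:
    return json.dumps({"order_id": order_id, "amount": amount, "currency": "NZD"})

# The agreed contract: required fields and their expected types.
REQUIRED_FIELDS = {"order_id": int, "amount": float, "currency": str}

def check_message_contract(raw: str):
    """Verify field presence and types against the agreed contract."""
    message = json.loads(raw)
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in message:
            errors.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors
```

Checks like this involve no human interaction at all, which is why thousands of them can run unattended in a few hours.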

The outcome was a system that has run for 18 months, with only one bug found post release.

The value of independence

One of the biggest issues in deciding how to apply automated testing is the IT industry itself. Like ‘miraculous’, ‘paradigm shifting’ developments in the past, the industry tends to overstate what can be achieved, and underestimate the value of technical knowledge and skills.

Having someone who takes an independent view of testing tools is important in your evaluation of automated testing. Tools range from expensive, high-end offerings from large, reputable vendors to the free, open source variety. Every tool is different and has different limitations, which you need to understand before applying it.

Having invested in expensive tools, organisations can become blind to these limitations and apply them ineffectively. Being tools ‘agnostic’ is important to assessing what to apply to a specific project.

Not only do you need the ability to develop a strategy from an independent view, you then need to be able to secure resources to implement and train staff to use whatever technology approach you choose to take.

Four questions to ask

For the busy CIO or business manager, here are four key questions to ask next time testing strategies are discussed at the beginning of a project:

  1. Are we using a mix of manual and automated testing approaches? 
    There should be a clear rationale for applying the different approaches, rationale that is based on what delivers the best quality outcome.
  2. Have we analysed the total cost of ownership of the different approaches to testing? 
    Be careful not to see automated testing simply as a labour-saving approach. And ensure you understand how much testing is required during development, and then ongoing by your business-as-usual team.
  3. Are we getting testing involved early in the development/implementation process? 
    The earlier in the cycle the tests are designed, the more effective they are likely to be.
  4. What automated testing tools are we using, and have we done a careful analysis of the different licensed and open source options? 
    There are significant direct and hidden costs in either approach; you need to understand these very clearly from the beginning and get good, independent advice.


Looking for more information? Check out our engineering services, or get in touch with one of the team.