Who should write user acceptance tests?
Identifying who should write user acceptance tests is not as easy as it might sound. There are many possibilities. But which are the best ones?
The users should write and run the user acceptance tests.
This is probably true, at least in theory, but as we shall see, things are rarely that simple; that’s life.
User acceptance testing (UAT) is the responsibility of the users. By ‘user’, we are typically referring to those stakeholders who will use the system to support their roles in the day-to-day operation of the business. At least some of these users have hopefully been involved in the elicitation of the user (functional) requirements. There are a few things to consider:
- Do we know all the categories of users?
- Do the users know how to create and run tests that will promote confidence in the software’s ability to comprehensively and reliably support the business operation?
- Should user acceptance testing involve any non-business categories of users, such as system administrators?
- Could other categories of acceptance testing usefully be conducted for non-business categories of users?
- Should any stakeholders who would not be described as users, of any description, be involved?
Let’s look first at what we mean by user acceptance.
What do we mean by ‘user’ acceptance testing?
Acceptance testing should demonstrate, to those who will be purchasing, using, or managing the system, that the system is ready for prime time. In using the term ‘user acceptance testing’, we should define what we mean by a ‘user’. We generally test acceptance by running the software system and checking it for compliance with the ‘acceptance criteria’ for that system. Acceptance criteria are normally based on the requirements, and in some cases are an alternative to writing detailed requirements; we discuss acceptance criteria later in this article.
Testing should ensure that the developers have:
- Understood the requirements
- Created code that satisfies the requirements
What testing does not do is check that the requirements are correct, i.e.
- That the requirements really do represent what the users want
- That what the users want, or say they want, is what the business really needs
For this article, we’ll consider only those tests that implicitly assume that the requirements are correct, although note the comments on ‘acceptance criteria’ in the section ‘Role of the authors of the requirements’.
What we ideally want to ensure is that the software system that has been created will support the regular operations of the business, including anticipated variations from the norm. To achieve this, we have to ensure that we test in the context of the relevant business processes, information systems, and business rules: does the new system really support these things? We will implicitly assume that we are also testing the user (clerical) procedures and the user manuals.
Acceptance testing is typically among the last things the project team is involved with before commencing the handover of a new system to release management, the business, and the system administrators. By that time, everyone should have confidence that the system does work; bugs should not be surfacing at that stage. Bugs should have been eliminated during earlier testing phases such as unit, component, integration, and system testing. Of course, some bugs will still crawl out during acceptance testing, but if too many of them surface, user confidence will vanish.
Before we consider who should, or shouldn’t, write the acceptance tests, we ought to consider who should, or shouldn’t, write those confidence-promoting tests that are run prior to UAT.
Who should NOT write the tests, pre-UAT?
The answer to this question is often based on the premise that the person who authored the object being tested is not the best person to write and run the tests; the author(s) may not be sufficiently objective. This suggests that programmers should not test their own code. We recognise of course that programmers typically do test their own code. Some organisations make a pragmatic compromise, with programmers doing the initial tests and more independent testing roles following this up with additional tests.
We are also aware that many people maintain that programmers ought to test their own code. This is particularly true in environments where development team members are all generalists, i.e. everyone is supposed to do everything or anything. Some agile environments promote this sort of approach. There is, however, a good body of opinion that says that leaving the authors of the programs to do all the testing is not sufficient to promote confidence on the part of the customers and users. Furthermore, certain types of test, such as performance, security, and usability testing, perhaps using specialist tool sets, require specialist expertise.
Who should be involved in designing and writing the tests, pre-UAT?
We are excluding the program authors from this section simply because we have already discussed their possible roles.
The stakeholders who could or should be involved fall into several broad groups, which can be combined:
- Requirements authors
- Business stakeholders including independent subject matter experts
- Technical and other specialist experts
- The organisation’s specialist test team, assuming that it has one
- External (outsourced) specialist testers; these may be expensive but they are independent, e.g. of project managers and business managers
Role of the authors of the requirements, pre-UAT
Acceptance criteria come in various shapes and sizes, but essentially they are the basis of a mechanism for testing that computer code satisfies requirements. Requirements can be refined into ‘acceptance criteria’; if a requirement is not testable, it’s not a requirement. Alternatively, a user story might be defined as a placeholder requirement, with the detail defined as acceptance criteria; your choice.
The way that acceptance criteria are identified is all-important. The process of specifying acceptance criteria may well challenge the thinking about the requirements themselves, e.g. their wording, rationale, priority, and associations with other requirements.
Being obliged to write the acceptance criteria forces the authors of the requirements to immediately analyse each requirement for completeness, clarity, complexity and over-engineering, and precision and ambiguity; there is, of course, a lot of overlap between these terms. Acceptance criteria force examination of the basis of the requirement.
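To make this concrete, consider a hypothetical requirement such as ‘a customer may withdraw funds only up to the available balance of their account’. Refining it into acceptance criteria immediately surfaces questions that the bare statement leaves open: is a withdrawal of exactly the balance allowed? What about a zero amount? The minimal Python sketch below shows how such criteria might later become executable checks; the Account class, the withdraw method, and the business rule itself are purely illustrative stand-ins for whatever your system actually provides.

```python
# Hypothetical requirement: "A customer may withdraw funds only up to the
# available balance of their account."

import pytest


class Account:
    """Illustrative stand-in for the real system under test."""

    def __init__(self, balance: float) -> None:
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        # Business rule (assumed for this sketch): positive amounts only,
        # and never more than the available balance.
        if amount <= 0:
            raise ValueError("withdrawal amount must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


# Acceptance criterion 1: a withdrawal up to and including the available
# balance succeeds and reduces the balance accordingly.
def test_withdrawal_within_balance_succeeds():
    account = Account(balance=100.0)
    account.withdraw(100.0)
    assert account.balance == 0.0


# Acceptance criterion 2: a withdrawal exceeding the available balance is
# rejected and the balance is left unchanged.
def test_withdrawal_exceeding_balance_is_rejected():
    account = Account(balance=100.0)
    with pytest.raises(ValueError):
        account.withdraw(100.01)
    assert account.balance == 100.0
```

Notice how writing the second criterion forces a decision about the boundary case that the requirement’s wording never mentioned; that is exactly the analysis described above.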
Depending on your approach to requirements, the authoring group might comprise users, business analysts, product owners and managers, user experience experts, subject matter experts, and other concerned stakeholders, e.g. finance or legal. Again, depending on your approach, the authoring group might include developers and various groups with professional-sounding titles such as engineer or architect.
In gathering or discovering requirements, the emphasis might have been, perhaps unwittingly, on quantity and speed over quality. Acceptance criteria should provide a counter to this; they should improve real productivity through the early elimination of errors, omissions, and misunderstandings. The authoring group could take a ‘hackathon’ approach to this task, brainstorming acceptance criteria and rewriting or adding detail to the requirements as necessary.
The kind of activity described in this section is a form of testing; it is a test of the requirements themselves. This form of testing is sometimes described as ‘static testing’. It is very effective at improving the quality of the delivered software product whilst speeding up the rest of the development process. Involving the users, product owners and subject matter experts should ensure that the requirements really do represent what the business wants and needs.
Testing using working software is called ‘dynamic testing’. Static testing supports dynamic testing. Our testing budget only allows a finite number of tests to be run; if static testing, hackathon style, has eliminated many issues, dynamic testing should be more efficient.
Role of specialist test teams, pre-UAT
Specialist test teams can and should apply static testing techniques to the requirements; they should review the requirements, including the acceptance criteria, prior to developing the dynamic tests. They can do this as soon as the requirements are available. This practice offers additional quality checks on the requirements: have the developers, the testers, and the users all understood the requirements in the same way? (See ‘Requirements discovery is child’s play’.) Feedback should be given immediately to the requirements author(s).
Specialist test teams will run the dynamic tests of the software as soon as it is ready. There are typically multiple levels of test, e.g. component (unit) tests, integration (link) tests, and system tests. At the start of integration testing, planners can refer to feedback from component testing. System test planners can refer to feedback from integration tests. The feedback should include sight of the expected and actual test results. Assuming that all is going well, these activities should add to the feeling of confidence about the system under construction.
So finally, who should write user acceptance tests?
The acceptance criteria are the basis of the user acceptance tests. By the time user acceptance testing starts, the team responsible for running the tests should have a comprehensive set of acceptance criteria.
For each acceptance criterion, one or more tests can be written and run; a short sketch of this follows the list below. The exact number of tests will depend on such things as:
- System criticality and strategic importance
- Level of confidence gained so far
- Time
- Money
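As a rough illustration of that trade-off, here is a Python sketch in which a single hypothetical criterion (‘a withdrawal exceeding the available balance is rejected’) is exercised with a varying number of cases; the can_withdraw function is assumed purely for the example. With little time or budget you might keep only the first case; for a business-critical system you might add the boundary and edge cases as well.

```python
import pytest


def can_withdraw(balance: float, amount: float) -> bool:
    """Hypothetical business rule under test: a withdrawal must be positive
    and must not exceed the available balance."""
    return 0 < amount <= balance


# One acceptance criterion, several candidate tests. How far down this list
# to go is the criticality / confidence / time / money trade-off.
@pytest.mark.parametrize(
    "balance, amount, allowed",
    [
        (100.0, 150.0, False),   # the simple over-limit case: a minimal test set
        (100.0, 100.0, True),    # boundary: exactly the available balance
        (100.0, 100.01, False),  # boundary: just over the available balance
        (0.0, 0.01, False),      # edge: empty account
        (100.0, 0.0, False),     # edge: zero withdrawal
    ],
)
def test_withdrawal_limit(balance, amount, allowed):
    assert can_withdraw(balance, amount) is allowed
```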
There is a choice of options for who runs user acceptance tests. Whoever does it will obviously need to be familiar with the system. This familiarity might be gained by involvement in the development process, reading the requirements documentation, helping to establish acceptance criteria, or receiving training in the system.
Options for who does it might include:
- The users and product owners alone, perhaps with some training from specialist testers or business analysts
- The users and product owners with the support of some combination of testers, business analysts, or others
- The organisation’s specialist acceptance test team, if it has one
- An external group of testers, e.g. a group from a consultancy that specialises in testing
- Users as observers whilst a specialist team do the actual tests
There is no universally correct answer to this. It will depend on such things as:
- Availability of staff
- Risks associated with the operational system or systems involved
- Nature and complexity of the system
- Organisational capacity and expertise of available staff
- Organisational culture
- Other factors relevant to specific organisations
Who reviews the results of acceptance testing?
Ultimately the project sponsor must be accountable for the review and consequent decision about acceptability. In practice, the sponsor is likely to rely on the recommendations of others, including the project manager and customer representatives such as product owners and product managers. It is the comprehensiveness and effectiveness of the entire process that will give weight to the quality of these recommendations. The entire team can make contributions, direct or indirect, to the writing and running of the user acceptance tests.