Acceptance Testing

Ensure that the deliverable meets business and customer requirements.

Acceptance Testing is the final level of quality testing. Its main aim is to determine whether the system satisfies the required specifications and is acceptable for delivery. As part of Acceptance Test Driven Development (ATDD), team members with different perspectives (customer, development, testing) collaborate to write acceptance tests in advance of implementing the corresponding functionality. These acceptance tests represent the user’s point of view and act as a form of requirements, describing how the system will function as well as serving as a way of verifying that the system functions as intended.

Acceptance testing ensures that the deliverable meets business and customer requirements. Acceptance tests should be brief statements that explain the intended behavior and desired result, written in clear, easy-to-understand language. For example: “If I am logged in, when I click the ‘Buy’ button, the total item count for my cart should increase by one.” These statements are generally expressed as examples or usage scenarios.
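As a concrete illustration, the scenario above could be automated along these lines. This is a minimal sketch in Python; the ShoppingCart class, its methods, and the test name are hypothetical stand-ins for whatever interface the real system under test exposes (a real suite would drive the actual application through its UI or API).

```python
# Minimal sketch of the cart scenario as an automated acceptance test.
# ShoppingCart is a hypothetical stand-in for the real system under test.

class ShoppingCart:
    """Toy model of the system under test."""

    def __init__(self, logged_in: bool):
        self.logged_in = logged_in
        self.item_count = 0

    def click_buy(self) -> None:
        # Only logged-in users can add items to their cart.
        if self.logged_in:
            self.item_count += 1


def test_buy_increments_cart_for_logged_in_user():
    # Given: I am logged in
    cart = ShoppingCart(logged_in=True)
    count_before = cart.item_count
    # When: I click the "Buy" button
    cart.click_buy()
    # Then: the total item count for my cart should increase by one
    assert cart.item_count == count_before + 1
```

Note how the Given/When/Then comments map directly onto the plain-language statement; keeping that mapping visible is what lets non-developers read the test as a specification.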

Generating acceptance tests serves to drive discussion across three perspectives: customer (what problem are we trying to solve?), development (how will we solve this problem?), and usability (are there better ways to solve this problem?).

Similar to a unit test, an acceptance test generally has a binary result: pass or fail. A failure suggests, though does not prove, the presence of a defect in the product. In many cases the aim is to automate the execution of such tests with a software tool, either one built ad hoc by the development team or an off-the-shelf product.
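To make the binary outcome concrete, the sketch below shows a toy runner that executes a list of tests and records PASS or FAIL for each. This is purely illustrative (an assumed, hand-rolled harness); in practice teams use an existing runner such as pytest or FitNesse.

```python
# Toy illustration of a tool executing acceptance tests, each yielding
# a binary PASS/FAIL result. Illustrative only; use a real runner in practice.

def run_acceptance_tests(tests) -> dict:
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            # A failure suggests, though does not prove, a defect.
            results[test.__name__] = "FAIL"
    return results


def test_always_passes():
    assert 1 + 1 == 2


def test_always_fails():
    assert 1 + 1 == 3  # deliberately wrong, to show a FAIL result


if __name__ == "__main__":
    print(run_acceptance_tests([test_always_passes, test_always_fails]))
    # {'test_always_passes': 'PASS', 'test_always_fails': 'FAIL'}
```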

Teams mature in their practice of agile use acceptance tests as the main form of functional specification and the only formal expression of business requirements. Other teams use acceptance tests as a complement to specification documents containing use cases or more narrative text.


Also known as: Functional Testing, Customer Testing, Story Testing, Acceptance Test Driven Development, End-User Testing.


Types of Acceptance Testing:

  • User Acceptance Testing (UAT): Ensures the final deliverable satisfies the functional necessities of those who will handle the system after completion (the end user of the product).
  • Business Acceptance Testing (BAT): Ensures that the final deliverable satisfies the desired monetization objectives of the business, or that it fits within the intended business model.
  • Contract Acceptance Testing (CAT): Ensures that the final deliverable includes all aspects stipulated in the contract.
  • Regulation Acceptance Testing (RAT): Ensures that the final deliverable complies with all rules and regulations set forth by a governing entity. Depending on the industry, this type of acceptance test can be very complicated and/or costly (think of the automobile industry and crash testing).
  • Operational Acceptance Testing (OAT): Ensures that the final deliverable is compatible, reliable, and stable when used under actual operating conditions.
  • Alpha Testing: Early testing intended to identify systemic flaws with the final deliverable.
  • Beta Testing: Testing done by a large sample of final users, intended to reveal how the deliverable operates "in the wild," that is, how users will operate it with their own systems.


Benefits:

  • Confirms when a user story is complete.
  • Helps the team understand the story/feature.
  • Removes ambiguity from requirements.
  • Increases satisfaction of the customer by ensuring their requirements are met.
  • Identifies functionality and usability issues early on.
  • Promotes closer collaboration between developers and customers, users, or domain experts.
  • Provides a clear and unambiguous “contract” between customers and developers.
  • Decreases the chance and severity of both new defects and regressions.


Origins:

  • 1996: Automated tests identified as a practice of Extreme Programming, without much emphasis on the distinction between unit and acceptance testing, and with no particular notation or tool recommended
  • 2002: Ward Cunningham, one of the inventors of Extreme Programming, publishes Fit, a tool for acceptance testing based on a tabular, Excel-like notation
  • 2003: Kent Beck publishes the book “Test Driven Development: By Example”
  • 2003: Bob Martin combines Fit with Wikis (another invention of Cunningham’s), creating FitNesse