
Software Testing


1.    Introduction to Software Testing and Quality Assurance

1.1.        Project Organization

The Test discipline is related to the other disciplines as follows (this topic is elaborated in Chapter 3, Testing Process and its activities):

The Requirements discipline captures requirements for the software product, which is one of the primary inputs for identifying what tests to perform.

The Analysis & Design discipline determines the appropriate design for the software product, which is another important input for identifying what tests to perform.

The Development discipline produces builds of the software product that are validated by the Test discipline. Within an iteration, multiple builds will be tested, typically one per test cycle.

The Environment discipline develops and maintains supporting artifacts that are used during Test, such as the Test Guidelines and Test Environment.

The Project Management discipline gives the test team advance notice of upcoming projects, change requests, and timelines for the different phases of testing. In return, the test team provides time and resource estimates. If issues or support needs arise that cannot be resolved within the test team, the project management team facilitates help from other teams.

The Configuration & Change Management discipline controls change within the project. The test effort verifies that each change has been completed appropriately.

   



      1.2.        What is testing/QC?

 

Testing is a process of evaluating a software system against pre-defined standards with the intention of finding defects/errors. Here, the pre-defined standards are the requirements, which specify the results expected of the system.

A defect/error is a difference between the expected result and the actual result. This difference can be a difference in the behavior of the application (also called functionality) or in the characteristics of the application (also called look and feel).

Requirements are the basis for expected results. Actual results are observed at the time of testing the software application/system.
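To make the comparison concrete, here is a minimal Python sketch of the expected-versus-actual check at the heart of testing; the `calculate_discount` function and its expected value are hypothetical, invented purely for this illustration:

```python
def calculate_discount(order_total):
    """Hypothetical function under test: 10% discount (integer) on orders over 100."""
    return order_total // 10 if order_total > 100 else 0

def test_discount_applied_above_threshold():
    expected = 15                     # expected result, derived from the requirement
    actual = calculate_discount(150)  # actual result, observed while testing
    assert actual == expected, f"Defect: expected {expected}, got {actual}"

test_discount_applied_above_threshold()
```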

1.3.        What is QA?

Software QA deals with the creation, implementation, and continuous improvement of the processes employed in all phases of the System Development Life Cycle, including but not limited to:


·        Planning

·        Analysis

·        Designing

·        Coding/Building

·        Testing

·        Deployment/Implementation, and

·        Maintenance.

 

Here, a process is a set of methods, procedures, and practices that we use to develop and maintain software systems and the associated products (e.g., project plans, design documents, code, test cases, and user manuals).

 

1.4.        Testing/QC vs. Quality Assurance

As the descriptions above show, QA is much broader than testing: QA encompasses all phases of the SDLC, whereas testing is one particular phase of it.

Testing, therefore, is only a part of QA.

QA focuses on the completeness and thoroughness of the methods and practices used to create a software product, whereas testing seeks out the incompleteness and deficiencies of that same product.

 

2.    Software Development Life Cycle and role of Testing

 

2.1.        SDLC

SDLC stands for System Development Life Cycle. A system is an inter-related set of components, with identifiable boundaries, working together for some purpose.

Every software system passes through certain distinct identifiable phases during its life. These phases together make up its life cycle called System Development Life Cycle (SDLC).

The following table outlines the phases of the SDLC and explains the activities that take place in each phase. (These are the most commonly identified phases; other writers may use somewhat different wording, so don't worry about variations.)

Phase: Planning
Activities:
  1. Conceive the idea of the project (arising from the identification of a problem, opportunity, or need).
  2. Consultation and feasibility study.
  3. Secure funds and other approvals.
  4. Create the project plan.

Phase: Analysis
Activities:
  1. Conduct a detailed study of the system.
  2. Analyze business requirements.
  3. Create system requirements.
  4. Review and baseline system requirements.

Phase: Design
High-Level Design:
  1. List of modules and a brief description of the functionality of each module
  2. Interface relationships among modules
  3. Dependencies between modules
  4. Database tables identified, with key elements
  5. Overall architecture diagrams, along with technology details

Low-Level/Detailed Design:
  1. Detailed functional logic of each module, in pseudo code
  2. Database tables, with all elements, including their type and size
  3. All interface details
  4. All dependency issues
  5. Listing of error messages
  6. Complete input and output formats for each module

Phase: Develop/Code/Build
Activities:
  1. Code/develop the modules according to the system requirements and design documents.
  2. Unit testing.
  3. Integrate the modules to create sub-systems and/or systems.
  4. Transfer the code to the testing environment.

Phase: Test
Activities:
  1. Smoke testing
  2. Integration and interface testing
  3. System testing
  4. Regression testing
  5. User acceptance testing
  6. Release/production testing

Phase: Implement/Deploy
Activities:
  1. Move the code to the production environment.
  2. Create a rollback plan.
  3. Perform high-level validations in the production environment to make sure the implementation was successful and that the code in this environment is in fact the code that was tested in the testing phase.

Phase: Support and Maintain
Activities:
  1. The production support team provides live support.
  2. Raise needs for emergency fixes, if any.

                                                            Table 1

 

2.2.           SDLC Models

This document provides only high-level descriptions of some common SDLC models and is not a comprehensive list of all known SDLCs. The SDLC models included in this document are Waterfall, Staged, Prototyping, and Extreme Programming (XP).

 

Waterfall Model

The Waterfall development methodology (also known as the “traditional method”) structures a project into distinct phases with defined deliverables in each phase. This methodology assumes each phase is completed before the next phase in the sequence begins.

 

Staged Model

The Staged model is also known as the Incremental model. The Incremental model divides the end product into separate builds, where sections of the product are created, tested, and deployed in parallel. This model is adopted when all of the requirements are known (and considered relatively stable) at the start of development, with minimal anticipated changes, and when it is desirable for some of the builds/work packages to be completed as soon as possible.

 

 

2.3.        Testing/QC life cycle in SDLC

 

Testing, or the involvement of the test team, should begin as early in the development cycle as possible. In the initiation phase, the project management team may approach the test team lead and/or other experts on the test team for time and resource estimates for any upcoming changes or projects, and incorporate some or all of those estimates into the project plan and schedule. Beyond that, the QC/testing process usually begins as soon as the first draft of the functional requirement specifications is ready for review. In other words, testing should begin after the system's functional requirements have been created but before they are baselined for coding. Looking at Table 1 in section 2.1, Testing/QC typically begins in the second phase of the SDLC (the Analysis phase), to be precise, alongside its fourth activity (review and baseline system requirements). The following five diagrams depict the Testing/QC life cycle in the context of the SDLC, including the testing activities that run in parallel with the corresponding SDLC activities. Note that this list of QC activities is by no means comprehensive; other writings may add to it, subtract from it, or order the activities differently.

[Five diagrams (not reproduced here): the Testing/QC life cycle shown alongside the corresponding SDLC activities.]

3.    Testing Process and its activities

 

When a new project begins or changes come up for an existing project, the project management team seeks input from the test team and/or its lead/manager for resource estimates and timelines.

 

Likewise, the project management team gathers input from the development team and business analysts. It then reconciles all the inputs and creates a project plan with defined phases, deliverables for each phase, and timelines (i.e., start and end dates).

 

In this way, the test team has some involvement at the beginning of every project. Beyond this early involvement in providing input to the project plan, the actual testing effort (as explained in section 2.3) typically begins when the system's requirements are ready for review.

 

The following table lists the Testing/QC activities during the life cycle, along with the person(s) responsible for each.

 

QC/Testing Phase | Activity | Who is responsible
Test Preparation | Create resource plan | QC Team Lead/QC Manager
Test Preparation | Create schedule | QC Team Lead/QC Manager
Test Preparation | Create unit test plan, also called a component test plan (not to be confused with the unit testing that developers do; this plan covers the work assigned to an individual tester) | All members of the test team
Test Preparation | Create master test plan | QC Project Lead or QC Manager combines the component test plans into the master test plan
Test Preparation | Analyze requirements and perform risk analysis | All members of the test team, in consultation with developers, business analysts, and sometimes the client
Test Preparation | Create Requirements Traceability Matrices, i.e., derive test objectives and test conditions from the requirements | All members of the test team
Test Preparation | Design test cases | All members of the test team
Test Preparation | Design test scenarios or use cases | All members of the test team
Test Preparation | Identify test data needs; create and/or obtain test data | All members of the test team, with the help of developers and, if necessary, outside teams
Test Preparation | Identify environment needs and get the environment set up | All members of the test team convey their needs to the QC Project Lead and/or QC Team Lead, who arranges it with the help of developers/system admins/database admins
Test Preparation | Create test scripts (manual and/or automated) | All members of the test team
Test Preparation | Peer review | All members of the test team
Test Preparation | Create execution plan | All members of the test team
Test Execution | Execute scripts according to the execution plan, risk-analysis priorities, and the test schedule | All members of the test team
Test Execution | Manage defects | All members of the test team
Test Execution | Certify the code in the test environment | QC Project Manager/QC Team Lead
Test Execution | Create a rollback plan | Development team, in consultation with QC Project Manager/QC Team Lead
Test Execution | Create new and/or pick up existing scripts for execution in the production environment | All members of the test team
Test Execution | Validate the code moved to the production environment | All members of the test team
Test Execution | Approve the release or roll back | Development team, in consultation with QC Project Manager/QC Team Lead
Post Test | Create defect analysis reports | All members of the test team
Post Test | Write change requests for deferred defects | All members of the test team
Post Test | Fill in the key-learnings document | All members of the test team

 

 

3.1.        Requirements Analysis and Walkthroughs

Testing is comparing expected behavior with actual behavior and reporting bugs if they differ. What tells you the expected behavior of the application? The REQUIREMENT/FUNCTIONAL SPECIFICATIONS. Without requirement specs you cannot meaningfully test any application.

 

Business/requirements analysts interact with the end user/client and gather the business requirements. Business requirements are then converted into functional specifications, also called system requirements. Business requirements are what the client wants: remember, they say what the client wants, but not how the client wants it.

Functional specs, or system requirements, say how the system should look and work in order to meet the business requirements of the client. In other words, system requirements describe what should be seen in the application. Clients can only give their business requirements; business analysts/requirements specialists have to convert them into functional specs/system requirements.

Once the functional specs are ready, the testing process begins. A copy of the functional specs is distributed to all stakeholders of the project, including testers. Testers must read the requirements thoroughly and prepare a list of questions they have about them. Testers then participate in what are called “requirement walkthrough sessions,” in which application designers, developers, and requirements specialists also participate. These are essentially meetings in which all the requirements (hereafter “requirements” means the functional/requirement specs) are critically discussed to find missing, incomplete, inconsistent, and contradictory requirements.

 

Example of missing requirements:

We see a ‘Reset’ button below a form that clears all the values entered in the form, but there is no mention of that button in the requirements, except in the sample screenshot.

 

Example of incomplete requirements:

A requirement says, “The user can change his personal details in the application from the abc screen.” This requirement does not say what happens if, for example, the user logs out of the application without saving the changes he made to his personal details.

An example of inconsistent requirements is when the requirements say one thing about a particular action in one place and something different in another place about the same or a similar kind of action.

Look at the following example requirements:

One requirement reads:

‘When a user tries to access files belonging to others, he/she should get an error message: “You do not have permission for this action.”’

Another requirement reads:

‘When a user tries to access files or folders belonging to others, he/she should get the message: “You need admin privileges for this kind of action.”’

In the above two requirements the user action is more or less the same, but the messages are different.

Another example: the user gets a warning message when he attempts to delete individual mails from the bulk mail folder, but gets no warning message when the “Empty Folder” link is clicked, even though that deletes all the mails from the bulk mail folder.

Another example: in one place the requirements document says, “When the user tries to quit the application without saving his/her changes, he/she should get the message window ‘Are you sure you want to quit without saving changes?’ with two buttons labeled OK and Cancel.” In another place the requirements document says, “When the user tries to quit the application without saving his/her changes, he/she should get the message window ‘Your changes will be lost if you log out without saving them. Do you want to continue?’ with two buttons labeled Yes and No.”

In the above case, the requirements specify two different messages for the same kind of user action. These are called inconsistent requirements.

 

Example of contradictory requirements:

If there are contradictory requirements, implementing one requirement automatically eliminates another. In one place the requirements say one thing about a particular function and the opposite about the same function in another place; or they say one thing about one function, and something about another function whose effect goes against what was said about the first.

For example, the requirements in one place say that role X should not have access to functionalities A1 through A10, and in another place say that functionality A5 should be accessible to role X only. These are mutually contradictory.

 

Once the requirement walkthrough sessions are over and all necessary corrections to the functional/requirements specs are made, the requirement specifications are declared finalized/baselined. A copy of the requirements is given to the testing and development teams. Testers go ahead with developing the test plan, test cases, test scripts, etc. (elaborated elsewhere in this document). Developers go ahead with coding.

 

3.2.        Test plan creation

The test plan, also called the QC plan, is a comprehensive master plan for executing a testing project. It is the blueprint on which everything else is based. It addresses many important questions, such as:

 

What to test?

How to test?

When to test?

Where to test?

Who should test it? And

What is required to test it?

 

Answers to these questions make up the test plan. The test plan is an evolutionary document: it keeps changing and evolving as testing progresses.

 

What to test? – This activity identifies the components of the application under test and decides what to test and what not to test.

 

How to test? – This activity has to do with test strategy. It involves making many decisions, such as:

 

What are the test/QC entry and exit criteria?

Whether to test manually, use automation tools, or use a combination of both.

What templates, processes, and standards should be followed?

Where to store the QC/testing artifacts, such as the test plan, test cases, etc.?

How to track defects, and what should the defect management policy be?

How many levels of testing should be executed, and what should the execution sequence be (the execution plan)?

What should the change control policy be?

What if some things take longer than expected? How do we meet tight deadlines or overcome pressure situations? (This calls for risk analysis and a contingency plan.)

What methodologies should be used? Black box, white box, gray box, or a combination of these.

 

When to test? – The answer to this question produces the schedule for the testing project, with different phases, detailed activities in each phase, clearly defined deliverables, and timelines (start and end dates).

 

Where to test? – This question relates to the test environment (operating systems, networks, databases, tools, applications, etc.):

 

How close should the test environment be to the real-world environment?

Who is responsible for setting up and managing the test environment?

 

Who should test it? – This question relates to resource planning. Usually the QC team lead or manager is responsible for this. The activity includes, but is not limited to:

1.   Identifying the number of resources required and recruiting new resources if needed.

2.   Identifying training needs and arranging for training.

3.   Defining the roles and responsibilities of test team members.

4.   Dividing work among QC team members.

5.   Defining the QC team's responsibilities towards other teams.

6.   Defining the QC team's communication process with other teams.

7.   Defining what the QC team can expect from other teams as a matter of right.

 

What is required to test it? – The QC project manager, with the help of team members, should identify the needs of the testing team in terms of test data, environment, tools, etc., and arrange to get them to the team in a timely manner for the successful execution of the testing project.

 

There are two levels of test plan: the unit/component test plan and the master test plan. It is the responsibility of every test team member to create a test plan for the piece of work assigned to him or her; such test plans are called unit/component test plans. From the component test plans the QC project manager, with input from the QC team lead and other teams, creates a master test plan. Unit test plans can be very brief. Since all the information might not be available at the time the unit and master test plans are created, it is not possible to complete them up front. Some sections might have to be left blank or filled with the acronym TBD (‘to be defined’), and some sections might have to be changed as testing progresses. Here is a sample, or template, for the QC/test plan:

 

 

 


 

 

Application under Test

Xxxxx Release xx.xx Test Plan

Version xx
Last Modified Date:

TABLE OF CONTENTS

 

1.   The Application under Test
2.   Test Plan Objectives
3.   Components of the Release
4.   Release Priorities
5.   Release Schedule
6.   Contact Personnel for the Release
7.   Subject Matter Experts in the Test Team
8.   Release Test Plan Matrix
9.   Test Data Needed
10. Test URLs

1.    The Application under Test:

·        Application Name:

·        Release #:

·        Release Duration:

·        Testing Duration:

2.    Test Plan Objectives:

To identify the components of the release.

To identify critical areas and establish priorities for testing the release components.

To define the items to be tested and the items not to be tested.

To identify the contact personnel in each team for each component of the release.

To plan the testing resources.

To identify the activities needed to successfully test the release.

 

3.    Release Components:

 

4.    Release Priorities:

After meetings with the Development and Requirements teams, it was understood that the following components of the current release will have the maximum impact for the business client, so they are regarded as the top-priority areas for testing as well. However, please note that release components not listed below should by no means be construed as less important or unimportant. Everything is important.

 

 

 

 

 

5.    QC Project Schedule:

(The Deliverable, Start Date, and End Date columns of this template are left blank, to be filled in for each release.)

QC/Testing Phase | Activity | Who is responsible
Test Preparation | Create resource plan | QC Team Lead/QC Manager
Test Preparation | Create schedule | QC Team Lead/QC Manager
Test Preparation | Create unit test plan | All members of the test team
Test Preparation | Create master test plan | QC Project Lead
Test Preparation | Analyze requirements and perform risk analysis | All members of the test team, in consultation with developers, business analysts, and sometimes the client
Test Preparation | Create Requirements Traceability Matrices, i.e., derive test objectives and test conditions from the requirements | All members of the test team
Test Preparation | Design test cases | All members of the test team
Test Preparation | Design test scenarios or use cases | All members of the test team
Test Preparation | Identify test data needs; create and/or obtain test data | All members of the test team, with the help of developers and, if necessary, outside teams
Test Preparation | Identify environment needs and get the environment set up | All members of the test team convey their needs to the QC Project Lead and/or QC Team Lead, who arranges it with the help of developers/system admins/database admins
Test Preparation | Create test scripts (manual and/or automated) | All members of the test team
Test Preparation | Peer review | All members of the test team
Test Preparation | Create execution plan | All members of the test team
Test Execution | Execute scripts according to the execution plan, risk-analysis priorities, and the test schedule | All members of the test team
Test Execution | Manage defects | All members of the test team
Test Execution | Certify the code in the test environment | QC Project Manager/QC Team Lead
Test Execution | Create a rollback plan | Development team, in consultation with QC Project Manager/QC Team Lead
Test Execution | Create new and/or pick up existing scripts for execution in the production environment | All members of the test team
Test Execution | Validate the code moved to the production environment | All members of the test team
Test Execution | Approve the release or roll back | Development team, in consultation with QC Project Manager/QC Team Lead
Post Test | Create defect analysis reports | All members of the test team
Post Test | Write change requests for deferred defects | All members of the test team
Post Test | Fill in the key-learnings document | All members of the test team
 

 

6.    Resource Plan

# | Name | Role | Responsible for | SME for | Needs training on | Mentor for
1 | Name1 | Team member | Component or functionality of the application under test |  |  |
2-10 | (to be filled in) |  |  |  |  |

7.    Release Contact Personnel (outside teams):

# | Name | Email | Phone # | Team | Contact for
1-5 | (to be filled in) |  |  |  |

8.    Subject Matter Experts in the Test Team:

This list is prepared based entirely on who worked on what during previous releases:

Subject Name | Primary Contact | Alternative Contact
(to be filled in) |  |

9.    Release Test Plan Matrix:

The release test plan matrix begins with the identification of the deliverables expected of the test team during the testing cycle, and proceeds to spell out the tasks that must be accomplished to produce those deliverables, along with the due dates for each and the personnel responsible. It also provides a high-level strategy for performing each task.

 

Release Components / Testing Deliverables | What to do (the testing tasks) | How to do it (the strategy) | When to do it (the due dates) | Who should do it (assigned to) | Status/Notes
Test Plan |  |  |  |  |
Resource Plan |  |  |  |  |
Release Testing Schedule |  |  |  |  |
Requirements Validation Matrices (RVM) |  |  |  |  |
Identify Test Data and Environment Needs |  |  |  |  |
Test Scripts |  |  |  |  |


 

 

3.3.        Traceability Matrices

 

One of the defining characteristics of good QC or testing practice is providing traceability between the requirements and the tests that verify them. Requirements are one end of testing, while defects represent the other end, and it is very important to link them together in some form, since defects are, after all, deviations from requirements. A defect cannot be a defect if there is no requirement related to it. We find defects while executing tests; tests are created by grouping test conditions; test conditions are derived from requirements. Traceability is the mechanism for establishing the links between all these entities. It can be as simple as a table showing requirements, test conditions, test cases/tests, and defects.

 

Whether you do it with a tool or manually, using Word documents or Excel workbooks, does not matter; there should be a clear mapping between the tests created and the requirements those tests cover. Given below is one of the widely used templates for mapping tests to requirements. It is called a Requirements Traceability Matrix, or Requirements Validation Matrix.

 

It is highly advisable to use such a matrix even if you are using a test management tool like Test Director/Quality Center. Because of their tree structures and the limited view of content in their tabs, such tools tend to inhibit your ability to think clearly: you cannot see requirements and tests side by side, only one at a time. So first design your tests in a matrix like this, and then update the test management tool. This may seem like an extra burden, but the benefits are worth the effort.

 

 

Business Requirements | System Requirements | Test Conditions | Test Case Name | Defect IDs | Risk of not testing (see the codes below)
BR1 | SR1, SR2, SR3 | 1.1, 1.2, 1.3, 1.4 | Test Case 1 |  |
BR2 | SR1, SR2, SR3 | 2.1, 2.2, 2.3 | Test Case 2 |  |

Risk Codes:

1 = Critical; 2 = High; 3 = Medium; 4 = Low; 5 = Trivial

 

The first column must at least give the name of the business requirements document, along with its location and the section within that document from which the requirement was taken. If available, it is highly advisable to include the requirement numbers as well.

 

The second column should contain similar information about the system requirements.

 

For example, suppose a requirement in one section of the requirements document (say, “XYZ Functional Requirements, section: Registration form”) reads:

“The application should be protected with a User ID and password.”

 

The corresponding system requirements might look like the following:

 

  1. There should be a password field in the registration form, and it should accept five to eight alphanumeric characters only. Otherwise the system should throw an error, ‘Invalid password format’, with an OK button. Upon clicking OK in the error message, the password field should be cleared so that the user can enter a different password.
  2. There should be a field for the user to enter his choice of user name. The system should validate whether that user name is available or has already been taken by someone else. If the user name has already been taken, the system should display an error message, “This user name is not available. Please select a different user name”, with an OK button. Upon clicking OK in the error message, the User Name field should be cleared so that the user can enter a different user name.

 

In the above matrix, the first column should contain “XYZ Functional Requirements, section: Registration form”, and the second column should contain the two system requirements above.

 

Each system requirement should then be broken down further into test conditions using black-box testing techniques (described elsewhere in this document).

 

One requirement can be broken down into any number of test conditions, depending on what the requirement is.
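As a concrete sketch (not from the original text), here is how the password requirement above might decompose into test conditions in Python; the `is_valid_password` checker stands in for the real system, and the condition IDs are hypothetical, merely echoing the RTM's numbering style:

```python
def is_valid_password(password: str) -> bool:
    """Stand-in for the system's rule: five to eight alphanumeric characters only."""
    return 5 <= len(password) <= 8 and password.isalnum()

# Test conditions derived from the single password requirement.
# Each tuple: (condition ID, description, input value, expected outcome).
test_conditions = [
    ("1.1", "5-character alphanumeric password is accepted", "abc12", True),
    ("1.2", "8-character alphanumeric password is accepted", "abcd1234", True),
    ("1.3", "4-character password is rejected", "abc1", False),
    ("1.4", "9-character password is rejected", "abcd12345", False),
    ("1.5", "password with special characters is rejected", "abc12!", False),
]

for cond_id, description, value, expected in test_conditions:
    actual = is_valid_password(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{cond_id} {status}: {description}")
```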

 

While the first and second columns are filled in during the analysis phase of the testing life cycle, the third, fourth, and last columns are usually filled in during the test design phase. The Defect IDs column can only be filled in during the test execution phase.

 

3.4.        Test Case Design

 

Test case design, also called test design, is the process of deriving test conditions from system requirements and grouping related test conditions into logical units called tests or test cases. A test case is a test that includes one or more test conditions. A test condition is the smallest unit into which a requirement can be broken down.

 

The process of grouping test conditions into test cases should have the following characteristics:

 

  1. Each test condition must be covered at least once by a test case; no test condition should be left out.
  2. Avoid covering a test condition multiple times; this causes duplication of effort and redundancy.
  3. Test conditions grouped into a test case should have some commonality or relationship based on some criterion.

 

The above are the most important features of an efficient test design.

The criterion for grouping test conditions into test cases could be any one, or a combination, of the following characteristics of the test conditions:

 

  1. The test conditions might verify the same screen of the application.
  2. They might verify the same functionality of the application.
  3. They might require the same kind of inputs.
  4. They might need the same tools or technologies, etc.

 

This list of characteristics could grow on and on, depending on the application under test.

 

Once a test case is identified for a set of test conditions, it should be given a meaningful name. The test case names should be documented in the requirements traceability matrix (specifically in the fourth column), against the test requirements or test conditions they cover.

 

There can be one-to-one, one-to-many, and many-to-one relationships between test cases and test conditions. In other words, one test condition may be covered by more than one test case, or a single test case may cover more than one test condition.
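A minimal sketch of such grouping, reusing the hypothetical registration-form conditions from section 3.3 (the test case names and the grouping criterion, conditions that verify the same field of the same screen, are invented for this illustration):

```python
# Hypothetical grouping of test conditions into test cases.
test_cases = {
    "TC_Registration_Password": ["1.1", "1.2", "1.3", "1.4", "1.5"],
    "TC_Registration_UserName": ["2.1", "2.2"],
}

# Sanity checks for the two design rules above: every condition covered, none twice.
all_conditions = {"1.1", "1.2", "1.3", "1.4", "1.5", "2.1", "2.2"}
covered = [c for conds in test_cases.values() for c in conds]
assert set(covered) == all_conditions, "some test condition is missing"
assert len(covered) == len(set(covered)), "some test condition is duplicated"
```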

 

3.5.        Test Script Creation

 

A test script is a set of detailed, step-by-step instructions for executing a test case. Every test case MUST have a test script associated with it; they always go together. The steps in the test script explain what to do in order to verify the test conditions included in the test case.

 

Organizations with very tight schedules and close release deadlines may sometimes skip test script creation altogether, but this is not recommended for critical or otherwise important applications that have serious implications should something go wrong.

 

The terms test case and test script are sometimes used synonymously. Given below is a suggested structure for the test script of a test case. If you are using a test management tool like Test Director, you do not need this template.

 

 

 

 

 

Test Case Identifier: <<Test Case Name and Number>>

Application Identifier: ………………………………………

 

Test Case Logistics:

Executed by | Defect #s | Date Opened | Date Resolved | Execution Status | Status Date | Pass/Fail
(rows to be filled in during execution)

Execution status codes: 1. Not Started, 2. In Progress, 3. Completed.

 

Test Case Objectives:

  1. To verify that …
  2. To verify that …

 

Environment Setup:

Specify any special test environment setup needs here, e.g., purposely disconnecting an interface to verify error messages, pointing the application to an empty database, etc.

 

Test Data Setup:

Specify the test data requirements here, i.e., the data needed to execute this test case:

 

Field Name | Input Values: Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4
Field Name 1 |  |  |  |
Field Name 2 |  |  |  |

 

Script / Execution Steps:

Step # | Action | Input Data | Expected Result | Actual Result | Pass/Fail
1 | Open the … application | - | Application should open without errors |  |
2 | Enter a valid user name in the User ID field | abcd | ….. |  |
3 | Enter a valid password in the password field | xxxxx | ….. |  |
4 | Click the Submit button | - | User should be able to log in |  |
5 | … |  |  |  |
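Where scripts are automated, the same steps can be expressed in code. Below is a minimal sketch using Selenium WebDriver in Python; the URL, the element IDs (`user-id`, `password`, `submit`), and the "Welcome" check are hypothetical, since the template elides the real application:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Step 1: open the application (placeholder URL for the elided name).
    driver.get("https://example.com/login")

    # Steps 2-3: enter a valid user name and password (test data: abcd / xxxxx).
    driver.find_element(By.ID, "user-id").send_keys("abcd")
    driver.find_element(By.ID, "password").send_keys("xxxxx")

    # Step 4: click Submit; the expected result is a successful login.
    driver.find_element(By.ID, "submit").click()
    assert "Welcome" in driver.page_source, "login appears to have failed"
finally:
    driver.quit()
```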

 

 

 

 

3.6.        Test Execution

 

Needless to say, this activity is performed during the test execution phase. Every member of the QC team should include an execution plan in their component test plan. The execution plan specifies the execution sequence and the execution schedule for the test cases created.

 

Following are some recommendations for the test execution phase:

  1. The test execution sequence and schedule should conform to the application's release schedule, risks, and priorities.
  2. Usually, tests that verify the defining functionality of the application should be executed first. For example, in an email application you should first test the login, read mail, and write mail functionality, before attachments, the address book, etc.
  3. The execution of some tests results in the creation of test data for other tests. Identify such tests and execute them first; this saves the time otherwise required to create test data.

 

For example, many online applications require users to register. In such applications, executing the tests that verify the functionality of adding new users ahead of the tests that verify the functionality of modifying existing users will save some time, because to modify user details there must first be users. The first category of tests adds the new users, and the same user details can be used later when executing the tests that verify the functionality of modifying user details.

 

  4. Tests that verify the maximum functionality of the application in the minimum time should be executed next.
  5. Tests that verify the same functionality with different test data several times can be taken up next.
  6. Tests that verify cosmetic features of the application should be pushed to the last; a toy ordering sketch follows this list.
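Here is that sketch, with a hypothetical test inventory where each entry carries a priority rank (1 = defining functionality first, 5 = cosmetic checks last):

```python
# Hypothetical test inventory; the rank encodes the recommendations above.
tests = [
    ("verify page colors and fonts", 5),
    ("login with valid credentials", 1),
    ("create new user (produces data for the modify-user tests)", 2),
    ("modify existing user", 3),
    ("send mail with several different attachments", 4),
]

# The execution plan simply orders the tests by priority rank.
execution_plan = sorted(tests, key=lambda t: t[1])
for name, priority in execution_plan:
    print(f"priority {priority}: {name}")
```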

 

 

3.7.        Defect Management

 

 

Defect management goes on simultaneously with test execution. As explained before, a defect is a difference between the expected state or behavior of the application (as stated in the functional and business requirements) and its actual behavior observed while executing tests. Defect management deals with the set of methods, policies, and procedures followed during the life cycle of a defect. This topic is elaborated further in the chapter dedicated to it.
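Purely as an illustration of the kind of record defect management tracks (the field names and lifecycle states here are hypothetical, not from the original text):

```python
# A minimal defect record; real defect-tracking tools carry many more fields.
defect = {
    "id": "DEF-0001",
    "summary": "Reset button missing from registration form",
    "expected": "Reset button clears all form values (per the requirements)",
    "actual": "no Reset button is rendered",
    "severity": "Medium",
    "status": "Open",  # e.g., Open -> Fixed -> Retested -> Closed
}
print(f"{defect['id']} [{defect['status']}] {defect['summary']}")
```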


 

 

4.    Categories of Testing

 

 

Functionality Testing

Tests focused on validating that the target-of-test functions as intended, providing the required services, methods, or use cases. These tests are implemented and executed against different targets-of-test, including units, integrated units, applications, and systems.

Usability

 

*       Usability test: Tests that focus on: 

*   human factors

*   esthetics

*   consistency in the user interface 

*   online and context-sensitive help

*   wizards and agents

*   user documentation 

*   training materials

Reliability

*       Integrity test: Tests that focus on assessing the target-of-test's robustness (resistance to failure), and technical compliance to language, syntax, and resource usage. This test is implemented and executed against different targets-of-test, including units and integrated units.

*       Structure test: Tests that focus on assessing the target-of-test's adherence to its design and formation. Typically, this test is done for Web-enabled applications ensuring that all links are connected, appropriate content is displayed, and no content is orphaned.

*       Stress test: A type of reliability test that focuses on evaluating how the system responds under abnormal conditions. Stresses on the system could include extreme workloads, insufficient memory, unavailable services and hardware, or limited shared resources. These tests are often performed to gain a better understanding of how and in what areas the system will break, so that contingency plans and upgrade maintenance can be planned and budgeted for well in advance.

Performance

 

*       Benchmark test: A type of performance test that compares the performance of a new or unknown target-of-test to a known reference-workload and system.

*       Contention test: Tests focused on validating the target-of-test's ability to acceptably handle multiple actor demands on the same resource (data records, memory, and so on).

*       Load test: A type of performance test used to validate and assess acceptability of the operational limits of a system under varying workloads while the system-under-test remains constant. In some variants, the workload remains constant and the configuration of the system-under-test is varied. Measurements are usually taken based on the workload throughput and in-line transaction response time. The variations in workload usually include emulation of average and peak workloads that occur within normal operational tolerances.

*       Performance profile: A test in which the target-of-test's timing profile is monitored, including execution flow, data access, function and system calls to identify and address both performance bottlenecks and inefficient processes.


*       Volume test: Testing focused on verifying the target-of-test's ability to handle large amounts of data, either as input and output or resident within the database. Volume testing includes test strategies such as creating queries that would return the entire contents of the database, or that would have so many restrictions that no data is returned, or where the data entry has the maximum amount of data for each field.

 

 

Security test:

*       Tests focused on ensuring the target-of-test data (or systems) are accessible only to those actors for which they are intended. This test is implemented and executed on various targets-of-test.

 

 

 

 

 

 

5.    Levels of Testing:

Testing is applied to different types of targets, in different stages or levels of work effort. These levels are typically distinguished by the roles that are best skilled to design and conduct the tests, and by where techniques are most appropriate for testing at each level. It's important to ensure that a balance of focus is retained across these different work efforts.

5.1.        Unit Testing

Unit testing focuses on verifying the smallest testable elements of the software. Typically, unit testing is applied to components represented in the implementation model to verify that control flows and data flows are covered and that they function as expected. The implementer performs unit testing as the unit is developed. The details of unit testing are described in the Implementation discipline.

Unit testing is also called module testing. It is applied at the level of each module of the system under test. Module testing is usually done by the developers themselves, before the module is handed over for integration with other modules. The testing at this level is usually structural (white-box) testing. Module testing is sometimes called unit testing because it targets the lowest-level component available for testing. Note that in some situations "unit testing" is also used to refer to the testing of logical or functional units that integrate a number of modules; we consider this to be integration testing.
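As a small illustration (not from the original text), here is a unit test for a hypothetical `parse_amount` function, using Python's standard unittest module:

```python
import unittest

def parse_amount(text: str) -> float:
    """Hypothetical unit under test: parse a currency string like '$1,234.50'."""
    return float(text.replace("$", "").replace(",", ""))

class ParseAmountTest(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_amount("12.50"), 12.50)

    def test_currency_symbol_and_commas(self):
        self.assertEqual(parse_amount("$1,234.50"), 1234.50)

    def test_invalid_input_raises(self):
        with self.assertRaises(ValueError):
            parse_amount("not a number")

if __name__ == "__main__":
    unittest.main()
```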

 

5.2.        Integration Testing

Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case. The target-of-test is a package or a set of packages in the implementation model. Often the packages being combined come from different development organizations. Integration testing exposes incompleteness or mistakes in the packages' interface specifications.

In some cases, developers assume that other groups, such as independent testers, will perform integration tests. This situation presents risks to the software project and ultimately to the software quality because:

·        Integration areas are a common point of software failure.

·        Integration tests performed by independent testers typically use black-box techniques and typically deal with larger software components.

A better approach is to consider integration testing the responsibility of both developers and independent testers, while making sure that the two teams' testing efforts do not overlap significantly. The exact nature of any overlap is based on the needs of the individual project. We recommend you foster an environment where developers and independent system testers share a single vision of quality.

Integration testing is an interim level of testing, applied between module testing and system testing, to test the interaction and consistency of an integrated subsystem. Integration testing is applied incrementally as modules are assembled into larger subsystems. Testing is applied to subsystems that are assembled either:

 

1. Bottom-up: assembles upward from the lowest-level modules, replacing higher-level modules with test harnesses (drivers) to drive the units under test.

2. Top-down: assembles downward from the highest-level modules, replacing lower-level modules with test stubs to simulate interfaces (a stub sketch follows this list).

3. A mixture of both the bottom-up and top-down approaches.
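Here is that sketch of the top-down idea, with hypothetical names: a high-level `checkout` function is tested while the not-yet-integrated payment module is replaced by a stub that simulates its interface:

```python
class PaymentGatewayStub:
    """Stub simulating the interface of the lower-level payment module."""
    def charge(self, amount: float) -> bool:
        return True  # always succeed, so the higher-level logic can be exercised

def checkout(cart_total: float, gateway) -> str:
    """Higher-level module under test."""
    if cart_total <= 0:
        return "nothing to charge"
    return "order placed" if gateway.charge(cart_total) else "payment failed"

# Top-down integration test: real checkout logic, stubbed lower-level module.
assert checkout(59.99, PaymentGatewayStub()) == "order placed"
assert checkout(0, PaymentGatewayStub()) == "nothing to charge"
```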

 

Integration testing is performed so that a partial system-level test can be conducted without having to wait for all the components to be available; problems can thus be fixed before they affect downstream development of other components. Integration testing is therefore usually done as soon as identifiable logical subsystems are complete. It is also performed because some tests cannot be carried out on a fully integrated system and require integration with special test harnesses and stubs to properly test a component. It also helps focus testing on particular components and isolate the problems that are discovered. The testing at this level is usually a mixture of functional (black-box) and structural (white-box) testing.

 

 

5.3.        Interface Testing

 

Interfaces are the source of many errors, as there is often a misunderstanding of the way components and subsystems should work together because they are developed by different people. Interface testing focuses specifically on the way components and subsystems are linked and work together. Interface testing can be applied to internal interfaces between subcomponents of a system (e.g., between separate modules), as well as to external interfaces to other systems (e.g., data interchanges with other applications).

 

5.4.        System Testing

 

Traditionally, system testing is done when the software is functioning as a whole. An iterative lifecycle allows system testing to occur much earlier, as soon as well-formed subsets of the use-case behavior are implemented. Usually the target is the system's end-to-end functioning elements.

 

5.5.        Regression Testing

 

Regression testing is applied after changes have been made to a system. The operation of the new version of the system is compared to the previous version to determine whether there are any unexpected differences.

 

To be precise: “Regression testing is a process of comparing two different versions of the same software entity to ensure that only intended changes have been made to the old version to produce the latest version.” Here, intended changes means changes documented in an approved functional/business requirements document.

 

Regression testing is applied because changing software, for example to fix known defects or to add new functions, has a very high likelihood of introducing new defects. Some studies have predicted that for every six lines of code modified, a new defect is added.

 

Regression testing has become a commonly applied form of testing due to the introduction of capture/playback test tools. Testers use these tools to capture the operation of a version of a system under test; the operation can then be played back automatically on later versions of the system, and any differences in behavior are reported.
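A toy sketch of the capture/playback comparison at the heart of regression testing; the two `tax` functions are hypothetical old and new versions of the same software entity:

```python
def tax_v1(amount):   # previous version
    return round(amount * 0.07, 2)

def tax_v2(amount):   # new version; only intended changes should alter results
    return round(amount * 0.07, 2)

# Inputs "captured" against the old version are "played back" on the new one,
# and any unexpected differences in behavior are reported.
captured_inputs = [0, 9.99, 100, 2500.50]
for value in captured_inputs:
    old, new = tax_v1(value), tax_v2(value)
    if old != new:
        print(f"Unexpected difference for input {value}: {old} vs {new}")
```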

 

5.6.        User Acceptance Testing

 

User acceptance testing is the final test action taken before deploying the software. The goal of acceptance testing is to verify that the software is ready, and that it can be used by end users to perform the functions and tasks for which the software was built.

There are other notions of acceptance testing, which are generally characterized by a hand-off from one group or team to another. For example, a build acceptance test is the testing done to accept the hand-over of a new software build from development into independent testing.

There are three common strategies for implementing an acceptance test. They are:

*       Formal acceptance

*       Informal acceptance or alpha test

*       Beta test

The strategy you select is often based on the contractual requirements, organizational and corporate standards, and the application domain.

Formal Acceptance Testing

Formal acceptance testing is a highly managed process and is often an extension of the system test. The tests are planned and designed as carefully as, and in the same detail as, system testing. The test cases chosen should be a subset of those performed in the system test. It's important not to deviate in any way from the chosen test cases. In many organizations, formal acceptance testing is fully automated.

The activities and artifacts are the same as for system testing. In some organizations, the development organization (or its independent test group), together with representatives of the end-user organization, performs the acceptance test. In other organizations, acceptance testing is performed entirely by the end-user organization, or by an objective group of people chosen by the end-user organization.

The benefits of this form of testing are:

·        The functions and features to be tested are known.

·        The details of the tests are known and can be measured.

·        The tests can be automated, which permits regression testing.

·        The progress of the tests can be measured and monitored.

·        The acceptability criteria are known.

The disadvantages include:

  • Requires significant resources and planning.
  • The tests may be a re-implementation of system tests.
  • The testing may not uncover subjective defects in the software, since you're only looking for defects you expect to find.

 

Informal Acceptance Testing

 

In informal acceptance testing, the test procedures for performing the test are not as rigorously defined as for formal acceptance testing. The functions and business tasks to be explored are identified and documented, but there are no particular test cases to follow. The individual tester determines what to do. This approach to acceptance testing is not as controlled as formal testing and is more subjective than the formal one.

Informal acceptance testing is most frequently performed by the end-user organization.

The benefits of this form of testing are:

·        The functions and features to be tested are known.

·        The progress of the tests can be measured and monitored.

·        The acceptability criteria are known.

·        You will uncover more subjective defects than with formal acceptance testing.

The disadvantages include:

·        Resources, planning, and management are required.

·        You have no control over which test cases are used.

·        End users may conform to the way the system works and not see the defects.

·        End users might focus on comparing the new system to a legacy system, rather than looking for defects.

·        Resources for acceptance testing are not under the control of the project and could be constricted.

 

5.7.        Beta Testing

 

Beta testing is the least controlled of the three acceptance test strategies. In beta testing, the amount of detail, the data, and the approach taken are entirely up to the individual tester. Each tester is responsible for creating his or her own environment, selecting his or her data, and determining what functions, features, or tasks to explore. Each tester is responsible for identifying his or her own criteria for whether to accept the system in its current state.

Beta testing is implemented by end users, often with little or no management from the development (or other non-end-user) organization. Beta testing is the most subjective of all the acceptance test strategies.

The benefits of this form of testing are:

·        Testing is implemented by end users.

·        There are large volumes of potential test resources.

·        There is increased customer satisfaction for those who participate.

·        You uncover more subjective defects than with formal or informal acceptance testing.

The disadvantages include:

·        You might not test all functions or features.

·        Test progress is difficult to measure.

·        End users might conform to the way the system works and not see or report the defects.

·        End users may focus on comparing the new system to a legacy system, rather than looking for defects.

·        Resources for acceptance testing are not under the control of the project and could be constricted.

·        Acceptability criteria are not known.

·        You need increased support resources to manage the beta testers.

A comment about the sequence and timing of test levels

Traditionally, unit testing is thought of as being implemented early in the iteration as the first stage of testing, with all units required to pass before subsequent stages are conducted. However, in an iterative development process this approach is, as a general rule, inappropriate. A better approach is to identify the unit, integration, and system tests that offer the most potential for finding errors, then implement and execute them based on a combination of greatest risk and supporting environment.


 

6.    Testing Approaches/Methodologies

 

6.1.        Black-Box Testing

 

Black-box testing is proposing tests without detailed knowledge of the internal structure of the system or component under test. It is based on the specification of the system's requirements. Many other terms are used for black-box testing, including:

  • specification-based testing
  • input/output testing
  • functional testing

Black-box testing is typically performed as the development lifecycle nears a completely integrated system, such as during integration testing, interface testing, system testing, and acceptance testing. At this level the components of the system are integrated sufficiently to demonstrate that the complete requirements are fulfilled. The types of errors most commonly found in black-box testing include:

 

  • Incorrect or missing functions
  • Interface errors, in the way different functions interface together, the way the system interfaces with data files and data structures, or the way the system interfaces with other systems, such as through a network
  • Load and performance errors
  • Initialization and termination errors

 

Systematically Decomposing Requirements into Tests

 

Most testers, at the start of a testing project, are confronted with the problem of deciding what test cases they will execute to thoroughly test their system. Initially the tester is overwhelmed: they are looking at an empty test set that must be populated and that, for the average system, will need to be filled with many thousands of test cases to test adequately. As with any huge task, the key is to break it down into smaller, manageable activities. This is where test design fits in: it decomposes the task of testing a system into smaller manageable activities, ultimately to a level that corresponds to the establishment of an individual test case. Of course, test design is also the mechanism for assuring yourself that you have sufficient test cases to cover all appropriate aspects of your system. Deciding what test cases are required is a labor-intensive activity. No tool can automatically determine what test cases are needed for your system, as each system is different and test tools do not know what constitutes correct (or incorrect) operation. Test design requires the tester's experience, reasoning, and intuition.

 

Specification to Guide Testing

 

The system requirements specification, or a model of your system, is the initial starting point for test design. The system specification or model may be a functional specification, a performance or security specification, a user scenario specification, or a specification of the hazards and risks of the system. Whatever the case may be, the specification describes the criteria against which you test, as it defines correct or acceptable operation. In many cases, particularly with legacy systems, there may be little or no documentation available to use as a system specification. Even where documentation does exist, there is a high risk that it has not been kept up to date after years of modification to the system. It is essential that knowledgeable end users of the system be included in test design, as a substitute for missing or out-of-date specifications. If the current documentation is inadequate for specification purposes, then at least a simple form of specification should be created, from which the top-level set of test objectives can be derived. In many cases, a significant part of the test design technique involves formulating a specification to test against.

 

Decomposing Test Objectives

 

Test design focuses on a set of specification components to test. High-level test objectives are proposed for these specification components. Each test objective is then systematically decomposed into either further test objectives or test cases, using test design techniques. Once you have decomposed your test objectives to single components for a given test criterion, there are many kinds of test design techniques that can be selected, depending on the type of testing you are applying, and many of these techniques are appearing in standards.

 

Techniques include

 

· Equivalence Class Partitioning

· Boundary Value Analysis

· State Transition Testing

· Cause-Effect Graphing
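As a hedged illustration of the first two techniques, reusing the hypothetical five-to-eight-character password rule from section 3.3: equivalence class partitioning picks one representative per class, and boundary value analysis adds the values at the edges of the valid range:

```python
def is_valid_password(password: str) -> bool:
    """Stand-in rule: five to eight alphanumeric characters only."""
    return 5 <= len(password) <= 8 and password.isalnum()

# Equivalence classes: one representative each is enough for a first pass.
equivalence_representatives = {
    "valid": "abc123",
    "too_short": "ab1",
    "too_long": "abcdefgh1",
    "non_alphanumeric": "abc 12",
}

# Boundary values sit just outside and just inside the valid length range.
boundary_values = ["a" * 4, "a" * 5, "a" * 8, "a" * 9]

for name, value in equivalence_representatives.items():
    print(f"{name}: {value!r} -> {is_valid_password(value)}")
for value in boundary_values:
    print(f"length {len(value)} -> {is_valid_password(value)}")
```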

 

Documenting your test designs is crucial. Test designs are normally represented in a document called a Test Design Specification, for which standard formats are available. The Test Design Specification provides an audit trail that traces the design from specification components through to the applicable test cases. The document records the transformation of test objectives and justifies that each test objective has been adequately decomposed at the subsequent level. In many cases, test design documents are required to justify to other parties that each requirement has been fulfilled. Systematic test design is the key to decomposing a huge, complex testing task. Once you have completed the difficult task of deciding what test cases you need, executing them is relatively straightforward.

 

6.2.        Black-Box Testing Techniques

 

Presented below are different techniques that can be employed to test a component in an attempt to discover defects. The component level considered is typically that of a single function.

 

Many of the techniques and examples are extracted from the British Computer Society's Standard for Component Testing. This standard provides excellent guidance for test design techniques, and is being proposed for international standard status.

 

6.2.1.  Functional Analysis

 

Functional analysis is a very general technique that is probably the most widely used approach in industry. The technique is based on the simple assumption that the system is expected to perform certain functions, and test cases are proposed to verify that those functions are performed as expected. The approach first needs a specification of what functions the system is expected to provide. This information is typically provided in the functional specifications of the system. However, in many cases, for example legacy systems, such specifications may not be available, or may be so far out of date that they no longer correctly reflect the system's functions.

In these situations, the user must build a functional specification. Quite often a good starting point for such a specification is the menu structure provided by the system, or the user documentation provided with the system.
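For instance, a first cut of top-level functional test objectives can be generated from a menu structure. The sketch below (in Python, with purely illustrative menu entries) shows the idea:

# Sketch: deriving top-level functional test objectives from a menu
# structure when no written specification exists. The menu entries
# here are illustrative, not from any real system.
menu = {
    "File": ["New", "Open", "Save", "Print"],
    "Edit": ["Cut", "Copy", "Paste"],
}

objectives = [
    f"Verify that {menu_name} > {item} behaves as expected"
    for menu_name, items in menu.items()
    for item in items
]

assert "Verify that File > Save behaves as expected" in objectives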

 

6.2.2.  Use Case Analysis

 

A use case is a sequence of actions performed by a system, which together produce results required by users of the system. It defines process flows through a system based on its likely use. In other words, any software system is generally meant to be used for many different purposes; in this context, a use case can be defined as using the system for one particular purpose.

 

Tests derived from use cases help uncover defects in process flows during actual use of the system. The tester may discover that it is not possible to transition to a different part of the system when using the system as defined by the use case. Use cases also involve the interaction of different features and functions of the system. For this reason, tests derived from use cases will help uncover integration errors.

 

Each use case has:

 

  • Preconditions, which need to be met for the use case to work successfully.
  • Post conditions, which define the conditions in which the use case terminates. The post conditions identify the observable results and final state of the system after the use case has completed.
  • Flow of events, which defines user actions and system responses to those actions. It comprises a normal scenario, which defines the mainstream, most likely behavior of the system for the use case, and alternative branches, which provide other paths that may be taken through the use case.

 

Use cases may also have shared or common pieces, such as threads of use cases reused across several use cases.

 

Deriving Test Cases

 

Deriving test cases from use cases is relatively straightforward, and consists of choosing paths that traverse the use case. Paths exist not only for the normal scenario but also for the alternative branches. For each test case the path exercised can be identified, and then inputs and expected results can be defined. A single path can give rise to many different test cases, and other black-box test design techniques, such as equivalence partitioning and boundary value analysis (described in later sections), should be used to derive further test conditions.

A large number of test cases can be derived using this approach, and it requires judgment to prevent an explosion of test cases. As usual, selecting candidate test cases should be based on the associated risk, which involves the impact of failure, the likelihood of finding errors, the frequency of use, and the complexity of the use case.
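As an illustration, test cases derived from two paths through a hypothetical "add item to purchase order" use case might be recorded as follows; the sketch is in Python and every name and value in it is illustrative:

# Sketch: test cases derived from paths through a hypothetical
# "add item to purchase order" use case. All names, numbers and
# expected results are illustrative.
use_case_tests = [
    {
        "path": "normal scenario",
        "precondition": "PO 1001 exists with status 'in progress'",
        "steps": ["open PO 1001", "add item 'cable', qty 2", "save"],
        "expected": "item listed on PO 1001 and PO total updated",
    },
    {
        "path": "alternative branch: user cancels before saving",
        "precondition": "PO 1001 exists with status 'in progress'",
        "steps": ["open PO 1001", "add item 'cable', qty 2", "cancel"],
        "expected": "PO 1001 unchanged",
    },
]

for case in use_case_tests:
    print(case["path"], "->", case["expected"])

Note that each record identifies the path exercised, then the inputs (steps) and expected results, exactly as described above.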

 

Negative Test Cases

 

Negative test cases consider the handling of invalid data, as well as scenarios in which the precondition has not been satisfied. There are two kinds of negative test cases that can be derived:

1. Alternative branches using invalid user actions. An example is provided below.

2. Attempting inputs not listed in the test case. This includes attempting to violate the preconditions, e.g. trying the use case on POs that do not have status "in progress", such as adding a new item to a "closed" PO.
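Continuing the illustrative purchase order sketch from above, the two kinds of negative test cases might be recorded like this:

# Sketch: the two kinds of negative test cases, continuing the
# hypothetical purchase order example (all values illustrative).
negative_tests = [
    {
        "kind": "alternative branch with an invalid user action",
        "steps": ["open PO 1001", "add item 'cable', qty -5", "save"],
        "expected": "error message shown; item not added",
    },
    {
        "kind": "violated precondition",
        "steps": ["open PO 2002 (status 'closed')", "add item 'cable'"],
        "expected": "action rejected; items cannot be added to a closed PO",
    },
]

for case in negative_tests:
    print(case["kind"], "->", case["expected"])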

 

Deriving test cases will inevitably discover errors in the use cases as well. We may discover, for example, that we can't abort without providing a correct purchase order number. Use cases and test cases work well together in two ways:

 

1. If the use cases for a system are complete, accurate and clear, the process of deriving test cases is straightforward.

2. If the use cases are not in good shape, deriving test cases will help to debug the use cases.

 

In other material, this approach might be referred to as transaction flow testing. A transaction flow is similar to a flow chart, but instead of using low-level internal details in the flow chart, it provides a high-level functional description from a user's perspective. In the same way as for use cases above, we derive test cases to exercise the different paths available through the transaction flow.

 

6.2.3.  Equivalence Partitioning

 

Equivalence partitioning is a standardized technique. It is based on the premise that the inputs and outputs of a component can be partitioned into classes which, according to the component's specification, will be treated similarly (equivalently) by the component. The assumption is that similar inputs will evoke similar responses. A single value in an equivalence partition is assumed to be representative of all other values in the partition. This is used to reduce the problem that it is not possible to test every input value: the aim is to select representative values with equivalent processing, so that if a test passes with the representative value, we can assume it would pass with all other values in the same partition. Some equivalence partitions may include combinations of the following:

 

  • Valid vs. invalid input and output values
  • Numeric values with negative, positive and 0 values
  • Strings that are empty or non-empty
  • Lists that are empty or non-empty
  • Data files that exist or not, are readable/writable or not
  • Date years that are pre-2000 or post-2000, leap years or non-leap years (a special case is 29 February 2000, which has special processing of its own)
  • Dates that are in 28, 29, 30 or 31 day months
  • Days on workdays or weekends
  • Times inside/outside office-hours
  • Type of data file, e.g. text, formatted data, graphics, video or sound
  • File source/destination, e.g. hard drive, floppy drive, CD-ROM, network

 

Example

 

Consider a function, generate_grading, with the following specification:

The function is passed an exam mark (out of 75) and a coursework mark (out of 25), from which it generates a grade for the course in the range 'A' to 'D'. The grade is calculated from the overall mark, which is the sum of the exam and coursework marks, as follows:

· greater than or equal to 70 – 'A'

· greater than or equal to 50, but less than 70 – 'B'

· greater than or equal to 30, but less than 50 – 'C'

· less than 30 – 'D'

Where a mark is outside its expected range, a fault message ('FM') is generated. All inputs are passed as integers.
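For reference in the analysis that follows, here is a minimal sketch of a component satisfying this specification; the specification comes from the text above, while the function body itself is an assumption:

# A sketch of a component satisfying the specification above;
# only the specification is given in the text, so the body is
# one plausible implementation.
def generate_grading(exam, coursework):
    """Grade a course from an exam mark (0-75) and coursework mark (0-25)."""
    # Non-integer inputs (real numbers, alphabetic strings) are faults.
    if not isinstance(exam, int) or not isinstance(coursework, int):
        return "FM"
    # Marks outside their expected ranges produce a fault message.
    if not (0 <= exam <= 75) or not (0 <= coursework <= 25):
        return "FM"
    total = exam + coursework  # overall mark out of 100
    if total >= 70:
        return "A"
    if total >= 50:
        return "B"
    if total >= 30:
        return "C"
    return "D"

For example, generate_grading(44, 15) returns 'B', since the overall mark is 59.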

 

Analysis of Partitions

 

The tester provides a model of the component under test that partitions the input and output values of the component. The inputs and outputs are derived from the specification of the component's behavior. A partition is a set of values chosen in such a way that all values in the partition are expected to be treated the same way by the component (i.e. they have equivalent processing). Partitions for both valid and invalid values should be chosen. For the generate_grading function, two inputs are identified:

 

· exam mark

· coursework mark

 

Less obvious equivalence partitions would include non-integer values for the inputs. For example:

 

· exam mark = real number

· exam mark = alphabetic

· coursework mark = real number

· coursework mark = alphabetic

Next, the outputs of the generate_grading function are considered:

· grade

Equivalence partitions may also be considered for invalid outputs. It is difficult to identify unspecified outputs, but they must be considered: if we can cause one to occur, then we have identified a defect in the component, its specification, or both. For example, for the grade output we may propose the following invalid outputs:

· grade = ‘E’

· grade = ‘A+’

· grade = ‘null ’

In this example, we have proposed 19 equivalence partitions. In developing equivalence partitions the tester must exercise subjective choice, for example in the choice of additional invalid inputs and invalid outputs. Because of this subjectivity, different testers will arrive at different equivalence partitions.

 

Design of Test Cases

 

Test cases are designed to exercise partitions. A test case comprises the following:

 

· The inputs to the component

· The partitions exercised

· The expected outcome of the test case

 

Two approaches to developing test cases to exercise partitions are available:

 

1. Separate test cases are generated for each partition on a one-to-one basis.

2. A minimal set of test cases is generated to cover all partitions; here a single test case may exercise several partitions at once.

 

When developing test cases, the corresponding input or output value is varied to exercise the partition. Other input and output values, not related to the partition being exercised, are set to an arbitrary value.

 

One-to-one Test Cases for Partitions

The test cases for the exam mark input partitions use an arbitrary value of 15 for the coursework mark input. Similarly, the test cases for the coursework mark input partitions use an arbitrary value of 40 for the exam mark input.

Test cases are likewise derived for the other invalid input partitions, and for the valid output partitions, where the input values of exam mark and coursework mark are derived from the required total mark. Finally, the invalid output partitions are considered.
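The following sketch illustrates such test cases, assuming the generate_grading implementation given earlier; the chosen values, and the grading module it is imported from, are hypothetical:

import pytest

from grading import generate_grading  # hypothetical module holding the sketch above

# One test case per input partition: the varied input exercises the
# partition; the other input is held at the arbitrary value (15 or 40).
@pytest.mark.parametrize("exam, coursework, expected", [
    (44, 15, "B"),    # exam mark within the valid partition 0..75
    (-10, 15, "FM"),  # exam mark below the valid range
    (93, 15, "FM"),   # exam mark above the valid range
    (40, 8, "C"),     # coursework mark within the valid partition 0..25
    (40, -15, "FM"),  # coursework mark below the valid range
    (40, 47, "FM"),   # coursework mark above the valid range
])
def test_input_partitions(exam, coursework, expected):
    assert generate_grading(exam, coursework) == expected

# Invalid output partitions: no valid input should ever yield an
# unspecified grade such as 'E', 'A+' or a null value.
def test_no_invalid_outputs():
    for exam in range(0, 76):
        for coursework in range(0, 26):
            assert generate_grading(exam, coursework) in {"A", "B", "C", "D"}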

 

Boundary Value Analysis

 

Boundary value analysis is a standardized technique. Boundary value analysis extends equivalence partitioning to include values around the edges of the partitions. As with equivalence partitioning, we assume that sets of values are treated similarly by components. However, developers are prone to making errors in the treatment of values on the boundaries of these partitions. For example, elements of a list may be processed similarly, and they may be grouped into a single equivalence partition; yet in processing the elements, the developer may not have correct processing for either the first or last element of the list. Boundary values are usually the limits of the equivalence classes. Examples include:

 

· Monday and Sunday for weekdays

· January and December for months

· 32767 and –32768 for 16-bit integers

· top-left and bottom-right cursor position on a screen

· first line and last line of a printed report

· 1 January 2000 for two-digit year processing

· strings of one character and maximum length strings

 

Test cases are selected to exercise values both on and either side of the boundary. Values either side of the boundary are selected at an incremental distance from the boundary, the increment being the smallest significant value for the data type under consideration (e.g. an increment of 1 for integers, $0.01 for dollars).
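Applied to the generate_grading example, boundary value tests might look like the following sketch (values illustrative, assuming the implementation given earlier):

import pytest

from grading import generate_grading  # hypothetical module from the earlier sketch

# Boundary tests around the B/C grade boundary at an overall mark of
# 50, holding coursework at 0 so the exam mark alone sets the total.
@pytest.mark.parametrize("exam, expected", [
    (49, "C"),  # just below the boundary
    (50, "B"),  # on the boundary
    (51, "B"),  # just above the boundary
])
def test_grade_boundary_at_50(exam, expected):
    assert generate_grading(exam, 0) == expected

# The input range limits are boundaries too: 75 is the last valid
# exam mark and 76 the first invalid one.
def test_exam_mark_range_boundary():
    assert generate_grading(75, 0) == "A"
    assert generate_grading(76, 0) == "FM"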

 

6.2.4.  Cause-Effect Graphing and Decision Tables

Cause-effect graphing attempts to provide a concise representation of logical combinations and their corresponding actions.

  1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
  2. A cause-effect graph is developed.
  3. The graph is converted to a decision table.
  4. Decision table rules are converted to test cases.

For example, a function has the following specification:

If there are sufficient funds available in the account, or the new balance would be within the authorised overdraft limit, then the debit is processed. If the new balance would exceed the authorised overdraft limit, then the debit is not processed and, if it is a postal account, the account is suspended. Letters are sent out for all transactions on postal accounts, and for non-postal accounts if there are insufficient funds available (i.e. the account would no longer be in credit).

 

The conditions are:

 

C1 – new balance in credit

C2 – new balance in overdraft, but within authorised limit

C3 – account is postal

 

The actions are:

 

A1 – process debit

A2 – suspend account

A3 – send out letter

The following cause-effect graph shows the relationships between conditions and actions:

The cause-effect graph is then reformulated as a decision table. All true and false combinations of the input conditions are proposed, and true and false values are allocated to the actions (* is used for combinations of input conditions that are infeasible, for which no actions are possible). The result is shown in the following decision table:
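The six feasible rules of that table can be reconstructed from the specification. The sketch below expresses them in code, with one test entry per feasible rule; the function and names are illustrative, and the two combinations where C1 and C2 are both true are infeasible (*) and therefore omitted:

from dataclasses import dataclass

@dataclass
class Outcome:
    process_debit: bool    # A1
    suspend_account: bool  # A2
    send_letter: bool      # A3

def handle_debit(in_credit, within_overdraft, postal):
    """Apply the specified debit rules (C1, C2, C3 -> A1, A2, A3)."""
    process = in_credit or within_overdraft  # A1
    suspend = (not process) and postal       # A2
    letter = postal or not in_credit         # A3
    return Outcome(process, suspend, letter)

# One entry per feasible decision-table rule: (C1, C2, C3) -> (A1, A2, A3).
rules = [
    ((True,  False, True),  Outcome(True,  False, True)),
    ((True,  False, False), Outcome(True,  False, False)),
    ((False, True,  True),  Outcome(True,  False, True)),
    ((False, True,  False), Outcome(True,  False, True)),
    ((False, False, True),  Outcome(False, True,  True)),
    ((False, False, False), Outcome(False, False, True)),
]

for inputs, expected in rules:
    assert handle_debit(*inputs) == expected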

 

 

6.2.5.  State Transition Testing

 

State transition testing is a standardized technique. It uses a model of the system comprising:

· the states the program may occupy

· the transitions between those states

· the events which cause those transitions

· the actions which may result

 

The model is typically represented as a state transition diagram. Test cases are designed to exercise the valid transitions between states. Additional test cases may also be designed to test that unspecified transitions cannot be induced.
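As a sketch, a small transition map for a hypothetical account shows both kinds of test cases; the states, events and transitions are illustrative:

# Sketch: a state transition model for a hypothetical account, with
# tests that exercise each valid transition and check that an
# unspecified transition cannot be induced.
VALID_TRANSITIONS = {
    ("open",      "suspend"): "suspended",
    ("suspended", "restore"): "open",
    ("open",      "close"):   "closed",
}

def next_state(state, event):
    if (state, event) not in VALID_TRANSITIONS:
        raise ValueError(f"invalid transition: '{event}' in state '{state}'")
    return VALID_TRANSITIONS[(state, event)]

# Exercise each valid transition...
assert next_state("open", "suspend") == "suspended"
assert next_state("suspended", "restore") == "open"
assert next_state("open", "close") == "closed"

# ...and verify an unspecified transition is rejected.
try:
    next_state("closed", "suspend")
except ValueError:
    pass
else:
    raise AssertionError("unspecified transition was allowed")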

 

 

7.    Defect Management

 

What is a defect?

 

A defect is a difference between the expected behavior and the actual behavior of the application under test. Expected behavior is specified by the functional requirements, whereas actual behavior is observed while executing the test scripts written to test the application.

 

What is defect management?

 

Defect management is the set of activities followed from the time a defect is detected until it is closed, together with the policies surrounding those activities.

 

 

Identify

While under test, or during general use, the system may exhibit unexpected behavior:

· the user (hereafter called the originator) creates a defect report

· the defect report formally initiates an assessment of the unexpected behavior

 

Assess

Each defect reported is assessed:

· to determine whether it is to be considered a defect that requires resolution

· to determine the impact of the unexpected behavior

 

Resolve

Resolution involves modifying the system to remove the defect. It may involve modification to:

· code

· data / configuration

· documentation

 

Apart from modifying the system, there are other ways in which a defect can be resolved. For example, the defect can be deferred to a future release, or closed on the grounds that it cannot be fixed. It can be cancelled if it is in fact not a defect, or if it is a duplicate (i.e. a defect that has already been reported).

 

Validate (Re-Test)

After modification, the repaired system undergoes a retest:

· to check that the defect was fixed

· to check that other problems related to the incident have been considered in the resolution

· to check that the resolution has not unexpectedly affected other parts of the system

 

Management

Overall generation and assessment of reports summarizing collective defect data.

The purposes of the reports include:

· Ensuring that all defects are addressed as soon as practicable

· Estimating the progress of defect resolution

· Estimating the effectiveness of the testing process

· Estimating the reliability of aspects of the system under test

 

Analysis

Each defect is analyzed to determine:

· how the defect may originally have been inserted into the system

· what could have been done to avoid its occurrence

· what might be its root cause

 

 

Managing defects efficiently is the heart and soul of testing, for it is the only way to make sure that the application looks and works according to the expectations set out in the form of requirements.

 

There is a much broader concept than defect management: Issue Management. An issue can be defined as any event or unexpected behavior of a system under test that requires further investigation.

 

All defects are issues to begin with. However, not all issues are defects.

Upon further investigation, an issue might turn into:

  • A defect
  • A non-Issue
  • An opportunity or suggestion for improvement

Also, an issue can be raised to seek clarification on ambiguous functionality or unexplained behavior of the application under test.

 

Characteristics of a defect:

 

  1. It should be reproducible. The finder of the defect, either alone or with the help of others, should be able to demonstrate the defect and provide steps to reproduce it. Anybody should be able to see the defect by following those steps.
  2. It should be supported. Since a defect is a difference between expected behavior and actual behavior, the expected behavior should be supported by agreed-upon standards (i.e. documented or undocumented, but approved, requirements).

 

 

Defect Report:

Reporting defects is essential to managing them effectively. A defect report template is given hereunder.

 


Defect ID:                                            Defect Title:

Application Name:                              Release:                                       Version:

Functional Area:

Date Found: .....                      Date Resolved: ......     Date Re-tested: .......

Found by: ......                         Resolved by: ......        Re-tested by: ......

Reported By: .....

 

Status: ________________               

(New, Open, Assigned, WIP, Fixed, Re-test, Re-open, Deferred, Closed, Cancelled, Rejected)

Resolution: ________________

(Fixed, Duplicate, Can't be Fixed, Working as Designed)

Priority: ________________                                     Severity: ______________

(1. High, 2. Medium, 3. Low)                                  (1. High, 2. Medium, 3. Low)

Report Type: _____________

(1. Defect, 2. Question, 3. Suggestion, 4. Other Issue)

Defect Source: ________________

(1. Code, 2. Requirements, 3. Design, 4. Hardware, 5. Intermittent, 6. Cosmetic/UI)

Functional Area: ________________

(This depends on the application under test)

Brief Description of the problem:

Reproduction Steps:

Attachments:

 

Provide the name and location of any associated attachments, like screenshots, test data files etc.

 

Expected Result

 

Write what you think is the expected behavior… Include references to requirements.

 

Actual Result

 

Write what you saw in the application.

 

Comments:

Use this section to explain how the expected and actual results differ.

Also, whenever the resolution code is set or changed, it is advisable to add a comment.

 

Suggested Fix (Optional)

 

Make your suggestions, if any, as to how the defect can be fixed.
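For teams that track defects programmatically, the template maps naturally onto a data structure. The sketch below mirrors the template's fields; the field names and defaults are assumptions:

from dataclasses import dataclass, field

# Sketch: the report template above expressed as a data structure.
@dataclass
class DefectReport:
    defect_id: str
    title: str
    application: str
    release: str
    version: str
    functional_area: str
    description: str
    reproduction_steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "New"        # New, Open, Assigned, WIP, Fixed, ...
    resolution: str = ""       # Fixed, Duplicate, Can't be Fixed, ...
    priority: int = 2          # 1 High, 2 Medium, 3 Low
    severity: int = 2          # 1 High, 2 Medium, 3 Low
    report_type: str = "Defect"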

 

 

Defect life cycle: The defect life cycle is depicted in the following diagram:

A defect reaches a light blue circle due to some action taken by the tester.

A defect reaches a light green circle due to some action taken by the developer.

A defect reaches a purple circle due to some action taken by either the developer or the tester.

Circles with a red outline represent the possible end states of a defect.
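The same life cycle can be expressed as a transition map. The sketch below is an assumption based on the statuses listed in the report template; the exact transitions in the diagram may differ:

# Sketch: the defect life cycle as a transition map over the statuses
# from the report template. These transitions are assumed, not taken
# from the original diagram.
DEFECT_TRANSITIONS = {
    "New":      {"Open", "Rejected", "Cancelled"},
    "Open":     {"Assigned", "Deferred", "Cancelled"},
    "Assigned": {"WIP"},
    "WIP":      {"Fixed"},
    "Fixed":    {"Re-test"},
    "Re-test":  {"Closed", "Re-open"},
    "Re-open":  {"Assigned"},
    "Deferred": {"Open"},
    # Closed, Cancelled and Rejected are end states (no outgoing moves).
}

def can_move(current, target):
    """Return True if a defect may move from current to target status."""
    return target in DEFECT_TRANSITIONS.get(current, set())

assert can_move("New", "Open")
assert not can_move("Closed", "Open")  # Closed is an end state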

 

Defect Management Policies

 

Defect management policies are standards, agreements and common understandings between developers, testers, business analysts and other involved parties on the following aspects of defect management:

 

1.   Who has the final say on closing a defect?

2.   Who should be informed of defects?

3.   Who should be invited to defect review meetings?

4.   How soon and how frequently should defects be fixed?

5.   Who can change the status and resolution of a defect? What are the privileges of developers and testers in this regard?

6.   How should developers, testers and business analysts collaborate to resolve defects, and how should project management facilitate such collaboration?

7.   What constitutes a Yellow or Red jeopardy?

8.   How are deferred defects, and defects that can't be fixed, tracked?

 



