
Difference between Sanity and Smoke Testing:

Smoke Testing:

  • When a new build is received, smoke testing is done to determine whether the build is stable enough for further testing.
  • Smoke testing is a wide approach in which all areas of the software application are tested without going into depth.
  • Test cases for smoke testing can be manual or automated.
  • A smoke test is basically designed to touch each and every part of an application in a cursory way.
  • Smoke testing is shallow and wide.
  • Smoke testing is conducted to verify that the most crucial functions of a program work, without bothering with finer details.
  • Smoke testing is like a general health check-up.

Sanity Testing:

  • After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected.
  • Sanity testing exercises only a particular component of the entire system.
  • A sanity test is usually unscripted, performed without formal test scripts or test cases.
  • Sanity testing is narrow and deep.
  • Sanity testing verifies whether the requirements are met for the particular functionality under test.
  • Sanity testing is like a specialized health check-up.

Test Plan:

A test plan is a high-level document that describes how testing will be performed. It is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test, and who will do which test.

The plan typically contains a detailed understanding of what the eventual workflow will be.

Master test plan: A test plan that typically addresses multiple test levels.

Phase test plan: A test plan that typically addresses one test phase.

A test plan template contains the following components:

1. Introduction

A brief summary of the product being tested. Outline all the functions at a high level.

  • Overview of This New System
  • Purpose of this Document
  • Objectives of System Test

2. Resource Requirements

  • Hardware: list of hardware requirements
  • Software: list of software requirements (primary and secondary OS)
  • Test Tools: list of tools that will be used for testing
  • Staffing

3. Responsibilities

List of QA team members and their responsibilities

4. Scope

  • In Scope
  • Out of Scope

5. Training

List of training required

6. References

List the related documents, with links to them if available, including the following:

  • Project Plan
  • Configuration Management Plan

7. Features to Be Tested / Test Approach

  • List the features of the software/product to be tested
  • Provide references to the Requirements and/or Design specifications of the features to be tested

8. Features Not to Be Tested

  • List the features of the software/product which will not be tested.
  • Specify the reasons these features won’t be tested.

9. Test Deliverables

  • List of the test cases/matrices or their location
  • List of the features to be automated

10. Approach

  • Mention the overall approach to testing.
  • Specify the testing levels [if it is a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].

11. Dependencies

  • Personnel Dependencies
  • Software Dependencies
  • Hardware Dependencies
  • Test Data & Database

12. Test Environment

  • Specify the properties of the test environment: hardware, software, network, etc.
  • List any testing or related tools.

13. Approvals

  • Specify the names and titles of all persons who must approve this plan.
  • Provide space for signatures and dates.

14. Risks and Risk Management Plans

  • List the risks that have been identified.
  • Specify the mitigation plan and the contingency plan for each risk.

15. Test Criteria

  • Entry Criteria
  • Exit Criteria
  • Suspension Criteria

16. Estimate

  • Size
  • Effort
  • Schedule

MonkeyTalk Test Suite:

A MonkeyTalk test suite is a file with the extension .mts in which you can manage any number of scripts. A test suite can contain only three types of commands: Test, Setup, and Teardown.

Test: a script that runs as part of the test suite.

Setup: a script to run before each test in the suite.

Teardown: a script to run after each test in the suite.

Steps to create a Test Suite:

  • Right-click on the project.
  • Select New.
  • Then select Test Suite.

Scripts that you want to run through a test suite should have the extension .mt.

A test suite must contain at least one script.
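
For illustration, a small .mts file might contain commands along these lines (the script names are invented here, and the exact suite syntax should be checked against the MonkeyTalk documentation):

    Setup SetUp.mt Run
    Teardown TearDown.mt Run
    Test Login.mt Run
    Test Logout.mt Run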

Advantage:

We can run any number of scripts in one go, and for failures, screenshots are automatically captured and stored in the screenshot folder under the project's report folder.

Test suites output the standard XML report, making MonkeyTalk easy to integrate into existing systems.

Results:

The results are displayed in the JUnit panel at the bottom of the screen.

You can also view the results in the test XML file under the project's report folder.

Difference between Test Case and Test Scenario:

  • A test case consists of a set of input values, execution preconditions, expected results, and execution postconditions, developed to cover a certain test condition, while a test scenario is essentially a test procedure.
  • A test scenario has a one-to-many relationship with test cases, meaning one scenario can have multiple test cases. So when starting testing, first prepare the test scenarios and then create the different test cases for each scenario.
  • Test cases are derived (or written) from test scenarios; the scenarios themselves are derived from use cases.
  • A test scenario represents a series of actions that are associated together, while a test case represents a single (low-level) action by the user.
  • A scenario is a thread of operations, whereas test cases are sets of inputs and outputs given to the system.

For example:

Checking the functionality of the Login button is a test scenario, and
the test cases for this test scenario are:
1. Click the button without entering a user name or password.
2. Click the button after entering only a user name.
3. Click the button after entering a wrong user name and wrong password.
Etc…
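
As a sketch, these test cases could be automated along the following lines, assuming a hypothetical login(username, password) function that returns True on success (the module and function names are illustrative, not part of the original example):

    # pytest-style sketch; myapp.auth.login is a hypothetical function
    # assumed to return True for valid credentials and False otherwise.
    from myapp.auth import login

    def test_login_without_username_and_password():
        assert login("", "") is False

    def test_login_with_username_only():
        assert login("john", "") is False

    def test_login_with_wrong_username_and_password():
        assert login("wrong_user", "wrong_pass") is False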

In short,

A test scenario is ‘what is to be tested’ and a test case is ‘how it is to be tested’.


Test Metrics:

The objective of test metrics is to capture the planned and actual quantities of effort, time, and resources required to complete all the phases of testing of a software project. Test metrics provide a measure of the percentage of the software tested at any point during testing.
Test metrics should basically cover three things:
1. Test coverage
2. Time for one test cycle
3. Convergence of testing

There are various types of test metrics, and different organizations use different ones.

Functional Test Coverage: It can be calculated as:
FC = (Number of test requirements covered by test cases) / (Total number of test requirements)
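
As a quick numeric sketch (the counts are hypothetical):

    # Hypothetical counts: 45 of 50 test requirements are covered by test cases.
    covered_requirements = 45
    total_requirements = 50
    fc = covered_requirements / total_requirements
    print(f"Functional test coverage: {fc:.0%}")  # prints 90%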

Schedule Variance: Schedule variance indicates how far ahead of or behind schedule the testing is. It can be calculated as:
SV = (Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date + 1) * 100

A high value of schedule variance may signify poor estimation. A low value may signify correct estimation and clear, well-understood requirements.
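
A small sketch of the computation with hypothetical dates:

    from datetime import date

    # Testing was planned for 1-20 August but actually ended on 24 August (hypothetical dates).
    planned_start = date(2012, 8, 1)
    planned_end = date(2012, 8, 20)
    actual_end = date(2012, 8, 24)

    planned_duration_days = (planned_end - planned_start).days + 1  # 20 days
    sv = (actual_end - planned_end).days / planned_duration_days * 100
    print(f"Schedule variance: {sv:+.1f}%")  # prints +20.0%: four days late on a 20-day plan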

Effort Variance: Effort may be measured in person-hours, person-days, or person-months. Effort variance is computed for all tasks completed during a period. It can be calculated as:
EV = (Actual Effort - Estimated Effort) / (Estimated Effort) * 100

A high positive value of effort variance may signify optimistic estimation, changing business processes, a high learning curve, or a new technology and/or functional area.
A high negative value of effort variance may signify pessimistic estimation, excessive buffering, an efficient and skilful project team, a high level of componentization and reusability, or clear plans and schedules.
A low value of effort variance may signify accuracy in estimation, timely availability of resources, and no creeping requirements.
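
A minimal sketch with hypothetical effort figures:

    # Effort measured in person-hours (hypothetical figures).
    estimated_effort = 120.0
    actual_effort = 138.0
    ev = (actual_effort - estimated_effort) / estimated_effort * 100
    print(f"Effort variance: {ev:+.1f}%")  # prints +15.0%: actual effort exceeded the estimate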

Defect Age (in Time): Defect age is used to calculate the time from a defect's introduction to its detection, measured in phases.
Average Defect Age = Sum over all defects of (Phase Detected - Phase Introduced) / Number of Defects
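
A sketch of the calculation, assuming a simple phase numbering (the numbering is an assumption; use whatever phases your life cycle defines):

    # Assumed phase numbering for illustration.
    PHASES = {"Requirements": 1, "Design": 2, "Coding": 3, "Testing": 4}

    # Hypothetical defects as (phase introduced, phase detected) pairs.
    defects = [
        ("Requirements", "Testing"),
        ("Design", "Testing"),
        ("Coding", "Testing"),
    ]
    total_age = sum(PHASES[detected] - PHASES[introduced] for introduced, detected in defects)
    average_age = total_age / len(defects)
    print(f"Average defect age: {average_age:.2f} phases")  # prints 2.00 phases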

On-Time Delivery: This metric sheds light on the ability to meet customer commitments. On-time delivery may be tracked during the course of the project, based on the actual delivery dates against the planned commitments for the deliveries due during a period.
OTD = (Number of deliveries on time / Total number of due deliveries) * 100

A low value of % on-time delivery may signify poor planning and tracking, delays on account of the customer, or incorrect estimation, or it may point to a project risk having occurred.
A high value of % on-time delivery may signify good planning, tracking, and foresight with high responsiveness for immediate corrective action, a receptive customer, high commitment of the team, and good estimation.
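
A quick sketch with hypothetical counts:

    # 9 of 12 due deliveries were made on time (hypothetical counts).
    deliveries_on_time = 9
    deliveries_due = 12
    otd = deliveries_on_time / deliveries_due * 100
    print(f"On-time delivery: {otd:.1f}%")  # prints 75.0%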

Test Cost: It is used to find the resources consumed in testing. It can be calculated as:
TC = Test Cost / Total System Cost
This metric identifies the share of resources used in the testing process.
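
A minimal sketch with hypothetical cost figures:

    # Costs in the same currency unit (hypothetical figures).
    test_cost = 40_000
    total_system_cost = 250_000
    tc = test_cost / total_system_cost
    print(f"Testing consumed {tc:.0%} of the total system cost")  # prints 16%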