Software Engineering

What is Bottom-Up Design

Bottom-Up Design: Any design method in which the most primitive operations are specified first and then combined into progressively larger units until the whole problem can be solved; the converse of TOP-DOWN DESIGN. For example, a communications program might be built by first writing a routine to fetch a single byte from the communications port and working up from that.

While top-down design is almost mandatory for large collaborative projects, bottom-up design can be highly effective for producing ‘quick-and-dirty’ solutions and rapid prototypes, most often by a single programmer using an interactive, interpreted language such as VISUAL BASIC, LISP, or FORTH.

Project Planning in Software Engineering

Before starting a software project, it is essential to determine the tasks to be performed and to properly allocate them among the individuals involved in software development. Planning is therefore important, as it leads to effective software development.

Project planning is an organized and integrated management process, which focuses on the activities required for successful completion of the project. It helps prevent obstacles that arise in the project, such as changes in the project’s or organization’s objectives, non-availability of resources, and so on. Project planning also helps in better utilization of resources and optimal usage of the allotted time for a project. The other objectives of project planning are listed below.

  • It defines the roles and responsibilities of the project management team members.
  • It ensures that the project management team works according to the business objectives.
  • It checks feasibility of the schedule and user requirements.
  • It determines project constraints.

Several individuals help in planning the project. These include senior management and the project management team. Senior management is responsible for employing team members and providing the resources required for the project. The project management team, which generally includes project managers and developers, is responsible for planning, determining, and tracking the activities of the project. The table below lists the tasks performed by the individuals involved in a software project.

                            Tasks of Individuals involved in Software Project

Senior Management

  • Approves the project, employs personnel, and provides resources required for the project.
  • Reviews the project plan to ensure that it accomplishes the business objectives.
  • Resolves conflicts among the team members.
  • Considers risks that may affect the project so that appropriate measures can be taken to avoid them.

Project Management Team

  • Reviews the project plan and implements procedures for completing the project.
  • Manages all project activities.
  • Prepares budget and resource allocation plans.
  • Helps in resource distribution, project management, issue resolution, and so on.
  • Understands project objectives and finds ways to accomplish them.
  • Devotes appropriate time and effort to achieve the expected results.
  • Selects methods and tools for the project.

Project planning should be effective so that the project begins with well-defined tasks. Effective project planning helps to minimize the additional costs incurred on the project while it is in progress. For effective project planning, some principles are followed. These principles are listed below.

  • Planning is necessary: Planning should be done before a project begins. For effective planning, objectives and schedules should be clear and understandable.
  • Risk analysis: Before starting the project, senior management and the project management team should consider the risks that may affect the project. For example, the user may desire changes in requirements while the project is in progress. In such a case, the estimation of time and cost should be done according to those requirements (new requirements).
  • Tracking of project plan: Once the project plan is prepared, it should be tracked and modified accordingly.
  • Meet quality standards and produce quality deliverables: The project plan should identify processes by which the project management team can ensure quality in software. Based on the process selected for ensuring quality, the time and cost for the project is estimated.
  • Description of flexibility to accommodate changes: The result of project planning is recorded in the form of a project plan, which should allow new changes to be accommodated when the project is in progress.

Project planning comprises the project purpose, project scope, project planning process, and project plan. This information is essential for effective project planning and assists the project management team in accomplishing the user requirements.

Project Purpose

A software project is carried out to accomplish a specific purpose, which is classified into two categories, namely, project objectives and business objectives. The commonly followed project objectives are listed below.

  • Meet user requirements: Develop the project according to the user requirements after understanding them.
  • Meet schedule deadlines: Complete the project milestones as described in the project plan on time in order to complete the project according to the schedule.
  • Be within budget: Manage the overall project cost so that the project is within the allocated budget.
  • Produce quality deliverables: Ensure that quality is considered for accuracy and overall performance of the project.

Business Objectives

Business objectives ensure that the organizational objectives and requirements are accomplished in the project. Generally, these objectives are related to business process improvements, customer satisfaction, and quality improvements. The commonly followed business objectives are listed below.

  • Evaluate processes: Evaluate the business processes and make changes when and where required as the project progresses.
  • Renew policies and processes: Provide flexibility to renew the policies and processes of the organization in order to perform the tasks effectively.
  • Keep the project on schedule: Reduce the downtime (period when no work is done) factors such as unavailability of resources during software development.
  • Improve software: Use suitable processes in order to develop software that meets organizational requirements and provides competitive advantage to the organization.

Project Scope

With the help of user requirements, the project management team determines the scope of the project before the project begins. This scope provides a detailed description of functions, features, constraints, and interfaces of the software that are to be considered. Functions describe the tasks that the software is expected to perform. Features describe the attributes required in the software as per the user requirements. Constraints describe the limitations imposed on software by hardware, memory, and so on. Interfaces describe the interaction of software components (like modules and functions) with each other. Project scope also considers software performance, which in turn depends on its processing capability and response time required to produce the output.

Once the project scope is determined, it is important to properly understand it in order to develop software according to the user requirements. After this, the project cost and duration are estimated. If the project scope is not determined on time, the project may not be completed within the specified schedule. Project scope describes the following information.

  • The elements included and excluded in the project
  • The processes and entities
  • The functions and features required in software according to the user requirements.

Note that the project management and senior management team should communicate with the users to understand their requirements and develop software according to those requirements and expected functionalities.

Project Planning Process

The project planning process involves a set of interrelated activities followed in an orderly manner to implement user requirements in software. It includes a description of the series of project planning activities and the individual(s) responsible for performing them. In addition, the project planning process comprises the following.

  1. Objectives and scope of the project
  2. Techniques used to perform project planning
  3. Effort (in time) of individuals involved in project
  4. Project schedule and milestones
  5. Resources required for the project
  6. Risks associated with the project.

Project planning process comprises several activities, which are essential for carrying out a project systematically. These activities refer to the series of tasks performed over a period of time for developing the software. These activities include estimation of time, effort, and resources required and risks associated with the project.

                   Project Planning Activities

Project planning process consists of the following activities.

  • Identification of project requirements: Before starting a project, it is essential to identify the project requirements as identification of project requirements helps in performing the activities in a systematic manner. These requirements comprise information such as project scope, data and functionality required in the software, and roles of the project management team members.
  • Identification of cost estimates: Along with the estimation of effort and time, it is necessary to estimate the cost that is to be incurred on a project. The cost estimation includes the cost of hardware, network connections, and the cost required for the maintenance of hardware components. In addition, cost is estimated for the individuals involved in the project.
  • Identification of risks: Risks are unexpected events that have an adverse effect on the project. Software project involves several risks (like technical risks and business risks) that affect the project schedule and increase the cost of the project. Identifying risks before a project begins helps in understanding their probable extent of impact on the project.
  • Identification of critical success factors: For making a project successful, critical success factors are followed. These factors refer to the conditions that ensure greater chances of success of a project. Generally, these factors include support from management, appropriate budget, appropriate schedule, and skilled software engineers.
  • Preparation of project charter: A project charter provides a brief description of the project scope, quality, time, cost, and resource constraints as described during project planning. It is prepared by the management for approval from the sponsor of the project.
  • Preparation of project plan: A project plan provides information about the resources that are available for the project, individuals involved in the project, and the schedule according to which the project is to be carried out.
  • Commencement of the project: Once the project planning is complete and resources are assigned to team members, the software project commences.
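
As a rough sketch of how the cost estimates mentioned above might be rolled up, consider the following; all category names and figures here are hypothetical, not drawn from any real project.

```python
# Rough project cost roll-up: one-off hardware cost, recurring
# maintenance, and personnel cost. All figures are illustrative.

def estimate_cost(hardware, annual_maintenance_rate, years,
                  staff, monthly_rate, months):
    """Sum hardware, maintenance over `years`, and staff cost over `months`."""
    maintenance = hardware * annual_maintenance_rate * years
    personnel = staff * monthly_rate * months
    return hardware + maintenance + personnel

# 20,000 for hardware, 10% yearly maintenance for 2 years,
# 5 people at 6,000/month for 12 months:
total = estimate_cost(20_000, 0.10, 2, 5, 6_000, 12)
print(total)  # 20000 + 4000 + 360000 = 384000
```

In practice each category would be broken down much further, but the shape of the calculation (one-off costs plus recurring costs plus effort-based costs) stays the same.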

Once the project objectives and business objectives are determined, the project end date is fixed. The project management team prepares the project plan and schedule according to the end date of the project. After analyzing the project plan, the project manager communicates the project plan and end date to the senior management. The progress of the project is reported to the management from time to time. Similarly, when the project is complete, senior management is informed about it. In case of delay in completing the project, the project plan is re-analyzed and corrective actions are taken to complete the project. The project is tracked regularly and when the project plan is modified, the senior management is informed.

Project Plan

As stated earlier, a project plan stores the outcome of project planning. It provides information about the end date, milestones, activities, and deliverables of the project. In addition, it describes the responsibilities of the project management team and the resources required for the project. It also includes the description of hardware and software (such as compilers and interfaces) and lists the methods and standards to be used. These methods and standards include algorithms, tools, review techniques, design language, programming language, and testing techniques.

A project plan helps a project manager to understand, monitor, and control the development of software project. This plan is used as a means of communication between the users and project management team. There are various advantages associated with a project plan, some of which are listed below.

  • It ensures that software is developed according to the user requirements, objectives, and scope of the project.
  • It identifies the role of each project management team member involved in the project.
  • It monitors the progress of the project according to the project plan.
  • It determines the available resources and the activities to be performed during software development.
  • It provides an overview to management about the costs of the software project, which are estimated during project planning.

Note that the contents of a project plan differ depending on the kind of project and the user requirements. A typical project plan is divided into the following sections.

  1. Introduction: Describes the objectives of the project and provides information about the constraints that affect the software project.
  2. Project organization: Describes the responsibilities assigned to the project management team members for completing the project.
  3. Risk analysis: Describes the risks that can possibly arise during software development as well as explains how to assess and reduce the effect of risks.
  4. Resource requirements: Specifies the hardware and software required to carry out the software project. Cost estimation is done according to these resource requirements.
  5. Work breakdown: Describes the activities into which the project is divided. It also describes the milestones and deliverables of the project activities.
  6. Project schedule: Specifies the dependencies of activities on each other. Based on this, the time required by the project management team members to complete the project activities is estimated.
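
The project schedule section above rests on the dependencies between activities; here is a minimal sketch of how earliest finish times could be derived from such a dependency graph (the activity names and durations are hypothetical):

```python
# Forward pass over a tiny activity graph: each activity starts when
# all of its predecessors have finished. Names and durations are made up.

from graphlib import TopologicalSorter

durations = {"design": 10, "code": 15, "test": 7, "docs": 5}   # in days
depends_on = {"code": {"design"}, "test": {"code"}, "docs": {"design"}}

finish = {}
for task in TopologicalSorter(depends_on).static_order():
    # An activity starts only after its latest-finishing predecessor.
    start = max((finish[d] for d in depends_on.get(task, ())), default=0)
    finish[task] = start + durations[task]

print(finish["test"])  # design(10) + code(15) + test(7) = 32
```

This is the forward pass of critical-path scheduling; real planning tools add resource constraints, slack, and calendars on top of it.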

In addition to these sections, there are several plans that may be a part of, or linked to, a project plan. These plans include the quality assurance plan, verification and validation plan, configuration management plan, maintenance plan, and staffing plan.


Quality Assurance Plan


The quality assurance plan describes the strategies and methods that are to be followed to accomplish the following objectives.

  1. Ensure that the project is managed, developed, and implemented in an organized way.
  2. Ensure that project deliverables are of acceptable quality before they are delivered to the user.

Verification and Validation Plan


The verification and validation plan describes the approach, resources, and schedule used for system validation. It comprises the following sections.

  1. General information: Provides a description of the purpose, scope, system overview, project references, acronyms and abbreviations, and points of contact. Purpose describes the procedure to verify and validate the components of the system. Scope provides information about the verification and validation procedures as they relate to the project. System overview provides information about the organization responsible for the project and other information such as system name, system category, operational status of the system, and system environment. Project references provide the list of references used for the preparation of the verification and validation plan. Acronyms and abbreviations provide a list of terms used in the document. Points of contact provide information to users when they require assistance from the organization for problems such as troubleshooting.

  2. Reviews and walkthroughs: Provides information about the schedule and procedures. Schedule describes the end dates of the project milestones. Procedures describe the tasks associated with reviews and walkthroughs. Each team member reviews the document for errors and consistency with the project requirements. For walkthroughs, the project management team checks the project for correctness according to the software requirements specification (SRS).

  3. System test plan and procedures: Provides information about the system test strategy, database integration, and platform system integration. System test strategy provides an overview of the components required for integration of the database and ensures that the application runs on at least two specific platforms. The database integration procedure describes how the database is connected to the Graphical User Interface (GUI). The platform system integration procedure is performed on different operating systems to test each platform.

  4. Acceptance test and preparation for delivery: Provides information about the procedure, acceptance criteria, and installation procedure. The procedure describes how acceptance testing is to be performed on the software to verify its usability as required. The acceptance criteria state that the software will be accepted only if all the components, features, and functions have been tested, including system integration testing. In addition, the acceptance criteria check whether the software accomplishes user expectations, such as its ability to operate on several platforms. The installation procedure describes how to install the software according to the operating system being used.


Configuration Management Plan

The configuration management plan defines the process, which is used for making changes to the project scope. Generally, the configuration management plan is concerned with redefining the existing objectives of the project and deliverables (software products that are delivered to the user after completion of software development).

Maintenance Plan

The maintenance plan specifies the resources and processes required for making the software operational after its installation. Sometimes, the project management team (or software development team) does not carry out the task of maintenance. In such a case, a separate team known as software maintenance team performs the task of software maintenance.

The maintenance plan comprises the sections listed below.

  1. Introduction and background: Provides a description of the software to be maintained and the services required for it. It also specifies the scope of maintenance activities that are to be performed.

  2. Budget: Specifies the budget required for carrying out software maintenance and operational activities.

  3. Roles and responsibilities: Specifies the roles and responsibilities of the team members associated with software maintenance and operation. It also describes the skills required to perform maintenance and operational activities. In addition to the software maintenance team, maintenance involves user support, user training, and support staff.

  4. Performance measures and reporting: Identifies the performance measures required for carrying out software maintenance. It also describes how measures required for enhancing the performance of services (for the software) are recorded and reported.

  5. Management approach: Identifies the methodologies required for establishing the maintenance priorities of projects. For this purpose, the management either refers to existing methodologies or identifies new ones. The management approach also describes how users are involved in software maintenance and operations activities, as well as how users and the project management team communicate with each other.

  6. Documentation strategies: Provides a description of the documentation that is prepared for user reference. Generally, documentation includes reports, information about problems occurring in the software, error messages, and the system documentation.

  7. Training: Provides information about the training activities.

  8. Acceptance: Defines a point of agreement between the project management team and the software maintenance team after the completion of implementation and transition activities. Once the agreement has been made, software maintenance begins.

Staffing Plan

The staffing plan describes the number of individuals required for a project. It includes selecting and assigning tasks to the project management team members. It provides information about appropriate skills required to perform the tasks to produce the project deliverables and manage the project. In addition, it provides information of resources such as tools, equipment, and processes used by the project management team.

Staff planning is performed by a staff planner, who is responsible for determining the individuals available for the project. Other responsibilities of a staff planner are listed below.

  1. The staff planner determines the individuals, who can be from the existing staff, staff on contract, or newly employed staff. It is important for the staff planner to know the structure of the organization to determine the availability of staff.

  2. The staff planner determines the skills required to execute the tasks mentioned in the project schedule and task plan. In case staff with the required skills is not available, the staff planner informs the project manager about the requirements.

  3. The staff planner ensures that staff with the required skills is available at the right time. For this purpose, the staff planner plans the availability of staff after the project schedule is fixed. For example, at the initial stage of a project, the staff may consist of a project manager and a few software engineers, whereas during software development, the staff consists of software designers as well as software developers.

  4. The staff planner defines the roles and responsibilities of the project management team members so that they can communicate and coordinate with each other according to the tasks assigned to them. Note that the project management team can be further broken down into sub-teams depending on the size and complexity of the project.

The staffing plan comprises the following sections.

  1. General information: Provides information such as name of the project and the project manager who is responsible for the project. In addition, it specifies the start and end dates of the project.

  2. Skills assessment: Provides information, which is required for assessment of skills. This information includes the knowledge, skill, and ability of team members who are required to achieve the objectives of the project. In addition, it specifies the number of team members required for the project.

  3. Staffing profile: Describes the profile of the staff required for the project. The profile includes calendar time, individuals involved, and level of commitment. Calendar time specifies the period of time such as month or quarter for which individuals are required to complete the project. Individuals who are involved in the project have specific designations such as project manager and developer. Level of commitment is the utilization rate of individuals, such as work performed on a full-time or part-time basis.

  4. Organization chart: Describes the organization of project management team members. In addition, it includes information such as name, designation, and role of each team member.

Responsibilities of Software Project Manager

Proper project management is essential for the successful completion of a software project, and the person responsible for it is called the project manager. To do the job effectively, the project manager must have a certain set of skills. This section discusses both the job responsibilities of the project manager and the skills required. The project manager:

  1. Works with the senior managers in the process of appointing team members
  2. Builds the project team and assigns tasks to various team members
  3. Responsible for effective project planning and scheduling, project monitoring and control activities in order to achieve the project objectives
  4. Acts as a communicator between the senior management and the other persons involved in the project like the development team and internal and external stakeholders
  5. Effectively resolves issues (if any) that arise between the team members by changing their roles and responsibilities
  6. Modifies the project plan (if required) to deal with the situation.

Although the actual skills for effective project management develop with experience, every project manager must exhibit some basic skills that are listed below.

  1. Must have the knowledge of different project management techniques like risk management, configuration management, cost estimation techniques, etc.
  2. Must have the ability to make judgements, since project management frequently requires making decisions.
  3. Must have good grasping power to learn the latest technologies to adapt to project requirements.
  4. Should be open-minded enough to accept new ideas from the project members. In addition, he should be creative enough to come up with new ideas.
  5. Should have good interpersonal, communication, and leadership qualities in order to get work done from the team members.

Issues in Software Metrics

Implementing and executing software metrics is a cumbersome task as it is difficult to manage the technical and human aspects of the software measurement. Also, there exist many issues which prevent the successful implementation and execution of software metrics. These issues are listed below.

  • Lack of management commitment: It is observed that management is not committed to using software metrics due to the following reasons.
      – Management opposes measurement.
      – Software engineers do not measure and collect data, as management does not realize its importance.
      – Management charters a metrics program, but does not assist in deploying the program into practice.


  • Collecting data that is not used: Data collected during the measurement process should be such that it can be used to enhance the process, project, or product. This is because collecting incorrect data results in wrong decision making, which in turn leads to deviation from the software development plan.
  • Measuring too much and too soon: In a software project, sometimes excess data is collected in advance, which is difficult to manage and analyze. This results in unsuccessful implementation of the metrics.
  • Measuring the wrong things: Establishing metrics is a time-consuming process, and only data that provides valuable feedback should be measured, in an effective and efficient manner. To know whether data needs to be measured, a few questions should be addressed (if the answers are no, then the metrics should not be established).
      – Do the data items collected relate to the key success strategies for the business?
      – Are managers able to obtain the information they need to manage projects and people on time?
      – Is it possible to conclude from the data obtained that process changes are working?

  • Imprecise metrics definitions: Vague or ambiguous metrics definitions can be misinterpreted. For example, some software engineers may interpret a software feature as unnecessary while others may not.
  • Measuring too little, too late: Measuring too little provides information of little or no importance to software engineers; thus, software engineers tend to resist establishing metrics. Similarly, if data is collected too late, project managers and software engineers do not get the data they need on time, which may cause unnecessary delays in the software project.
  • Misinterpreting metrics data: Interpretation of metrics data is important to improve the quality of software. However, software metrics are often misinterpreted. For example, if the number of defects in the software increases despite efforts taken to improve the quality, then software engineers might conclude that the software improvement efforts are doing more harm than good.
  • Lack of communication and training: Inadequate training and lack of communication result in poor understanding of software metrics and measurement of unreliable data. In addition, communicating metrics data in an ineffective manner results in misinterpretation of that data.

In order to resolve or avoid these issues, the purpose for which data will be used should be clearly specified before the measurement process begins. Also, project managers and software engineers should be adequately trained and measured data should be properly communicated to them. Software metrics should be defined precisely so that they work effectively.

Object Oriented Metrics in Software Engineering

Lines of code and functional point metrics can be used for estimating object-oriented software projects. However, these metrics are not appropriate in the case of incremental software development as they do not provide adequate details for effort and schedule estimation. Thus, for object-oriented projects, different sets of metrics have been proposed. These are listed below.

  • Number of scenario scripts: Scenario scripts are a sequence of steps, which depict the interaction between the user and the application. The number of scenario scripts is directly related to the size of the application and to the number of test cases that are developed to test the software once it is built. Note that scenario scripts are analogous to use-cases.
  • Number of key classes: Key classes are independent components, which are defined in object-oriented analysis. As key classes form the core of the problem domain, they indicate the effort required to develop software and the amount of ‘reuse’ to be applied during the development process.
  • Number of support classes: Classes that are required to implement the system but are indirectly related to the problem domain are known as support classes. For example, user interface classes and computation classes are support classes. It is possible to develop a support class for each key class. Like key classes, support classes indicate the effort required to develop software and the amount of ‘reuse’ to be applied during the development process.
  • Average number of support classes per key class: Key classes are defined early in the software project while support classes are defined throughout the project. The estimation process is simplified if the average number of support classes per key class is already known.
  • Number of subsystems: A collection of classes that supports a function visible to the user is known as a subsystem. Identifying subsystems makes it easier to prepare a reasonable schedule in which work on subsystems is divided among project members.

The afore-mentioned metrics are collected along with other project metrics like effort used, errors and defects detected, and so on. After an organization completes a number of projects, a database is developed, which shows the relationship between object-oriented measure and project measure. This relationship provides metrics that help in project estimation.
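
As a back-of-the-envelope sketch of how these counts might feed an estimate, suppose the organization's project database suggests an average number of support classes per key class and an average effort per class; the multipliers below are purely illustrative placeholders, not calibrated values.

```python
# Estimate total classes and effort from object-oriented counts.
# `support_per_key` and `person_days_per_class` are illustrative
# placeholders; real values come from historical project data.

def estimate_effort(key_classes, support_per_key, person_days_per_class):
    total_classes = key_classes * (1 + support_per_key)
    return total_classes, total_classes * person_days_per_class

classes, effort = estimate_effort(key_classes=20,
                                  support_per_key=2.0,
                                  person_days_per_class=15)
print(classes, effort)  # 60.0 classes, 900.0 person-days
```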

Measuring Software Quality in Software Engineering

The aim of the software developer is to develop high-quality software within a specified time and budget. To achieve this, software should be developed according to the functional and performance requirements, documented development standards, and the characteristics expected of professionally developed software. Note that private metrics are collected by software engineers and then assimilated to achieve project-level measures. The main aim at the project level is to measure both the errors and defects. These measures are used to derive metrics, which provide an insight into the efficacy of both individual and group software quality assurance and software control activities.

Many measures have been proposed for assessing software quality such as interoperability, functionality, and so on. However, it has been observed that reliability, correctness, maintainability, integrity, and usability are most useful as they provide valuable indicators to the project team.

  • Reliability: The system or software should be able to maintain its performance level under given conditions. Reliability can be defined as the ability of the software product to perform its required functions under stated conditions for a specified period of time or for a specified number of operations. Reliability can be measured using Mean Time Between Failures (MTBF), which is the average time between successive failures. A related measure is Mean Time To Repair (MTTR), which is the average time taken to restore the system after a failure occurs. MTTR can be combined with Mean Time To Failure (MTTF), which describes how long the software runs before a failure occurs, to calculate MTBF, that is, MTBF = MTTF + MTTR.
  • Correctness: A system or software must function correctly. Correctness can be defined as the degree to which software performs its specified function. It can be measured in terms of defects per KDLOC. For quality assessment, defects are counted over a specified period of time.
  • Maintainability: In software engineering, software maintenance is one of the most expensive and time-consuming activities. Maintainability can be defined as the ease with which a software product can be modified to correct errors, to meet new requirements, to make future maintenance easier, or to adapt to a changed environment. Note that software maintainability is assessed using indirect measures like Mean Time To Change (MTTC), which can be defined as the time taken to analyze a change request, design the modification, implement the change, test it, and distribute the change to all users. Generally, it has been observed that programs with a lower MTTC are easier to maintain.
  • Integrity: In the age of cyber-terrorism and hacking, software integrity has become an important factor in the software development. Software integrity can be defined as the degree to which unauthorized access to the components of software (program, data, and documents) can be controlled.

For measuring integrity of software, attributes such as threat and security are used. Threat can be defined as the probability of a particular attack at a given point of time. Security is the probability of repelling an attack, if it occurs. Using these two attributes, integrity can be calculated by using the following equation.

Integrity = ∑[1 - (threat × (1 - security))]
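As an illustration, the integrity equation can be evaluated in code. The following sketch sums the term 1 - (threat × (1 - security)) over the assessed attack types; the probability values used are hypothetical, chosen only for the example.

```python
# Hedged sketch of the integrity equation above.
# Each assessment is a (threat, security) probability pair for one attack type;
# the values below are hypothetical.
def integrity(assessments):
    """assessments: list of (threat, security) probability pairs."""
    return sum(1 - threat * (1 - security) for threat, security in assessments)

# Single attack type: threat = 0.25, security = 0.95
print(round(integrity([(0.25, 0.95)]), 4))
```

With a high security probability, the contribution of each attack type stays close to 1, reflecting a system that repels most attacks.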

  • Usability: Software which is easy to understand and easy to use is always preferred by the user. Usability can be defined as the capability of the software to be understood, learned, and used under specified conditions. Note that software which accomplishes all the user requirements but is not easy to use is often destined to fail.

In addition to the afore-mentioned measures, lack of conformance to software requirements should be avoided, as requirements form the basis for measuring software quality. Also, in order to achieve high quality, both explicit and implicit requirements should be considered.
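To make the relationship between the reliability measures discussed above (MTTF, MTTR, MTBF) concrete, here is a hedged sketch that derives all three from a failure log; the timestamps are invented for illustration.

```python
# Hedged sketch: deriving MTTF, MTTR, and MTBF from a hypothetical failure log.
# Each record holds the time a failure occurred and the time repair finished (hours).
failures = [(100.0, 104.0), (250.0, 251.0), (400.0, 403.0)]  # (fail_at, repaired_at)

start = 0.0
uptimes, downtimes = [], []
for fail_at, repaired_at in failures:
    uptimes.append(fail_at - start)          # time running before this failure
    downtimes.append(repaired_at - fail_at)  # time spent on repair
    start = repaired_at

mttf = sum(uptimes) / len(uptimes)       # mean time to failure
mttr = sum(downtimes) / len(downtimes)   # mean time to repair
mtbf = mttf + mttr                       # mean time between failures
print(mttf, mttr, mtbf)
```

The sketch treats the log as exhaustive; in practice such figures are estimated over many deployments and much longer observation windows.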

Defect Removal Efficiency (DRE)

Defect removal efficiency (DRE) can be defined as a quality metric that is beneficial at both the project level and the process level. Quality assurance and control activities applied throughout software development are responsible for detecting errors introduced at various phases of the SDLC. The ability to detect errors (filtering ability) is measured with the help of DRE, which can be calculated using the following equation.

DRE = E/(E + D)

where

E = number of errors found before the software is delivered to the user
D = number of defects found after the software is delivered to the user

The value of DRE approaches 1 when no defects are found in the software after delivery. As the value of E increases for a given value of D, the overall value of DRE approaches 1. With an increase in the value of E, the value of D decreases, as more errors are discovered before the software is delivered to the user. DRE improves software quality by encouraging methods that detect the maximum number of errors before the software is delivered to the user.
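The overall DRE equation can be sketched directly in code; the error and defect counts below are hypothetical.

```python
# Sketch of DRE = E / (E + D); the counts used are hypothetical.
def dre(errors_before_delivery, defects_after_delivery):
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# 90 errors caught before delivery, 10 defects reported afterwards
print(dre(90, 10))  # 0.9
```

A DRE of 0.9 here means 90% of all known problems were filtered out before the software reached the user.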

DRE can also be used at different phases of software development. It is used to assess the software team's ability to find errors in each phase before they are passed on to the next development phase. When DRE is defined in the context of SDLC phases, it can be calculated using the following equation.

DREi = Ei/(Ei + Ei+1)

where

Ei = number of errors found in phase i
Ei+1 = number of errors that were missed in phase i, but found in phase i + 1

The objective of the software team is to achieve a DREi value of 1. In other words, errors should be removed before they are passed on to the next phase.
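The phase-wise form can be sketched in the same way. In this hypothetical example, Ei is approximated by the errors found in each phase and Ei+1 by the errors that escaped to the next phase; the phase names and counts are invented.

```python
# Sketch of phase-wise DRE with hypothetical error counts per SDLC phase.
# Ei is approximated by errors found in phase i; Ei+1 by errors that
# escaped phase i and were found in the following phase.
phase_errors = [("requirements", 20), ("design", 14), ("coding", 6)]

dre_per_phase = {}
for (phase, e_i), (_, e_next) in zip(phase_errors, phase_errors[1:]):
    dre_per_phase[phase] = e_i / (e_i + e_next)

for phase, value in dre_per_phase.items():
    print(f"DRE({phase}) = {value:.2f}")
```

A low DREi for an early phase signals that its reviews are letting too many errors leak forward.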

Classification of Software Metrics in Software Engineering

Measurement is done through metrics. Three parameters are measured: process measurement through process metrics, product measurement through product metrics, and project measurement through project metrics.

Designing Software Metrics in Software Engineering

An effective software metric helps software engineers identify shortcomings in the software development life cycle so that the software can be developed as per user requirements, within the estimated schedule and cost, with the required quality level, and so on. To develop effective software metrics, the following steps are used.

  1. Definitions: To develop an effective metric, it is necessary to have clear and concise definitions of the entities and attributes to be measured. Terms like defect, size, quality, maintainability, user-friendly, and so on should be well defined so that no ambiguity arises.
  2. Define a model: A model for the metrics is derived. This model is helpful in defining how metrics are calculated. The model should be easy to modify according to the future requirements. While defining a model, the following questions should be addressed.
  • Does the model provide more information than is available?
  • Is the information practical?
  • Does it provide the desired information?
  3. Establish counting criteria: The model is broken down into its lowest-level metric entities, and the counting criteria used to measure each entity are defined. This specifies the method for measuring each metric primitive. For example, to estimate the size of a software project, lines of code (LOC) is a commonly used metric. Before measuring size in LOC, clear and specific counting criteria should be defined.
  4. Decide what is good: Once it is decided what to measure and how to measure it, it is necessary to determine whether action is needed. For example, if the software meets the quality standards, no corrective action is necessary. However, if it does not, goals can be established to help the software conform to the quality standards laid down. Note that goals should be reasonable, within the time frame, and based on supporting actions.
  5. Metrics reporting: Once all the data for a metric is collected, it should be reported to the concerned person. This involves defining the report format, data extraction and reporting cycle, reporting mechanisms, and so on.
  6. Additional qualifiers: Additional metric qualifiers that are ‘generic’ in nature should be determined. In other words, metrics that remain valid across several additional extraction qualifiers should be identified.

The selection and development of software metrics is not complete until the effect of measurement and people on each other is known. The success of metrics in an organization depends on the attitudes of the people involved in collecting the data, calculating and reporting the metrics, and people involved in using these metrics. Also, metrics should focus on process, projects, and products and not on the individuals involved in this activity.

Software Metrics in Software Engineering

Once measures are collected they are converted into metrics for use. IEEE defines metric as ‘a quantitative measure of the degree to which a system, component, or process possesses a given attribute.’ The goal of software metrics is to identify and control essential parameters that affect software development. Other objectives of using software metrics are listed below.

  • Measuring the size of the software quantitatively.
  • Assessing the level of complexity involved.
  • Assessing the strength of the module by measuring coupling.
  • Assessing the testing techniques.
  • Specifying when to stop testing.
  • Determining the date of release of the software.
  • Estimating cost of resources and project schedule.

Software metrics help project managers to gain an insight into the efficiency of the software process, project, and product. This is possible by collecting quality and productivity data and then analyzing and comparing these data with past averages in order to know whether quality improvements have occurred. Also, when metrics are applied in a consistent manner, it helps in project planning and project management activity. For example, schedule-based resource allocation can be effectively enhanced with the help of metrics.

Difference in Measures, Metrics, and Indicators

The term metric is often used interchangeably with measure and measurement. However, it is important to note the differences between them. A measure can be defined as a quantitative indication of the amount, dimension, capacity, or size of product and process attributes. Measurement can be defined as the process of determining the measure. Metrics can be defined as quantitative measures that allow software engineers to identify the efficiency of, and improve the quality of, the software process, project, and product.

To understand the difference, let us consider an example. A measure is established when a single data point is collected, for example, the number of errors detected in one software component. Measurement is the process of collecting one or more data points; that is, measurement is established when many components are reviewed and tested individually to collect the number of errors across all these components. Metrics relate the individual measures in some manner, for example, the number of errors found per review or the average number of errors found per unit test.
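A small sketch may make the distinction concrete; the component names and error counts below are hypothetical.

```python
# Hedged sketch: measure vs. measurement vs. metric.
# Each dictionary value is a measure (one data point: errors found in one review);
# collecting them all is measurement; the derived average is a metric.
review_errors = {"module_a": 4, "module_b": 7, "module_c": 1}  # measures

total_errors = sum(review_errors.values())             # measurement
errors_per_review = total_errors / len(review_errors)  # metric
print(errors_per_review)  # 4.0
```

The metric (average errors per review) is what gets compared against past averages to judge whether quality is improving.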

Once measures and metrics have been developed, indicators are obtained. These indicators provide a detailed insight into the software process, software project, or intermediate product. Indicators also enable software engineers or project managers to adjust software processes and improve software products, if required. For example, measurement dashboards or key indicators are used to monitor progress and initiate change. Arranged together, indicators provide snapshots of the system’s performance.

Measured Data

Before data is collected and used, it is necessary to know the type of data involved in the software metrics. Table lists different types of data, which are identified in metrics along with their description and the possible operations that can be performed on them.

                                                      Type of Data Measured

Type of data     Possible operations     Description of data

Nominal          =, ≠                    Categories
Ordinal          <, >                    Ranking
Interval         +, –                    Differences
Ratio            +, –, ×, ÷              Absolute zero

  • Nominal data: Data can be measured by placing it in a category; for example, a program can be categorized as a database program, an application program, or an operating system program. For such data, arithmetic operations and ranking of values in any order (increasing or decreasing) are not possible. The only operation that can be performed is to determine whether program ‘X’ is the same as program ‘Y’.
  • Ordinal data: Data can be ranked according to the data values. For example, experience in application domain can be rated as very low, low, medium, or high. Thus, experience can easily be ranked according to its rating.
  • Interval data: Data values can be ranked and substantial differences between them can also be shown. For example, a program with complexity level 8 is said to be 4 units more complex than a program with complexity level 4.
  • Ratio data: Data values are associated with a ratio scale, which possesses an absolute zero and allows meaningful ratios to be calculated. For example, program lines expressed in lines of code.

It is desirable to know the measurement scale for metrics. For example, if metrics values are used to represent a model for a software process, then metrics associated with the ratio scale may be preferred.

Guidelines for Software Metrics

Although many software metrics have been proposed over a period of time, an ideal software metric is one which is easy to understand, effective, and efficient. In order to develop ideal metrics, software metrics should be validated and characterized effectively. For this, it is important to develop metrics using specific guidelines, which are listed below.

  • Simple and computable: Derivation of software metrics should be easy to learn and should involve an average amount of time and effort.
  • Consistent and objective: Unambiguous results should be delivered by software metrics.
  • Consistent in the use of units and dimensions: Mathematical computation of the metrics should involve use of dimensions and units in a consistent manner.
  • Programming language independent: Metrics should be developed on the basis of the analysis model, design model, or program’s structure.
  • High quality: Effective software metrics should lead to a high-quality software product.
  • Easy to calibrate: Metrics should be easy to adapt according to project requirements.
  • Easy to obtain: Metrics should be developed at a reasonable cost.
  • Validation: Metrics should be validated before being used for making any decisions.
  • Robust: Metrics should be relatively insensitive to small changes in process, project, or product.
  • Value: Value of metrics should increase or decrease with the value of the software characteristics they represent. For this, the value of metrics should be within a meaningful range. For example, metrics can be in a range of 0 to 5.

Software Measurement in Software Engineering

To assess the quality of the engineered product or system and to better understand the models that are created, some measures are used. These measures are collected throughout the software development life cycle with an intention to improve the software process on a continuous basis. Measurement helps in estimation, quality control, productivity assessment and project control throughout a software project. Also, measurement is used by software engineers to gain insight into the design and development of the work products. In addition, measurement assists in strategic decision-making as a project proceeds.

Software measurements fall into two categories, namely, direct measures and indirect measures. Direct measures include process attributes such as cost and effort applied, and product attributes such as lines of code produced, execution speed, and defects reported. Indirect measures include product attributes such as functionality, quality, complexity, reliability, maintainability, and many more.

Generally, software measurement is considered as a management tool which if conducted in an effective manner, helps the project manager and the entire software team to take decisions that lead to successful completion of the project. Measurement process is characterized by a set of five activities, which are listed below.

  • Formulation: The derivation of software measures and appropriate metrics for the software under consideration.
  • Collection: The accumulation of the data required to derive the formulated metrics.
  • Analysis: The computation of metrics using mathematical tools.
  • Interpretation: The evaluation of metrics to gain insight into the quality of the representation.
  • Feedback: The communication of recommendations derived from product metrics to the software team.

Note that collection and analysis activities drive the measurement process. In order to perform these activities effectively, it is recommended to automate data collection and analysis, establish guidelines and recommendations for each metric, and use statistical techniques to interrelate external quality features and internal product attributes.

Technology Change Management (TCM)

In today’s world, change is an ongoing process and Information Technology (IT) has contributed to changes in every aspect of life (such as business and education). This is due to the emerging technologies. Nowadays, the business environment needs to use the new technologies available in order to be successful and compete with similar organizations in the market. To incorporate new technology into business activities, Technology Change Management (TCM) is used. TCM is a process of identifying, selecting, and evaluating new technologies (such as tools, methods, and processes) to incorporate the most effective technology in a software system.

To perform TCM, an organization establishes a group that is responsible for assessing emerging technologies as well as managing changes that occur in existing technologies. The technologies that tend to improve the capability of the organization’s standard software process are given significant consideration.

TCM is advantageous to organizations as it helps in maintaining awareness of new software related technologies. In addition, it assists organizations in selecting the most suitable technology to improve the software quality and productivity of software activities. Before incorporating new technologies in the organization, both advantages and disadvantages of implementing the technology are checked with the help of a prototype (pilot) that helps to assess the output of new and unproven technology. The technologies that seem desirable for the organization are suggested to the organization for approval and after they get approved, they are incorporated into the standard software process of the organization.

In addition to the above mentioned objectives, other common objectives of TCM are listed below.

  1. Minimize Total Cost of Operation (TCO): TCO is concerned with the identification of total cost incurred to develop a software system. This cost includes the cost of hardware, software, support services, and hidden costs required to develop the entire software system. The cost is determined to estimate the actual cost in providing the technology to the end user. The factors that affect TCO are listed below.
  2. Technology-centric utilization policies: The total cost of operation is affected by hardware, software, and operational standards. The organization policies that are followed for a single technology and are not replaced according to the latest technologies result in increased costs. This happens because the organization spends time and cost on evaluating the hardware required along with the software rather than on making decisions on the replacement of technologies on expiration of a warranty. In this case, TCM provides a solution to organize business rules and corporate technology standards to ensure that the maximum value is achieved from each asset.
  3. Discontinuity and delay: Inefficient employees affect the performance of the organization due to which the process of software development suffers. As a result, the organization needs to spend large amounts of money to avoid delay. On the contrary, if a good project management approach is used, the expenses and delays are minimized to a certain extent. In addition, prior knowledge of the user requirements and communication between users and the organization are essential to manage TCM.
  4. Maximize asset utilization: To ensure optimal utilization of assets, it is essential to minimize the stock of components (such as hardware and software) for the software system that are reserved for future use. In addition, the collection of new components should be kept to a minimum, and components that are not required should be avoided or disposed of. This approach maximizes asset utilization, as only the required components are used.
  5. Reduce expenses: TCM minimizes the expense of components that are generally not considered in total cost of operation. This is done by automating the exchange of information such as asset repository and transmission and approval of documents in business process. This information also includes checking the components of a software system before their purchase and installation. Minimization of such expenses reduces the expenses of the organization.

Tools for Software Maintenance

Software maintenance involves modifying the existing software system and recording all the modifications made to it. For this, various maintenance tools are used. One of the commonly used maintenance tools is the text editor. This tool creates a copy of the documentation or the code. The key feature of this tool is that it provides a medium to roll back (when required) from the current version of a file to a previous one. Several other tools used in software maintenance are listed in Table.

                                             Table Software Maintenance Tools

Tool                        Description

File comparator             Compares two files or systems and maintains a record of the differences between them. In addition, it determines whether the two files or systems are identical.

Compiler and linker         Compilers are used to check syntax errors and, in some cases, locate the type of error. Once the code is compiled, the linker links the code with the other components required for program execution. Linkers are sometimes used to track the version numbers of components so that appropriate versions are linked together.

Debugger                    Allows tracing the logic of the program and examining the contents of registers and memory areas.

Cross-reference generator   Ensures that changes in the code comply with the existing code. When a change to a requirement is requested, this tool makes it possible to know which other requirements, design, and code components will be affected.

Static code analyzer        Measures attributes of the code, such as the number of lines of code, the number of spanning paths, and so on. These can be recalculated when new versions of the system are developed.
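As an illustration of the first tool in the table, a minimal file comparator can be built on Python's standard difflib module; the file contents below are invented for the example.

```python
# Hedged sketch of a file comparator in the spirit of the table above,
# built on Python's standard difflib module. The file contents are hypothetical.
import difflib

old_lines = ["total = 0\n", "for x in items:\n", "    total += x\n"]
new_lines = ["total = 0\n", "for x in items:\n", "    total = total + x\n"]

# Determine whether the two files are identical, and record the differences.
identical = old_lines == new_lines
diff = list(difflib.unified_diff(old_lines, new_lines, fromfile="v1", tofile="v2"))

print("identical:", identical)
for line in diff:
    print(line, end="")
```

A real file comparator would read the two files from disk and may also compare entire directory trees (see Python's filecmp module), but the core operation is the same line-by-line difference shown here.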


Techniques for Maintenance

To perform software maintenance effectively, various techniques are used. These include software configuration management, impact analysis, and software rejuvenation, all of which help in maintaining a system and thus improve the quality of the existing system.

Software Maintenance Models

Studies suggest that the software maintenance process begins without proper knowledge of the software system. This occurs because the software maintenance team is unaware of the requirements and design documentation. Also, traditional models fail to capture the evolutionary nature of the software. To overcome these problems, software maintenance models have been proposed, which include quick fix model, iterative enhancement model, and reuse-oriented model.

The quick-fix model is an ad hoc approach used for maintaining the software system. The objective of this model is to identify the problem and then fix it as quickly as possible. The advantage is that it performs its work quickly and at a low cost. This model is an approach to modify the software code with little consideration for its impact on the overall structure of the software system.

                                          Quick-fix Model

Sometimes, users cannot wait for a long time. Rather, they require the modified software system to be delivered to them in the least possible time. As a result, the software maintenance team needs to use the quick-fix model to avoid the time-consuming process of the SMLC.

This model is beneficial when a single user is using the software system. As the user has proper knowledge of the software system, it becomes easier to maintain it without the need for detailed documentation. This model is also advantageous when the software system is to be maintained under tight deadlines and with limited resources. However, this model is not suitable for fixing errors over a longer period.

The iterative enhancement model, which was originally proposed as a process model, can be easily adapted for maintaining a software system. It considers that the changes made to the software system are iterative in nature. The iterative enhancement model comprises three stages, namely, analysis of software system, classification of requested modifications, and implementation of requested modifications.

                           Iterative Enhancement Model

In the analysis stage, the requirements are analyzed to begin the software maintenance process. After analysis, the requested modifications are classified according to the complexity, technical issues, and identification of modules that will be affected. At the end, the software is modified to implement the modification request. At each stage, the documentation is updated to accommodate changes of requirements analysis, design, coding, and testing phases.

Note: It is essential to have a complete documentation before the implementation of iterative enhancement model begins.

The Reuse-oriented Model

The reuse-oriented model assumes that the existing program components can be reused to perform maintenance.

                        Reuse-oriented Model

It consists of the following steps.

  1. Identifying the components of the old system which can be reused
  2. Understanding these components
  3. Modifying the old system components so that they can be used in the new system
  4. Integrating the modified components into the new system.

Software Maintenance Life Cycle

Changes are implemented in the software system by following a software maintenance process, which is known as the Software Maintenance Life Cycle (SMLC). This life cycle comprises seven phases, namely, problem identification, analysis, design, implementation, system testing, acceptance testing, and delivery.

Types of Software Maintenance

There are four types of maintenance, namely, corrective, adaptive, perfective, and preventive. Corrective maintenance is concerned with fixing errors that are observed when the software is in use. Adaptive maintenance is concerned with changes made to adapt the software to a new environment, such as running it on a new operating system. Perfective maintenance is concerned with changes made while adding new functionalities to the software. Preventive maintenance involves implementing changes to prevent the occurrence of errors.

Software Maintenance in Software Engineering

Over a period of time, the developed software system may need modifications according to the changing user requirements. Such being the case, maintenance becomes essential. The software maintenance process comprises a set of software engineering activities that occur after the software has been delivered to the user.

Sometimes, maintenance also involves adding new features and functionalities (using latest technology) to the existing software system. The primary objective of software maintenance is to make the software system operational according to the user requirements and fix errors in the software. The errors arise due to nonfunctioning of the software or incompatibility of hardware with the software. When software maintenance is to be done on a small segment of the software code, software patches are applied. These patches are used to fix errors only in the software code that contains errors.

Software maintenance is affected by several constraints such as increase in cost and technical problems with hardware and software. This chapter discusses how software maintenance assists the present software system to accommodate changes according to the new requirements of users.

Basics of Software Maintenance

Software does not wear out or get tired. However, it needs to be upgraded and enhanced to meet new user requirements. For such modifications in the software system, software maintenance is performed. IEEE defines maintenance as ‘a process of modifying a software system or component after delivery to correct faults, to improve performance or other attributes or to adapt the product to a changed environment.’ The objective is to ensure that the software is able to accommodate changes after the system has been delivered and deployed.

To understand the concept of maintenance properly, let us consider an example of a car. When a car is ‘used’, its components wear out due to friction in the mechanical parts, unsuitable use, or by external conditions. The car owner solves the problem by changing its components when they become totally unserviceable and by using trained mechanics to handle complex faults during the car’s lifetime. Occasionally, the owner gets the car serviced at a service station. This helps in preventing future wear and tear of the car. Similarly, in software engineering the software needs to be ‘serviced’ so that it is able to meet the changing environment (such as business and user needs) where it functions. This servicing of software is commonly referred to as software maintenance, which ensures that the software system continues to perform according to the user requirements even after the proposed changes have been incorporated. In addition, software maintenance serves the following purposes.

1. Providing continuity of service: The software maintenance process focuses on fixing errors, recovering from failures such as hardware failures or incompatibility of hardware with the software, and accommodating changes in the operating system and the hardware.

2. Supporting mandatory upgrades: Software maintenance supports upgradations, if required, in a software system. Upgradations may be required due to changes in government regulations or standards. For example, if a web-application system with multimedia capabilities has been developed, modification may be necessary in countries where screening of videos (over the Internet) is prohibited. The need for upgradations may also be felt to maintain competition with other software that exist in the same category.

3. Improving the software to support user requirements: Requirements may be requested to enhance the functionality of the software, to improve performance, or to customize data processing functions as desired by the user. Software maintenance provides a framework, using which all the requested changes can be accommodated.

4. Facilitating future maintenance work: Software maintenance also facilitates future maintenance work, which may include restructuring of the software code and the database used in the software.

Changing a Software System

As stated earlier, the need for software maintenance arises due to changes required in the software system. Once a software system has been developed and deployed, anomalies are detected, new user requirements arise, and the operating environment changes. This means that after delivery, software systems always evolve in response to the demands for change.

The concept of software maintenance and evolution of systems was first introduced by Lehman, who carried out several studies and proposed five laws based on these studies. One of the key observations of the studies was that large systems are never complete and continue to evolve. Note that during evolution, the systems become more complex, therefore, some actions are needed to be taken to reduce the complexity. The five laws stated by Lehman are listed in Table and discussed below.

1.     Continuing change: This law states that change is inevitable since systems operate in a dynamic environment. As the system’s environment changes, new requirements arise and the system must be modified. When the modified system is re-introduced into the environment, it in turn provokes further changes in the environment. Note that if a system remains static, after a period of time it will not be able to serve the users’ ever-changing needs, because it becomes outdated.

2.    Increasing complexity: This law states that as a system changes, its structure degrades (often observed in legacy systems). To avoid this problem, preventive maintenance should be used, where only the structure of the software is improved without adding any new functionality to it. However, additional costs have to be incurred to reverse the effects of structural degradation.

3.     Large software evolution: This law states that for large systems, software evolution is largely dependent on management decisions because of organizational factors, which are established earlier in the development process. This is true for large organizations, which have their own internal bureaucracies that control the decision-making process. The rate of change of the system in these organizations is governed by the organization’s decision-making processes. This determines the gross trends of the system maintenance process and limits the possible number of changes to the system.

4.     Organizational stability: This law states that changes to resources such as staffing have unnoticeable effects on evolution. For example, productivity may not increase by assigning new staff to a project because of the additional communication overhead. Thus, it can be said that large software development teams become unproductive when the communication overheads dominate the work of the team.

5.      Conservation of familiarity: This law states that there is a limit to the rate at which new functionality can be introduced. Adding a large increment of functionality to a system in one release is likely to introduce new system faults, and a new release will then be required ‘fairly quickly’ to correct them. Thus, organizations should not budget for large functionality increments in each release without taking into account the need for fault repair.

                                                      Table Lehman Laws

Continuing change: The environment in which the software operates keeps on changing; therefore, the software must also be changed to work in the new environment.

Increasing complexity: The structure of the software becomes more complex with continuous change; therefore, some preventive steps must be taken to improve and simplify its structure.

Large software evolution: Software evolution is a self-regulating process. Software attributes such as size, time between releases, and the number of reported errors are almost constant for each system release.

Organizational stability: The rate at which the software is developed remains approximately constant and is independent of the resources devoted to the software development.

Conservation of familiarity: There is a limit to the amount of new functionality that can be added to the software in each release; if that limit is exceeded, new faults may be introduced.

Lehman’s observations have been accepted universally and are taken into consideration when planning the maintenance process. However, one of the laws may be ignored when some particular business decision is taken. For example, it may be mandatory to carry out several major system changes in a single release for marketing and sales reasons.

Legacy System

The term ‘legacy system’ describes an old system which remains in operation within an organization. These systems were developed according to the ‘dated development practices’ and technology existing before the introduction of structured programming. Process models and basic principles such as modularity, coupling, cohesion, and good programming practice emerged too late for them. Thus, these systems were developed according to ad hoc processes and often used programming techniques that were not amenable to developing large systems. The combination of dated processes, techniques, and technology resulted in undesirable characteristics in legacy systems, which are listed in Table.

                                       Table Legacy System Characteristics

High maintenance cost: Results from a combination of other system factors such as complexity, poor documentation, and lack of experienced personnel.

Complex software: Results from the structural degradation that occurs over a legacy system’s lifetime of change.

Obsolete support software: Support software may not be available for a particular platform, or may no longer be supported by its original vendor or any other organization.

Obsolete hardware: The legacy system’s hardware may have been discontinued.

Lack of technical expertise: The original developers of a legacy system are unlikely to be involved with its maintenance today.

Business critical: Many legacy systems are essential for the proper working of the organizations that operate them.

Poorly documented: Documentation is often missing or inconsistent.

Poorly understood: As a consequence of system complexity and poor documentation, software maintainers often understand legacy systems poorly.

Legacy systems are generally associated with high maintenance costs. The root cause of this expense is the degraded structure that results from prolonged maintenance. Systems with contrived structures are invariably complex and understanding them requires considerable effort. System understanding (a prerequisite for implementing changes) is particularly expensive, as it relies on individuals grasping a sufficient depth of understanding of the system.

Legacy systems were not designed to accommodate changes. This is because of the following reasons.

1.   Short lifetime expectancy: At the time of their commission, it was not anticipated that legacy systems would be used for so many decades.

2.   Failure of process models to treat evolution as an important activity: Evolution requirements, for example, can be extracted from business goals, but according to traditional practice, future requirements are largely ignored during the specification phase of the development.

3.   Constraints present at the time of development: When legacy systems were developed, memory and processing power were limited, which constrained the software design decisions. Techniques were used to make efficient use of these resources, but at the expense of maintainability. It has been observed that ease of maintainability is a prerequisite for developing ‘long-lived’ software systems.

Often organizations face a dilemma known as legacy dilemma, which states that ‘a legacy system, which is business critical, must remain operational, in some form, within its organization. However, continued maintenance of the system is expensive and the scope for effectively implementing further change is heavily constrained. Moreover, the costs of replacing the system from scratch are prohibitively high.’

The Components of a Legacy System

The ‘evolveability’ of a legacy system is determined by the parts that constitute the legacy system.

                     Components of Legacy System

These parts can be categorized as given here.

1.    Business: Represents a business perspective of legacy systems. Business goals are the long-term objectives of an organization, and they heavily influence the evolveability of a legacy system. Business goals generate future requirements for the software systems that support the business. Note that if the business goals demand thorough (far-reaching) changes, those changes are difficult to implement and can only be accommodated after extensive rework.

2.   Organizational: Includes both the development and operational organizations involved with the legacy system. The development organization is responsible for maintaining the system. Note that it is difficult to evolve a system if the individuals who maintained the system retire or when poor documentation exists. The operational organization is the organization which is supported by the legacy system; that is, a legacy system provides services to its operational organization. An organization’s attitude to change affects the system’s evolveability. For example, workforces in some organizations are unwilling to accept change if it is imposed by senior management.

3.    Technical: Categorizes the legacy system into application software, system software, and hardware. When a system’s hardware is no longer supported, it would be wise to replace the hardware instead of investing further. Also, the condition and quality of application software (including its documentation) is a significant factor in determining how a legacy system can evolve. For example, a contrived software architecture, or inconsistent documentation implies that the system cannot evolve readily.

Note: A thorough analysis of the technical, business, and organizational parts of the system determines the future of a legacy system.

Software Maintenance Prediction

Since unexpected maintenance costs may lead to an unexpected increase in overall costs, it is important to predict the effect of modifications in the software system. Software maintenance prediction refers to the study of software maintainability, of the modifications in the software system, and of the costs required to maintain the software system. Various maintenance predictions and the questions associated with them are shown in the figure below.

                                Software Maintenance Prediction

Various predictions are closely related and specify the following.

1.     The decision to accept a system change depends, to a certain extent, on the maintainability of the system components affected by that change.

2.    Implementation of changes results in degradation of system structure as well as reduction in system maintainability.

3.     Costs involved in implementing changes depend on the maintainability of the system components.

To predict the number of changes requested for a system, the relationship between the system and its external environment should be properly understood. To know the kind of relationship that exists, organizations should assess the following.

1.     The number and complexity of system interfaces. More interfaces mean more complexity, which in turn means more demand for change.

2.   The number of volatile system requirements. Requirements based on organizational policies and procedures tend to be more volatile than requirements based on a particular domain.

3.    The number of business processes in which the system operates. More business processes imply more demands for system change.

To predict maintainability of a software system, it is important to consider the relationship among the different components and the complexity involved in them. Generally, it is observed that a software system having complex components is difficult and expensive to maintain. The complexity in a software system occurs due to the size of procedures and functions, the size and the number of modules, and the nested structures in the software code. On the other hand, a software system developed by using good programming practices reduces not only the complexity but also the effort required in software maintenance. As a result, such software systems minimize the maintenance cost. For maintaining the individual components in software systems, it is essential to identify the complexity measurements of components.
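The complexity factors named above (procedure size, nesting in the code) can be estimated mechanically. The Python sketch below is illustrative only: it parses source code without running it and reports, for each function, its length in lines and its maximum nesting depth. Real maintainability metrics such as cyclomatic complexity are more elaborate; the sample function is invented.

```python
import ast

# Statement types that introduce a new nesting level in this toy metric.
NESTING_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def max_nesting(node, depth=0):
    """Deepest nesting level reached anywhere under this node."""
    deepest = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, NESTING_NODES) else 0
        deepest = max(deepest, max_nesting(child, depth + extra))
    return deepest

def complexity_report(source: str):
    """Map each function name to its line count and maximum nesting depth."""
    tree = ast.parse(source)
    report = {}
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            length = (func.end_lineno - func.lineno) + 1
            report[func.name] = {"lines": length, "nesting": max_nesting(func)}
    return report

sample = """
def f(xs):
    total = 0
    for x in xs:
        if x > 0:
            total += x
    return total
"""
print(complexity_report(sample))  # f: 6 lines, nesting depth 2
```

A report like this can be computed for every component, so that unusually long or deeply nested procedures are flagged as likely maintenance hot spots.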

After a system has been put into operation, several process metrics are used to predict the software maintainability. Process metrics, which may be useful for assessing maintainability, are listed below.

1.     Corrective maintenance: Sometimes, more errors are introduced than are repaired during the maintenance process. This indicates a decline in maintainability.

2.  Average time required for impact analysis: Before starting the software maintenance process, it is essential to analyze the impact of modifications in the software system. This is known as impact analysis, which reflects the number of components affected by the change.

3.   Number of outstanding change requests: If the number of outstanding change requests increases with time, it may imply decline in maintainability.

4.   Average time taken to implement a change request: This involves the activities concerned with making changes to the system and its documentation, rather than simply assessing which components are affected. If the time taken to implement a change increases, it may imply a decline in maintainability.
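Impact analysis of the kind mentioned in the metrics above can be sketched as a graph traversal: given a map of which components depend on which, every component that directly or transitively depends on a changed component is affected. The component names and dependency map below are invented for illustration.

```python
from collections import deque

# Hypothetical dependency map: each component lists what it depends on.
DEPENDS_ON = {
    "billing":  ["database", "auth"],
    "reports":  ["billing", "database"],
    "ui":       ["billing", "reports"],
    "auth":     ["database"],
    "database": [],
}

def impacted_by(changed):
    """Return every component that directly or transitively depends on
    the changed component (breadth-first walk over reverse edges)."""
    # Invert the map: who depends on X?
    dependents = {name: set() for name in DEPENDS_ON}
    for name, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].add(name)
    impacted, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for dependent in dependents[current]:
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(sorted(impacted_by("database")))  # every component except database itself
```

The size of the impacted set is one rough indicator of how expensive a proposed change will be to implement and retest.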

Factors Affecting Software Maintenance

Many factors directly or indirectly lead to high maintenance costs. A software maintenance framework is created to determine the effects of these factors on maintenance. This framework comprises user requirements, the organizational and operational environment, the maintenance process, maintenance personnel, and the software product (see Table). These elements interact with each other based on three kinds of relationships, which are listed below.

                    Table Components of Software Maintenance Framework



User requirements

  • Request for additional functionality, error correction, capability, and improvement in maintainability.
  • Request for non-programming related support.

Organizational environment

  • Change in business policies.
  • Competition in market.

Operational environment

  • Hardware platform.
  • Software specifications.

Maintenance process

  • Capturing requirements.
  • Variation in programming and working practices.
  • Paradigm shift.
  • Error detection and correction.

Software product

  • Quality of documentation.
  • Complexity of programs.
  • Program structure.

Software maintenance team

  • Staff turnover.
  • Domain expertise.

                    Relationship between Software Maintenance Framework Elements

1.    Relationship of software product and environment: In this relationship, the software product changes according to the organizational and operational environment. However, it is necessary to accept only those changes which are useful for the software product.

2.      Relationship of the software product and user: In this relationship, the software product is modified according to the new requirements of users. Hence, it is important that the software remains useful and acceptable to its users after modification.

3.   Relationship of software product and software maintenance team: In this relationship, the software maintenance team members act as mediators to keep track of the software product. In other words, the software maintenance team analyzes the modifications in the other elements of the software maintenance framework to determine their effect on the software product. These elements include user requirements, the organizational and operational environments, and the software maintenance process. All these elements affect the modifications in software and are responsible for maintaining software quality.

Generally, users have little knowledge of the software maintenance process, which can make them unsupportive of the software maintenance team. Users may also hold misconceptions, such as that software maintenance is like hardware maintenance, that changing software is easy, or that changes cost too much and are time consuming.

If user requirements need major changes in the software, a lot of time may be consumed in implementing them. Similarly, users may opt for changes that are not according to the software standards or policies of a company. This situation creates a conflict between users and the software maintenance team.

To implement user requirements in software, the following characteristics should be considered.

1.         Feasible: User requirements are feasible if the requested change is workable in the software system.

2.         Desirable: Before implementing new changes, it is important to consider whether the user modification request is necessary.

3.     Prioritized: In some cases, the user requirements may be both feasible and desirable. However, these requirements may not be of high priority at that time. In such a situation, the user requirements can be implemented later.

The working of software is affected by two kinds of environments, namely, organizational environment and operational environment. The organizational environment includes business rules, government policies, taxation policies, work patterns, and competition in the market. An organization has its own business rules and policies, which should be incorporated in the software maintenance process. The operational environment includes software systems (such as operating systems, database systems, and compilers) and hardware systems (such as processor, memory, and peripherals).

In both environments, scheduling of the maintenance process can create problems. Scheduling is affected by various factors such as the urgent requirement of the modified software, allocation of too little time to modify the software, and the lack of proper knowledge of how to implement user requirements in software.

Changes are implemented in the software system by following the software maintenance process (also known as software maintenance life cycle). The facets of a maintenance process which affect the evolution of software or contribute to high maintenance costs are listed below.

1.    Error detection and correction: It has been observed that error-free software is virtually non-existent. That is, a software product tends to contain some kind of ‘residual’ errors. If these errors are uncovered at a later stage of software development, they become more expensive to fix. The cost of fixing errors is even higher when errors are detected during the maintenance phase.

2.     Difficulty in capturing change (and changing) requirements: Requirements and user problems become clear only when a system is in use. Also users may not be able to express their requirements in a form, which is understandable to the analyst or programmer.

3.     Software engineering paradigm shift: Older systems that were developed prior to the advent of structured programming techniques may be difficult to maintain.

Software Product

The software developed for users can be for general use or specific use. For example, MS Office is a software application that is generic in nature and may be used by a wide range of people. On the other hand, a payroll system may be customized according to the needs of an organization. In either case, problems occur when the software is to be maintained. Generally, the aspects of a software product that contribute to the maintenance cost/challenge are listed below.

1.   Difficulty of the application domain: The requirements of applications that have been widely used and well understood are less likely to undergo substantial modifications than those that have been recently developed.

2.   Inflexibility in programs: While modifying software, it should be checked for the flexibility of change and reuse. This is because the inflexible software products are more prone to failures.

3.   Quality of the documentation: Documentation is essential for understanding the requirements, software design, and how these requirements are converted into the software code. The unavailability of up-to-date systems documentation affects maintenance productivity adversely.

Software Maintenance Team

The group of individuals responsible for software maintenance is referred to as the software maintenance team, which may or may not comprise the development team that ‘built’ the software. Often, a separate maintenance team (comprising analysts, designers, and programmers) is formed to ensure that the system performs its functions properly. Such a team is employed because it has been observed that developers generally do not keep documentation up-to-date, leading to the need for more individuals or resources to tackle a problem and to a long time gap between when a problem occurs and when it is fixed.

Various functions performed by the software maintenance team are listed below.

  1. Locating information in system documentation
  2. Keeping system documentation up-to-date
  3. Improving system functionalities to adapt to a new environment
  4. Enhancing system to perform new functions according to the user’s needs
  5. Detecting root cause of failures, if any
  6. Handling changes made to the system.

The aspects of a maintenance team that lead to high maintenance costs are listed below.

1.     Staff turnover: Generally, it is observed that when the staff turnover (the ratio of number of individuals that leave the organization during a specified period of time) is high, the software maintenance is not performed properly. This is because employees who originally worked on software products are replaced by new personnel who spend a substantial proportion of the maintenance effort in understanding the system.

2.   Domain expertise: Sometimes, the maintenance team may have little or no knowledge about the system domain and the application domain they are working in. This problem is worsened if documentation is not maintained or is not up-to-date. All this may lead to delay in implementing the changes requested by the user.

Debugging in Software Testing

On successful culmination of software testing, debugging is performed. Debugging is defined as a process of analyzing and removing errors. It is considered necessary in most newly developed software or hardware and in commercial products/personal application programs. For complex products, debugging is done at all levels of testing.

Debugging is considered to be a complex and time-consuming process since it attempts to remove errors at all levels of testing. To perform debugging, a debugger (debugging tool) is used to reproduce the conditions in which the failure occurred, examine the program state, and locate the cause. With the help of a debugger, programmers trace the program execution step by step (evaluating the values of variables) and halt the execution wherever required to reset the program variables. Note that some programming language packages include a debugger for checking the code for errors while it is being written.

Some guidelines that are followed while performing debugging are discussed here.

  1. Debugging is the process of solving a problem. Hence, individuals involved in debugging should understand all the causes of an error before starting with debugging.
  2. No experimentation should be done while performing debugging. Experimental changes often increase the problem by adding new errors instead of removing the existing ones.
  3. When there is an error in one segment of a program, there is a high possibility that other errors also exist in the program. Hence, if an error is found in one segment of a program, the rest of the program should be properly examined for errors.
  4. It should be ensured that the new code added to a program to fix errors is correct and does not introduce any new errors. Thus, to verify the correctness of the new code and to ensure that no new errors are introduced, regression testing should be performed.
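The regression-testing guideline above can be sketched with a small, invented example: suppose a maintenance fix added a 50% cap to a `discount()` function. A regression suite re-runs the pre-fix behaviour alongside the new rule, so the fix is verified without breaking existing functionality. All names here are hypothetical.

```python
import unittest

def discount(price, percent):
    """Discounted price; the maintenance fix caps the discount at 50%."""
    percent = min(percent, 50)            # the fix under test (assumed rule)
    return price * (100 - percent) / 100

class RegressionTests(unittest.TestCase):
    def test_existing_behaviour_unchanged(self):
        # Pre-fix case: a 10% discount must still work exactly as before.
        self.assertEqual(discount(200, 10), 180)

    def test_new_fix(self):
        # New rule: 80% is capped at 50%.
        self.assertEqual(discount(200, 80), 100)

# Run the suite explicitly (avoids unittest.main's argv parsing).
suite = unittest.TestLoader().loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Keeping the old test alongside the new one is exactly what distinguishes regression testing from merely testing the fix.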

The Debugging Process

During debugging, errors are encountered that range from less damaging (like input of an incorrect function) to catastrophic (like system failure, which leads to economic or physical damage). Note that with the increase in the number of errors, the amount of effort required to find their causes also increases.

Once errors are identified in a software system, to debug the problem, a number of steps are followed, which are listed below.

  1. Defect confirmation/identification: A problem is identified in a system and a defect report is created. A software engineer maintains and analyzes this error report and finds solutions to the following questions.
    1. Does a defect exist in the system?
    2. Can the defect be reproduced?
    3. What is the expected/desired behavior of the system?
    4. What is the actual behavior?
  2. Defect analysis: If the defect is genuine, the next step is to understand the root cause of the problem. Generally, engineers debug by starting a debugging tool (debugger) and trying to understand the root cause of the problem by following a step-by-step execution of the program.
  3. Defect resolution: Once the root cause of a problem is identified, the error can be resolved by making an appropriate change to the system that fixes the problem.

When the debugging process ends, the software is retested to ensure that no errors are left undetected. Moreover, it checks that no new errors are introduced in the software while making some changes to it during the debugging process.

Debugging Strategies

As debugging is a difficult and time-consuming task, it is essential to develop a proper debugging strategy. This strategy helps in performing the process of debugging easily and efficiently. The commonly-used debugging strategies are debugging by brute force, induction strategy, deduction strategy, backtracking strategy, and debugging by testing.

The brute force method of debugging is the most commonly used but least efficient method. It is generally used when all other available methods fail. Here, debugging is done by taking memory (or storage) dumps. Typically, the program is loaded with output statements that produce a large amount of information, including intermediate values. Analyzing this information may help to identify the cause of the errors. However, using a memory dump for finding errors requires analyzing a huge amount of mostly irrelevant information, leading to a waste of time and effort.
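The "load the program with output statements" approach above can be sketched as follows. The function and its logging statements are invented for illustration: each step dumps the intermediate state, and the developer reads the resulting log for anomalies.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")

def average(values):
    """Mean of a list, instrumented brute-force style with state dumps."""
    total = 0
    for i, v in enumerate(values):
        total += v
        # Output statement dumping the intermediate state at every step.
        logging.debug("step %d: v=%r total=%r", i, v, total)
    logging.debug("final: total=%r count=%d", total, len(values))
    return total / len(values)

print(average([2, 4, 6]))  # the log reveals each intermediate state
```

The weakness the text notes is visible even here: for realistic inputs the log grows far faster than the fraction of it that is relevant to any one fault.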

Induction Strategy

This strategy is a ‘disciplined thought process’ in which errors are debugged by moving outwards from the particulars to the whole. It assumes that once the symptoms of the errors are identified, and the relationships between them are established, the errors can be detected by just looking at the symptoms and the relationships. To perform the induction strategy, a number of steps are followed, which are listed below.


1. Locating relevant data: All the information about a program is collected to identify the functions, which are executed correctly and incorrectly.

2. Organizing data: The collected data is organized according to importance. The data can consist of possible symptoms of errors, their location in the program, the time at which the symptoms occur during the execution of the program and the effect of these symptoms on the program.

3. Devising hypothesis: The relationships among the symptoms are studied and a hypothesis is devised that provides the hints about the possible causes of errors.

4. Proving hypothesis: In the final step, the hypothesis needs to be proved. This is done by comparing the data in the hypothesis with the original data to ensure that the hypothesis explains the existence of the hints completely. If the hypothesis is unable to explain the existence of the hints, it is either incomplete or the program contains multiple errors.
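The four induction steps above can be sketched on invented observation data: gather pass/fail results, organize them, hypothesize a cause from the failing cases, and then prove the hypothesis against every observation. The inputs and the hypothesis are hypothetical.

```python
# Steps 1-2: locate and organize the relevant data (input -> passed?).
observations = {5: True, 12: True, -3: False, 7: True, -1: False, 0: False}

# Step 3: devise a hypothesis by inspecting the failing inputs.
failing = sorted(x for x, ok in observations.items() if not ok)
print("failing inputs:", failing)   # all non-positive, suggesting a hypothesis

def hypothesis(x):
    """Assumed cause: 'the routine fails for non-positive input'."""
    return x <= 0

# Step 4: prove the hypothesis -- it must explain every observation,
# i.e. predict a failure exactly where a failure was seen.
explains_all = all(hypothesis(x) == (not ok) for x, ok in observations.items())
print("hypothesis explains all observations:", explains_all)
```

Had any passing input also satisfied the hypothesis, step 4 would have failed, indicating (as the text says) an incomplete hypothesis or multiple errors.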

Deduction Strategy

In this strategy, the first step is to identify all the possible causes; then, using the data, each cause is analyzed and eliminated if it is found invalid. Note that like the induction strategy, the deduction strategy is also based on some assumptions. To use this strategy, the following steps are followed.

1. Identifying the possible causes or hypotheses: A list of all the possible causes of errors is formed. Using this list, the available data can be easily structured and analyzed.

2. Eliminating possible causes using the data: The list is examined to recognize the most probable cause of errors and the rest of the causes are deleted.

3. Refining the hypothesis: Analyzing the possible causes one by one and looking for contradictions leads to the elimination of invalid causes. This results in a refined hypothesis containing a few specific possible causes.

4. Proving the hypothesis: This step is similar to the fourth step in induction method.
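The elimination-by-contradiction at the heart of the deduction steps above can be sketched as follows. Each candidate cause predicts a set of symptoms; any cause whose prediction contradicts what was actually observed is eliminated. The causes and symptoms are invented for illustration.

```python
# What was actually observed about the failure.
observed = {"crashes_on_empty_input": True, "crashes_on_large_input": False}

# Step 1: candidate causes, each with the symptoms it would predict.
predictions = {
    "unchecked empty list": {"crashes_on_empty_input": True,
                             "crashes_on_large_input": False},
    "memory exhaustion":    {"crashes_on_empty_input": False,
                             "crashes_on_large_input": True},
    "bad config file":      {"crashes_on_empty_input": True,
                             "crashes_on_large_input": True},
}

# Steps 2-3: eliminate every cause whose prediction contradicts the data.
surviving = [cause for cause, predicted in predictions.items()
             if predicted == observed]
print(surviving)  # only causes consistent with every observation remain
```

The surviving causes form the refined hypothesis, which is then proved (step 4) exactly as in the induction method.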

Backtracking Strategy

This method is effectively used for locating errors in small programs. According to this strategy, when an error has occurred, one starts tracing the program backward one step at a time, evaluating the values of all variables, until the cause of the error is found. This strategy is useful, but in a large program with many thousands of lines of code, the number of backward paths increases and becomes unmanageably large.
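Backtracking can be sketched by recording the program state at each step and then walking the trace backwards from the bad result until the state is correct again. The function below is invented, with a bug deliberately seeded (the value 3 is not squared).

```python
def buggy_sum_of_squares(values):
    """Sum of squares with a seeded bug, recording state at every step."""
    trace, total = [], 0
    for v in values:
        total += v * v if v != 3 else v   # seeded bug: 3 is not squared
        trace.append((v, total))          # snapshot after each step
    return total, trace

result, trace = buggy_sum_of_squares([1, 2, 3, 4])

# What the running total *should* have been after each step.
expected = [0]
for v, _ in trace:
    expected.append(expected[-1] + v * v)

# Walk backwards from the wrong final result until the recorded state
# matches the expected one; the error was introduced just after that step.
step = len(trace) - 1
while step >= 0 and trace[step][1] != expected[step + 1]:
    step -= 1
print("last correct step:", step)  # the fault lies at step + 1
```

This also shows why the text limits the technique to small programs: with branching code, each step backwards multiplies the paths that must be examined.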

Debugging by Testing

This debugging method can be used in conjunction with debugging by induction and debugging by deduction methods. Additional test cases are designed that help in obtaining information to devise and prove a hypothesis in induction method and to eliminate the invalid causes and refine the hypothesis in deduction method. Note that the test cases used in debugging are different from the test cases used in testing process. Here, the test cases are specifically designed to explore the internal program state.

Software Testing Tools

Software testing can be performed either manually or using automated testing tools. In manual testing, test cases are generated, the software is executed, and the results produced are documented manually. Hence, manual testing is considered to be costly and time-consuming. To reduce the time and cost, automated testing is used. There are many testing tools available that are useful at several places while testing a software product. These tools can be categorized as static testing tools and dynamic testing tools.

Static testing tools: These tools test the software without executing it; rather, they are concerned with analyzing the code or documentation for syntax checking, consistency, and so on. Static testing can be manual or automated with the use of static analysis tools. Static analysis tools examine the source code of a program and highlight statements with wrong syntax, undefined symbols or variables, use of uninitialized variables, and so on. They also check for flaws in the logic flow of the program.
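A minimal static check in the spirit just described can be sketched with Python's `ast` module: the source is parsed, never executed, and the checker flags syntax errors and names that are used but never assigned. This is only a sketch of what real static analysis tools do, and the sample inputs are invented.

```python
import ast
import builtins

def static_check(source):
    """Report syntax errors and possibly undefined names, without running
    the code."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: line {exc.lineno}"]
    # Names bound somewhere in the module (assignment targets).
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    # Names read somewhere in the module.
    used = {n.id for n in ast.walk(tree)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
    return [f"possibly undefined: {name}"
            for name in sorted(used - assigned - set(dir(builtins)))]

print(static_check("x = 1\ny = x + z\n"))   # flags z as possibly undefined
print(static_check("def f(:"))              # flags the syntax error
```

Because nothing is executed, such a tool can run on code that does not yet compile or that would be unsafe to run, which is precisely the appeal of static analysis.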

Dynamic testing tools: These tools interact with the software during execution and help testers by providing useful information about the program at different events. This information may include the number of times a particular statement is executed, whether all the branches of a decision point have been exercised, the minimum and maximum values of variables, and so on. While performing testing with automated tools, the following points should be noted.

  1. Clear and reasonable expectations should be established in order to know what can and what cannot be accomplished with automated testing in the organization.
  2. There should be a clear understanding of the requirements that must be met in order to achieve successful automated testing. This requires the following.
    1. Detailed, reusable test cases, which contain exact expected results, and a stand-alone test environment with a restorable database
    2. Technical personnel able to use the tools effectively
    3. An effective manual testing process, which must exist before automation begins.
  3. Testing tool should be cost-effective. It should involve minimum technical personnel and should ensure that test cases developed for manual testing are also useful for automated testing.
  4. Select a tool that allows implementation of automated testing in a way that conforms to the specified long-term testing strategy.
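The execution-count information a dynamic testing tool gathers (described above) can be sketched with Python's tracing hook: while the program under test runs, a trace function counts how often each line executes, so untouched branches show up as missing or low-count lines. The `classify` function is an invented example.

```python
import sys
from collections import Counter

counts = Counter()

def tracer(frame, event, arg):
    """Count every 'line' event while tracing is active."""
    if event == "line":
        counts[frame.f_lineno] += 1
    return tracer

def classify(values):
    result = []
    for v in values:
        if v >= 0:
            result.append("pos")   # taken twice for the sample input
        else:
            result.append("neg")   # taken once for the sample input
    return result

sys.settrace(tracer)               # switch dynamic tracing on
classify([1, -2, 3])
sys.settrace(None)                 # and off again
print(dict(counts))                # per-line execution counts
```

Real dynamic tools build coverage reports and variable-range summaries on top of exactly this kind of instrumentation.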

Many automated tools are available for performing the testing process in an effective and efficient manner. Automated tools like Mothra are used to design test cases, evaluate their adequacy, verify the correctness of input and output, find and remove errors, and control and summarize the test. Similarly, Bug Trapper is used to perform white box testing. This tool traces the path of execution and captures the bug along with the path of execution and the different input values that resulted in the error. Some other commonly used automated tools are listed in Table.

                                           Table Software Testing Tools


Vendor                      Testing Tools

Segue Software              • SilkTest
                            • SilkPerformer
                            • SilkCentral

IBM/Rational                • RequisitePro
                            • Robot
                            • ClearCase

Mercury Interactive         • WinRunner
                            • LoadRunner
                            • TestDirector

Compuware                   • Reconcile
                            • QALoad
                            • QARun

Object-Oriented Testing

The shift from traditional to object-oriented environment involves looking at and reconsidering old strategies and methods for testing the software. The traditional programming consists of procedures operating on data, while the object-oriented paradigm focuses on objects that are instances of classes. In object-oriented (OO) paradigm, software engineers identify and specify the objects and services provided by each object. In addition, interaction of any two objects and constraints on each identified object are also determined. The main advantages of OO paradigm include increased reusability, reliability, interoperability, and extendibility.

With the adoption of OO paradigm, almost all the phases of software development have changed in their approach, environments, and tools. Though OO paradigm helps make the designing and development of software easier, it may pose new kind of problems. Thus, testing of software developed using OO paradigm has to deal with the new problems also. Note that object-oriented testing can be used to test the object-oriented software as well as conventional software.

An OO program should be tested at different levels to uncover all the errors. At the algorithmic level, each module (or method) of every class in the program should be tested in isolation. For this, white-box testing can be applied easily. As classes form the main unit of an object-oriented program, testing of classes is the main concern while testing an OO program. At the class level, every class should be tested as an individual entity. At this level, the testing is conducted by the programmers who are involved in the development of the class. Test cases can be drawn from the requirements specification, models, and the language used. In addition, methods such as boundary value analysis are extensively used. After testing at the class level, cluster-level testing should be performed. As classes are combined (or integrated) to form a small subsystem (also known as a cluster), testing each cluster individually is necessary. At this level, the focus is on testing the components that execute concurrently as well as on interclass interaction. Hence, testing at this level may be viewed as integration testing where the units to be integrated are classes. Once all the clusters in the system are tested, system-level testing begins. At this level, interaction among clusters is tested.
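As a sketch of class-level testing, the snippet below exercises a hypothetical `Account` class as an individual entity using Python's `unittest`, choosing inputs at the boundaries of its valid ranges. All class and test names here are invented for illustration.

```python
import unittest

class Account:
    """A hypothetical class used to illustrate class-level testing."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class TestAccount(unittest.TestCase):
    """Class-level tests: the class is exercised as an individual entity,
    with inputs chosen by boundary value analysis."""
    def test_deposit_boundaries(self):
        acct = Account()
        with self.assertRaises(ValueError):
            acct.deposit(0)        # boundary: largest invalid amount
        acct.deposit(1)            # boundary: smallest valid amount
        self.assertEqual(acct.balance, 1)

    def test_withdraw_above_balance(self):
        acct = Account()
        acct.deposit(10)
        with self.assertRaises(ValueError):
            acct.withdraw(11)      # boundary: one above the balance

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAccount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())      # True
```

At the cluster level, the same style of test would be written against a small group of collaborating classes instead of a single one.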

Usually, there is a misconception that if individual classes are well designed and have proved to work in isolation, then there is no need to test the interactions between two or more classes when they are integrated. However, this is not true because sometimes there can be errors, which can be detected only through integration of classes. Also, it is possible that if a class does not contain a bug, it may still be used in a wrong way by another class, leading to system failure.

Developing Test Cases in Object-oriented Testing

The methods used to design test cases in OO testing are based on the conventional methods. However, these test cases should encompass special features so that they can be used in the object-oriented environment. The points that should be noted while developing test cases in an object-oriented environment are listed below.

  1. Each test case should explicitly specify the class it is intended to test.
  2. The purpose of each test case should be mentioned.
  3. External conditions that should exist while conducting a test should be clearly stated with each test case.
  4. All the states of the object to be tested should be specified.
  5. Instructions to understand and conduct the test cases should be provided with each test case.

Object-oriented Testing Methods

As many organizations are currently using or targeting to switch to the OO paradigm, the importance of OO software testing is increasing. The methods used for performing object-oriented testing are discussed in this section.

State-based Testing

State-based testing is used to verify whether the methods (a procedure that is executed by an object) of a class are interacting properly with each other. This testing seeks to exercise the transitions among the states of objects based upon the identified inputs.

For this testing, finite-state machine (FSM) or state-transition diagram representing the possible states of the object and how state transition occurs is built. In addition, state-based testing generates test cases, which check whether the method is able to change the state of object as expected. If any method of the class does not change the object state as expected, the method is said to contain errors.

To perform state-based testing, a number of steps are followed, which are listed below.

  1. Derive a new class from an existing class with some additional features, which are used to examine and set the state of the object.
  2. Next, the test driver is written. This test driver contains a main program to create an object, send messages to set the state of the object, send messages to invoke methods of the class that is being tested and send messages to check the final state of the object.
  3. Finally, stubs are written for the untested methods that are called during testing.
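The first two steps above can be sketched as follows. The `Connection` class and its states are hypothetical, and the stub step is omitted because this small class calls no untested collaborators.

```python
class Connection:
    """Hypothetical class under test; its methods move it between states."""
    def __init__(self):
        self._state = "CLOSED"

    def open(self):
        if self._state != "CLOSED":
            raise RuntimeError("already open")
        self._state = "OPEN"

    def close(self):
        self._state = "CLOSED"

# Step 1: derive a new class with additional features that are used
# to examine and set the state of the object.
class TestableConnection(Connection):
    def get_state(self):
        return self._state

    def set_state(self, state):
        self._state = state

# Step 2: the test driver creates an object, sets its state, invokes a
# method of the class, and checks the final state against the expected
# transition of the finite-state machine.
def test_open_transition():
    conn = TestableConnection()
    conn.set_state("CLOSED")
    conn.open()
    assert conn.get_state() == "OPEN", "open() must move CLOSED -> OPEN"

test_open_transition()
print("state transition verified")
```

If `open()` left the object in any state other than `OPEN`, the driver's final check would fail, which is precisely how state-based testing flags an erroneous method.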

Fault-based Testing

Fault-based testing is used to determine or uncover a set of plausible faults. In other words, the tester's focus in this testing is to detect the presence of possible faults. Fault-based testing starts by examining the analysis and design models of the OO software, as these models may provide an idea of problems in the implementation of the software. With knowledge of the system under test and experience in the application domain, the tester designs test cases where each test case targets particular faults.

The effectiveness of this testing depends largely on the tester's experience in the application domain and with the system under test. If the tester fails to perceive the real faults in the system as plausible, testing may leave many faults undetected. However, examining the analysis and design models may enable the tester to detect a large number of errors with less effort. As testing only proves the existence and not the absence of errors, this approach is considered an effective method and hence is often used when the security or safety of a system is to be tested.

Integration testing applied to OO software aims to uncover possible faults in both operation calls and the various types of messages (such as a message sent to invoke an object). These faults may be unexpected outputs, incorrect messages or operations, and incorrect invocations. The faults can be recognized by determining the behavior of all operations performed to invoke the methods of a class.

Scenario-based Testing

Scenario-based testing is used to detect errors that are caused by incorrect specifications and improper interactions among various segments of the software. Incorrect interactions often lead to incorrect outputs that can cause malfunctioning of some segments of the software. The use of scenarios in testing is a common way of describing how a user might accomplish a task or achieve a goal within a specific context or environment. Note that these scenarios are context- and user-specific rather than product-specific. Generally, the structure of a scenario includes the following points.

  1. A condition under which the scenario runs.
  2. A goal to achieve, which can also be a name of the scenario.
  3. A set of steps of actions.
  4. An end condition at which the goal is achieved.
  5. A possible set of extensions written as scenario fragments.
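A scenario with this structure can be captured as a small record, which makes the five points explicit. The ATM scenario below is a hypothetical example, not drawn from any particular system.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """Mirrors the five structural points of a scenario listed above."""
    goal: str                      # also serves as the scenario's name
    condition: str                 # condition under which the scenario runs
    steps: list                    # ordered set of user/system actions
    end_condition: str             # state in which the goal is achieved
    extensions: list = field(default_factory=list)  # scenario fragments

withdraw_cash = Scenario(
    goal="Withdraw cash from ATM",
    condition="User holds a valid card and has a positive balance",
    steps=["insert card", "enter PIN", "choose amount", "take cash"],
    end_condition="Balance is reduced by the chosen amount",
    extensions=["PIN rejected three times: card is retained"],
)

print(withdraw_cash.goal)          # Withdraw cash from ATM
print(len(withdraw_cash.steps))    # 4
```

Each extension fragment can itself be expanded into a full `Scenario`, which is how unusual or exceptional paths are deferred to later in the testing process.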

Scenario-based testing combines all the classes that support a use-case (scenarios are a subset of use-cases) and executes a test case to test them. Execution of all the test cases ensures that all methods in all the classes are executed at least once during testing. However, testing all the objects (present in the classes combined together) collectively is difficult. Thus, rather than testing all objects collectively, they are tested using either a top-down or bottom-up integration approach.

This testing is considered to be the most effective method, as scenarios can be organized in such a manner that the most likely scenarios are tested first, with unusual or exceptional scenarios considered later in the testing process. This satisfies a fundamental principle of testing: most testing effort should be devoted to those paths of the system that are most heavily used.

Challenges in Testing Object-oriented Programs

Traditional testing methods are not directly applicable to OO programs as they involve OO concepts including encapsulation, inheritance, and polymorphism. These concepts lead to issues, which are yet to be resolved. Some of these issues are listed below.

  1. Encapsulation of attributes and methods in a class may create obstacles during testing. As methods are invoked through an object of the corresponding class, testing cannot be accomplished without an object. In addition, the state of the object at the time a method is invoked affects its behavior. Hence, testing depends not only on the object but also on the state of the object, which is very difficult to acquire.
  2. Inheritance and polymorphism also introduce problems that are not found in traditional software. Test cases designed for a base class are not always applicable to a derived class (especially when the derived class is used in a different context). Thus, most testing methods require some kind of adaptation in order to function properly in an OO environment.
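The second point can be illustrated with a short sketch: a test case written for a hypothetical `Stack` base class is not applicable to a `BoundedStack` derived class that is used in a capacity-limited context. Both classes are invented for illustration.

```python
class Stack:
    """Hypothetical base class."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class BoundedStack(Stack):
    """Derived class used in a different context: capacity is limited."""
    def __init__(self, capacity):
        super().__init__()
        self.capacity = capacity
    def push(self, item):
        if len(self._items) >= self.capacity:
            raise OverflowError("stack is full")
        super().push(item)

def base_test(stack):
    """A test case designed for the base class: push 3 items, pop 3."""
    for i in range(3):
        stack.push(i)
    return [stack.pop() for _ in range(3)]

print(base_test(Stack()))          # [2, 1, 0] -- passes for the base class
try:
    base_test(BoundedStack(capacity=2))
except OverflowError:
    print("base-class test case is not applicable to the derived class")
```

The base-class test case must therefore be adapted (here, by respecting the capacity) before it can be reused for the derived class.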

Software Testing Techniques

Once the software is developed, it should be tested in a proper manner before the system is delivered to the user. For this, two techniques that provide systematic guidance for designing tests are used. These techniques are discussed here.

Levels of Software Testing

The software is tested at different levels. Initially, the individual units are tested and, once they are tested, they are integrated and checked for the interfaces established between them. After this, the entire software is tested to ensure that the output produced is according to user requirements. There are four levels of software testing, namely, unit testing, integration testing, system testing, and acceptance testing.

Software Testing Strategies – Types of Software Testing Strategies

To perform testing in a planned and systematic manner, a software testing strategy is developed. A testing strategy is used to identify the levels of testing to be applied along with the methods, techniques, and tools to be used during testing. This strategy also decides the test cases and test specifications, and puts them together for execution.

Developing a test strategy, which efficiently meets the requirements of an organization, is critical to the success of software development in that organization. Therefore, a software testing strategy should contain complete information about the procedure to perform testing and the purpose and requirements of testing.

The choice of software testing strategy is highly dependent on the nature of the developed software. For example, if the software is highly data intensive, then a strategy that properly checks structures and values to ensure that all inputs given to the software are correct and complete should be developed. Similarly, if it is transaction intensive, then the strategy should be able to check the flow of all the transactions. The design and architecture of the software are also useful in choosing a testing strategy. A number of software testing strategies are developed in the testing process. All these strategies provide the tester a template, which is used for testing. Generally, all testing strategies have the following characteristics.

  1. Testing proceeds in an outward manner. It starts from testing the individual units, progresses to integrating these units, and finally, moves to system testing.
  2. Testing techniques used during different phases of software development are different.
  3. Testing is conducted by the software developer as well as by an independent test group (ITG).
  4. Testing and debugging should not be used synonymously. However, any testing strategy must accommodate debugging as well.

Types of Software Testing Strategies

There are different types of software testing strategies, which are selected by the testers depending upon the nature and size of the software. The commonly used software testing strategies are listed below.


  1. Analytic testing strategy: This uses formal and informal techniques to assess and prioritize risks that arise during software testing. It takes a complete overview of the requirements, design, and implementation of objects to determine the motive of testing. In addition, it gathers complete information about the software, the targets to be achieved, and the data required for testing the software.
  2. Model-based testing strategy: This strategy tests the functionality of the software according to the real world scenario (like software functioning in an organization). It recognizes the domain of data and selects suitable test cases according to the probability of errors in that domain.
  3. Methodical testing strategy: It tests the functions and status of software according to the checklist, which is based on user requirements. This strategy is also used to test the functionality, reliability, usability, and performance of the software.
  4. Process-oriented testing strategy: It tests the software according to already existing standards such as the IEEE standards. In addition, it checks the functionality of the software by using automated testing tools.
  5. Dynamic testing strategy: This tests the software after having a collective decision of the testing team. Along with testing, this strategy provides information about the software such as test cases used for testing the errors present in it.
  6. Philosophical testing strategy: It tests the software assuming that any component of the software can stop functioning anytime. It takes help from software developers, users and systems analysts to test the software.

A testing strategy should be developed with the intent to provide the most effective and efficient way of testing the software. While developing a testing strategy, some questions arise such as: when and what type of testing is to be done? What are the objectives of testing? Who is responsible for performing testing? What outputs are produced as a result of testing? The inputs that should be available while developing a testing strategy are listed below.

  1. Type of development project
  2. Complete information about the hardware and software components that are required to develop the software
  3. Risks involved
  4. Description of the resources that are required for testing
  5. Description of all testing methods that are required to test various phases of SDLC
  6. Details of the attributes that the software cannot provide; for example, software cannot describe its own limitations.

The output produced by the software testing strategy includes a detailed document, which indicates the entire test plan including all test cases used during the testing phase. A testing strategy also specifies a list of testing issues that need to be resolved.

An efficient software testing strategy includes two types of tests, namely, low-level tests and high-level tests. Low-level tests ensure correct implementation of small part of the source code and high-level tests ensure that major software functions are validated according to user requirements. A testing strategy sets certain milestones for the software such as final date for completion of testing and the date of delivering the software. These milestones are important when there is limited time to meet the deadline.

In spite of these advantages, there are certain issues that need to be addressed for successful implementation of software testing strategy. These issues are discussed here.

  1. In addition to detecting errors, a good testing strategy should also assess portability and usability of the software.
  2. It should specify software requirements in a quantifiable manner (such as expected outputs, test effectiveness, and mean time to failure), which should be clearly stated in the test plan.
  3. It should improve testing method continuously to make it more effective.
  4. Test plans that support rapid cycle testing should be developed. The feedback from rapid cycle testing can be used to control the corresponding strategies.
  5. It should develop robust software, which is able to test itself using debugging techniques.
  6. It should conduct formal technical reviews to evaluate the test cases and test strategy. The formal technical reviews can detect errors and inconsistencies present in the testing process.

Test Case Design | Software Testing

A test case provides the description of inputs and their expected outputs to observe whether the software or a part of the software is working correctly. IEEE defines test case as ‘a set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition such as to exercise a particular program path or to verify compliance with a specific requirement.’ Generally, a test case is associated with details like identifier, name, purpose, required inputs, test conditions, and expected outputs.
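The details a test case is associated with can be sketched as a simple record. The `run` helper and the `TC-001` example below are hypothetical, with Python's built-in `max` standing in as the item under test.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Fields mirror the details a test case is associated with."""
    identifier: str
    name: str
    purpose: str
    inputs: tuple
    precondition: str
    expected_output: object

def run(case, func):
    """Execute the function under test and compare actual vs. expected."""
    actual = func(*case.inputs)
    return "PASS" if actual == case.expected_output else "FAIL"

case = TestCase(
    identifier="TC-001",
    name="max of two numbers",
    purpose="verify that max returns the larger operand",
    inputs=(3, 7),
    precondition="none",
    expected_output=7,
)
print(run(case, max))   # PASS
```

Pairing every set of inputs with an explicit expected output is what lets a test observe whether the software is working correctly, as the IEEE definition requires.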

Incomplete and incorrect test cases lead to incorrect and erroneous test outputs. To avoid this, the test cases must be prepared in such a way that they check the software with all possible inputs. This process is known as exhaustive testing and the test case, which is able to perform exhaustive testing, is known as ideal test case. Generally, a test case is unable to perform exhaustive testing; therefore, a test case that gives satisfactory results is selected. In order to select a test case, certain questions should be addressed.

  1. How to select a test case?
  2. On what basis are certain elements of program included or excluded from a test case?

To provide an answer to these questions, test selection criterion is used that specifies the conditions to be met by a set of test cases designed for a given program. For example, if the criterion is to exercise all the control statements of a program at least once, then a set of test cases, which meets the specified condition should be selected.

The process of generating test cases helps to identify the problems that exist in the software requirements and design. For generating a test case, firstly the criterion to evaluate a set of test cases is specified and then the set of test cases satisfying that criterion is generated. There are two methods used to generate test cases, which are listed below.

  1. Code-based test case generation: This approach, also known as structure-based test case generation, examines the software code to generate test cases. It considers only the actual software code and is not concerned with the user requirements. Test cases developed using this approach are generally used for performing unit testing. These test cases can easily test statements, branches, special values, and symbols present in the unit being tested.
  2. Specification-based test case generation: This approach uses specifications, which indicate the functions that are produced by the software to generate test cases. In other words, it considers only the external view of the software to generate test cases. It is generally used for integration testing and system testing to ensure that the software is performing the required task. Since this approach considers only the external view of the software, it does not test the design decisions and may not cover all statements of a program. Moreover, as test cases are derived from specifications, the errors present in these specifications may remain uncovered.
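The difference between the two approaches can be sketched on a hypothetical `grade` function: the structural cases are chosen by reading the branches in the code, while the specification-based cases come from boundary value analysis of the stated pass mark. Both the function and the cases are invented for illustration.

```python
def grade(score):
    """Hypothetical unit: specification says scores are 0-100, pass mark 40."""
    if score >= 40:
        return "pass"
    return "fail"

# Code-based (structural) test cases: one input per branch of the code,
# chosen by looking at the source, not at the specification.
structural_cases = [(50, "pass"), (10, "fail")]

# Specification-based test cases: derived only from the external view,
# here by boundary value analysis around the pass mark of 40.
spec_cases = [(39, "fail"), (40, "pass"), (41, "pass")]

for score, expected in structural_cases + spec_cases:
    assert grade(score) == expected, (score, expected)
print("all test cases pass")
```

Note how the structural cases would not notice an off-by-one error at the boundary, while the specification-based cases would not exercise design decisions hidden inside the code; this is why the two approaches are used at different testing levels.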

Several tools known as test case generators are used for generating test cases. In addition to test case generation, these tools specify the components of the software that are to be tested. An example of test case generator is the ‘astra quick test’, which captures business processes in the visual map and generates data-driven tests automatically.

Test Case Specifications

A test plan neither describes the details of testing individual units nor specifies the test cases to be used for testing them. Thus, test case specification is done in order to test each unit separately. Depending on the testing method specified in the test plan, the features of the unit to be tested are determined. The overall approach stated in the test plan is refined into two parts: specific test methods and the evaluation criteria. Based on these test methods and the criteria, the test cases to test the unit are specified.

For each unit being tested, these test case specifications describe the test cases, required inputs for test cases, test conditions, and the expected outputs from the test cases. Generally, it is required to specify the test cases before using them for testing. This is because the effectiveness of testing depends to a great extent on the nature of test cases.

Test case specifications are written in the form of a document. This is because the quality of test cases is evaluated by performing a test case review, which requires a formal document. The review of test case document ensures that test cases satisfy the chosen criteria and conform to the policy specified in the test plan. Another benefit of specifying test cases in a formal document is that it helps testers to select an effective set of test cases.

Test Plan | Software Testing

A test plan describes how testing would be accomplished. It is a document that specifies the purpose, scope, and method of software testing. It determines the testing tasks and the persons involved in executing those tasks, test items, and the features to be tested. It also describes the environment for testing and the test design and measurement techniques to be used. Note that a properly defined test plan is an agreement between testers and users describing the role of testing in software.

A complete test plan helps the people who are not involved in test group to understand why product validation is needed and how it is to be performed. However, if the test plan is not complete, it might not be possible to check how the software operates when installed on different operating systems or when used with other software. To avoid this problem, IEEE states some components that should be covered in a test plan. These components are listed in Table.

                                           Table Components of a Test Plan




Component                  Description

Responsibilities           Assigns responsibilities to different people and keeps them focused.

Schedule                   Avoids any misinterpretation of schedules.

Approach                   Provides an abstract of the entire process and outlines specific tests. The testing scope, schedule, and duration are also outlined.

Communication              A communication plan (who, what, when, and how, concerning the people involved) is developed.

Risk analysis              Identifies areas that are critical for success.

Defect reporting           Specifies the way in which a defect should be documented so that it may be reproduced, retested, and fixed.

Environment                Describes the data, interfaces, work area, and the technical environment used in testing. All this is specified to reduce or eliminate misunderstandings and sources of potential delay.


A carefully developed test plan facilitates effective test execution, proper analysis of errors, and preparation of error report. To develop a test plan, a number of steps are followed, as listed below.

  1. Set objectives of test plan: Before developing a test plan, it is necessary to understand its purpose. But, before determining the objectives of a test plan, it is necessary to determine the objectives of the software. This is because the objectives of a test plan are highly dependent on that of software. For example, if the objective of the software is to accomplish all user requirements, then a test plan is generated to meet this objective.
  2. Develop a test matrix: A test matrix indicates the components of the software that are to be tested. It also specifies the tests required to check these components. Test matrix is also used as a test proof to show that a test exists for all components of the software that require testing. In addition, test matrix is used to indicate the testing method, which is used to test the entire software.
  3. Develop test administrative component: A test plan must be prepared within a fixed time so that software testing can begin as soon as possible. The purpose of administrative component of a test plan is to specify the time schedule and resources (administrative people involved while developing the test plan) required to execute the test plan. However, if the implementation plan (plan that describes how the processes in the software are carried out) of software changes, the test plan also changes. In this case, the schedule to execute the test plan also gets affected.
  4. Write the test plan: The components of a test plan such as its objectives, test matrix, and administrative component are documented. All these documents are then collected together to form a complete test plan. These documents are organized either in an informal or formal manner.
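The test matrix described above can be sketched as a simple mapping from software components to the tests that cover them, which also serves as the "test proof" that every component has a test. The component and test names below are invented for illustration.

```python
# Components of a hypothetical software product under test.
components = ["login", "search", "checkout"]

# The test matrix maps each component to the tests that check it.
test_matrix = {
    "login":    ["unit: password hashing", "integration: session creation"],
    "search":   ["unit: query parsing"],
    "checkout": ["unit: total calculation", "system: payment flow"],
}

# Test proof: every component of the software must have at least one test.
untested = [c for c in components if not test_matrix.get(c)]
assert not untested, f"components without tests: {untested}"
print("every component is covered by at least one test")
```

If the implementation plan changes and a new component is added without a corresponding entry, the check above fails, flagging the gap before testing begins.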

                                      Steps Involved in a Test Plan

In the informal manner, all the documents are collected and kept together. The testers read all the documents to extract information required for testing the software. On the other hand, in a formal manner, the important points are extracted from the documents and kept together. This makes it easy for testers to extract important information, which they require during software testing.

A test plan has many sections, which are listed below.

  1. Overview: Describes the objectives and functions of the software to be performed. It also describes the objectives of test plan such as defining responsibilities, identifying test environment and giving a complete detail of the sources from where the information is gathered to develop the test plan.
  2. Test scope: Specifies features and combination of features, which are to be tested. These features may include user manuals or system documents. It also specifies the features and their combinations that are not to be tested.
  3. Test methodologies: Specifies the types of tests required for testing features and combination of these features such as regression tests and stress tests. It also provides description of sources of test data along with how test data is useful to ensure that testing is adequate such as selection of boundary or null values. In addition, it describes the procedure for identifying and recording test results.
  4. Test phases: Identifies different types of tests such as unit testing, integration testing and provides a brief description of the process used to perform these tests. Moreover, it identifies the testers that are responsible for performing testing and provides a detailed description of the source and type of data to be used. It also describes the procedure of evaluating test results and describes the work products, which are initiated or completed in this phase.
  5. Test environment: Identifies the hardware, software, automated testing tools, operating systems, compilers, and sites required to perform testing, as well as the staffing and training needs.
  6. Schedule: Provides detailed schedule of testing activities and defines the responsibilities to respective people. In addition, it indicates dependencies of testing activities and the time frames for them.
  7. Approvals and distribution: Identifies the individuals who approve a test plan and its results. It also identifies the people to whom the test plan document(s) is distributed.

Software Testing – What is Software Testing? Characteristics of Software Testing

After the implementation phase, the testing phase begins. Testing of software is critical, since testing determines the correctness, completeness and quality of the software being developed. Its main objective is to detect errors in the software.


Errors prevent software from producing outputs according to user requirements. They occur if some part of the developed system is found to be incorrect, incomplete, or inconsistent. Errors can broadly be classified into three types, namely, requirements errors, design errors, and programming errors. To avoid these errors, it is necessary that: requirements are examined for conformance to user needs, software design is consistent with the requirements and notational convention, and the source code is examined for conformance to the requirements specification, design documentation and user expectations. All this can be accomplished through efficacious means of software testing.

The activities involved in the testing phase basically evaluate the capability of the developed system and ensure that the system meets the desired requirements. It should be noted that testing is fruitful only if it is performed in the correct manner. Through effective software testing, the software can be examined for correctness, comprehensiveness, consistency, and adherence to standards. This helps in delivering high-quality software products and lowering maintenance costs, thus leading to more satisfied users.

Software Testing Basic

Software testing determines the correctness, completeness and quality of software being developed. IEEE defines testing as ‘the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.’

Software testing is closely related to the terms verification and validation. Verification refers to the process of ensuring that the software is developed according to its specifications; techniques like reviews, analysis, inspections, and walkthroughs are commonly used for verification. Validation, on the other hand, refers to the process of checking that the developed software meets the requirements specified by the user. Verification and validation can be summarized as follows.

Verification: Is the software being developed in the right way?

Validation: Is the right software being developed?

Software testing is performed either manually or by using automated tools to make sure that the software is functioning in accordance with the user requirements. Various advantages associated with testing are listed below.

  1. It removes errors, which prevent software from producing outputs according to user requirements.
  2. It removes errors that lead to software failure.
  3. It ensures that the software conforms to business as well as user’s needs.
  4. It ensures that the software is developed according to user requirements.
  5. It improves the quality of the software by removing maximum possible errors from it.

                      Advantages of Software Testing

Software testing comprises a set of activities, which are planned before testing begins. These activities are carried out for detecting errors that occur during various phases of SDLC. The role of testing in the software development life cycle is listed in Table.

                        Table Role of Testing in Various Phases of SDLC

Software Development Phase | Role of Testing

Requirements specification

1. To identify the test strategy.
2. To check the sufficiency of requirements.
3. To create functional test conditions.

Design

4. To check the consistency of design with the requirements.
5. To check the sufficiency of design.
6. To create structural and functional test conditions.

Coding

7. To check the consistency of implementation with the design.
8. To check the sufficiency of implementation.
9. To create structural and functional test conditions for programs/units.

Testing

10. To check the sufficiency of the test plan.
11. To test the application programs.

Installation and maintenance

12. To put the tested system under operation.
13. To make changes in the system and retest the modified system.

Software testing is aimed at identifying any bugs, errors, faults, or failures present in the software. A bug is a logical mistake caused by a software developer while writing the software code. An error is the measure of deviation of the outputs given by the software from the outputs expected by the user. A fault is a condition that leads to malfunctioning of the software; such malfunctioning can be caused by several reasons, such as a change in the design, architecture or software code. A failure is the state of software in which it is unable to perform functions according to user requirements; a defect that causes an error in operation or has a negative impact is called a failure. Bugs, errors, faults and failures prevent the software from performing efficiently and hence cause it to produce unexpected outputs. Errors can be present in the software due to the following reasons.

  1. Programming errors: Programmers can make mistakes while developing the source code.
  2. Unclear requirements: The user is not clear about the desired requirements or the developers are unable to understand the user requirements in a clear and concise manner.
  3. Software complexity: The greater the complexity of the software, the more the scope of committing an error (especially by an inexperienced developer).
  4. Changing requirements: The users usually keep on changing their requirements, and it becomes difficult to handle such changes in the later stage of development process. Therefore, there are chances of making mistakes while incorporating these changes in the software.
  5. Time pressures: Maintaining schedule of software projects is difficult. When deadlines are not met, the attempt to speed up the work causes errors.
  6. Poorly documented code: If the code is not well documented or well written, then maintaining and modifying it becomes difficult. This causes errors to occur.

Note: Here, ‘error’ is used as a general term covering ‘bugs’, ‘faults’, and ‘failures’.

Testing is an organizational issue, which is performed either by the software developers (who originally developed the software) or by an independent test group (ITG), which comprises software testers. The software developers are considered to be the best persons to perform testing as they have the best knowledge about the software. However, since software developers are involved in the development process, they may have their own interest to show that the software is error-free, meets user requirements, and is within schedule and budget. This vested interest hinders the process of testing.

To avoid this problem, the task of testing is assigned to an Independent Test Group (ITG), which is responsible to detect errors that may have been neglected by the software developers. ITG tests the software without any discrimination since the group is not directly involved in the development process. However, the testing group does not completely take over the testing process, instead it works with the software developers in the software project to ensure that testing is performed in an efficient manner. During the testing process, developers are responsible for correcting the errors uncovered by the testing group.

Generally, an independent test group forms a part of the software development project team. This is because the group becomes involved during the specification activity and stays involved (planning and specifying test procedures) throughout the development process.

Various advantages and disadvantages associated with independent test group are listed in Table.

        Table Advantages and Disadvantages of Independent Test Group



Advantages

  1. ITG can more efficiently find defects related to interaction among different modules, system usability and performance, and many other special cases.
  2. ITG is a better solution than leaving testing entirely to the developers, because developers often have neither the training nor the motivation for testing.
  3. Test groups can have a better perception of how reliable the software is before delivering it to the user.

Disadvantages

  1. ITG may perform some tests that have already been performed by the developers. This results in duplication of effort as well as wastage of time.
  2. It is essential for the test group to be physically collocated with the design group; otherwise, problems may arise.
  3. Keeping a separate group for testing results in extra cost to the organization.

To plan and perform testing, software testers should have the knowledge about the function for which the software has been developed, the inputs and how they can be combined, and the environment in which the software will eventually operate. This process is time-consuming and requires technical sophistication and proper planning on the part of the testers. To achieve technical know-how, testers are required to possess strong development skills as well as knowledge of concepts like graph theory, algorithms, and formal languages. Other factors that should be kept in mind while performing testing are listed below.

  1. Time available to perform testing
  2. Training required to acquaint testers with the software
  3. Attitude of testers
  4. Relationship between testers and developers.

Note: Along with software testers, customers, end-users, and management also play an important role in software testing.

Guidelines of Software Testing

There are certain rules and guidelines that are followed during software testing. These guidelines act as a standard to test the software and make testing more effective and efficient. The commonly used software testing guidelines are listed below.

  1. Define the expected output: When programs are executed during testing, they may or may not produce the expected outputs due to different types of errors present in the software. To avoid this, it is necessary to define the expected output before software testing begins. Without knowledge of the expected results, testers may fail to detect an erroneous output.
  2. Inspect the output of each test completely: The result of every test should be examined thoroughly rather than skimmed. An erroneous output may differ from the expected output only slightly, and a cursory inspection can let such an error pass unnoticed.
  3. Include test cases for invalid and unexpected conditions: Generally, software produces correct outputs when it is tested using accurate inputs. However, if unexpected input is given to the software, it may produce erroneous outputs. Hence, test cases that detect errors even when unexpected and incorrect inputs are specified should be developed.
  4. Test the modified program to check its expected performance: Sometimes, when certain modifications are made in the software (like adding of new functions) it is possible that the software produces unexpected outputs. Hence, it should be tested to verify that it performs in the expected manner even after modifications.

                    Software Testing Guidelines
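Guideline 3 above can be sketched in Python as a test that exercises both valid and invalid inputs. The `parse_age` function and its input values are hypothetical, used only to illustrate the guideline.

```python
# Sketch of guideline 3: exercise the code with invalid and unexpected
# inputs as well as valid ones. parse_age is a hypothetical example.

def parse_age(text):
    """Parse a string into an age, rejecting invalid values."""
    age = int(text)                     # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

# Valid input: the expected output is defined before the test is run.
assert parse_age("42") == 42

# Invalid and unexpected inputs: the software should fail in a
# controlled way rather than produce an erroneous output.
for bad in ["-5", "200", "forty"]:
    try:
        parse_age(bad)
    except ValueError:
        pass                            # the expected, controlled failure
    else:
        raise AssertionError(f"invalid input {bad!r} was accepted")
```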


The ease with which a program can be tested is known as testability. Testability should always be considered while designing and implementing a software system so that the errors (if any) in the system can be detected with minimum effort. There are several characteristics of testability, which are listed below.

  1. Easy to operate: High quality software can be tested in a better manner. This is because if the software is designed and implemented considering quality, then comparatively fewer errors will be detected during the execution of tests.
  2. Stability: Software becomes stable when changes made to the software are controlled and when the existing tests can still be performed.
  3. Observability: Testers can easily identify whether the output generated for certain input is accurate simply by observing it.
  4. Easy to understand: Software that is easy to understand can be tested in an efficient manner. Software can be properly understood by gathering maximum information about it. For example, to have a proper knowledge of the software, its documentation can be used, which provides complete information of the software code thereby increasing its clarity and making the testing easier.
  5. Decomposability: By breaking software into independent modules, problems can be easily isolated and the modules can be easily tested.

                                        Characteristics of Testability
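The decomposability characteristic above can be sketched in Python: a function is isolated from its collaborator by injecting a stub, so the module can be tested independently and problems localized. All names here are illustrative, not from any real system.

```python
# Decomposability sketch: a module is tested in isolation by replacing
# its collaborator with a stub. All names are hypothetical.

def apply_discount(price, rate_lookup):
    """Compute a discounted price using an injected rate_lookup function."""
    rate = rate_lookup(price)
    return round(price * (1 - rate), 2)

# In production, rate_lookup might query a database; for the isolated
# test we substitute a trivial stub with known behaviour.
def stub_lookup(price):
    return 0.10 if price >= 100 else 0.0

assert apply_discount(200.0, stub_lookup) == 180.0
assert apply_discount(50.0, stub_lookup) == 50.0
```

Because the collaborator is injected rather than hard-wired, a failing assertion points at `apply_discount` itself, not at the lookup logic.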

Characteristics of Software Test

There are several tests (such as unit and integration) used for testing the software. Each test has its own characteristics. The following points, however, should be noted.

  1. High probability of detecting errors: To detect maximum errors, the tester should understand the software thoroughly and try to find the possible ways in which the software can fail. For example, in a program to divide two numbers, the possible way in which the program can fail is when 2 and 0 are given as inputs and 2 is to be divided by 0. In this case, a set of tests should be developed that can demonstrate an error in the division operator.
  2. No redundancy: Resources and testing time are limited in software development process. Thus, it is not beneficial to develop several tests, which have the same intended purpose. Every test should have a distinct purpose.
  3. Choose the most appropriate test: There can be different tests that have the same intent but due to certain limitations such as time and resource constraint, only few of them are used. In such a case, the tests, which are likely to find more number of errors, should be considered.
  4. Moderate: A test is considered good if it is neither too simple nor too complex. Many tests can be combined to form one test case; however, this can increase the complexity and leave many errors undetected. Hence, all tests should be performed separately.
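The division example from point 1 above can be sketched as a small test with a high probability of exposing the failure, namely dividing 2 by 0. The `divide` function is a hypothetical illustration.

```python
# Sketch of a test aimed at the most likely failure mode of a
# division program: a zero divisor. divide is a hypothetical example.

def divide(a, b):
    """Divide a by b, reporting an explicit error for a zero divisor."""
    if b == 0:
        raise ZeroDivisionError("divisor must be non-zero")
    return a / b

# An ordinary case first.
assert divide(10, 2) == 5.0

# The test deliberately probes the way the program is most likely to
# fail: 2 divided by 0.
try:
    divide(2, 0)
except ZeroDivisionError:
    pass                                # failure mode detected and handled
else:
    raise AssertionError("division by zero was not detected")
```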

Coding Documentation in Software Engineering

Code documentation is a manual-cum-guide that helps in understanding and correctly utilizing the software code. The coding standards and naming conventions written in a commonly spoken language in code documentation provide enhanced clarity for the designer. Moreover, they act as a guide for the software maintenance team (this team focuses on maintaining software by improving and enhancing the software after it has been delivered to the end user) while the software maintenance process is carried out. In this way, code documentation facilitates code reusability.

While writing a software code, the developer needs proper documentation for reference purposes. Programming is an ongoing process and requires modifications from time to time. When a number of software developers are writing the code for the same software, complexity increases. With the help of documentation, software developers can reduce the complexity by referencing the code documentation. Some of the documenting techniques are comments, visual appearances of codes, and programming tools. Comments are used to make the reader understand the logic of a particular code segment. The visual appearance of a code is the way in which the program should be formatted to increase readability. The programming tools in code documentation are algorithms, flowcharts, and pseudo-codes.

Coding Tools in Software Engineering

While writing software code, several coding tools are used along with the programming language to simplify the tasks of writing the software code. Note that coding tools vary from one programming language to another as they are developed according to a particular programming language. However, sometimes a single coding tool can be used in more than one programming language. Generally, coding tools comprise text editors, supporting tools for a specific programming language, and the framework required to run the software code. Some of the commonly used coding tools are listed in Table.

In addition to the programming language and coding tools, there are some software programs that are essential to run the software code. For instance, a debugger is used to detect the source of program errors by performing a step-by-step execution of the software code. A debugger breaks program execution at various levels in the application program. It supports features such as breakpoints, displaying or changing memory, and so on. Similarly, compilers are used to translate programs written in a high-level language into their machine language equivalents.
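As an illustration of the step-by-step execution a debugger performs, the following sketch uses Python's `sys.settrace` hook (the mechanism the standard `pdb` debugger is built on) to record each line as it executes. The traced function is a hypothetical example.

```python
# A debugger performs a step-by-step execution of the code. This sketch
# logs each executed line via sys.settrace, the hook pdb itself uses.

import sys

trace_log = []

def tracer(frame, event, arg):
    # Called for each traced event; 'line' fires before each new line runs.
    if event == "line":
        trace_log.append(frame.f_lineno)
    return tracer

def sum_values(values):
    total = 0
    for v in values:
        total += v
    return total

sys.settrace(tracer)
result = sum_values([1, 2, 3])
sys.settrace(None)

# trace_log now holds the executed line numbers in order, letting us
# follow the loop one step at a time, as a debugger would.
assert result == 6
```

A real debugger adds breakpoints and interactive inspection on top of exactly this kind of per-line hook.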

                                                 Table Coding Tools

  1. Java/XML application development tool (Java, XML): Used to speed up web applications and database applications. It provides interfaces to application servers such as WebLogic, WebSphere and EAServer as an editor and visual flow designer. In addition, it provides J2SE and J2EE support in Java, enhanced performance tools, and code audits.
  2. Web authoring tool: Used for server-side scripting with technologies such as ASP.NET and HTML. In addition, it is used for performing functions such as creation of websites, database connections, querying a database, formatting the output of software code, and displaying multiple records.
  3. Java IDE: Used as an Integrated Development Environment (IDE) for Java. It is a cross-platform tool, integrated with other Java coding tools and relatively easy to set up.
  4. Ant: Used for building Java code. It is a cross-platform tool and is flexible to use, as any action performed repeatedly can be standardized using Ant.
  5. Unit testing framework: Used for creating tests for the software code that can be repeated as often as required.


Code Verification Techniques in Software Engineering

Code verification is the process used for checking the software code for errors introduced in the coding phase. The objective of code verification process is to check the software code in all aspects. This process includes checking the consistency of user requirements with the design phase. Note that code verification process does not concentrate on proving the correctness of programs. Instead, it verifies whether the software code has been translated according to the requirements of the user.

The code verification techniques are classified into two categories, namely, dynamic and static. The dynamic technique is performed by executing some test data. The outputs of the program are tested to find errors in the software code. This technique follows the conventional approach for testing the software code. In the static technique, the program is executed conceptually and without any data. In other words, the static technique does not use any traditional approach as used in the dynamic technique. Some of the commonly used static techniques are code reading, static analysis, symbolic execution, and code inspection and reviews.


                                      Static Techniques

Code Reading

Code reading is a technique that concentrates on how to read and understand a computer program. It is essential for a software developer to know code reading. The process of reading a software program in order to understand it is known as code reading or program reading. In this process, attempts are made to understand the documents, software specifications, or software designs. The purpose of reading programs is to determine the correctness and consistency of the code. In addition, code reading is performed to enhance the software code without entirely changing the program or with minimal disruption to the current functionality of the program. Code reading also aims at inspecting the code and removing (fixing) errors from it.

Code reading is a passive process and needs concentration. An effective code reading activity primarily focuses on reviewing ‘what is important’. The general conventions that can be followed while reading the software code are listed below.

  1. Figure out what is important: While reading the code, emphasis should be on parts highlighted through graphical techniques (bold, italics) or by position (the beginning or end of a section). Important comments may be highlighted in the introduction or at the end of the software code. The level of detail read should be according to the requirements of the software code.
  2. Read what is important: Code reading should be done with the intent to check syntax and structure such as brackets, nested loops, and functions rather than the non-essentials such as name of the software developer who has written the software code.

Static Analysis

Static analysis comprises a set of methods used to analyze the source code or object code of the software to understand how the software functions and to set up criteria to check its correctness. Static analysis studies the source code without executing it and gives information about the structure of model used, data and control flows, syntactical accuracy, and much more. Due to this, there are several kinds of static analysis methods, which are listed below.

Control flow analysis: This examines the control structures (sequence, selection, and repetition) used in the code. It identifies incorrect and inefficient constructs and also reports unreachable code, that is, the code to which the control never reaches.
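A minimal sketch of such a control-flow check, assuming Python source as the input: it parses the code without executing it (a static technique) and reports statements that control can never reach because they follow a `return`. Real analyzers handle many more constructs; this only scans straight-line statement lists.

```python
# A toy static control-flow analysis: find unreachable statements
# (code after a return) without executing the program.

import ast

def find_unreachable(source):
    """Return line numbers of statements following a return statement."""
    unreachable = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue                     # skip nodes without statement lists
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, ast.Return):
                # Every statement after the return is unreachable.
                unreachable.extend(s.lineno for s in body[i + 1:])
                break
    return unreachable

code = """
def f(x):
    return x * 2
    print("never runs")   # unreachable
"""

print(find_unreachable(code))            # → [4]
```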

Data analysis: This ensures that proper operations are applied to data objects (for example, data structures and linked lists). In addition, this method ensures that the defined data is properly used. Data analysis comprises two methods, namely, data dependency and data-flow analysis. Data dependency (which determines the dependency of one variable on another) is essential for assessing the accuracy of synchronization across multiple processors. Data-flow analysis checks the definition and references of variables.

Fault/failure analysis: This analyzes the fault (incorrect model component) and failure (incorrect behaviour of a model component) in the model. This method uses input-output transformation descriptions to identify the conditions that are the cause for the failure. To determine the failures in certain conditions, the model design specification is checked.

Interface analysis: This verifies and validates the interactive and distributive simulations to check the software code. There are two basic techniques for interface analysis, namely, model interface analysis and user interface analysis. Model interface analysis examines the sub-model interfaces and determines the accuracy of the interface structure. User interface analysis examines the user interface model and checks for precautionary steps taken to prevent errors during the user’s interaction with the model. This method also concentrates on how accurately the interface is integrated into the overall model and simulation.

Symbolic Execution

Symbolic execution concentrates on assessing the accuracy of the model by using symbolic values instead of actual data values for input. Symbolic execution, also known as symbolic evaluation, is performed by providing symbolic inputs, which produce expressions for the output.

Symbolic execution uses a standard mathematical technique for representing arbitrary program inputs (variables) in the form of symbols. To perform the calculation, a machine is employed to perform algebraic manipulation on the symbolic expressions. These expressions include symbolic data meant for execution. The symbolic execution is represented as a symbolic state consisting of the symbolic values of variables, the path, and the path conditions. The symbolic state is updated at each step of execution. The steps that are commonly followed for updating the symbolic state, considering all possible paths, are listed below.

  1. The read or the input symbol is created.
  2. The assignment creates a symbolic value expression.
  3. The conditions in symbolic state add constraints to the path condition.

The output of symbolic execution is represented in the form of a symbolic execution tree. The branches of the tree represent the paths of the model. Each node of the tree represents a decision point and is labeled with the symbolic values of the data at that junction. The leaves of the tree are complete paths through the model, and they represent the output of symbolic execution. Symbolic execution helps in showing the correctness of the paths for all computations. Note that in this method the symbolic execution tree increases in size and complexity as the model grows.
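The steps above can be sketched for a tiny two-path program; this is a hand-rolled illustration, not a real symbolic execution engine. Each leaf of the (two-leaf) execution tree pairs a path condition with the symbolic output expression for that path.

```python
# A toy symbolic execution of the program
#     if x > y: result = x - y
#     else:     result = y - x
# Inputs are the symbols "x" and "y" rather than concrete values; each
# path yields a (path condition, output expression) pair -- the leaves
# of the symbolic execution tree.

def symbolic_abs_diff():
    x, y = "x", "y"                     # symbolic inputs, not actual data
    leaves = []
    # True branch: the condition x > y is added to the path condition.
    leaves.append((f"{x} > {y}", f"{x} - {y}"))
    # False branch: the negated condition constrains the other path.
    leaves.append((f"not ({x} > {y})", f"{y} - {x}"))
    return leaves

for condition, output in symbolic_abs_diff():
    print(f"path condition: {condition:16} output: {output}")
```

A constraint solver could then check each path condition for satisfiability, which is how full-scale symbolic execution tools prune infeasible paths.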

Code Inspection and Reviews

This technique is a formal and systematic examination of the source code to detect errors. During this process, the software is presented to the project managers and the users for comments and approval. Before providing any comments, the inspection team checks the source code for errors. Generally, this team consists of the following.

  1. Moderator: Conducts inspection meetings, checks errors, and ensures that the inspection process is followed.
  2. Reader: Paraphrases the operation of the software code.
  3. Recorder: Keeps a record of each error in the software code. This frees the other team members to think deeply about the software code.
  4. Author: Observes the code inspection process silently and helps only when explicitly required. The role of the author is to understand the errors found in the software code.

As mentioned above, the reader paraphrases the meaning of small sections of code during the code inspection process. In other words, the reader translates the sections of code from a computer language to a commonly spoken language (such as English). The inspection process is carried out to check whether the implementation of the software code is done according to the user requirements. Generally, to conduct code inspections the following steps are performed.

    1. Planning: After the code is compiled and there are no more errors and warning messages in the software code, the author submits the findings to the moderator who is responsible for forming the inspection team. After the inspection team is formed, the moderator distributes the listings as well as other related documents like design documentation to each team member. The moderator plans the inspection meetings and coordinates with the team members.
    2. Overview: This is an optional step and is required only when the inspection team members are not aware of the functioning of the project. To familiarize the team members, the author provides details to make them understand the code.
    3. Preparation: Each inspection team member individually examines the code and its related materials. They use a checklist to ensure that each problem area is checked. Each inspection team member keeps a copy of this checklist, in which all the problematic areas are mentioned.
    4. Inspection meeting: This is carried out with all team members to review the software code. The moderator discusses the code under review with the inspection team members.

There are two checklists for recording the results of the code inspection, namely, the code inspection checklist and the inspection error list. The code inspection checklist contains a summary of all the errors of different types found in the software code. This checklist is used to understand the effectiveness of the inspection process. The inspection error list provides the details of each error that requires rework. Note that this list contains details only of those errors that require the whole coding process to be repeated.

All errors in the checklist are classified as major or minor. An error is said to be major if it results in problems and later comes to the knowledge of the user. On the other hand, minor errors are spelling errors and non-compliance with standards. The classification of errors is useful when the software is to be delivered to the user and there is little time to review all the errors present in the software code.

At the conclusion of the inspection meeting, it is decided whether the code should be accepted in the current form or sent back for rework. In case the software code needs reworking, the author makes all the suggested corrections and then compiles the code. When the code becomes error-free, it is sent back to the moderator. The moderator checks the code that has been reworked. If the moderator is completely satisfied with the software code, inspection becomes formally complete and the process of testing the software code begins.
