Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/14692
Title: DEVELOPMENT OF TECHNIQUES AND MODELS FOR IMPROVING SOFTWARE QUALITY
Authors: BANSAL, ANKITA
Keywords: SOFTWARE QUALITY
Prediction of quality
fault proneness
Issue Date: Apr-2016
Series/Report no.: TD 2153;
Abstract: Prediction of quality attributes to improve software quality is gaining significant importance in research. A number of metrics measuring important aspects of an object-oriented program, such as coupling, cohesion, inheritance and polymorphism, have been proposed in the literature. Using these metrics, quality attributes such as maintainability, fault proneness, change proneness and reliability can be predicted during the early phases of the software development life cycle. Various models establishing the relationship between software metrics and quality attributes can be constructed, which researchers and practitioners can use to improve software quality. Faults and changes in software are inevitable, owing to the large size and complexity of software and the inadequacy of resources (time, money and manpower) to test it completely. Additionally, software undergoes ongoing change for multiple reasons, such as changes in user requirements, changes in technology and competitive pressure. Given this scenario, it is very important to predict changes and faults during the early phases of the software development life cycle, leading to better quality and more maintainable software at lower cost. Identifying the change and fault prone parts of software helps managers allocate resources more judiciously, thereby reducing the costs associated with software development and maintenance. Testing and inspection activities can then be disproportionately focused on the change and fault prone parts of the design and code. The literature contains few prediction models for identifying the change prone parts of software; a structured review is therefore important to bring out the commonalities and differences between the results of these studies.
We have formulated various research questions according to which we have compared and reviewed a number of software change proneness models for object-oriented software. The research questions formulated in the review helped in identifying gaps in the current research, and future guidelines have been proposed which software practitioners can use in future work. To gain insight into the quality and reliability of open source software, we have used a number of popular open source systems for empirical validation. The literature shows that the majority of prediction models are trained on historical data from the same project. There are broadly two approaches to predictive analysis: machine learning and statistical. The two approaches are inherently different, raising the question of which is better. Another question that occupies researchers is: among the multiple machine learning techniques available, which classifier should be used for accurate prediction? To investigate these questions, we have compared the performance of 15 data analysis techniques (14 machine learning and one statistical) on five official versions of the Android operating system; in other words, we have constructed various metric models using machine learning and statistical techniques. The literature shows that metric models are widely used for identifying change and fault prone classes. However, training these models using machine learning and statistical techniques is time consuming, so using them on a daily basis is not feasible. An alternative is to define metric thresholds that can be used to predict change and fault prone parts. Thresholds, also known as risk indicators, define an upper bound on metric values such that classes whose metric values exceed the thresholds are considered potentially problematic.
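The kind of comparison described above can be sketched in miniature. The following is an illustrative example only, not the thesis's actual experiment: it pits one machine learning classifier (1-nearest neighbour) against one statistical technique (logistic regression fit by gradient descent) on made-up metric data, where each class is described by hypothetical coupling and cohesion values and labelled change prone (1) or not (0).

```python
# Illustrative sketch, NOT the thesis's setup: comparing a machine learning
# classifier (1-NN) with a statistical one (logistic regression) on toy
# object-oriented metric data. All numbers below are invented.
import math

# Hypothetical training data: (coupling, cohesion) per class;
# label 1 = change prone, 0 = not change prone.
X_train = [(8.0, 0.2), (9.0, 0.1), (7.5, 0.3), (2.0, 0.9), (1.5, 0.8), (3.0, 0.7)]
y_train = [1, 1, 1, 0, 0, 0]
X_test  = [(8.5, 0.15), (2.5, 0.85)]
y_test  = [1, 0]

def knn_predict(x):
    """1-NN: return the label of the closest training point (squared Euclidean)."""
    nearest = min(range(len(X_train)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)))
    return y_train[nearest]

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression; returns (bias, weights)."""
    b, w = 0.0, [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
            err = yi - p
            b += lr * err
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
    return b, w

b, w = fit_logistic(X_train, y_train)

def logit_predict(x):
    """Classify as change prone when the linear score is non-negative."""
    return 1 if b + sum(wj * xj for wj, xj in zip(w, x)) >= 0 else 0

def accuracy(predict):
    return sum(predict(x) == y for x, y in zip(X_test, y_test)) / len(y_test)

print("1-NN accuracy:", accuracy(knn_predict))
print("logistic accuracy:", accuracy(logit_predict))
```

In a real study such as this one, many more techniques would be compared across multiple releases, and held-out or cross-validated performance measures (e.g. AUC rather than plain accuracy) would be used.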
Identifying thresholds helps developers, designers and testers pay focused and careful attention to these risky (or problematic) classes. We have identified threshold values of various object-oriented metrics of different open source software to predict change and fault proneness. A statistical approach based on logistic regression is used to calculate the threshold values. Another approach to calculating threshold values is based on the receiver operating characteristic (ROC) curve. We have explored both approaches to calculate threshold values of the metrics of different software systems. There are studies of inter-project validation for fault prediction; however, there is limited research on cross-project validation for change prediction. In this research, we have conducted inter-project validation for change prediction using 12 open source datasets obtained from three software systems. Testing prediction models on the same data from which they are derived is somewhat intuitive, hence inter-project validation can help in obtaining generalizable results.
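The two threshold-calculation approaches mentioned above can be sketched as follows. This is an assumption-laden illustration, not the thesis's actual method or data: the logistic-regression approach is shown in the common VARL form (the metric value at which the fitted model's predicted risk equals a chosen acceptable level), and the ROC approach picks the cutoff that maximises Youden's J statistic (sensitivity + specificity − 1). All coefficients and data points are invented.

```python
# Illustrative sketch of two metric-threshold approaches; the coefficients
# and data below are hypothetical, not taken from the thesis.
import math

# --- Approach 1: logistic-regression-based threshold (VARL-style). ---
# Given a fitted univariate model P(risk) = 1 / (1 + exp(-(b0 + b1 * metric))),
# the threshold is the metric value at which predicted risk equals p0.
def varl_threshold(b0, b1, p0):
    return (math.log(p0 / (1.0 - p0)) - b0) / b1

# Hypothetical coefficients for a coupling metric, 5% acceptable risk level:
t1 = varl_threshold(b0=-4.0, b1=0.5, p0=0.05)  # classes above t1 are flagged

# --- Approach 2: ROC-curve-based threshold (maximising Youden's J). ---
# Try each observed metric value as a cutoff; keep the one maximising
# sensitivity + specificity - 1 against the known labels.
def roc_threshold(values, labels):
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(v >= t and y == 1 for v, y in zip(values, labels))
        fn = sum(v < t and y == 1 for v, y in zip(values, labels))
        tn = sum(v < t and y == 0 for v, y in zip(values, labels))
        fp = sum(v >= t and y == 0 for v, y in zip(values, labels))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Toy data: metric values with label 1 = fault prone, 0 = not.
vals = [1, 2, 3, 4, 5, 6, 7, 8]
labs = [0, 0, 0, 0, 1, 1, 1, 1]
t2 = roc_threshold(vals, labs)
```

Once derived on one system, such thresholds can be checked against other systems, which is the same intuition behind the inter-project validation described above: a model or threshold derived from one project's data is tested on a different project's data to see whether it generalizes.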
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/14692
Appears in Collections:Ph.D. Computer Engineering

Files in This Item:
File: Ankita Bansal Final Thesis.pdf (3.75 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.