Bug reports are the primary means through which developers triage and fix bugs. To be effective, a bug report needs to clearly describe the features that are important to developers. However, previous studies have found that reporters do not always provide these features, which affects the different phases of the bug-fixing process.
In this dissertation, we first perform two empirical studies to investigate the key features that reporters should provide in the description of a bug report to improve the bug-fixing process. We observe that (1) Steps to Reproduce, Test Case, Code Example, Stack Trace, and Expected Behavior are the key features that reporters often miss in their initial bug reports and that developers require for fixing bugs; (2) the prevalence of the key features varies among the different types of high-impact bug reports; and (3) requesting the key features during bug fixing significantly affects the bug-fixing process.
Then, we propose two approaches to support reporters in improving the bug-fixing process. First, we develop classification models, leveraging four popular machine-learning techniques, to predict whether reporters should provide certain key features in the description of a bug report. Then, we develop a key-feature recommendation model by leveraging historical bug-fixing knowledge and text-mining techniques. We observe that (1) our models achieve promising F1-scores in predicting key features; (2) Na\"ive Bayes Multinomial (NBM) outperforms the other classification techniques in predicting the key features based on the summary text of bug reports; (3) our best-performing model also works successfully in the cross-project setting; and (4) our key-feature recommendation model can successfully recommend the key features that reporters should provide in the description of a bug report. We believe that our findings and proposed models make a valuable contribution to improving the bug management process.