Hello,
This week, I read a paper on responsible conduct in research, in preparation for collecting data for our first round of experiments.
I also looked into another possible method of classifying duplicate bugs. This method builds a discriminative model that compares duplicate and non-duplicate pairs to calculate the probability that a new bug report is in fact a duplicate. The model also continually updates its coefficients to reflect new data in an ever-changing corpus of reports.
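As a rough sketch of what such a discriminative model could look like, here is a tiny logistic-regression classifier trained online, so its coefficients keep updating as new labeled report pairs arrive. The features and training pairs are entirely made up for illustration (e.g. a title-similarity score and a description-overlap score), not the actual features from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class DuplicateClassifier:
    """Minimal discriminative model: logistic regression with one
    SGD step per labeled pair, so coefficients track new data."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # probability that the pair of reports is a duplicate
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return sigmoid(z)

    def update(self, x, y):
        # one stochastic-gradient step on the log-loss
        err = self.predict_proba(x) - y
        self.b -= self.lr * err
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

# hypothetical pair features: [title similarity, description overlap]
clf = DuplicateClassifier(n_features=2)
for _ in range(200):
    clf.update([0.9, 0.8], 1)   # a known duplicate pair
    clf.update([0.1, 0.2], 0)   # a known non-duplicate pair

print(clf.predict_proba([0.85, 0.9]))  # high => likely a duplicate
```

In a running system, each newly triaged report would feed another `update` call, which is what keeps the coefficients current as the corpus changes.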
Finally, I have been watching tutorial videos on web scraping and on using XPath. These tools will be very useful for collecting the large dataset we plan to gather.
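To show the kind of XPath-style extraction I've been learning, here is a small sketch that pulls bug titles out of a page. The page snippet and class names are invented; note that Python's standard library only supports a limited XPath subset, and a real scraper would fetch pages (e.g. with urllib) and likely use lxml for full XPath support:

```python
import xml.etree.ElementTree as ET

# hypothetical fragment of a bug-tracker listing page
html = """
<html><body>
  <table id="bugs">
    <tr class="bug"><td class="id">101</td><td class="title">Crash on save</td></tr>
    <tr class="bug"><td class="id">102</td><td class="title">App crashes when saving</td></tr>
  </table>
</body></html>
"""

root = ET.fromstring(html)
# ElementTree's findall accepts a limited XPath subset,
# including attribute predicates like [@class='title']
titles = [td.text for td in root.findall(".//td[@class='title']")]
print(titles)  # ['Crash on save', 'App crashes when saving']
```

The same `.//td[@class='title']` expression works in full XPath engines, so the query can be reused once we move to scraping real tracker pages.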