Welcome to CS 294-70: Automated Bug Finding and Debugging: Fall 2011
General Information
- Instructor: Koushik Sen (Office: Soda 581)
- Time and Place: Monday and Wednesday 2:30 pm - 4:00 pm, 405 Soda
- Office Hours: Mondays 11 am - 12 pm.
- Credits: 3 units
- Prerequisites: Students should be familiar with the basics of programming languages and should have moderate programming experience. Please consult the instructor if you are not sure of your qualifications.
- Course URL: http://fa11.pbworks.com/
Course Description
Today's software systems suffer from poor reliability. NIST estimates that $60 billion is lost annually in the US due to software errors, and such errors in transportation, medicine, and other areas can put lives at risk. This indicates that our techniques for ensuring software reliability are far from the level of maturity attained by other engineering disciplines that create critical infrastructure. The situation is likely to get worse as the complexity of software systems increases without a matching increase in the effectiveness of software quality tools and techniques. Testing is the predominant technique used by the software industry to make software reliable; studies show that it accounts for more than half of the total software development cost in industry. Although testing is a widely used and well-established technique for building reliable software, existing testing techniques are mostly ad hoc and ineffective: serious bugs are often exposed only after deployment. A number of recent techniques and tools combine ideas from program analysis, automated theorem proving, constraint solving, and formal methods, and they have shown great promise in making software reliable and secure. In this graduate seminar, we will study this new trend with a special focus on automated bug finding and debugging. Check out http://fa09.pbworks.com/ for a similar course offered in Fall 2009.
We will study:
- Automated test generation using sophisticated program analysis and constraint solving techniques (a minimal illustrative sketch follows this list)
- Software model checking and the theoretical results that form its foundation
- Automated debugging
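As a small taste of the first topic above, here is a minimal sketch of the core idea behind constraint-based test generation: encode the condition of a branch we want to exercise as a logical constraint and ask a solver for concrete inputs that satisfy it. This sketch is not part of the course materials; the function classify and its branch condition are made up for illustration, and the example assumes the Z3 SMT solver's Python bindings (the z3-solver package) are installed.

```python
# Minimal sketch of constraint-based test generation (illustrative only).
# Assumes the z3-solver package is installed: pip install z3-solver
from z3 import Int, Solver, sat

def classify(x, y):
    # Hypothetical function under test with a hard-to-hit branch.
    if x * 2 == y + 10:
        return "rare branch"      # the branch we want a test input for
    return "common branch"

# Encode the branch condition over symbolic integers.
sx, sy = Int("x"), Int("y")
solver = Solver()
solver.add(sx * 2 == sy + 10)     # path constraint for the "rare branch"

# Ask the solver for a satisfying assignment and turn it into a test input.
if solver.check() == sat:
    model = solver.model()
    tx, ty = model[sx].as_long(), model[sy].as_long()
    print("generated test input:", tx, ty)
    assert classify(tx, ty) == "rare branch"
```

In practice, concolic testing tools such as DART and CUTE collect these path constraints automatically from concrete executions of the program; here the constraint is written by hand only to keep the example self-contained.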
Goals of this course:
- To help students start research in the area of program analysis of sequential and concurrent systems.
- To help students apply the techniques learned in this course to their ongoing research in other areas such as parallel computing, operating systems, computer networks, security, and database systems.
Lecture Format and Student Workload
The course readings will consist of a selected list of papers. Students will read and review each paper ahead of time, participate in class discussions, and complete a research project individually or in small teams.
Schedule
Lecture Schedule
Grading Details
- Project (teams of 1-2 students): (40%) A project must involve new research: designing a new algorithm, improving an existing technique, or performing a large case study. The idea behind each project should be evaluated through experiments on publicly available programs, using metrics such as performance improvement, scalability, or the discovery of previously unknown bugs.
- Paper review, class participation, and paper presentation: (30%) We will discuss one paper per meeting. Each student will submit a short written review (at most 500 words) of each paper, and reviews will be made available to all students.
- Homework (30%).
Reference Books
This course has no required textbook. Research papers and slides will be posted online as needed.