Development and Testing of a Secure Programming Clinic
The state of software today is generally poor. The quality of the software in most systems does not support the trust placed in it. Software security failures are common, and their effects range from inconvenience to severe harm. For example, failing to properly handle an error condition made Facebook inaccessible for a few hours; the iPhone failed to ring in the New Year in 2010; a flaw in an election system used to count votes resulted in some votes from a precinct not being counted; a 2010 FDA study of recalled infusion pumps reported that one of the most commonly reported problems was software defects; and scientists analyzing the on-board computers that control automobiles were able to exploit software vulnerabilities to control a car without physical access.
Indeed, much of computer security deals with handling problems created by programming errors, including analyzing and countering attacks that exploit these problems.
A variety of causes underlie this problem. One cause is the failure of practitioners to practice “defensive programming,” in which basic principles of robust coding guard against unexpected inputs and events. This style of programming is often called “secure programming” or “secure coding,” but that term is a misnomer. By rights, it should refer to programming designed to satisfy a specified security policy—the definition of “secure,” after all, is “satisfying a security policy.” But the term is used more broadly to refer to programming designed to prevent problems that might cause security breaches. This style of programming deals with common programming errors (such as failing to check that the size of an input is no greater than the size of the buffer where it is to be stored), and should more properly be called “robust programming.” We use this term throughout this proposal. The failure of practitioners to practice this style is not due to incompetence.
One factor is simply lack of training. As Evans and Reeder noted, “We … [have a] desperate shortage of people who can design secure systems, write safe computer code, and create the ever more sophisticated tools needed to prevent, detect, mitigate and reconstitute from damage due to system failures and malicious acts.” Further, those who do know how to program this way are often not given the opportunity to do so because it would increase cost or time to market. Few believe that customers will accept longer delivery times, or higher prices, for more robust programs.
The goals of this project are to: develop a clinic to enhance student learning and expertise in writing robust, secure software; develop and validate measures that reflect the ability of learners with regard to writing robust, secure software; implement and evaluate the clinic; and disseminate the clinic procedures, instructional modules and measures to the educational community.
In 2011, the National Science Foundation sponsored a meeting, the Summit on Education in Secure Software. The Final Report examined a number of ways to deal with this situation; one in particular, a “secure programming clinic” approach, does not require adding new courses and can in fact be integrated into existing curricula. A “secure programming clinic,” analogous to a writing clinic in law schools or English departments, provides continual reinforcement of the mechanisms, methods, technologies, and need for programming with security and robustness considerations throughout a student’s undergraduate coursework. The clinic would augment courses, not replace them or their content. One key consideration is how to give students appropriate and due consideration for learning robust programming.
Collaborative Project: Development and Testing of a Secure Programming Clinic. Sponsoring Organization: National Science Foundation. Role: Co-PI. Total Award: $272,956. Dates: 08/08/2014–07/31/2019.