Tutorials

Tutorial 1 Introduction to software reliability modeling and prediction

Presenter: John Healy

Tuesday, November 3rd, 11h-12h30

Description: This tutorial provides an introduction to software reliability modeling and prediction. It gives background on the class of models most commonly used for software reliability modeling, Non-Homogeneous Poisson Process (NHPP) models, and presents several examples, including the Goel-Okumoto and Musa models. The tutorial also describes the Poisson regression models that can be used to predict software reliability when failure counts are available, and discusses the limitations of software reliability modeling.
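
For readers unfamiliar with NHPP models, the short sketch below (not part of the tutorial materials) evaluates the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) and fits its two parameters to hypothetical cumulative failure counts with a crude least-squares grid search; the data, parameter ranges, and function names are illustrative assumptions only, and a real analysis would typically use maximum likelihood.

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative number of failures by time t: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

# Hypothetical cumulative failure counts observed at the end of each test week.
weeks  = [1, 2, 3, 4, 5, 6, 7, 8]
counts = [12, 21, 27, 32, 35, 37, 39, 40]

# Crude least-squares fit over a coarse grid (a = eventual total number of
# failures, b = per-fault detection rate).
best = None
for a in range(40, 81):
    for b in (i / 100.0 for i in range(5, 100)):
        sse = sum((goel_okumoto_mean(t, a, b) - y) ** 2 for t, y in zip(weeks, counts))
        if best is None or sse < best[0]:
            best = (sse, a, b)

sse, a_hat, b_hat = best
print(f"a = {a_hat}, b = {b_hat:.2f}")
print("Predicted cumulative failures by week 12:", round(goel_okumoto_mean(12, a_hat, b_hat), 1))
```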

John Healy has a PhD in mathematical statistics from Purdue University. He has published papers in a wide range of journals, including the IEEE Transactions on Reliability, Technometrics, and the Journal of the American Statistical Association. He is a former member of the Administrative Committee of the IEEE Reliability Society and has received the Alvin Plait award for best tutorial at RAMS. He is currently the Assistant Division Manager of the Cybersecurity and Communications Reliability Division at the FCC, where he is in charge of the Network Outage Reporting System and the Disaster Information Reporting System.


 

Tutorial 2 Combinatorial Testing for High Assurance of Software and Systems

Presenters: Rick Kuhn, Raghu Kacker

Monday, November 2nd, 11h-12h30

Description: Combinatorial methods have attracted attention as a means of providing strong assurance at reduced cost, but when are these methods practical and cost-effective? This tutorial explains the background, process, and tools available for combinatorial testing, with illustrations from industry experience with the method. The focus is on practical applications, including an industrial example of testing to meet FAA-required standards for life-critical software for commercial aviation. Other example applications include modeling and simulation, mobile devices, network configuration, and testing for a NASA spacecraft.
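
To give a flavor of the core idea, the sketch below builds a pairwise (2-way) test suite for a toy configuration space with a naive greedy algorithm; the parameters and values are invented, and production work would normally rely on dedicated covering-array tools such as NIST's ACTS rather than this illustrative code.

```python
from itertools import combinations, product

# Toy configuration space: exhaustive testing would need 3 * 3 * 2 * 2 = 36 tests.
parameters = {
    "os":        ["android", "ios", "windows"],
    "network":   ["wifi", "lte", "3g"],
    "encrypted": [True, False],
    "locale":    ["en", "fr"],
}
names = list(parameters)

def pairs_covered(test):
    """All parameter-value pairs exercised by one test (a dict of name -> value)."""
    return {((n1, test[n1]), (n2, test[n2])) for n1, n2 in combinations(names, 2)}

# Every 2-way interaction that must appear in at least one test.
required = set()
for n1, n2 in combinations(names, 2):
    for v1 in parameters[n1]:
        for v2 in parameters[n2]:
            required.add(((n1, v1), (n2, v2)))

# Greedy construction: repeatedly pick the candidate covering the most
# still-uncovered pairs.  Real covering-array tools are far more efficient.
candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
suite = []
while required:
    best = max(candidates, key=lambda t: len(pairs_covered(t) & required))
    suite.append(best)
    required -= pairs_covered(best)

print(f"{len(suite)} tests cover all pairs (vs. {len(candidates)} exhaustive tests)")
for test in suite:
    print(test)
```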


 

Tutorial 3 ODC - Agile Root Cause Analysis

Presenter: Ram Chillarege

Monday, November 2nd, 14h-17h30

Description: The tutorial on Orthogonal Defect Classification (ODC) provides the practicing engineer and manager a good overview of the technology, its benefits, practice, and implementation. Participants should have reasonable experience with the software development lifecycle, process improvement methods, tools, and practices, and an appreciation of Agile development methods. Knowledge of historical software development processes and principles is useful, but not necessary.

ODC is a technology that extracts semantics from the software defect stream to provide insight into the development process and product. This tutorial covers:

  • ODC Concepts

  • ODC Classification and Information Extraction

  • How to gain 10x in Root Cause Analysis

  • How to tune up the Test Process using ODC

  • In-process Measurement and Prediction with ODC

  • Case Studies of ODC based Process Diagnosis

  • What is required to support ODC?

  • How does one plan an ODC Rollout?
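
As a concrete illustration of the classification and information-extraction steps listed above, the sketch below models a defect record with a small subset of commonly published ODC attributes (activity, trigger, defect type, impact) and aggregates a few records; the records themselves and the attribute values chosen are hypothetical and are not taken from the tutorial.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """A single defect classified with a few common ODC attributes (illustrative subset)."""
    defect_id: str
    activity: str      # e.g., "Unit Test", "Function Test", "System Test"
    trigger: str       # what surfaced the defect, e.g., "Workload/Stress", "Recovery"
    defect_type: str   # e.g., "Assignment", "Checking", "Algorithm", "Function"
    impact: str        # e.g., "Reliability", "Performance", "Usability"

# Hypothetical defect stream.
defects = [
    DefectRecord("D-101", "Function Test", "Coverage",        "Assignment", "Reliability"),
    DefectRecord("D-102", "System Test",   "Workload/Stress", "Timing",     "Performance"),
    DefectRecord("D-103", "System Test",   "Recovery",        "Algorithm",  "Reliability"),
    DefectRecord("D-104", "Function Test", "Coverage",        "Checking",   "Reliability"),
]

# The information-extraction step is largely aggregation: attribute distributions
# point at process weaknesses (e.g., recovery-triggered defects appearing only in
# System Test suggest earlier test activities miss failure-path behavior).
print(Counter(d.defect_type for d in defects))
print(Counter((d.activity, d.trigger) for d in defects))
```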


 

Tutorial 4 Advanced Software Reliability and Availability Models

Presenters: Kishore Trivedi, Michael Grottke, Javier Alonso, Allen Nikora

Tuesday, November 3rd, 14h-18h

Description: While traditional software reliability research has focused on reliability growth modeling during the testing/debugging phase, this tutorial concentrates on software failures, their underlying faults and the mitigation techniques used to deal with them during the operational phase.

The tutorial will be driven by three sets of examples. The first case study, based on failures of NASA satellite onboard software, leads to data-driven models; we will present the input data analysis, including the identification of probability distributions and parameter estimation. In the second case study, based on IBM's high-availability architecture for SIP on WebSphere, a model is developed from the system architecture and then parameterized from real data. Another distinction between the two case studies is that the latter deals with software fault tolerance, whereas in the former, data about fault-tolerance-based recovery is not available in the NASA problem reports. Here, we will discuss in detail different approaches to reliability and availability modeling. The third set of examples will be based on failures caused by software aging and an associated proactive recovery method known as software rejuvenation.
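
As a small, hedged illustration of the data-driven and availability-modeling flavor of the tutorial, the sketch below estimates the rate of an exponential time-to-failure distribution from hypothetical failure data and computes the steady-state availability of a simple two-state (up/down) model; the numbers are invented, and the real case studies use considerably richer models.

```python
import math

# Hypothetical observed times to failure (hours) for one software component.
times_to_failure = [310.0, 190.0, 420.0, 260.0, 350.0]

# Maximum-likelihood estimate for an exponential model: lambda_hat = 1 / mean.
mttf = sum(times_to_failure) / len(times_to_failure)
failure_rate = 1.0 / mttf

# Assume a mean time to repair/recover of 2 hours (e.g., an automated restart).
mttr = 2.0

# Steady-state availability of a two-state (up/down) Markov model: MTTF / (MTTF + MTTR).
availability = mttf / (mttf + mttr)

print(f"MTTF = {mttf:.1f} h, failure rate = {failure_rate:.5f} per hour")
print(f"Steady-state availability = {availability:.5f}")

# Reliability over a 100-hour mission under the exponential assumption.
print(f"R(100 h) = {math.exp(-failure_rate * 100):.3f}")
```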


 

Tutorial 5 Hard Problems at the Intersection of Cybersecurity and Software Reliability

Presenters: Suresh Kothari, Ben Holland

Tuesday, November 3rd, 14h-18h

Description: This tutorial is aimed at an audience interested in how software reliability and cybersecurity converge in terms of intrinsic hard problems, and how that knowledge can be used to advance research and practice in both fields. The tutorial is based on our research in three Defense Advanced Research Projects Agency (DARPA) projects and our practical experience applying that research. It will provide a succinct understanding of the “hardness” through representative problems and by introducing a programming-language-agnostic notion of an intrinsic hardness spectrum derived from fundamental impediments to detecting vulnerabilities accurately. About 60% of the tutorial will be demonstrations that elaborate on the hardness spectrum and its practical applicability. The representative problems pertain to reliability issues in operating system kernels and malware attacks on Android apps. We will introduce the use of a powerful program comprehension tool to derive the hardness spectrum by mapping Java, C, and Java bytecode to high-level entities that reveal the inner workings of complex software.
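
To hint at one of the fundamental impediments such a hardness spectrum rests on, here is a tiny example of ours (written in Python for consistency with the other sketches, and not drawn from the tutorial): whether tainted input reaches a sensitive operation depends on a path condition, so a purely syntactic scan must either prove path feasibility, which is undecidable in general, or accept false positives and false negatives.

```python
import subprocess

def handle_request(user_input: str, debug_mode: bool) -> None:
    """Toy 'source to sink' flow whose exploitability is path dependent."""
    command = "echo " + user_input           # tainted data enters here (the source)
    if debug_mode:                           # the sink is reachable only on this path
        # A syntactic scanner flags this line unconditionally; deciding whether
        # debug_mode can ever be True at this call site requires inter-procedural,
        # path-sensitive reasoning about the rest of the system.
        subprocess.run(command, shell=True)  # the sensitive sink (command injection)
    else:
        print("request ignored, length:", len(user_input))

# Whether the program is actually vulnerable depends on how debug_mode is derived
# elsewhere; reporting blindly yields false positives, suppressing risks false negatives.
handle_request("hello && whoami", debug_mode=False)
```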


 

Tutorial 6 Secure Software Architectures

Presenter: Jungwoo Ryoo

Wednesday, November 4th, 14h-18h

Description: Security is a quality attribute that has both architectural and coding implications; it is necessary to get both right to create and maintain secure systems. Most existing research on making systems secure, however, has focused on coding, and there is little direction or insight into how to create a secure architecture. This tutorial teaches participants several ways to analyze and evaluate the security readiness of a software architecture: vulnerability-oriented (VoAA), tactic-oriented (ToAA), and pattern-oriented architectural analysis (PoAA). The tutorial also compares the strengths and weaknesses of each approach, and participants will learn that the approaches are complementary. Finally, the tutorial teaches how to combine these analysis techniques into the Architectural Analysis for Security (AAFS) framework to obtain the best outcomes. A real-life case study will be used to demonstrate the feasibility of AAFS.


 

Tutorial 7 Structured Assurance Cases: A Crash Course

Presenters: Robin Bloomfield, Kate Netkachova

Wednesday, November 4th, 14h-18h

Description: Assurance cases in their many forms (safety, security, dependability, and reliability cases) have been around for many years. Over the past five years we have been developing for industry a simplified and practical approach to structuring cases based on Claims, Arguments and Evidence (CAE) and the associated CAE Blocks, which are simplified case fragments. These developments have been based on an empirical analysis of how people actually create cases in industry. The tutorial will provide an introduction to our approach and will be highly interactive: through examples we will introduce the concepts and the CAE Blocks, and develop and review case structures. We will also provide an overview of current research directions in assurance cases and, in particular, discuss recent work we have been doing on Security Justification.
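
For readers new to the claims/arguments/evidence vocabulary, the sketch below represents a small case fragment as a tree of claim, argument, and evidence nodes in the spirit of a decomposition block; the node types mirror the CAE terminology, but the example case and the data-structure design are our own illustrative assumptions, not the tutorial's notation or tooling.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Evidence:
    description: str            # e.g., a test report, proof, or field-data reference

@dataclass
class Claim:
    statement: str
    support: List["Node"] = field(default_factory=list)

@dataclass
class Argument:
    strategy: str               # why the sub-claims/evidence justify the parent claim
    premises: List["Node"] = field(default_factory=list)

Node = Union[Claim, Argument, Evidence]

# A tiny, invented case fragment: the top claim is justified by arguing over
# two sub-claims, each backed by a piece of evidence.
case = Claim(
    "The update service is acceptably reliable in operation",
    support=[Argument(
        "Decompose by failure mode: crash failures and hangs are both controlled",
        premises=[
            Claim("Crash failures are detected and restarted within 5 s",
                  support=[Evidence("Fault-injection test report FI-2015-07")]),
            Claim("Hangs are bounded by a watchdog timeout",
                  support=[Evidence("Watchdog design review minutes")]),
        ])])

def render(node: Node, depth: int = 0) -> None:
    """Print the case structure as an indented tree."""
    text = getattr(node, "statement", None) or getattr(node, "strategy", None) \
        or node.description
    print("  " * depth + f"{type(node).__name__}: {text}")
    for child in getattr(node, "support", []) + getattr(node, "premises", []):
        render(child, depth + 1)

render(case)
```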


 

Tutorial 8 Vetting the Security of Mobile Applications

Presenter: Stephen Quirolgico

Thursday, November 5th, 14h-15h30

Description: Recently, organizations have begun to deploy mobile applications (or apps) to facilitate their business processes. Such apps have increased productivity by providing an unprecedented level of connectivity between employees, vendors, and customers, along with real-time information sharing, unrestricted mobility, and improved functionality. Despite these benefits, however, the use of apps can lead to serious security issues: like traditional enterprise applications, apps may contain software vulnerabilities, which an attacker may exploit to gain unauthorized access to an organization’s information technology resources or to the user’s personal data.

To help mitigate the risks associated with app vulnerabilities, organizations should develop security requirements that specify, for example, how data used by an app should be secured, the environment in which the app will be deployed, and the acceptable level of risk for the app. To help ensure that an app conforms to such requirements, organizations should evaluate the security of the app through what we call an app vetting process: a sequence of activities that aims to determine whether an app conforms to an organization’s security requirements. In this tutorial, we describe the app vetting process and its application to securing mobile applications. In addition, we describe and demonstrate the NIST AppVet Mobile App Vetting System for vetting the security of mobile applications.
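
As a hedged sketch of what a "sequence of activities" can look like in practice, the snippet below wires a few hypothetical analyzers into a pipeline that compares findings against an organization's acceptable level of risk; the analyzer names, severities, and threshold are invented for illustration and do not reflect the actual interfaces of the AppVet system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    analyzer: str
    issue: str
    severity: int          # 1 (low) .. 5 (critical)

# Hypothetical analyzers; real vetting would run static and dynamic analysis tools.
def check_permissions(app_path: str) -> List[Finding]:
    return [Finding("permissions", "requests SEND_SMS without justification", 3)]

def check_crypto(app_path: str) -> List[Finding]:
    return [Finding("crypto", "hard-coded symmetric key", 5)]

def check_network(app_path: str) -> List[Finding]:
    return []              # nothing found

def vet_app(app_path: str,
            analyzers: List[Callable[[str], List[Finding]]],
            max_acceptable_severity: int) -> bool:
    """Run each activity in sequence; approve only if every finding falls within
    the organization's acceptable level of risk."""
    findings = [f for analyze in analyzers for f in analyze(app_path)]
    for f in findings:
        print(f"[{f.analyzer}] severity {f.severity}: {f.issue}")
    return all(f.severity <= max_acceptable_severity for f in findings)

approved = vet_app("example.apk",
                   [check_permissions, check_crypto, check_network],
                   max_acceptable_severity=3)
print("APPROVED" if approved else "REJECTED")
```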