Using the F1/10 Autonomous Racing Platform for Runtime Verification Research
Houssam Abbas, University of Pennsylvania, USA
Monitor synthesis from high-level specifications provides a fast and convenient way of developing monitoring code with guarantees on monitoring complexity. It remains important to test the generated code on a realistic setup that closely mimics the final deployment environment. In this talk we will describe the F1/10 Autonomous Racing Platform, and its associated simulator, as an environment for developing and testing auto-generated monitoring code for robotic applications. F1/10 is a fully open-source project for creating a 1/10th-scale autonomous race car, and its associated simulator allows easy and risk-free testing of navigation and monitoring algorithms. The platform allows developers to run their monitors on the deployment hardware, inside a deployment software stack based on ROS, which means that the code can be deployed as-is on the car. This tutorial-style talk will also demonstrate how to set up the simulator and how to synthesize ROS-based monitors using the Reelay tool. We will also describe plans for creating hardware-as-a-service for testing algorithms targeting autonomous driving.
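To give a flavor of the kind of monitor that would run inside a ROS node, here is a minimal, self-contained Python sketch of a bounded past-time property ("the value stayed below a limit for the last n samples"). This is illustrative only; it does not use the actual Reelay or ROS APIs, and the class and parameter names are invented for this example.

```python
from collections import deque

class BoundedPastAlways:
    """Monitors the past-time property 'the value stayed below
    `limit` for each of the last `n` samples' over a data stream."""
    def __init__(self, limit, n):
        self.limit = limit
        self.window = deque(maxlen=n)

    def update(self, value):
        # Record whether the newest sample satisfies the predicate
        self.window.append(value < self.limit)
        # Verdict is True only once the window is full and every
        # sample in it satisfies the predicate
        return len(self.window) == self.window.maxlen and all(self.window)

# In a ROS node, update() would be invoked from a topic callback,
# e.g. on each incoming speed measurement.
monitor = BoundedPastAlways(limit=2.0, n=3)
verdicts = [monitor.update(v) for v in [1.0, 1.5, 1.2, 2.5, 1.0]]
print(verdicts)  # [False, False, True, False, False]
```

Tools like Reelay generate such monitors from a declarative specification rather than requiring them to be hand-written, with complexity guarantees on the generated code.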
Short Bio: Houssam is a postdoctoral fellow in the Department of Electrical and Systems Engineering at the University of Pennsylvania. His research interests are in the verification, control and conformance testing of autonomous Cyber-Physical Systems. Current research includes the verification of life-supporting medical devices, the verification and control of autonomous systems with a view towards certifying such systems, and anytime computation and control. Houssam holds a PhD in Electrical Engineering from Arizona State University.
Signal classification using temporal logic
Calin Belta, Boston University, USA
In recent years, there has been great interest in applying formal methods to machine learning (ML). In this talk, I will discuss inferring high-level descriptions of a system from its execution signals. The system behaviors are formalized using a temporal logic language called Signal Temporal Logic (STL). I will first focus on supervised, offline classification.
Given a set of pairs of signals and labels, where each label indicates whether the respective signal exhibits some system property, a decision-tree algorithm outputs an STL formula that distinguishes the two classes of signals. I will then show how this method can be extended to the online learning scenario, in which new signals may arrive over time and the previously inferred formula must be updated to accommodate the new data.
Finally, I will focus on the (unsupervised) clustering problem. In this case, it is assumed that only a set of unlabeled signals is available, that is, it is not known a priori if a signal exhibits a specific behavior or satisfies some property, and the end goal is to group similar signals together, in so-called clusters, and to describe each cluster with a formula.
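The classification idea above can be illustrated with a single decision-tree node. A primitive STL formula such as G (x < c) ("the signal always stays below c") has a quantitative robustness value, and signals are split by the sign of that robustness. This is a hand-rolled sketch, not the talk's algorithm: the signals are made up, and the threshold c is fixed by hand rather than learned by the decision-tree procedure.

```python
def robustness_always_below(signal, c):
    # Robustness of the STL formula G (x < c) over a sampled signal:
    # positive iff every sample stays strictly below c, and its
    # magnitude measures how robustly the formula is satisfied/violated.
    return min(c - x for x in signal)

# Hypothetical labeled signals (e.g., sampled sensor traces)
signals = {
    "s1": [0.1, 0.4, 0.3],
    "s2": [0.2, 0.9, 0.5],
    "s3": [0.1, 0.2, 0.2],
}

# One decision-tree node: classify by the sign of the robustness
threshold = 0.6
classified = {name: robustness_always_below(sig, threshold) > 0
              for name, sig in signals.items()}
print(classified)  # {'s1': True, 's2': False, 's3': True}
```

A decision-tree learner composes many such primitive tests, choosing the formula template and its parameters (here, c) to best separate the labeled data; the resulting tree can be read back as a single STL formula.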
Short Bio: Calin Belta is a Professor in the Department of Mechanical Engineering at Boston University, where he holds the Tegan Family Distinguished Faculty Fellowship. He is the Director of the BU Robotics Lab, and is also affiliated with the Department of Electrical and Computer Engineering, the Division of Systems Engineering at Boston University, the Center for Information and Systems Engineering (CISE), and the Bioinformatics Program.
His research focuses on dynamics and control theory, with particular emphasis on hybrid and cyber-physical systems, formal synthesis and verification, and applications in robotics and systems biology. He received the Air Force Office of Scientific Research Young Investigator Award and the National Science Foundation CAREER Award.
Systematic analysis and improvement of CNNs
Tommaso Dreossi, University of California, Berkeley, USA
In this talk, I will present 1) a framework to systematically analyze convolutional neural networks and 2) a counterexample-guided data augmentation scheme. The analysis procedure comprises a falsification framework in the system-level context. The falsifier is based on an image generator that produces synthetic pictures by sampling in a lower dimension image modification subspace. The generated images are used to test the CNN and expose its vulnerabilities. The misclassified pictures (counterexamples) are then used to augment the original training set and hence improve the accuracy of the considered model. The talk focuses on case studies of object detection in autonomous driving based on deep learning.
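The counterexample-guided loop described above can be sketched in a few lines. This is a toy stand-in, not the framework from the talk: the one-dimensional "modification subspace" (a brightness scale), the stub classifier, and all function names are invented for illustration, and the real system samples a richer image modification space and tests an actual CNN.

```python
import random

def modify(base, brightness):
    # Hypothetical one-dimensional modification subspace:
    # scale pixel intensities, clipped to [0, 1]
    return [min(1.0, p * brightness) for p in base]

def classifier(img):
    # Stub standing in for a CNN: "detects the object"
    # iff the average intensity is in a moderate range
    avg = sum(img) / len(img)
    return 0.2 < avg < 0.9

def find_counterexamples(base, label, n_samples=100, seed=0):
    rng = random.Random(seed)
    counterexamples = []
    for _ in range(n_samples):
        b = rng.uniform(0.1, 2.0)       # sample the modification subspace
        img = modify(base, b)
        if classifier(img) != label:    # misclassification exposed
            counterexamples.append(img)
    # These counterexamples are added to the training set,
    # and the model is retrained on the augmented data
    return counterexamples

base_image = [0.5, 0.6, 0.4]  # toy "image" with known label True
cex = find_counterexamples(base_image, label=True)
print(len(cex) > 0)
```

The key point is that counterexamples are generated cheaply from low-dimensional samples rather than searched for in raw pixel space, which makes the falsify-augment-retrain loop practical.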
Short Bio: Tommaso Dreossi is currently a postdoctoral researcher in the group of Sanjit Seshia in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. He obtained his Ph.D. in Computer Science from the University of Udine (Italy) and the University Joseph Fourier in Grenoble (France), working under the joint supervision of Carla Piazza and Thao Dang. He also holds a Master's degree in Computer Science from the University of Udine (Italy).
First order temporal properties of continuous signals
Thomas Ferrère, Institute of Science and Technology (IST), Austria
We propose to introduce first-order quantifiers into Signal Temporal Logic. Our definitions make it possible to describe many properties of interest that are otherwise not expressible, such as the stabilization of a signal around an undetermined threshold. A monitoring algorithm is described for the one-variable fragment of this new logic, based on a geometric representation of validity domains.
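As an example of what the quantifiers buy, the stabilization property mentioned above can be written with an existential quantifier over the unknown threshold. One plausible formulation (the notation here is our illustration, not necessarily the talk's):

```latex
\exists c \,.\; \mathbf{F}\,\mathbf{G}\,\big(\, |x - c| \le \varepsilon \,\big)
```

Read: there exists some value $c$ such that, eventually, the signal $x$ remains within $\varepsilon$ of $c$ forever. Without the quantifier, plain STL can only express stabilization around a threshold fixed in advance.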
Short Bio: Thomas Ferrère is a postdoctoral researcher at IST Austria, Vienna in the group of Tom Henzinger. Previously he worked towards a PhD in Mathematics and Computer Science at Verimag, University of Grenoble, France under the supervision of Oded Maler, and in collaboration with Mentor Graphics corporation. The research of Thomas Ferrère is centered on monitoring, pattern matching, and fault localization applied to cyber-physical systems and mixed analog/digital circuit verification.
Hardware-based runtime verification with Tessla
Martin Leucker, Universität zu Lübeck, Germany
In this talk, we present a novel platform for the online monitoring of multicore systems. It gives insight into the system's behavior without affecting it. This insight is crucial for detecting non-deterministic failures, as caused for example by race conditions and accesses to inconsistent data.
A system-on-chip is observed using the tracing capabilities available on many modern multicore processors, which provide highly compressed tracing information over a separate tracing port. From this information our system reconstructs the sequence of instructions executed by the processor, which can then be analyzed online by a reconfigurable monitoring unit. Analyses are described in TESSLA, a high-level temporal stream-based specification language, and are compiled to configurations of the monitoring unit. To cope with the amount of tracing data generated by modern processors, our system is implemented in hardware using an FPGA.
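The stream-based specification style can be illustrated in miniature: an output stream is defined as a running fold over an input stream of events. This Python sketch is not TESSLA syntax and the event names are hypothetical; it only shows the shape of a derived stream over a reconstructed instruction trace.

```python
def stream_fold(f, init, stream):
    """Derive an output stream from an input stream by folding:
    each output event is the accumulator after the matching input event,
    in the spirit of stream-based specification languages."""
    acc = init
    out = []
    for event in stream:
        acc = f(acc, event)
        out.append(acc)
    return out

# Input stream: instruction events reconstructed from the trace port
# (hypothetical event names, for illustration only)
instructions = ["load", "store", "load", "branch", "store"]

# Derived output stream: running count of memory accesses
mem_accesses = stream_fold(
    lambda n, i: n + (1 if i in ("load", "store") else 0),
    0, instructions)
print(mem_accesses)  # [1, 2, 3, 3, 4]
```

In the hardware setting, such stream transformations are not interpreted in software but compiled into configurations of the FPGA-based monitoring unit, so they keep up with the trace rate.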
The work presented is carried out within the COEMS project and has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement no. 732016.
Short Bio: Martin Leucker is Director of the Institute for Software Engineering and Programming Languages at the University of Lübeck, Germany. He obtained his Habilitation at TU München (awarded in 2007) while being a member of Manfred Broy's group on Software and Systems Engineering. At TU Munich, he also worked as a Professor of Theoretical Computer Science and Software Reliability. Martin Leucker is the author of more than 100 reviewed conference and journal papers in software engineering, formal methods, and theoretical computer science. He is frequently a PC member of top-ranked conferences and has been the principal investigator in several research projects with industry participation, especially in the medical devices, automotive, and energy domains.
Formalizing Requirements for Cyber-Physical Systems: Real-World Experiences and Challenges
Jim Kapinski, Toyota Research Institute of North America, Toyota, USA
Cyber-physical systems (CPSs) are used in many mission critical applications, such as automobiles, aircraft, and medical devices; therefore, it is vital to ensure that these systems behave correctly. Testing is required to ensure correct behavior for CPSs, but traditional methods to evaluate test data can be time consuming and unreliable. New technologies, such as property monitoring, can automatically evaluate measurements of system behaviors against requirements; however, most property monitoring methods rely on the availability of formal system requirements, often in the form of temporal logic formulae, which can be challenging to develop for complex applications. This talk describes industrial CPS challenges in defining formal requirements, presents examples from testing results for an automotive fuel cell application, and suggests possible directions for solutions.
Short Bio: James Kapinski is a Senior Principal Scientist at the Toyota Research Institute of North America (TRINA) in Ann Arbor, Michigan and is a Senior Member of the IEEE. James received a B.S. and M.S. in Electrical Engineering from the University of Pittsburgh in 1996 and 1999, respectively, and a Ph.D. from Carnegie Mellon University in 2005. From 2007 to 2008 he was a post-doctoral researcher at Carnegie Mellon University. He went on to found and lead Fixed-Point Consulting, serving clients in the defense, aerospace, and automotive industries. James has been with Toyota since 2012. His work at Toyota focuses on advanced research into verification techniques for control system designs and analysis of hybrid dynamical systems.
Real-time Stream Processing with RTLola
Hazem Torfah, Saarland University, Germany
The online evaluation of real-time data streams is a ubiquitous part of modern IT systems. Applications range from collecting statistics on data-driven platforms to monitoring safety-critical real-time systems. We present RTLola, a specification and monitoring framework for stream-based real-time properties. RTLola's stream processing engine translates input streams, which contain sensor readings and other data collected at runtime, into output streams, which contain aggregated statistics. This setup has the advantage of great expressiveness and easy-to-reuse, compositional specifications. A major challenge in monitoring real-time data is to build stable and efficient stream processing engines. A monitor that consumes an unbounded amount of memory is bound to eventually disrupt the normal operation of the system. In real-time applications, data arrives at varying rates that are usually hard to predict. A key feature of RTLola is that it computes guaranteed memory bounds based on the desired output rates, which are usually known in advance. We illustrate the use of RTLola with a series of case studies from a wide range of real-time systems, including networks and unmanned aerial systems.
Short Bio: Hazem Torfah is a member of the Reactive Systems Group at Saarland University in Germany and a doctoral candidate in the Saarbrücken Graduate School of Computer Science. His research interests are the specification, verification, and synthesis of reactive systems, especially, the investigation of quantitative variants of these problems. In his work in runtime verification, Hazem focuses on the development of descriptive specification languages that allow for the definition of statistical system properties, and algorithms for the associated monitoring tasks.