Online Program

Note:

  • All program times are Central European Summer Time, CEST (UTC+2). The core conference hours, 2-6pm CEST, correspond to 8pm-12 midnight in Beijing (UTC+8), 8am-12 noon in New York (UTC-4), and 5-9am US Pacific (UTC-7).
Saturday - April 9, 2022
Time | Room 1 | Room 2 | Room 3
7:00am - 11:00am | ITSMS
2:00pm - 6:00pm | Tutorial 1 - Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation | PECS | HotCloudPerf

Sunday - April 10, 2022
Time | Room 1 | Room 2 | Room 3
1:00pm - 2:00pm | Tutorial 2 - Automated Benchmarking of cloud-hosted DBMS with benchANT
2:00pm - 4:00pm | Tutorial 2 - Automated Benchmarking of cloud-hosted DBMS with benchANT | WOSP-C
4:00pm - 6:00pm | Tutorial 3 - SPEC Server Efficiency Benchmark | LTB | WOSP-C
6:00pm - 8:00pm | Tutorial 3 - SPEC Server Efficiency Benchmark | LTB
7:00pm - 9:00pm | LTB

Monday - April 11, 2022 (Day 1)
2:00pm - 2:15pm Opening
2:15pm - 3:00pm Keynote 1: Ivona Brandic, TU Wien
3:00pm - 3:10pm Break
3:10pm - 4:05pm Session 1 - Service and Cloud Computing
4:05pm - 4:15pm Break
4:15pm - 5:10pm Session 2 - GPUs and Heterogeneous Platforms
5:10pm - 5:15pm Short Break
5:15pm - Open End Poster/Demo Session
7:00pm - 9:00pm Annual Meeting of the SPEC RG Predictive Data Analytics Working Group & PANDA Workshop

Tuesday - April 12, 2022 (Day 2)
2:00pm - 2:05pm Gathering & Day 2 Welcome by PC
2:05pm - 2:40pm SPEC Research Group: Introduction and Updates; SPEC Kaivalya Dixit Distinguished Dissertation Award

2:40pm - 2:45pm Short Break
2:45pm - 3:30pm Session 3 - Empirical Studies of Performance
3:30pm - 3:40pm Break
3:40pm - 4:15pm Session 4 - Machine Learning and Performance
4:15pm - 4:55pm Data Challenge Presentations
4:55pm - 5:00pm Short Break
5:00pm - 5:45pm Keynote 2: John Wilkes, Google
5:45pm - Open End Awards Ceremony & Virtual Social Event

Wednesday - April 13, 2022 (Day 3)
2:00pm - 2:05pm Gathering & Day 3 Welcome by PC
2:05pm - 2:20pm 10-Year Most Influential Paper Award Presentation
2:20pm - 3:05pm Keynote 3: Longxiang Li, Inspur
3:05pm - 3:15pm Break
3:15pm - 4:15pm Session 5 - Hardware Performance
4:15pm - 4:25pm Break
4:25pm - 4:50pm WIP & Vision
4:50pm - 5:00pm Break
5:00pm - 5:45pm Session 6 - Theory of Performance
5:45pm - 6:00pm Closing & ICPE 2023

Session 1 - Service and Cloud Computing (Monday, 3:10pm - 4:05pm)

Session Chair: Klaus-Dieter Lange

Richard Li, Min Du, Zheng Wang, Hyunseok Chang, Sarit Mukherjee and Eric Eide. LongTale: Toward Automatic Performance Anomaly Explanation in Microservices (full research paper)

Mohammad Reza Saleh Sedghpour, Cristian Klein and Johan Tordsson. An empirical study of service mesh traffic management policies for microservices (full research paper)

Seetharami Seelam and Robert Walkup. Best Practices for HPC Workloads on Public Cloud Platforms: A Guide for Computational Scientists to Use Public Cloud for HPC Workloads (short industry paper)

Lixiang Luo, Ihsin Chung, Ming-Hung Chen, Seetharami Seelam and Yun Joon Soh. NVMe Virtualization in Cloud Virtual Machines (full industry paper)

Session 2 - GPUs and Heterogeneous Platforms (Monday, 4:15pm - 5:10pm)

Session Chair: Heng Li

Rizwan Ashraf and Roberto Gioiosa. Exploring the Use of Novel Spatial Accelerators in Scientific Applications (full research paper)

Wilson Feng, Shucai Yao, Md Aamir Raihan, Kai Ting Wang, Laichun Feng and Chunrong Xu. Extending SYCL’s Programming Paradigm with Tensor-based SIMD Abstractions (short industry paper)

Chuanming Shao, Jinyang Guo, Pengyu Wang, Jing Wang, Chao Li and Minyi Guo. Oversubscribing GPU Unified Virtual Memory: Implications and Suggestions (full research paper)

Rico van Stigt, Stephen Nicholas Swatman and Ana Lucia Varbanescu. Isolating GPU Architectural Features using Parallelism-Aware Microbenchmarks (full research paper)

Poster & Demo (Monday, 5:15pm - Open End)

Session Chairs: Christoph Laaber, Wen Xia

André Bauer, Mark Leznik, Md Shahriar Iqbal, Daniel Seybold, Igor Trubin, Benjamin Erb, Jörg Domaschka and Pooyan Jamshidi. SPEC Research — Introducing the Predictive Data Analytics Working Group

Nupur Sumeet, Manoj Nambiar and Deeksha Deeksha. HLS_Profiler: Non-Intrusive Profiling tool for HLS based Applications

Chetan Phalak, Aniruddha Sen, Dheeraj Chahal and Mayank Mishra. MAPLE: Model Aggregation and Prediction for Learned Ecosystem

Junjie Li. SPEChpc 2021 Benchmark Suites for Modern HPC Systems

Demos of research and industry papers with accepted artifacts

Stefan Kaalen, Mattias Nyberg, Anton Hampus and Olle Mattsson. A Stochastic Extension of Stateflow

Kim Long Ngo, Joydeep Mukherjee, Zhen Ming Jiang and Marin Litoiu. Supplemental Material for Evaluating the Scalability and Elasticity of Function as a Service Platform

Alexandru Baluta, Joydeep Mukherjee and Marin Litoiu. Supplemental Material for Machine Learning based Interference Modelling in Cloud-Native Applications

Session 3 - Empirical Studies of Performance (Tuesday, 2:45pm - 3:30pm)

Session Chair: Varsha Apte

Mark Leznik, Johannes Grohmann, Nina Kliche, André Bauer, Daniel Seybold, Simon Eismann, Samuel Kounev and Jörg Domaschka. Same, Same, but Dissimilar: Exploring Measurements for Workload Time-series Similarity (short research paper)

Mikael Sabuhi, Petr Musilek and Cor-Paul Bezemer. Studying the Performance Risks of Upgrading Docker Hub Images: A Case Study of WordPress (short research paper)

Martin Straesser, Johannes Grohmann, Joakim Von Kistowski, Simon Eismann, Andre Bauer and Samuel Kounev. Why Is It Not Solved Yet? Challenges for Production-Ready Autoscaling (full industry paper)

Kim Long Ngo, Joydeep Mukherjee, Zhen Ming Jiang and Marin Litoiu. Evaluating the Scalability and Elasticity of Function as a Service Platform (short industry paper)

Session 4 - Machine Learning and Performance (Tuesday, 3:40pm - 4:15pm)

Session Chair: Pooyan Jamshidi

Alexandru Baluta, Joydeep Mukherjee and Marin Litoiu. Machine Learning based Interference Modelling in Cloud-Native Applications (short research paper)

Ashwin Krishnan, Manoj Nambiar, Nupur Sumeet and Sana Iqbal. Performance Model and Profile Guided Design of a High-Performance Session Based Recommendation Engine (full industry paper)

Danilo de Goede, Duncan Kampert and Ana Lucia Varbanescu. The cost of reinforcement learning for game engines: the AZ-Hive case-study (short research paper)

Data Challenge (Tuesday, 4:15pm - 4:55pm)

Session Chairs: David Daly, Cor-Paul Bezemer, Weiyi Shang

André Bauer, Martin Straesser, Lukas Beierlieb, Maximilian Meißner and Samuel Kounev. Automated Triage of Performance Change Points Using Time Series Analysis and Machine Learning

Jie Chen, Hu Haiyang and Dongjin Yu. Characterizing and Triaging Change Points

Md Shahriar Iqbal, Mark Leznik, Igor Trubin, Arne Lochner, Pooyan Jamshidi and André Bauer. Change Point Detection for MongoDB Time Series Performance Regression

Luc Lesoil, Mathieu Acher, Arnaud Blouin and Jean-Marc Jézéquel. Beware of Variability Layers When Reasoning about Performance Evolution of MongoDB

Session 5 - Hardware Performance (Wednesday, 3:15pm - 4:15pm)

Session Chair: Yiming Tang

Sofiane Chetoui, Michael Chen, Abhinav Golas, Farrukh Hijaz, Adel Belouchrani and Sherief Reda. Alternating Blind Identification of Power Sources for Mobile SoCs (full research paper)

Markus Velten, Robert Schöne, Thomas Ilsche and Daniel Hackenberg. Memory Performance of AMD EPYC Rome and Intel Cascade Lake SP Server Processors (full research paper)

Mohammadreza Soltaniyeh, Veronica Lagrange Moutinho Dos Reis, Matt Bryson, Xuebin Yao, Richard Martin and Santosh Nagarakatte. Near-Storage Processing for Solid State Drive Based Recommendation Inference with SmartSSDs® (full industry paper)

Nupur Sumeet, Manoj Nambiar and Deeksha Deeksha. HLS_Profiler: Non-Intrusive Profiling for end-to-end Performance Analysis of HLS based Applications (full industry paper)

WIP & Vision (Wednesday, 4:25pm - 4:50pm)

Session Chair: Cristina Abad

Marius Hadry, Veronika Lesch and Samuel Kounev. FADE: Towards Flexible and Adaptive Distance Estimation Considering Obstacles

Gagan Somashekar, Anurag Dutt, Rohith Vaddavalli, Sai Bhargav Varanasi and Anshul Gandhi. B-MEG: Bottlenecked-Microservices Extraction Using Graph Neural Networks

Session 6 - Theory of Performance (Wednesday, 5:00pm - 5:45pm)

Session Chair: Maryam Elahi

Simonetta Balsamo, Andrea Marin and Isi Mitrani. A mixed PS-FCFS policy for CPU intensive workloads (full research paper)

Stefan Kaalen, Mattias Nyberg, Anton Hampus and Olle Mattsson. A Stochastic Extension of Stateflow (full research paper)

Adnan El Moussawi, Ricardo Rojas Ruiz and Nacéra Bennacer Seghouani. Sampling-based Label Propagation for Balanced Graph Partitioning (short industry paper)

Tutorial 1 (Sa, April 9, 2pm-5pm CEST)

Shreshth Tuli and Giuliano Casale

Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation

Abstract: This tutorial presents a performance engineering approach for optimizing the Quality of Service (QoS) of Edge/Fog/Cloud computing environments using AI and coupled simulation, developed as part of the Co-Simulation based Container Orchestration (COSCO) project (https://github.com/imperial-qore/COSCO). First, we introduce fundamental AI and co-simulation concepts, their importance for QoS optimization, and the performance engineering challenges that arise in the context of fog computing. The focus then shifts to how AI models, specifically deep neural networks (DNNs), can be used in tandem with simulated QoS estimates to make optimal resource management decisions. We touch upon the specific use case of training DNNs as surrogates to estimate key QoS metrics and utilizing such models to build policies for dynamic scheduling in a distributed fog environment. We demonstrate these concepts using the COSCO framework: using its metric monitoring and simulation primitives, we show the efficacy of an AI- and simulation-based scheduler on a fog/cloud platform. Finally, the tutorial presents AI baselines for the resource management problems that arise in this area.
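To make the surrogate-based scheduling idea above concrete, the following minimal Python sketch shows a DNN surrogate that predicts a scalar QoS cost for a candidate task-to-host placement, and a scheduler that picks the placement with the lowest predicted cost. It is purely illustrative and not taken from the COSCO code base; the model architecture, the feature encoding, and the toy problem size are assumptions.

```python
# Illustrative sketch (not COSCO code): a small neural surrogate that maps a
# candidate task-to-host placement to a predicted QoS cost, plus a scheduler
# that enumerates placements and keeps the cheapest one.
import itertools
import torch
import torch.nn as nn

NUM_HOSTS, NUM_TASKS = 4, 3  # assumed toy problem size


class QoSSurrogate(nn.Module):
    """DNN surrogate predicting a scalar QoS cost (e.g., weighted energy + response time)."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


def encode(placement, host_util):
    """Encode a task-to-host placement plus current host utilisation as a flat feature vector."""
    one_hot = torch.zeros(NUM_TASKS, NUM_HOSTS)
    one_hot[torch.arange(NUM_TASKS), torch.tensor(placement)] = 1.0
    return torch.cat([one_hot.flatten(), host_util])


def schedule(model, host_util):
    """Enumerate candidate placements and return the one with the lowest predicted QoS cost."""
    best, best_cost = None, float("inf")
    for placement in itertools.product(range(NUM_HOSTS), repeat=NUM_TASKS):
        with torch.no_grad():
            cost = model(encode(placement, host_util)).item()
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost


# In a COSCO-style setup the surrogate would be trained against co-simulated
# QoS estimates; here the model is untrained and only shows the decision flow.
model = QoSSurrogate(in_dim=NUM_TASKS * NUM_HOSTS + NUM_HOSTS)
host_util = torch.rand(NUM_HOSTS)  # stands in for monitored host utilisation
print(schedule(model, host_util))
```

In practice the exhaustive enumeration above would be replaced by the gradient-based or heuristic search strategies discussed in the tutorial; the sketch only illustrates the surrogate-in-the-loop decision flow.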

Tutorial 2 (Su, April 10, 1pm-4pm CEST)

Daniel Seybold and Jörg Domaschka

Automated Benchmarking of cloud-hosted DBMS with benchANT

Abstract: The benchANT Benchmarking-as-a-Service (BaaS) platform enables the end-to-end evaluation of distributed Database Management Systems (DBMS) hosted on cloud resources. benchANT provides an easy-to-use benchmark designer, a benchmarking automation framework, and extensive objective metric processing and visualization. It thereby supplies the data needed to support the decision process of selecting the optimal DBMS and cloud resource configuration for Web 2.0, Big Data, and Internet of Things applications, providing crucial metrics such as DBMS throughput and latency, scalability, and availability as well as cloud costs. The benchANT platform is highly extensible and can integrate any DBMS, cloud provider, and DBMS benchmark into its design, automation, and processing framework; it is also capable of benchmarking Database-as-a-Service offerings. Consequently, benchANT serves industrial users looking for a high-performance and cost-effective cloud-hosted DBMS solution as well as researchers who aim to carry out scientifically sound benchmarks of cloud-hosted DBMS to validate DBMS extensions, evaluate new benchmarks, or gather benchmark results for building performance models.
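As an illustration of the kind of declarative benchmark design the abstract describes, the sketch below defines a DBMS, a cloud resource configuration, a workload, and the metrics to collect, and hands it to a placeholder automation step. This is a hypothetical example and not the benchANT API; all names, fields, and values are assumptions.

```python
# Hypothetical illustration only -- NOT the benchANT API. It sketches a
# declarative benchmark design (DBMS, cloud resources, workload, metrics)
# and the shape of an automated "design -> run -> collect" workflow.
from dataclasses import dataclass, field


@dataclass
class BenchmarkDesign:
    dbms: str              # e.g. "cassandra-4.0"
    cloud_resources: dict  # e.g. {"provider": "aws", "instance": "m5.xlarge", "nodes": 3}
    workload: dict         # e.g. {"benchmark": "YCSB", "profile": "workloada", "records": 1_000_000}
    metrics: list = field(default_factory=lambda: ["throughput", "p99_latency", "cost_per_hour"])


def run_benchmark(design: BenchmarkDesign) -> dict:
    """Placeholder for the automation step: provision resources, deploy the DBMS,
    drive the workload, and collect the requested metrics."""
    # A real BaaS platform would provision cloud resources and execute the
    # workload here; this stub only returns empty results to show the data flow.
    return {metric: None for metric in design.metrics}


design = BenchmarkDesign(
    dbms="cassandra-4.0",
    cloud_resources={"provider": "aws", "instance": "m5.xlarge", "nodes": 3},
    workload={"benchmark": "YCSB", "profile": "workloada", "records": 1_000_000},
)
print(run_benchmark(design))
```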

Tutorial 3 (Su, April 10, 4pm-8pm CEST)

Maximilian Meissner, Klaus-Dieter Lange, Jeremy Arnold, Sanjay Sharma, Roger Tipley, Nishant Rawtani, David Reiner, Mike Petrich, Aaron Cragin

SPEC Server Efficiency Benchmark Development - How to Contribute to the Future of Energy Conservation

Abstract: A driving force behind the improvement of server efficiency in recent years has been the use of SPECpower benchmarks. They are used in mandatory government regulations, the ISO/IEC 21836:2020 standard, and product marketing, giving server manufacturers and buyers a significant incentive to improve energy efficiency. To produce relevant results, benchmarks need to take into account current and future trends in hardware and software development, such as the introduction of new accelerators and workloads. To keep pace with the fast-moving IT landscape, SPEC plans to introduce a workload bounty program to encourage researchers to develop novel workloads; submitted workloads will be considered for inclusion in a future SPECpower benchmark and recognized with an award. The goal of this tutorial is to equip participants with the knowledge and tools needed to conduct energy efficiency experiments and to use the SERT 2 suite. The tutorial will showcase how to use the Chauffeur Worklet Development Kit to develop next-generation workloads that enhance the real-world relevance of future SPECpower benchmarks, a critical element for the benchmarks to contribute to future energy conservation.