Performance Evaluation and Benchmarking: 13th TPC Technology Conference, TPCTC 2021, Copenhagen, Denmark, August 20, 2021, Revised Selected Papers

This book constitutes the refereed post-conference proceedings of the 13th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2021, held in August 2021. The 9 papers presented were carefully reviewed and selected from numerous submissions. The TPC encourages researchers and i...


Bibliographic Details
Main Authors: Nambiar, Raghunath; Poess, Meikel
Format: eBook
Language: English
Published: Springer Nature, Netherlands, 2022
Publisher: Springer International Publishing AG
Edition: 1
Series: Lecture Notes in Computer Science
ISBN: 3030944379; 9783030944377; 3030944360; 9783030944360

Table of Contents:
  • Intro -- Preface -- TPCTC 2021 Organization -- About the TPC -- TPC 2021 Organization -- Contents
  • A YCSB Workload for Benchmarking Hotspot Object Behaviour in NoSQL Databases: 1 Introduction -- 2 Background -- 2.1 Spikes -- 2.2 Yahoo Cloud Serving Benchmark (YCSB) -- 2.3 Problem Statement -- 3 YCSB Workload for Benchmarking Hotspot Object -- 3.1 SpikesGenerator -- 3.2 ObjectDataStore -- 3.3 LocalityManager -- 4 Functional Validation -- 4.1 Experiment Setup -- 4.2 Results -- 5 Related Work -- 6 Conclusion -- References
  • IoTDataBench: Extending TPCx-IoT for Compression and Scalability: 1 Introduction -- 2 Use Case: Benchmarking for Train Monitoring -- 2.1 Benchmarking Result and Settings -- 2.2 Learned Lessons -- 3 Related Works -- 4 IoTDataBench: A TPCx-IoT Evolution -- 4.1 Benchmarking Procedure -- 4.2 Data Model and Data Generation -- 4.3 Workload Generation: Ingestion and Query -- 4.4 Database Scalability Test -- 4.5 Benchmark Driver Architecture -- 4.6 Metrics -- 5 Evaluation -- 5.1 Implementation -- 5.2 Performance Metric Evaluation -- 5.3 Price/Performance Metric Evaluation -- 6 Discussion -- 7 Conclusions -- References
  • EvoBench: Benchmarking Schema Evolution in NoSQL: 1 Introduction -- 2 Related Work -- 3 Benchmark Implementation -- 3.1 Design Criteria -- 3.2 Design Overview -- 3.3 Configuration -- 3.4 Metrics -- 4 Data Generator and Data Sets -- 4.1 Data Sets -- 5 Proof of Concept -- 5.1 Effects of SES-Side, DB-Side and Different Entity Sizes -- 5.2 Comparison Between MongoDB and Cassandra -- 5.3 Effects of a Revised Version of the Schema Evolution System -- 5.4 Differences Between Stepwise and Composite -- 5.5 Comparison of a Real and a Synthetic Data Set -- 6 Conclusion and Future Work -- References
  • Everyone is a Winner: Interpreting MLPerf Inference Benchmark Results: 1 Introduction -- 2 MLPerf Inference Benchmark Suite -- 3 MLPerf Inference from Users Perspective -- 4 MLPerf Insights -- 4.1 Performance Scales Linearly with the Number of Accelerators -- 4.2 (Almost) Same Relative Performance Across All the AI Tasks -- 4.3 Nvidia GPU Performance Comparison -- 4.4 MLPerf Power -- 5 MLPerf Inference Winners -- 5.1 Nvidia -- 5.2 Qualcomm -- 5.3 Total Performance Winners -- 6 Our MLPerf Inference Experience -- 6.1 Work Closely with Chip Manufacturer -- 6.2 Use the Server with the Most Accelerators -- 6.3 A Small Performance Difference Can Have Large Consequences -- 6.4 Results Review -- 6.5 MLCommons Membership is Expensive -- 7 Improvements to MLPerf Inference -- 8 Summary and Conclusions -- References
  • CH2: A Hybrid Operational/Analytical Processing Benchmark for NoSQL: 1 Introduction -- 2 Related Work -- 2.1 HTAP (HOAP) -- 2.2 Benchmarks -- 3 CH2 Benchmark Design -- 3.1 Benchmark Schema -- 3.2 Benchmark Data -- 3.3 Benchmark Operations -- 3.4 Benchmark Queries -- 4 A First Target: Couchbase Server -- 5 Benchmark Results -- 5.1 Benchmark Implementation -- 5.2 Benchmark Configuration(s) -- 5.3 Initial Benchmark Results -- 6 Conclusion -- References
  • Orchestrating DBMS Benchmarking in the Cloud with Kubernetes: 1 Introduction -- 1.1 Contribution -- 1.2 Related Work -- 1.3 Motivation -- 2 Designing Benchmark Experiments in Kubernetes -- 2.1 Components of Cloud-based Benchmarking Experiments -- 2.2 Objects in Kubernetes -- 2.3 Matching Components of Benchmarking to Kubernetes Objects -- 2.4 Scalability -- 2.5 Orchestration -- 3 Experiments -- 3.1 Functional Tests -- 3.2 Stability Tests and Metrics -- 3.3 The Benchmark: Performance of Data Profiling -- 4 Discussion -- 5 Outlook -- 6 Conclusion -- References
  • A Survey of Big Data, High Performance Computing, and Machine Learning Benchmarks: 1 Introduction -- 2 Background -- 2.1 Big Data Benchmarking -- 2.2 High Performance Computing Benchmarking -- 2.3 Machine Learning Benchmarking -- 3 Methodology -- 3.1 Benchmarking Dimensions -- 3.2 Integrated Data Analytics Pipelines -- 3.3 Analysis of Big Data Benchmarks -- 3.4 Analysis of High Performance Computing Benchmarks -- 3.5 Analysis of Machine Learning Benchmarks -- 4 Related Work -- 5 Conclusion -- References
  • Tell-Tale Tail Latencies: Pitfalls and Perils in Database Benchmarking: 1 Introduction -- 2 Preliminaries -- 2.1 Database Benchmarks -- 2.2 The OLTPBench Benchmark Harness -- 2.3 JVM and Garbage Collectors -- 3 Experiments -- 3.1 Results -- 4 Discussion -- 5 Threats to Validity -- 6 Related Work -- 7 Conclusion and Outlook -- References
  • Quantifying Cloud Data Analytic Platform Scalability with Extended TPC-DS Benchmark: 1 Introduction -- 1.1 TPC-DS Benchmark -- 1.2 Separation of Storage and Compute in Cloud Data Analytic Platform -- 2 Related Work in Measuring Scalability -- 3 Proposed Extended TPC-DS Benchmark -- 3.1 Abstract Cloud Data Warehouse Resource Level -- 3.2 Normalize Resource Level -- 3.3 Normalize Benchmark Performance Metric -- 3.4 Scalability Factor -- 3.5 Extended TPC-DS Benchmark -- 4 Scalability Analysis for TPC-DS Operations -- 4.1 Load Operation -- 4.2 PowerRun -- 4.3 Throughput Run -- 4.4 Data Maintenance Run -- 4.5 Overall Scalability -- 5 Future Work -- References
  • Author Index