APC Tech Blog

This is the technical blog of AP Communications Co., Ltd.

Benchmarking Data and AI Platforms: How to Choose and Use Good Benchmarks

Preface

This session introduced the "Lake House AI Benchmark." Although Joe Harris was originally scheduled to present, an unexpected emergency meant that Shannon Barlow stepped in to deliver the presentation.

Data analytics is an ever-evolving field in which new tools and enhancements appear daily, and it benefits greatly from the Lake House architecture, which integrates the features of data lakes and data warehouses. This combination is designed to improve performance across domains such as predictive analytics, data science, and machine learning.

The session focused on the importance of benchmarks for evaluating the performance of Lake House architecture, and provided guidance on effectively selecting and applying these benchmarks. Shannon delved into how to choose the right benchmarks and deploy them properly to assess data platforms. This practice is crucial, given the rapid technological advances in the field.

Key points to consider when utilizing these benchmarks were highlighted, offering attendees a clear pathway for effectively comparing performance during the technology selection process.

The session proved highly informative, imparting essential knowledge about cutting-edge AI platforms and accompanying benchmarking techniques to attendees.

As the data analytics industry rapidly adopts new technologies and tools, the use of benchmarks is increasingly recognized as crucial. Despite their important role, however, benchmarks are often viewed with considerable skepticism. This skepticism arises because, even after benchmark scores are published, the community and practitioners still have to select platforms by weighing the trade-off between performance and total cost of ownership (TCO).

Performance and TCO are correlated in many scenarios, but the relationship does not hold universally. A vendor may, for example, release a much faster solution the following year while the benchmarks used to evaluate it remain unchanged.

During the session, a screenshot of a blog post with a provocative headline was shown to illustrate how widespread the cynical view of benchmarks has become. Many attendees raised their hands in agreement, vividly illustrating that benchmarks can sometimes be manipulated to support specific claims, which has fostered skepticism within the community over time.

In the rapidly evolving field of data analytics, numerous tools and technologies regularly emerge. Benchmarks play a vital role in standardizing performance and TCO evaluation of these developments. However, the benchmarking process poses several challenges.

A significant challenge is the lack of official benchmark submissions for platforms operated in on-premise environments. Without benchmarks, creating accurate assessments of such platforms is extremely difficult. For example, evaluating the cost-effectiveness of a tool remains speculative without benchmark data.

Reflecting on a session held two years ago, three floors up in the same venue, the speaker recalled that it was the first report of a systematized on-premise platform deployment. That achievement, however, never resulted in an officially recognized benchmark submission. The original document of roughly 100 pages, even after being integrated with Delta Live Tables (DLT) and deployed in native notebooks, was significantly condensed, and this session emphasized how much it had been streamlined.

These examples highlight not only the importance of selecting benchmarks but also managing them effectively. Without official benchmarks, assessing which tools lead the market or are progressing becomes even more challenging. This section delves into the ongoing importance of benchmarks and discusses the implications of their absence. With proper understanding, organizations can better navigate the complex landscape of data analysis and AI platforms and ensure the selection of tools that align with operational goals and financial constraints.

This session revisited slides from two years ago and highlighted key lessons learned at a time when there were no reference points for assessing the effectiveness of such metrics. Since then, a heuristic of 'cost per million rows' has become an important metric. It measures the cost efficiency of processing an organization's data at the level of individual rows: the goal is to achieve the most cost-effective rate per row, whether the workload involves as little as one gigabyte of data or scales up to ten terabytes.

This nuanced approach allows for the evaluation of actual data processing costs compared to other available benchmarks. By focusing not just on raw numbers but also on the balance between data scale and cost efficiency, attendees are provided with practical insights crucial for decision-making.
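As a rough illustration of the idea (not taken from the session), the sketch below normalizes a job's total cost to a per-million-rows figure so that runs of very different sizes can be compared on the same scale. The function name and the example costs and row counts are hypothetical.

```python
# A minimal sketch of the 'cost per million rows' heuristic.
# All names and numbers here are illustrative assumptions, not session data.

def cost_per_million_rows(total_cost_usd: float, rows_processed: int) -> float:
    """Normalize a job's total cost to the cost of processing one million rows."""
    if rows_processed <= 0:
        raise ValueError("rows_processed must be positive")
    return total_cost_usd / (rows_processed / 1_000_000)

# Comparing a small (~1 GB) job and a large (~10 TB) job on the same metric.
small_job = cost_per_million_rows(total_cost_usd=0.12, rows_processed=5_000_000)
large_job = cost_per_million_rows(total_cost_usd=480.0, rows_processed=40_000_000_000)
print(f"small job: ${small_job:.4f} per million rows")
print(f"large job: ${large_job:.4f} per million rows")
```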

This methodology is particularly beneficial for organizations selecting data analysis tools, emphasizing the importance of practical benchmarks in everyday decision-making processes. As tools and technologies evolve, benchmarks that match actual usage scenarios will continue to serve as essential reference information.

Evolution of Benchmarks and Community Challenges

The session then explored how benchmarks have evolved in the fast-moving field of data analysis and addressed various challenges faced by the community. The discussion focused on the need for benchmarks to change continuously to accommodate new technologies, as demonstrated by the evolving TPC benchmark standards.

Evolution of TPC Benchmarks

Initially, the focus was on TPC-H, which sits between its predecessor TPC-D and its successor TPC-DS. TPC-H was introduced as a rushed replacement for TPC-D in order to keep pace with rapid changes in database technology and industry needs, and the fact that such a quick replacement was necessary at all underscores how swiftly benchmarks must adapt to remain relevant and effective.

Structure and Challenges of TPC-H

TPC-H models a product-inventory data warehouse with a relatively simple schema: one large table (LINEITEM) complemented by seven smaller tables, which keeps data types and complexity to a minimum. While this simplicity makes tuning easy, it also raises significant issues: the ease of tuning means TPC-H can be gamed with sophisticated settings that produce optimal, but potentially misleading, performance results. This underscores the importance of designing benchmarks that reflect actual operations, not just ideal laboratory conditions.
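For readers who want to look at the TPC-H schema and queries first-hand, the sketch below uses DuckDB's bundled tpch extension to generate a small dataset locally and run one of the 22 queries. This is only an illustration added here, not a tool referenced in the session, and the scale factor and query number are arbitrary choices.

```python
# A minimal sketch: inspect the TPC-H schema and run a query with DuckDB.
# DuckDB and its tpch extension are used here purely for illustration.
import duckdb

con = duckdb.connect()
con.sql("INSTALL tpch")
con.sql("LOAD tpch")

# Generate a small dataset (scale factor 0.1, roughly 100 MB of data).
con.sql("CALL dbgen(sf = 0.1)")

# The schema: one large fact table (lineitem) plus seven smaller tables.
print(con.sql("SHOW TABLES"))

# Run TPC-H query 1, a simple aggregation over lineitem.
print(con.sql("PRAGMA tpch(1)"))
```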

Community Challenges

The discussion also covered the community's reaction to these benchmarks. The simplification of TPC-H and other benchmarks can create discrepancies between benchmark results and real user experiences. This disconnect points to significant concerns within the data analysis community regarding the real-world applicability of benchmark results. These discussions promote the continuous improvement of benchmarking practices, urging developers to create more comprehensive benchmarks that better reflect true technological performance.

Through this analysis, it became clear how closely intertwined the evolution of benchmarks is with technological advancements and how deeply they impact the data analysis industry. Benchmarks serve not only as indicators of performance but also as catalysts for innovation and markers of areas where further technological advancements are needed. This session highlighted the dynamic nature of benchmarks as tools that reflect and influence the state of technology in actual applications.

Performance and Cost Efficiency

This part focused on concrete outcomes and challenges in dataset optimization and clustering. Specifically, it detailed the optimizations performed on each main fact table of a 1 TB dataset and their impact.

  1. Implementation of Data Optimization:

    • Optimizations were applied to all main fact tables (hundreds of megabytes in size) for the same job.
    • As a result, the total job execution time increased by 44%, with a corresponding 44% increase in costs.
  2. Utilization of Clustering:

    • In addition to optimization, Liquid Clustering technology was also employed.
    • Clustering usually runs in the background and can be particularly helpful for Predictive Optimization when using Databricks, but there is not always time to wait for such background optimizations (a minimal sketch of both operations follows this list).
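As a point of reference, the sketch below shows what the two operations above typically look like in a Databricks notebook, where spark is the ambient SparkSession. The table and column names are hypothetical, and this is a minimal illustration rather than the exact commands used in the session.

```python
# A minimal sketch (hypothetical table and column names) of the two techniques
# discussed above, as run from a Databricks notebook where `spark` is predefined.

# Enable Liquid Clustering on an existing Delta table by choosing clustering keys.
spark.sql("ALTER TABLE sales_fact CLUSTER BY (order_date, region)")

# Trigger optimization explicitly instead of waiting for background maintenance;
# on a liquid-clustered table, OPTIMIZE incrementally reclusters the data.
spark.sql("OPTIMIZE sales_fact")
```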

This section demonstrated that tuning data for consumption is a time- and resource-intensive task, yet one whose outcomes can enhance performance. When adopting new technologies, the balance between time and cost efficiency must be considered.

While the initial cost of the optimization process increases, the resulting faster data access and processing capabilities can be seen as long-term benefits. This case study provides indispensable insights for benchmarking data and AI platforms.

Conclusion

As the use of data analysis and AI expands, optimizing for performance and cost efficiency becomes increasingly important. This session provided opportunities for practical understanding through specific case studies, demonstrating the importance of selecting appropriate tools and strategies. Although the application of optimization and clustering technologies requires initial investment, the performance improvements achieved can yield significant long-term benefits.

About the special site during DAIS

This year, we have set up a special site to report on session contents and the on-site atmosphere at DAIS! We plan to update the blog every day during DAIS, so please take a look.

www.ap-com.co.jp