APC Technical Blog

This is the technical blog of AP Communications Co., Ltd.

Taking Your Cloud Vendor to the Next Level: Solving Complex Challenges with Azure Databricks

Solving Cloud Data Challenges with Azure Databricks

This is Johann from the Global Engineering Department of the GLB Division. I have written this article summarizing the content of the session, based on reports from Mr. Kanemaru, who participated in Data + AI Summit 2023 (DAIS).

Today, I would like to share an easy-to-understand summary of a session Mr. Nagasato recently attended, titled "Taking Your Cloud Vendor to the Next Level: Solving Complex Challenges with Azure Databricks." The session was presented by Itai Yaffe, Senior Big Data Architect at Akamai, and Tomer Patel, Engineering Manager. Its theme was how to solve cloud data challenges using Azure Databricks, making it very interesting for data engineers, analysts, and other technical professionals. Without further ado, let's dive into the main points!

Storing Raw Data and Utilizing Kafka Messages

First, the lecture explained how to store raw data. By storing the raw data in Avro files and keeping only pointers (file names) in Kafka messages, it was shown that cost efficiency, performance, and scalability can all be improved. The specific steps are as follows (a small sketch of the pattern follows the list):

  1. Store raw data in Avro files: To store large amounts of raw data efficiently, the Avro file format is used. Avro files are characterized by high compression rates and fast read speeds.

  2. Store pointers in Kafka messages: To streamline access to the raw data, pointers to the files are stored in Kafka messages instead of the data itself. This keeps messages small and speeds up data retrieval and processing.
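
As a rough sketch of this pattern (not code from the session): the snippet below writes a batch of raw records to an Avro file in cloud storage and then publishes only the file path to a Kafka topic. The storage path, topic name, broker address, and source path are placeholders, and `spark` is the session predefined in Databricks notebooks.

```python
import json
from kafka import KafkaProducer  # kafka-python; assumed to be installed on the cluster

# Hypothetical locations -- replace with your own storage account and topic.
RAW_FILE_PATH = "abfss://raw@<storage-account>.dfs.core.windows.net/events/batch_0001"
KAFKA_TOPIC = "raw-event-pointers"

# 1. Persist the raw records as Avro (written as a directory of part files).
raw_df = spark.read.json("/tmp/incoming_events")  # any source of raw events
raw_df.write.format("avro").mode("overwrite").save(RAW_FILE_PATH)

# 2. Publish only a pointer (the file path) to Kafka, keeping messages small and cheap.
producer = KafkaProducer(
    bootstrap_servers="<broker>:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(KAFKA_TOPIC, {"path": RAW_FILE_PATH, "record_count": raw_df.count()})
producer.flush()
```

Because each Kafka message carries just a path and a little metadata, the topic stays lightweight even when the underlying files are large.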

Types of Storage and How to Choose

Next, the lecture explained the types of storage available and how to choose the right one. In Azure Databricks, the following three types of storage can be used:

  1. Standard Blob storage

  2. Premium Blob storage for minimal latency

  3. Azure Data Lake Storage (ADLS) Gen2 for Hadoop-compatible access and a hierarchical directory structure

Each of these storage types has different characteristics, and it is important to choose the right one based on your needs.
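
As a small illustration (the account and container names are placeholders, and authentication setup is omitted), the main practical difference when reading from Databricks is the URI scheme used for each storage type:

```python
# Standard and Premium Blob storage are addressed the same way (wasbs:// scheme);
# Premium simply offers lower latency.
blob_df = spark.read.format("avro").load(
    "wasbs://events@<storage-account>.blob.core.windows.net/raw/")

# ADLS Gen2 uses the abfss:// scheme and provides Hadoop-compatible access
# with a true hierarchical directory structure.
adls_df = spark.read.format("avro").load(
    "abfss://events@<storage-account>.dfs.core.windows.net/raw/")
```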

Optimizing Data Ingestion and Query Performance

The lecture also touched on optimizing data ingestion and query performance. It introduced a method of ingesting data from Kafka messages using the WSAR architecture and obtaining file paths from storage accounts. The specific steps are as follows (see the sketch after this list):

  1. Read Kafka messages

  2. Obtain file paths from storage accounts

  3. Ingest data
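
Putting the three steps together, a minimal Structured Streaming sketch might look like the following. The broker, topic, and table names are assumptions, and the message payload is taken to be the JSON pointer format shown earlier.

```python
from pyspark.sql.functions import col, get_json_object

# 1. Read Kafka messages as a stream (broker and topic names are placeholders).
pointers = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "<broker>:9092")
            .option("subscribe", "raw-event-pointers")
            .load())

def ingest_batch(batch_df, batch_id):
    # 2. Obtain the file paths referenced by this micro-batch of Kafka messages.
    paths = [row.path for row in batch_df
             .select(get_json_object(col("value").cast("string"), "$.path").alias("path"))
             .collect()]
    if paths:
        # 3. Ingest the referenced Avro files and append them to a Delta table.
        (spark.read.format("avro").load(paths)
              .write.format("delta").mode("append").saveAsTable("raw_events"))

pointers.writeStream.foreachBatch(ingest_batch).start()
```

Using foreachBatch keeps the Kafka stream itself small while still letting Spark read the bulky raw files in parallel.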

Additionally, the lecture explained how to improve query performance using Databricks Photon. Photon speeds up computation with a vectorized query engine, memory optimizations, and hardware-level optimizations such as SIMD instructions on modern CPUs.
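
Photon is selected at the cluster (or SQL warehouse) level rather than in individual queries. As a hedged sketch, a cluster created through the Databricks Clusters API can request the Photon engine via the runtime_engine field; the workspace URL, token, runtime version, and node type below are placeholders.

```python
import requests

cluster_spec = {
    "cluster_name": "photon-demo",
    "spark_version": "13.3.x-scala2.12",   # any Photon-capable Databricks Runtime
    "node_type_id": "Standard_DS3_v2",     # choose a Photon-supported Azure VM type
    "num_workers": 2,
    "runtime_engine": "PHOTON",            # request the Photon engine for this cluster
}

resp = requests.post(
    "https://<workspace-url>/api/2.0/clusters/create",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json=cluster_spec,
)
print(resp.json())
```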

Analyzing and Displaying Security Events

Finally, the lecture explained how to analyze and display security events. API calls from the UI are processed by a query builder that constructs SQL queries with Databricks SQL or Spark SQL, allowing customers to drill down into, aggregate, and display security events.
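
A minimal sketch of the query-builder idea (the table name, column names, and filter shape are assumptions for illustration, not the schema used in the session): UI filter parameters are translated into a SQL statement and executed with Spark SQL.

```python
def build_security_event_query(filters, group_by="event_type", limit=100):
    """Translate UI filter parameters into a Databricks SQL / Spark SQL statement."""
    # NOTE: values are inlined for brevity; a real implementation should escape or
    # parameterize them to avoid SQL injection.
    where = " AND ".join(f"{column} = '{value}'" for column, value in filters.items()) or "1=1"
    return (f"SELECT {group_by}, COUNT(*) AS event_count "
            f"FROM security_events "
            f"WHERE {where} "
            f"GROUP BY {group_by} "
            f"ORDER BY event_count DESC "
            f"LIMIT {limit}")

# Example: drill down into denied requests for a single customer.
query = build_security_event_query({"customer_id": "acme", "action": "deny"})
spark.sql(query).show()
```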

Summary

In this lecture, various methods for solving cloud data challenges using Azure Databricks were introduced. Topics such as storing raw data, choosing the right storage type, optimizing data ingestion and query performance, and analyzing and displaying security events were all very interesting. Why not try tackling cloud data challenges using Azure Databricks?

Conclusion

This content is based on reports from members participating on site in DAIS sessions. During the DAIS period, articles related to the sessions will be posted on the special site below, so please take a look.

Translated by Johann

www.ap-com.co.jp

Thank you for your continued support!