APC 技術ブログ


This is the technical blog of AP Communications Co., Ltd. (株式会社エーピーコミュニケーションズ).

Testing Generative AI Models: What You Need to Know


I'm Chen from the Lakehouse Department of the GLB Division. Based on a report by Nagae, who attended Data + AI Summit 2023 (DAIS 2023) in San Francisco, I have written this overview of the session "Testing Generative AI Models: What You Need to Know."

The talk, presented by Robust Intelligence, a San Francisco-based company specializing in AI security, discussed the risks involved in testing AI models and how to manage them. The target audience includes engineers interested in AI technology, risk managers at companies that develop and operate AI models, and members of the general public interested in AI ethics.

Below, we introduce the AI risk categories and the management methods presented in the session, in order.

1. Operational risk

Operational risk is the risk associated with operating AI systems. Specific examples include:

  • Data quality or integrity issues
  • Model overfitting or underfitting
  • Poor model performance evaluation and validation

To address these risks, it is important to preprocess and cleanse data, select evaluation metrics appropriate to the model and task, and apply sound validation methods.
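As a concrete illustration of these countermeasures, here is a minimal sketch of data cleansing followed by holdout evaluation. The session did not show code; the function names, toy records, and trivial keyword "model" below are our own assumptions, standing in for a real pipeline and classifier.

```python
# Minimal sketch: cleanse data, hold out a validation set, evaluate a model.
# (Illustrative only; names and data are hypothetical, not from the session.)

def cleanse(records):
    """Drop records with missing fields or out-of-range labels."""
    return [r for r in records
            if r.get("text") and r.get("label") in (0, 1)]

def train_test_split(records, test_ratio=0.25):
    """Deterministic holdout split so the evaluation is reproducible."""
    cut = int(len(records) * (1 - test_ratio))
    return records[:cut], records[cut:]

def accuracy(model, test_set):
    """One possible evaluation metric; pick metrics that fit the task."""
    correct = sum(1 for r in test_set if model(r["text"]) == r["label"])
    return correct / len(test_set)

raw = [
    {"text": "great product", "label": 1},
    {"text": "terrible", "label": 0},
    {"text": None, "label": 1},   # missing text  -> removed by cleanse()
    {"text": "ok", "label": 2},   # invalid label -> removed by cleanse()
    {"text": "awful", "label": 0},
    {"text": "love it", "label": 1},
]

clean = cleanse(raw)
train, test = train_test_split(clean)
# A trivial keyword rule standing in for a trained classifier.
model = lambda text: 1 if any(w in text for w in ("great", "love")) else 0
print(len(clean), accuracy(model, test))  # → 4 1.0
```

In a real project the same structure applies, but with a proper training step, stratified or cross-validated splits, and task-appropriate metrics (e.g. precision/recall for imbalanced labels).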

2. Ethical risk

Ethical risk is the risk that an AI system produces outcomes that conflict with people's values and social norms. Possible risks include:

  • Model output that promotes bias and discrimination
  • Use of data that violates individual privacy
  • Generating or recommending inappropriate content

To address these risks, it is necessary to consider ethical issues from the model design stage onward, apply bias-mitigation techniques, and adopt privacy-protecting technologies.
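One common starting point for bias checks is to compare outcome rates across demographic groups. The session did not prescribe a specific metric; the sketch below uses demographic parity difference, one widely used choice, on hypothetical model decisions.

```python
# Minimal sketch of one bias check (demographic parity difference).
# The decisions data is hypothetical; 1 = positive outcome (e.g. approved).

def positive_rate(outcomes):
    """Fraction of positive outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest gap in positive-outcome rates across groups.
    Values near 0 suggest similar treatment; large gaps warrant review."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 1],   # 80% positive
    "group_b": [1, 0, 0, 0, 1],   # 40% positive
}
gap = demographic_parity_gap(decisions)
print(round(gap, 2))  # → 0.4, a large gap that flags a potential bias issue
```

A check like this belongs in the model validation stage, alongside accuracy metrics, so that bias issues surface before deployment rather than after.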

3. Security and privacy risks

Security and privacy risks are the risks that AI systems will be exploited by malicious attackers. These include:

  • Model theft or tampering
  • Data leakage or unauthorized access
  • Cyberattacks using AI systems

In order to deal with these risks, it is important to implement thorough security measures, such as model and data protection, access control, and monitoring.
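As one small example of model protection and monitoring, a deployed model artifact can be checksummed at deployment time and verified before every load, so tampering is detected. This is our own illustrative sketch, not a technique from the session, and real deployments would also sign artifacts and enforce access control.

```python
# Minimal sketch: detect tampering with a serialized model artifact
# by comparing its SHA-256 digest against one recorded at deployment.
import hashlib
import os
import tempfile

def sha256_of(path):
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path, expected_digest):
    """Refuse to load a model whose bytes no longer match the record."""
    return sha256_of(path) == expected_digest

# Record the digest when the model is deployed ...
model_path = os.path.join(tempfile.mkdtemp(), "model.bin")
with open(model_path, "wb") as f:
    f.write(b"model-weights-v1")
trusted_digest = sha256_of(model_path)
assert verify_model(model_path, trusted_digest)

# ... and verify before every load; any modification changes the digest.
with open(model_path, "ab") as f:
    f.write(b"malicious-patch")
print(verify_model(model_path, trusted_digest))  # → False: tampering detected
```

The same pattern extends naturally to monitoring: log every verification result and alert on failures, so tampering attempts are visible rather than silent.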

Risk countermeasures using case studies

The session used case studies to show how to identify risks and take appropriate countermeasures. Through concrete examples, it demonstrated that understanding the causes and effects of each risk, and planning specific responses, is effective in AI risk management.

As AI technology develops, risk management is becoming increasingly important. The AI risk classification and management methods introduced in this session are a useful reference for improving the safety and reliability of AI systems.


This content is based on reports from members participating on site in the DAIS sessions. During the DAIS period, articles covering the sessions will be posted on the special site below, so please take a look.

Translated by Johann


Thank you for your continued support!