Naver Corporation

12/06/2024 | Press release | Archived content

NAVER Shares AI Safety Policy Case at UN, Emphasizes "Practical Policies Built on Internalized Technology"

- NAVER Showcases AI Ethics Advisory Process at Joint Event Hosted by SAPI, URG, and the Permanent Mission of the Republic of Korea in Geneva

December 6, 2024

NAVER Corporation (CEO Choi Soo-yeon) participated in "Towards a Human Rights-Based Approach to New and Emerging Technologies: From Concept to Implementation," an event held at the UN Geneva Office in Switzerland on December 5. There, NAVER showcased its ongoing efforts to foster a safe AI ecosystem.

[Photo] Park Woo Chul, an attorney on NAVER's Policy and RM Agenda team (second from right), delivers a presentation on NAVER's AI safety policies.

Since 2022, the Seoul National University Artificial Intelligence Policy Initiative (SAPI) and the Universal Rights Group (URG) have been engaged in extensive research on "Human Rights-Based Approaches to Emerging Technologies," publishing annual reports. This year, in collaboration with the Permanent Mission of the Republic of Korea in Geneva, SAPI presented its latest report, "Practical Guidelines for Implementing Human Rights-Based Norms in the Workplace." Distinguished speakers, including Ambassador Yun Seong-deok, Professor Lim Yong and Stephan Sonnenberg from Seoul National University, and representatives from the UN Office of the High Commissioner for Human Rights, explored various strategies to ensure that digital technologies, including AI, are developed in alignment with human rights values.

During the event, NAVER enriched the discussion by presenting practical examples of how abstract principles for safe AI are implemented in industry settings. Park Woo Chul, an attorney representing NAVER's Policy and RM Agenda team, delivered a presentation introducing NAVER's Consultation on Human-Centered AI's Ethical Considerations (CHEC) process, which has been in operation since 2022. CHEC is a policy framework designed to apply NAVER's "AI Ethics Principles" to the service launch process. Rather than a one-way inspection, it takes an interactive approach that incorporates social perspectives from the initial planning and development stages.

Attorney Park Woo Chul remarked, "Without understanding the on-site situation, AI ethics principles can become mere platitudes. NAVER has collaborated extensively with academic experts, including SAPI, to ensure these principles are practical and applicable in real-world implementation. The CHEC process also emphasizes understanding the processes of service planning and development on the ground, enabling effective collaboration aligned with the needs of service managers."

NAVER further outlined complementary policies aimed at elaborating and operationalizing its AI Ethics Principles. Published in 2023, the "CLOVA Studio AI Code of Ethics" represents NAVER's commitment to applying its AI Ethics Principles to the rapidly evolving field of generative AI technology. Furthermore, this year, NAVER launched the NAVER AI Safety Framework (ASF) to systematically identify, evaluate, and address potential risks associated with AI across its development and deployment processes.

Professor Lim Yong, director of SAPI at Seoul National University, commented, "This event is significant as it delivers actionable strategies for integrating a human rights-based approach into the development and application of new technologies." He added, "We will strive to strengthen collaboration with AI policymakers and industry stakeholders to drive the widespread adoption of human rights-based approaches to AI."

Meanwhile, NAVER continues to strengthen its leadership in AI safety by actively engaging with diverse global communities. This year, NAVER provided technical expertise for the UN's AI safety report and helped develop AI safety benchmarks through the open consortium "MLCommons," whose participants include several major tech companies. In addition, last July, it became the first company in Korea to join the "Coalition for Content Provenance and Authenticity (C2PA)," an initiative that develops standards for AI watermarking technology.

Ha Jung-woo, Head of NAVER Future AI Center, commented, "NAVER has earned recognition as a global leader in AI safety by internalizing cutting-edge AI technologies amid rapid changes in the field and by implementing concrete, realistic safety policies rooted in close collaboration with service planning and development teams." He added, "Looking ahead, we will continue to enhance our AI capabilities while leading initiatives to establish a safe and sustainable global AI ecosystem."