
Korean Regulators to Publish Guideline on AI-Based Personalized Recommendation Services of Online Media Platforms

2022.05.06

On April 28, 2022, the Korea Communications Commission (“KCC”) and the Korea Information Society Development Institute published the “Guideline to Basic Principles for User Protection in AI-based Media Recommendation Services” (the “Guideline”). While the Guideline is not legally binding on platform operators, it suggests the future direction of the regulatory landscape in the area of automated, personalized recommendations for content viewers.

The purpose of the Guideline is to provide further details and guidance on the “Basic Principles for User Protection in AI-based Media Recommendation Services” (the “Basic Principles”), which the KCC announced in June 2021. In essence, the Guideline advises platforms to ensure the transparency of their artificial intelligence (AI)-based personalized recommendation systems, increase content viewers’ choice over the content to which they are exposed, and establish internal protocols and self-assessment systems within the platform organization to minimize the negative impact of AI-based systems on users and to address user complaints.

Please see below for more details. 

The “Basic Principles” - Background

Personalized recommendations of media content are designed to present the content that a user will most enjoy consuming. To capture users’ attention as much as possible within the limited space of their device screens, platform operators provide customized recommendations based on their users’ behavioral characteristics, such as areas of interest and usage history. Fast-evolving AI technologies are making personalized recommendation services ever more intelligent.

Advanced AI-based recommendations, however, have been criticized for isolating content viewers from a diversity of content. In particular, concerns have been rising that users have no opportunity to escape these so-called information bubbles because many of them are not aware of how the personalized recommendation system works, or even whether such a system exists.

The Basic Principles were established as a set of ethical standards for the transparency and diversity of personalized recommendation systems built upon automated AI algorithms.

The Three Key Basic Principles

The Basic Principles can be summarized into three major values that the platforms are advised to pursue: (i) transparency, (ii) fairness and (iii) accountability.

  • Transparency: Platforms should disclose and explain to their users the personalized recommendation services to which they are exposed, how those services work, and the impact of such services.

  • Fairness: Platforms should ensure that the content displayed to their users (content viewers) based on personalized recommendations is not biased and that users have a certain level of control and choice over which content is recommended to them.

  • Accountability: Platforms should be responsible for addressing any negative consequences caused by their personalized recommendation systems, such as technical malfunctions and violations of law.

The Guideline also suggests five “principles for action” that go hand in hand with the three key principles and are geared toward translating those rather abstract principles into practice.

Five Principles for Action
 

  • Information disclosure for users: Platforms should notify users that they are being exposed to AI-based recommendation services and disclose the major parameters of the underlying algorithm to users through, for example, the platform’s main webpage, pop-up windows, or terms of service.

  • Ensure consumer choice: Platforms should allow their users a certain level of control over the selection and modification of the parameters on which the recommendation system operates, to ensure that users can be exposed to a wide array of unbiased content.

  • Implement self-assessment: Platforms should have a risk assessment system in place so that self-assessment of the potential negative impact of their AI-based recommendation system (e.g., isolating content viewers from conflicting views) can be performed on an ongoing basis.

  • Address user complaints: Platforms should operate a channel to receive and address user complaints about the personalized recommendation system. 

  • Establish internal protocols: Platforms should establish internal protocols for observing the Basic Principles, reflecting necessary technical and administrative measures as well as procedures for resolving disputes with platform users.


Key Implications of the Guideline
 

  • Information Disclosure on AI-Based Personalized Recommendations: Ensuring the transparency of AI-based recommendation services is key to protecting the content viewers exposed to them. Notably, the Guideline makes clear that the call for transparency does not extend to algorithms that may constitute trade secrets. According to the Guideline, however, information on users’ behavioral characteristics that is fed into the algorithm needs to be disclosed in a form that is easily comprehensible to users, and such disclosure should be more rigorous when it concerns the user’s personal information.

  • Ensure Consumer Choice for Content Viewers, Not Content Suppliers: As explained above, the Guideline advises platforms to allow their users control over the selection and modification of the parameters on which the recommendation system operates. The Guideline, however, places some qualifications on the scope of this effort. According to the Guideline, business users of the platform (i.e., content suppliers) are not the subjects this consumer-choice mandate is intended to protect; as far as choice over displayed content is concerned, the Basic Principles seek to protect the end consumers of the content. The Guideline also clarifies that protection of content viewers’ choice over content only needs to be implemented to the extent commercially feasible.

  • Implement Self-Assessment and Dispute Resolution Procedures; Establish Internal Protocols: The Guideline recommends that platforms predict the potential risks of their AI-based recommendation systems that may negatively affect users before developing and implementing such systems. Platforms are also advised to proactively control those risks throughout service operation, accompanied by various forms of record keeping and documentation for risk management purposes. The Guideline further stresses the need to provide users with a sufficient explanation of the recommendation system when addressing user complaints. Lastly, the Guideline advises platforms to establish internal protocols covering self-assessment of the AI-based recommendation system, dispute resolution procedures, and other detailed matters to promote the three key principles ((i) transparency, (ii) fairness and (iii) accountability).


Next Steps

Although the Guideline is not legally binding on platforms, it will likely serve as a preview of subsequent legislation and government policy on AI-based online services. By the end of 2022, the KCC plans to establish a more concrete set of action items to further guide platforms.

As of today, a number of bills on AI technology are pending review at the National Assembly. As there are many moving pieces and considerable ambiguity as to the scope and degree of potential regulation, continued monitoring is necessary to assess the implications for platforms.


Related Topics

#AI #Platform #TMT #Legal Update
