Article Details

公共行政學報(政大) (Journal of Public Administration, NCCU) — indexed in CSSCI and TSSCI

Title: 人工智慧在公共政策領域應用的非意圖歧視:系統性文獻綜述
Issue: 63
Parallel Title (English): Unintentional Discrimination in Application of Artificial Intelligence to Public Policies: A Systematic Article Review
Authors: 李翠萍、張竹宜、李晨綾
Pages: 001-049
Keywords: 人工智慧、科技倫理、非意圖歧視、科技正義、社會公平; Artificial Intelligence, technology ethics, unintentional proxy discrimination, technology justice, social equity
Publication Date: September 2022
DOI: 10.30409/JPA.202209_(63).0001

Chinese Abstract

Drawing on Miller's plural view of justice and the principle of equality within the civic relationship, this study examines the ethical problems arising from the application of artificial intelligence (AI) in public policy. Using a qualitative meta-analysis, with academic research papers screened according to the PRISMA model, the study traces the institutional processes and outcomes of AI applications in the policy domains of advanced countries. The findings show that AI has been applied in eight major fields: criminal justice, policing, health care, homeland security and border management, education, public finance, public employment, and national defense. While AI has improved administrative efficiency in government agencies and enhanced overall public welfare, it has also produced unintentional discrimination against specific groups. In terms of institutional process, government agencies have overlooked the long-standing social injustices hidden in the big data used for machine learning; in terms of institutional outcomes, historical injustices continue to be reproduced through AI, subjecting specific groups to differential treatment and depriving them of basic human rights. To analyze the patterns and underlying nature of unintentional discrimination in each field, the study draws on the priority order of human rights protection implied by international human rights conventions and examines AI's negative effects on specific groups along two dimensions: whether those discriminated against actively submit to evaluation, and whether negative or positive rights are deprived. The analysis shows that AI applications in policing, criminal justice, and health care involve the deprivation of negative rights such as the right to life and the right to liberty, and therefore urgently require priority attention. The conclusion discusses why the correction of unintentional discrimination cannot rely on the self-awareness of civil society but requires active government intervention, and recommends concrete actions that the government should take in both the preparatory and implementation stages of AI applications to reduce the human rights harms that unintentional discrimination inflicts on specific groups.

English Abstract

This study examined the ethical problems with the application of AI to public policy spheres, based on the principle of equality in citizenship from Miller's plural view of justice. A qualitative meta-analysis following the PRISMA model was employed to inspect the institutional processes and outcomes of AI applications. This research found that AI has been applied to various public policy fields, including criminal justice, policing, health care, homeland security and border management, education, public finance, public employment, and national defense. In these fields, AI has made administrative work more efficient and has improved most people's well-being while creating unintentional discrimination against specific groups of people. An examination of the institutional process showed that governments have ignored the long-standing social injustice hidden in the big data used for machine learning. Consequently, the institutional outcomes showed that historical injustice continues to be reproduced through AI, leading to differential treatment of specific groups and the deprivation of their basic human rights. To analyze the pattern and nature of unintentional discrimination across public policy areas, this study, based on the order of priority of human rights protection implied by international human rights conventions, analyzed the negative effects of AI on specific groups along two dimensions: whether the victims initiate the evaluation, and whether negative or positive rights are deprived. The results showed that the application of AI in the areas of policing, criminal justice, and health care involves the deprivation of negative rights such as the right to life and the right to freedom, which urgently needs to be addressed. This paper concludes by discussing why the correction of unintentional discrimination cannot be achieved by civil society alone but requires the active intervention of the government, and by suggesting specific actions that the government should take in the preparatory and implementation stages of AI applications in order to reduce unintentional discrimination against specific groups.
