
高大法學論叢

Title: 自主型人工智慧事故的刑法評價
Volume/Issue: 16:2
English Title: The Criminal Legal Evaluation of Autonomous Artificial Intelligence Accidents
Author: 王紀軒
Pages: 215-260
Keywords: autonomous artificial intelligence; conduct performed in accordance with law or order; allowed risk; unmanned vehicle (自主人工智慧、法令行為、容許風險、無人載具)
Publication Date: March 2021

Chinese Abstract (translated)

Autonomous artificial intelligence machines can receive information or data on their own, make decisions through algorithms, and control the machine's behavior without relying on human assistance; this represents technological progress. However, humans cannot predict the results of such computation, and it is difficult to learn, even after the fact, the reasoning behind it, so the latent risks increase accordingly. When the operation of autonomous AI involves criminal wrongdoing, the discussion usually focuses on the conduct of the developers rather than the users, because how the AI judges or acts follows the developers' design, particularly that of the programmers. The current majority opinion appears to adopt the perspective of allowed risk: weighing the benefit of AI to human society as a whole, it holds that the risks of autonomous AI should be tolerated, and that even if an accident occurs, a causal relationship in the evaluative sense may be absent. However, at a time when AI development is still in its infancy, when facing autonomous AI accidents that involve criminal wrongdoing, excluding illegality through "conduct performed in accordance with law or order" may be the more appropriate solution. The concept of allowed risk is highly uncertain, and the vast majority of people, including judicial practitioners, likely have a very limited understanding of AI; frankly, a standard for weighing interests based on accumulated social experience has probably not yet formed. In fact, we should ask legislators to construct legal norms for AI as soon as possible, so that people's conduct in developing or using AI has a legal basis. If development or use complies with AI-related regulations, then, based on the unity of the legal order, the conduct lacks illegality even if the elements of an offense are satisfied.

English Abstract

Autonomous artificial intelligence (autonomous AI) machines are a great advancement in human technology. Autonomous AI can receive messages or data on its own, make decisions based on algorithms, and control the behavior of the machine without any human assistance. However, humans cannot currently grasp the calculation process and results of autonomous AI, and the related risks increase accordingly. When autonomous AI raises criminal law issues, the focus of the discussion should mainly be the behavior of R&D (research and development) personnel, not the user. The reason is that the judgment or action of autonomous AI is based on the design of the R&D personnel, especially the programmer. As for how to solve this problem, the majority opinion is to apply the legal concept of "allowed risk": we must weigh the development of autonomous AI against the overall interests of human society. In other words, if autonomous AI that promotes the progress of human society has an accident, this risk can be considered tolerated, so the relevant behavior will not constitute a crime.
