The Ethics and Values of Artificial Intelligence

Asilomar AI Principles

The Asilomar AI Principles grew out of the Beneficial AI conference held in early January 2017, and are named after the conference venue, Asilomar, California. Their aim is to ensure that AI serves the interests of humanity. Attendees included some of the most renowned leaders in the field, such as DeepMind CEO Demis Hassabis and Facebook's head of AI research Yann LeCun. More than 2,000 people worldwide, including 844 experts in artificial intelligence and robotics, have since signed the principles, calling on the global AI community to adhere to them while developing AI, so as to safeguard humanity's future interests and safety.
The principles currently number 23 and fall into three categories: Research Issues, Ethics and Values, and Longer-term Issues. The full list follows.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?

  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

  • What set of values should AI be aligned with, and what legal and ethical status should it have?

人工智能應(yīng)該歸屬于什么樣的價(jià)值體系?它該具有何種法律和倫理地位?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and values倫理和價(jià)值

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

責(zé)任:高級人工智能系統(tǒng)的設(shè)計(jì)者和建造者,是人工智能使用、誤用和行為所產(chǎn)生的道德影響的參與者,有責(zé)任和機(jī)會(huì)去塑造那些道德影響。

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

價(jià)值歸屬:高度自主的人工智能系統(tǒng)的設(shè)計(jì),應(yīng)該確保它們的目標(biāo)和行為在整個(gè)運(yùn)行中與人類的價(jià)值觀相一致。

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

人類價(jià)值觀:人工智能系統(tǒng)應(yīng)該被設(shè)計(jì)和操作,以使其和人類尊嚴(yán)、權(quán)力、自由和文化多樣性的理想相一致。

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

個(gè)人隱私:在給予人工智能系統(tǒng)以分析和使用數(shù)據(jù)的能力時(shí),人們應(yīng)該擁有權(quán)力去訪問、管理和控制他們產(chǎn)生的數(shù)據(jù)。

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

風(fēng)險(xiǎn):人工智能系統(tǒng)造成的風(fēng)險(xiǎn),特別是災(zāi)難性的或有關(guān)人類存亡的風(fēng)險(xiǎn),必須有針對性地計(jì)劃和努力減輕可預(yù)見的沖擊。

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 
