
AI ethics

(Also ethical AI)

AI ethics definition

AI ethics, also known as ethical AI, is a branch of applied ethics that examines the moral principles, values, and guidelines for AI systems’ design, development, deployment, and use. It’s closely related to robot ethics.

The five core principles of AI ethics, as defined by Luciano Floridi and Josh Cowls, are beneficence, non-maleficence, autonomy, justice, and explicability.

See also: AI TRiSM, artificial intelligence, cognitive technology, machine learning, responsible AI, supervised machine learning, unsupervised machine learning

What are the main concerns of AI ethics?

Researchers working in AI ethics focus on ensuring the just, transparent, responsible, and sustainable use of AI:

  • AI systems mustn’t discriminate against individuals or groups.
  • AI systems must operate reliably and must not be misused.
  • All decision-making processes must be transparent and comprehensible.
  • The data used to train AI systems must be sufficiently protected.
  • If the AI causes any harm, the responsible party must be identified and held accountable.
  • Human agency in decision-making must be preserved.
  • The environmental impact and footprint of AI technologies must be taken into consideration.

AI ethics in practice

AI ethics is used to standardize principles of fair AI use. Developers apply its principles when conducting algorithmic audits, which helps ensure diversity in development teams and catch bias or discrimination in an algorithm before deployment. As AI development continues, AI ethics is essential for complying with emerging regulations and legal frameworks.
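
To make the audit step concrete, the sketch below computes a demographic parity gap, the difference in positive-prediction rates between groups, for a hypothetical model's outputs. The function name, sample data, and 0.10 threshold are illustrative assumptions rather than part of any specific AI ethics framework; real audits choose metrics and thresholds suited to the application.

```python
# A minimal sketch of one step in an algorithmic audit: checking a model's
# predictions for demographic parity across a protected attribute.
# The data and the 0.10 threshold below are illustrative assumptions.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates between groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        hits, total = counts.get(group, (0, 0))
        counts[group] = (hits + (1 if pred == 1 else 0), total + 1)
    positive_rates = {g: hits / total for g, (hits, total) in counts.items()}
    return max(positive_rates.values()) - min(positive_rates.values())

# Hypothetical audit data: binary model outputs and the group each case belongs to.
preds  = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.80 - 0.20 = 0.60 here

# An audit might flag the model for human review if the gap exceeds a chosen
# threshold; what counts as acceptable depends on the application and context.
if gap > 0.10:
    print("Flag for review: positive-prediction rates differ substantially between groups.")
```

A check like this addresses only one fairness criterion; a full audit would combine several metrics with documentation of training data, intended use, and accountability for decisions the system informs.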