OpenAI is funding research to create algorithms capable of predicting human moral judgments.
The initiative, a collaboration with Duke University, involves a three-year, $1 million grant to professors studying the potential of AI in ethical decision-making.
The research, led by Duke ethics professor Walter Sinnott-Armstrong, seeks to develop algorithms that can navigate complex moral scenarios, such as those found in medicine, law, and business.
The project focuses on aligning AI with human moral values, though details remain scarce; the study is set to conclude in 2025.
This venture is not the first attempt to build moral reasoning into AI. Earlier projects, such as the Allen Institute for AI's Ask Delphi, showed that AI can handle simple moral dilemmas but struggles with the complexity of real-world ethical situations.
Such systems rely on statistical patterns rather than a genuine understanding of ethical principles, and they can perpetuate biases present in their training data.