OpenAI is funding Duke University researchers to develop algorithms that predict human moral judgments. This $1 million, three-year grant aims to create "moral AI" capable of navigating ethical dilemmas in fields like medicine, law, and business.
The project's principal investigator, Walter Sinnott-Armstrong, and co-investigator, Jana Borg, have previously explored AI's potential as a "moral GPS." Their research includes a "morally-aligned" algorithm for kidney donation allocation.
Building such an algorithm, however, faces significant challenges. AI models learn statistical patterns from data rather than any genuine understanding of ethical concepts, which can produce biases and inconsistencies. The Allen Institute for AI's Ask Delphi tool illustrated this, offering contradictory ethical recommendations depending on how questions were phrased.
The subjective nature of morality further complicates the task. Philosophers have debated competing frameworks such as utilitarianism and Kantian deontology for centuries without reaching consensus, making it difficult to build an algorithm that reliably reflects human moral judgments.