Future Tense Newsletter: Evolved Consciousness and Its Discontents

By Jacob Brogan

Given time, A.I. may develop a moral consciousness of its own.

Greetings, Future Tensers,

Conversations about artificial intelligence tend to fixate on the dangers such systems might present to human life. But what if we humans were the dangerous ones? That’s a possibility that ethicist Carissa Véliz raises in an article on the difficulty of recognizing A.I. sentience for this month’s Futurography course. “Because sentient beings can feel, they can be hurt, they have an interest in experiencing wellbeing, and therefore we owe them moral consideration,” Véliz writes. If we fail to take such considerations seriously, we risk “committing atrocities such as enslavement and murder” against the virtual minds we’re bringing into being.

Of course, even if we learn to act ethically toward our creations,…
Link to Full Article: Future Tense Newsletter: Evolved Consciousness and Its Discontents