The problem with the Stanford report’s sanguine estimate on artificial intelligence

By Mark Hagerott, Best Defense guest columnist

Stanford has undertaken an important effort: envisioning the implications of artificial intelligence over a 100-year span, to “anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play.” But there is a problem, potentially fundamental enough that the team may want to revisit its first report or adjust its approach as it goes forward. This is the report’s relatively weak coverage of the urban, human security implications of AI.

Why the light treatment of security, barely more than seven paragraphs in an almost four-dozen-page report? According to the purpose statement, this first study focuses on the implications of AI in 2030 in the “typical North American city.” I suppose the thin treatment of security may derive from the huge assumption that…
