Abstract:
Daniel Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman, Kinny Zareen, January 2024
This paper aims to provide a roadmap for AI governance. Contrary to the dominant paradigm, we argue that AI governance should not be merely a reactive, punitive, status-quo-preserving undertaking; it should instead express an expansive, positive vision for technology in the service of human flourishing. Promoting human flourishing requires democracy and political stability as well as economic empowerment. Our most important point is that answering the question of how to govern this emerging technology means going beyond simply categorizing and managing narrow risks; it requires interpreting risks and opportunities more broadly and responding to them, attending to public goods, human resources, and democracy itself accordingly. To clarify this vision, we proceed in four steps. First, we define some central concepts in the field and clarify the forms of technological harm and risk. Second, we review the normative frameworks currently used to govern emerging technologies around the world. Third, we outline an alternative normative framework grounded in power-sharing liberalism. Fourth, we walk through a set of governance challenges that a policy framework based on our power-sharing liberalism model should address. Finally, we propose implementation vehicles.