
The future of AI politics, policy, and business

Published online by Cambridge University Press:  12 March 2024

Eric Best
Affiliation:
The University at Albany, Albany, NY, USA
Pedro Robles
Affiliation:
Penn State Lehigh Valley, Center Valley, PA, USA
Daniel J. Mallinson*
Affiliation:
Penn State Harrisburg, Middletown, PA, USA
*
Corresponding author: Daniel J. Mallinson; Email: djm466@psu.edu

Abstract

Our aim with this special issue on the future of artificial intelligence (AI) politics, policy, and business is to give space to considering how the balance between risk and reward from AI technologies is, and perhaps should be, pursued by the public and private sectors. Ultimately, private firms and regulators will need to work collaboratively, given the complex networks of actors involved in AI development and deployment and the potential for the technology to alter existing policy regimes. We begin the introduction to this special issue of Business & Politics with a discussion of the growth in AI technology use and debates over appropriate governance, followed by a consideration of how AI-related politics, policy, and business intersect. We then summarize the contributions of the authors in this issue and conclude with thoughts about how political science, public administration, and public policy scholars have much to offer, as well as much to study, in the establishment of effective AI governance.

Type
Introduction
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Vinod K. Aggarwal

While artificial intelligence (AI) has its roots in 1950s decision science, it has burst into public consciousness in the past two years.Footnote 1 With the public release of ChatGPT, the development and deployment of autonomous vehicles, myriad stories of police misidentifying suspects using facial recognition, and much more, citizens are more aware of the potential consequences of unregulated AI. At the same time, businesses and governments alike are aware of the vast potential for AI to reshape national economies and global commerce.Footnote 2 As with any emergent and disruptive technology, governments must consider the policy balance between fostering innovation and preventing negative externalities. In fact, divergent framings of new technology by innovators and regulators shape their willingness to accept risk and can stifle the commercialization of new technology.Footnote 3

Our aim with this special issue on the future of AI politics, policy, and business is to give space to considering how these balances are, and perhaps should be, pursued by the public and private sectors. Ultimately, private firms and regulators will need to work collaboratively, given the complex networks of actors involved in AI development and deployment and the potential for the technology to alter existing policy regimes.Footnote 4 We begin the introduction to this special issue of Business & Politics with a discussion of the growth in AI technology use and debates over appropriate governance, followed by a consideration of how AI-related politics, policy, and business intersect. We then summarize the contributions of the authors in this issue and conclude with thoughts about how political science, public administration, and public policy scholars have much to offer, as well as much to study, in the establishment of effective AI governance.

AI governance

Public organizations carry a distinct responsibility to provide constituents with public value. The revolutionary role of AI in the public sector, particularly through machine learning (ML) and especially deep learning, underscores the potential for automating processes within public organizations and augmenting human intelligence.Footnote 5 Combining AI and human resources in the delivery of public goods and services promises to enhance efficiency and effectiveness, delivering optimal public value. The European Commission has published an official definition of AI as “systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals.”Footnote 6 Consequently, it is unsurprising to observe public organizations and the private sector collaborating to adopt AI to augment public services. At the same time, scholars have consistently noted shortcomings in public organizations, especially when attempting to balance efficiency and economic objectives, resulting in failures to deliver public value and meet expectations.Footnote 7

Scholars have also observed the consequences of deploying AI for public value delivery without understanding the opportunities and risks associated with such systems in the public sector.Footnote 8 Despite these potential implications, and to address economic and social challenges, public managers exercising discretionary decision-making have introduced advanced ML algorithms (e.g., deep learning) to process large amounts of data and improve predictions and decision-making processes.Footnote 9 However, discretionary decision-making carries broader considerations, particularly involving bureaucratic and citizen engagement, that should be weighed before introducing advanced technologies that may lead to unexpected consequences.Footnote 10

AI systems can operate by being controlled through human input, requiring guidance to engage in a creative and interactive process between humans and machines.Footnote 11 On the other hand, AI software systems can also operate autonomously, learning independently and making decisions without direct human intervention.Footnote 12 Scholars researching AI implications in the public sector take notice of areas where governments have introduced AI with strategic objectives.Footnote 13 With the proliferation of large data banks and powerful computing systems, AI can access real-world data, analyze, reason, learn, and perform processes involving natural language, vision, robotics, neural networks, and even genetic algorithms.Footnote 14 It is not surprising to witness AI’s current use in general citizen services, financial or economic administrative tasks, environmental organizations leveraging ML to improve and protect the environment, transportation, energy, farming, and various other sectors.Footnote 15

Public organizations often face challenges due to a lack of human resources, which hinders their ability to provide high-quality services to citizens. The literature highlights the potential of AI to alleviate this resource constraint in public agencies by handling tasks such as responding to inquiries, navigating government services, searching documents, directing requests, translating, and composing documents.Footnote 16 For example, AI already assists citizens with renewing a driver’s license, navigating complex health and human services processes, and even interacting with elected officials.Footnote 17 The use of AI to assist citizens has relieved some pressure on government agencies, but it has also prompted concerns about digital privacy and security, particularly regarding the responsibility for protecting citizen data.Footnote 18 Another challenge is determining the ownership of the data citizens input into these AI systems. For AI to be efficient and effective, it often requires resources from various government agencies, making it crucial for public organizations to be accountable for safeguarding citizens’ data.Footnote 19 This situation can cause confusion in public agencies, especially when multiple agencies are simultaneously held accountable for ensuring the protection of citizens’ data.Footnote 20

Undoubtedly, AI stands as a promising technology poised to benefit both U.S. public organizations and the American people. However, it is imperative to harness this technology within a governance framework built around policies safeguarding U.S. citizens from potential data privacy issues and the intentional or unintentional misuse of the technology.Footnote 21 Recognizing the urgency of an AI policy, the White House Office of Science and Technology Policy took a significant step in October 2022 by issuing the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.”Footnote 22 This blueprint established a framework to guide the ongoing development and usage of AI systems across both public and private organizations.

Blueprint Bill of Rights Principles:

  • Safe and Effective Systems

  • Protection from Algorithmic Discrimination and Inequitable Systems

  • Protection from Abusive Data Practices and Agency over Personal Data

  • Knowledge and Understanding of Automated System Usage and Impacts

  • Ability to Opt Out for a Human AlternativeFootnote 23

The AI Bill of Rights considers the required guidelines for safe and effective systems, protection from algorithmic discrimination, safeguards against abusive data practices, awareness of automated system usage and its impacts, and the ability to opt out for a human alternative.

The introduction of the new U.S. policy on AI has much broader implications, including ethics and responsible AI for present and future technology, requiring collaborative governance and new public policy.Footnote 24 It also emphasizes the importance of establishing leadership and coordination between public and private organizations for emergent technologies such as AI algorithms.Footnote 25 To this end, the U.S. Government Accountability Office’s Science, Technology Assessment, and Analytics (STAA) team has a long history of leadership through external advisory boards that draw on the experience of a diverse group, including scientific and engineering experts.Footnote 26 This contemporary approach to governing AI aligns with a Bill of Rights that accounts for society’s current and future needs, especially those affected by AI technology. But much work remains in determining how governments will balance the risks and benefits of AI technology across so many different sectors of the economy.

The discourse on AI governance at the international level is increasingly focused on establishing comprehensive norms and standards that address the rapid advancements and widespread applications of AI technologies. Formulating international security governance norms for AI is a highly complex endeavor. A major challenge is achieving consensus among global powers, particularly between China and the United States.Footnote 27 Further, there is an imperative to ensure responsible innovation through a set of shared ethical principles.Footnote 28 The collaborative efforts between the EU and the US in setting international regulatory standards reflect a strategic move towards a governance model that not only upholds shared democratic values but also proactively addresses the multifaceted risks associated with AI technologies.Footnote 29

Moreover, the aspect of political legitimacy in global AI governance is critical, with scholars calling for democratic processes to ensure the legitimacy of AI governance mechanisms.Footnote 30 The inclusion of Global South stakeholders in the AI governance dialogue brings to light the importance of an inclusive and equitable approach, highlighting the need for systemic restructuring to bridge existing governance gaps.Footnote 31 This inclusive approach is pivotal for crafting a global AI governance framework that is both responsive and responsible, addressing key concerns such as infrastructural and regulatory monopolies and ensuring that the benefits of AI technologies are equitably distributed.

AI, politics, policy, and business

As noted above, seemingly overnight, non-technology business leaders, politicians, and regulators have all experienced the explosion of AI.Footnote 32 These decision-makers are juggling questions about productivity gains, employment reductions, automation, and accountability. The “no code” aspect of ChatGPT also made it possible for a wide swath of the population to suddenly interact with artificial intelligence directly, further amplifying the AI discourse.

AI’s disruption of the business world is immediately obvious. Businesses now need to consider how and when jobs will change because of AI. Almost immediately after the launch of ChatGPT, stories appeared about work disappearing for people doing tasks like basic technical writing.Footnote 33 We do not know exactly what these medium-term employment changes and complements will look like, but it is already clear that employment risk differs across countries, worker demographics, geographic areas, and job types.Footnote 34 Management consultants predict an era of increased occupational transition as automation comes for more job types.Footnote 35 For young professionals, there is a specific risk that AI will make it more difficult to move beyond “entry-level” jobs, because a constant stream of newcomers will be able to gain basic skills more quickly with assistance from AI.Footnote 36 It is worth noting that AI is changing ideas about “safe” careers, as “knowledge workers” join the ranks of those at risk from automation, and that change may garner outsized attention from politicians and policymakers.

There are already isolated cases of AI use and misuse in more consequential professional tasks like legal arguments.Footnote 37 These occurrences cross over into the world of politicians and policymakers, where they must consider whether AI should be allowed to submit legal arguments, write news articles, or drive vehicles. These disruptive changes happen suddenly and intermittently, and there is the constant risk of overregulation of emerging technologies preventing productive use as technologies mature. The implications for business and politics feed into the need for good policy, and this is clearly a situation where technological innovation exceeds regulatory innovation. This technical reality, combined with the possibility of different rules in different localities (or the preemption of different rules from above), contains all the ingredients of a regulatory nightmare.

Media coverage often focuses on AI mishaps: hallucinations, physically impossible images, self-driving accidents, deepfakes, and other ineffective or dangerous applications. As the pieces in this special issue highlight, focusing on these AI missteps may conceal the many ways that AI is already working seamlessly and causing meaningful change, and may lead to ineffective regulation that does little to control the less newsworthy changes that AI brings. By no means should we ignore “AI gone wrong,” but regulators should also attend to AI uses that quickly normalize without ever entering the AI discourse. Oftentimes, these technological changes are already ubiquitous before regulators and the public are even aware of them, and limitations on use will lead to the deprecation of tools people are already accustomed to using, even if they were unaware of AI’s role and the associated data collection or privacy violations. Such cases are likely to occur more often as AI development matures.

As businesses navigate the seismic shifts brought about by AI, their role extends beyond internal adaptations. They are increasingly becoming key players in the shaping of AI policies and regulations.Footnote 38 This involvement is crucial, as the rapid evolution of AI technologies demands a collaborative approach to governance that includes insights from industry leaders alongside policymakers and regulators.Footnote 39 Business leaders, recognizing the profound implications of AI on their operations and the broader industry landscape, are actively engaging in dialogues around ethical AI use, data privacy, and the equitable deployment of AI technologies.Footnote 40

This engagement is not merely a matter of compliance; it is a strategic imperative.Footnote 41 Companies at the forefront of AI adoption are leveraging their expertise to influence policy frameworks that foster innovation while safeguarding against potential harms. By contributing to the policy discourse, businesses help ensure that regulations are informed by practical insights and are adaptable to the pace of technological advancement.Footnote 42

Moreover, AI is not just altering existing industries; it is creating entirely new categories of services and products, thereby reshaping market dynamics and competitive landscapes.Footnote 43 From healthcare to finance, AI’s integration is enabling more personalized services, enhanced decision-making capabilities, and operational efficiencies.Footnote 44 However, as industries transform, so too do the regulatory challenges they face. Businesses are therefore not just participants in policy discussions; they are co-creators of the regulatory environment that will define the future of AI in industry.Footnote 45

In this context, the collaboration between businesses, policymakers, and regulatory bodies becomes a critical factor in ensuring that AI development is both innovative and responsible.Footnote 46 This collective effort is essential to balance the economic and social benefits of AI with the need to address ethical considerations and potential risks, thereby paving the way for a future where AI contributes positively to both industry growth and societal well-being.Footnote 47

Themes of the special issue

While there are myriad directions that one can consider when reflecting on AI politics, policy, and business, the articles in this special issue coalesced around two themes. The first theme relates to the challenges of regulating emergent AI technology. The second is also about regulation, but from the perspective of subnational governments. More specifically, these papers consider the prominent role of the American states in AI governance innovation in the United States. Given the place of the U.S. in global politics and commerce, this means that the states have a role in the future of global AI policy.

Regulation

Han considers the decisions by national governments to implement data localization.Footnote 48 Data are increasingly considered a strategic asset by governments, with significant implications for emergent AI technologies that require large amounts of data for algorithm training. The impacts of AI blur the boundaries between the public and private sectors. Countries have adopted data localization rules requiring data to be stored domestically, both to foster technological innovation and to protect sensitive data. However, data localization is recognized as a burden on businesses and a drag on economic productivity. Han compares three cases—Vietnam, Singapore, and Indonesia—to consider why states localize. The argument has two parts: harnessing the economic benefits of networks and security externalities. First, if states have a negative perception of the network of platforms, their malleability, and their economic benefits, data localization becomes more likely. Second, when domestic and/or foreign platforms are perceived to threaten domestic security, states are also more likely to implement data localization. In practice, the outcome emerges from a complex interplay of economic and national security concerns. The cases further illustrate the strategic nature of data localization decisions and how countries influence each other’s policy responses to AI.

Kennedy, Ozer, and Waggoner address the extent to which algorithm-assisted decision-making by governments may erode public trust and accountability.Footnote 49 Particularly within the criminal justice system, both ethicists and the public have expressed concerns about racial bias and accuracy in algorithm-driven decision-making.Footnote 50 The authors address these questions with three pre-registered survey experiments on representative samples of the U.S. population. They find that respondents neither displace blame when a judge makes a mistake by concurring with an algorithmic decision nor magnify blame when a judge makes a mistake after ignoring algorithmic input. That said, there are conditional effects based on respondents’ level of trust in experts: those with greater trust in experts are more likely to blame them for mistakes made after ignoring an algorithmic decision.

Tallberg, Lundgren, and Geith consider the views of non-state actors on the European Union’s (EU) groundbreaking AI Act.Footnote 51 Because the EU’s actions on AI are considered standard-setting and a front-runner for national AI policies globally,Footnote 52 their findings offer insights into future political conflicts. Dividing actors based on whether they are motivated by profit, the authors find that while profit-driven actors (i.e., businesses) are, as expected, critical of AI regulations that might inhibit innovation, that relationship is conditional on the strength of a nation’s commercial AI sector. Importantly, all actors recognize the need for some regulation of emergent technology. Governments in countries with growing AI commercial sectors may thus find themselves in a difficult position: facing the greatest need for AI regulation while also facing the strongest resistance from for-profit non-state actors to adopting it.

Subnational innovation

Parinandi, Crosson, Peterson, and NadarevicFootnote 53 and Mallinson, Azevedo, Best, Robles, and WangFootnote 54 consider the substantial role that the American states will play in setting AI policy in the United States. To date, the U.S. national government has taken a largely hands-off approach to AI regulation. Thus, some states are taking an active role in incentivizing and/or regulating the industry. Parinandi et al. focus on the politics of policy adoption. They argue that parties operating in the hyperpolarized U.S. political environment are likely to latch onto aspects of AI policy that match their brands. Using explanatory modeling of roll call votes and bill adoptions, they show that both the economy and politics have shaped AI regulation in the states. Namely, Democratic legislators are more supportive of AI legislation that includes consumer protection, but AI legislation is less likely during times of high unemployment and inflation.

By focusing on state autonomous vehicle (AV) policy, Mallinson et al. argue that state-level policy experiments will be the future of AI policy in the United States. In making this argument they consider the substantial regulatory fragmentation that exists in the United States due to federalism and the separation of powers. This results in a dynamic environment that affects the market and nonmarket strategies of firms. Furthermore, such regulatory fragmentation, which results in inconsistent policies across states, raises significant equity concerns. The case of AV policy supports each of these arguments, while also raising concerns about the administrative burdens of layering new AI policies on top of existing laws. They conclude by proposing a research agenda centered on state AI policy.

Conclusion

As AI technology rapidly shifts how many industries operate globally, governments are struggling to find a way forward in developing flexible, yet protective, policies. The exact balance of acceptable benefits versus risks will ultimately differ across political geographies, but efforts are also being made to establish more general governance principles.Footnote 55 Scholars in political science, public administration, and public policy have much to offer in theorizing and understanding the myriad implications of AI and in making recommendations on governance. However, truly convergent science that bridges these studies with those in ethics, management, human resources, business administration, and more will also be required.Footnote 56 This special issue is one of the many venues that will be required to give space to working out ideas on AI politics, policy, and business.

Footnotes

1 Turing (1950).

2 Girasa (2020).

4 Mallinson and Shafi (2022); Wison (2000).

5 Jordan (2019).

6 EC High-level Expert Group on Artificial Intelligence (2018).

7 Schiff, Schiff, and Pierson (2021).

10 Schiff, Schiff, and Pierson (2021).

11 Čerka, Grigienė, and Sirbikytė (2017).

14 Tecuci (2012).

16 Mehr (2017).

17 Mehr (2017).

19 Neumann, Guirguis, and Steiner (2024).

21 Hine and Floridi (2023).

22 Hine and Floridi (2023).

23 Hine and Floridi (2023, 286).

24 Bailey (2022).

26 Bailey (2022).

27 Zhu, Feng, and Chen (2022).

28 Buhmann and Fieseler (2023).

29 Roy and Sreedhar (2022).

30 Erman and Furendal (2022).

32 Randewich (2023); Powell and Dent (2023).

33 Verma and De Vynck (2023).

34 Georgieva (2024); Kochhar (2023).

36 Ivanchev (2023).

37 Neumeister (2023).

39 Robles and Mallinson (2023a); Mikhaylov, Esteve, and Campion (2018).

40 Mikhaylov, Esteve, and Campion (2018).

41 Torfing (2019).

43 Fujii and Managi (2018).

44 Y. Lu and Gao (2022).

45 Stiglitz and Wallsten (1999).

47 Robles and Mallinson (2023a).

48 Han (2024).

49 Kennedy, Ozer, and Waggoner (2024).

50 Robles and Mallinson (2023a); Brayne (2020).

51 Tallberg, Lundgren, and Geith (2024).

52 af Malmborg (2023); Meltzer and Tielemans (2022).

55 Robles and Mallinson (2023b); Cihon, Maas, and Kemp (2020); Erdélyi and Goldsmith (2022); Wirtz, Weyerer, and Sturm (2020).

56 Petersen, Ahmed, and Pavlidis (2021); Angeler, Allen, and Carnaval (2020).

References

af Malmborg, Frans. 2023. “Narrative dynamics in European Commission AI Policy—Sensemaking, Agency Construction, and Anchoring.” Review of Policy Research 40 (5): 757780.CrossRefGoogle Scholar
Angeler, David G., Allen, Craig R., and Carnaval, Ana. 2020. “Convergence Science in the Anthropocene: Navigating the Known and Unknown.” People and Nature 2 (1): 96102.CrossRefGoogle Scholar
Bailey, Diane E. 2022. “Emerging Technologies at Work: Policy Ideas to Address Negative Consequences for Work, Workers, and Society.” ILR Review 75 (3): 527551.CrossRefGoogle Scholar
Brayne, Sarah. 2020. Predict and Surveil: Data, Discretion, and the Future of Policing. New York: Oxford University Press.CrossRefGoogle Scholar
Buhmann, Alexander, and Fieseler, Christian. 2023. “Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.” Business Ethics Quarterly 33 (1): 146179. https://doi.org/10.1017/beq.2021.42.CrossRefGoogle Scholar
Bullock, Justin B. 2019. “Artificial Intelligence, Discretion, and Bureaucracy.” The American Review of Public Administration 49 (7): 751761.CrossRefGoogle Scholar
Campion, Averill, Gasco-Hernandez, Mila, Mikhaylov, Slava Jankin, and Esteve, Marc. 2022. “Overcoming the Challenges of Collaboratively Adopting Artificial Intelligence in the Public Sector.” Social Science Computer Review 40 (2): 462477.CrossRefGoogle Scholar
Čerka, Paulius, Grigienė, Jurgita, and Sirbikytė, Gintarė. 2017. “Is It Possible to Grant Legal Personality to Artificial Intelligence Software Systems?Computer Law & Security Review 33 (5): 685699.CrossRefGoogle Scholar
Cihon, Peter, Maas, Matthijs M., and Kemp, Luke. 2020. “Fragmentation and the Future: Investigating Architectures for International AI Governance.” Global Policy 11 (5): 545556.CrossRefGoogle Scholar
EC High-level Expert Group on Artificial Intelligence. 2018. A Definition of AI: Main Capabilities and Scientific Disciplines. Brussels: European Commission. https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf.Google Scholar
Ellingrud, Kewilin, Sanghvi, Saurabh, Dandona, DGurneet Singh, Madgavkar, Anu, Chui, Michael, White, Olivia, and Hasebe, Paige. 2023. Generative AI and the Future of Work in America: McKinsey Global Institute. https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america.Google Scholar
Erdélyi, Olivia J., and Goldsmith, Judy. 2022. “Regulating Artificial Intelligence: Proposal for a Global Solution.” Government Information Quarterly. https://doi.org/10.1016/j.giq.2022.101748.CrossRefGoogle Scholar
Erman, Eva, and Furendal, Markus. 2022. “Artificial Intelligence and the Political Legitimacy of Global Governance.” Political Studies. https://doi.org/10.1177/00323217221126665.CrossRefGoogle Scholar
Fatima, Samar, Desouza, Kevin C., Buck, Christoph, and Fielt, Erwin. 2022. “Public AI Canvas for AI-Enabled Public Value: A Design Science Approach.” Government Information Quarterly 39 (4): 101722.CrossRefGoogle Scholar
Fujii, Hidemichi, and Managi, Shunsuke. 2018. “Trends and Priority Shifts in Artificial Intelligence Technology Invention: A Global Patent Analysis.” Economic Analysis and Policy 58: 6069. https://doi.org///doi.org/10.1016/j.eap.2017.12.006.CrossRefGoogle Scholar
Georgieva, Kristalina. 2024. “AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity.” IMF Blog (blog), International Monetary Fund. January 14. https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity.Google Scholar
Girasa, Rosario. 2020. Artificial Intelligence as a Disruptive Technology. Cham, Switzerland: Palgrave Macmillan.CrossRefGoogle Scholar
Han, S. 2024. “Data and Statecraft: Why and How States Localize Data.” Business and Politics 26 (2).Google Scholar
Hine, Emmie, and Floridi, Luciano. 2023. “The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction.” Minds and Machines 33 (2): 285292.CrossRefGoogle Scholar
Ivanchev, Yavor. 2023. “Artificial Intelligence and Firm-Level Labor and Organizational Dynamics.” U.S. Bureau of Labor Statistics. Last Modified November. Accessed February 1, 2024. https://www.bls.gov/opub/mlr/2023/beyond-bls/artificial-intelligence-and-firm-level-labor-and-organizational-dynamics.htm.Google Scholar
Jordan, Michael I. 2019. “Artificial Intelligence – The Revolution Hasn’t Happened Yet.” Harvard Data Science Review 1 (1): 18.Google Scholar
Keisler, Jeffrey M., Trump, Benjamin D., Wells, Emily, and Linkov, Igor. 2021. “Emergent Technologies, Divergent Frames: Differences in Regulator vs. Developer Views on Innovation.” European Journal of Futures Research 9 (1): 10. https://doi.org/10.1186/s40309-021-00180-5.CrossRefGoogle Scholar
Kennedy, R., Ozer, A., and Waggoner, P.. 2024. “The Paradox of Algorithms and Blame on Public Decisionmakers.” Business and Politics 26 (2).Google Scholar
Kochhar, Rakesh. 2023. Which U.S. Workers Are More Exposed to AI on Their Jobs? Washington, DC: Pew Research Center. https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/.Google Scholar
Lu, Qinghua, Zhu, Liming, Xu, Xiwei, Whittle, Jon, Zowghi, Didar, and Jacquet, Aurelie. 2023. “Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering.” ACM Computing Surveys.
Lu, Yu, and Gao, Xudong. 2022. “The Impact of Artificial Intelligence Technology on Market Public Administration in a Complex Market Environment.” Wireless Communications and Mobile Computing 2022: 5646234. https://doi.org/10.1155/2022/5646234.
Madhavan, Raj, Kerr, Jaclyn A., Corcos, Amanda R., and Isaacoff, Benjamin P.. 2020. “Toward Trustworthy and Responsible Artificial Intelligence Policy Development.” IEEE Intelligent Systems 35 (5): 103–108. https://doi.org/10.1109/MIS.2020.3019679.
Mallinson, Daniel J., Azevedo, Lauren, Best, Eric, Robles, Pedro, and Wang, Jue. 2024. “The Future of AI Is in the States: The Case of Autonomous Vehicle Policies.” Business and Politics 26 (2).
Mallinson, Daniel J., and Shafi, Saahir. 2022. “Smart Home Technology: Challenges and Opportunities for Collaborative Governance and Policy Research.” Review of Policy Research 39 (3): 330–352. https://doi.org/10.1111/ropr.12470.
Mehr, Hila. 2017. Artificial Intelligence for Citizen Services and Government. Cambridge, MA: Harvard Kennedy School. https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf.
Meltzer, Joshua P., and Tielemans, Aaron. 2022. The European Union AI Act: Next Steps and Issues for Building International Cooperation in AI. Washington, DC: Brookings Institution.
Mikhaylov, Slava Jankin, Esteve, Marc, and Campion, Averill. 2018. “Artificial Intelligence for the Public Sector: Opportunities and Challenges of Cross-Sector Collaboration.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376 (2128): 20170357. https://doi.org/10.1098/rsta.2017.0357.
Neumann, Oliver, Guirguis, Katharina, and Steiner, Reto. 2024. “Exploring Artificial Intelligence Adoption in Public Organizations: A Comparative Case Study.” Public Management Review 26 (1): 114–141.
Neumeister, Larry. 2023. “Lawyers Submitted Bogus Case Law Created by ChatGPT. A Judge Fined Them $5,000.” Associated Press. Last Modified June 22. Accessed February 1, 2024. https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c.
Parinandi, S., Crosson, J., Peterson, K., and Nadarevic, S.. 2024. “Investigating the Politics and Content of U.S. State Artificial Intelligence Legislation.” Business and Politics 26 (2).
Petersen, Alexander M., Ahmed, Mohammed E., and Pavlidis, Ioannis. 2021. “Grand Challenges and Emergent Modes of Convergence Science.” Humanities and Social Sciences Communications 8 (1): 194.
Png, Marie-Therese. 2022. “At The Tensions of South and North: Critical Roles of Global South Stakeholders in AI Governance.” In The Oxford Handbook of AI Governance, edited by Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young, and Baobao Zhang. Oxford, UK: Oxford University Press.
Powell, Catherine, and Dent, Alexandra. 2023. “Artificial Intelligence Enters the Political Arena.” Council on Foreign Relations. Last Modified May 24. Accessed February 1, 2024. https://www.cfr.org/blog/artificial-intelligence-enters-political-arena.
Randewich, Noel. 2023. “Companies Double Down on AI in June-quarter Analyst Calls.” Reuters. Last Modified August 1. Accessed February 1, 2024. https://www.reuters.com/technology/companies-double-down-ai-june-quarter-analyst-calls-2023-07-31/.
Robles, Pedro, and Mallinson, Daniel J.. 2023a. “Artificial Intelligence Technology, Public Trust, and Effective Governance.” Review of Policy Research.
Robles, Pedro, and Mallinson, Daniel J.. 2023b. “Catching Up with AI: Pushing Toward a Cohesive Governance Framework.” Politics & Policy 51 (3): 355–372.
Roy, Varun, and Sreedhar, Vignesh. 2022. “The EU’s Capacity to Lead in the Transatlantic Alliance on AI Regulation.” Claremont-UC Undergraduate Research Conference on the European Union 2022: 77–91.
Schiff, Daniel S., Schiff, Kaylyn Jackson, and Pierson, Patrick. 2021. “Assessing Public Value Failure in Government Adoption of Artificial Intelligence.” Public Administration 100 (3): 653–673.
Sousa, Weslei Gomes de, de Melo, Elis Regina Pereira, De Souza Bermejo, Paulo Henrique, Sousa Farias, Rafael Araújo, and Gomes, Adalmir Oliveira. 2019. “How and Where Is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda.” Government Information Quarterly 36. https://doi.org/10.1016/j.giq.2019.07.004.
Stiglitz, Joseph E., and Wallsten, Scott J.. 1999. “Public-Private Technology Partnerships: Promises and Pitfalls.” American Behavioral Scientist 43 (1): 52–73. https://doi.org/10.1177/00027649921955155.
Tallberg, J., Lundgren, M., and Geith, J.. 2024. “AI Regulation in the European Union: Examining Non-State Actor Preferences.” Business and Politics 26 (2).
Tecuci, Gheorghe. 2012. “Artificial Intelligence.” WIREs Computational Statistics 4 (2): 168–180.
Torfing, Jacob. 2019. “Collaborative Innovation in the Public Sector: The Argument.” Public Management Review 21 (1): 1–11. https://doi.org/10.1080/14719037.2018.1430248.
Turing, A. M. 1950. “Computing Machinery and Intelligence.” Mind 59 (236): 433–460.
Verma, Pranshu, and De Vynck, Gerrit. 2023. “ChatGPT Took Their Jobs. Now They Walk Dogs and Fix Air Conditioners.” The Washington Post. Last Modified June 2. Accessed February 1, 2024. https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/.
Whetsell, Travis A., Siciliano, Michael D., Witkowski, Kaila G. K., and Leiblein, Michael J.. 2020. “Government as Network Catalyst: Accelerating Self-Organization in a Strategic Industry.” Journal of Public Administration Research and Theory 30 (3): 448–464. https://doi.org/10.1093/jopart/muaa002.
Wirtz, Bernd W., Weyerer, Jan C., and Sturm, Benjamin J.. 2020. “The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration.” International Journal of Public Administration 43 (9): 818–829. https://doi.org/10.1080/01900692.2020.1749851.
Wilson, Carter A. 2000. “Policy Regimes and Policy Change.” Journal of Public Policy 20 (3): 247–274. https://doi.org/10.1017/S0143814X00000842.
Zhu, R., Feng, Z., and Chen, Q.. 2022. “Evolution of International Security Specifications for Artificial Intelligence.” In 2022 International Conference on Information System, Computing and Educational Technology (ICISCET), 23–25 May 2022, 217–222.