Navigating the New Frontier: AI’s role in shaping political landscapes
Nobody disputes that Artificial Intelligence (AI) will dramatically change how citizens interact with their government. What is up for debate is which regime types stand to gain or lose from AI's advancement. AI has the potential to harm democracies and strengthen autocracies. In a time of widespread democratic backsliding, we need a robust regulatory system to protect us from the harms of AI. While The Terminator may provide a scary image of what is to come, the real dangers posed by AI are more nuanced: its ability to shape our politics and psychology is far more realistic, yet equally grave.
Training an AI requires immense quantities of data. To gather it, developers 'scrape' the web endlessly, in the process feeding data antithetical to the spirit of democracy into the model. Put simply, an AI meant for everyday use in a democracy will be exposed to articles, images, and outright propaganda produced in China or Russia, thereby absorbing the misinformation those countries produce. For example, there is a documented case of an AI being unwittingly trained on Chinese government historical websites that detailed Mao's successes but not his famine-producing policies, resulting in the American-made AI espousing pro-Mao sympathies.
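To make the scraping problem concrete, here is a minimal, hypothetical sketch in Python of the kind of provenance filter a training pipeline could apply before a document enters the corpus. The domain names and document format are invented for illustration; they are not drawn from any real system or blocklist.

```python
# Minimal sketch: screening scraped documents against a (hypothetical)
# blocklist of state-media domains before they enter a training corpus.
from urllib.parse import urlparse

# Hypothetical blocklist; a real pipeline would rely on curated, maintained lists.
STATE_MEDIA_DOMAINS = {"example-state-outlet.cn", "example-propaganda.ru"}

def keep_for_training(doc: dict) -> bool:
    """Return True if a scraped document should enter the training corpus.

    `doc` is assumed to look like {"url": ..., "text": ...}.
    """
    domain = urlparse(doc["url"]).netloc.lower()
    # Drop pages served from known state-media domains; everything else passes.
    return not any(domain == d or domain.endswith("." + d)
                   for d in STATE_MEDIA_DOMAINS)

corpus = [
    {"url": "https://example-state-outlet.cn/history/mao", "text": "..."},
    {"url": "https://example.org/blog/post", "text": "..."},
]
filtered = [d for d in corpus if keep_for_training(d)]  # keeps only the second doc
```

The point of the sketch is that such filtering must be deliberate: a scraper with no provenance check ingests everything it finds, propaganda included.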
It is not just government-produced data but also the social media posts of citizens within authoritarian states that taint democratically made AI. Citizen self-censorship on social media under authoritarian rule is well documented: oppressed citizens hold their tongues on the web, either publicly repeating false government propaganda or deliberately declining to contradict the government. When trained on this material, an AI absorbs and retells the bias embedded in that silence.
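A toy calculation illustrates the point: if dissenting posts never appear in the data because their authors self-censored, a model estimating public opinion from that data will overstate support for the regime. The sentiment values below are invented purely for illustration.

```python
# Toy illustration: dissent that is self-censored never reaches the corpus,
# so anything trained on the corpus learns a distorted picture of opinion.
posts_true = [+1, +1, -1, -1, -1]                  # actual sentiment toward regime
posts_observed = [s for s in posts_true if s > 0]  # dissenting posts never published

print(sum(posts_true) / len(posts_true))           # -0.2 (real average opinion)
print(sum(posts_observed) / len(posts_observed))   # +1.0 (what the AI sees)
```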
Data from abroad can harm our AI, but the data at home is no better. Multiple scholars point to AI's bureaucratic use as a conduit for reinforcing racism, sexism, and other existing societal biases. More concretely, AI will make decisions about welfare, education, health insurance, and more based on data that misrepresents racialised minorities, ultimately undermining the equality inherent to democracy. Even before AI, there was well-established evidence of the underrepresentation of people of colour, women, and LGBTQ+ individuals in the public sector data used in policy-making, with consequences such as subpar healthcare and predatory mortgage approvals for these groups. Before, with humans in the loop, there was the potential to catch this and course-correct. With AI automation, the discrimination is compounded, with little chance of ever being caught.
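A toy example, with invented records, shows how this compounding works: a decision rule fit to biased historical approvals simply reproduces the old gap at machine speed, with no reviewer left to notice.

```python
# Toy illustration (hypothetical data): an automated rule fit to biased
# historical decisions reproduces those decisions at scale.
historical = [
    # (group, qualified, approved) -- group "B" was under-approved historically
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def approval_rate(records, group):
    """Approval rate among *qualified* applicants of a given group."""
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in qualified) / len(qualified)

for g in ("A", "B"):
    print(g, approval_rate(historical, g))
# A 1.0 -- qualified applicants from group A were always approved
# B 0.5 -- equally qualified applicants from group B were approved half the time
# A model trained to imitate the `approved` column learns this gap and applies
# it automatically, with no human reviewer in the loop to catch it.
```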
Scholars often point to AI-fuelled algorithms producing echo chambers harmful to democracy. AI recommendation algorithms expose citizens to posts and ideas similar to those they have already liked, which works against the informed citizenry a democracy requires. Others build on this work, arguing that AI information vacuums limit citizens' epistemic agency, producing a society in which people cannot make meaningful choices because helpful information is impossible to locate. Simultaneously, this process benefits authoritarian regimes by ensuring citizens only have access to information that paints the regime in a desirable light, thereby reducing the likelihood of resistance.
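A minimal sketch shows how such like-based filtering narrows exposure. The similarity measure (Jaccard overlap of topic tags) and the articles are illustrative assumptions, not any platform's actual algorithm.

```python
# Toy sketch of like-based filtering: ranking items by similarity to what a
# user already liked pushes dissimilar viewpoints out of sight.
def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two sets of topic tags."""
    return len(a & b) / len(a | b) if a | b else 0.0

articles = {
    "a1": {"economy", "party_line"},
    "a2": {"economy", "criticism"},
    "a3": {"sports"},
    "a4": {"party_line", "history"},
}

liked = {"economy", "party_line"}  # the user's existing preferences

# Rank everything by similarity to past likes; dissenting or novel items sink.
ranked = sorted(articles, key=lambda k: similarity(articles[k], liked),
                reverse=True)
print(ranked)  # ['a1', 'a2', 'a4', 'a3'] -- items echoing `liked` come first
```

Each click on a top-ranked item feeds back into `liked`, so the loop tightens with every iteration: that feedback, not any single ranking, is what produces the echo chamber.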
Multiple experts assert that centralised AI policies allow nations of all regime types to develop AI in a manner suited to their purposes. In practice, this means authoritarian regimes' hierarchical nature gives them an edge over democracies in developing AI and gaining a competitive technological advantage. Authoritarian regimes' ability to control their citizens allows them to impose policies mandating AI trained on data sympathetic to the regime. For example, Saudi Arabian centralisation allows the government to engage in AI research directly instead of relying solely on businesses, and China uses its centralised power to pressure businesses into producing AI sympathetic to the national interest. China also uses government oversight to co-opt private companies into building monitoring processes and anti-democratic sympathies into their AI.
When comparing American and Chinese AI development policies, the Chinese model most effectively ensures that AI development supports the regime. The Chinese model is centralised, ensnaring businesses into developing AI specifically for government use, whereas the American regulatory model can only be described as patchwork, with different, often contradictory rules applying in each state, leading AI innovators to pack up shop and move elsewhere.
Leading academics universally conclude that AI curtails citizens' rights in dictatorships and enhances the autocrat's position. AI reduces the cost of surveillance, censorship, and decision-making through automation. In this way, authoritarian regimes use AI to solidify their grip by curtailing privacy rights. Authoritarian governments can develop a repression model based on China's "Social Credit" system, in which AI monitors civilians and assigns them a score for good behaviour; low scores result in the loss of mobility and speech rights, in effect using AI to enforce state-sponsored ideology.
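To see how cheaply such repression can be automated once monitoring data exists, consider this toy sketch. The penalties, rewards, and thresholds are invented assumptions for illustration, not a description of China's actual system.

```python
# Toy sketch (invented rules, not the real system): a behaviour score
# updated from monitored events, with thresholds that gate rights.
PENALTIES = {"criticised_government": -50, "missed_payment": -10}
REWARDS = {"praised_policy": 20, "volunteered": 10}

def update_score(score: int, events: list[str]) -> int:
    """Apply each monitored event's penalty or reward to the score."""
    for e in events:
        score += PENALTIES.get(e, 0) + REWARDS.get(e, 0)
    return score

def allowed(score: int) -> dict:
    # Hypothetical thresholds: low scores lose travel, then posting, privileges.
    return {"travel": score >= 600, "public_posting": score >= 500}

score = update_score(700, ["criticised_government", "missed_payment",
                           "criticised_government"])
print(score, allowed(score))  # 590 -> travel revoked, posting still allowed
```

The chilling effect comes from the automation itself: once the rules are encoded, enforcement scales to every citizen at near-zero marginal cost.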
Artificial Intelligence poses enormous dangers to our way of life. It creates echo chambers that worsen radicalisation and polarisation. The data it is trained on is riddled with systemic errors, producing AIs with autocratic sympathies and pre-existing societal biases. The leading democratic regulatory framework is insufficient to fuel innovation, leaving us at a comparative disadvantage. Finally, AI has the potential to drastically increase oppression in dictatorships. If we are to successfully navigate this new Information Age, we need regulation that ensures better data quality and supports innovation.