2025 March/April LD Topic Analysis
By Azzy Xiang
Azzy Xiang is a varsity debater on the Dublin Jerome team who qualified to nationals in Congressional Debate her first time participating in the event, and a two-year state qualifier in Lincoln-Douglas. She is a staff writer at Kankee and Isegora Briefs, and a reporter for the current events publication The Red Folder under Equality in Forensics, a national organization assisting low-income debaters through free resources and coaching.
With the Mar/Apr LD season already halfway over, what's left for most ambitious debaters is the NSDA Last Chance qualifier. Out of hundreds of entries, usually only 16 qualify to the National Tournament in Des Moines, Iowa. Because the competition is fierce and arguments can get in-depth and technical, this topic analysis will focus on (1) definition strategies and framing the round, (2) framework strategies you can easily swap around, (3) arguments that can be used, and (4) responses and blocks. Of course, if you are preparing for a local or circuit tournament that is not Last Chance, this topic analysis can surely help too.
In addition, I have extra evidence, casework, half-written cards for briefs, and other useful information that could benefit you for this topic. If you would like to have my evidence, feel free to email me at snazzyazzy@duck.com before the end of the Mar/Apr season and let me know!
Part 1: Definitions and Framing
Resolved: The development of Artificial General Intelligence (AGI) is immoral.
The development: This resolution does not include an actor such as the United States or China, which is uncommon for regular-season topics. The absence of an actor can be interpreted in the following ways, depending on your side:
Since there is no actor, in order to prove that the development of Artificial General Intelligence is immoral, it must be proven immoral in every instance and for every possible actor.
Since there is no actor, in order for the development of Artificial General Intelligence to be considered immoral, it must be harmful to society on net.
Immoral: This can easily be defined with an observation embedded within a morality-based framework, or simply as the concept of whether something is right or wrong. You can weigh morality in one of three ways:
Deontological: Whether an action is inherently right or wrong. For example, killing is inherently wrong no matter what.
Consequentialist: Whether an action causes consequences that are good or bad. For example, lying to save someone’s life is good, even if the act of lying is inherently bad.
Both: Some philosophical principles of morality allow for the assessment of both deontological and consequentialist impacts. For example, to assess morality, you can first look at the intention behind an action and then at its effects.
Keep in mind that the negative does not have to argue that the development of Artificial General Intelligence is MORAL, only that it is NOT immoral. That means you can make a case about the amorality of technology and explore other unique arguments down that avenue.
Artificial General Intelligence: A simple sweep of Wikipedia reveals that the true definition of AGI is contested. Some argue that AGI is already here, and others that it is impossible. Here are some useful cards that may supply either side of the definitions debate:
A pretty neutral definition: What is AGI? - Artificial General Intelligence Explained - AWS “Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.”
A slightly crazy definition: What is artificial general intelligence (AGI)? This page defines AGI as a hypothetical machine that can do any intellectual task a human being can, comparable to or exceeding human capabilities.
An even more crazy definition: What is Meant by AGI? On the Definition of Artificial General Intelligence This article talks about how no human intervention is required after an AGI is deployed.
AGI is already here: Artificial General Intelligence Is Already Here | NOEMA
AGI is impossible: Don’t believe the hype: AGI is far from inevitable | Radboud University
As you can see, there are a lot of ways to go! When definitions conflict, it is essential to use logical reasoning and other cards from your case to support your interpretation and win the round.
Part 2: Framework
Morality
Morality is always a common value for debaters to run, especially since this resolution directly concerns it. You can pair it with any value criterion, utilitarian or deontological. It can be very effective to present morality as the necessary value in the round, because the resolution calls for determining whether the development of AGI is moral, not whether it is effective or satisfies any other value.
The Stanford Encyclopedia of Philosophy provides an interesting insight about morality and equality: Moral norms are universal in the sense that they apply to and bind everyone in similar circumstances. The principles expressing these norms are also thought to be general, rather than specific, in that they are formulable “without the use of what would be intuitively recognized as proper names, or rigged definite descriptions” (Rawls 1979, 131). They are also commonly held to be impartial, in holding everyone to count equally.
Structural Violence and Minimizing Structural Oppression
Read more about the framework here: Structural Violence Chapter. Some parts of the article may be useful for framework cards.
Since AGI has the potential to be oppressive and harmful, it's easy for the affirmative to point out structural flaws in AGI and how it could harm marginalized groups. On the negative side, however, AGI can be cast as a tool to address structural biases and issues. If the affirmative leans into sweeping extinction, existential-risk, and effective-altruism arguments, the negative can win this framework and weigh that those existential risks are unlikely and distract attention from real-world societal issues. This is a common critique of debaters who cite Bostrom, who fearmongers by citing huge risks of AGI while being racist and unconcerned with moral ideals.
Human Dignity
Similar to the previous framework, this works for affirmative or negative cases focusing on how AGI can either harm or protect human dignity. Human rights abuses behind AGI, such as sweatshops and exploitative labor, may make for a strong affirmative case. The framework also gives the negative plenty of ground on what kinds of beings have moral value, against affirmative arguments like AGI slavery, suffering, and natalism.
Eudaimonia
In Aristotle's philosophy, this is defined as flourishing and living well. Read more about the concept here: What is Eudaimonia? Aristotle and Eudaimonic Wellbeing. It can be used as a unique version of a “maximizing well-being” or utilitarian framework if your case is about bringing benefits to human society.
Part 3: Argument Brainstorming
Halfway through the season, I am quite certain that the majority of you have figured out stock arguments on both sides, such as existential risk, innovation, and arms races. Thus, in this section, I'll provide some very unique arguments that threw off quite a few opponents when I first tried them out. Since their scope is limited, they serve only as inspiration and as examples of the types of link chains you could come up with on this topic.
Affirmative
Because AGI uniquely has the ability to self-teach, it makes a great deal of technology possible. However, that includes harmful technology that can alter the planet and manipulate millions of data points in malicious ways. Given the massive negative impacts attached to these potential technologies that firms are seeking to develop with AGI, you can outweigh whatever benefits the negative brings.
Climate geoengineering
Artificial general intelligence for the upstream geoenergy industry: A review - ScienceDirect The integration of Artificial General Intelligence (AGI) in the upstream geoenergy industry marks the beginning of a transformative era.
Silicon Valley’s Push Into Geoengineering Firms are flocking to invest in geoengineering projects.
Why geoengineering is still a dangerous, techno-utopian dream ...geoengineering is a fallacy, since these methods need to be deployed at a scale large enough to impact the global climate system to be certain of their efficacy. But that wouldn’t be a test of geoengineering; it would actually be conducting geoengineering, which is an unimaginably large risk to take without knowing the potentially harmful consequences of such a planetary scale deployment. And some of these consequences are already known. Solar geoengineering, for example, alters rainfall patterns that can disrupt agriculture and water supplies. Injecting sulfate aerosols in the stratosphere above the Arctic to mimic volcano clouds, for example, can disrupt the monsoons in Asia and increase droughts, particularly in Africa, endangering food and water sources for two billion people.
The Risks of Geoengineering For example, solar geoengineering is inherently unpredictable and risks further destabilizing an already destabilized climate system. Models show that it would have an uneven effect regionally and could exacerbate climate change in countries on the front line of the crisis.
Factory farming
One can argue that AGI has the potential to reduce the production costs of a practice that affects tens of billions of lives; since animals are completely excluded from relevant ethics research, AGI's development ignores the significance of animal suffering and encodes anti-animal biases.
The Role of Artificial Intelligence in Agriculture In his book, A Smarter Farm, Jack Uldrich states AGI (or artificial general intelligence – a higher level of AI) will take automation to the next level delivering on the promise of precision agriculture. In other words, while the technology may already exist, the addition of AI is enabling producers to optimize the potential of those tools — as was the case with the dairy producer. Two AI solutions being integrated into farming technologies.
This extremely large article explains several reasons why factory farming outweighs any other impact: AI ethics: the case for including animals
The ethical implications of AI entirely neglect animals, even though AI systems are likely to affect hundreds of billions of animals in the future.
AI systems are starting to be used in factory farms; since they can reduce production costs, and since start-ups are accelerating into the field with innovations like AGI, this could standardize AI in agriculture and affect tens of billions more animals.
Factory farming kills 70 billion mammals and birds and 100 billion fish yearly, making it the most systemic example of inhumanity to other sentient animals. Even if animal lives are seen by some to "not matter" as much as human lives, factory farming is morally wrong because its benefits are far outweighed by the suffering it causes.
On the other hand, the affirmative can build a case highlighting systemic inequalities and structural problems with the development of AGI, particularly language loss and job loss. Although these arguments are more common, they can be developed in a punchy and unique way.
Language extinction
AI and the Death of Human Languages - Lux Capital
Understanding Gender and Racial Bias in AI
Intentions of developing AGI
Since the intentions of most firms and individuals developing AGI are rooted in immoral ideals, the development of AGI can be interpreted as immoral.
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
Job loss
The challenges posed by AI are not technological, but must be met today
AGI could automate not only repetitive tasks but also highly skilled jobs, pushing some sections of the population into very long-term structural unemployment.
How to solve AI’s inequality problem
70% of the growth in US wage inequality between 1980 and 2016 was caused by automation. That’s mostly before the surge in the use of AI technologies. And Acemoglu worries that AI-based automation will make matters even worse. Early in the 20th century and during previous periods, shifts in technology typically produced more good new jobs than they destroyed, but that no longer seems to be the case. One reason is that companies are often choosing to deploy what he and his collaborator Pascual Restrepo call “so-so technologies,” which replace workers but do little to improve productivity or create new business opportunities.
Negative
This side opens up a lot of room to engage in technical and niche arguments about specific use cases or scientific aspects of AGI. I'll explore one of them: graph databases, a dynamic form of data storage needed to build real AGI models. Since AGI models require this type of database, all of the database's benefits can be claimed in your case.
Warrants
Exploring the Data Landscapes of AGI — Knowledge Graphs Vs. Relational Databases | by SingularityNET
Writing AI/AGI algorithms that work on relational data is possible, but writing them by traversing a knowledge graph is much more natural. If your algorithm reaches a concept (e.g., “Thrillers” on your bookshelf), it can reason about other concepts related to it just by looking at the neighbor concepts in the graph (the indexes are made to perform this type of operation efficiently). In essence, knowledge graphs shine when it comes to…
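To make this warrant concrete, here is a minimal sketch in Python (my own illustration, not from the SingularityNET article; the bookshelf data and function name are hypothetical) of why neighbor-based reasoning is natural in a graph: once an algorithm reaches a concept, every related concept is one lookup away, with no joins required.

```python
# Minimal sketch of knowledge-graph traversal (illustrative data only).
from collections import defaultdict

# Each edge links two related concepts, as in the "bookshelf" example.
edges = [
    ("Thrillers", "Crime Fiction"),
    ("Thrillers", "Psychological Suspense"),
    ("Crime Fiction", "Detective Novels"),
]

# Adjacency list: the neighbors of any concept are one dictionary lookup away.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)  # treat relatedness as symmetric

def related_concepts(concept, depth=1):
    """Collect every concept reachable within `depth` hops of `concept`."""
    frontier, seen = {concept}, {concept}
    for _ in range(depth):
        frontier = {n for c in frontier for n in graph[c]} - seen
        seen |= frontier
    return seen - {concept}

print(sorted(related_concepts("Thrillers", depth=2)))
# ['Crime Fiction', 'Detective Novels', 'Psychological Suspense']
```

In a relational database, the same two-hop query would require recursive self-joins on an edges table, which is exactly the awkwardness the card describes.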
Impacts
General financial impacts:
Leveraging Graph Databases for Fraud Detection in Financial Systems Graph databases reveal patterns and relationships that would otherwise be hidden, allowing financial institutions to detect fraud faster and more efficiently.
From Losses to Savings: How Memgraph Helped Company X Save 7 Figures through Fraud Detection In this case study, you will learn how the collaboration between Memgraph and Company X yielded impressive results, demonstrating that incorporating graph analytics into the existing ML models greatly enhanced fraud detection effectiveness. By merging the capabilities of graph technology with the existing models, Company X achieved the following results: Significant increase in fraud detection across various types of claims. 135% increase in detection efficiency. Substantial savings in the seven-figure range for over a million claims processed.
Impacts to climate change:
How Data Science Can Help Fight Climate Change Understanding climate change is a complex task that involves analyzing patterns, identifying anomalies, and making predictions about future climate behaviors. The role of data in this process is indispensable, providing the backbone for informed analysis and decision-making.
Using Graph Technology for Environmental Data Compliance and Implementation With the visualization capabilities built into graph databases, you can easily see environmental problems highlighted in the data by region, site, or equipment. Once you have all your data collected, through remote sensing and other methods, you can identify problems and move into ground truthing by sending people into the field to verify data and fix compliance-related issues. Perhaps you prioritize sending technicians to the wells leaking “well” – above 10 kg/hr – before then moving on to the wells that are slightly above that threshold. As seen with this carbon management method, graph databases are flexible tools to track, visualize, and analyze data and effectively address environmental hazards to meet climate action plans. The more we get corporations on board with regulations to comply with environmental mandates, they will see for themselves the benefits of data transparency – both for their company and for the planet. Data is crucial to a company’s commitment to sustainability, from using data to maintain equipment to determining next steps for implementation. With emerging regulations addressing climate change, we have to effectively tackle the data – otherwise, we can’t effectively tackle the climate crisis.
Part 4: Blocks
In this section, I’ll try my best to provide you with some evidence and strategies for the most common arguments on each side.
AFF A2 NC (Affirmative answers to negative)
A2 Technology is neutral
A Closer Look at Systemic Bias in Tech | Robert F. Smith
These human biases can create data bias by influencing how new technology creators collect, interpret and label data. The biases can then impact data functionality or results. Some of the most common types of data bias include algorithmic bias, representation bias and design bias.
Algorithmic bias: This form of bias stems from inputting skewed or limited data into a new technology while it is being developed. Implementing skewed data commonly leads to recurring errors that create inequitable results.
Representation bias: Representation bias happens when groups of people are disproportionately represented in datasets. This influence leads to unfair outcomes in machine learning and inequitable decisions.
Design bias: This form of bias represents the discriminatory actions or beliefs that are embedded in algorithms or technologies throughout the design process. A biased design commonly results in unintentional inequities or discriminatory outcomes.
The following overviews are very important in your response:
Remember, we are debating the development of AGI: if humans develop something that reflects their own intelligence, AGI will inherit their biases and moral flaws as well, which is immoral.
AGI is inherently distinct from other algorithms because it can make its own decisions based on the data it is fed. If AGI has cognitive capabilities, it doesn't matter whether humans try to use this flawed technology in a good way, because it can create its own consequences.
A2 AGI allows for various innovations
General defense:
Why Creation Cannot Go Beyond Its Creator: A Look at AGI and Human Intelligence | by Mexasol | Medium To explore why a creation might not go beyond its creator, let’s first break down what it means to create something. In any domain—whether it’s art, engineering, or biology—the creator imparts certain limitations and intentions upon their creation. A painting, no matter how complex, remains a reflection of the artist's vision. Similarly, an engineered system, while sophisticated, can only function within the parameters set by its creators. Now, when it comes to AGI, which aspires to replicate or even surpass human intelligence, the same principle applies. AGI systems are designed based on algorithms, data, and models built by humans. The machine’s "knowledge" is limited to what humans feed it—data sets, rules, training models, and environmental factors. No matter how fast it processes information or how adaptable it becomes, AGI is still a product of human knowledge, reasoning, and creativity.
Medical innovations:
(Defense) AI does not necessarily lead to more efficiency in clinical practice, research shows Researchers at the University Hospital Bonn (UKB) and the University of Bonn have therefore conducted a comprehensive analysis of existing studies on the effect of AI. They were able to show that AI does not automatically lead to an acceleration of work processes. Although AI is often seen as a solution for handling routine tasks such as monitoring patients, documenting care tasks and supporting clinical decisions, the actual effects on work processes are unclear. Particularly in data-intensive specialties such as genomics, pathology and radiology, where AI is already being used to recognize patterns in large amounts of data and prioritize cases, there is a lack of reliable data on efficiency gains.
(Turn) Preliminary results presented highlight the capacity of LLMs to provide guidance that, while not generating direct instructions for the creation of biological weapons, present relevant insights that could assist in the execution of these attacks (Mouton et al., 2023).
A recent mapping conducted by GA.IA—Group for Integrated AI Analysis, a volunteer group of professionals committed to identifying, assessing, and predicting catastrophic risks associated with advanced AI models, including those that reach Artificial General Intelligence (AGI) capabilities, is presented in Table 1. The information revealed highlights the critical intersection between biological threats and AI. This detailed analysis emphasizes the emerging risks resulting from the convergence of criminal dissemination of biological manipulations and the potential role of AI in enhancing these pathogens for catastrophic purposes.
Climate change innovations:
Hype Or Reality: Will AI Really Solve Climate Change? Another danger is that behavioral changes driven by the mainstream adoption of AI technology are not always entirely environmentally friendly. The emergence of ride-sharing apps like Uber has cut the number of journeys taken in private cars. But it’s also reduced the use of even more environmentally friendly forms of public transport, leading to increased pollution.
NEG A2 AC (Negative answers to affirmative)
A2 AGI is bad for the environment
(Turn) The Impact of Artificial General Intelligence on Climate Reform AGI models have been used to study the ocean and the ways in which it both absorbs and transfers heat in order to predict its response to increasing temperatures. For example, AGI is being trained to gather information in the arctic over winter (when no ships are able to travel in this region) in order to monitor sea levels, temperature, etc. AGI has also been used in space through satellite imagery to capture forest fires among other environmental devastations...The next group in which AGI collected data on climate change could have a huge potential effect is businesses. It was found that only 33% of business leaders account for the effects of climate change, while it is estimated to have a trillion dollar effect on the US economy alone. Climate change slows the supply chain and disrupts the interconnectedness of the market. However AGI combined with GIS (Geographic Information Systems) technology allows analysts to create smart maps “that layer climate information, hazard data, and satellite imagery on the regions and networks that compose a business’s supply chain.” These maps must both be extremely accurate and detail oriented as well as project a global large scale picture in order to create a trustworthy picture. AGI has the potential to both predict future destructive potentials of climate change, mitigate potential losses, as well as inform the general public and governments to create influential policy reform.
(Defense) The Illusion Of AI’s Existential Risk | NOEMA Carbon emissions of AI training and inference today are minuscule compared to the computing sector as a whole, let alone the more carbon-intensive activities of our industrial civilization, such as construction and transportation, which together account for more than a third of our total carbon emissions. While the use of AI may continue to grow dramatically over the coming years, its energy efficiency will also likely continue to improve dramatically, making it unclear whether AI will ever be among the most significant carbon emitters on a global scale.
A2 Labor exploitation
I would recommend a three-pronged response on labor exploitation, an argument that surfaces quite commonly given that training current models requires enormous amounts of data and that large numbers of people are harmed producing it:
(O/V) Many manufactured products are created by exploited workers, but that doesn’t make the development of the products themselves immoral. This is an issue of government and business regulation, not the morality of AGI.
(Defense) Moving toward truly responsible AI development in the global AI market U.S. policy is essential for ensuring fair treatment of data workers both domestically and abroad. For instance, reforming existing labor laws to accurately support the nuances of data work and ensuring that the researchers and companies that are using these platforms compensate domestic workers in accordance with state and federal guidelines is crucial. Such reforms should consider the unique nature of data work, which often involves irregular hours, varying tasks and workloads, and the use of digital platforms that may not fit neatly into traditional labor law frameworks. By updating these laws, the U.S. can better protect data workers from exploitation.
(Turn) AI is already being used to map supply chains for risk of forced labor, and AGI could do the same. Can Artificial Intelligence Fix Slavery In Supply Chains? AI-powered supplier risk management can help organizations reduce the time it takes to evaluate supplier risk by up to 63%, enabling them to make more informed decisions and improve their supply chain resilience.
A2 Existential Risk
(Defense) ‘Superintelligence,’ Ten Years On A 2024 paper by Adriana Placani of the University of Lisbon sees anthropomorphism in AI as “a form of hype and fallacy.” As hype, it exaggerates AI capabilities “by attributing human-like traits to systems that do not possess them.” As fallacy, it distorts “moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust.” A key problem, she contends, is that anthropomorphism is “so prevalent in the discipline [of AI] that it seems inescapable.” This is because “anthropomorphism is built, analytically, into the very concept of AI.” The name of the field “conjures expectations by attributing a human characteristic—intelligence—to a non-living, non-human entity.” Many who work with code find the prospect of programs becoming goal-seeking, power-seeking, and “making their own decisions” fundamentally implausible. Code is like clay. Programmers have to mould it into shape (and test and debug it) if it is to do something useful. Even then, it is not entirely reliable. Goals have to be put into software by humans. What’s missing is the seeking. There is nothing completely equivalent to “desire” or “intent” in executable code. Sure, a coder can call the main function of a program “goal” or “understanding” but as Drew McDermott pointed out in his classic critique, “Artificial Intelligence Meets Natural Stupidity,” decades ago: “A major source of simple-mindedness in AI programs is the use of mnemonics like ‘UNDERSTAND’ or ‘GOAL’ to refer to programs and data structures.” If a researcher “calls the main loop of his program ‘UNDERSTAND,’ he is (until proven innocent) merely begging the question.” Such a coder might “mislead a lot of people, most prominently himself.”
(Turn) AGI solves existential risk by improving governance quality.
Artificial Intelligence as exit strategy from the age of acute existential risk — LessWrong In any case, a super-human intelligence is the definitive governance tool: it would be capable of proposing social and political solutions superior to those that the human mind can develop, it would have a wide predictive superiority, and since it is not human it would not have particularistic incentives
AI in government: Top use cases | IBM Governments that use AI can have more powerful predictive analytics that help them with important tasks such as external threat detection, health crises and financial issues like inflation. By understanding what is likely to happen quickly, governments can make smarter decisions that might minimize the effect of these issues.
(O/V) AGI could help humanity address all of these risks too, which makes its development morally justified; pursuing it could add up to more safety progress and a more careful approach to the technology.
Hopefully, this topic analysis was helpful to your preparation for Last Chance, or if not, any local or national circuit tournament you may have in mind. If you’ve made it to the end of this article, it only serves to prove that you’re a determined debater and will do amazing. Remember that you are awesome, and best of luck! Azzy out, and see you around for NSDA nationals! 😉
This marks the end of this post. If you have any further questions, please feel free to email us via our email: resources.debate@gmail.com. Please spread the word to other debaters who you think may find this website useful! Make sure to check out our other posts, as they're guaranteed to help.