2025 March/April LD Topic Analysis

By Azzy Xiang

Azzy Xiang is a varsity debater on the Dublin Jerome team, a national qualifier in Congressional Debate in her first season competing in the event, and a two-year state qualifier in Lincoln-Douglas. She is a staff writer at Kankee and Isegora Briefs, and a reporter for the current events publication The Red Folder under Equality in Forensics, a national organization assisting low-income debaters through free resources and coaching.


Already halfway through the Mar/Apr LD season, what remains for most ambitious debaters is the NSDA Last Chance qualifier. Out of hundreds of entries, usually only 16 qualify for the National Tournament in Des Moines, Iowa. Because the competition is stiff and arguments can get in-depth and technical, this topic analysis will focus on (1) definition strategies and framing the round, (2) framework strategies you can easily swap around, (3) arguments that can be used, and (4) responses and blocks. Of course, if you have a local or circuit tournament that is not Last Chance, this topic analysis can surely help too.


In addition, I have extra evidence, casework, half-written cards for briefs, and other useful information that could benefit you for this topic. If you would like to have my evidence, feel free to email me at snazzyazzy@duck.com before the end of the Mar/Apr season and let me know!



Part 1: Definitions and Framing


Resolved: The development of Artificial General Intelligence (AGI) is immoral. 



The development: This resolution does not specify an actor such as the United States or China, which is uncommon for regular-season topics. Depending on your side, that can be interpreted in the following ways:


Immoral: This can be defined easily through an observation embedded within a morality-based framework: the question of whether something is right or wrong. You can weigh morality in one of three ways:


Keep in mind that the negative does not have to argue that the development of Artificial General Intelligence is MORAL, only that it is NOT immoral. That means you can build a case about the amorality of technology and explore other unique arguments down that avenue.


Artificial General Intelligence: A simple sweep of Wikipedia reveals that the true definition of AGI is contested. Some argue that AGI is already here; others, that it is impossible. Here are some useful cards that may supply either side of the definitions debate:



As you can see, there are many ways to go! When definitions conflict, it is essential to use logical reasoning and other cards from your case to support your interpretation and win the round.



Part 2: Framework


Morality


Morality is always a common value for debaters, especially since this resolution directly concerns it. You can pair it with any value criterion, utilitarian or deontological. It can be very effective to present morality as the necessary value in the round, because the resolution asks whether developing AGI is moral, not whether it is effective or satisfies any other value.

Stanford Encyclopedia of Philosophy provides an interesting insight about morality and equality: Moral norms are universal in the sense that they apply to and bind everyone in similar circumstances. The principles expressing these norms are also thought to be general, rather than specific, in that they are formulable “without the use of what would be intuitively recognized as proper names, or rigged definite descriptions” (Rawls 1979, 131). They are also commonly held to be impartial, in holding everyone to count equally. 


Structural Violence and Minimizing Structural Oppression


Read more about the framework here: Structural Violence Chapter. Some parts of the article may be useful for framework cards.

Since AGI has the potential to be oppressive and harmful, it is easy to point out structural flaws in AGI and how it could harm marginalized groups. On the negative side, however, AGI can be framed as a tool for addressing structural biases and issues. If the affirmative leans on sweeping extinction, existential-risk, and effective-altruism arguments, the negative can win this framework and weigh that those existential risks are unlikely and distract attention from real-world societal issues. This is a common critique of debaters who cite Bostrom, who is accused of fearmongering about the enormous risks of AGI while making racist remarks and disregarding moral ideals.


Human Dignity


Similar to the previous framework, this works for affirmative or negative cases focusing on how AGI can either harm or protect human dignity. Human rights abuses behind AGI, such as sweatshops and extensive labor, can ground a strong affirmative case. It also gives the negative plenty of ground on which kinds of beings have moral value, useful against affirmative arguments like AGI slavery, suffering, and natalism.


Eudaimonia


In Aristotle's philosophy, eudaimonia means flourishing and living well. Read more about the concept here: What is Eudaimonia? Aristotle and Eudaimonic Wellbeing. It can serve as a distinctive version of a "maximizing well-being" or utilitarian framework if your case is about bringing benefits to human society.


Part 3: Argument Brainstorming


Halfway through the season, I am quite certain that most of you have figured out the stock arguments on both sides, such as existential risk, innovation, and arms races. Thus, in this section, I will provide some very unusual arguments that threw off quite a few opponents when I first tried them out. Since their scope is limited, they serve only as inspiration and as examples of the kinds of link chains you could come up with on this topic.


Affirmative


Because AGI uniquely has the ability to self-teach, it makes a great deal of technology possible. That includes harmful technology that can alter the planet or manage millions of data points in malicious ways. With the massive negative impacts attached to these potential technologies, which firms are already seeking to develop with AGI, you can outweigh whatever benefits the negative brings.


Climate geoengineering

Artificial general intelligence for the upstream geoenergy industry: A review - ScienceDirect The integration of Artificial General Intelligence (AGI) in the upstream geoenergy industry marks the beginning of a transformative era.

Silicon Valley’s Push Into Geoengineering Firms are flocking to invest in geoengineering projects.

Why geoengineering is still a dangerous, techno-utopian dream ...geoengineering is a fallacy, since these methods need to be deployed at a scale large enough to impact the global climate system to be certain of their efficacy. But that wouldn’t be a test of geoengineering; it would actually be conducting geoengineering, which is an unimaginably large risk to take without knowing the potentially harmful consequences of such a planetary scale deployment. And some of these consequences are already known. Solar geoengineering, for example, alters rainfall patterns that can disrupt agriculture and water supplies. Injecting sulfate aerosols in the stratosphere above the Arctic to mimic volcano clouds, for example, can disrupt the monsoons in Asia and increase droughts, particularly in Africa, endangering food and water sources for two billion people.

The Risks of Geoengineering For example, solar geoengineering is inherently unpredictable and risks further destabilizing an already destabilized climate system. Models show that it would have an uneven effect regionally and could exacerbate climate change in countries on the front line of the crisis.


Factory farming

One can argue that AGI has the potential to reduce the production costs of a practice that affects tens of billions of lives, and that since animals are completely excluded from relevant ethics research, AGI development ignores the significance of animal suffering and embeds anti-animal biases.

The Role of Artificial Intelligence in Agriculture In his book, A Smarter Farm, Jack Uldrich states AGI (or artificial general intelligence – a higher level of AI) will take automation to the next level delivering on the promise of precision agriculture. In other words, while the technology may already exist, the addition of AI is enabling producers to optimize the potential of those tools — as was the case with the dairy producer. Two AI solutions being integrated into farming technologies.

This lengthy article explains several reasons why factory farming outweighs any other impact: AI ethics: the case for including animals

Alternatively, the affirmative can build its case by highlighting systemic inequalities and structural problems with the development of AGI, particularly language loss and job loss. Although these arguments are more common, they can be developed in a punchy and unique way.


Language extinction

AI and the Death of Human Languages - Lux Capital

Understanding Gender and Racial Bias in AI 


Intentions of developing AGI 

Since the intentions of most firms and individuals developing AGI are rooted in immoral ideals, the development of AGI itself can be interpreted as immoral.

The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday

The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.


Job loss

The challenges posed by AI are not technological, but must be met today

AGI could automate not only repetitive tasks but also highly skilled jobs, pushing some sections of the population into very long-term structural unemployment.

How to solve AI’s inequality problem 

70% of the growth in US wage inequality between 1980 and 2016 was caused by automation. That’s mostly before the surge in the use of AI technologies. And Acemoglu worries that AI-based automation will make matters even worse. Early in the 20th century and during previous periods, shifts in technology typically produced more good new jobs than they destroyed, but that no longer seems to be the case. One reason is that companies are often choosing to deploy what he and his collaborator Pascual Restrepo call “so-so technologies,” which replace workers but do little to improve productivity or create new business opportunities.


Negative

This side opens a lot of room for technical and niche arguments about specific use cases or scientific aspects of AGI. I will explore one of them: graph databases, a dynamic form of data storage that is needed to build real AGI models. Since these models require this type of database, the benefits of such databases can be folded into a negative case.


Warrants

Exploring the Data Landscapes of AGI — Knowledge Graphs Vs. Relational Databases | by SingularityNET 

Writing AI/AGI algorithms that work on relational data is possible, but writing them by traversing a knowledge graph is much more natural. If your algorithm reaches a concept (e.g., “Thrillers” on your bookshelf), it can reason about other concepts related to it just by looking at the neighbor concepts in the graph (the indexes are made to perform this type of operation efficiently). In essence, knowledge graphs shine when it comes to…
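To make the card's point concrete for judges unfamiliar with the technology, here is a minimal Python sketch of the "neighbor concept" lookup the evidence describes. It is purely illustrative, not a real database: the graph contents and function names are my own invented examples, built around the card's "Thrillers" bookshelf scenario.

```python
# A knowledge graph stored as an adjacency list: each concept maps to
# its directly related concepts, so "reasoning about neighbors" is a
# single dictionary lookup rather than a relational join.
knowledge_graph = {
    "Thrillers": ["Crime Fiction", "Suspense"],
    "Crime Fiction": ["Detective Novels"],
    "Suspense": [],
    "Detective Novels": [],
}

def related_concepts(graph, concept, depth=1):
    """Collect every concept reachable within `depth` hops of `concept`."""
    frontier, seen = {concept}, set()
    for _ in range(depth):
        # Expand one hop outward, skipping concepts already visited.
        frontier = {n for c in frontier for n in graph.get(c, [])} - seen
        seen |= frontier
    return seen

# One hop from "Thrillers" reaches its immediate neighbors.
print(related_concepts(knowledge_graph, "Thrillers"))
# Two hops also reach "Detective Novels" via "Crime Fiction".
print(related_concepts(knowledge_graph, "Thrillers", depth=2))
```

The design choice the card highlights is visible here: relatedness is encoded directly in the structure, so traversal needs no schema or join logic, which is why the author argues such databases matter for AGI-style reasoning.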


Impacts

General financial impacts: 

Impacts to climate change:



Part 4: Blocks

In this section, I’ll try my best to provide you with some evidence and strategies for the most common arguments on each side. 

AFF A2 NC (Affirmative answers to negative)

A2 Technology is neutral

A Closer Look at Systemic Bias in Tech | Robert F. Smith 

These human biases can create data bias by influencing how new technology creators collect, interpret and label data. The biases can then impact data functionality or results. Some of the most common types of data bias include algorithmic bias, representation bias and decision bias. Algorithmic bias: This form of bias stems from inputting skewed or limited data into a new technology while it is being developed. Implementing skewed data commonly leads to recurring errors that create inequitable results. Representation bias: Representation bias happens when groups of people are disproportionately represented in datasets. This influence leads to unfair outcomes in machine learning and inequitable decisions. Design bias: This form of bias represents the discriminatory actions or beliefs that are embedded in algorithms or technologies throughout the design process. A biased design commonly results in unintentional inequities or discriminatory outcomes.

The following overviews are very important in your response:


A2 AGI allows for various innovations

General defense:

Why Creation Cannot Go Beyond Its Creator: A Look at AGI and Human Intelligence | by Mexasol | Medium To explore why a creation might not go beyond its creator, let’s first break down what it means to create something. In any domain—whether it’s art, engineering, or biology—the creator imparts certain limitations and intentions upon their creation. A painting, no matter how complex, remains a reflection of the artist's vision. Similarly, an engineered system, while sophisticated, can only function within the parameters set by its creators. Now, when it comes to AGI, which aspires to replicate or even surpass human intelligence, the same principle applies. AGI systems are designed based on algorithms, data, and models built by humans. The machine’s "knowledge" is limited to what humans feed it—data sets, rules, training models, and environmental factors. No matter how fast it processes information or how adaptable it becomes, AGI is still a product of human knowledge, reasoning, and creativity.

Medical innovations: 

Climate change innovations:

Hype Or Reality: Will AI Really Solve Climate Change? Another danger is that behavioral changes driven by the mainstream adoption of AI technology are not always entirely environmentally friendly. The emergence of ride-sharing apps like Uber has cut the number of journeys taken in private cars. But it’s also reduced the use of even more environmentally friendly forms of public transport, leading to increased pollution.


NEG A2 AC (Negative answers to affirmative)

A2 AGI is bad for the environment


A2 Labor exploitation

I would recommend a three-prong response on labor exploitation, since it surfaces quite commonly: large amounts of data are currently needed to train models, and large numbers of people are being harmed to extract that data.


A2 Existential Risk



Hopefully, this topic analysis was helpful to your preparation for Last Chance, or, if not, any local or national circuit tournament you may have in mind. If you've made it to the end of this article, it only proves that you're a determined debater who will do amazing things. Remember that you are awesome, and best of luck! Azzy out, and see you around for NSDA nationals! 😉



This marks the end of this post. If you have any further questions, please feel free to email us via our email: resources.debate@gmail.com. Please spread the word to other debaters who you think may find this website useful! Make sure to check out our other posts, as they're guaranteed to help.