Boston, Massachusetts, US
Posts: 10
Originally posted by
Norse_history on May 29, 2025 09:18
It is not easy to come up with the “best” way to approach AI in education, and as both teachers and students have begun to use AI, it is as important as ever that we try. First and foremost, the biggest risk of free and open access to AI among students is the risk of students losing their ability to learn and think critically for themselves. AI, when used, should be a supplementary tool and nothing more. It is clearly wrong for people to allow AI to influence their opinions and learning, as AI is controlled by the few, who may (or may not) have ulterior motives. When students begin to use AI either to formulate their own opinions or to write opinion work, the risk of groupthink increases. Even if a student is simply too lazy or busy to write an opinion piece, the artificially generated words may influence the thinking of other students who read it. Not only does AI pose the risk of badly formed opinions, it also runs the risk of hurting students’ working ability in the future. Students who rely solely on AI are not able to dedicate themselves to important tasks, something which they will undoubtedly be asked to do without (just) AI in the workforce. For some students, including those at our dinner table, ChatGPT and other AI language models are not capable of writing essays at the level we do, so we are more inclined to do our own work. However, for students who haven’t had the same learning opportunities as us, AI may seem like an easy way to secure decent grades for the lowest possible effort. This teaches students not to have any work ethic, which might prove their downfall in professional settings.
The best way to avoid these risks, while at the same time recognizing the increasing importance of AI, is to encourage students to use AI for little things, such as background information that might help them come up with a good essay. Students must also be taught how to write well, and how AI cannot replicate the human writing style. In order to ensure proper usage of AI, students in high schools across the country should have to take some sort of mandatory AI course. In this course, they should be taught the acceptable use of AI, how best to use AI to minimize workloads while maximizing the quality of work, and when AI really shouldn’t be used, and why. This isn’t easy, and there will always be students who abuse it. To protect against that, teachers should not be afraid to assign in-class writing to make sure students can function without any help, as well as use technology to track when a student might be using AI. Don’t punish minor AI use, and major AI use will likely fall.
While I agree that AI should certainly not be used to do the work for a student, I do think there are benefits in using AI to help students learn. One of the benefits of AI is that, unlike just looking something up, a student can ask AI to explain a concept they don't understand in a way that makes sense to them. This use of AI can enhance education: if a student is not learning a topic properly in school, AI explanations can help fill the gap. While this does run into the problem of misinformation, it avoids the student not learning at all, which is arguably the worse outcome. I think this application of AI is what students should learn in an AI use class.
Boston, Massachusetts, US
Posts: 14
Ethics of AI
The concept of AI making decisions in war autonomously, without human intervention, seems more like a dystopian vision than reality; however, now that I know there are jets that can be flown by AI, I believe this is a dystopian present. When AI “autonomously” decides who lives or dies, it removes any human aspect from these decisions, showing that humans are willfully forfeiting rights to AI. Furthermore, we as humans are elevating AI to an even higher status, almost like a god, when we give it the power to kill. This power isn’t anything that humans or AI should have. Even more apocalyptic is the idea that AI can disobey orders made by humans, a science fiction concept that seems less like fiction every day. While I do believe AI will have severe consequences for the ethics of war, I also don’t believe that AI will ever be able to think independently. AI is man-made and requires some algorithm or code to tell it what to do; even when it seems to be making decisions on its own, the AI is referencing some criteria that tell it to commit whatever act. Furthermore, this highlights the issue of accountability in using AI in warfare. You can’t charge an AI algorithm with war crimes, yet there are many people who work to make these AI systems usable in warfare, shifting the playing field. I predict bans on AI would be ineffective, particularly considering how easily accessible these systems are in certain countries such as the United States. Furthermore, company owners and CEOs hold the power of these AI systems in their own hands, not the government. Therefore, how could a lawful ban on AI even happen in the first place if those in power are the ones benefiting from and creating these automatons? I think that, similar to the arms race seen during the Cold War, no government is willing to forfeit this strategic advantage, costing the lives and perhaps the humanity of their own citizens.
Boston, Massachusetts, US
Posts: 15
LTQ 9: The Ethics of AI Response
Originally posted by
iadnosdoyb on May 29, 2025 13:33
The integration of artificial intelligence into education and everyday life presents a complex ethical landscape. In school, the main concerns center around academic integrity. The widespread use of AI has blurred the line between cheating and academic support. While AI can serve as a valuable tool for brainstorming or improving writing, full reliance on it without transparency can undermine the purpose of education. At the same time, we must ask whether the condemnation of AI use is always fair. Educators themselves use AI to create lesson plans, grade essays, and draft emails. Shouldn’t there be a shared standard? If students are penalized for using AI, then educators using it in their work should also be transparent. Ethically, the focus should shift from punishment to teaching responsible AI use, where students are encouraged to disclose when and how they used AI and are graded on their ability to actually complete the assignment. In everyday life, AI raises even more concerns. AI is increasingly influencing people’s thoughts on different subjects. This can help with understanding, but it risks replacing individual reflection with something artificial. If people rely on AI to form opinions, they lose touch with the nuance that makes us human. Human creativity and the ability to wrestle with moral ambiguity are traits that no machine can fully replicate. Ultimately, the ethical use of AI in both education and everyday life requires balance. AI can and should be used as a tool and not a crutch. And both institutions and individuals have a responsibility to ensure its use supports, rather than replaces, human thought.
I agree with the idea that there are many ethical challenges in having AI integrated not only into education but also into our quickly changing daily life. AI as a whole is a difficult thing to ban entirely, but consistently using AI to slowly replace daily life and humanity is also a problem. In schools especially, the line between cheating and support has definitely gotten blurry. I’ve seen how AI can be a powerful tool for brainstorming or improving writing, but I also understand how relying on it too much can take away from the learning process. It raises the question of what is considered human thought and what is artificial intelligence. However, AI is already used within schools. Hidden under the mask of apps or browser extensions, we have tools such as Grammarly, or even Google itself with autofill and autocorrect, that are pushed as good to use. But in the end, they are also AI. At the end of the day, I believe AI should be seen as a tool, not a crutch. We need to find a balance where it supports human thought without replacing it. That means both individuals and institutions have to take responsibility for how it’s used: encouraging openness, fostering understanding, and making sure it ultimately enhances rather than diminishes our capacity to think and learn.
Boston, Massachusetts, US
Posts: 11
AI has become an increasingly popular topic among students, teachers, philosophers, and honestly anyone who takes any interest in recent events. There are countless debates surrounding its legitimacy and ethical concerns, but there is no question that it is an important conversation to have about what AI means for our future. Education and authenticity seem to be the most pressing issues online, and reasonably so. On any social platform you use, you can see the effects of AI and the multitude of jokes that have arisen because of it. There are concerns about whether our new generation of doctors will know what to do, or whether lawyers will actually learn anything, because of their reliance on AI in law and med school. While it seems obvious this won’t really become a problem because of the rigorous tests and time that go into securing those occupations, it does raise the question: when does AI begin to replace authenticity? Cheating with AI isn’t a rare instance, and it is more likely that a student will use it to complete an entire assignment when they don’t feel like doing it, or when they don’t know how to start or end an essay. Is this a real problem, though? It’s hard to say definitively, but it really does depend on the situation. I personally have used AI to aid me in essays if I reach a dead end in my thinking process, asking it to form a transition between two paragraphs or to suggest how to start one so I can get back on track and not stay stuck on one measly sentence; others may feel that this takes away from my own voice or education. I can understand this perspective, since I am limiting my own original ideas in the paper, but I’d say using it on a smaller scale actually helps me find my voice. I often find myself stuck on one part of something, and it discourages me from trying to complete the task at all; when I have that extra help to continue and not become overwhelmed by one small thing, I feel more encouraged to do well on the assignment. However, using tools like AI for a whole assignment can be troubling. It removes the learning process and critical thinking that come with doing actual work. Although some assignments often feel like busy-work or not worth doing, they all serve a purpose: to teach us something new or strengthen a skill we already have, like close reading or analyzing a passage. Simply giving up and having someone or something else do the work diminishes our own ability to learn and automatically relinquishes our integrity. So should we allow AI in school? To an extent, I believe it can be more helpful than harmful when used thoughtfully rather than lazily.
Boston, Massachusetts, US
Posts: 11
Originally posted by
Gatsby on May 30, 2025 10:15
The concept of AI making decisions in war autonomously, without human intervention, seems more like a dystopian vision than reality; however, now that I know there are jets that can be flown by AI, I believe this is a dystopian present. When AI “autonomously” decides who lives or dies, it removes any human aspect from these decisions, showing that humans are willfully forfeiting rights to AI. Furthermore, we as humans are elevating AI to an even higher status, almost like a god, when we give it the power to kill. This power isn’t anything that humans or AI should have. Even more apocalyptic is the idea that AI can disobey orders made by humans, a science fiction concept that seems less like fiction every day. While I do believe AI will have severe consequences for the ethics of war, I also don’t believe that AI will ever be able to think independently. AI is man-made and requires some algorithm or code to tell it what to do; even when it seems to be making decisions on its own, the AI is referencing some criteria that tell it to commit whatever act. Furthermore, this highlights the issue of accountability in using AI in warfare. You can’t charge an AI algorithm with war crimes, yet there are many people who work to make these AI systems usable in warfare, shifting the playing field. I predict bans on AI would be ineffective, particularly considering how easily accessible these systems are in certain countries such as the United States. Furthermore, company owners and CEOs hold the power of these AI systems in their own hands, not the government. Therefore, how could a lawful ban on AI even happen in the first place if those in power are the ones benefiting from and creating these automatons? I think that, similar to the arms race seen during the Cold War, no government is willing to forfeit this strategic advantage, costing the lives and perhaps the humanity of their own citizens.
I completely agree with your point. The idea of AI making life-or-death decisions in war is terrifying. It feels like science fiction, but it is becoming real. When machines decide who lives or dies, it removes human emotion and responsibility, which is dangerous. No one, human or AI, should have that kind of power. You also make a strong point about accountability. AI might carry out actions, but people are the ones who create and control it. If something goes wrong, we cannot blame the AI, which makes it difficult to hold anyone responsible. Banning AI in warfare sounds good, but it is unlikely. Big companies and powerful leaders are creating and benefiting from this technology, so they have no reason to stop. Just like the arms race during the Cold War, no country wants to fall behind. But rushing to develop AI weapons could cost more than just lives. It could cost us our humanity.
Boston, Massachusetts, US
Posts: 14
AI in our modern lives and in education
If you had told people 100 years ago about the world we live in today, they would probably never believe it. Advancements in technology have changed our lives in amazing ways, some of which could never have been expected or thought possible, and which shape our world today. Human progress has been steadily accelerating, growing in speed and significance, and in our modern world it seems every day there is a new invention. We live in a world that is constantly evolving, and recently the room for expansion has grown substantially with the advent of AI. Today, we face a society on the verge of a drastic shift, with the possibility of almost everything we know being changed, all at the mercy of technology. Open AI models like ChatGPT have opened Pandora’s box, and the possibilities for the use of AI are now nearly endless. AI only gets smarter and smarter, and with that fact in mind we must decide now how to move forward. The current systems of education can be completely bested by these AIs, a telling sign of their inadequacy. If we want to keep these institutions, we must rapidly make accommodations which either embrace AI or work around it by demanding more creative thought. However, combating AI is a fruitless endeavour, and resistance to its use today only limits students, whether by reverting classrooms to discard other technologies or by not allowing them to use these powerful tools. The fact is, the classrooms of tomorrow will look drastically different than the ones of today, which in my opinion is a welcome change if it can promote creative and critical thinking. I believe in an embrace of AI for academics, with a complete overhaul of the system. While there may be some obvious backlash, every day more and more jobs are being lost to AI, and the niches which AI cannot fill are being steadily lost. If we want our children to succeed in this new world, we must give them the tools necessary while it is early, for them to stand any chance. This is a common theme in human advancement: as technology becomes more widely available, societies must adapt. For this very reason we don’t need to teach kids how to use a gun, or hunt an animal, or sail a boat, because those aren’t useful or critical skills in our society. So in turn, why would we teach our children how to write an essay on a topic they aren’t interested in, when they can turn to an all-knowing robot to do it for them, and it will serve no use in that society? AI is still scarily underdeveloped and will only become better every day, so as the generation which will shape the next with this powerful tool, it is crucial we ask ourselves how we can benefit from it, and what parts may prove to be too destructive.
Boston, Massachusetts, US
Posts: 14
Response to use of AI in education
Originally posted by
Norse_history on May 29, 2025 09:18
It is not easy to come up with the “best” way to approach AI in education, and as both teachers and students have begun to use AI, it is as important as ever that we try. First and foremost, the biggest risk of free and open access to AI among students is the risk of students losing their ability to learn and think critically for themselves. AI, when used, should be a supplementary tool and nothing more. It is clearly wrong for people to allow AI to influence their opinions and learning, as AI is controlled by the few, who may (or may not) have ulterior motives. When students begin to use AI either to formulate their own opinions or to write opinion work, the risk of groupthink increases. Even if a student is simply too lazy or busy to write an opinion piece, the artificially generated words may influence the thinking of other students who read it. Not only does AI pose the risk of badly formed opinions, it also runs the risk of hurting students’ working ability in the future. Students who rely solely on AI are not able to dedicate themselves to important tasks, something which they will undoubtedly be asked to do without (just) AI in the workforce. For some students, including those at our dinner table, ChatGPT and other AI language models are not capable of writing essays at the level we do, so we are more inclined to do our own work. However, for students who haven’t had the same learning opportunities as us, AI may seem like an easy way to secure decent grades for the lowest possible effort. This teaches students not to have any work ethic, which might prove their downfall in professional settings.
The best way to avoid these risks, while at the same time recognizing the increasing importance of AI, is to encourage students to use AI for little things, such as background information that might help them come up with a good essay. Students must also be taught how to write well, and how AI cannot replicate the human writing style. In order to ensure proper usage of AI, students in high schools across the country should have to take some sort of mandatory AI course. In this course, they should be taught the acceptable use of AI, how best to use AI to minimize workloads while maximizing the quality of work, and when AI really shouldn’t be used, and why. This isn’t easy, and there will always be students who abuse it. To protect against that, teachers should not be afraid to assign in-class writing to make sure students can function without any help, as well as use technology to track when a student might be using AI. Don’t punish minor AI use, and major AI use will likely fall.
I think that the incorporation of AI into the education system is frankly inevitable, and trying to resist change never helps anyone. For that reason, I personally have an even more radical perspective on the issue: I think a large portion of the curriculum as a whole should be modified to have AI as a tool, or even a focus. While this may seem a bit extreme, we already see parallels in how more and more jobs are being lost to AI and its countless capabilities. If we downplay its utility, we allow fewer and fewer kids the ability to thrive in this new world, which will most likely be dominated by AI. At what point do the skills we currently have stop mattering, and which ones will be useful in the future? These are the questions we must ask ourselves. I do believe critical thinking is a fundamental attribute to have, and creativity will be incredibly useful in that world as well, but knowing how to write an interesting hook? Unfortunately not. It's tragic in reality, but it's a fact we must face: if we want to succeed in an AI world, we need to be able to manipulate and utilize AI to keep it under our control.