JaneDoe25
South Boston, Massachusetts, US
Posts: 12

The growing use of AI in schools raises a lot of important ethical questions, and many of them come from deeper problems in the way our education system is set up. A big reason students are turning to AI tools is that the system puts so much focus on grades and test scores. That kind of pressure can make learning feel more like a race to get things done instead of a chance to actually understand something. In the article “Everyone’s Using AI to Cheat at School—That’s a Good Thing,” the author points out that this sudden wave of AI use is showing us how outdated and broken our school systems really are.

One of the biggest questions is around academic honesty. Is it cheating if a student uses AI to help them brainstorm or fix grammar mistakes? It really depends on how the tool is being used. There is a big difference between using AI to help get started and just copying everything it writes. It’s kind of like using a calculator in math class. If you understand the material and use it to check your work, it’s helpful. But if you don’t know what you’re doing at all and rely on the calculator for everything, you’re missing the point. The same goes for students using AI to do their writing. If they’re not engaging with the material at all, they’re not really learning.

There’s also the issue of fairness. Some students have access to more advanced AI tools while others don’t, and that creates an uneven playing field. If both students are graded by the same standard, it’s not really fair. Schools need to think about how to make sure AI is being used equally, and how to be clear about what kind of help is allowed and what’s not.

On top of that, there’s a real risk that students could become less motivated to actually learn. When an AI can write an entire paper in seconds, it’s easy to see why someone might not bother doing the hard thinking themselves. In “Your Chatbot Won’t Cry If You Die,” the author talks about how important human connection is when it comes to truly understanding something. AI can give you facts, but it can’t teach empathy or emotional depth, which are a big part of learning, especially in subjects like literature and history.

At the end of the day, AI can be a really helpful tool if it’s used in the right way. But teachers and schools need to rethink how they teach and grade so students are still encouraged to think for themselves. Creativity, discussion, and the ability to reflect are things AI can’t replace. That’s where the real value in education lies, and we should be careful not to lose sight of that.

littleprincess26
Boston, Massachusetts, US
Posts: 14

Question 2

AI is becoming more and more present in our everyday lives, making it much harder to avoid. I personally think it is damaging to humans if we use it to replace friends and therapists, because it isn't the same at the end of the day. It feels wrong to depend on something that doesn't have genuine emotions or thoughts. Human interaction is not replaceable by anything. The article "Your Chatbot Won't Cry If You Die" talks about how these AI bots are able to comfort people, but they are only programmed to seem like they care. I believe this is an issue because friendships and relationships aren't always going to be perfect, and humans won't always say what you want to hear. AI, on the other hand, is programmed to say what you want to hear, which can cause many people to lose the ability to overcome disagreements. I also think AI makes us lazier in general; since we can get answers or comfort so quickly, we use less and less critical thinking. I think many people will choose AI because it is easier to talk to, since it won't judge you. Especially for younger kids, I think AI will negatively impact their futures as they grow up with this type of technology. They will be less likely to think on their own, be creative, and develop human connections. Additionally, in developed nations it is super easy to get access to AI technology, which can be extremely dangerous. This also creates inequality, as developing nations will have limited access to technology like this, which can put them behind in many ways. The gap between developed and developing nations will only continue to grow as AI grows. I also think AI is causing many people to have identity crises: traits such as the ability to feel emotions, be creative, think deeply, and create art and music have always been human, but now that AI is trying to replicate them, many people will value these traits less.
For example, many artists may have an identity crisis if they see AI able to create art pieces in seconds while they have to spend much more time creating their work. Overall, I think AI is here to stay, and I think we can utilize it in helpful ways, but we need to be extremely cautious and put limits in place to avoid overdependence.

redpanda
Boston, MA, US
Posts: 15

The Ethics of AI in Education, Everyday Life and Warfare

I believe AI can be a useful tool, but its use must be limited. It’s acceptable to use AI as a guide or source of inspiration, like asking it to explain a topic one might be struggling to understand, make a study guide, or help organize thoughts. Copying AI responses word for word is not only unethical but also takes away from the experience and opportunity to learn. Students don’t gain anything when they rely solely on AI to do their work. This also connects to the idea of teachers using AI to grade papers: if students are punished for using AI to do assignments, then teachers should also face consequences for using AI to grade those same assignments. I find that picking and choosing who can use AI creates a hypocritical and unfair learning environment. A proposition that comes to mind as AI becomes more integrated into daily life is that education needs to shift. Not every single assignment or activity in school needs to rely on technology; schools should encourage more hands-on, engaging learning in order to promote participation. I don’t think students are necessarily bored, but more so overwhelmed. When the workload is massive, students may feel like the only way to survive and stay afloat is by using AI. Teachers can fix this by reducing excessive assignments and making learning more dynamic. Through research, I have also found that AI poses environmental and ethical concerns. The amount of water needed to cool servers is extremely unsustainable, especially at a time when droughts are worsening. There’s also the loss of critical thinking. Recently I saw a video of a man who relied solely on ChatGPT for information about a travel visa. It gave him the wrong answer, and I’m assuming he was denied entry into the country. People are becoming too dependent on AI, even emotionally. I also saw a girl treat AI like her boyfriend, which I found very dystopian.
While AI can help express feelings and offer advice, it lacks empathy and real understanding of human life. The article “Your Chatbot Won’t Cry If You Die” captures this idea perfectly: it highlights how AI might be able to mimic empathy, but in the end, it doesn’t actually feel anything. The article points out that a chatbot won’t grieve you, no matter how human it may seem, ultimately showing that emotional reliance on AI can be dangerous.

MookieTheGoat
Boston, Massachusetts, US
Posts: 10

The ethics of AI


~ Is it the role of humanity to play the "creator"? What obligations, if any, do we have to our creations? Does this change if they are sentient? ~ AI can replace human interaction, but should it? Should AI replace doctors, therapists, teachers, and even friends?



When you think of artificial intelligence, you normally think of a computer that can act without human input, but in reality this is not the case. In a study, Milton Mueller, a professor at Georgia Tech, found that AI systems “must be told what objectives to pursue and must be trained to pursue them,” showing that humans are always in control. This, however, raises more questions than it answers. For example, if AI is so powerful, should humans have the ability to control it? To answer bluntly: no, humans should not have the power to control artificial intelligence.

The main reason humans should not be allowed to wield the power of artificial intelligence is that it goes beyond anything we have the ability to understand, let alone control. As we saw in the documentary, say you design an AI to make a medicine that is supposed to cure diseases, but then a 1 in the code is changed to a 0; the AI will instead try to make a substance that creates the worst possible disease. This is just one case of the misalignment problem, and if AI continues on its current trajectory, this power will be put in the hands of everyone. Logically, you might then think this power should be put in the hands of a few people. However, this can create even more harm, since one of the powers of AI is transhumanism, the idea that people can use technology to improve their traits and make themselves “superhuman.” When this power is in the hands of the few, as professor Francis Galibert at the University of Rennes puts it, if “a small fraction of the population could have access to it, [they] creating ipso facto a caste of super humans who can act to the detriment of the rest of the population and which over time and mutations causing superior abilities, would dominate all commoners”. This shows that no matter the use case, since humans wield AI’s power, we won’t be able to derive a positive result.

projectvictory
Dorchester, Massachusetts, US
Posts: 11

LTQ 9: The Ethics of AI

Every day, we are constantly shifting. Someone from the 1920s would marvel at the opportunities technology has created for us and the ever-adapting world we live in today. At the same time, we have created weapons with the means to destroy our society as a whole. And with the introduction of artificial intelligence, those means grow more and more powerful. AI in its purest form is helpful, but it becomes detrimental to mankind because we make it detrimental. Because of AI, students are now using chatbots to do their homework assignments, write essays for them, and sometimes even give them answers on tests. This decreases not only students’ confidence in their work but also their incentive to study. In the words of Tyler Cowen, in an article titled “Everyone’s Using AI to Cheat at School. That’s a Good Thing.”: “if the current AI can cheat effectively for you, the current AI can also write better than you”. By allowing students to consistently use AI to replicate their writing, we are setting them up for failure, and we lose future writers. If we allow them to use AI to answer their math homework, we lose future math teachers, technicians, and mathematicians. We lose the chance for kids to display their expertise, whether in English, science, math, or any other subject. Sure, these children will be able to pass their classes, make honor roll, and even earn scholarships for their good grades. But they will lose everything else. Relationships between teachers and students may fail; another article on the effect of artificial intelligence on everyday people, titled “AI Will Change What It Is to Be Human. Are We Ready?”, mentions that “People who use AI regularly often say one of the benefits is they no longer bore people with their problems or bother people with questions.
Free Press columnist Tyler Cowen wrote last week that he likes asking AI questions about work, because it means ‘I don’t have to query my colleagues nearly as often.’” While using AI can save time, it also limits human interaction and leads people to be more insecure in their ability to ask for help. If we as students are the future, we can’t make artificial intelligence the center of it before things begin to turn left.

Wolfpack1635
Boston, Massachusetts, US
Posts: 15

Originally posted by bookshelf on May 29, 2025 09:36

For AI in education, I don't think it should be used as much as it is. However, I do think the problem comes from a fundamental flaw in the current education system, which is designed just to “check boxes.” For this reason, students see AI as just another way to “check a box,” rather than a way of actually learning. We live in a society so obsessed with having a result that most people disregard what it took to get there. This leads students to think that as long as the result is good, the process can be anything they want. Also, too many things are expected of students, causing them to resort to AI to take things off their plate. To get into college, you need a high GPA AND extracurriculars. When faced with burnout, AI is the best option for some who have teachers that are less understanding about late work or extenuating circumstances. This is in line with the mental health crisis among our generation as well, in which students struggling with their mental health can use AI to basically eliminate the stress of school instead of failing. Additionally, there is a serious screen addiction plaguing our generation, whether it be TikTok, YouTube, or video games. Students are more drawn to AI if they want to spend their afternoon and evening on video games or social media. The reliance on AI stems from systemic issues in the school system, and can only be alleviated by fixing the problems within.

I think teachers who use AI should not punish their students for using AI, as it is their job to lead by example. I think mental effort should be reciprocal: if a teacher worked hard on a lesson plan, classwork, homework, etc., then they deserve students who work just as hard. However, if they made the majority of it using AI, they should expect students to do the majority of their work with AI as well. The draft BPS AI proposal encourages teachers to use AI to create assessments, stating that it can produce “quizzes, rubrics, and project prompts.” If a teacher made a test using 100% AI, I don’t see a moral issue with students using AI on the test, since it is the teacher who set the precedent. Maybe that seems extreme, but I don’t think it’s fair to hold the students to different standards.


I really like your idea that it is not AI that is fundamentally flawed; it is the education system itself that makes students focus on the end goal of grades rather than the learning itself. In education we are focused on grades to get into a good college, then a good job, and so on. However, at a certain point a student is responsible for their own education; if they truly want to learn and succeed, they should not have to rely on their teacher to make that possible. Using AI takes away from the learning itself.

bookshelf
Boston, MA, US
Posts: 15

Response

Originally posted by username on May 29, 2025 09:23

I feel like I talk a lot about art and my fears of AI art in general, because as an artist (musical) myself, the creation of art has always been something that I personally feel is exclusively human. To me, what makes the art isn’t the actual outcome of art itself but instead the story and meaning behind the art. Whenever I discuss AI “art” then I say it isn’t art because it doesn’t have any true soul put into it.

This feels like a digression from the questions I’m being asked, but I think it’s fundamental to understand what makes us human, and by turning this to AI we’re losing what makes us human. As mentioned during yesterday’s dinner table we’re losing the ability to create, as we’re losing boredom through the development of these new technologies that just want our engagement for money. The question I’ve been asking myself is “Does this mean the death of true, genuine art itself? Or will artists manage to survive? Will art still be important in the future? What does losing art mean for humanity?”

The article “AI Will Change What It Is to Be Human, Are We Ready?” says that “Blue-collar workers (carpenters, gardeners, handymen, and others who do physical labor) will become more valuable. And white-collar knowledge jobs, many of which are already near or under the waterline, like legal research and business consulting, will diminish in value.” It made me ask myself, “What is the point of all this?” I thought the point was that we’d develop robots so we wouldn’t have to do the challenging, heavy-labor jobs that most people do not choose to work in. It made me wonder: if these jobs become valued more than ones you might need a degree for, does that accelerate the trend of anti-intellectualism? If AI is making all of the art and taking all the non-demanding jobs, wouldn’t that just increase the gap between the rich and the poor? If we aren’t making art as much as we used to, could AI accelerate the rise of fascism because we aren’t willing to engage with our emotions?

I have no answers to any of these questions, but I can say one thing for certain: I don’t think we can reverse this trend, but I do not want to live in this AI future.

I feel like a big part of art is the process of creating it and the personal experiences that inspired it. For example, Ariana Grande's Eternal Sunshine is about moving on from her divorce, learning to love herself, and so much more. On TikTok, I heard an AI song, and it was astonishing how the lyrics did not come from a life experience, because they didn't come from something that was alive. Instead, the model was trained on music that people worked hard on. Also, I watched a YouTube video on how Spotify is using AI to make jazz and then pushing it out to listeners so it can profit off the streams. It was insane to listen to it and have it sound like a human-made jazz song. This is so disrespectful to jazz as a concept, which has such an important and beautiful history, especially that of Black Americans experiencing segregation and discrimination. All in all, AI for art is honestly so scary.

JaneDoe25
South Boston, Massachusetts, US
Posts: 12

Originally posted by MookieTheGoat on May 30, 2025 09:43


~ Is it the role of humanity to play the "creator"? What obligations, if any, do we have to our creations? Does this change if they are sentient? ~ AI can replace human interaction, but should it? Should AI replace doctors, therapists, teachers, and even friends?



When you think of artificial intelligence, you normally think of a computer that can act without human input, but in reality this is not the case. In a study, Milton Mueller, a professor at Georgia Tech, found that AI systems “must be told what objectives to pursue and must be trained to pursue them,” showing that humans are always in control. This, however, raises more questions than it answers. For example, if AI is so powerful, should humans have the ability to control it? To answer bluntly: no, humans should not have the power to control artificial intelligence.

The main reason humans should not be allowed to wield the power of artificial intelligence is that it goes beyond anything we have the ability to understand, let alone control. As we saw in the documentary, say you design an AI to make a medicine that is supposed to cure diseases, but then a 1 in the code is changed to a 0; the AI will instead try to make a substance that creates the worst possible disease. This is just one case of the misalignment problem, and if AI continues on its current trajectory, this power will be put in the hands of everyone. Logically, you might then think this power should be put in the hands of a few people. However, this can create even more harm, since one of the powers of AI is transhumanism, the idea that people can use technology to improve their traits and make themselves “superhuman.” When this power is in the hands of the few, as professor Francis Galibert at the University of Rennes puts it, if “a small fraction of the population could have access to it, [they] creating ipso facto a caste of super humans who can act to the detriment of the rest of the population and which over time and mutations causing superior abilities, would dominate all commoners”. This shows that no matter the use case, since humans wield AI’s power, we won’t be able to derive a positive result.

I think the "changing a 1 to a 0" problem is the scariest part of this. The more we advance AI, the more people will have access to it. People could make weapons or toxic chemicals just by asking the computer to do it. There are no regulations or people watching to make sure AI is used correctly, and I don't think there ever will be. Because we've created something so vast and powerful, there is no possible way to ensure it is being used correctly and safely. As for "transhumanism," it makes me worry about people who don't have access to AI. Are these people at a disadvantage because they are not superhuman like everyone else? Even if it goes against your own morals to use AI, if everyone else is doing it, you eventually will, too.

projectvictory
Dorchester, Massachusetts, US
Posts: 11

Originally posted by redpanda on May 29, 2025 22:07

I believe AI can be a useful tool, but its use must be limited. It’s acceptable to use AI as a guide or source of inspiration, like asking it to explain a topic one might be struggling to understand, make a study guide, or help organize thoughts. Copying AI responses word for word is not only unethical but also takes away from the experience and opportunity to learn. Students don’t gain anything when they rely solely on AI to do their work. This also connects to the idea of teachers using AI to grade papers: if students are punished for using AI to do assignments, then teachers should also face consequences for using AI to grade those same assignments. I find that picking and choosing who can use AI creates a hypocritical and unfair learning environment. A proposition that comes to mind as AI becomes more integrated into daily life is that education needs to shift. Not every single assignment or activity in school needs to rely on technology; schools should encourage more hands-on, engaging learning in order to promote participation. I don’t think students are necessarily bored, but more so overwhelmed. When the workload is massive, students may feel like the only way to survive and stay afloat is by using AI. Teachers can fix this by reducing excessive assignments and making learning more dynamic. Through research, I have also found that AI poses environmental and ethical concerns. The amount of water needed to cool servers is extremely unsustainable, especially at a time when droughts are worsening. There’s also the loss of critical thinking. Recently I saw a video of a man who relied solely on ChatGPT for information about a travel visa. It gave him the wrong answer, and I’m assuming he was denied entry into the country. People are becoming too dependent on AI, even emotionally. I also saw a girl treat AI like her boyfriend, which I found very dystopian.
While AI can help express feelings and offer advice, it lacks empathy and real understanding of human life. The article “Your Chatbot Won’t Cry If You Die” captures this idea perfectly: it highlights how AI might be able to mimic empathy, but in the end, it doesn’t actually feel anything. The article points out that a chatbot won’t grieve you, no matter how human it may seem, ultimately showing that emotional reliance on AI can be dangerous.

Hey RedPandiana, great response! I agree that AI can be useful for struggling students who might need inspiration, but that AI ultimately doesn't help students be the best version of themselves. I think one way to regulate this would be to set a blanket limit on the amount of AI use detected in essays and assignments: there should be a certain threshold that both students AND teachers cannot pass, to ensure that both parties are still doing what is asked of them. In moderation, AI is a strong tool for the future, and can maybe provide us with answers we didn't even know we needed, or answers we have never known before. But as you said, it can become unsustainable and detrimental to the environment, and for that reason (and many more), we ought to be extremely careful when using it.

Norse_history
Charlestown, MA, US
Posts: 15

Response to Big Lenny

Originally posted by Big Lenny on May 29, 2025 09:35

Technology is meant to make our lives easier (for the most part). Our generation doesn’t have to work and think as much as three or four generations ago because, for many of us, technology makes our clothing and homes, provides our modes of communication, and facilitates the production of music, art, writing, schoolwork, etc. The benefit of the technology we have access to is that it can allow us to shift our focus toward our passions. With AI, the goal is not to do the hard thinking for us so that we can think about what we are interested in; the goal is to do ALL the thinking for us so that we can “relax” and “work less.” While developers of AI do believe that people will eventually work less as AI spreads, that doesn’t necessarily mean it is positive for our critical thinking skills, creativity, or empathy. We can already see in the younger generation of students how technology like social media and AI is eroding social skills and intellectual activity.

Given the current state of American education, AI will likely make school easier for students but prevent them from learning essential skills. Literacy rates of both adults and children in the U.S. are dropping for a number of reasons, but the use of AI in school will absolutely exacerbate the issue. Students can and do write full essays with AI. This response could have been written by AI, and I would not have needed to consider any of these questions for myself. This not only takes value away from writing as a skill and an art form but also stops students from thinking deeply about literature, history, or society. I have heard a student in my English class tell our teacher that English shouldn’t be a required class because writing “isn’t important” like math or science. AI offers well-put and often “better” writing than we can create, so when work piles up, it is a tempting and infinitely easy option for many tired students.

I became especially worried about AI and social media when an old teacher of mine was telling me about his eighth grade class a few days ago. I was expecting him to say that they were too rowdy and refused to quiet down, but it was actually the opposite—for the entire year, each section of his students sat in silence, refusing to acknowledge the teacher even when he called on specific students. They didn’t even chat amongst themselves. To me, a deafeningly silent classroom of tweens is unheard of, and completely different from the classrooms that I grew up in. Although many factors play into this, it may be an example of how the use of AI discourages students from participating in class or any intellectual activities.

Although the spread of AI and social media can lead to the absence of real social interaction, creativity, and intellectual stimulation, it is unfair to ask certain students not to use it when they are already competing with AI. It will eventually be odd for students not to use AI to write essays for them, as the value of writing as a skill will diminish (“Everyone’s Using AI To Cheat at School. That’s a Good Thing.”). At the end of the day, AI seems like an inevitable aspect of education and everyday life, so the first step to retain youthful creativity and intelligence is to emphasize the inherent importance of literature, art, and empathy, so that AI can serve as a tool rather than an obstacle.

I agree, for the most part, that AI technology has had a major influence on the way people learn and on their work ethic. I appreciate that Big Lenny included a very nuanced argument, considering how AI has allowed some teenagers more time to pursue their interests, but also has the con of harming critical thinking. I think that AI should not change the amount of work or effort a student is required to put in, even if that requires teachers to adapt. They might not need to assign harder work, per se, but they may have to think carefully about how to have students put in similar effort with AI as they did before AI. This way, students don't lose their work ethic. Students will still be able to pursue their interests, just as they did before AI. I find the points about socialization in the response interesting as well, as I have always been pretty social and have not really seen AI or social media affect that. For other students, however, I agree that AI might change the way kids interact, and in a harmful way.

Marcus Aurelius
Boston, MA, US
Posts: 15

Originally posted by username on May 29, 2025 09:23

I feel like I talk a lot about art and my fears of AI art in general, because as an artist (musical) myself, the creation of art has always been something that I personally feel is exclusively human. To me, what makes the art isn’t the actual outcome of art itself but instead the story and meaning behind the art. Whenever I discuss AI “art” then I say it isn’t art because it doesn’t have any true soul put into it.

This feels like a digression from the questions I’m being asked, but I think it’s fundamental to understand what makes us human, and by turning this to AI we’re losing what makes us human. As mentioned during yesterday’s dinner table we’re losing the ability to create, as we’re losing boredom through the development of these new technologies that just want our engagement for money. The question I’ve been asking myself is “Does this mean the death of true, genuine art itself? Or will artists manage to survive? Will art still be important in the future? What does losing art mean for humanity?”

The article “AI Will Change What It Is to Be Human, Are We Ready?” says that “Blue-collar workers (carpenters, gardeners, handymen, and others who do physical labor) will become more valuable. And white-collar knowledge jobs, many of which are already near or under the waterline, like legal research and business consulting, will diminish in value.” It made me ask myself, “What is the point of all this?” I thought the point was that we’d develop robots so we wouldn’t have to do the challenging, heavy-labor jobs that most people do not choose to work in. It made me wonder: if these jobs become valued more than ones you might need a degree for, does that accelerate the trend of anti-intellectualism? If AI is making all of the art and taking all the non-demanding jobs, wouldn’t that just increase the gap between the rich and the poor? If we aren’t making art as much as we used to, could AI accelerate the rise of fascism because we aren’t willing to engage with our emotions?

I have no answers to any of these questions, but I can say one thing for certain: I don’t think we can reverse this trend, but I do not want to live in this AI future.

I think you make a lot of really interesting points that I myself have been pondering. I agree that art made by AI isn’t actually art, because all AI can do is respond to prompts and add realism. What AI can’t do is undergo the painstaking emotional and physical process of creating art. I think it’s really sad that people are turning to it to make art. In response to your question “Does this mean the death of true, genuine art itself?” I would have to say that society has the potential to go either way. It very much could mean the death of art as people turn to AI for more and more things, but on the flip side, because AI is taking over so many jobs and other aspects of life, people may have the opportunity to be more creative and have more time to create art. I am definitely more worried that we are leaning toward the former, though. Overall, I think your response was really interesting and engaging; however, I feel like you could have easily expanded more on your points, offered more of your own opinion, and posed more questions.

Nonchalant Dreadhead
Boston, Massachusetts, US
Posts: 15

Originally posted by bookshelf on May 29, 2025 09:36

For AI in education, I don't think it should be used as much as it is. However, I do think its use comes from a fundamental flaw in the current education system, which is designed just to “check boxes.” For this reason, students see AI as just another way to “check a box,” rather than a way to actually learn. We live in a society so obsessed with having a result that most people disregard what it took to get there. This leads students to think that as long as the result is good, the process can be anything they want. Also, too many things are expected of students, causing them to resort to AI to take things off their plate. To get into college, you need a high GPA AND extracurriculars. When faced with burnout, AI is the best option for some who have teachers that are less understanding about late work or extenuating circumstances. This is in line with the mental health crisis among our generation as well, in which students struggling with their mental health can basically eliminate the stress of school instead of failing. Additionally, there is a serious screen addiction plaguing our generation, whether it be TikTok, YouTube, or video games. Students are more drawn to AI if they want to spend their afternoon and evening on video games or social media. The reliance on AI stems from systemic issues in the school system, and can only be alleviated by fixing the problems within.

I think teachers who use AI should not punish their students for using AI, as it is their job to lead by example. I think mental effort should be reciprocal: if a teacher worked hard on a lesson plan, classwork, homework, etc., then they deserve students who work just as hard. However, if they made the majority of it using AI, they should expect students to do the majority of it with AI as well. The draft BPS AI proposal encourages teachers to use AI to create assessments, stating that it can create “quizzes, rubrics, and project prompts.” If a teacher made a test using 100% AI, I don’t see a moral issue with students using AI on the test, since it is the teacher who set the precedent. Maybe that seems extreme, but I don’t think it’s fair to hold the students to different standards.

Overall, I really agree with everything you are saying; I personally spoke about how it is not AI that is the problem, but the flaw in the current educational system. People in society are so worried about getting the best possible result and being the best that they tend to forget about the journey and the struggle to get there, worrying that they will fall short. I also really agree that the reason students tend to turn to AI is the amount of stress and pressure students now face to get into a good college, eventually leading to burnout. I hadn't really thought of it, but social media is also a really big part of why students turn to AI, because they tend to spend hours on it nightly instead of spending time on work. I also agree that teachers who use AI cannot be upset when students do the same, because students are less likely to put in effort knowing that their teachers did not.

aldoushuxley
Jamaica Plain, Massachusetts, US
Posts: 15

Originally posted by Norse_history on May 29, 2025 09:18

It is not easy to come up with the “best” way to approach AI in education, and as both teachers and students have begun to use AI, it is as important as ever that we try. First and foremost, the biggest risk of free and open access to AI among students is the risk of students losing their ability to learn and think critically for themselves. AI, when used, should be a supplementary tool and nothing more. It is clearly wrong for people to allow AI to influence their opinions and learning, as AI is controlled by the few, who may (or may not) have ulterior motives. When students begin to use AI either to formulate their own opinions or to write opinion work, the risk of groupthink increases. Even if a student is simply too lazy or busy to write an opinion piece, the artificially generated words may influence the thinking of other students who read it. Not only does AI pose the risk of badly formed opinions, it also runs the risk of hurting students’ ability to work in the future. Students who rely solely on AI are not able to dedicate themselves to important tasks, something they will undoubtedly be asked to do without (just) AI in the workforce. For some students, including those at our dinner table, ChatGPT and other AI language models are not capable of writing essays at the level we do, so we are more inclined to do our own work. However, for students who haven’t had the same learning opportunities as us, AI may seem like an easy way to secure decent grades for the lowest possible effort. This teaches students not to have any work ethic, which might prove their downfall in professional settings.


The best way to avoid these risks, while at the same time recognizing the increasing importance of AI, is to encourage students to use AI for little things, such as background information that might help them come up with a good essay. Students must also be taught how to write well, and how AI cannot replicate the human writing style. To ensure proper usage of AI, students in high schools across the country should have to take some sort of mandatory AI course. In this course, they should be taught the acceptable uses of AI, how best to use AI to minimize workloads while maximizing quality of work, and when AI really shouldn’t be used, and why. This isn’t easy, and there will always be students who abuse it. To protect against that, teachers should not be afraid to assign in-class writing to make sure students can function without any help, as well as to use technology to track when a student might be using AI. Don’t punish minor AI use, and major AI use will likely fall.

I really like all your points, especially the concern that overreliance on AI could erode students’ ability to think critically and independently. I agree that AI should be used as a supplement, not a substitute. The idea that AI can shape groupthink by subtly influencing students’ perspectives—even when used just to "help" with opinion writing—is a powerful warning. It reminds us how essential it is to maintain space for genuine human reflection, especially in education.

I also really like your suggestion of implementing a required AI literacy course in schools. Teaching students not just how to use AI, but when and why to use it—or not—would go a long way in addressing the ethical and academic challenges AI presents. Your point about equity is especially important too: not all students have access to the same resources or instruction, which means AI might create as many problems as it solves unless we address that gap head-on. Your final suggestion of balancing AI monitoring with in-class writing seems practical and fair—promoting accountability without punishing curiosity.

username
Boston, Massachusetts, US
Posts: 15

AI LTQ MakeArtNotWar Response

Originally posted by MakeArtNotWar on May 29, 2025 14:06

ChatGPT does everything for us: it handles our math homework, drafts our emails, and gives us an endless well of brainstorming ideas. What if it could fight our wars as well?

“Could” is maybe not the right word. It can. Already, U.S. arms technicians are working with autonomous drones and fighter planes as aids and expendable assets. But does AI make the perfect soldier? It can follow directions and calculate risks and solutions to problems within seconds, but it lacks one vital ability: morality. Stationed at an outpost dedicated to screening for possible undercover enemy soldiers, one military official came across a young girl who was working as a hostile spy. Under the rules of law, which make distinctions between civilians and soldiers, it would have been legal to shoot that ten-year-old girl, but “the thought never crossed [his] mind.” An AI, compliant with the international laws of war, would not have such hesitation. The problem presents itself: “how do we teach AI the difference between what is legal and what is right?”

The issues don’t stop there. The U.S. is not the only country experimenting with AI in warfare—Russia, Ukraine, and Israel have already implemented this technology into ongoing conflicts. As more and more countries start to use it, more will be pressured to follow—if they can. The maintenance of AI, such as keeping machines cool and bug-free, is enormously expensive. World powers may afford it easily, but other countries, such as those in the global south, might not have the funds to support such advancements. Already disadvantaged from economic imperialism and ongoing exploitation of resources, this would cripple their power, and make them all the more reliant on the global superpowers.

Additionally, the total removal of humans from the battlefield will not only raise questions of morality, but also prolong these conflicts. Fatalities, terrible as they are, offer a convincing reason for countries to shorten conflict or avoid it altogether. Currently, nations normally try every avenue of negotiation and compromise before resorting to a war that could inflict painful casualties and injuries to their citizens. Without this risk, countries might not have such reservations and would resort to warfare faster, and maintain it longer. This could have devastating effects on the environment, infrastructure, and civilian lives.

So how do we remediate this? As with everything in global politics, there is no clear answer. A ban on Artificial Intelligence in warfare is the obvious solution, but with world powers like America and Russia already enjoying the benefits of this technology, such mandates are likely to be vetoed or simply ignored, with AI advancements continuing illegally. Our best hope is to attempt to enforce heavy limits on what a country can do with automation and raise discussions on AI in warfare. Americans can also attempt to pass bills within our own government to limit AI use.

I agree with much of this post; to me, the use of AI in war is an incredibly slippery slope and one of many ways that warfare could be worsened in the future. I think the example of the little girl, whom an AI would have legally shot, is a great illustration of why everyone should be alarmed by this. AI could worsen human rights violations and bring about new catastrophes in war. In terms of a solution, I both agree and disagree with the one proposed here, as I think there needs to be some legislation: we need to ban AI war planning. The major problem with bans on the technology, however, is that there is no realistic way to enforce one, with countries like Russia, Israel, and the United States either refusing to sign any treaty or signing it but refusing to follow its clauses.

1984_lordoftheflies
Boston, Massachusetts, US
Posts: 15

Reply - makeartnotwar

Originally posted by MakeArtNotWar on May 29, 2025 14:06

ChatGPT does everything for us: it handles our math homework, drafts our emails, and gives us an endless well of brainstorming ideas. What if it could fight our wars as well?

“Could” is maybe not the right word. It can. Already, U.S. arms technicians are working with autonomous drones and fighter planes as aids and expendable assets. But does AI make the perfect soldier? It can follow directions and calculate risks and solutions to problems within seconds, but it lacks one vital ability: morality. Stationed at an outpost dedicated to screening for possible undercover enemy soldiers, one military official came across a young girl who was working as a hostile spy. Under the rules of law, which make distinctions between civilians and soldiers, it would have been legal to shoot that ten-year-old girl, but “the thought never crossed [his] mind.” An AI, compliant with the international laws of war, would not have such hesitation. The problem presents itself: “how do we teach AI the difference between what is legal and what is right?”

The issues don’t stop there. The U.S. is not the only country experimenting with AI in warfare—Russia, Ukraine, and Israel have already implemented this technology into ongoing conflicts. As more and more countries start to use it, more will be pressured to follow—if they can. The maintenance of AI, such as keeping machines cool and bug-free, is enormously expensive. World powers may afford it easily, but other countries, such as those in the global south, might not have the funds to support such advancements. Already disadvantaged from economic imperialism and ongoing exploitation of resources, this would cripple their power, and make them all the more reliant on the global superpowers.

Additionally, the total removal of humans from the battlefield will not only raise questions of morality, but also prolong these conflicts. Fatalities, terrible as they are, offer a convincing reason for countries to shorten conflict or avoid it altogether. Currently, nations normally try every avenue of negotiation and compromise before resorting to a war that could inflict painful casualties and injuries to their citizens. Without this risk, countries might not have such reservations and would resort to warfare faster, and maintain it longer. This could have devastating effects on the environment, infrastructure, and civilian lives.

So how do we remediate this? As with everything in global politics, there is no clear answer. A ban on Artificial Intelligence in warfare is the obvious solution, but with world powers like America and Russia already enjoying the benefits of this technology, such mandates are likely to be vetoed or simply ignored, with AI advancements continuing illegally. Our best hope is to attempt to enforce heavy limits on what a country can do with automation and raise discussions on AI in warfare. Americans can also attempt to pass bills within our own government to limit AI use.

I definitely agree with your assessment that AI in warfare will increase the advantage developed nations have over undeveloped ones. I think it's sort of like a Pandora's box: although I believe everybody would be better off if our government did not use this technology, it obviously will, and there's nothing we can do to stop it. The idea of entrusting anybody's life to an autonomous robot is dystopian. I also fear that, as you said, now that there's even less of a human and economic cost to war, the US will wage more wars, especially against countries in the global south that obviously don't stand a chance against us. On top of this, it's interesting to bring the idea of the military-industrial complex into this discussion. The hype around this technology ties back to the billionaires who own the corporations that develop it, lobbying the government to keep spending on the DOD. That's another reason I am skeptical of this technology: I see its development as benefiting the rich people who own the companies that came up with it and produce it, and nobody else. Certainly not the people at the other end of the AI drone, and not us either, when our government pumps our tax dollars into the DOD instead of the public good.
