Ms. Bowles
US
Posts: 56

Word Count Requirement: 350-500 words


Sources to Reference:


Please refer to the ideas from at least one of the sources in your response, either by describing, quoting, or paraphrasing them, and please respond in some way to only one of the question sets. You can also refer to the documentary that we watched as a class about AI in warfare.



Questions to Consider:


1. What are the ethical considerations of using AI in education? Please respond to any number of these questions that you discussed with your peers during our Dinner Table Discussion:


~In what ways have the current structural issues in our education system contributed to so many students' reliance on AI as an academic tool?

~How does the widespread use of AI tools challenge traditional definitions of academic integrity? Is using AI always dishonest? Where or how do we draw the line between cheating and using AI as a tool?

~Should schools prioritize in-person skills like discussion and communication to ensure that students can still think critically?

~Is it wrong to let AI influence and even form our opinions and thoughts on world events, history and literature? Does this mean that we are losing the ability to reflect on the commonalities that make us human?

~As the use of AI to cheat in school rises and grades become obsolete, will networking and personal connections be valued more by employers? Does this work against people who are introverted or who struggle with social interaction?

~Do you think that the use of AI actually makes students less incentivized to participate and learn in class? Are students bored because they don’t really need to think much any more?

~Should teachers who use AI to grade papers be punished in the same way that students who use AI to write papers are punished? In theory, educators get paid, in part, to think for a living; is it unethical for them to offload that job to AI?

~How can you ensure that the use of AI in schools is equal and does not give anyone an advantage? Is it fair for one student to do the work and another to use AI for the entire thing and have them be graded on the same rubric?


2. What are the ethical considerations of using AI in everyday life? Please respond to any number of these questions that you discussed with your peers during our Dinner Table Discussion:


~What characteristics are so uniquely human that regardless of how far scientific and technological advancements go, they will never truly be able to be replicated by AI?

~With AI replacing many people in more "intellectual" jobs, is there a risk that we will become dumber? Worse at thinking critically? More likely to blindly follow others? Will we lose our empathy and emotional purpose as humans?

~Does AI pose the worst identity crisis that humanity has ever faced? Is it possible to scale it back now that we have begun using AI?

~ Is it the role of humanity to play the "creator"? What obligations, if any, do we have to our creations? Does this change if they are sentient?

~How might AI create a disparity in the social fabric of advanced, developed countries vs underdeveloped countries that lack technological innovations?

~AI can replace human interaction, but should it? Should AI replace doctors, therapists, teachers and even friends?

~Do you think the use of AI as a form of comfort is dystopian? Won't the use of AI as a means of comfort mean that society becomes less dependent on real relationships, and that AI will just feed people's egos?

~Will people begin to prefer AI because it allows them to avoid facing their own flaws and the flaws of those around them?


3. What are the ethical considerations of using AI in warfare? Please respond to any number of these questions that you discussed with your peers during our Dinner Table Discussion:


~Should AI be allowed to make autonomous decisions without human oversight on combat missions? What if AI, currently controlled by human operators, reaches a point of disobeying human commands?

~Do AI weapons systems dehumanize warfare? Could that potentially be a good thing where warfare is no longer waged by humans, thus ultimately saving lives? Will that potentially prolong wars because there is less of a human cost?

~Should the efficiency, precision, strategic advantages and speed of AI warfare outweigh the ethical concerns? Is there a way to balance these concerns with the benefits?

~What happens when AI weapons systems become cheap and widely available? Should nations develop this technology in line with the Mutually Assured Destruction theory associated with nuclear weapons, to ensure that it will not be used irresponsibly?

~Should there be a global ban on lethal AI autonomous weapons? Does it make sense to institute a ban when some nations and rogue groups will not obey the ban?

~Is it ethical to use AI for psychological or information warfare against an enemy (for example creating deep fake images or spreading disinformation)?

~Who should be held accountable if AI weapons systems commit a war crime like killing civilian non-combatants? Who should stand trial for the crime if the weapons used are autonomous?


Wolfpack1635
Boston, Massachusetts, US
Posts: 15

In the growing world of AI, the integration of AI in education will disrupt student growth and teachers' connection with students. Almost every day there is a new tool or AI on the news or in use, and with each of these new tools the challenges posed to education increase. AI-driven grading systems, for example, can rapidly score assignments and give feedback. Not only does this relieve teachers of part of their workload, it can reduce the subjectivity of human grading. But AI systems would reduce writing and literature down to scores and rubrics, eliminating the freedom humans have in their writing. Furthermore, it leads to an unbalanced relationship between the student and teacher: why should a student work so hard and not use AI, just for their work to be graded by an AI? Additionally, not all grading is equal in these systems. They often provide base-level feedback and never truly understand human writing, and using AI on more opinion-based pieces can introduce bias from the computer.

An important aspect of schooling is the relationship students have with their teachers, who can often be mentors. If AI became increasingly used in school, there would be a disconnect between teachers and their students. Oftentimes our writing is where our true identity is shown, and if a teacher never reads your writing they can never truly understand your voice. On the other hand, teachers who use AI to grade could point to increased time to plan lessons and more interactive class time. AI can lead to more personalized and enhanced lessons, and even individual lesson plans that fit a student's archetype. But the overuse of this technology erodes human interaction and can ultimately destroy intelligence. Our relationships are the heart of the education system, and if they are replaced by a computer, students who are already poorly motivated will be utterly devastated. AI lacks empathy and intuition, two traits which humans need to thrive. Humans need to prioritize education, and AI would not allow this. Lastly, humans are inherently creative. If our work were boxed into an algorithm, we would lose what makes us human.

Norse_history
Charlestown, MA, US
Posts: 15

AI in Education: Incorporate it, but beware the risks

It is not easy to come up with the “best” way to approach AI in education, but as both teachers and students have begun to use AI, it is as important as ever that we try. First and foremost, the biggest risk of free and open access to AI among students is the risk of students losing their ability to learn and think critically for themselves. AI, when used, should be a supplementary tool and nothing more. It is clearly wrong for people to allow AI to influence their opinions and learning, as AI is controlled by the few, who may (or may not) have ulterior motives. When students begin to use AI to either formulate their own opinions or write opinion work, the risk of groupthink increases. Even if a student is simply too lazy or busy to write an opinion piece, the artificially generated words may influence the thinking of other students who read them. Not only does AI pose the risk of badly formed opinions, it also risks hurting students’ ability to work in the future. Students who rely solely on AI are not able to dedicate themselves to important tasks, something which they will undoubtedly be asked to do without (just) AI in the workforce. For some students, including those at our dinner table, ChatGPT and other AI language models are not capable of writing essays at the level we do, so we are more inclined to do our own work. However, for students who haven’t had the same learning opportunities as us, AI may seem like an easy way to secure decent grades for the lowest possible effort. This teaches students not to have any work ethic, which might prove their downfall in professional settings.


The best way to avoid these risks, while at the same time recognizing the increasing importance of AI, is to encourage students to use AI for little things, such as gathering background information that might help them come up with a good essay. Students must also be taught how to write well, and how AI cannot replicate the human writing style. To ensure proper usage of AI, students in high schools across the country should have to take some sort of mandatory AI course. In this course, they should be taught the acceptable uses of AI, how best to use AI to minimize workload while maximizing quality of work, and when AI really shouldn’t be used, and why. This isn’t easy, and there will always be students who abuse it. To protect against that, teachers should not be afraid to assign in-class writing to make sure students can function without any help, as well as use technology to track when a student might be using AI. Don’t punish minor AI use, and major AI use will likely fall.

username
Boston, Massachusetts, US
Posts: 15

AI LTQ Response (Set #2)

I feel like I talk a lot about art and my fears of AI art in general, because as a (musical) artist myself, the creation of art has always been something that I personally feel is exclusively human. To me, what makes the art isn’t the actual outcome itself but the story and meaning behind it. Whenever I discuss AI “art,” I say it isn’t art because it doesn’t have any true soul put into it.

This feels like a digression from the questions I’m being asked, but I think it’s fundamental to understand what makes us human, and by turning this over to AI we’re losing what makes us human. As mentioned during yesterday’s dinner table discussion, we’re losing the ability to create, as we’re losing boredom through the development of these new technologies that just want our engagement for money. The questions I’ve been asking myself are: “Does this mean the death of true, genuine art itself? Or will artists manage to survive? Will art still be important in the future? What does losing art mean for humanity?”

The article “AI Will Change What It Is to Be Human, Are We Ready?” says that “Blue-collar workers (carpenters, gardeners, handymen, and others who do physical labor) will become more valuable. And white-collar knowledge jobs, many of which are already near or under the waterline, like legal research and business consulting, will diminish in value.” It made me ask myself, “What is the point of all this?” I thought the point was that we’d develop robots so we wouldn’t have to do the challenging, heavy-labor jobs that most people do not choose to work in. If these jobs come to be valued more than the ones you might need a degree for, does that accelerate the trend of anti-intellectualism? If AI is making all of the art and taking all the non-demanding jobs, wouldn’t that just increase the gap between the rich and the poor? If we aren’t making art as much as we used to, could AI accelerate the rise of fascism because we aren’t willing to engage with our emotions?

I have no answers to any of these questions, but I can say one thing for certain: I don’t think we can reverse this trend, but I do not want to live in this AI future.

Big Lenny
US
Posts: 14

~Do you think that the use of AI actually makes students less incentivized to participate and learn in class? Are students bored because they don’t really need to think much any more?

Technology is meant to make our lives easier (for the most part). Our generation doesn’t have to work and think as much as three or four generations ago because, for many of us, technology makes our clothing and homes, provides our modes of communication, and facilitates the production of music, art, writing, schoolwork, etc. The benefit of the technology we have access to is that it can allow us to shift our focus towards our passions. With AI, however, the goal is not to do the hard thinking for us so that we can think about what we are interested in; the goal is to do ALL the thinking for us so that we can “relax” and “work less.” While developers of AI do believe that people will eventually work less as AI spreads, that doesn’t necessarily mean it is positive for our critical thinking skills, creativity, or empathy. We can already see in the younger generation of students how technology like social media and AI is eroding social skills and intellectual activity.

Given the current state of American education, AI will likely make school easier for students but prevent them from learning essential skills. Literacy rates of both adults and children in the U.S. are dropping for a number of reasons, but the use of AI in school will absolutely exacerbate the issue. Students can and do write full essays with AI. This response could have been written by AI, and I would not have needed to consider any of these questions for myself. This not only takes value away from writing as a skill and art form but also stops students from thinking deeply about literature, history, or society. I have heard a student in my English class tell our teacher that English shouldn’t be a required class because writing “isn’t important” like math or science. AI offers well-put and often “better” writing than we can create, so when work piles up, it is a tempting and infinitely easy option for many tired students.

I became especially worried about AI and social media when an old teacher of mine was telling me about his eighth-grade class a few days ago. I was expecting him to say that they were too rowdy and refused to quiet down, but it was actually the opposite: for the entire year, each section of his students sat in silence, refusing to acknowledge the teacher even when he called on specific students. They didn’t even chat amongst themselves. To me, a deafeningly silent classroom of tweens is unheard of, and completely different from the classrooms that I grew up in. Although many factors play into this, it may be an example of how the use of AI discourages students from participating in class or any intellectual activities.

Although the spread of AI and social media can lead to the absence of real social interaction, creativity, and intellectual stimulation, it is unfair to ask certain students not to use it when they are already competing with AI. It will eventually be odd for students not to use AI to write essays for them, as the value of writing as a skill will diminish (“Everyone’s Using AI To Cheat at School. That’s a Good Thing.”). At the end of the day, AI seems like an inevitable aspect of education and everyday life, so the first step to retain youthful creativity and intelligence is to emphasize the inherent importance of literature, art, and empathy, so that AI can serve as a tool rather than an obstacle.

bookshelf
Boston, MA, US
Posts: 15

AI in Education

For AI in education, I don't think it should be used as much as it is. However, I do think its overuse comes from a fundamental flaw in the current education system, which is designed just to “check boxes.” For this reason, students see AI as just another way to “check a box,” rather than actually learning. We live in a society so obsessed with having a result that most people disregard what it took to get there. This leads students to think that as long as the result is good, the process can be anything they want. Also, too many things are expected of students, causing them to resort to AI to take things off their plate. To get into college, you need a high GPA AND extracurriculars. When faced with burnout, AI is the best option for some who have teachers that are less understanding about late work or extenuating circumstances. This is in line with the mental health crisis among our generation as well, in which students struggling with their mental health can basically eliminate the stress of school instead of failing. Additionally, there is a serious screen addiction plaguing our generation, whether it be TikTok, YouTube, or video games. Students are more drawn to AI if they want to spend their afternoon and evening on video games or social media. The reliance on AI stems from systemic issues in the school system, and can only be alleviated by fixing the problems within.

I think teachers who use AI should not punish their students for using AI, as it is their job to lead by example. I think mental effort should be reciprocal, and if a teacher worked hard on a lesson plan, classwork, homework, etc., then they deserve students who work as hard. However, if they made the majority of it using AI, they should expect students to do the majority of it with AI as well. In the draft BPS AI proposal, it encourages teachers to use AI to create assessments, stating that it can create “quizzes, rubrics, and project prompts.” If a teacher made a test using 100% AI, I don’t see a moral issue with students using AI on the test, if it is the teacher who set the precedent. Maybe that seems extreme, but I don’t think it’s fair to hold the students to different standards.


1984_lordoftheflies
Boston, Massachusetts, US
Posts: 15

AI use in schools & the education system

I think our education system doesn’t encourage critical thinking or curiosity, so students are motivated to use AI and cheat on assignments. The education system forces students to learn about things they don’t care about, for no clear reason. If students were allowed more choice in what they were learning, there would be less motivation for AI use. It also emphasizes memorization over critical thinking. When children are young, they are incredibly curious, asking ‘why?’ about everything. After they’ve made it through the education system, many of them might say they hate learning in general; they definitely aren’t as curious as they once were. This also might explain why the American population doesn’t seem to be that great at critical thinking, based on what’s happening now politically. If the education system stopped emphasizing memorization over critical thinking and encouraged students to be curious by allowing them to learn about what they choose, we wouldn’t be seeing AI used for cheating the way we do now.

As for appropriate AI usage in schools, I think AI use should be very limited. The BPS policy says that teachers should use AI to give students personalized feedback on assignments. To me, this is ridiculous. If I wanted ChatGPT to give me feedback on a paper, I would just ask it to do so. The point of the teacher is to have an expert give feedback and form relationships with the students. I think AI might help some students learn concepts they’re struggling with, but it shouldn’t be used to replace the role of the student or the teacher. It shouldn’t be used to write entire essays. Tyler Cowen, a professor at George Mason University, writes on the obsolescence of writing skills: “A few “rebels” will do their classwork on their own, but everyone else will wonder what exactly they are planning on doing with the writing skills they develop.” For me, this symbolizes everything wrong with the way education is framed in the US: it’s seen as a way to develop skills for the workforce instead of something that should be pursued in and of itself. Writing is one of the most important things to do in school; it develops critical thinking skills, and it is a form of art. I wonder why, as a professor, Cowen seems to be so anti-learning. He writes, “the real learning will come from those who treat colleges as the annoyances they are becoming. Those students will either skip college entirely, as increasing numbers of hyperdriven achievers do, or go for fun and do their real learning from AIs…” If university were seen as a place to go and learn about the world, instead of a place to get a degree to put on your resume, maybe there wouldn’t be any AI use to write papers, because the people who chose to go would be there because they want to learn. I wonder what the ‘hyperdriven achievement’ is in not being curious about the world around you and letting an AI think for you.

aldoushuxley
Jamaica Plain, Massachusetts, US
Posts: 15

The use of artificial intelligence in education is raising urgent ethical questions about integrity, equity, and the future of learning. As AI tools like ChatGPT become more widely available, many students are relying on them not just for support, but for doing their entire assignments. This trend forces us to confront some uncomfortable truths about both our education system and what it means to truly learn.

One of the biggest structural issues driving students toward AI is the pressure to succeed in a system that often values output over understanding. With overloaded schedules, high expectations, and limited personalized support, students may turn to AI not out of laziness, but as a survival strategy. The article “Everyone’s Using AI to Cheat at School. That’s a Good Thing” points out that our current model of education was not designed with today’s technology in mind. It argues that rather than banning AI, we should rethink what we’re teaching and why. If students are using AI to complete assignments that they see as meaningless, maybe the problem isn’t just the tool—it’s the task.

Still, the widespread use of AI raises serious concerns about academic integrity. While AI can be a helpful tool for brainstorming or improving grammar, using it to write entire essays crosses a line. But where exactly is that line? Is it cheating to use AI to help outline a paper, or to suggest a thesis statement? Unlike traditional forms of plagiarism, using AI is harder to detect and easy to rationalize. This gray area challenges teachers and students alike to redefine what honesty and learning look like in a digital age.

Another concern is the equity of AI use. Not all students have equal access to the best AI tools or know how to use them effectively. This creates a gap between students who can afford private access and those who can’t. If two students turn in similar work, but one wrote it themselves and the other relied entirely on AI, should they be graded the same? As discussed in the BPS Draft AI Proposal, schools must create clear, fair guidelines for AI use—and make sure those policies don’t unintentionally widen existing inequalities. If students believe AI can do the thinking for them, what incentive is left to pay attention, engage in class, or learn how to communicate their ideas? As discussed in our class documentary about AI and education, over-reliance on AI tools may lead to a loss of critical thinking and creativity—skills that can’t be outsourced.


iris_crane
Boston, Massachusetts , US
Posts: 15

LTQ 9: The Ethics of AI

AI throughout the years, especially with the rise of chatbots and AI companions, has definitely replaced a small aspect of human interaction. This is not something entirely new in this day and age, either. Japan, for example, has seen a large surge in the development of AI holograms and virtual companions based on popular media characters, going as far as marketing them as actual people and giving them labels of husband or wife. However, there are many aspects of human connection that AI cannot and should not replace. Humans as a species, a social species, crave interaction with other social, living creatures. Whether that be other humans or animals, at its core humanity craves real connection. An AI system operates entirely on the data and coding programmed into and provided to it. It does not possess its own consciousness or independent thought. In the article “Your Chatbot Won’t Cry If You Die,” Eugenia Kuyda says in an interview with River Page, “The problem is that most of these products have been built by engineers, computer scientists, mathematicians,” Kuyda said. “They’re generally pretty bad at understanding humans, understanding emotions, and so on. They’re not people’s people.” AI does not have its own mind, and possibly never will. Everything it outputs relies on the knowledge of its programmers or the people it talks to. For instance, if you ask ChatGPT for its favorite songs, it will list artists that are trending or most popular. It will never listen to those artists, nor form an original thought about the music they produce; it will give you a direct output of what was given to it. The same goes for any AI companion, bot, or assistant: humanity will always play creator with artificial intelligence, which is, after all, man-made.

Especially given how AI is used, I believe that people always strive for perfection, a perfect image of themselves without human flaws, so I think people will start preferring AI because, in their eyes, it reduces any kind of human flaw that could be made. However, it’s also important to keep in mind that AI has flaws as well, since it is itself built on human data.

clock.on.the.wall
Posts: 15

AI is becoming more widespread and accessible by the day. It is seen by many as the incredible next step forward in technological advancement, but it can also cause significant issues. AI has already started to replace humans in many fields of work. While offloading monotonous tasks or dangerous jobs to computers could seem beneficial on the surface, it takes away job opportunities from those who need it most. Additionally, as AI improves, it will almost certainly take over more “intellectual” jobs as well. As this happens, we as a population will become dumber. We won’t have to think, so we won’t. We’ll constantly second guess ourselves, turning to AI for the answers, and as a result will become much easier to persuade and control.

Especially with the frequent use of AI to simplify tasks for us (eg. summarizing long texts into bullet points), we will decrease our own media literacy. Language is extremely important. It can be used to shape a narrative in human-created texts, either purposefully or unintentionally, and the same is true about AI-generated ones (ie. developers could push an agenda through the responses the AI gives or their subconscious biases could impact how they code or train the algorithms). If we turn to AI for all of our information, we’ll become worse at forming our own opinions because we have been told what to think.

As people use AI more and more, there is also a chance that people will begin to prefer AI interactions over real human ones. A person talking to AI can create an echo chamber for themself where they are always right. AI doesn’t have beliefs of its own—only regurgitating those that it steals from others—so it can’t really express doubt about or counter what the person is saying. While echo chambers can certainly happen in the real world, they are much more likely to occur when one only talks to AI.

In addition to the social and intellectual problems surrounding pervasive AI use, there are also more material ones. I think it’s known by many people that one prompt answered by ChatGPT uses about as much water as a standard plastic water bottle holds. This takes a massive toll on the environment, especially when, for relatively simple queries, you could just use a traditional search engine like Google. As with any other climate-related problem, this disproportionately affects people in poorer countries who, despite contributing much less to the problem, get hit the hardest.

In the end, while AI seems like an inevitable part of our future, there will always be some things that AI fails at where humans excel. Unless we see some drastic technological breakthrough, AI will never be able to truly replicate our love and compassion. It could create the illusion of experiencing them, but it is impossible for an algorithm to feel. As long as we remember that we are human and that we are unique in that, we will be able to persevere through these troubling times and retain our humanity.

Marcus Aurelius
Boston, MA, US
Posts: 15

The Role of AI in Education

As the use of AI in education continues to increase, I think it’s really important to address its implications for learning and grading. Using AI to complete assignments is itself wrong, but so many students are turning to it. This is in part (and a pretty big part) because the education system itself “forces” students to turn to it, mostly out of desperation. So many schools give so much homework but don’t seem to take into consideration that many students have extracurriculars, or a job, or have to take care of their family, or help around the house. Sometimes the workload is so heavy that they don’t have time to do all these things and still do their work, so they turn to AI because it is quick and easy and their work is done on time. Also, so many schools emphasize the importance of grades over the value of just learning. Because of this, many students use AI because it is technically smarter than they are, and its use will likely result in a better grade (if they are not caught and penalized for using it). Additionally, it’s definitely not fair for students who use AI to be graded on the same rubric as someone who doesn’t. As I stated previously, AI is smarter than we are, and Tyler Cowen’s article on AI in education agrees (it states that AI may even be a better teacher than teachers, though I’m not convinced this is true because AI misses the human aspect of teaching). Because of this, assignments done by AI are more likely to get a better grade than those of a student who didn’t use it. To make this fairer, I think teachers should use two different rubrics when AI is permitted. This would ensure that assignments done entirely by students are graded on a scale that fits human work, while assignments done by AI are graded on a different scale.

Despite thinking it’s wrong to use AI to complete assignments, I do think it is okay for students to use it to help form ideas or to give them a jumping-off point. This approach still allows students to think for themselves and complete the work on their own. When it comes to teachers, however, I don’t think it’s right to encourage them to use AI while punishing students for it. They are being paid to teach, and if they use AI, then they are not actually doing their jobs. If teachers are turning to AI to grade things or create assignments, then what’s the point of having a teacher in the classroom? Why shouldn’t we just make AI our teachers, similar to what Cowen’s article may suggest?

iadnosdoyb
Boston, MA, US
Posts: 14

The integration of artificial intelligence into education and everyday life presents a complex ethical landscape. In school, the main concerns center on academic integrity. The widespread use of AI has blurred the line between cheating and academic support. While AI can serve as a valuable tool for brainstorming or improving writing, full reliance on it without transparency can undermine the purpose of education. At the same time, we must ask whether the condemnation of AI use is always fair. Educators themselves use AI to create lesson plans, grade essays, and draft emails. Shouldn’t there be a shared standard? If students are penalized for using AI, then educators using it in their work should also be transparent. Ethically, the focus should shift from punishment to teaching responsible AI use, where students are encouraged to disclose when and how they used AI and are graded on their ability to actually complete the assignment.

In everyday life, AI raises further concerns. AI is increasingly influencing people’s thoughts on different subjects. This can help with understanding, but it risks replacing individual reflection with something artificial. If people rely on AI to form opinions, they lose touch with the nuance that makes us human. Human creativity and the ability to wrestle with moral ambiguity are traits that no machine can fully replicate. Ultimately, the ethical use of AI in both education and everyday life requires balance. AI can and should be used as a tool and not a crutch, and both institutions and individuals have a responsibility to ensure its use supports, rather than replaces, human thought.


iadnosdoyb
Boston, MA, US
Posts: 14

Originally posted by Marcus Aurelius on May 29, 2025 10:38

As the use of AI in education continues to increase, I think it’s really important to address its implications on learning and grading. The use of AI to complete assignments itself is wrong, but so many students are turning to it. This is in part (which is a pretty big part) due to the fact that the education system itself “forces” students to turn to it mostly out of desperation. So many schools give so much homework, but don’t seem to take into consideration the fact that so many students have extracurriculars, or a job, or have to take care of their family, or help around the house. Sometimes the workload is so much that they don’t have time to do all these things and still do their work so they turn to AI because it is quick and easy and their work is done on time. Also, so many schools emphasize the importance of grades over the value of just learning. Because of this so many students use AI because it is technically smarter than they are and its use will likely result in a better grade (if they are not caught and penalized for using it). Additionally, it’s also definitely not fair for students who use AI to be graded on the same rubric as someone who doesn’t use AI. As I stated previously, AI is smarter than we are and the article on AI in education by Tyler Cowen agrees with this (it states that AI may even be a better teacher than teachers, but I’m not convinced this is true because AI misses the human aspect of teaching) and because of this assignments done by AI are more likely to get a better grade than a student who didn’t. To make this more fair I think there should be two different rubrics teachers use if AI is permitted. This would ensure that assignments done entirely by students are graded on a scale that fits human work, while assignments done by AI should be graded on a different scale. 
Despite the fact that I think it’s wrong to use AI to complete assignments I do think it is ok to use it to help form ideas or give them a jumping off point for assignments. This approach still allows students to think for themselves and still complete the work on their own. When it comes to teachers however, I don’t think it’s right to encourage them to use AI while punishing students for it. They are being paid to teach and if they use AI, then they are not actually doing their jobs. If teachers are turning to AI to grade things or create assignments then what’s the point of having a teacher in the classroom? Why shouldn’t we just make AI our teachers, similar to what Cowen's article may suggest?

I agree, but would like to pose the question of what the alternative would be. Like you claimed, AI is here to stay and is being integrated into every industry in the world. Education is a special case because it is an industry built on the concept of planting seeds for the future of our society, and if AI gets its hands on those roots in the early stages, it leaves room for concern. The only problem is that the infection has already reached our roots. Teachers have already started to use it, so at this point doesn't it make more sense to simply teach the youth to use it healthily, the way teachers are expected to? I also think that, whether we like it or not, it's going to happen. Schools will eventually realize that they cannot avoid the use of AI and will try to find a way to work with it. Once again, let me reiterate: I agree with your point, but I question the longevity of your stance.

Nonchalant Dreadhead
Boston, Massachusetts, US
Posts: 15

I believe that overall it is okay to use AI in education, but only when used correctly and in moderation. It is definitely hard to determine the acceptable limit of AI use, and how teachers can limit student use, but that is because schools are going about it the wrong way. Completely banning it is not effective, because students will most likely find ways around the ban. To actually make a difference, schools should look at how students can use AI to help them, as well as at the school system itself and why students constantly turn to AI now.

A main reason students use AI in the first place is the current education system. Right now, children are more focused on getting a higher grade in school than on actually retaining information, because grading systems are designed that way, a point discussed in “Everyone’s Using AI To Cheat at School. That’s a Good Thing”. When students are reduced to and judged by just their grade, their personalities are not accounted for, and as students get older they realize this, so their only motivation is to get the highest possible grade. This problem will not be fixed by banning AI, but only by fixing the education system.

This also applies the other way around: if teachers use AI to grade students, it defeats the purpose of students improving through personalized feedback and being graded fairly; instead they are graded purely on what AI believes is correct. For things like writing, AI may misjudge an assignment with a unique tone, even if the writing is really good.

I do still believe AI should be restricted to an extent. The more AI is used, the more dependent students become on it, so restricting its use will cause kids to depend on it less. But a better option is to have classes or other methods that teach students how to use AI effectively, not to do their assignments, but to help out while keeping their personal voices. Doing so will not only limit AI use, but also make it more ethical, since people will be less dependent on it.

MakeArtNotWar
Boston, MA, US
Posts: 15

"Sure! Here is a quick and easy war plan:"

ChatGPT does everything for us: it handles our math homework, drafts our emails, and gives us an endless well of brainstorming ideas. What if it could fight our wars as well?

“Could” is maybe not the right word. It can. Already, U.S. arms technicians are working with autonomous drones and fighter planes as aids and expendable assets. But does AI make the perfect soldier? It can follow directions and calculate risks and solutions to problems within seconds, but it lacks one vital ability: morality. Stationed at an outpost dedicated to screening for possible undercover enemy soldiers, one military official came across a young girl working as a hostile spy. Under the rules of law, which make distinctions between civilians and soldiers, it would have been legal to shoot that ten-year-old girl, but “the thought never crossed [his] mind.” An AI, compliant with international laws of war, would not have such a hesitation. The problem presents itself: “how do we teach AI the difference between what is legal and what is right?”

The issues don’t stop there. The U.S. is not the only country experimenting with AI in warfare—Russia, Ukraine, and Israel have already implemented this technology into ongoing conflicts. As more and more countries start to use it, more will be pressured to follow—if they can. The maintenance of AI, such as keeping machines cool and bug-free, is enormously expensive. World powers may afford it easily, but other countries, such as those in the global south, might not have the funds to support such advancements. Already disadvantaged from economic imperialism and ongoing exploitation of resources, this would cripple their power, and make them all the more reliant on the global superpowers.

Additionally, the total removal of humans from the battlefield will not only raise questions of morality, but also prolong these conflicts. Fatalities, terrible as they are, offer a convincing reason for countries to shorten conflict or avoid it altogether. Currently, nations normally try every avenue of negotiation and compromise before resorting to a war that could inflict painful casualties and injuries to their citizens. Without this risk, countries might not have such reservations and would resort to warfare faster, and maintain it longer. This could have devastating effects on the environment, infrastructure, and civilian lives.

So how do we remedy this? As with everything in global politics, there is no clear answer. A ban on artificial intelligence in warfare is the obvious solution, but with world powers like America and Russia already enjoying the benefits of this technology, such mandates are likely to be vetoed or simply ignored, with AI advancements continuing illegally. Our best hope is to enforce heavy limits on what a country can do with automation and to raise discussions on AI in warfare. Americans can also push to pass bills within our own government to limit AI use.
