bluewater
Boston, Massachusetts, US
Posts: 15

Response to The Ethics of AI in Everyday Life

Originally posted by Tired on May 29, 2025 13:09

AI will never be able to recreate a soul. Despite its skills in mimicry and imitation, the fact is that it can only replicate the feelings of sympathy, love, and comfort. Furthermore, the ‘perfections’ of AI are what make it different from humans, because man-made creation will have mistakes, flaws, and gaps, but that’s part of the work. In the literary world, the reason writing and books are so interesting is because authors are genuinely writing in their own voices and imaginations. If every author suddenly used AI, everything would feel very monotonous and boring. The same goes for art, music, and photography. On the other hand, the idea of AI being ‘perfect’ can be argued, since AI can’t do everything 100% accurately yet, such as certain generated drawings having weird blobs and six-fingered hands, or being unable to spell the word mayonnaise. This is why, at least for now, AI won’t be the worst crisis that humanity will ever face.

The role of humanity, as the “creator” of AI, is to set strict rules and limitations so that it doesn’t go too far. The government is already putting strict laws and regulations in place so that it won’t take over, and many people are boycotting AI use due to its environmental and moral harm. In my opinion, AI shouldn’t replace human interaction. Through the use of ChatGPT and apps like Claude or C.Ai, people get attached to AIs that can treat them as their partner or therapist. It feels very alien and unreal, because they are interacting with a robot that can give perfect and desirable answers through editing and refreshing responses. ChatGPT will often agree with your statement, even if it’s incorrect. For example, if you’re ranting, ChatGPT will most likely be on your side, regardless of whether you’re in the wrong or not. Having AI replace relationships also feels controlling, because you’re leading the conversation and making it say what you want it to say. Compare that to real relationships, with conflicting interests and disagreements, which are what make a relationship feel like a relationship. It’s learning how to negotiate and find boundaries with each other. In the article “Your Chatbot Won’t Cry if You Die,” the author says the reason people feel the need to use AI is because “people don’t feel needed” (page 7). However, real friends are complex in that they will feel sympathy or love or pain for you. Real friends will leave a lasting impact on your life, instead of being the quick band-aid for a relationship that AI is.

Overall, AI can be a useful tool. If you’re lacking information or inspiration, AI is a great way to start bringing ideas to life. But it shouldn’t be your entire life, and it shouldn’t replace the people in it. People should use AI in everyday life to enhance their work, but not let it be their work.

I strongly agree with the main points made in your post and that AI should not replace human art, literature, and interactions. One point you made was that AI is always perfect, which is what makes it lack any sense of humanity. As humans, our work is unique to each person, since we cannot perfectly replicate anything without some kind of error. The same applies to interactions and our conversations with others, as not everyone has the same perspective on the same topics. AI blends viewpoints together to form a "right" opinion, and it is dangerous to allow AI to change your mind about things, as you will no longer be thinking for yourself. The use of AI in chatbots and conversation models is not overly dangerous, but it can get to a point where people might not want to talk to other people anymore. Chatbots allow people to feel comfortable and valued, but if they gain a dependence on these tools, they could become isolated from society. You also made a very strong point at the end: AI can be used without much danger, but it only becomes dangerous when we overuse it and come to depend on it doing our work for us.

charsiu
Boston, Massachusetts, US
Posts: 15

Peer Response: The Ethics of AI in Education, Everyday Life and Warfare

Originally posted by bostongirl5 on May 29, 2025 13:12

Humans are creative, intellectual, growing, and unique creatures. There are so many aspects to being a human unique to our species alone. Whether it’s speaking, performing, creating, debating, there are so many avenues that allow us to create and explore feelings, experiences, and the world around us. These things are irreplaceable by AI.

Tyler Cowen and Avital Balwit state in their article “AI Will Change What It Is to Be Human. Are We Ready?”: “Our children and grandchildren will face a profound challenge: how to live meaningful lives in a world where they are no longer the smartest and most capable entities in it.” What they are getting at here is that AI is a tool that is quickly overtaking the creativity humans once only dreamed of being capable of. Instead of spending hours creating a piece of art, AI can make images in seconds; instead of crafting an article, AI will write essays with correct grammar and sources in seconds. You see, the problem here is that AI makes human creativity worthless.

The biggest ethical concern that I see with AI is that it only intensifies the western ideology of ‘go, go, go’. For over a century, the West has been obsessed with the idea of being the first and the fastest. Between the industrial revolution, the space race, and modern technology, the drive to create and innovate has been motivated by money and glory. I personally believe that has only increased in the past decade. A couple of examples come to mind: school/test preparatory programs, kids’ sports, and the workforce. Each of these things has evolved to promote the culture of getting ahead, of working or paying to advance past other people. I believe that AI is making this worse. Now it is easy to advance through a class by getting perfect grades, as AI can help students cheat their way through. It can make work presentations and projects go by quicker by supplying ideas and innovations in a second. It even helps people get into better colleges or programs by allowing them to edit their own voice into perfection.

In all, I think that AI is overall making people dumber. Not only because it gives everyone answers and ideas in a split second, but also because, just as social media does, it is shortening our attention span and curiosity. Furthermore, it is continuing to fuel the ‘go, go, go’ mentality by essentially belittling the joy in slowing down, creating, and trying things over and over again.

I agree with the writer’s ideas, and I think it’s especially interesting how they connected them to the mindset the United States constantly perpetuates of being efficient and first. I also think that idea is so ingrained in our society that AI is simply another way to meet that goal. I like how the author isn’t afraid to say that AI is making people dumber, because it removes any possibility of spending time and effort on something meaningful; people would rather rush through things like schoolwork and assignments that require critical thinking instead of putting effort into understanding them. I also liked how the writer acknowledged counterarguments, noting that AI can indeed help students advance through a class, help them get into colleges and programs, and make projects or presentations go by faster. But I think it’s valuable that the writer explained that the harmful effects of AI outweigh these benefits and that it should not be used in everyday life. I also think that the question the writer posed to the reader (specifically, “how to live meaningful lives in a world where they are no longer the smartest and most capable entities in it?”) is very thoughtful, because it’s a real-world issue that will have great impacts in the near future and will affect this generation’s children. It made me wonder how people will adapt to AI being a part of everyday life, and whether it may be used even more or less for that reason. I think this response was very clear and well thought out.

Tired
Boston, Massachusetts, US
Posts: 16

The Ethics of AI in Education, Everyday Life and Warfare Response

Originally posted by traffic cone on May 29, 2025 13:12

With regards to education, AI should be heavily restricted and monitored for both students and teachers. If AI is used to complete an assignment, that is grounds for plagiarism, because the student is submitting work that isn't theirs, which is the same as copying answers from Google or another person. However, I think AI can be used as a tool, in the sense that it would be used for studying. AI is able to explain difficult topics so they are easier to understand, so I believe it's justified if a student uses AI for that compared to using it to complete their work. Additionally, if a student isn't allowed to use AI, then the teacher should be held to the same standard, especially in college, when the students are paying for their education. It is expected that when a student is paying their tuition, they are being taught by the teacher, not AI. Looking at the BPS proposal plan, it does not directly mention the consequences educators could potentially face if they rely on generative AI for their curriculum plans, and there should be a general baseline for what AI can and shouldn't be used for. Specifically, AI should not be used for opinion-based writing; it takes away one's creative capabilities when one begins to use AI as a framework for completing assignments. It also proves harmful because students would begin to rely on AI time and time again, worsening their ability to think for themselves. Another reason this is harmful is that AI draws on all the information on the web, which can include inaccurate information, and if a student relies on it, they could end up presenting inaccurate facts in their work. A teacher should have a designated rubric for assignments completed with the aid of AI and without, because the work would not be comparable, so it's unfair for a student who only has access to basic information to be graded on the same scale as a student who used AI.
This problem could be fixed if teachers made separate assignments for AI and for no AI, so that students do not rely on generative AI for their education.

I agree with the idea that AI should be used as a tool rather than as a way to completely copy answers, for both the educator and the student. I like that you mentioned it would be unfair for students if they paid tuition and then were taught by AI, which would be a scam because AI is free and easily accessible to anyone through Google’s new AI feature or any AI website.

However, I would disagree with the idea of AI being used as a studying tool. Unless it’s specifically a studying website that, say, generates automatic flashcards like Quizlet, I find it really difficult to study with AI, because it feeds you a lot of information. As you mentioned, some of this information may not even be entirely accurate, so studying feels like a waste when it’s a gamble whether or not the info is reliable.

I do like your solution of having less AI by making two different versions of assignments, one completed with artificial intelligence and another completed by the student alone, in order to compare them and develop a sort of rubric/blueprint for what does and doesn’t seem reasonable to expect. However, encouraging AI in the classroom may cause more harm than good, because it will make students who aren’t even aware of AI become aware, and they may begin to use it on other assignments. I understand that making some assignments AI-based and some not may leave students less enticed to use it on the non-AI assignments, but I also think it may cause a lasting negative effect for students who are already doing assignments normally. It may let students develop the bad habit of using AI in other classes, or of using AI in college and their future jobs. Overall, this post made many good points about converting AI use to a healthier dose, but for now it seems highly unlikely this can be executed, due to the many loopholes and workarounds students can easily find.

verose
Posts: 15

Response to The Ethics of AI in Education, Everyday Life and Warfare

Originally posted by TheGreatGatsby on May 29, 2025 08:40

The use of AI is becoming so normalized that people are turning to it for homework help, their academics, and even comfort. In recent years, the pressure on students to maintain good grades while also being active members of their community has caused them to turn to AI as a way to keep their grades high. Most students see what they get on their report card as what defines them, and even if they aren’t learning, they will try different means to keep that grade up; in this case, they are turning to AI. Students are also often encouraged by their peers to use AI, telling friends that they got their homework done really quickly because of it. These incentives are contributing to the dilemma that is AI in academics today. The use of AI is becoming more and more normalized as many students turn to it, and I believe that this inhibits learning. While AI itself isn’t bad, some of its uses are. I will note that AI can be used by students as an extra resource when they are feeling lost or struggling; however, others use AI to completely generate work for them. For this reason, the idea that AI is cheating and can be seen as a shortcut to good grades can be valid; however, AI isn’t always cheating. A middle ground needs to be formed, and this can start with schools. I think that schools should really prioritize in-person skills, because those are something that can’t be given to a person through AI. When these students graduate and eventually get jobs, it is these skills that will help them, because they won’t be able to turn to AI for their work as much as before. As for the accessibility of AI and ensuring that everyone has equal access to it, there is no possible way to achieve that. Even if harsh expectations are set in place, students won’t expose themselves for using AI. There is no solid way to tell if a student used AI for their work, meaning that they won’t be graded differently.
This turns into a cycle in which students see this happening and decide to use AI as well, because they don’t want to put in a lot of effort when others are putting in barely any. Another problem can lie in teachers using AI, since it normalizes it for the students. I feel that when a role model uses AI, it encourages those who look up to them to consider the behavior good, reinforcing it as an expectation in a way. Ultimately, I believe that a middle ground can be reached with AI in academics. According to the article “Everyone’s Using AI to Cheat at School. That’s a Good Thing,” many students use AI to better their understanding of a topic, thus improving their grades because they end up studying. I think that using AI as a resource, and not as an answer key to homework, is something extremely valuable that should be implemented in school systems. In the end, AI in academics has both positive and negative implications; however, it’s key to find a middle ground in order to properly implement it into schools in a manner that facilitates learning.

I found it really compelling how you empathized with students and their experiences, and how you suggested that they turn to AI more out of a necessity to keep up their grades than out of a preference to be lazy. In this age it feels as though students are expected to be everything they possibly can: active in extracurriculars, excelling in academics, and fulfilling all that they need for future plans. This pressure can be so suffocating, and students have always found outlets or questionable shortcuts to keep themselves latched onto that model of success; AI happens to be the new, modern format of that. I wouldn’t say it’s an acceptable excuse, of course, but I certainly understand how students can be so comfortable relying on these tools when they feel they have little other choice. This would be a strong starting point for the education system to acknowledge the uses of AI and to craft policies that set consequences for a student falsely passing off work they hadn’t completed, without necessarily ruining their student life completely (say, by putting a serious misdemeanor on their record), which might subsequently continue to push them towards whatever avenue they feel they need to use to appear better. All in all, I definitely agree with your views on AI in education: there are most certainly benefits, and most certainly drawbacks, and it is our responsibility to balance these rather than narrow-mindedly honing in on one singular aspect of such a dynamic thing as AI.

facinghistory19
Boston, Massachusetts, US
Posts: 15

Originally posted by User0729 on May 29, 2025 13:07

I believe that the uniquely human characteristic that AI will never be able to replicate is true emotion, because a computer will most likely respond with the logical answer that makes the most sense instead of what you want to hear, or what a friend would prefer to say. For example, if you ask a friend what they want to eat and you specify that you want pizza, the robot will agree with you (unless you’ve had pizza repeatedly), rather than a friend saying they prefer hot pot or just want ice cream. A robot will not act rashly or be as inconsiderate as a person. I believe the furthest they can get is the “ideal corporate machine.” It is not necessarily AI that is taking jobs, but rather people who know how to utilize AI, and I’m sure that at some point AI will be able to complete tasks on its own. Still, with AI becoming more and more mainstream, it will also have the capability to inform and educate people better. I hope that we do not have a WALL-E situation where everyone is obese and incapable of basic human actions. If people use AI correctly, then there should be no reason for people to become dumber, unless they entirely disregard what the AI returns in response to the task they have given it. I do think there is a possibility that people will become less intelligent, because they will lack basic comprehension skills and become increasingly dependent on an AI to answer questions as basic as what their name is. I don’t think it’ll make you blindly follow others or lose empathy, because at the end of the day it is simply a computer of 0s and 1s coded to answer the user’s prompt to the best of its capability, unless it’s some robotic superpower that people can get behind to push their ideals. AI can create a disparity between developed and undeveloped countries because, in order to power AI, water is being used. Even worse, it is being pumped from areas where it is already scarce.
Until AI becomes sustainable, it will most likely be moved somewhere else, nearer to the water, and to a third-world country due to cost issues. The undeveloped country will accept it because it is getting paid. I think AI could replace doctors and teachers because of its capability to hold information, detect anomalies within the human body, and teach effectively, if programmed to. As a therapist, though, I do not believe an artificial intelligence telling you to breathe for a count of five would work.

I mainly agree with this argument, because the chances that we are able to recreate true sentience in AI are very slim, and if we are able to create perfect simulations with human emotions, who’s to say that we aren’t living in a simulation? There’s a multitude of possibilities, and it’s all bad for human interaction, because we would in fact become so reliant on AI. There would be no need to do many things, even to pick your own movies or talk to family, when you have an AI companion. Life would become exactly like WALL-E, I believe, simply due to the fact that we are inherently lazy. In most cases, as humans, we choose the easy way out; it’s natural, but that’s not good for every other path in life. AI would become our maids and our caregivers, sending us back to a state of infancy with a hyper-realistic robot to do our bidding. The one thing I don’t agree with here is the claim that it’s just a computer with zeros and ones, because if we manage to create true AI emotion, at that point it has its own being, its own soul, its own emotion, and it has rights that we should give it. There are many lines to draw, but it’s undeniable that in our lifetimes AI will have a huge impact, and might replace human interaction.

perspective
Boston, Massachusetts, US
Posts: 6

Originally posted by glitterseashell1234 on May 29, 2025 12:43

As artificial intelligence can only attempt to mimic analytical human skills, whether STEM-related or not, it misses a very large part of the human story. I do not think that artificial intelligence will go as far as people believe it will, due to its lack of human pain and stress. Humans are the only animal capable of living in chronic stress; this is what sets our actions apart from those of other animals in the animal kingdom. Thus, our stress is an integral part of who we are. Artificial intelligence will never be able to produce great art, as it lacks the ability to understand the human experience. I believe that artificial intelligence will impact the availability of jobs in the market that deal with numbers, data, research, and technical skills. However, I do not believe that artificial intelligence will be able to take over the role of “the creator” and become a sentient being. I also do not believe that artificial intelligence and its use will make humanity “dumber”; in fact, I believe that advances in technology and artificial intelligence will actually lead to a raising of standards. It is similar to when the calculator was first developed: many believed it would make math class obsolete. Instead, the introduction of the calculator to regular math classes led to the development of harder math curriculums that incorporated the calculator. Humanity, especially in an academic sense, will always find a way to continue advancing. There have been so many new technologies that people believed would change the world, yet the world figured out how to live with them rather quickly.

From a social perspective, I do believe that the introduction of artificial intelligence will lead to social and political cleavages in the framework of global society. Less developed countries will struggle to keep up with more developed countries that have a stronger grasp of these technologies. This may lead to an increased emphasis on the humanities in less developed countries. In “AI Will Change What It Is to Be Human. Are We Ready?”, Tyler Cowen writes that “Some governments may embrace rapid AI integration while others implement stricter regulations. Some may use AI to surveil or constrain citizens, while others let AI unlock new opportunities and ways of living” (Cowen 15). I think this argument is strong when considering the issues global governments may have integrating artificial intelligence. However, neither Cowen’s argument nor the argument I made previously will change the structure of globalization in general. Once again, there have been too many technologies that could have already changed globalization but did not.

In conclusion, artificial intelligence will make a big impact on society, but it will not uproot the systems we already have in place. Humanity will always be the one in charge due to the characteristics that separate us from every species, including technology.

Your first paragraph in particular interested me. I am curious whether you believe that it is necessary for AI to become sentient in order to become what we could call, perhaps, the evolutionarily superior being. While humans may be the only creatures capable of living in chronic stress, as you mentioned, is feeling it necessary to replicate our actions? We should also note that our actions are not necessarily the best ones. I wonder if the real question is not whether AI can replace or mimic humans, but whether it can, in its own sense, become better. It is absolutely true that humankind has rapidly adapted to new technologies, though, and I don’t believe the existence of AI makes our own intelligence useless. Therefore, even if we say AI is evolutionarily better, and that in itself remains a big if, perhaps we adapt to live alongside it nonetheless. Still, maybe something to think about. Overall, a very well-articulated and strong argument.

phrenology12
South Boston, Massachusetts, US
Posts: 14

The Ethics of AI in Education, Everyday Life and Warfare

AI in warfare should definitely be overseen by a human. If the AI is programmed to follow the laws of war, then it could potentially kill a child who was pressured into spying; that point was brought up in the documentary we watched in class. This is why human oversight is so important: AI doesn’t follow a set of morals. If AI were to reach a point where it no longer followed human commands, then the government would have to do everything it could to shut it down because of the massive potential threat. However, I feel like there is a way to balance the ethical concerns as long as AI stays as code. As long as AI doesn’t become sentient, if that is even a thing, I feel like the benefits could outweigh the cons, as long as the weapons stay under human control. The good part of AI in warfare is that fewer human lives will be lost, but then it comes down to who has the better manufacturers, which poorer countries could not afford to keep up with. While humans in power often go corrupt or make bad decisions, I feel like, compared to the alternative, it is relatively okay. This is because I feel like a ban on AI weapons would only be used as an excuse to go to war with the countries that don’t follow it. Having no ban brings up a new set of problems, however, because if an AI machine commits a war crime, then I believe the government or commander who sent the specific weapons out should be put on trial. Also, there should be an investigative process into the person who wrote the code, to check whether they put something malicious in it. I think that using AI for psychological warfare is to be expected if it develops to that point, especially during wartime. While torture of any kind is unethical, during war ethics usually fly out the window. I don’t think that AI is entirely to blame for this newfound fear, because the threat of technological development has always been around throughout history.
At what point does it stop being innovation, and turn into something that could threaten humanity as a whole?


Fahrenheit.jr.
Boston, Massachusetts, US
Posts: 15

Originally posted by charsiu on May 29, 2025 13:04

It is extremely concerning and dystopian for people to be using artificial intelligence in everyday life, particularly when they are replacing genuine human connections, social life, and personal opinions with generative AI. There are characteristics of human beings that robots simply cannot replicate, no matter how advanced they become. For instance, AI cannot be original or form individual viewpoints, since its information comes from the databases available to it, which are supplied by human beings. This is the reason why so many artists are now accused of using AI: artificial intelligence steals art styles from human beings and from works of art it has already seen. It is dangerous to allow artificial intelligence to create opinions in subjective fields like world events, history, and literature that are meant to be interpreted differently by different people. People will grow accustomed to listening and following rather than leaning into their creative and critical thinking skills, which is risky because eventually they won’t be able to express their thoughts or reasoning on different topics coherently. Many students at Boston Latin School in particular cannot survive without ChatGPT, and have forgotten how to do assignments, write essays, or ask their teachers for help. Moreover, robots cannot replicate authentic emotions or the touch of a human being, and AI should stay far away from replacing friendships. Those who use AI as a “therapist” or “friend” are enamoured with the idea of a perfect relationship. In River Page’s article “Your Chatbot Won’t Cry if You Die,” they state, “Unlike the gods people stopped believing in, this one can’t punish you, or send you to heaven, or perform miracles, or smite your enemies, or die for your sins. Once, people wanted more from their gods. Now, they just want to chat.” This may be the case: AI may be able to express sympathy and comfort, but that is because it’s programmed to do so.
AI can regenerate responses until someone finds the perfect one that suits them, or people can have AI tell them exactly what they want to hear. This disregards the fact that meaningful social interaction is often meant to be complex, not an ebb and flow of self-affirming generated words. That is why it’s rewarding and fulfilling to connect with like-minded individuals and listen to different perspectives to broaden one’s mind. There will always be disagreements and arguments between individuals, which is healthy discourse. AI removes any necessity of realistic social interaction, which may influence the way people interact with each other and cause them to be disillusioned when not everything is under their control like with artificial intelligence.

One of the most compelling ideas in this post is the concern that AI may cause people, especially students, to lose their ability to think critically and independently. I agree with this point since AI is becoming more convenient, and it makes it easy to rely on it as a shortcut instead of developing our own ideas. This is dangerous in academic environments like BLS, where learning how to write, analyze, and ask questions are essential skills to have. I also found the reference to River Page’s article strong, emphasizing how people are drawn to the idea of a flawless, controllable relationship with AI, rather than the complexity of human interaction. The argument about emotional authenticity is also interesting as it highlights an important limitation of AI and its inability to truly feel.

My views are similar, although I do believe that AI can still serve as a useful tool if used with boundaries. I think the argument could be a bit stronger with a counterpoint acknowledging that AI might help people in certain ways. A little more discussion of how we can balance AI use with human interaction would also add depth. I think that the post overall raises thoughtful and deep questions that are very relevant today.

slaughterhouse5
Boston, Massachusetts, US
Posts: 15

Reply to response

I agree with what this student is saying, because they address the causes behind AI usage in school while still stating that it is unacceptable in most cases. The student says that academic pressure, high expectations, and the fact that many see their grades as what defines them cause students to resort to AI, because they feel they need very high grades, and this is a very difficult thing to achieve on top of everything else students are expected to do. It is important to identify the root causes of an issue because they expose problems already within society, and students are using AI on such a massive scale that something must be causing this epidemic: high academic pressure, high expectations, simple human laziness, bad teaching, and learning issues. The student states that there must be a middle ground, which I agree with, because AI isn’t just going to stop being used completely; a compromise needs to be found.

Originally posted by TheGreatGatsby on May 29, 2025 08:40

The use of AI is becoming so normalized that people are turning to it for homework help, their academics, and even comfort. In recent years, the pressure on students to maintain good grades while also being active members of their community has caused them to turn to AI as a way to keep their grades high. Most students see what they get on their report card as what defines them, and even if they aren't learning, they will try different means to keep that grade up; in this case, they are turning to AI. Students are also often encouraged by their peers to use AI, with friends telling them that they got their homework done really quickly because of it. These incentives are contributing to the dilemma that is AI in academics today, and its use is becoming more and more normalized as many students turn to it. I believe that this inhibits learning. While AI itself isn't bad, some of its uses are. I will note that AI can be used by students as an extra resource when they are feeling lost or struggling; however, others use AI to completely generate work for them. For this reason, the idea that AI is cheating and a shortcut to good grades can be valid, though AI use isn't always cheating. A middle ground needs to be formed, and this can start with schools. I think that schools should really prioritize in-person skills because those can't be given to a person through AI. When these students graduate and eventually get jobs, it is these skills that will help them, because they won't be able to turn to AI for their work as much as before. As for the accessibility of AI and ensuring that everyone has equal access to it, there is no possible way to achieve that. Even if harsh expectations are set in place, students won't expose themselves for using AI. There is no solid way to tell if a student used AI for their work, meaning that they won't be graded differently.
This turns into a cycle in which students see this happening and decide to use AI as well, because they don't want to put in a lot of effort when others are putting in barely any. Another problem lies in teachers using AI, since it normalizes it for the students. I feel that when a role model uses AI, it encourages those who look up to them to consider the behavior acceptable, reinforcing it as an expectation. Ultimately, I believe that a middle ground can be reached with AI in academics. According to the article "Everyone's Using AI to Cheat at School. That's a Good Thing," many students use AI to better their understanding of a topic, thus improving their grades because they end up studying. I think that using AI as a resource, and not as an answer key to homework, is something extremely valuable that should be implemented in school systems. In the end, AI in academics has both positive and negative implications, but it's key to find a middle ground in order to properly implement it in schools in a manner that facilitates learning.


everlastingauroras
Boston, Massachusetts, US
Posts: 15

Peer Response LTQ 9

Originally posted by 01000111 on May 29, 2025 12:45

I believe AI should not be used at all in warfare due to how much it could facilitate deaths in large numbers. The use of AI in war could cause something worse than the Great War, where machined weapons like tanks and automatic rifles caused millions of deaths. This in turn might cause something even worse than what happened after that war, when many people became hopeless about humanity because of how much these weapons could destroy and how easily life could be taken from someone. I also believe AI should not be used because of what was explained in the documentary, which showed a girl who would have been killed by an AI because of its coding about the laws of war. This shows just how different humans are from AI, as artificial intelligence does not have the same moral code or ethical conduct that any human would have. From what we know scientifically about AI and its potential for going wrong, I believe AI should be banned from ever being used in war by any country at all. This is because AI needs specific language in its code for what it can or can't do and will follow those rules no matter what; yet we as humans know that there are always exceptions to some rules, which sometimes don't have to be followed or should be adjusted, and not allowing for this could cause a lot of moral problems and even increased tensions. Furthermore, if the use of AI weapons becomes more widely available, it would be very easy for people to start using the technology for very wrong things, including terrorism or personal revenge. In all, I think AI could be a useful tool we as humans can use, but it should have its limits, as we can lose control over it very quickly.

I agree with this person's belief that AI does not have the same moral code as humans. However, while the use of machines in warfare could cause a great number of deaths, it is also important to acknowledge that we are not at that state with technology; AI is still developing. When looking at videos online of things such as deepfakes, it is evident that they have become terrifyingly realistic, especially given how many elders seem to believe these videos are real. It is also important to acknowledge that AI is being used in what I would consider an ethical way. Militaries are using AI somewhat like spy cameras, having tiny robots travel around different territories to see what's going on and how that information can be used to their advantage. This is a natural part of warfare, even if it may disrupt different communities in ways that wouldn't be considered morally right. These tactics are the same as espionage, but with less risk to the humans who act as spies. This, I believe, is a good use of technology, even if it puts countries with fewer resources in a worse position. The truth is that with warfare, whether with AI or not, there will always be an advantage to the more developed, and they will always take advantage of their resources.

To make their next response better, I would recommend that my peer make separate paragraphs with different points to keep it organized and comprehensible. I would also double-check the spelling.

traffic cone
Boston, Massachusetts, US
Posts: 13

Peer Feedback

Originally posted by souljaboy on May 29, 2025 12:56

The ethical considerations of using AI in education are that it doesn't help students learn and it defeats the purpose of teachers building lessons around the topics they're teaching. I don't think it's necessarily wrong to let AI influence some of our ideas when it comes to education; however, basing your opinions solely on what AI says crosses the line into being ethically wrong. Some of the structural issues that have made students rely on AI are mainly a result of overworking students and making them do tasks that aren't exactly "necessary" for the main course. Another reason students rely so much on AI is that a teacher may not be doing their best job of educating their students and keeping them up to date with news or with methods to complete tasks more efficiently. I think that schools should still help students train in-person skills to develop quicker and more critical thinking without the use of technology. It prepares students for later in life and sets a foundation for what you should be able to accomplish in college and beyond. I believe that networking and making genuine connections will become a lot more valuable. Meeting people face to face and conducting in-person interviews is one of the best ways to work around AI and get to know a person better, on a deeper level. I feel that, especially for introverted students, having to go to an interview will push you to communicate directly with the employers or whoever is interviewing you. I think that the use of AI definitely has the opposite effect of incentivizing students to participate in class, because why participate if you can learn everything from a chatbot? I think that this also relates to boredom in classes that use computers. I believe that if a class doesn't utilize computers often, then it's more likely that the students will pay attention and participate.
Teachers should definitely be punished in the same way students are if they're caught using AI. This applies mainly to lesson plans, because why would you want an artificially generated lesson plan when there is a teacher there who is supposed to create one themselves?

I agree with the student's view when it comes to opinion-based writing. I believe that it is not ethical for a student to use AI when they are supposed to form and create their own ideas. This is because AI is extremely dangerous in that it provides information drawn from the entirety of the internet. This may seem inherently helpful; however, it proves to be harmful, because access to all information means potential bias. For example, a source that a generative AI uses could be proved inaccurate, thus creating an opinion for the student that they might not even agree with. Additionally, I agree with the student with regards to networking, because human connection is something that generative AI could never reproduce. If schools spent more time on networking, this would be better for students because they would gain this skill without AI. I would like to add that I think AI helps students become more comfortable isolating themselves, which is harmful because it does not encourage them to go outside their comfort zone and connect with new people. If this continues, more and more people will become isolated, resulting in a lack of human interaction and an increased dependence on technology.

phrenology12
South Boston, Massachusetts, US
Posts: 14

Peer Response

Originally posted by Tired on May 30, 2025 13:42

Originally posted by traffic cone on May 29, 2025 13:12

With regards to education, AI should be heavily restricted and monitored for both students and teachers. If AI is used to complete an assignment, that is grounds for plagiarism, because the student is submitting work that isn't theirs, which is the same as copying answers from Google or from another person. However, I think AI can be used as a tool in the sense of using it for studying. AI is able to explain difficult topics so they are better understood, so I believe it's justified if a student uses AI for that, compared to using it to complete their work. Additionally, if a student isn't allowed to use AI, then the teacher should be held to the same standard, especially in college when the students are paying for their education. When a student pays tuition, it is expected that they are being taught by the teacher, not by AI. The BPS proposal plan does not directly mention the consequences educators could face if they rely on generative AI for their curriculum plans, and there should be a general baseline for what AI can and shouldn't be used for. Specifically, AI should not be used for opinion-based writing; it takes away one's creative capabilities when one begins to use AI as a framework for completing assignments. It also proves harmful because students would begin to rely on AI time and time again, worsening their ability to think for themselves. Another reason this is harmful is that AI has access to all information on the web, meaning there can be inaccurate information, and if a student relies on this, they could be presenting inaccurate facts in their work. A teacher should have one designated rubric for assignments completed with the aid of AI and another for those completed without it, because the work would not be comparable, so it's unfair for a student who has access only to basic information to be graded on the same scale as a student who used AI.
This problem could be addressed if teachers made separate assignments for AI and no-AI work, so that students do not rely on generative AI for their education.

I agree with the idea that AI should be used as a tool instead of a way to completely copy answers, for both the educator and the student. I like that you mentioned that it would be unfair for students who have paid tuition to be taught by AI; that would be a scam, because AI is free and easily accessible to anyone through Google's new AI feature or any AI website.

However, I would disagree with the idea of AI being used as a studying tool. Unless it's specifically a studying website that generates automatic flashcards, like Quizlet, I find it really difficult to study with AI because it feeds you a lot of information. As you mentioned, some of this information may not even be entirely accurate, so studying feels like a waste when it's a gamble whether or not the information is reliable.

I do like your solution of having less AI by making two different assignments, one completed with artificial intelligence and another completed by the student alone, in order to compare them and develop a sort of rubric/blueprint for what does and doesn't seem reasonable to expect. However, encouraging AI in the classroom may cause more harm than good, because it will make students who aren't even aware of AI aware of it, and they may begin to use it on other assignments. I understand that having some assignments be AI-based and some not will make students feel less enticed to use it on the non-AI assignments, but I also think it may have a lasting negative effect on students who are already doing assignments normally. It may lead students to develop the bad habit of using AI in other classes, or in college and their future jobs. Overall, this post had many good points about moving AI toward a healthier dose of usage, but for now it seems highly unlikely this can be executed, due to the many loopholes and workarounds students can easily find.

I really agree with the majority of the statements made in this post. I feel that using AI for simple fact-finding assignments is not a big deal, considering that the student would be using Google or some other source to find the information either way. However, I think that when it comes to opinion-based writing, AI should not be used. Real penalties should be given when it comes to personal writing, since that is not the student's own thoughts but instead the words of many others massed together. I have known teachers who have used AI to grade, and I even have a teacher this year who creates our quizzes and tests using ChatGPT. Teachers should be held to the same standards as students, but not to identical penalties, since they are doing two different things. I agree that the Boston Public Schools proposal for how to deal with artificial intelligence is very lacking on the educator side. I am not exactly sure how to deal with AI and its development, since it is advancing at a very fast rate.

succulentplant
Boston, Massachusetts, US
Posts: 15

LTQ 9: The Ethics of AI (Peer Response)

Originally posted by questions on May 29, 2025 13:11

First off, great points! I don't have any suggestions related to mechanics or anything else to improve your post; every idea you offered was clear and complete. I completely agree that AI can be a beneficial tool for students to use for support and feedback rather than solely for cheating. I also agree that reliance on AI is harmful to people's development, especially if the tool is being used for almost every aspect of life. Your point about how using AI for academic support rather than a human tutor can be detrimental is an interesting one, and it also speaks to how AI tools can do more harm than good. I also thought that the system you developed for implementing AI in class was unique and a good way to allow students to use AI while ensuring academic integrity. Overall, good work!

Using AI in school is acceptable to a certain extent. AI is very helpful for things like checking work, receiving feedback, and even studying. AI is a resource that students should be able to use at their own discretion, assuming that it is being used fairly. This means that AI is not used to complete entire assignments or to cheat on tests. However, these two uses of AI are very common among students today, due to the fact that many schools give students more work than they can handle alongside all the extracurricular activities they do. Using AI to do entire assignments, projects, or tests is cheating; using AI to receive feedback on assignments or as a study tool is not. This is specifically outlined in the BPS draft AI proposal, where the acceptable uses of AI relate to using it as a support tool. It can be very helpful as support, but that also means AI is replacing the job of a person who would have been the support. Not having that human interaction will cause students to have poor communication and discussion skills, which are two very important skills in the world beyond school. Schools should prioritize these skills because of their importance and because doing so ensures students can still think critically. Since AI will cause communication skills to drop, it won't be surprising if employers start to prioritize these skills, which could become even more important than having extensive experience or a degree in a certain field. Anyone could have used AI to get through college, so having a degree probably won't be as impressive as having presentable skills like communication. Even if students are allowed to use AI, there isn't really a way to monitor AI use among students. There are tools like GoGuardian, where teachers can see the screens of their students, but there are ways to get around this, and no teacher is going to spend all their time monitoring screens.
There would need to be an AI platform that limits what questions can be asked and keeps recorded conversations. That way, if there is ever any suspicion of AI use, teachers can go back to the recorded conversations between the student and the AI to know whether it really was AI. If students are limited in their AI use, I think teachers should also be limited. They can use AI to grade things with one correct answer that can be put into the computer, like multiple-choice questions, but essays should be graded by the teacher. Essays are unique to the student and are written with a lot of time and effort. Teachers using AI to grade something like this undermines the work students have put into their essays and will likely lead students to put less effort into them. Although the work can be tedious, both teachers and students should be limited in using AI.

