posts 46 - 60 of 61
WoahWoah
Hyde Park, MA, US
Posts: 18

Originally posted by mydoglikescheese on May 29, 2025 10:05

Currently, Boston Public Schools has a statement in review detailing the use of AI in grading. Though it has not become official yet, this is the scary reality of what the future holds for our use of technology. I believe that we are beginning to become too dependent on AI in general, and this is problematic because it takes away from the human experience. If AI were to grade work, this would mean that the only value the teacher is attaching to it is the grade, not the feedback, or the growth, or the experience. It diminishes the student's experience, because it means that there is no human audience that they must appeal to. While AI can be a starting point, I believe that integrating it into our school systems would be detrimental to these communities.


Going off of AI in school settings, I believe that it can never measure up to the human experience, simply because of the way it is built. Human relations are built on shared emotions, ideas, and connections. AI can never experience life the same way we do: we have evolved to be this way, while AI was built. This major difference also comes from the fact that it is easy to manipulate AI to say what you want it to say. A chatbot is simply based on an algorithm, not past experiences, which sets it apart from human emotions. Chatbots are becoming a scary reality. In a way, chatbots have become an outlet for parasocial relationships, minus the human connection. People are becoming dependent on them, creating deeply unhealthy relationships where emotions and feelings are never reciprocated. People become stuck in a cycle, craving more yet never feeling satisfied, which is why they go on to seek validation. Even in the movie “The Robot Killers,” it is clear that these remotely operated technologies have no real connection to people. AI has also been proven to have negative environmental impacts, ramping up deforestation. All of these negatives outweigh the positives, which is why I believe that the way we are using AI is terrible for humanity.

I agree with the point that you make about AI grading work. Often I've taken tests and only gotten my score back, with minimal feedback or instruction on what I could have done to improve. This takes away from the learning experience; learning from your own mistakes is a crucial part of learning, and AI grading removes a core principle of what education is. AI has already begun to take over, and fully integrating it into our education will only make matters worse as public education is already worsening. Becoming dependent on AI will strip humanity of what makes it human; we'll lose our ability to think for ourselves and our individuality as a whole. AI is dangerous: not only is it not always right, but it is easily influenced by certain factors. AI can even be manipulated by the user to say specific things whether or not they're true, and if the person using the program can manipulate the AI, we can only imagine the levels of control that the people programming it have. Forgoing your independent thinking in order to be reliant on another human's creation is the first step toward blind compliance and could possibly be considered a form of brainwashing. If AI continues on, just like you've said, I think it will have a terrible impact on civilization, destroying real bonds between humans that could have flourished. AI will take humans away from reality while destroying the environment that we have around us. How dangerous AI can become is immeasurable. I agree that all of the negatives outweigh the positives, and yet I believe that there's no way we will ever go against it.

make_art_not_war
Boston, MA, US
Posts: 15

LTQ Response

Originally posted by star fire on May 29, 2025 10:00

The current structural issues of our education system have greatly contributed to students' reliance on AI as an academic tool, including but not limited to: the lack of solid teaching, the lack of teacher and student interaction, and simply the lack of caring. From what I've seen, when students use AI it's either because they are lazy or because they simply don't know what to do and how to do it, and that stems from their teacher not teaching them correctly and giving them a set of instructions to follow. Even if you provide guidelines, the student still has to be able to understand what those guidelines are in order to execute them correctly. I've also noticed that teachers seem to care more about catching students using AI than finding out why the student used AI. It's almost as if they get a rush from catching students and giving them failing grades. If it's a repeated action then yes, that is warranted, but finding out why the student turned to AI in the first place could contribute to decreasing the use of AI. AI should never be allowed to make autonomous decisions without human oversight on combat missions. I remember in the film that we watched in class that a child of the opposing side was sent out to scope the scene, and the soldiers said that they would never have thought to shoot the child because it is simply inhumane. However, AI only reacts to the guidelines given, and that child would fit the description of an intruder, of an enemy combatant, and it would have shot her. AI cannot differentiate between good and bad and cannot see that there are situations that aren't always black and white, so why should it be able to make autonomous decisions? I find it scary to think that AI might reach a point of disobeying human commands, but that is a high probability. In the film we watched in class, it says that AI learns based on scenarios that are given to it, so what if at some point AI decides that the decisions that humans make are too "soft" and it decides for itself what action will result in the best outcome?

I think that the most compelling idea in this post is your take on how structural issues in schools have contributed to the use of AI technology. While it is easy to say that a student has used AI simply because they are "lazy" or not willing to put time and effort into their education, this can often reinforce negative trends such as the use of this technology, because ultimately we are not getting to the root of the issue. I also agree with the idea that teachers recently have been very concerned with catching students in their mistakes or cheats, but this is harmful because it discourages students and does not reinforce the idea that they are capable of completing the task on their own. Another part of this response that I found really interesting was the point that AI can only see in black and white, not grey. I totally agree, and I think this is why the widespread use of AI in different fields such as the military, or even everyday life, is detrimental: it would not be able to recognize the nuance and complexities of a given situation.

Zinnia
Posts: 15

The Ethics of AI in Education Peer Response

Originally posted by WoahWoah on May 29, 2025 09:41

The way that our current structural issues have contributed is by taking away the experience of learning and making it simply about the answer; school has become too dependent on the overall grade instead of the student's learning experience. Choosing between cheating and getting an A, rather than struggling in a class and receiving a poor grade, isn't a difficult choice to make, especially considering that colleges judge you by the grades you obtain throughout your four years of high school. I think that allowing AI to influence our opinions and thoughts on the world around us will result in us being easily manipulated. AI is programmed by someone else; allowing ourselves to be controlled by AI essentially allows us to be controlled by other humans, tearing away our individuality and ability to be our own person. I think that the widespread use of AI tools challenges traditional definitions of academic integrity because AI can be used in manners that aren't dishonest. AI could potentially be used as a study resource with unlimited potential. At the same time, however, AI can also be used to simply give answers and not learn anything at all. Depending on how it is used, I believe that AI can be a helpful tool instead of hindering students' ability to learn. I think that schools should prioritize in-person skills like discussion and communication, because these are the skills that many people lack. The key to gaining real experience in life is the body of work that you have, but without being able to communicate it, your chances significantly worsen. If we lose the ability to communicate with one another and think critically, life will only become more difficult. Communication is one of, if not the, most important skills to have when it comes to entering the real world, whether it's advocating for yourself or presenting yourself in the best light during an interview. I think that when it comes to employers, what actually matters is work experience; as we've seen, employers are moving more and more away from the grades you have on paper. Work experience and the way that you present yourself have become more important over time. This slightly works against people who are introverted or struggle with social interactions, but networking and personal connections aren't the absolute end all be all. I believe that even with these changes, it's not a disadvantage to those who struggle with social interactions.

Hey WoahWoah! I really enjoyed reading your response because I answered the same discussion question about the ethics of AI in education and we agree on a lot of aspects. I definitely agree that a lot of education is not focused on the students’ learning or understanding of a topic, but rather geared towards memorization and test scores; however, I don’t think that that is only an issue with the school. Especially in exam schools like BLS, students are also more obsessed with the letters on their transcripts rather than their own education, and that’s why a lot of students sign up for classes they aren’t interested in and end up doing worse in school. All of these factors, from systemic issues in the school system to peer pressure to get good grades, contribute to this exponential increase of AI use in schools. I also agree that our curriculums need to adapt to this new technological reality if students are actually going to keep learning, and I think that the only way to do that is to prioritize skills like discussion, communication, and critical thinking, which are really what make us human and what allow us to contribute to society in meaningful ways.

Vonnegut123
Boston, Massachusetts, US
Posts: 15

Ethics of AI Response to map

Originally posted by map on May 29, 2025 10:26

BLS, BPS, and the American school system as a whole foster an environment that drives kids to cheat themselves and use AI out of desperation. Students have the pressure of making good grades constantly hanging above them, especially at Boston Latin where everything is so cutthroat and competitive. Students are convinced that they need to go to an elite college to be successful in life, and believe they need perfect grades to be accepted. Thus, it can feel like their entire life is on the line when they believe they are incapable of earning an A on their own. Coupled with outside pressure from overbearing parents, students are driven to use AI out of desperation. They sacrifice their own opportunity to genuinely learn because they feel it is necessary. This problem is worsened by the fact that BPS does a poor job of preparing many of its students for the rigor of its exam schools.

This problem is not unique to Boston, however. It is clearly tied to general societal shifts in education, such as the No Child Left Behind initiative and the abandonment of humanities education in favor of STEM. Standardized tests have led us to value performance over progress. Getting a good mark is more important than actually learning material. This is also reflected in STEM-centered education, which emphasizes computational skills rather than critical thinking or analysis. This furthers the climate where students are driven to use AI because knowing the answer is more important than understanding the answer; the thinking is absent. This devaluation of critical thinking skills leads to increased reliance on AI.

Schools also drive students to use AI out of pure disinterest. An education system that values statistics like achievement and college acceptance rate over student interests and active engagement fosters a “learning” environment where students are given no reason to care about learning at all. Thus, they are led to AI out of laziness arising from boredom. If schools valued their students’ needs above their own prestige, performance, and ranking, we would not see so much AI usage.

Schools need to take AI punishments more seriously. If there was a real threat of expulsion for using AI, students would be motivated to do their own real work, even if it meant not getting that A+. Though extreme, they would recognize that expulsion poses a bigger threat to their college acceptance than a B-. Similarly, teachers should be fired for using AI. They are paid to think and paid to teach kids to think. If they let AI think for them and encourage their students to do the same, they are shirking the basic responsibilities of their job and taking money for work they are not doing. If students wanted AI to give feedback on their papers, they would just ask it themselves. There is no need for a person in between.

Ultimately, the rise of AI in schools needs to be taken more seriously, as it reveals deep flaws in our education system.

Hello map! I enjoyed reading your response. Firing and expulsion may be a bit too draconian as punishments for cheating with AI, but I agree with the sentiment behind them: both teachers and students must learn without AI. Without such severe punishments, it is very difficult to prevent people from using what seems like a shortcut to good grades. Now that school districts and students have started using AI, it will be much harder to go back on what is common practice. National organizations like the College Board, the gutted Department of Education, and prestigious colleges should have led the way against this as well, but due to politics, incompetence, or a lack of foresight, AI and ChatGPT are used broadly. We will see what BPS comes up with soon and what other major cities do in response to AI. Let's hope for the best, and even if AI is permitted, we can hope integrity keeps many from sliding backwards intellectually.

fishgirlbahamas
Boston, MA, US
Posts: 15

Originally posted by historymaster321 on May 29, 2025 10:29


Humans can feel and can experience; robots cannot. Humans have felt the unimaginable joy of belly laughing with family, the power of grief when losing a loved one, or even the pride of earning something after working incredibly hard for it. I believe that there is no real way to program this into a robot and no way to describe these feelings unless one has actually felt and experienced them. The most special thing about being human is the way that we get to feel everything so deeply, even if we don't realize it. These shared experiences of feeling, which almost everyone has had at least once in their life, allow us to connect with each other and form relationships. AI will never truly be able to replicate or teach these things because of how personal and unique, yet also common, these experiences are. Humanity's ultimate role is being the creator, not only of other humans but of our society. As humans, we create the atmosphere, we create the environment, and we create our society. There are so many factors that go into creating this space, especially those that differentiate us as humans: for example, how we were raised, the outlooks and perspectives we have on life and our society, and even each of our own opinions. AI cannot mimic or recreate this space or the things that make it. These are the kinds of things that can't be described and are mostly just a relatable human experience that almost everyone can understand. AI also cannot replace human interaction. Human interactions involve social cues, behavioral cues, and context cues that are learned through having conversations with people. As humans, we are able to figure out how to respond in conversation based on these cues. As I am writing this, I am struggling to even define what it is that allows humans to have and create conversations because of how basic yet also unique it is to us. That makes me wonder how it would even be possible to code every single kind of conversation into AI. Experiencing life, no matter where you come from, you will meet a lot of different kinds of people and may have to have different kinds of conversations with each of them. There is a way to talk to someone who is younger than you and a way to talk to someone older than you with more authority. There is a way to talk to someone who is struggling with something, and depending on what they are struggling with, there is a certain way to talk to them in each case. AI is kind of like an animal. Most animals can't feel and don't function like humans, which is okay, and they still live safe, productive lives. We have not tried to enforce our ways of living onto animals and force them to feel and experience as we do as humans. AI should be treated no differently. Factors such as conversation, connection, and experience are so uniquely human. The basic aspects of each can be explained in order to better understand them, but the deeper meanings that make us more human, and more relatable humans at that, are much harder to explain; they can only be understood if one has already understood them through prior experience.

Hi historymaster321,

I loved your response and agree with a lot of it, specifically your points on how emotion is what makes us human and how a robot can't experience that or know what it means to be human. Unfortunately, I think that AI has become so advanced that it can recognize and mimic human interaction, as evidenced by our reliance on it for school, advice, and even therapy. But you are right: there is something so special about human conversation, like picking up on the subtlety of sarcasm, humor, and emotion; it's what makes human interaction enjoyable. There is currently ongoing research to see if AI can develop consciousness, and the possibility of that hasn't been ruled out yet. If AI can do that, then everything that we know about human interaction and everything that we just talked about will be obsolete. How far can AI go in replicating human interaction and presence? I mentioned the same thing in my post about how we don't know what it means to be human, so how will a robot be able to do that? There are so many aspects of this topic that make it incredibly hard to unpack, but overall, I agree with your response and continue to wonder if there are any pros and how we can stay aware of the cons.

historymaster321
Hyde Park, Massachusetts, US
Posts: 16

Response for The Ethics of AI in Education, Everyday Life and Warfare

Originally posted by fishgirlbahamas on May 29, 2025 10:05

What does it mean to be human? Not even we know. How is a robot or computer meant to replicate something that humans can't even understand? No matter how far along AI comes, human emotions and traits may seem to be replicated, but they are not. That is what is so dangerous about it: it provides a false narrative of emotional support. People will begin to rely on it emotionally, which will damage the way we interact with other humans. While some may argue that it can be beneficial to your social skills, I disagree, because AI is meant to feed you what you want to hear, and this directly goes against how we function as humans. Humans are meant to disagree, challenge, and debate each other to gain different perspectives, but AI just provides you with the information that you want to hear. Intellectual professions such as medicine or law should not allow AI when studying, meaning that cheating with AI on homework or tests should not be allowed. People's livelihoods are at risk in these professions, and it is imperative that the doctor or lawyer knows what they are doing. However, in cases of surgery, if a robot can give a patient a higher chance of living, then that should be exercised. There are a lot of special cases where I think AI can be utilized; it's just a matter of whether humans can use it for the greater good of the people. Technology has already stifled human interaction; we resort to dating apps, Instagram, and Snapchat to connect, and the added factor of AI could result in no one talking with each other. Using AI as a comfort for humans feels incredibly dystopian because there are so many warning signs around it; whether it's portrayed in Hollywood movies or books, we know that using robots and AI can have hazardous consequences. We already see the consequences facing students in school settings, an increase in suicides, and potential war crimes. We, as a society, have functioned throughout our entire history without AI. Why should we start now? At the same time, there will always be technological advancements that we cannot prevent. If we ban AI and robots, does that include self-driving cars, etc.? The tricky thing when talking about this subject is that there are so many loopholes, so even if we choose to use it, how can humans ensure it is being used properly and safely?

I agree with my peer's ideas stated here, mainly the ones depicting that AI cannot replicate, nor could it ever compete with, humans and their actual ways of life. It's crazy how some think that a robot can do and comprehend things that not even humans can completely do themselves yet. Regarding the things that we can't do yet, AI is a great solution, as it has the ability to work efficiently. But I agree with your point about how dangerous this trait of AI truly is. As humans, we could easily become increasingly reliant on its ability to do almost anything and begin to abuse its power. AI should not be used in schools where it serves as a resource for students to cheat or complete assignments in an unethical way. But if it is being used for research and will actually benefit the student in the long run, without damaging their knowledge and understanding of the overall content of the course, then I think it should be used. Still, the many generations before ours were able to complete their assignments successfully without the use of AI, so it doesn't make sense why we would need it now. I ultimately believe that it will just harm our knowledge and our overall drive in working toward goals in a given class.

Kvara77goat
Boston, Massachusetts, US
Posts: 15

Originally posted by banaadir on May 29, 2025 10:16

AI cannot feel compassion. No matter how much it may try to convince you, it does not care about you, not in the way a human could. It’s sad to watch people— mostly those that are socially isolated— turn to AI chatbots as a last resort. It’s sad to know how easy it is for people to fall victim to the growing number of people who use chatbots as friends, and people who use chatbots to engage in romantic relationships with them. AI tells people what they want to hear, it is undoubtedly perfect, compared to a real human being with flaws. Some people use AI ‘relationships’ to “build confidence” before getting back into the dating scene. However, doing this will only heighten your standards. In a real, healthy relationship, there are arguments. You won’t fight with AI, as it will just agree with you. It will make people begin to prefer AI over actual human beings.

Similarly, people use AI as their therapist, their parental figure, or maybe even their friend. There’s a popular website known as “character.ai” that allows people to use chatbots to communicate with their favorite characters (or celebrities, if they’ve developed an extremely parasocial relationship with one). This website is mostly used by the emotionally vulnerable teenagers that it deliberately targets. From the hyper-sexual ones to the ones that lack a proper familial figure in their lives. These teenagers role-play with the chatbots to feel any sort of happiness. It’s so dystopian to watch people post about their favorite chatbots on social media, casually normalizing the usage of such websites. Some have even stated it’s become an addiction. A post I saw recently from a recovered drug addict stated that using that website felt so similar to doing drugs. It gave their brain the dopamine that the drugs gave them.

At some point, we’ll have to accept the integration of AI into daily life, but it’s something that should be worked on properly. With the way humanity is now, AI will absolutely be misused, and there’s not really much that can be done about that.

I really like the way this response takes a tone of certainty and opens with an intriguing premise. I fully agree with the way this person takes up the argument of "AI is not human, and we cannot start treating it like one," and I also used this argument in my response. They did a good job including evidence and incorporating it into their argument, especially when they talked about using AI for dating and friends. I agree with this point because I feel that treating AI at the same level as your friends is extremely dangerous for a lot of the same reasons already cited: the fact that humans are not perfect and arguments will arise is what is so inherently good about our relationships, and that is something AI will be unable to replicate, since it doesn't have feelings of its own. Again, this person did a good job including the evidence about AI chatbots, and overall the piece is very well written. Unfortunately, I do agree with the idea that AI will almost certainly be misused.

mrgiggles!!
Roslindale, MA, US
Posts: 15

Originally posted by cherrybacon on May 29, 2025 11:14

I believe that it is sometimes ethical to use AI within education. A lot of the time at BLS, the teachers put too much work on students within one period of time. Unlike 7th and 8th grade, when there were clusters and the cluster teachers would talk among each other to figure out when to assign tests and projects so the load was manageable, the teachers now just assign what they want when they want to. And oftentimes there is a lot of overlap at the end of the terms when the projects are taking place. For example, last week I was working on four different projects for four different classes, and on top of that we're still getting regular homework. It gets really difficult to try to manage doing all of these things while still juggling our own personal lives. So students turn to AI in hopes that they will be able to complete all of their assignments and everything will work more smoothly for them. I wouldn't say that this is an ethical use of AI, though, but it could be avoided if teachers were more mindful about the workloads they give. On the other hand, I have used AI for learning for reasons that I believe are ethical. When I'm struggling in math class and I don't understand how to do a particular method for solving a problem, I turn to ChatGPT and have it explain it to me and break down the steps. Often this helps me more than going to another peer for help. I then apply this method to other problems and I'm able to do those types of questions. Another example that I feel is ethical is when writing a paper: if you're struggling to find ideas and you turn to ChatGPT for ideas to write about, that's okay; not just straight up copying its response to the prompt or using the quotes it provides or anything of the sort, but using its output for inspiration within your own essay. On another note, I think networking and personal connections should be valued more by employers, because now so many people are gaining their degrees through cheating, so it wouldn't be fair to compare someone who's getting an A average in college using ChatGPT to someone who's getting a C average but doing everything by themselves.

I definitely agree with everything that you mentioned, a lot of which I discussed in my own response. While the use of AI may not be ethical, it is important that we look at what may be pushing students to turn to AI. I really like the comparison between middle school at BLS and high school, as that is something I didn't even consider. It's a very telling example, too, since I feel like students really start to use AI when they get into higher grades. In middle school, the teachers tried to be mindful of test administration and homework workload, but as the years progress, those things aren't really taken into consideration anymore. I understand that the students are getting older and must learn crucial skills like time management, but they are also getting jobs, exploring new extracurriculars, and helping out at home more. Because of this, I think that a lot of high school students may turn to AI out of desperation. However, as you mentioned, this doesn't make it an ethical use of AI. I also appreciated how you included a real example from your own experience. I agree that AI can be used as an academic tool, like getting extra help on a math concept or brainstorming ideas for a paper. When you aren't being dishonest and simply copying everything AI tells you, it can definitely be used as a means to help you learn. You also make a compelling point about those who use AI all throughout college versus those who don't. It certainly is not fair that people who may not do as well in a course but really do put in the effort are compared with straight-A students who only cheat. I do think, however, that those same C students who worked hard in college will be better prepared for life beyond college than the students who cheated their way through everything. There are certainly both advantages and disadvantages to AI use, and I'm glad you touched on its role in middle school, high school, and college.

questionably123
Boston , Ma, US
Posts: 6

Originally posted by asianwarrior27 on May 30, 2025 02:22


Response: I like how you opened your Learn to Question by addressing the positive aspects of AI while also highlighting all that it lacks. It is really interesting how you highlight all the human qualities and aspects that AI lacks that are necessary in real life; I feel like often people just talk about all that AI does better than humans. I feel like the emotional and empathy aspects truly show how AI can never truly replace humans, and it is also interesting how you were talking about AI chatbots replacing humans; that goes to show how AI can push people to become more lonely and lacking in human connection and interaction. Another part of your response that I found very interesting was when you were talking about how AI would cause a divide, and I agree a lot. I feel that if we become too reliant on it, that would result in a dystopian society like WALL-E, where the human race becomes less capable. Overall, I loved your response and thought it was thought-provoking.



Accessibility to AI has opened up many opportunities for innovation and education across diverse fields; however, it cannot replicate the depth of human empathy and ethical judgement that comes from experience. Qualities like empathy, vulnerability, and the ability for self-reflection can't be imitated by AI, as they are tied to lived experiences and personal growth. AI can mimic human behavior and thoughts, but it can't feel human emotions. This may seem beneficial to people who are more introverted and aren't excited by the thought of interacting with other humans, and they may think that interacting with chatbots alone can satisfy their emotional needs, but this is a dangerous illusion. Relying on AI for emotional comfort risks creating a society where people feel okay never confronting their problems or discomfort because there's constant validation from AI programs. If AI replaces emotionally and intellectually complex roles like friends or teachers, then this would lead to a decline in critical thinking skills and the willingness to engage meaningfully with others. In the article "Your Chatbot Won't Cry If You Die," it was stated, "But researchers believe that part of loneliness comes from the fact that an increasing number of people don't feel needed." If more people continue to rely on AI for emotional support rather than humans, then it can contribute to a vicious cycle of loneliness where no one feels wanted or needed. Human connections are built on mutual effort and growth, none of which AI requires. Furthermore, as AI technologies become more integrated into wealthier societies, a starker divide begins to emerge between those who have access and those who do not. Rather than bridging the gap, AI can widen it. AI is a powerful tool that should be used for assistance and efficiency, but it shouldn't be a replacement for what makes us human. A society where people prefer talking to machines over humans is dystopian and ultimately dehumanizing. Humans need to be challenged and should reflect on their flaws, and resorting to AI for everything creates a bubble where egos are constantly affirmed and humanity is neglected.

map
Boston, Massachusetts, US
Posts: 15

Originally posted by banaadir on May 29, 2025 10:16

"...It’s sad to watch people— mostly those that are socially isolated— turn to AI chatbots as a last resort...AI tells people what they want to hear, it is undoubtedly perfect, compared to a real human being with flaws. Some people use AI ‘relationships’ to “build confidence” before getting back into the dating scene. However, doing this will only heighten your standards. In a real, healthy relationship, there are arguments. You won’t fight with AI, as it will just agree with you. It will make people begin to prefer AI over actual human beings...
...It’s so dystopian to watch people post about their favorite chatbots on social media, casually normalizing the usage of such websites. Some have even stated it’s become an addiction. A post I saw recently from a recovered drug addict stated that using that website felt so similar to doing drugs..."

I completely agree. Using AI tools to “train” our social skills will absolutely cause them to deteriorate rather than progress. If someone spends all their time “building confidence” back by talking to AI, they will develop an unrealistic expectation of human relationships, because AI is not realistic. When a machine bends to your will and concedes every dispute, you will inevitably become entitled. You aren’t becoming confident, you’re becoming egotistical. If someone grows accustomed to always being told they are right, they will never ever be able to sustain a real relationship, because no human is always right, and other humans will point that out. Humans argue and fight; it’s in our nature. To erase this expectation by speaking to AI does not change the reality, but it will change how equipped we are to speak with other real people. People who do this will grow frustrated since, surprise surprise, the AI did not prepare them to speak to real people, and they will crawl back to AI and prefer it.

The mental effects of this isolation are immense--the deeply concerning rise in suicides related to these technologies reveals this. The comparison to drugs is apt because AI similarly alienates people from those around them. They begin to rely on it just the same once they fail to be able to interact with other people, and eventually it drags them to their death if the cycle isn't broken.

EastCoast11
Boston, Massachusetts, US
Posts: 14

Peer Response: The Ethics of AI in Education, Everyday life and Warfare

Originally posted by Kvara77goat on May 29, 2025 10:08

I think AI has become just so easily accessible that it has almost become the standard for many students' work, which is sad to say. However, I do blame this issue, at least somewhat, on the schools. The amount of work many teachers give is so excessive, especially at a school like Boston Latin School. Over time, it certainly takes a toll on the mind and body of a student. This toll will affect everyone–no matter how gifted or talented you are as a student, it is easy to slip into the trap of using AI, and schools must do more to create engaging and time-realistic assignments to combat the rise of AI. This is due partially to the sheer proliferation of AI–it is available for almost everything, and students will use it for almost anything–even something as simple as finding a title for an assignment or writing a joke. This inability to do anything yourself contributes to the degradation of our minds and just makes us unable to think critically or creatively, something that I think will become a real problem with the proliferation of AI. In the article “AI Will Change What It Is to Be Human. Are We Ready?”, Tyler Cowen and Avital Balwit posit that “we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence.” This statement is undoubtedly true because AI can do things that humans simply can't–from generating college-level essays in mere seconds to being able to produce data without any human error. However, what I believe is most important is how we respond to these challenges: whether we cave in to (or become overly reliant on) AI, or continue to fight for humanity and continue to value intelligence.

However, this is not to say that all AI is bad. AI has made clear advancements in certain fields, especially in terms of science and medical research. They can be more surgically precise, as they are not subject to human error. Thus, we must learn to use AI as a tool, and only a tool. Let it help our patients under the oversight of doctors, but don’t let it be the doctor. In essence, we must strike a balance with AI. Use it, but don’t abuse it. Employ it to our advantage in specific settings, such as in hospitals or computer laboratories, but do not let it employ us, and keep our thoughts outside of these fields our own.

The concept of what AI has done to us as a society and the ethics behind it as individuals is so fascinating and could be an endless conversation.

My peer, Kvara77goat, wrote about how the current flawed structure of schools such as Boston Latin School has contributed to the increasing reliance on artificial intelligence, which I also did. Kvara77goat states that “The amount of work many teachers give is so excessive, especially at a school like Boston Latin School. Over time, it certainly takes a toll on the mind and body of a student. This toll will affect everyone–no matter how gifted or talented you are as a student, it is easy to slip into the trap of using AI, and schools must do more to create engaging and time-realistic assignments to combat the rise of AI”. This perfectly captures the environment at BLS: it's competitive, demanding, and sometimes even unrealistic, which results in students feeling the need to grab outside resources, even if they are inaccurate, just to survive a week at this school. Eventually, students' minds are racing for just the answer, instead of using the assignment to deepen their learning and focusing on the journey to the finish line. Artificial intelligence is claimed to be a factor that is detrimental to our education, and I agree.

Not only is AI affecting our academic lives, but it's also becoming more advanced by the day, sooner or later replacing all human relations. The usage and control of artificial intelligence is inevitable, but we must recognize this now so we can do something about it before we eventually lose all sense of connection, critical thinking, and emotion.

Blueshakes56
Boston, MA, US
Posts: 6

Ethics of AI

I think that when AI first came out and was truly accessible to the public, it seemed like a good thing: a smarter, faster, and better way to research topics and get work done without having to research for hours on end. But now that it is so readily accessible, it has taken a turn for the worse; instead of being used as a guide or a helpful resource, it is now being used as a replacement for original thought. Many people have become lazy, knowing that AI will just take care of it for them in the nick of time, but this takes away from our originality. If we are all using the same tool to write our responses, we begin to lose sight of our true opinions. There is no room for creativity and unique ideas when you only use AI. AI cannot think for itself; all it does is scour the internet for information based on what you ask it to do and formulate an appropriate response to your question. And while that may work to answer the basics of what you proposed, it is only recycling ideas that have already been said, published, and discovered. There is no uniqueness like the one that comes from the human mind. This brings up the question of ethics regarding AI and how it should be used in the work that we do. For example, if you are writing your thesis in college and use AI to write it for you, that is essentially plagiarism if you really look at it, because AI gets its information from the internet, where published findings and papers have already been written and acclaimed. Your entire paper is just a recollection of ideas that have already been shared. Even if AI is able to rewrite and reformulate your thesis differently, the core idea of what your paper is about is stolen and not unique to your own interpretation of the information. The continued integration of AI into our lives stops us from growing as a society, as people become more dependent on it rather than utilizing their own skills. We will slowly start to lose critical thinking skills if we continue down this path.

bostonlatin1635
Charlestown, Massachusetts, US
Posts: 13

Originally posted by Kvara77goat on May 29, 2025 10:08

I think AI has become just so easily accessible that it has almost become the standard for many students' work, which is sad to say. However, I do blame this issue, at least somewhat, on the schools. The amount of work many teachers give is so excessive, especially at a school like Boston Latin School. Over time, it certainly takes a toll on the mind and body of a student. This toll will affect everyone–no matter how gifted or talented you are as a student, it is easy to slip into the trap of using AI, and schools must do more to create engaging and time-realistic assignments to combat the rise of AI. This is due partially to the sheer proliferation of AI–it is available for almost everything, and students will use it for almost anything–even something as simple as finding a title for an assignment or writing a joke. This inability to do anything yourself contributes to the degradation of our minds and just makes us unable to think critically or creatively, something that I think will become a real problem with the proliferation of AI. In the article “AI Will Change What It Is to Be Human. Are We Ready?”, Tyler Cowen and Avital Balwit posit that “we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence.” This statement is undoubtedly true because AI can do things that humans simply can't–from generating college-level essays in mere seconds to being able to produce data without any human error. However, what I believe is most important is how we respond to these challenges: whether we cave in to (or become overly reliant on) AI, or continue to fight for humanity and continue to value intelligence.

However, this is not to say that all AI is bad. AI has made clear advancements in certain fields, especially in terms of science and medical research. They can be more surgically precise, as they are not subject to human error. Thus, we must learn to use AI as a tool, and only a tool. Let it help our patients under the oversight of doctors, but don’t let it be the doctor. In essence, we must strike a balance with AI. Use it, but don’t abuse it. Employ it to our advantage in specific settings, such as in hospitals or computer laboratories, but do not let it employ us, and keep our thoughts outside of these fields our own.

I completely agree with the premise of your argument, and it is really important to focus on the material that teachers give and how it should change with the introduction of AI. The moniker of "busy work" is now irrelevant, as AI can do busy work in a matter of seconds. I think that BLS is a standout case because of its culture and its number of high-performing students; the teachers assign a lot of work for each of the six classes, and not much of it is super meaningful. Teachers will assign a long-term project drawn out over the span of a couple of weeks, and then also assign homework every night to do for the next day that would be considered busy work. I also like how you clarified that AI isn't bad, because it isn't. It has made so many advancements in many industries, and it is definitely here to stay.

EX0
Boston, MA, US
Posts: 15

Originally posted by mrgiggles!! on May 29, 2025 10:32

There is no doubt that the use of AI is rapidly increasing everyday, particularly in educational settings, but it is certainly important to consider how current structural issues within our education system contribute to many students’ reliance on AI as an academic tool. One obvious issue is the enormous workload that students must tackle every single day. When a student has extracurriculars, familial responsibilities, long commutes to school, jobs, and hobbies, it is extremely difficult and overwhelming to allocate time for everything. There’s simply not enough hours in the day, and this leaves no time for personal time or rest. One of the most attractive things about AI is its convenience and how quick and easy it is to get a response. So while I don’t think using AI to do all of your work is okay, it certainly is easy to understand why students may turn to AI. On that note, I do think that AI truly can be used responsibly as an academic tool. Students could use it to get more background information or detailed explanations about a topic, to brainstorm ideas, or even as an aid for studying. It becomes wrong, though, when the student completely relies on AI to complete their assignments for them. For this reason, I think that it would be beneficial and valuable to prioritize in-person skills like discussion and communication skills. This would push students to think critically about subjects without automatic input or answers from AI. It’s important that we retain the ability to think outside the box, form our own opinions, and know how to communicate with others. It’s also clear that the huge emphasis on grades in our educational system has largely contributed to students’ reliance on AI. School has shifted to placing importance on the grades that students receive, rather than the material students are learning. The thought of potentially getting a low grade certainly motivates students to turn to AI because they know that it is better and smarter in objective assignments. This intense pressure to get perfect grades causes students to lose sight of the value of the work that they do. Grades have unfortunately become a measure of one’s “success” in school, so it’s easy for students to turn to something that is so easily “capable of doing extraordinary things” (Tyler Cowen, Everyone's Using AI To Cheat at School) to guarantee that success. As AI becomes more and more accessible and prevalent, it’s imperative that we understand how it is impacting students and what issues it may be presenting, but it is perhaps even more important to consider the structural issues in our education system that are at play.

I think that mrgiggles!! gave a very interesting response. I personally took a different stance on AI use in the educational setting, so it was good to read a different perspective on the issue. I think that mrgiggles!! had some good points about the issues with education in the United States and how students are pressured to take on more than they healthily can. I agree that, in theory, AI could be a very helpful tool for students. AI absolutely could be very useful for giving feedback to students and helping with brainstorming. However, in the capacity that AI is being used and how it will continue to be used (that is to generate writing), I think that it is ultimately a damaging thing. The availability of AI has rapidly changed the educational landscape and has made the existing issues within it clear. I think that it is important, regardless of if one believes AI has a place in the classroom or not, that we figure out how to solve these problems so that students get the most out of their time in school. Overall I think that mrgiggles!! gave a very well thought out and reasoned reflection on the AI debate. My own personal stance may be different in application, but we both agree on many of the intermediate issues that led us to our thoughts.

star.lol
Boston, MA, US
Posts: 16

Originally posted by star fire on May 29, 2025 10:00

The current structural issues of our education system have greatly contributed to students' reliance on AI as an academic tool, including but not limited to: the lack of solid teaching, the lack of teacher and student interaction, and simply the lack of caring. From what I've seen, when students use AI it's either because they are lazy or because they simply don't know what to do and how to do it, and that stems from their teacher not teaching them correctly and giving them a set of instructions to follow. Even if you provide guidelines, the student still has to be able to understand what those guidelines are in order to execute them correctly. I've also noticed that teachers seem to care more about catching students using AI than finding out why the student used AI. It's almost as if they get a rush from catching students and giving them failing grades. If it's a repeated action then yes, that is warranted, but finding out why the student turned to AI in the first place could contribute to decreasing the use of AI. AI should never be allowed to make autonomous decisions without human oversight on combat missions. I remember in the film that we watched in class that a child of the opposing side was sent out to scope the scene, and the soldiers said that they would never have thought to shoot the child because it is simply inhumane. However, AI only reacts to the guidelines given, and that child would fit the description of an intruder, of an enemy combatant, and it would have shot her. AI cannot differentiate between good and bad and cannot see that there are situations that aren't always black and white, so why should it be able to make autonomous decisions? I find it scary to think that AI might reach a point of disobeying human commands, but that is a high probability. In the film we watched in class, it says that AI learns based on scenarios that are given to it, so what if at some point AI decides that the decisions that humans make are too "soft" and it decides for itself what action will result in the best outcome?

I think this was really well written, and I agree with the points made. I think AI has become a really big issue, especially in our education system, and has affected not only students but teachers as well. It is important to address why people are using AI, though, because it can be for different reasons: it can be because they don't understand the work or don't know what to say, since there are students who genuinely use AI for helpful purposes and to benefit them, but it can also be because people are simply lazy and don't want to do the work themselves or teach it themselves. I think it can be hard for AI to differentiate in so many situations because it does not act the same as humans and cannot tell certain things apart.
