Boston, Massachusetts, US
Posts: 15
The Ethics of AI in Education, Everyday Life, and Warfare
Originally posted by
crunchybiscuits on May 29, 2025 13:47
There is a great multitude of AI systems. The ones we typically know best are educational AI chatbots, such as ChatGPT, Gemini, or Microsoft Copilot. These are known as language models: someone inputs a prompt, and the AI produces an output. However, these systems go well beyond education. Many have found that since AI will adapt itself to whatever command an individual gives it, it can be used for more social and personal endeavors. Great examples of this are Snapchat AI, Character.AI, or even Instagram’s AI chats. While these interactions between a human and an AI can be understandable, they also point to the human disconnect that the internet and computers have placed on society. The patterns behind this disconnect tend to happen in the same manner: it is not necessarily artificial intelligence that is stunting mental growth, but the growth of available resources on the internet. Before social media and the rise of TikTok, people were far more independent in applying their own knowledge to find information. Because of misinformation and the vast number of people on social media platforms, information since then has become extremely unreliable. We see the same thing happening with artificial intelligence. In River Page’s Your Chatbot Won’t Cry If You Die, she suggests that companies such as Instagram are taking advantage of loneliness on the internet: “But researchers believe that part of loneliness comes from the fact that an increasing number of people don’t feel needed. We’re less essential to our communities. Your friends need you though. They’re not perfect. They can let you down.” Rather than just using AI as a tool for education, people are using it to source day-to-day materials, interactions, and even personal issues. Artificial intelligence can understand circumstances, but never the degree to which those circumstances matter. This is why it not only continues the long regression of human intellect seen in other areas of computer usage, but also erodes our reliance on human interaction. Without human interaction, the spread of information is no longer a journey of self-accountability.
I really like this idea that the disconnect between people is actually due to the broader advancement of technology in our lives rather than artificial intelligence specifically. Disconnection has always existed, in my opinion, with or without the internet. However, the introduction of the internet allowed access to so many more things with a click of a button, letting people stay home without needing to go out, and that is when the disconnection really started. The more technology advances, the more disconnection happens; take DoorDash, for example. With the introduction of DoorDash and similar apps, people have even less reason to leave their homes now that even their own food comes to them. These little things cause more and more disconnection. AI does not do nearly as much damage as these other resources, but because it is the newest thing, more fear is attached to it.

This is an interesting point as well. I feel like people were more susceptible to misinformation before the introduction of social platforms and online information. A hundred years ago you could not get nearly as much information on certain topics, especially political ones. This was the whole basis of propaganda: allowing only certain information through. With the introduction of social platforms and online sources of information, however, far more information can be reached, and even if some of it is propaganda, you have so many sources that you can easily find out whether it is true with a little more research. AI simply helps with this research. It is an extremely quick shortcut to finding information, and it is being demonized because of that. There is no problem with shortcuts, because one way or another you would get this information, and you still get links to sources and other materials if you want to do your own personal research.
Dorchester Center, Massachusetts, US
Posts: 14
Ethics of AI in Warfare - Response
Originally posted by
Lebron on May 29, 2025 14:04
AI weapon systems would be extremely destructive if introduced into warfare. While AI soldiers might cause a decrease in the casualties of human soldiers, the destruction they cause would undoubtedly result in massive loss of life. War involving AI weapon systems would be much longer and more destructive due to several factors. Firstly, AI doesn’t sleep, get fatigued, get hungry or thirsty, or feel emotions. Because AI doesn’t have the basic needs that humans do, war can be waged nonstop for extremely long periods of time. Due to AI’s lack of emotion, it would not hesitate like human soldiers during certain events. AI would instantly kill anyone it perceived to be an enemy, even a child. This is obviously a decision a human would hesitate to make. AI would also do something like bombing a building full of civilians just to kill an enemy. AI’s lack of emotions would also make it impervious to PTSD and to losing morale while fighting. The lack of these things would cause wars to be fought at consistently high levels of intensity even after years of fighting. AI would also lack any feeling of defeat and would never surrender unless explicitly told to do so. This means that AI soldiers would continue to be dangerous and a constant threat if the group that made them didn’t want to lose the war, and that AI could cause destruction even decades after the conclusion of a war. For these reasons, AI weapon systems would cause widespread destruction and death if introduced into warfare.
Overall, I agree with this view of the use of artificial intelligence in warfare; the things that would make an AI soldier different from a human are exactly what would make it more likely, and less hesitant, to take a life. The point this person made about AI carrying out its orders even decades after a war has ended is somewhat far-fetched, but still possible. A scenario in which two warring states use weaponized artificial intelligence against one another could very possibly lead to mutually assured destruction, with neither side emerging as the victor. It is also possible that in the process of this mutually assured destruction, both sides lose the ability to retain control of their respective AIs, causing them to continuously carry out the last order given to them. Militaries across the world, and especially that of the United States, are likely to soon begin investigating the uses of AI in warfare, if they are not already. It is imperative that we, as the people that military is meant to protect, make sure that the use of AI in this setting never reaches a point where it can take life autonomously.
Boston, Massachusetts, US
Posts: 15
Response to The Ethics of AI
Originally posted by
ilovemydog34 on May 29, 2025 13:47
AI is taking over how we used to do things and is evolving every day. Eventually it could be the future, but we must look at what it is now and how it still has detrimental effects on society. Currently, AI presents the biggest threat in education. This is particularly the case in high schools, because that is where children are most equipped with technology. Young people are extremely impressionable, and with the new introduction of AI, it is hard for them to say no to using it when their peers are also using it. With this being said, there are issues within our educational system that make this tool so commonly and incorrectly used by students. The easiest place to use AI is on assignments that are more “busy work” than anything else. Students feel that skipping this work does not mean they are not learning; they are just not doing assignments that seem to have no purpose at all. This can also go both ways: if schools were not giving so much work that felt pointless to students, then students likely would not feel the need to rely so much on AI. If the work were purposeful, then students would realize how much time and effort teachers put into teaching the material, and hopefully they would recognize that it is morally wrong to use AI. The question is: is the use of AI always wrong? The answer is no. AI can be very helpful and can teach and give examples in other ways, but the reality is that most students are not using it this way; they are using it to get assignments done quickly. There are certainly some students who use it morally, but when it gets late at night and the temptation is strong, it is hard to force yourself to do the work when AI is so easy and accessible. This can lead to AI shaping our views and opinions in class discussions rather than us actually thinking for ourselves, which is quite scary, because critical thinking is where a lot of new ideas and discoveries occur for students and help them grow as learners. Many students may not know how to use AI other than copying and pasting homework questions in and getting out an answer, which is why a course where the background and facts about AI are taught would be so helpful for students and could potentially stop them from making bad decisions regarding their AI use.
I agree with the statement that AI is dangerous to young people because of their impressionability, and that most of the reason it is used in school is the busy work we get as students. I can relate to the feeling that I don’t really need to do certain work because of how pointless some tasks feel. Especially at a school like BLS, where the work feels never-ending, AI can help relieve some of that stress on assignments that may not be as important as others. I agree with your statement that this goes both ways: if schools assigned more meaningful tasks rather than busy work, students would use less AI. A course on the background of AI would serve education well and would give students a deeper understanding of when to use AI, creating more awareness around this new technology. The more we talk about it the better; AI is something that will continue to grow and affect our society, so equipping students with the ability to navigate its usage and determine when and when not to use it will be beneficial for education.
Boston, Massachusetts, US
Posts: 15
The Ethics of AI in Education, Everyday Life and Warfare - Response
Originally posted by
Estalir on May 29, 2025 11:02
It is not exactly wrong to allow AI to influence our thoughts, because as it is right now, AI is simply a tool to get information quickly. All it is doing is gathering the info that you could have found if you researched using other methods. While there is a fear that AI could be biased due to how it is coded, that is still the same with other internet resources. If anything, AI is better, because it can gather all sorts of information from multiple places and present it to you, rather than you relying only on the sources you can find, which might not contain all the information possible. Teachers who use AI should face the same repercussions as students, because at the end of the day it creates the same problem. People are worried that people in the future will not be as smart or as capable as we are today; however, most jobs have already normalized the usage of AI in the workplace, including teaching. Many teachers nowadays will use AI to create assignments and/or grade work. This is the same way students use AI: to do tasks that they simply don’t want to do. If we are to punish a student for using AI on a homework assignment, then it is only fair to punish the teacher for using AI to grade the assignment, because the punishment right now is given to prevent the usage of AI, only for us to use AI when we reach another level. However, most jobs won’t stop the usage of AI simply because it’s too big of a current thing. Rather than punishing and banning AI completely, we should instead embrace it by teaching educational ways to use it, similarly to when the internet came out and made libraries obsolete. Many people did not like that, because they deemed books more credible, but with time we adapted, and now the internet is commonly used for many things. This is simply another evolution, and what we need to do is embrace it, not hate it.
I like the idea of embracing AI rather than shunning it and those who may misuse it. I especially like the example provided of how many were hesitant to be dependent on the internet rather than libraries as books were deemed more reliable. Now as a society we have fully shifted towards internet usage and rely on it for many aspects of our lives. I believe that society can shift towards using AI correctly with proper education and information on how to use it in a way that will not harm one’s learning or critical thinking. Yet, it may be naive to think that society is capable of this, especially now that many have become reliant on AI and are comforted by its “benefits”, which may harm their complex thought development in the future. If there were proper restrictions on AI websites and an emphasis on public speaking / discussions in class, society may be able to handle the effects of AI. In the end, AI misuse will always be a temptation.
Boston, Massachusetts, US
Posts: 15
Ethics of AI Response
Originally posted by
Estalir on May 29, 2025 11:02
It is not exactly wrong to allow AI to influence our thoughts, because as it is right now, AI is simply a tool to get information quickly. All it is doing is gathering the info that you could have found if you researched using other methods. While there is a fear that AI could be biased due to how it is coded, that is still the same with other internet resources. If anything, AI is better, because it can gather all sorts of information from multiple places and present it to you, rather than you relying only on the sources you can find, which might not contain all the information possible. Teachers who use AI should face the same repercussions as students, because at the end of the day it creates the same problem. People are worried that people in the future will not be as smart or as capable as we are today; however, most jobs have already normalized the usage of AI in the workplace, including teaching. Many teachers nowadays will use AI to create assignments and/or grade work. This is the same way students use AI: to do tasks that they simply don’t want to do. If we are to punish a student for using AI on a homework assignment, then it is only fair to punish the teacher for using AI to grade the assignment, because the punishment right now is given to prevent the usage of AI, only for us to use AI when we reach another level. However, most jobs won’t stop the usage of AI simply because it’s too big of a current thing. Rather than punishing and banning AI completely, we should instead embrace it by teaching educational ways to use it, similarly to when the internet came out and made libraries obsolete. Many people did not like that, because they deemed books more credible, but with time we adapted, and now the internet is commonly used for many things. This is simply another evolution, and what we need to do is embrace it, not hate it.
Hi Estalir,
I think you are correct that AI currently isn’t providing information that is much different from what one could find oneself online, only quicker. The information, just like most online sources, is likely to be biased, so I agree that it is unfair to call AI worse than other internet sources for being biased, as all sources are biased. However, I think the problem comes down to the fact that because AI is able to find the “right” information so quickly, it could make its users’ attention spans and patience decrease (and we are already in an attention span crisis). Additionally, there is the possibility that soon AI could produce information that you couldn’t have found online, as researchers are working to create AGI (Artificial General Intelligence), which would be able to synthesize its own thoughts. This is an idea that I find especially terrifying. I also agree that teachers should be punished as students are when using AI, for the same reasons you provided. However, I disagree with your last statement that we need to embrace the evolution of AI technology. While AI is certainly here to stay and we can’t ignore it, I also think we don’t need to embrace using AI in every aspect of our lives in which it could be helpful. We did that with the internet and social media, and yes, there have been benefits, but also great repercussions. I think it would be better to approach AI with some caution and policy restrictions, so that we know the full consequences before taking the dive and fully embracing it.
Boston, Massachusetts, US
Posts: 15
LTQ Response: The Ethics of AI in Education, Everyday Life and Warfare
Originally posted by
msbowlesfan on May 29, 2025 13:58
I think that with the increasing number of artificial intelligence chatbots, more parasocial relationships will form and more people will turn to AI for medical or psychiatric help. We have already seen disturbing stories online about deepfakes and humans connecting with AI, and it will likely only get worse as the technology develops. The development of artificial intelligence in things like images and videos has been increasing exponentially just in the past few years. Early AI could easily be pointed out in the media because of how fake and bad it looked, but nowadays it is becoming increasingly accurate and realistic. Soon it may get to the point where propaganda could be created with AI and we would not be able to distinguish whether it is real or fake, which is extremely dangerous considering how heavily fascist societies relied on propaganda. In terms of other uses, because AI has access to all free online information about medicine and mental health, it could commonly be used in the future in place of psychiatrists or doctors by people who can’t afford these necessities. There have been multiple cases of AI insisting that it is a licensed psychologist/psychiatrist, and cases of people turning to AI as a source of companionship, both platonically and romantically. A few months ago there was a young boy who committed suicide in order to “be with” his AI chatbot girlfriend, and while most people aren’t killing themselves because of AI, the romantic aspect of these relationships is still a real thing. The people who engage in these relationships become isolated from the real world and from real people, confiding in robots that are just code spewing back what people want to hear. These parasocial relationships can be dangerous not only to the user’s health, but potentially to real people that the AI is imitating. Some people are already convinced that they’re in loving relationships with strangers on the internet and won’t hesitate to stalk or confront the people they’re obsessed with in real life. AI bots might solidify these feelings even more, which could put anyone on the internet in danger.
Hi msbowlesfan!
I love your response! I completely agree with the dangers of forming relationships with AI and the ways society uses it. I think your point about fascist regimes using AI to generate propaganda is incredibly important to think about, especially given the number of trusting relationships that people have already formed with AI. Do you think we could ever reach a point where humans are so reliant on these machines for connection that they would trust AI propaganda even if they knew it was generated artificially?
I also agree with the problems that AI relationships present to social health insofar as they isolate us from real human beings. I think it could be interesting to think about how the parasocial relationships people have already formed are influenced by AI and the internet age. Does it make us more likely to believe that we are in relationships with people we don’t know in reality? The connection between your point on AI claiming to be a licensed psychologist and parasocial relationships is also incredibly interesting as it could lead to a feedback loop in the future where people who are seeking help with such relationships can only turn back to the source: AI. I think all of your arguments are going to be incredibly important to think about as we go into the future!
Dorchester Center, Massachusetts, US
Posts: 15
Originally posted by
lightbulb89 on May 29, 2025 14:00
I believe that AI will never be able to truly replace humans. There are many things that humans have that AI will never be able to recreate: for example, trusting your intuition/gut, adaptation, creativity, feeling sympathy, etc. As far as science and technology go, I don’t think that AI will ever be able to recreate human feelings to the fullest extent. With AI replacing humans in jobs, I believe there is a high risk of us becoming “dumber” as a society. In my opinion, AI could be used for the better right now, but many people are abusing the power they feel from using it. I believe that once AI becomes more powerful and is used in warfare, there will definitely be misuse of it. If humanity is to play the role of “creator,” it would have an obligation to use AI for the greater good. I also think that if people want to be the “creator,” then there should be some limitations; it shouldn’t just be a free-for-all where whoever wants to be a “creator” can simply become one. There should be a permit you have to get in order to use an AI bot correctly. I disagree with the idea of AI replacing human interactions. I believe that part of our purpose as humans is to interact with one another and develop healthy relationships. With AI in the picture, I believe this would defeat that purpose and potentially destroy humanity as it is. On that note, I believe that AI as a form of comfort is extremely dystopian. Like I mentioned before, I believe that humans’ purpose is to interact and exist among each other, so using AI for the comfort you are supposed to get from a human being is extremely dystopian.
Hi lightbulb89!
I really enjoyed reading your response; it was very well thought out, and you seemed to have an informed opinion that strengthened your argument.
I definitely agree with your initial argument: AI will never be able to fully replace humans, and I think offering AI that position is dangerous and takes away a lot of the values that make us uniquely human. While technological advancement corresponds to intelligence in many cases, I think that AI replacing humans in these fields makes us look “dumber,” as something we have created is able to take our place. There are many consequences to having artificial intelligence in many fields; for example, AI does not need to be paid, which could crash the economy and take jobs away from the lower- and middle-class workers who depend on them. I think your point on human interaction was very well put, as humankind structures itself on its interactions, and our language, empathy, and creativity truly set us apart from other animal species. As we are already seeing, further developments in AI and other assistive technology take away from the aspects of our world that make the human experience so unique, which is why AI cannot and should not ever be a replacement for a human’s life.
Boston, Massachusetts, US
Posts: 16
Ethics of AI Response
Originally posted by
SharkBait on May 29, 2025 13:58
In the past few years, AI’s use in academic settings has sharply increased, which can be attributed to the varying barriers between different groups of students, including social class and accessibility. Due to these barriers, as well as the difficulties that come with legislating against its use, I believe that AI should be solely a tool in the classroom rather than a replacement. AI’s relatively recent use in schools can be attributed to the lack of interest and motivation found in students of this generation, which is a direct and indirect result of the rise of technology in our society as a whole. As the attention span of students continues to decrease, many turn to technology to get their work done or keep them entertained. This turn ends up harming actual learning and continues the cycle of returning to AI to cure the problem of an uninterested student, a problem that runs deeper than the symptoms on the surface, since AI is designed to suit the needs of the students who continuously use it. In “Everyone’s Using AI To Cheat at School. That’s a Good Thing,” Tyler Cowen writes that “as current norms weaken further, more students learn about AI, and the competitive pressures get tougher, I expect the practice to spread to virtually everyone.” Furthering Cowen’s point, the use of AI in schools has become nearly universal and almost inevitable, as it is accessible and normalized. Given this, I believe that we as a society should take advantage of the spread of AI and use the knowledge we have to educate ourselves on its uses and its consequences. A school system that prioritizes quantitative measures such as grades and test scores poses a challenge for many students, and AI often offers an outlet for much of that stress. Additionally, it is extremely important to note that there are students who are unable to afford tutoring and further teaching, which is why AI should be used in a way that benefits students without hindering their ability and drive to learn. If students and teachers actively make an effort to treat AI as an option rather than a replacement for other tools, I believe AI will actually be used less, as students will learn how to use it correctly and there will be less stigma around its use. Ultimately, I believe that the use of AI in schools is nearly impossible to avoid, but by educating students and teachers on the “dos and don’ts” of artificial intelligence, we can hope to move away from “replicating” learning and instead aid it, with AI in our control as an extra resource. While I believe that AI should not be incorporated fully into any form of education, its rise should be met with caution and consideration of its consequences, as well as its possible benefits if used in a correct and knowledgeable manner.
I agree with their statement that AI should be a helper, not a doer. The overconsumption of AI could lead to the destruction of student-teacher connections if it continues. Unfortunately, AI has already made its way into classrooms through basic functions like citation generators and teachers using ChatGPT to create lesson plans. However, you make a good point that AI can be over-relied on to the point where it “replicates” normal classroom behavior. I also enjoyed how you considered two different perspectives. In a positive tone, you talked about underperforming students who use AI in place of hiring a tutor, something I see as ethical because not everyone can afford a tutor. Finally, you used the article “Everyone’s Using AI To Cheat at School. That’s a Good Thing” in your response to emphasize society’s inability to erase AI. At this point, I think we can only hope AI doesn’t grow out of hand, but if AI has already made it into spaces that were historically AI-free, such as school, then we may be in over our heads. Overall, I think your response is very well made, and you understand the dangers and pros of AI, as well as the articles we read in class.
Boston, Massachusetts, US
Posts: 4
The Ethics of AI in Education
In general, the ethical considerations of using AI in education revolve around academic integrity, and mostly around the idea that students are giving themselves an unfair advantage over others in order to get ahead in school. However, if the use of AI in education is as widespread as many believe and “...up to 90 percent of college students have used ChatGPT to do their homework” (Tyler Cowen’s Everyone’s Using AI To Cheat at School. That’s a Good Thing), then educators seem better off working around this problem rather than trying to eliminate it completely. The structure of our education system has made it so that the courses that challenge students not to think creatively, but rather to find one right answer, are most at risk of being disrupted by these AI language models. Yet even the courses that require creative expression are being met with students who rely on AI to “think” for them, so the problem might not be an issue of the structure of education, but a result of human nature. Where the line should be drawn in education is when students ask AI for ideas that they then pass off as their own, because not only does it violate academic integrity, it hurts the students themselves when they are not actively honing their academic skills or finding value in their education. If this issue gets worse, as it likely will given that these language models keep evolving and upgrading, education, which is so valuable, will be wasted as students cheat themselves out of what they might have gained from it. Schools should definitely prioritize discussion and communication skills, not only because it would be harder to cheat out in the open and in the presence of a teacher, but also because it challenges students to think critically and encourages them to share their own ideas rather than have AI replace them. The structure of most courses is going to have to change, not only to maintain some level of academic integrity, but also to keep students from regressing and hurting themselves in the process. However, as much as AI will continue to become more widespread, not just in education but in our everyday lives, I believe this will not have as big an impact on the post-academic “real world”: while there will obviously be a workforce with a significantly greater number of people who struggle to think for themselves, the values of employers wouldn’t, or at least shouldn’t, change drastically. Despite everything, cheating has been a part of education worldwide not only before AI, but before technology in general, and what happens to these people is that even if they can find success by cheating their way through different levels of education, they lack the skills and knowledge to outperform the workers who took their education seriously. If we are to think positively, those who rely on AI will learn the hard way that they cannot get as far in life as they thought they could without thinking for themselves.
Boston, Massachusetts, US
Posts: 13
Originally posted by
abcd on May 29, 2025 13:57
All large companies in modern capitalistic society exist primarily to make money, and this often comes at the cost of exploiting or manipulating their users and consumers. AI products like ChatGPT are no different. ChatGPT and other AI applications are programmed to maximize user satisfaction, so that the user becomes increasingly reliant on and comfortable with AI. A future in which AI becomes addictive, just as social media has become for many, is therefore not hard for me to imagine. What’s terrifying about this possibility is that mass addiction to AI would disrupt the world like never before. There would be a massive use of energy, contributing to the already pressing climate crisis. Additionally, the more we use AI, the more it has the potential to replace many doctors, therapists, teachers, tutors, and even mechanical jobs. This loss of jobs, which is already underway, would be devastating to the economy.
However, there are some seemingly immediate benefits to the increased availability of AI: those who cannot afford therapy or tutors can use AI to receive similar services for free. While I do not want to minimize this helpful impact, I also think it is important that our society focuses on restructuring the systems of therapy, education, and healthcare to be accessible and effective for all, instead of using AI to patch up an already broken system. This way, people can get the help they deserve, without the detrimental environmental impact and loss of jobs. (Though I admit this is idealistic.)
The same concept applies on a smaller scale to interpersonal relationships. Using AI for comfort and as a friend is a temporary solution, but it does not solve the lack of genuine human connection and accountability that many face. Instead, taking the harder route of working towards genuine relationships, through the challenges, is going to be ultimately more satisfying than using AI as a companion. The article Your Chatbot Won’t Cry If You Die says, “Humans are unpredictable. They might patronize you; they might ignore you; they might manipulate you. The computer won’t, unless you ask it to: As one person who participated in the Eliza trial said at the time, ‘The computer doesn’t burn out, look down on you, or try to have sex with you.’” This is true, but it neglects one key point: the imperfections of humanity are what make relationships so special. We don’t have friends because we expect they will never let us down. Instead, we have friends knowing that sometimes they will let us down, but the good friends will apologize and work to change their behavior. It is that effort to improve that helps make a relationship real.
I agree that AI cannot replace human connection because it lacks the imperfections that people have. I don’t think that AI could replicate having deeply held beliefs, believing in a lie, or appreciating small things, all of which are part of what makes us human. Connecting this to your point about AI replacing jobs, I believe the human element is what makes social jobs, such as therapist or teacher, irreplaceable by AI. A therapist may joke about an uncomfortable topic or talk about a lesson they learned as a child to connect with their patient, but an AI will only use what other therapists and philosophers have thought and written. Many teachers are looked up to for their personality and how they share unrelated topics, but an AI could only replicate their teaching style. These small things create experiences and connections between people, which I feel is one of the shortcomings of AI.