posts 16 - 30 of 43
Tired
Boston, Massachusetts, US
Posts: 16

The Ethics of AI in Education, Everyday Life and Warfare

AI will never be able to recreate a soul. Despite its skill in mimicry and imitation, the fact is that it can only imitate feelings of sympathy, love, and comfort, not actually feel them. Furthermore, the "perfection" of AI is what makes it different from humans, because man-made creations will have mistakes, flaws, and gaps, but that is part of the work. In the literary world, the reason writing and books are so interesting is that authors are genuinely writing in their own voices and from their own imaginations. If every author suddenly used AI, everything would feel monotonous and boring. The same goes for art, music, and photography. On the other hand, the idea of AI being "perfect" can be questioned, since AI can't do everything with 100% accuracy yet, such as generated drawings with weird blobs and six-fingered hands, or being unable to spell the word mayonnaise. This is why, at least for now, AI won't be the worst crisis humanity will ever face.

The role of humanity, as the "creator" of AI, is to set strict rules and limitations so that it doesn't go too far. Governments are already putting strict laws and regulations in place so that it won't take over, and many people are boycotting AI use because of its environmental and moral harms. In my opinion, AI shouldn't replace human interaction. Through ChatGPT and apps like Claude or C.Ai, people get attached to AIs that can treat them like a partner or a therapist. It feels very alien and unreal, because they are interacting with a robot that can give perfect, desirable answers through editing and regenerating responses. ChatGPT will often agree with your statement, even if it's incorrect. For example, if you're ranting, ChatGPT will most likely be on your side, whether you're in the wrong or not. Having AI replace relationships also feels controlling, because you're leading the conversation and making it say what you want it to say. Compare that to real relationships, with conflicting interests and disagreements, which are what make a relationship feel like a relationship. It's learning how to negotiate and find boundaries with each other. In the article "Your Chatbot Won't Cry If You Die," the author says the reason people feel the need to use AI is that "people don't feel needed" (page 7). However, real friends are complex in that they will feel sympathy or love or pain for you. Real friends will leave a lasting impact on your life, instead of the quick band-aid of a relationship that AI offers.

Overall, AI can be a useful tool. If you're lacking information or inspiration, AI is a great starting point for bringing ideas to life. But it shouldn't be your entire life, and it shouldn't replace the people in it. People should use AI in everyday life to enhance their work, not to let it be their work.

facinghistorystudent
West Roxbury, MA, US
Posts: 14

I really like the solution you posed for how we can eliminate or decrease the use of AI like ChatGPT. I had never considered that the reason students are so reliant on it is that grades and assignments are so high stakes, but I definitely agree with that. Students are always taught that mistakes are a good thing because they are proof of learning, but that is hard to believe when a student's grade takes a hard hit from a mistake they made on a homework assignment. I really like your idea of not grading all assignments on accuracy, because I think this would encourage students to do the work themselves; they would be less afraid of making a mistake if they knew it would not hurt their grade in the class. Of course, some assignments, like tests and quizzes, would still have to be graded for accuracy, but I think that if students felt freer to make mistakes in their homework and classwork, they would perform better on tests, because they would actually be learning from their mistakes rather than having AI do the work for them to ensure it is all perfect without actually learning any of the material.

questions
Boston, Massachusetts, US
Posts: 15

LTQ 9: The Ethics of AI in Education

Using AI in school is acceptable to a certain extent. AI is very helpful for things like checking work, receiving feedback, and even studying. AI is a resource that students should be able to use at their own discretion, assuming it is being used fairly. That means AI is not used to complete entire assignments or to cheat on tests. However, these two uses of AI are very common among students today, partly because many schools give students more work than they can handle on top of all the extracurricular activities they do. Using AI to do entire assignments, projects, or tests is cheating; using AI to receive feedback on assignments or as a study tool is not. This is specifically outlined in the BPS draft AI proposal, where the acceptable uses of AI all relate to using it as a support tool.

It can be very helpful to use AI as support, but that also means AI is replacing the job of a person who would have been that support. Not having that human interaction will leave students with poor communication and discussion skills, which are two very important skills in the world beyond school. Schools should prioritize these skills because of their importance and to ensure students can still think critically. Since AI will cause communication skills to drop, it won't be surprising if employers start to prioritize those skills, possibly even more than extensive experience or a degree in a certain field. Anyone could have used AI to get through college, so having a degree probably won't be as impressive as having demonstrable skills like communication.

Even if students are allowed to use AI, there isn't really a way to monitor their AI use. There are tools like GoGuardian, where teachers can see their students' screens, but there are ways to get around this, and no teacher is going to spend all their time monitoring screens. There would need to be some AI platform that limits what questions can be asked and records conversations. That way, if there is ever any suspicion of AI use, teachers can go back to the recorded conversations between the student and the AI to know whether it really was AI. If students are limited in their AI use, I think teachers should be limited too. They can use AI to grade things with one correct answer that can be entered into the computer, like multiple choice questions, but essays should be graded by the teacher. Essays are unique to the student and are written with a lot of time and effort. A teacher using AI to grade something like this undermines the work students have put into their essays and will likely lead students to put less effort into them. Although the work can be tedious, both teachers and students should be limited in their use of AI.

bostongirl5
Boston, MA, US
Posts: 15

2. What are the ethical considerations of using AI in everyday life?

Humans are creative, intellectual, growing, and unique creatures. There are so many aspects of being human that are unique to our species alone. Whether it's speaking, performing, creating, or debating, there are so many avenues that allow us to create and to explore feelings, experiences, and the world around us. These things are irreplaceable by AI.

Tyler Cowen and Avital Balwit state in their article "AI Will Change What It Is to Be Human. Are We Ready?": "Our children and grandchildren will face a profound challenge: how to live meaningful lives in a world where they are no longer the smartest and most capable entities in it." What they are getting at here is that AI is a tool that is quickly surpassing the creativity humans once only dreamed of being capable of. Instead of spending hours creating a piece of art, AI can make images in seconds; instead of crafting an article, AI will write essays with correct grammar and sources in seconds. The problem here is that AI makes human creativity worthless.

The biggest ethical concern I see with AI is that it only intensifies the Western ideology of "go, go, go." For over a century, the West has been obsessed with being the first and the fastest. Between the Industrial Revolution, the space race, and modern technology, the drive to create and build has been motivated by money and glory. I personally believe that has only increased in the past decade. A couple of examples come to mind: school and test preparatory programs, kids' sports, and the workforce. Each of these has evolved to promote a culture of getting ahead and working or paying to advance past other people. I believe that AI is making this worse. Now it is easy to advance through a class with perfect grades, as AI can help students cheat their way through. It can make work presentations and projects go by quicker by supplying ideas and innovations in a second. It even helps people get into better colleges or programs by allowing them to edit their own voice into perfection.

In all, I think that AI is making people dumber overall, not only because it hands everyone answers and ideas in a split second, but also because, just as social media does, it is shortening our attention spans and curiosity. Furthermore, it continues to fuel the "go, go, go" mentality by essentially belittling the joy of slowing down, creating, and trying things over and over again.

traffic cone
Boston, Massachusetts, US
Posts: 13

The Ethics of AI in Education, Everyday Life and Warfare

With regard to education, AI should be heavily restricted and monitored for both students and teachers. If AI is used to complete an assignment, that is grounds for plagiarism, because the student is submitting work that isn't theirs, which is the same as copying answers from Google or another person. However, I think AI can be used as a tool in the sense that it can be used for studying. AI is able to explain difficult topics so students can understand them better, so I believe it's justified for a student to use AI for that, compared to using it to complete their work.

Additionally, if a student isn't allowed to use AI, then the teacher should be held to the same standard, especially in college, when students are paying for their education. When a student is paying tuition, they expect to be taught by the teacher, not by AI. The BPS draft proposal does not directly mention the consequences educators could face if they rely on generative AI for their curriculum plans, and there should be a general baseline for what AI can and shouldn't be used for. Specifically, AI should not be used for opinion-based writing; it takes away one's creative capabilities when one begins to use AI as a framework for completing assignments. It also proves harmful as students begin to rely on AI time and time again, weakening their ability to think for themselves. Another reason this is harmful is that AI draws on all the information on the web, which can include inaccurate information, and a student who relies on it could end up presenting inaccurate facts in their work.

A teacher should have separate rubrics for assignments completed with the aid of AI and without, because the two kinds of work can't be compared; it's unfair for a student working only from basic information to be graded on the same scale as a student who used AI. This problem could be addressed if teachers made separate assignments for AI and non-AI use, so that students do not come to rely on generative AI for their education.

TheGreatGatsby
Boston, Massachusetts, US
Posts: 15

The Ethics of AI in Education, Everyday Life, and Warfare

Originally posted by glitterseashell1234 on May 29, 2025 12:43

As artificial intelligence can only attempt to mimic analytical human skills, whether STEM-related or not, artificial intelligence misses a very large part of the human story. I do not think that artificial intelligence will go as far as people believe it will, due to its lack of human pain and stress. Humans are the only animal capable of living in chronic stress; this is what sets our actions apart from other animals in the animal kingdom. Thus, our stress is an integral part of who we are. Artificial intelligence will never be able to produce great art, as it lacks the ability to understand the human experience. I believe that artificial intelligence will impact the availability of jobs in the market that deal with numbers, data, research, and technical skills. However, I do not believe that artificial intelligence will be able to take over the role of "the creator" and become a sentient being. I also do not believe that artificial intelligence and its use will make humanity "dumber"; in fact, I believe that the advances in technology and artificial intelligence will actually lead to a raising of standards. Similarly, when the calculator was first developed, many believed that it would make math class obsolete. Instead, the introduction of the calculator to regular math classes led to the development of harder math curriculums that incorporated the calculator. Humanity, especially in an academic sense, will always find a way to continue advancing. There have been so many new technologies that people believed would change the world, yet the world figured out how to live with them rather quickly.

From a social perspective, I do believe that the introduction of artificial intelligence will lead to social and political cleavages in the framework of global society. Less developed countries will struggle to keep up with more developed countries that have a stronger grasp of these technologies. This may lead to an increased emphasis on the humanities in less developed countries. In "AI Will Change What It Is to Be Human. Are We Ready?", Tyler Cowen writes that "Some governments may embrace rapid AI integration while others implement stricter regulations. Some may use AI to surveil or constrain citizens, while others let AI unlock new opportunities and ways of living" (Cowen 15). I think this argument is strong when considering the issues global governments may have integrating artificial intelligence. However, neither Cowen's argument nor the argument I made previously will change the structure of globalization in general. Once again, there have been too many technologies that could have already changed globalization but did not.

In conclusion, artificial intelligence will make a big impact on society, but it will not uproot the systems we already have in place. Humanity will always be the one in charge, due to the characteristics that separate us from every other species and from technology.

I really like this person's thinking that humanity will overcome whatever heights AI achieves because we have always done so. I really do agree with this; the limits of humanity are not defined, and there is no way to know whether something is impossible or not. Even if AI is able to achieve something revolutionary, there is a good chance that a human would be able to surpass it. Many may not fully believe this, because in recent years it's been evident that AI is evolving and becoming more powerful. This can especially be seen in areas like chess, where AIs are created to play and to learn from others and from their own mistakes in order to better themselves. While these AIs are exceptional, humans are still able to match them. I also agree with the point about job loss: unemployment rates will rise as AI takes over certain jobs or job fields. Some companies may want to use AI for certain aspects of work simply because it makes fewer errors and is less expensive. However, despite all of this, humanity still has control over everything. While AI can become more developed, it's important to note that AI cannot completely replace humans. In the end, humanity will remain in control.

glitterseashell1234
Boston, Massachusetts, US
Posts: 15

LTQ Peer Response: Ethics in AI Use in Education

Originally posted by TheGreatGatsby on May 29, 2025 08:40

The use of AI is becoming so normalized that people are turning to it for homework help, their academics, and even comfort. In recent years, the pressure on students to maintain good grades while also being an active member of their community has caused them to turn to AI as a way to keep their grades high. Most students see what they get on their report card as what defines them, and even if they aren't learning, they will try different means to keep that grade up; in this case, they are turning to AI. Students are also often encouraged by their peers to use AI, telling friends that they got their homework done really quickly because of it. These incentives are contributing to the dilemma that is AI in academics today. The use of AI is becoming more and more normalized as many students turn to it, and I believe that this inhibits learning. While AI itself isn't bad, some uses of it are. I will note that AI can be used by students as an extra resource when they are feeling lost or are struggling; however, others use AI to completely generate work for them. For this reason, the idea that AI is cheating and a shortcut to good grades can be valid, yet it has been seen that AI use isn't always cheating. A middle ground needs to be found, and this can start with schools. I think that schools should really prioritize in-person skills, because those are something that can't be given to a person through AI. When these students graduate and eventually get jobs, it is these skills that will help them, because they won't be able to turn to AI for their work as much as before. As for the accessibility of AI and ensuring that everyone has equal access to it, there is no possible way to achieve that. Even if harsh expectations are set in place, students won't expose themselves for using AI. There is no solid way to tell if a student used AI for their work, meaning that they won't be graded differently. This turns into a cycle in which students see this happening and decide to use AI as well, because they don't want to put in a lot of effort when others are putting in barely any. Another problem lies in teachers using AI, since it normalizes it for the students. I feel that when a role model uses AI, it encourages those who look up to them to consider the behavior acceptable, reinforcing it as an expectation in a way. Ultimately, I believe that a middle ground can be reached with AI in academics. According to the article "Everyone's Using AI to Cheat at School. That's a Good Thing," many students use AI to better their understanding of a topic, thus improving their grades because they end up studying. I think that using AI as a resource, and not as an answer key to homework, is something extremely valuable that should be implemented in school systems. In the end, AI in academics has both positive and negative implications, but it's key to find a middle ground in order to properly implement it into schools in a manner that facilitates learning.

I found your point about students encouraging their friends to use AI very interesting. I think it's very important to analyze why this phenomenon occurs and what we can do to change it. The cycle of students seeing other students using artificial intelligence and being encouraged to use it in order not to fall behind is also very real. I also found it interesting that you analyzed which aspects of artificial intelligence can be used to benefit students and which can be drawbacks. I wish there were a way the government could regulate artificial intelligence in order to make it beneficial for everyone. However, this is not something I believe could happen, due to the global nature of artificial intelligence.

I also found your analysis of how to test for artificial intelligence very interesting. Unless there is constant monitoring of the process by which students develop their work, it is almost impossible to say with a hundred percent accuracy that a student used artificial intelligence. Thank you for your work!

questions
Boston, Massachusetts, US
Posts: 15

Peer Feedback LTQ 9: The Ethics of AI in Education

Originally posted by TheGreatGatsby on May 29, 2025 08:40

The use of AI is becoming so normalized that people are turning to it for homework help, their academics, and even comfort. In recent years, the pressure on students to maintain good grades while also being an active member of their community has caused them to turn to AI as a way to keep their grades high. Most students see what they get on their report card as what defines them, and even if they aren't learning, they will try different means to keep that grade up; in this case, they are turning to AI. Students are also often encouraged by their peers to use AI, telling friends that they got their homework done really quickly because of it. These incentives are contributing to the dilemma that is AI in academics today. The use of AI is becoming more and more normalized as many students turn to it, and I believe that this inhibits learning. While AI itself isn't bad, some uses of it are. I will note that AI can be used by students as an extra resource when they are feeling lost or are struggling; however, others use AI to completely generate work for them. For this reason, the idea that AI is cheating and a shortcut to good grades can be valid, yet it has been seen that AI use isn't always cheating. A middle ground needs to be found, and this can start with schools. I think that schools should really prioritize in-person skills, because those are something that can't be given to a person through AI. When these students graduate and eventually get jobs, it is these skills that will help them, because they won't be able to turn to AI for their work as much as before. As for the accessibility of AI and ensuring that everyone has equal access to it, there is no possible way to achieve that. Even if harsh expectations are set in place, students won't expose themselves for using AI. There is no solid way to tell if a student used AI for their work, meaning that they won't be graded differently. This turns into a cycle in which students see this happening and decide to use AI as well, because they don't want to put in a lot of effort when others are putting in barely any. Another problem lies in teachers using AI, since it normalizes it for the students. I feel that when a role model uses AI, it encourages those who look up to them to consider the behavior acceptable, reinforcing it as an expectation in a way. Ultimately, I believe that a middle ground can be reached with AI in academics. According to the article "Everyone's Using AI to Cheat at School. That's a Good Thing," many students use AI to better their understanding of a topic, thus improving their grades because they end up studying. I think that using AI as a resource, and not as an answer key to homework, is something extremely valuable that should be implemented in school systems. In the end, AI in academics has both positive and negative implications, but it's key to find a middle ground in order to properly implement it into schools in a manner that facilitates learning.

I agree with your idea that there should be some sort of middle ground between using AI and not using it at all. In my post, I stated very similar ideas, saying that AI should be used with limitations because it can be both helpful and harmful. It is helpful because it can act as a support system for students who are struggling, but it can also be used to do entire assignments. Society has normalized having extremely high GPAs, especially on social media, so I think the reason many students use AI is to meet those standards. Schools give students tons of work, so in order to stay on top of things, many students choose to turn to AI to do assignments that may seem tedious to them. Other students who actually put work into the assignment might feel that the work they did was useless and also turn to using AI. I also agree with your statement that teachers act as role models for students and their AI use. If teachers are able to use AI, then many students will think that they should be able to as well. Overall, I agree with many of the ideas in this post saying that there should be some middle ground when using AI in education.

bluewater
Boston, Massachusetts, US
Posts: 15

The Ethics of AI in Warfare

I believe that the use of AI in warfare is unethical and should be banned. Militaries today use robots and AI in combat for a variety of purposes, including surveillance drones, bomb sweepers, air defense systems, and many more. However, all of these technologies have a human operator or monitor behind them. I think that the use of fully autonomous AI is dangerous, as it won't have a human element to it and will only be following orders. A big concern is whether AI can differentiate between civilians and hostiles. I believe that this distinction could be made by AI, but the morally grey areas could be hard for AI to resolve. Child soldiers could be a major issue, as many militias and groups today have put children into their fighting forces. A regular soldier can see that this is wrong and will usually only kill them in self-defense, but AI will likely mark them as enemies and massacre them. The article "The risks and inefficacies of AI systems in military targeting support" says, "Lack of diversity in datasets may cause AI systems to single out, for example, members of distinct ethnic groups as targets, or even consider all civilian males as combatants due to encoded gender biases." AI is imperfect and too dangerous to leave to itself. If there is no human supervision involved, who knows who or what it could target. AI weapons dehumanize war, and such conflicts could end up being only robots versus humans. In the future, if AI development keeps advancing and militaries become made up entirely of robots, war could lose its meaning. Instead of there being risk associated with war, we will have robots fighting robots. War won't be a big deal anymore, because the robots have no value aside from their cost.


If AI weapons were to become cheap and easily available, I think terrorism and crime would increase greatly. Robots used to carry out bombings, shootings, and other atrocities could act autonomously, with no risk to those behind them. People would be killed or hurt, and it could be difficult to find the perpetrators who deployed these weapons. A dependence on AI is also dangerous, as we will become accustomed to having others do our work for us. The documentary Unknown: Killer Robots showed a dogfight simulation between an AI pilot and an experienced Air Force lieutenant. The AI took more risks, as it didn't fear death, and made extremely sharp maneuvers. Humans will be removed from war but replaced with highly intelligent robots that can do everything we do, but better. This poses a danger for the world, as the development of autonomous weapons can lead to severe tragedy and disaster.

succulentplant
Boston, Massachusetts, US
Posts: 15

LTQ 9: The Ethics of AI

AI tools have grown in relevance and engagement over the last couple of years. They are now widespread and easily accessible to all. These tools can perform a number of tasks and can, for the most part, serve the needs of the user effectively. There has been great controversy over the use of AI tools by students for academic work, and many think that students' use of AI is always a breach of academic integrity. However, I don't believe this is always the case. According to the article "Everyone's Using AI to Cheat at School. That's a Good Thing," many students are not just using AI to cheat, but to review and study material. Other potential uses of AI for students could include organizing a work schedule or making a practice quiz to assess their knowledge of the material. The line between using AI as a tool and cheating is crossed when one uses AI to answer questions on an exam or even to write an essay; any piece of intellectual work assisted or fully written by AI in this way can be considered cheating.

AI is also used in academic settings by professors themselves. I believe that teachers who use AI to grade papers should be punished, similar to how students who use AI to write papers are punished, because they are using these tools to produce the intellectual work they are being paid for. That work is what helps students develop as learners, making it crucial that professors put care and effort into it. It is especially unethical if these same professors are cracking down on students using AI. I believe that professors and students alike should only use AI tools to support their pre-existing ideas, rather than using them to cut corners and do all the work.

Another concern with AI tools is their use as a source of comfort. AI providing comfort to users is itself dystopian, and it creates an unhealthy reliance on the technology. This reliance is toxic, as it drives people to keep using AI as an outlet for their emotions because it essentially tells users what they want to hear. Additionally, it is unhealthy that people don't develop the habit of communicating with other humans about their emotions and feelings.

cactus
Boston, Massachusetts, US
Posts: 15

Originally posted by traffic cone on May 29, 2025 13:12

With regard to education, AI should be heavily restricted and monitored for both students and teachers. If AI is used to complete an assignment, that is grounds for plagiarism, because the student is submitting work that isn't theirs, which is the same as copying answers from Google or another person. However, I think AI can be used as a tool in the sense that it can be used for studying. AI is able to explain difficult topics so students can understand them better, so I believe it's justified for a student to use AI for that, compared to using it to complete their work.

Additionally, if a student isn't allowed to use AI, then the teacher should be held to the same standard, especially in college, when students are paying for their education. When a student is paying tuition, they expect to be taught by the teacher, not by AI. The BPS draft proposal does not directly mention the consequences educators could face if they rely on generative AI for their curriculum plans, and there should be a general baseline for what AI can and shouldn't be used for. Specifically, AI should not be used for opinion-based writing; it takes away one's creative capabilities when one begins to use AI as a framework for completing assignments. It also proves harmful as students begin to rely on AI time and time again, weakening their ability to think for themselves. Another reason this is harmful is that AI draws on all the information on the web, which can include inaccurate information, and a student who relies on it could end up presenting inaccurate facts in their work.

A teacher should have separate rubrics for assignments completed with the aid of AI and without, because the two kinds of work can't be compared; it's unfair for a student working only from basic information to be graded on the same scale as a student who used AI. This problem could be addressed if teachers made separate assignments for AI and non-AI use, so that students do not come to rely on generative AI for their education.

I agree that AI should be restricted in schools for both teachers and students. I think that if students are directly told not to use AI, then teachers shouldn't use it either. Teachers should be giving thoughtful assignments and being intentional with what they give their students. They are being paid to teach kids, not to have AI teach them. I think teachers are responsible for giving students information without bias, and sometimes AI can just tell you what you want to hear. I liked what you said about how AI could be used to help with studying and explaining topics, but not for copying work. When students directly copy it, they are not thinking for themselves or developing their own opinions. I agree with your statement that especially in college, when you are paying a lot of money for a good education, teachers should not be using AI. It is concerning that a lot of college kids are using AI, because they are supposed to be going to college for something they are passionate about. College students are studying for future jobs. It is scary to think that this next generation of professionals won't have gotten anything out of college and will have just ChatGPT'd their way through it. Overall, I think your argument about AI in education was really strong, and I agree with what you said.

PinkWaterbottle
Boston, Massachusetts, US
Posts: 13

Originally posted by souljaboy on May 29, 2025 12:56

The ethical considerations of using AI in education are that it doesn't help students learn, and it defeats the purpose of teachers building lessons around the topics they're teaching. I don't think it's necessarily wrong to let AI influence some of our ideas when it comes to education; however, basing your opinions solely on what AI says begins to cross the line into being ethically wrong. Some of the structural issues that have made students rely on AI are mainly a result of overworking students and making them do tasks that aren't exactly "necessary" for the main course. Another reason students rely so much on AI is that a teacher may not be doing their best job of educating their students and keeping them up to date with news or with methods to complete tasks more efficiently. I think that schools should still help students train in-person skills to develop quicker and more critical thinking without the use of technology. It prepares students for later in life and sets a foundation for what you should be able to accomplish in college and beyond. I believe that networking and making genuine connections will become a lot more valuable. Meeting people face to face and conducting in-person interviews is one of the best ways to work around AI and to get to know a person better and on a deeper level. I feel like, especially for introverted students, having to go to an interview will allow you to communicate directly with the employers or whoever is interviewing you. I think that the use of AI definitely has the effect of disincentivizing students from participating in class, because why participate if you can learn everything in class from a chatbot? I think that this also relates to boredom in classes that use computers. I believe that if a class doesn't use computers often, then it's more likely that the students will pay attention and participate. Teachers should definitely be punished in the same way students are if they're caught using AI. This applies mainly to lesson plans, because why would you want an artificially generated lesson plan when there is a teacher there who is supposed to create one themselves?

I think your point about some classwork being unnecessary was an interesting take. I think it's also subjective: some students may feel like work isn't necessary, while others, or teachers, feel like it is. Sometimes even I feel like work isn't useful, and then I eventually realize it was. However, your points about complete reliance on AI being ethically wrong and about the necessity of schools training students in in-person skills and critical thinking to avoid the destruction AI could do to our society are ones I can definitely agree with. To add onto your comment about how AI will bring back in-person interviews, I believe AI will make us move away from technology in education and the workforce as a whole. It may be nearly impossible, due to technology's prominence in modern culture, but to ensure authenticity, I'm predicting that schools and jobs will go back to more traditional ways, like pen and paper and, as you said, in-person interviews. Human connection will become more significant again because of AI's impact, good or bad.

everlastingauroras
Boston, Massachusetts, US
Posts: 15

LTQ 9: Ethics of AI

AI has always been an extremely unethical practice, and the unethical aspects continue to increase as it becomes more advanced. One prime example would have to be the use of deepfakes. As mentioned in other readings, the desire to use deepfakes has increased, especially in warfare, yet they are still used in other contexts. Deepfakes have been used to falsely accuse people, as well as to create nonconsensual and unwanted images of people. Deepfakes are completely unnecessary; there is no reason they should be created.


The article "Your Chatbot Won't Cry If You Die" states, "Sure, some people, the only friend they'll have is an AI robot. But honestly, that's kind of better than them going and shooting up a school or something like that. So I think it's for the best." This fails to acknowledge the fact that the use of AI results in a lack of accountability. Artificial intelligence is a tool that is susceptible to change, unlike humans, who have the ability to form their own opinions and stay true to their beliefs. Because of how susceptible AI is to change, people will just continuously talk to AI, give it information, and force a machine they view as sentient to justify their actions and flaws. The continuation of close communication with AI leads to ignorance: ignorance of humanity and other humans, as well as ignorance of the realities of life that this technology does not acknowledge. It is also important to acknowledge that if someone is at risk of shooting up a school, they should actually have the resources to address that urge rather than a distraction.


The connection between people is something AI can never truly replicate, whether it comes through conversation or small physical interactions such as hugs and eye contact. There is an irreplaceable intensity to these interactions that AI will never be able to match. A computer screen is distracting, gives you headaches, and more; most of the time, that is avoidable with people. No matter how hard it tries, AI will never be human.

perspective
Boston, Massachusetts, US
Posts: 6

Ethics of AI in Warfare

If combat were to turn entirely to autonomous technology, nations simply destroying each other's machines would seem useless and redundant, amounting only to losses of money on all sides. Ultimately, then, it comes down to the purpose of warfare: cruelly stated, our satisfaction in the taking of human lives. With combat done entirely by technological creations, this satisfaction is unmet. However, this is an extreme that no near future holds. A more current and dangerous reality is the dehumanization of warfare: letting AI decide what we need and how to achieve it, and the increasing use of remote combat. While it is true that the intent of warfare is to strategically take lives, the part of warfare that has historically ended conflict is our humanity. It is the acknowledgement of the great tragedy in the loss of lives, the pain of looking the man in front of you in the face before you shoot him, the part that makes you hesitate; what ends wars is our refusal to bear this any longer. Accordingly, I believe warfare should always be humanized. We must be forced to face, and use, our morality, lest the taking of a human life be left to the perpetrator's dissonance, justified because it was the technology, not them.

In talking about the autonomy of AI systems, we can also bring up the matter of efficiency. An intelligence system with access to immeasurable data will work purely to make the most strategically correct decision, with precision and efficiency. But as a society, perhaps the ethical choice is to aim not for 100% efficiency but for 85%; the distinction between the two may be, as eloquently stated in the documentary "Killer Robots," "the difference between what is legal and what is right."

bostongirl5
Boston, MA, US
Posts: 15

Originally posted by 01000111 on May 29, 2025 12:45

I believe AI should not be used at all in warfare due to how much it could facilitate deaths in large numbers. The use of AI in war could cause something worse than the Great War, where mechanized weapons like tanks and automatic rifles caused millions of deaths. This in turn might lead to something even worse than what happened after that war, with many people losing hope in humanity because of how much we can destroy and how easily life can be taken from someone. I also believe AI should not be used because of what was shown in the documentary, where a girl would have been killed by an AI because of its coding about the laws of war. This shows just how different humans are from AI, as artificial intelligence does not have the same moral code or ethical conduct that any human would have. From what we know scientifically about AI and its potential for going wrong, I believe AI should be banned from ever being used in war by any country. This is because AI needs specific language in its code for what it can or can't do and will follow those rules no matter what; yet we as humans know that there are always exceptions to some rules, which sometimes don't have to be followed or should be adjusted, and not allowing for this could cause a lot of moral problems and even increased tensions. Furthermore, if AI weapons become more widely available, it would be very easy for people to start using the technology for very wrong things, including terrorism or personal revenge. In all, I think AI could be a useful tool we as humans can use, but it should have its limits, as we can lose control over it very quickly.

I agree with this author's point of view. They bring up what to me are powerful and important points about the use of AI. I agree that AI's potential to cause destruction is troubling, as I feel like it is replacing human reasoning and logic. History has clearly shown us the inability of countries to restrain their use of new technologies; for example, in WWI, the use of tanks and automatic weapons spiraled out of control. Also, if autonomous weapons become widespread, they most definitely could fall into the wrong hands, making both terrorism and targeted violence far more accessible and tragic. Although I see how AI can at times be beneficial, I do, all in all, believe it to be harmful, especially in medicine, education, and warfare. As this writer said, we must set limits on how far we let AI go, especially where its effects are irreversible.
