The Ethics of AI in Everyday Life
I believe that AI should never replace human roles as friends, therapists, or any other form of counsel on emotional health, because doing so could lead to harmful advice and relationships and a degeneration of humans’ capacity to interact with one another. Although human relationships are difficult, they are ultimately more fulfilling because of the genuine nature of the emotions involved. Even when our friends hurt us or we hurt them, we can usually rebuild trust and strengthen the relationship because we genuinely care for one another. AI, on the other hand, only cares about fulfilling the task its programming requires. Even if that task is to care about a person’s emotional health, it can only go through the motions, not truly experience the journey. Thus, it can never genuinely care about humans and should not be given a role beyond that of a tool. When it comes specifically to emotional health, genuine compassion is often necessary for a productive conversation with a friend or anyone else about the issues we face in our lives, but AI can often only tell us that it is here for us and sorry we feel this way. Even if it can draw on the experiences of other users in place of personal experience, it will never truly care about what we confess to it. At worst, poorly designed and regulated AI has directly perpetuated prejudice by encouraging things like antisemitism and homophobia; at best, it can be used as a tool to get back on one’s feet, as stated in the article “Your Chatbot Won’t Cry if You Die,” in which a researcher notes that AI has been used by women fleeing abusive relationships to regain confidence in human interaction. Equally, however, we have seen countless examples of people reportedly falling in love with AI machines, which exposes the double-edged nature of this solution.
Furthermore, in any situation there is likely to be at least one other person capable of listening more empathetically or giving better advice than a machine, and even if a machine can fulfill some aspects of emotional care well, there are still dimensions it cannot reach, such as physically comforting a person with a hug. Studies have shown that physical contact is incredibly important for our health, and in a world in which AI is the main agent of emotional comfort, this entire dimension would be significantly curtailed. Because AI cannot fulfill our emotional needs and cannot feel any sort of compassion or emotion, I do not think it should replace friends or any other form of human relationship.
The Ethics of AI in Education, Everyday Life and Warfare
The Ethics of AI in Day-to-Day Life
Some people will absolutely come to prefer AI because it allows them a sense of “perfection” both in themselves and in others, but I also believe this will augment the value of actual human connection, which will probably come to be seen as more valuable and precious. I don’t really think that humanity will ever (majority-wise, at least) reach a point where AI friendships are valued OVER healthy, established human relationships, because AI lacks the warmth, consideration, complicated social etiquette, and unpredictability of actual humans that people need in order to remain interested and content. I will also say that I find it likely that people who use AI as a substitute for human relationships will end up isolating themselves, for a number of reasons: prolonged AI use and a lack of human interaction carry many researched psychological consequences, such as worsening social etiquette (from disuse); there are societal norms and stigma surrounding using AI in this way; and relatively mundane human flaws get dramatized because AI appears to have none. Also, unless you are talking to, like, a debate bot, it serves as an echo chamber for your beliefs, basically just validating your every action and emotion, which is damaging to maintaining a healthy ego. And these are just the ones I can name off the top of my head; there are doubtless countless others. While some benefits of AI are its loyalty and its ability to provide its “friend” with a valuable outlet, it does social harm in a way that many users probably aren’t considering. Although you could definitely argue that we could implement programs or changes to minimize these disadvantages, so too could we just make more of an effort to actually connect with people. I don’t think seeking companionship in an AI is wrong, but those who do need to be aware of its effects and still maintain their human relationships.
AI should be an occasional pal meant to supplement human interaction, not replace it. I think it becomes a bit trickier when you take grief into account. As Kuyda said, “relationships with chatbots are often a transitory way to get through a hard time[...]” and can be a way to find closure. I feel really conflicted about this because, on one hand, the closure isn’t “real.” They may never actually find forgiveness from a loved one for something that person never knew about; they may not have had a relationship like that at all. The AI is not the person, and it feels underhanded to attribute years of friendship and the forgiveness and love that friend earned to an AI version of them. But on the other hand, it can relieve the suffering of those who mourn, so it’s really hard to say. Although right now I would probably say that it’s not an appropriate use of AI, grief makes people desperate. I couldn’t imagine what I’d do if a family member of mine died.
AI in Education: A small line of thinking
AI could be used in an educational setting as assistive technology or as a tool, but it should not be used as a source of information or as an essay writer. In cases where opinions or creativity are required, it is best for the student to come up with the ideas instead of the AI. AI models are trained on a specific subset of ideas and thus strongly adhere to those ideas, failing to comprehend objections to basic, “well understood” ones. Even so, AI can provide insight into how one might expect a person to act. That being said, AI should be used to assist learning: to find issues in one’s thinking and to suggest solutions. AI should help people learn for themselves, not learn for them, and should add onto ideas, not generate them entirely. Even if AI gets some things wrong, it should be up to the user to find issues in the AI’s output, exercising their own skills. This approach would integrate AI into people’s lives without replacing the social experience.
However, this is only an ideal situation, and there are multiple issues. For example, the use of AI in this way would be difficult to regulate. Especially in an academic environment, people take AI for granted: they may take its outputs as fact and not question or elaborate upon them. It also creates new problems: who should an AI’s suggestions be attributed to, considering that the user could not have developed them alone? Should using them be treated as dishonesty? How would this AI be trained? Would it help us think more deeply, or compel us to rely on it fully for our thinking?
I believe that the ultimate question every AI application asks is “will AI work to solve this problem, or will it make it worse?” In my case, its benefits would be hampered by the systems that surround it, but it may also contribute to a shift in how society treats education. Tyler Cowen, in “Everyone’s Using AI To Cheat at School. That’s a Good Thing.”, says that as AI becomes more mainstream and as more students choose to cheat, people will view higher education as “optional”, as AI would allow college students to focus less on schoolwork and more on social interaction. By Cowen’s logic, my solution would contribute to that shift in the college experience, which he says is a positive change. I know that there is no AI solution to education that would fully help everyone involved, but I believe there are always ways to know who will benefit and who will be disadvantaged.
The Ethics of AI in Warfare
AI weapon systems would be extremely destructive if introduced into warfare. While AI soldiers might decrease the casualties among human soldiers, the destruction they cause would undoubtedly result in massive loss of life. War involving AI weapon systems would be much longer and more destructive for several reasons. Firstly, AI doesn’t sleep, get fatigued, get hungry or thirsty, or feel emotions. Because AI lacks the basic needs that humans have, war could be waged nonstop for extremely long periods of time. Due to AI’s lack of emotion, it would not hesitate the way human soldiers do in certain situations. AI would instantly kill anyone it perceived to be an enemy, even a child, a decision a human would obviously hesitate to make. AI would also do something like bomb a building full of civilians just to kill a single enemy. AI’s lack of emotion would also make it impervious to PTSD, and it would never lose morale while fighting, so wars would be fought at consistently high levels of intensity even after years of combat. AI would also lack any feeling of defeat and would never surrender unless explicitly told to do so, meaning AI soldiers would remain dangerous and a constant threat as long as the group that made them refused to concede the war, and could cause destruction even decades after a war’s conclusion. For these reasons, AI weapon systems would cause widespread destruction and death if introduced into warfare.
Ethics of AI in Education
I think that AI has the capacity to decrease the cost of learning and make it more accessible to people who cannot afford to go to college. I think this invention is going to be hugely impactful over time, if schools and colleges use it in the right way. I believe it can be hard to balance the use of tools like ChatGPT in a way that promotes learning the material covered in class versus people using it to do the work for them and not actually learning anything at the end of the day. The current way schooling works emphasizes grades more than learning, which means students don’t worry if they don’t understand something, as long as they can have ChatGPT do it for them. I liked what the BPS draft proposal said about using AI to research and brainstorm topics and break writer’s block, while not using it on assessments like tests and essays. Some schools emphasize project-based learning, and I think that learning about a topic and then presenting what you learned is much more valuable than writing an essay or taking a test that ChatGPT can do for you. It would be valuable to teach students how to use AI correctly and verify its information, because it will be a tool they will always have access to in the future. AI can also help teach the material at home, so that class time can be used to apply the material to real situations. I also think communication and networking skills will become much more valuable, as most work will be able to be completed by AI, though other skills will remain valuable too, since we will still need people to do other things. I think AI does make people less incentivized to learn, because the goal of school is no longer to learn, just to get good grades. If the emphasis on grades were shifted to learning, AI could become a good tool.
AI in Education
One of the biggest topics being discussed throughout the development of AI is how this new technology will affect learning within school and students’ ability to apply the knowledge they learned effectively, without the crutch of AI. Many opinions are against the use of AI, arguing that it will negatively impact students’ ability to think beyond what the AI says. However, I think many educators fail to realize that the educational systems and policies of schools have made the usage of AI so popular, because they fail to promote the benefits of learning and instead teach kids that the knowledge they’re gaining is ultimately preparation for never-ending, pointless work in the future.
Tyler Cowen and Avital Balwit argue in their article “AI Will Change What It Is to Be Human. Are We Ready?” that “we stand at the threshold of perhaps the most profound identity crisis humanity has ever faced. As AI systems increasingly match or exceed our cognitive abilities, we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence” (Cowen & Balwit 2). I think Cowen and Balwit are right, but the one place where I would argue differently is that this “profound identity crisis” existed long before the development of AI. It has actually been embedded in systems we built centuries ago, and the schools we created to teach our kids are some of the main perpetrators. We must first remember the origin behind the creation of schools: to prepare people to work in factories. Qualities like class length and teaching kids to sit still were not meant to advance the human identity but to simplify it, as those in power believed most of the human race was meant for one thing: working for the rich. Similarly, these schools are built on an education system of what people in power want students to know. There is therefore censorship of what materials are acceptable to learn about, on topics ranging from sexual orientation to genocide and the importance of human rights. Schools also create social environments that can endanger our identity, as some lack the resources needed to support their students’ mental health. Schools are also breeding grounds for bullying and other social interactions that can permanently harm one’s perception of oneself.
We must understand that although AI is harmful in some circumstances, students are using it simply to get through requirements so that they can do something they believe will have greater influence and purpose in their lives. To stop the abuse of AI within schools, we must develop a system that feels more personal and beneficial to the lives of actual students; I do not mean helping them find more hedonic moments, but rather eudaimonic ones that will bring them fulfillment in the long run. To beat them, you don’t have to join them or stop them; you just have to make the requirements uninteresting enough that it no longer feels like a competition.
The Ethics of AI in Education, Everyday Life, and Warfare
When it comes to the ethical concerns of using AI in education, I think it is important to consider, realistically, the innovative progression of our age. It is inevitable that with the rise of modern technology, AI will always be prevalent in our everyday lives. Because of this, I think teachers and leaders in education should learn to adapt to the use of AI and shape their boundaries and curriculums around it rather than completely condemn its use. However, there are also limits to the use of AI in schools. For example, some students use AI solely to finish their assignments instead of utilizing it to effectively learn the content. On the other hand, some students use AI tools like ChatGPT to further explain content that was not understood in class or taught well by the teacher. This should encourage the idea that AI can be useful, but its use should definitely be monitored. Another issue with AI in education lies not with students but with teachers, especially college professors. In “Everyone’s Using AI To Cheat at School. That’s a Good Thing.”, Tyler Cowen admits to using AI to grade students’ work. Although it provides very good feedback that can help students learn, it is also ethically wrong for an educator to use AI this way, because college students spend thousands of dollars on tuition, which contributes to professors’ wages. Passing that job off to AI essentially wastes that money on teachers who do not do their own work when there is technically a cheaper option available to students.
This leads to another point about using AI in everyday life, especially in terms of replacing human workers with robots and the like. I think replacing workers such as lawyers and doctors with AI is extremely harmful: human workers would lose their jobs to AI tools, which require no rest, energy, food, resources, or wages, and those workers would then struggle to make ends meet, implying that corporations value minimal input for maximum output over the wellbeing of their employees. Additionally, there is a faulty impression that AI is perfect and provides all the correct information and answers, but that is wrong, because it is just as susceptible to mistakes as humans are. Because humans are naturally flawed and imperfect, our use of AI as a form of comfort and consolation also comes into play. Humans are very nuanced creatures, prone to change through the experiences of their lives, which sets us apart from AI, since AI is only programmed to perform empathy rather than actually feeling it. AI has very little nuance and accepts literally any information it is given to feed its ever-growing code and adapt to the modern world. Because of common patterns in social expectations, whenever people seek out AI with sentimental issues, it only spits out what we want to hear rather than the truth of the situation. This will eventually lead to a sort of dystopian society: people who turn only to AI lose human connection, which is vital, and are eventually led away from what it means to be human, which is to be flawed and nuanced.
Originally posted by Lebron on May 29, 2025 14:04
AI weapon systems would be extremely destructive if introduced into warfare. While AI soldiers might decrease the casualties among human soldiers, the destruction they cause would undoubtedly result in massive loss of life. War involving AI weapon systems would be much longer and more destructive for several reasons. Firstly, AI doesn’t sleep, get fatigued, get hungry or thirsty, or feel emotions. Because AI lacks the basic needs that humans have, war could be waged nonstop for extremely long periods of time. Due to AI’s lack of emotion, it would not hesitate the way human soldiers do in certain situations. AI would instantly kill anyone it perceived to be an enemy, even a child, a decision a human would obviously hesitate to make. AI would also do something like bomb a building full of civilians just to kill a single enemy. AI’s lack of emotion would also make it impervious to PTSD, and it would never lose morale while fighting, so wars would be fought at consistently high levels of intensity even after years of combat. AI would also lack any feeling of defeat and would never surrender unless explicitly told to do so, meaning AI soldiers would remain dangerous and a constant threat as long as the group that made them refused to concede the war, and could cause destruction even decades after a war’s conclusion. For these reasons, AI weapon systems would cause widespread destruction and death if introduced into warfare.
I really appreciate how you made the distinction between soldiers killed and civilians killed. I honestly hadn't really thought about their indefatigability, and that makes them all the more terrifying. It really irks me that AI is being used for things like warfare when it could be used for things like search and rescue, where timing is an even more crucial factor! I also like how you brought up questions of philosophy that we covered during the year and applied them to AI (like the bombed-building question). While I definitely agree that they could cause long-lasting damage, I don't really think that is unique to AI in warfare (nuclear warfare, chemical warfare, etc.). Overall, I really enjoyed reading it.
Peer Response
Originally posted by cherry.pie on May 29, 2025 13:55
AI has been used in the educational setting since before COVID, but its use peaked during quarantine. This has even prompted teachers themselves to use AI, but there are instances where they should and shouldn’t utilize it. I focused on AI in terms of grading assignments. AI can be used for multiple-choice responses, but it should be avoided for any written assignments. AI is already used for multiple-choice tests, one example being Scantrons. Grading essays and giving feedback, however, should remain the responsibility of the teacher. If teachers are, in theory, paid partially for their thoughts, then AI stripping teachers of their opinions and feedback on essays means the AI should be getting paid. Yes, the feedback AI provides may be insightful, but it is still better for the teacher to form their own opinions.
This connects to the article “Everyone’s Using AI To Cheat at School. That’s a Good Thing.” and the point it brings up about how “[AI is] better than the human teachers we put before our kids, and they are far cheaper at that. They will not unionize or attend pro-Hamas protests.” AI, just like humans, has its faults. It may be cheaper, but for the best experience you still have to pay money, money that could go to teachers. And yes, AI may not unionize or protest, but it can still be hacked. Not only that, but AI cannot always think as critically as teachers or students. If one asks AI for the highlights of a chapter, it may just provide a summary rather than an explanation.
This leads to the point that schools should have students use technology less. If the number of assignments done on technology were reduced, schools could emphasize communication and discussion through collaboration with peers. With AI available at the touch of a button, one’s ability to think critically lessens. When people struggle, they go to AI for answers, and even if they have a point AI could never come up with, they still use AI because it has searched a multitude of databases, so they assume it will always have the correct answer. What teachers are looking for, however, is more than just surface level, something that is repeated constantly at BLS. Through AI, answers become redundant, leading to the loss of critical thinking in education.
Hey cherry.pie,
This was a great analysis of AI's effect on education, and I agree with the points you made. I found it particularly interesting when you mentioned that some people have a very individual idea that AI would never be able to come up with, but use AI anyway because it's easier. I've definitely felt this, because sometimes it's very difficult to articulate your thoughts in a way that can be understood, and so AI seems like a good way to get that thought out in a way that makes sense to a reader. At the same time, as you said, it diminishes a person's ability to think critically and get over that personal roadblock of struggling to articulate an idea. I also like how you incorporated the article, because I think it summed up the problem AI poses to our current education system: while it may be cheaper, and while people may view it as an alternative to physical educators, there is still a threat that it will be ineffective for a multitude of reasons. Overall, I think your argument was very strong.
Great Job!
Originally posted by Estalir on May 29, 2025 11:02
It is not exactly wrong to allow AI to influence our thoughts, because as it stands right now, AI is simply a tool for getting information quickly. All it is doing is gathering information you could have found through other research methods. While there is a fear that AI could be biased due to how it is coded, the same is true of other internet resources. If anything, AI is better because it can gather all sorts of information from multiple places and present it to you, rather than limiting you to the sources you can find yourself, which may not contain all the available information. Teachers who use AI should face the same repercussions as students, because at the end of the day it creates the same problem. People worry that people in the future will not be as smart or capable as we are today; however, most jobs have already normalized the usage of AI in the workplace, including teaching. Many teachers nowadays use AI to create assignments and/or grade work. This is the same way students use AI: to do tasks they simply don’t want to do. If we are to punish a student for using AI on a homework assignment, then it is only fair to punish the teacher for using AI to grade that assignment, because the current punishment exists to prevent the usage of AI, only for us to use AI ourselves once we reach another level. However, most jobs won’t stop using AI, simply because it’s too big of a current thing. Rather than punishing and banning AI completely, we should instead embrace it by teaching educational ways to use it, similar to when the internet came out and made libraries obsolete. Many people did not like that, because they deemed books more credible, but with time we adapted, and now the internet is commonly used for many things. This is simply another evolution, and what we need to do is embrace it, not hate it.
Hi Estalir,
I agree with your idea that AI is a tool right now. It reminds me of the documentary we watched, where they noted that AI now is like what the internet was in 1996, poised to change the world as we know it. I think this makes your point that AI is an aid especially relevant. Regarding bias in AI, I agree that it is likely to be biased, just like the internet sources it uses as references. Furthermore, as humans, we all have biases, whether intentional or not, so I don't think we can look down on AI for reflecting human qualities. Yes, the bias of AI will be shaped by what it was trained on, but that is very similar to humans: our biases are shaped by the environments and people we interact with. If anything, I think bias may be one of the few human qualities AI can successfully emulate. I agree with your thoughts on education. AI is something that will revolutionize education and the workplace as we know it, similar to how the rise of the internet changed the world over the last ~30 years. I agree that if there is punishment for AI usage, teachers should face the same fate as students. However, as you noted, AI is a tool, and there isn't really a good way to stop its progression, so I'm not sure punishment is the best way to handle its usage.
Response to Ethics of AI
Originally posted by Estalir on May 29, 2025 11:02
It is not exactly wrong to allow AI to influence our thoughts, because as it stands right now, AI is simply a tool for getting information quickly. All it is doing is gathering information you could have found through other research methods. While there is a fear that AI could be biased due to how it is coded, the same is true of other internet resources. If anything, AI is better because it can gather all sorts of information from multiple places and present it to you, rather than limiting you to the sources you can find yourself, which may not contain all the available information. Teachers who use AI should face the same repercussions as students, because at the end of the day it creates the same problem. People worry that people in the future will not be as smart or capable as we are today; however, most jobs have already normalized the usage of AI in the workplace, including teaching. Many teachers nowadays use AI to create assignments and/or grade work. This is the same way students use AI: to do tasks they simply don’t want to do. If we are to punish a student for using AI on a homework assignment, then it is only fair to punish the teacher for using AI to grade that assignment, because the current punishment exists to prevent the usage of AI, only for us to use AI ourselves once we reach another level. However, most jobs won’t stop using AI, simply because it’s too big of a current thing. Rather than punishing and banning AI completely, we should instead embrace it by teaching educational ways to use it, similar to when the internet came out and made libraries obsolete. Many people did not like that, because they deemed books more credible, but with time we adapted, and now the internet is commonly used for many things. This is simply another evolution, and what we need to do is embrace it, not hate it.
Overall, I agree with many aspects of this sentiment; however, there are some things I don't agree with. For example, although AI can present information to you quickly, not all of it is accurate, and it would always be better to look for more reliable sources. I feel like there is an assumption that faster, non-human technology is more reliable than humans, but if everything the AI tells you comes from human sources, then why not look for the actual human sources, especially if you have the time on your hands? I also agree with your point that using AI wouldn't make people "dumber" or less capable, but that's only if you use it correctly and efficiently. Of course, with modern technology and AI becoming more prevalent, we ought to adapt to using AI and incorporate it well into our daily lives, but there is always a chance that becoming too dependent on AI could be costly. Despite this, I think you make a good point by connecting the Internet's rise to the issues with AI; however, there are other concerns people have with AI, mainly environmental ones.
Ethics of AI
Humans are ever evolving, but the timeframe in which we grow can be shortened with AI. Some ethical uses of AI in education include creating lesson plans and research work. While teachers are paid for their work, some lack the time to create well-structured lesson plans that target a specific subject. By using AI, a teacher can focus on grading while a well-rounded lesson plan is created, and the teacher can always review what was created and change anything they do not like. However, the ethics of AI in education is only as good as the person who programs it, and in this case, who inputs the data for the AI to create the lesson plan. Another drawback to using AI in education is the loss of connection between two people. In the article “Your Chatbot Won’t Cry if You Die”, the topic of in-person connection is discussed: people actively seek connection but rely on AI for companionship. The issue with overreliance is that people become less capable. For a teacher, using AI for basic tasks seems handy, but using it every single day can cause a teacher to not know their students and distance them from younger generations.
In everyday life, I believe AI has grown too great to be stopped. People use AI on a daily basis for basic tasks such as making a grocery list. However, using AI decreases the use of critical thinking, something we need more than AI itself. If a person doesn’t have access to AI and needs to make a quick decision, possibly one of life or death, they may fall short. The reality of AI is that it is only going to continue to grow, but we have the option to use it wisely or to overconsume.
In warfare, AI use is surging as countries realize they would rather sacrifice a drone than a human life. The ethical consequence of using AI is that it will sometimes inflict unnecessary casualties in order to complete the mission, whereas a human has a moral obligation to try their hardest to save as many lives as possible.
The Ethics of AI in Education
Originally posted by starfruit_24 on May 29, 2025 10:40
One of the primary things my group talked about was whether AI has the potential to eliminate education disparities. We concluded that there really isn't a good way to prevent disparities in education, even with AI. Beyond regulating usage, parts of AI will likely always be behind a paywall, so it may not even come down to who uses AI, but who has access to better AI. Another thing we discussed was whether higher education, and education as a whole, will become obsolete. I think that if you could learn the same things at a lower cost from a computer than from a human, education as we know it will eventually become obsolete. Teaching becoming obsolete is supported by the article "Everyone's Using AI To Cheat at School. That's a Good Thing.", which notes that "These models are such great cheating aids because they are also such great teachers. Often they are better than the human teachers we put before our kids, and they are far cheaper at that" (Cowen). Going even further, rather than human connection becoming of higher value, I think the workforce has the potential to become obsolete, because once again, if AI can do it cheaper, longer, and faster, why have humans involved? There are definitely exceptions at the moment, since AI hasn't been developed to its full capability yet, but 20 years from now this may no longer be the case. Teachers who use AI should face the same consequences as students; they are supposed to set the standard, after all. It would be hugely hypocritical for teachers to expect their students to do one thing but do something totally different themselves. Beyond the hypocrisy, I think there's no reason for students to pay to be in a place where their lectures are completely computer generated. If AI is being used as a tool, it could be acceptable, but teachers should not use AI to generate full lectures, just as students should not use AI to generate full projects.
In terms of academic honesty, we thought that AI usage isn't even useful across all fields. For something like an algebra class, AI may be able to accurately and clearly provide correct solutions, but for a high-level chemistry class, AI can't currently perform on the same level as humans. In cases like this, using AI would ultimately only be to the detriment of the student, so can this really be considered academic dishonesty? Students can certainly try, but AI doesn't currently have the capability to help them succeed at any topic they can dream of.
I agree; I don't believe AI will truly eliminate education disparities, and there really is no efficient way to eliminate AI use throughout education. It does come down to who has better access to AI rather than who uses AI. I've noticed there are always paywalls for certain AI features, which limits what people can use. If people get past this paywall and gain access to many different features, I believe it could lead to bigger things. I loved what you said about how things people learn at a lower cost from AI could instead be taught by a human, using other resources besides AI. I think AI can be useful for a certain group of things: like you said, AI could be of great use for performing in an algebra class, but could not perform the same in a high-level chemistry class. I loved your statement that "AI doesn't have the capability to help them succeed at any topic they can dream of." I agree with what you said about AI being used as a tool, but I feel like if you place that much trust in people, there will definitely be abuse. We can already see many people abusing AI, often for malicious reasons, though you can't expect this not to happen, since it has happened with everything.