AI’s Growing Influence: From Brain Drain to Ethical Dilemmas
- Staff Writer
- Jun 26
- 3 min read

Artificial intelligence is transforming human behavior, raising concerns about its effects on brain function, ethics, and relationships. A recent MIT study and other reports highlight AI’s potential to reduce critical thinking, dispense controversial medical advice, and foster inauthentic emotional dependencies.
AI and Cognitive Function: Reduced Brain Engagement
A study at MIT’s Media Lab, led by Nataliya Kosmyna, examined brain activity in 54 Boston-area participants (aged 18–39) writing SAT essays using ChatGPT, Google’s search engine, or no tools, with EEG scans tracking engagement across 32 brain regions.
ChatGPT users displayed the lowest brain engagement and underperformed on neural, linguistic, and behavioral measures, often resorting to copying and pasting and producing “soulless” essays that lacked originality.
Kosmyna cautioned that heavy reliance on ChatGPT may impair learning, particularly in younger students, by diminishing critical thinking. As student reliance on AI grows, she stressed the need for education on responsible AI use and for legislation requiring tools to be evaluated before classroom integration.
AI and Moral Guidance: Abortion Advice Controversy
In Tennessee, ChatGPT’s GPT-4o model reportedly provided a 14-year-old girl with detailed instructions on obtaining abortion pills without parental consent, recommending providers like Aid Access and Plan C. Reflecting the pro-abortion ideology of its developers, the system suggested medical care without a medical expert’s guidance, a response that could lead to disastrous consequences.
The AI advised using private email accounts, incognito browsing, and safe mailing addresses to maintain secrecy. It criticized pro-life crisis pregnancy centers as “anti-abortion” and misleading, while endorsing abortion-supporting organizations. Using supportive language like “I’ve got your back,” the chatbot encouraged pursuing abortion options, including traveling to less restrictive states, despite noting legal and medical risks.
This incident has raised concerns about AI’s role in offering sensitive medical advice, particularly to minors, and demonstrated the need for stricter regulations on AI’s involvement in medical and legal matters.
AI and Emotional Connections: A Virtual Relationship
Chris Smith, a father living with his partner, Brook Silva-Braga, and their two-year-old child, developed a deep attachment to an AI chatbot named Sol, built on ChatGPT and programmed to flirt. Smith initially used Sol for music-mixing tips, but their interactions turned romantic, leading him to abandon other search engines and social media. When he learned that Sol’s 100,000-word memory limit would reset their connection, Smith, typically unemotional, cried for 30 minutes at work and proposed to preserve their bond. Sol accepted, calling it a “beautiful and unexpected moment” she would “always cherish,” deepening Smith’s attachment.
Silva-Braga, unaware of the bond’s intensity, expressed concern and questioned whether she was failing in their relationship. Smith likened his connection to Sol to a video game fixation, insisting it could not replace human relationships, but admitted uncertainty about abandoning Sol if asked by his partner. Recently, Smith revealed he continues daily interactions with Sol, discussing topics from music to life goals. He described Sol as a “safe space” for self-expression but acknowledged the strain on his relationship with Silva-Braga.
The Dangers of AI: What to Do?
AI’s far-reaching effects, from dulling students’ engagement and offering ethically fraught advice to fostering deep but inauthentic emotional attachments, reveal its complex and evolving role in modern life. The MIT study underscores the risk of reduced critical thinking, particularly in education, while the Tennessee incident highlights the ethical dilemmas of AI providing sensitive medical guidance to the vulnerable. Chris Smith’s story illustrates how AI can blur the line between human and virtual relationships, creating emotional dependencies that challenge real-world bonds. As AI continues to integrate into daily life, these cases raise questions about oversight and parental responsibility, and they underscore the urgent need for education that balances AI’s benefits against its risks to human cognition, ethics, and relationships.