From coding to recommending movies, creating recipes and offering empathy, AI's capabilities seem endless. Yet for all those capabilities, AI has real limitations and flaws, whether generating false information, projecting overconfidence or reproducing biases. AI has contributed to misinformation, cheating and an overreliance on artificial thinking. With Gen Z as the first generation to grow up under the influence of chatbots such as ChatGPT, it's become increasingly clear that AI will not be used responsibly without change. For this reason, in a 7-5 vote, the WSS Editorial Board agrees that the government holds the most responsibility for preventing AI misuse.
AI is problematic on a personal level, especially when it comes to mental health. According to a 2025 Stanford University study, AI chatbots acting as therapists responded inappropriately to prompts roughly 20% of the time, compared to 7% of the time for human therapists. These inappropriate responses are incredibly dangerous because they can encourage delusions and other unhealthy behaviors.
In August, a family sued OpenAI, the company behind ChatGPT, alleging the chatbot validated their son's "most harmful and self-destructive thoughts" before he died by suicide. Cases like this are the fault of multiple parties: OpenAI needs to build a better monitoring system to prevent them, while the government needs to take responsibility by establishing stronger policies.
Currently, the U.S. has few laws concerning AI. In 2024, a pro-AI super PAC donated $100 million to candidates "aligned with the pro-AI agenda" in favor of loosening AI restrictions. However, AI restrictions aren't fully out of the picture; on July 1, the U.S. Senate voted almost unanimously to strike a proposed ten-year ban that would have blocked any laws and regulations on AI. That vote left AI in limbo: laws can be proposed, but have yet to be.
Several policy options exist. Notably, the government could pressure OpenAI to better enforce ChatGPT's terms of use while imposing regulations on emerging chatbot companies. Those terms include a clause requiring users ages 13-17 to have parental permission to use the chatbot. While enforcing this provision would not be difficult, OpenAI has no public plans to do so.
On Jan. 21, President Donald Trump announced a plan to invest $500 billion in private-sector AI research. This investment could pay off as long as it's guided by clear priorities and targets AI's current problems. However, there need to be more regulations and policies in place, not only from the government but also from other institutions, that determine what responsible AI use is.
DISSENTING OPINION:
Five of our editorial board members have a differing viewpoint:
Admittedly, some level of restriction and regulation on AI is needed, but policy alone can't solve the problem. It's users' responsibility to know that AI is biased, generates misinformation and is not a therapist, and the public needs to learn how to use chatbot platforms responsibly.
We can draw parallels between AI and vehicles. Even if the government heavily regulated cars, it would never fully ban them; they are far too useful. There are ways to misuse both AI and cars, just as there are ways to use both for great benefit. As long as individuals can use either one, it becomes society's job to ensure these tools aren't misused. While AI might not be deadly the way cars can be, it still shouldn't be taken lightly.
Another relevant detail is that laws don't guarantee cooperation. It can be made illegal for chatbot companies to market under specific terminology or to specific groups, but that doesn't guarantee compliance. For this reason, it is crucial that the public be taught about AI. We live in an age of unprecedented access to communication, which streamlines the process of informing others about AI. Realistically, it's more effective for friends and family to explain AI concepts than for the government to legislate them.
Ultimately, laws are outpaced by technology and public values. AI is advancing at an unprecedented rate, and expecting government policy to keep up, in either creation or enforcement, is simply unrealistic. The use of AI is up to individuals, and it is up to those same individuals to learn and act responsibly with this important tool.