AI Part I: Who’s in charge?
Because artificial intelligence is new and exciting, and most of us don’t understand how it works, it can seem magical . . . and threatening. Both things, as they say, can be true.
If we harness AI in tools to take over the tedious sifting of vast amounts of information, there is potential to yield previously unattainable understanding, analysis and solutions. The key word is “tools” — things humans develop, use and control for our own purposes.
All manner of researchers, from medical staff to criminal investigators, archivists and legal professionals, are already benefiting from AI’s ability to reduce grunt work while making new connections and positing new theories.
In our rush to use these tools, though, we must enact safeguards to catch AI’s tendency to hallucinate, creating data where none exist. The internet is full of errors and misinformation, so an AI trained on what it finds there is never going to be entirely reliable.
Those kinds of shortcomings will no doubt be corrected soon. In the meantime, we have to keep a close eye on the use of generative AI to create things like computer code or act on our behalf through AI agents.
Regardless of its application, this incredibly powerful tech is being developed for profit. Some companies say they’re doing so ethically, while others don’t even pretend. Governments must set standards for AI businesses to keep ordinary people safe, just as they do for everything from banking to children’s play structures.
And then there’s the environmental impact of the gigantic data centres housing AI infrastructure. A search using AI-powered ChatGPT uses 10 times the electricity of one typed into Google, according to the UN Environment Programme. Wasn’t it just yesterday we were trying to find ways to slow climate change by reducing our electricity use?
Cooling the electronics in those centres also sucks up shocking amounts of fresh water, even as a quarter of the world’s people still don’t have access to clean water for basic human needs. How does Saskatchewan, for instance, square its courting of data centres with the long-term drinking water advisories still in effect in four First Nations communities?
American writer Nate Soares has been tracking the potential risks of AI for more than a decade. He uses the analogy of a car speeding toward a cliff. We passengers have every right to ask to slow down, even if the driver keeps reassuring us that the car can withstand the plunge.
Maybe it can; maybe it can’t, Soares says, but why not ease up until we know for sure? And if the response is that there’s a huge pile of money on the other side of the cliff, he adds, there are safer ways to get there.



Summarized by AI:
This editorial presents a balanced yet cautious perspective on the rapid rise of artificial intelligence, framing it as a powerful tool that requires immediate human oversight and regulation.
The piece can be summarized through these four key themes:
1. The Promise of AI as a “Tool”
The author acknowledges that AI has immense potential to eliminate “grunt work” by sifting through vast amounts of data. In fields like medicine, law, and criminal justice, it acts as a catalyst for new theories and previously unattainable solutions—provided humans remain the ones in control.
2. Technical and Ethical Risks
Despite its “magical” veneer, AI faces significant hurdles:
Reliability: The tendency for AI to “hallucinate” or generate misinformation based on flawed internet data.
Accountability: Because AI is driven by profit, the editorial argues that governments must regulate it as strictly as they do banking or public safety to protect ordinary citizens.
3. The Hidden Environmental Cost
A major focus of the editorial is the physical toll of AI infrastructure. The author highlights a stark irony:
Energy: An AI search consumes 10 times the electricity of a standard Google search.
Water: Data centers require massive amounts of fresh water for cooling, a fact that clashes with the reality of communities (specifically First Nations in Saskatchewan) that still lack clean drinking water.
4. The Need for Caution
Using Nate Soares’ analogy of a “car speeding toward a cliff,” the author concludes that the pursuit of profit is not a valid reason to rush headlong into the unknown. The editorial advocates for “slowing down” and implementing safeguards before the technology outpaces our ability to manage its consequences.