We Need to Talk About AI

Reuben Bijl

Like many in the tech industry, I’m excited by the possibilities offered by artificial intelligence. At Smudge, we’ve spent nearly two decades helping organisations solve problems with digital tools, and AI is undoubtedly one of the most transformative technologies we’ve seen.

But I’m also uneasy about how AI is being developed, deployed, and increasingly relied upon with little consideration for the ethical or legal frameworks that should accompany such powerful systems.

AI is rapidly becoming embedded in how businesses operate

AI is rapidly becoming embedded in how businesses operate. ChatGPT now boasts 500 million weekly users globally, and in New Zealand, a recent One NZ survey found that 77% of us have knowingly used AI in the past year. That’s a major shift in a short time. What’s concerning is that much of this is happening in a grey zone which is largely unregulated by Government or industry bodies. That’s why, as business leaders, we need to be having more conversations about the AI we’re adopting, as well as advocating for education that empowers people to make conscious, informed decisions about how and when to use this technology.

Yet there’s a lack of clear standards governing the industry

One of the more troubling aspects of AI is the lack of clear standards governing the industry. When you visit a doctor or a lawyer, you can trust they’re trained, licensed, and held to professional standards. There’s no equivalent for AI and technology. These tools can offer advice on any subject, no matter how sensitive or high-stakes, without any oversight or accountability. It’s a dangerous precedent, especially as these systems are increasingly embedded into everyday tools and services.

There’s also no guarantee the information provided is accurate. AI-generated content often comes with a small disclaimer: “This information may be incorrect.” That tiny caveat is doing a lot of heavy lifting, absolving the platform of any responsibility while placing the risk squarely on the user.

Internationally, one of the most common uses of AI is for emotional support or therapy-like conversations. In the midst of a global loneliness epidemic, this isn’t surprising, but it is deeply complex. What happens when the easiest “person” to talk to isn’t a person at all, not just because it feels safer than risking vulnerability with someone else, but because it’s faster and always available? These tools may offer genuine comfort, but we’re quietly reshaping how people seek connection, process pain, and make sense of their lives. This is happening without standards or requirements for the technology, without informed consent from users, and without long-term research.

Are we sleepwalking into a world where tech erodes trust?

This kind of shift, toward ever-present, seemingly helpful AI, carries real risk, not just for individuals but for businesses too. It’s easy to be drawn in by the speed, scale, and low cost of AI-driven interactions, but when the information is unverified and there’s no accountability for the outcome, we risk more than just getting things wrong. We risk eroding trust, damaging our reputation, and making poor decisions based on flawed advice.

If we go back a step further, there’s also a lack of transparency around how AI models are trained, what data they use and how much of it is copyrighted or scraped from websites without permission. It raises the question: if a model generates something based on someone else’s work, who’s liable? The company that built it? The developer who deployed it? Or the business that used it?

Tech isn’t always a benign influence on society

We’ve seen this pattern before. We sleepwalked into a world with social media and are only now having conversations about whether children under 16 should be allowed on these platforms. At the same time, we watched as YouTube rewarded engagement over accuracy, helping misinformation spread and undermining public trust.

Social media’s impact on industries was also dramatic. We’ve seen New Zealand’s newsrooms gutted, in part due to global platforms like Facebook siphoning off ad revenue while republishing content they didn’t create. Now, we’re seemingly doing nothing as AI systems take content even more brazenly. Media isn’t the only industry at risk, and if we don’t act now, we risk repeating those mistakes on an even larger scale.

Unfortunately, AI is also being used in scams targeting vulnerable New Zealanders. Every week there’s a new story about a deepfake video or a voice clone tricking someone out of money. The technology is making these scams more convincing and harder to detect, and yet the focus is always on the scammer rather than on regulating the technology that made the scam possible.

I’m not anti-AI. I am pro-education and ethics

To be clear, I’m not anti-AI. I’m pro-innovation, and I believe these tools have the potential to do tremendous good and rapidly increase productivity. I also believe that innovation without awareness carries serious risks. A good place to start is education: we need people to be informed so they understand the trade-offs, risks, and ethical considerations that come with using AI.

It’s also important to remember that when we use software made by large American tech companies, we’re not just importing the technology; we’re importing their way of thinking. My ethical stance comes from a New Zealand perspective. Here, we tend to take a community-focused view and think about how our actions affect others.

That’s not the default in the United States. California’s low-regulation environment favours corporates and accelerates innovation, often at the expense of public safeguards. The things we find troubling here, like the lack of guidance or oversight, are standard operating practice there.

New Zealand has an opportunity to do things differently. We’re a small, connected country with a strong culture of trust and integrity, and we can put ethics and accountability at the heart of how we adopt these tools.

As leaders, we don’t get to hide behind the algorithm. We need to take responsibility for the tools we choose to integrate into our businesses and demand they are ethical as well as innovative. The first step is talking about it, openly and often.