I don't hate AI. I want to make that clear from the start.
I'm really excited about what's possible. I even spent a chunk of my Christmas break building my own AI server. And I can certainly see the potential in AI doing the stuff I don't want to do.
But the sweet spot isn't necessarily productivity. It shouldn't be replacing people. It should do the chores that I have to do, but don't enjoy. I want it to do the dishes, not write the songs for me.
It's a tension I feel on both sides of this debate. Before becoming a CTO, I built a career as an award-winning songwriter and musician. My music, like that of countless other artists, has almost certainly been scraped by AI companies to train their models – without permission, without compensation.
On a philosophical and ethical level, I hate it. I hate the fact that my art has been taken without my permission and used for purposes I don't approve of.
This tension between AI's genuine usefulness and its significant problems is why I recently wrote an internal document for CultureSuite examining the ethical, environmental and security implications of AI adoption. It was intended to start a conversation within the business. But given how rarely these issues are discussed openly in our sector, we decided to share our thinking more widely.
One thing worth clarifying upfront: when most people say "AI" in this context, what they actually mean is generative large language models (LLMs) – tools like ChatGPT or Claude that generate text, code, images and more. Not all AI is the same, and the problems I'm describing are largely specific to LLMs and the infrastructure behind them.
The hidden costs
The problems with LLMs extend far beyond copyright. Take environmental impact. A single request to a model like ChatGPT uses roughly enough energy to fully charge a smartphone. That's just one prompt. For a sector increasingly concerned about sustainability, this should give pause.
At CultureSuite, we deliberately don't use AWS (Amazon's cloud-computing services). We work with Exonet, a local hosting company, because we want to reinvest in our local communities. One of my favourite things about CultureSuite is that we're critical of our partners. But with AI, it seemed that scrutiny just wasn't happening – at first.
Then there's the human cost. Investigations have revealed that major AI providers have relied on workers in Kenya, the Philippines and India – sometimes paid less than $2 per hour – to review traumatic content and label training data. Many have reported psychological harm and inadequate support.
And the models themselves carry embedded biases. Research has shown that AI systems can discriminate based on dialect, associate women disproportionately with domestic roles, and reinforce harmful stereotypes. For a sector that champions diversity and representation, using these tools uncritically risks undermining those very values.
The hype versus reality
I'm sceptical of the breathless promises surrounding AI. The current trend is for frontier labs to chase a single model that can do everything. What happens is that it doesn't do anything particularly well. It's very good at sounding like it's right. It's very good at convincing you it must be right for some reason. And it's very good at stroking the ego of its users to hide its limitations.
Developer productivity surveys show that, contrary to the hype, AI-assisted coding is actually producing more bugs and architectural problems, not fewer. Up front, far more code gets generated, but far more bugs get generated too, so you ultimately spend more time fixing the fallout. Even when the code produced is acceptable, the increased speed of production simply creates a new bottleneck at the code review stage. Overall productivity ends up being reduced.
For cultural organisations already stretched thin, the temptation to see AI as a silver bullet is understandable. But I'd warn against skipping due diligence. You can see why, in an underfunded arts department, it's really tempting to get AI to create a poster or a press release. But I think we should default to protecting the creative people that make all our stuff possible. Otherwise, what kind of futures are we really building for ourselves?
There's also a skills question. If teams become dependent on AI for tasks they never properly learned, what happens when those systems fail or become unaffordable? The sector already faces challenges around skills development and succession planning. AI shortcuts today could entrench capability gaps tomorrow.
What we're doing about it
CultureSuite isn't swearing off AI. But we are being deliberate about how we use it.
We're developing a company-wide AI policy, shaped collaboratively with input from staff across the business. The goal isn't to ban specific tools, but to ensure everyone makes informed choices. If someone needs to use a high-powered model for a specific task, they should be able to justify why a smaller, more environmentally friendly alternative won't work.
There is a huge ecosystem of smaller, open-source LLMs being developed and released for free use. I have a lot of hope for models like these. I now use them exclusively, running locally, and they meet all the needs that ChatGPT or Claude previously filled for me.
We're also exploring partnerships with more ethical providers. I recently met with GreenPT, a French company that hosts its own air-cooled servers (significantly more environmentally friendly than water-cooled alternatives) and focuses exclusively on sustainable, open-source AI. There's a whole other world of open-source models that you can run at much smaller scales. They're far less expensive to train and much less damaging to run.
The key is finding providers and models that can do specific tasks really well. We should avoid the "jack of all trades, master of none" approach of the major platforms.
What cultural organisations can do
Most cultural organisations don't have a CTO. This is precisely why I believe vendors like us have a responsibility to share what we've learned, so the sector doesn't have to duplicate the research. Here's where I'd start:
Search for "green AI" providers
A growing number of companies are building sustainable alternatives. They often use open-source models that require a fraction of the energy to run. They're not the ones making headlines, but they exist.
Look for zero data retention policies
This means the provider doesn't store your queries or use them to train their models. Your data gets processed and returned, then deleted. This is particularly important if you're handling customer or artist data.
Consider where your data is stored
GDPR compliance matters. Some providers can guarantee your data stays on European servers. The major platforms often can't tell you where your data ends up.
Ask vendors direct questions
If an AI company can't clearly explain their environmental practices, worker conditions, or data policies, that's a red flag. Smaller providers are often more transparent because they can actually have a meeting with you.
Don't let AI touch customers or artists without careful thought
Internal experimentation is low-risk. But anywhere AI interfaces with your audience or the creative people you work with, proceed with caution. Nobody enjoys being stuck in a chatbot loop when they need human help.
Build skills, not dependencies
Use AI as a tool, not a crutch. If your team couldn't function without it, that's a vulnerability.
I'll end with a small admission: I used AI to help research and write the internal document this article is based on. Disclosing this at the end of a long, serious document was partly a joke. I knew readers would be wondering. But it was also about transparency.
I think it's only basic respect to acknowledge when something was machine-assisted. This is a complicated topic with complicated feelings. I'm not pretending to have all the answers.
Nobody can claim to know it all when AI is still in its infancy. But I think the cultural sector deserves better than rushing headlong into AI adoption without asking hard questions. Our artists, our audiences, and our planet deserve that much.
We think critically about the technology we use. That includes our own.
If you like how we approach these questions, and you're looking for a digital partner that brings the same scrutiny to your website, here's where to go next.
- See CultureSuite CMS in action: Book a personalised demo to discover how CultureSuite helps arts, culture and entertainment venues treat their website as infrastructure, not a project.
- Partner with us: We partner with technology providers, design agencies and suppliers across the arts and culture sector. Get in touch.
- Keep exploring: Dive into more thinking on digital strategy through our articles, events and webinars, Spotify, YouTube channel or our newsletter.