Jersey Finance Fintech Podcast
12 Dec 2025

Jonny Clarke: Welcome to Jersey Heard, the Jersey Finance Podcast. I’m Jonny Clarke, the FinTech Lead at Jersey Finance and today’s host. Before we dive in, have you ever wondered what it takes to build the AI systems we use every day?
Well, by some estimates, training a model like GPT-3 consumed enough electricity to power 121 American homes for an entire year and enough water to produce 370 BMW cars. That’s just to create a single model. Every time we query it, additional energy and water are consumed. Research suggests inference, when models process our queries, now accounts for 80 to 90% of total AI computing power. The environmental cost is considerable, and it’s growing rapidly. So that’s the challenge, but here’s the paradox: while demand for AI computing power and storage could triple global data centre electricity consumption by 2030, AI could also significantly reduce worldwide carbon emissions, potentially achieving a net positive impact on the climate. So is AI an environmental villain or a climate hero? The answer, as we’ll discover today, depends on the choices we make.
Right now, I’m joined by Julian Box, co-founder and co-CEO of Clairo, a company delivering private, power-neutral AI agents. They’re tackling three of the critical challenges in AI today: privacy, security and environmental sustainability.
Julian, welcome to Jersey Heard.
Julian Box: Thanks for having me.
Jonny Clarke: Perfect. So Julian, could you tell us a little bit about yourself and Clairo?
[01:46] Julian Box: Yes. So, I’ve been doing tech now for nearly 40 years, building various tech businesses during that time. I’ve been doing data and AI for about 15 years and have focused pretty much for the last two years on agentic AI, which is what Clairo does. Agentic AI is the automation of processes using AI. Some of the automation is standard automation that’s been around for years, but then you add in the AI to enhance that whole process, and that’s what we do.
[02:15] Jonny Clarke: And you are very much environmentally focused as well.
[02:18] Julian Box: Yeah, we set our goals right at the beginning when we set the business up that we wanted to be a hundred percent private, we wanted to ensure that there was transparency around how the models worked and how the agents worked. Thirdly, we wanted to ensure that AI can be, and we believe AI can be, done sustainably. So, we set our target on producing a system, or set of systems and solutions that can actually achieve net zero if the client wants it.
[02:46] Jonny Clarke: Well, maybe let’s start with addressing the environmental costs at the moment. So energy consumption, water use, hardware life cycles and carbon emissions. Now these are all very much real problems, but the major problem is the scale and the rate at which we’re scaling with AI. So we really need to understand what it means to be environmentally friendly before it’s too late. So I suppose, could you walk us through a breakdown of where AI’s environmental impact actually is coming from and what that scale looks like.
[03:12] Julian Box: Yeah, so it’s an interesting area because a lot of the consumption that’s happening worldwide at the moment is, not unusually for tech, driven by a few very big players: Google, OpenAI, which makes ChatGPT, Microsoft and so forth. So the big guys have invested heavily. We’re talking hundreds of billions of dollars of investment that’s already gone in. And if you look at some of the announcements that have been coming out in the last two weeks, we’re into the trillions. OpenAI’s investment is in excess of a trillion dollars over the next five to 10 years. So they are investing huge amounts of money. A lot of the workloads that are happening now are consumer led, because businesses always lag behind consumers.
And there’s a big difference, I believe, compared to this sort of technology change and say, when the internet came along. The difference then was we didn’t have anything and the internet was built. This time we do, we actually have all the infrastructure already there and they’re just expanding that infrastructure so it’s been delivered at a much faster rate than when we did the internet. People are worried about the bubble and I think that’s more of an investment bubble than it is a technology bubble because this technology isn’t going anywhere.
I’m a big believer in the positivity of the technology and, like most technologies, you have to go through a change. I think the big change here is that it is going to happen in a much, much shorter timescale than what we’re used to. But it is going to be a massive change. So when you look at where the power is being consumed, the big guys are investing tonnes of money, building massive models, and then getting people to use those models, with most people having probably used ChatGPT. And when you Google now, at the top of the search results you’ll see Gemini, which is Google’s AI, and it will give you its thoughts on whatever you’ve asked for. I must admit I look at that more than I actually look at the search results now because it’s far more useful.
[05:10] Jonny Clarke: It’s chain of thought sort of thing.
[05:11] Julian Box: So every time you do something like that, even just a search on Google, you’re actually using AI in the background, or these large language models (LLMs) as they’re known, to give you some sort of understanding of the question you’re asking or the search you’re doing. It’s called inference: you get the model to infer what you’re doing or asking, and it will give you feedback on that. The more complex those questions are, the more power you actually use. So, you can ask a simple question or you can ask a hard question, and the consumption of power and water changes rapidly and massively based on that. Since that all started only two or three years ago, businesses are now jumping in and rapidly increasing their use of the technology, and that will accelerate over the next five years. I actually believe by the end of this decade we are looking at a change of magnitude similar to the Industrial Revolution, but ultimately that took 150 years. This is going to take about 15 years, but it will change society. It will change everything that we understand, how we work, all of that type of stuff. But going back to the power bit and the usage bit, I think what we’ll actually see is AI will eventually solve, or help solve, the problems that it’s creating. It can’t do it on its own. Let’s be clear about that.
[06:26] Jonny Clarke: It’s got that potential, right?
Julian Box: It has, yeah.
Jonny Clarke: The immediate issue, which a lot of people point to, is its footprint at the moment. And there are some crazy statistics. I think a lot of people have probably heard this one: GPT-3 apparently drinks approximately a 500 millilitre bottle of water for every 10 to 15 medium-length responses. Now that does depend on deployment location and timing, but that’s a huge amount of water just for a few queries.
[06:50] Julian Box: No, it is. If you look at ChatGPT, you can ask it this question. So I asked ChatGPT, how much power do you use in a day and how much water do you consume in a day? Of course it’s a pretty open question because it depends on how many people are doing stuff, what type of questions they’re asking and so forth. So it’s not an exact number because it would change literally every single day. But ChatGPT itself will tell you that it consumes enough power in a day to power 1.4 million homes, or charge half a million EVs. The list is quite endless here, but it uses unbelievable amounts of power, and that’s just ChatGPT. There’s a plethora of these out there. All of them use similar or maybe slightly less, but there’s a lot out there now, and probably the top five are all consuming similar levels of power. So it is quite scary when you look at it like that. And the predictions, as you said at the beginning of the conversation, are about where the power consumption is going to be in five years.
Now those predictions, though, are based on linear growth with nothing done about it. We would end up consuming all this extra power, which isn’t practical, in the sense that you can’t build the infrastructure quickly enough. Power stations, whether they’re nuclear, fossil fuel, or sustainable like hydro, take years to build. So the tech industry themselves, they have been looking at this for a while. They don’t want to be held back by the fact they can’t get enough power. So you’re seeing a few things in the news. Some of the big guys over in the States are starting to invest in their own nuclear reactors, for example. But these will be the small modular ones, not the big massive things that we’re used to seeing built, like Hinkley. So they are very conscious of this.
The other tech that’s been developed around this is actually how you run a data centre. So, the location of a data centre is starting to become super important. People really want to use the air that’s outside rather than powering a cooling system to cool the air. So more and more northern hemisphere-based data centres are being built and becoming popular. The interesting thing about that, and this is one of the areas that Clairo looked at, is that if you go to places like Norway and Iceland, their data centres are 100% net zero already. At least, a lot of them are; in Norway maybe not all of them, but Iceland is 100%, because their power is actually generated 100% by sustainable sources. In Iceland it’s about 30% thermal and 70% hydro, but it’s a hundred percent sustainable.
[09:32] Jonny Clarke: And presumably what you’re saying then is that the lower temperatures of those regions help with the cooling of the equipment, so they don’t need to use water and other things like that.
[09:38] Julian Box: Yes. If you can suck the air out of the atmosphere and push it through the computer equipment to cool it, without having to chill that air first, the savings are massive. You know, as a rough rule of thumb, depending on the efficiency of the data centre: for every unit of power the computing equipment consumes, a super-efficient data centre will consume about another 0.3 to 0.4 of a unit on cooling, while the older data centres could consume as much as another unit or more. So the placing of data centres has become super key.
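The rule of thumb Julian describes corresponds to the industry metric PUE (Power Usage Effectiveness): total facility power divided by IT equipment power. A minimal sketch of the arithmetic, using illustrative overhead figures rather than measurements from any real facility:

```python
# Sketch of data-centre cooling overhead. PUE = total facility power / IT power,
# so PUE = 1 + overhead, where overhead is the extra units of power (cooling,
# lighting, losses) consumed per unit of IT load. Figures are illustrative.

def total_power_kw(it_load_kw: float, cooling_overhead: float) -> float:
    """Total facility power for a given IT load and cooling overhead."""
    return it_load_kw * (1 + cooling_overhead)

# A super-efficient free-air-cooled site: ~0.3 extra units per unit of IT load.
efficient = total_power_kw(1000, 0.3)   # 1300 kW, PUE 1.3
# An older conventionally cooled site: ~1.0 extra unit per unit of IT load.
legacy = total_power_kw(1000, 1.0)      # 2000 kW, PUE 2.0

print(f"efficient site: {efficient:.0f} kW (PUE {efficient / 1000:.1f})")
print(f"legacy site:    {legacy:.0f} kW (PUE {legacy / 1000:.1f})")
```

For the same 1 MW of computing, the legacy site draws over 50% more power from the grid, which is why siting data centres where outside air can do the cooling matters so much.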
[10:21] Jonny Clarke: Just on that as well, that’s from a kind of power perspective, but also water as well. They’re competing for water in different regions and it’s farming or otherwise, so another massive consideration.
[10:32] Julian Box: Yeah, so a lot of the server companies are building systems now that are liquid cooled, so a bit like a fridge. I’m not an expert in that, but it sort of works like that: just the same as a fridge uses liquid cooling to keep its contents cold, they can use a similar sort of technology. So the racks that the computers go in are completely sealed, like a fridge, and it cools them that way. They’re becoming very popular now, and that technology’s been around for a while. Originally it started with water, so you’d have a system with its own water circuit and that water was cooled, and it was far more efficient. Now they use liquid coolants like you do in a fridge.
[11:15] Jonny Clarke: The other thing then, aside from this: you’ve got electronic waste as well. Because these models require more power, more GPUs come out and they’re replaced, you know, as people look for better performance. And that’s a real problem as well.
[11:28] Julian Box: It is. So, what we’re going to see over the next five years is a plethora of new silicon, as they call it. New cards coming out, because GPUs are brilliant at training, but they’re not actually that efficient at inference. A lot of technology companies now are looking to build cards that can do inference really efficiently at much, much lower power. They actually cost a lot less to build, and their carbon footprint is much smaller. There are quite a few of those in the early stages of production.
They’re beyond just the drafting of a design; they’re way beyond that now. We’ll start seeing them hit the data centres in the next couple of years, and you’re going to see a big change there. It’s not that GPUs are going to go away, I’m not saying that. I’m just saying that as new technology comes through, people realise they’re not that efficient, because they weren’t originally designed for this. If you remember, they came out of the gaming industry; it’s the way they process information that makes them super good for this type of technology.
[12:31] Jonny Clarke: Yeah, I mean, it’s a huge thing. When researching this, I came across so many staggering facts, and one of them is around the e-waste. One source was saying that under aggressive adoption by all internet users, 4.5 million tons of e-waste would be produced between 2020 and 2030. That’s the equivalent of discarding 13.3 billion iPhone 15 Pro phones. That’s incredible, and it can’t be sustainable that way. So we absolutely need these new solutions, and maybe the repurposing of components.
[13:01] Julian Box: Yes. If you look at Apple, they do a lot of recycling. It’s starting to become quite a big thing, not just because it’s environmentally friendly but because it’s actually far more cost effective. So there are quite a lot of advancements now in pulling out the rare minerals and metals that are in every electronic device. We are focusing on the ones powering the AI machine, but if you add in every other electrical circuit board out there running everything we take for granted in our lives, e-waste is off the charts.
But when you look at that waste, there’s loads of it, and it’s very valuable in its own right, the individual bits that are still in those circuit boards, so a lot of that’s now being extracted. If you think about the big old scrap yards back in the day, it’s a similar thing: loads of circuit boards go to these specialist companies and they use various techniques to pull all the rare minerals out of what is basically plastic.
Again, I’m not an expert on that, but it’s a trend that’s growing rapidly, because a lot of the stuff in your iPhone can be reused. Some stuff, like the lithium in batteries, not so much, but the rest, all the gold, platinum and the other rare minerals in there, can all be reused. I think, again, we could be doing a lot more on that though.
[14:21] Jonny Clarke: I suppose linked to this issue as well is the move to on-device and edge AI, which may well reduce the demand from these big companies for these big chips, because you’re able to process things more efficiently, locally on your phone, with lower-powered chips as well.
[14:37] Julian Box: Yeah, I think that’s going to be a massive trend actually. And that goes to one of our founding thought processes, which is that if you’re going to use this technology, you’ve got to be a hundred percent comfortable that it is totally secure. One of the ways to do that, and we’re not just talking about a data breach, we’re actually talking about the fact that no one else can use your data, is on-device AI, and it’s going to become quite a big trend. That’s powered by two things. First of all, obviously, the technology in phones is getting more powerful, but also there’s a big push on small language models and tiny language models. These language models are designed to do specific tasks, rather than the big ones, which are designed, in theory, to be able to answer any question. That comes with its own challenges, because a big model has too much information, and that’s how you get hallucinations. It’s not that it can’t happen with a small model, but the smaller the model, the less likely it is to happen. So if you’ve got, say, a health app of some sort on your phone, and you can ask it questions about the data your phone’s been collecting about your heart rate, maybe coming off your Fitbit, it can process that information because it’s only been asked to deal with that one type of inference, not general inference. Therefore you can make the model a lot smaller. There are also loads of techniques being researched at universities around the world now around compression, so you can take a big model and compress it right down but still make it really good and useful. That’s not commercial yet, but you’ve got all of these techniques rapidly being developed and focused on. And smaller models, by the way, are what we do at Clairo. We don’t use any of the large language models; we focus on anything from 7 billion to about 120 billion parameters.
But we try to focus on doing it with a 70 billion parameter model if we can. To put that into perspective, with something like GPT-5 you’re in the trillions, so above a trillion parameters. Ours are very small in comparison, and therefore you need a tiny amount of the compute to actually process them.
[16:38] Jonny Clarke: Makes sense. And I think the other thing I was looking at is model optimisation selection. So kind of links to what you’re saying there, choosing the right models for the right task. You don’t need the most powerful models to do every type of query that you’ve got. Selecting the smaller ones will be potentially quicker and more efficient and I suppose better for the environment long term.
[16:56] Julian Box: Yeah. And really, if you’re truly going to do agentic AI, the whole purpose is that you don’t use one model. You look at the process that the business has and that needs to be replicated, then you break that down into its components. Those components could well involve more than one model, and each model would focus on a specific task.
Let’s say part of the process was to collect individuals’ PII, or personally identifiable information. You’d have a model that specialises in spotting that, and it’s going to be far more accurate than using a big model. Also, if you are using a big model and you’ve got quite a complex process and you try to build it all into one prompt, as it’s called, your accuracy will drop off, because it’s really difficult to capture a complex process in one very large prompt, if that makes sense.
So you’re better off breaking even that prompt down into multiple prompts. Each prompt would then potentially use a different model, a model designed to do the task you’re asking that prompt to answer for you. But also remember, with automation, what we focus on is the optimisation of the process. So, if we don’t need to do an inference and we don’t need an LLM, we don’t use it.
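A hypothetical sketch of that decomposition in Python. The model names and the `call_model` stub are illustrative placeholders, not Clairo’s actual models or API; the point is that one step needs no inference at all, and each model-backed step uses a small, task-specific model:

```python
# Hypothetical sketch of an agentic pipeline: a business process broken into
# steps, where only some steps use a model, and each model-backed step routes
# to a small specialised model. Names and call_model() are illustrative only.
import re

def call_model(model: str, prompt: str) -> str:
    # Stand-in for an inference call to a small, task-specific model.
    return f"[{model}] {prompt[:40]}..."

def redact_pii(text: str) -> str:
    # Plain automation: no inference needed for a simple pattern like an email.
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[REDACTED]", text)

def process_document(text: str) -> dict:
    clean = redact_pii(text)                        # step 1: no LLM required
    summary = call_model("summariser-7b", clean)    # step 2: small summary model
    category = call_model("classifier-7b",          # step 3: small classifier
                          f"Classify: {clean}")
    return {"clean": clean, "summary": summary, "category": category}

result = process_document("Contact jane.doe@example.com about the invoice.")
print(result["clean"])  # Contact [REDACTED] about the invoice.
```

Each prompt stays small and focused, so a 7 billion parameter model can handle it accurately, and the step that can be done with ordinary code consumes no inference compute at all.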
[18:11] Jonny Clarke: That says a lot, because I think everyone’s shifting now towards using AI for every internet search. You perhaps don’t need it, but it’s now easier to do that. That’s another critical thing individuals may want to consider: those who want to do their bit for the environment could consider not using AI for everything.
[18:29] Julian Box: It’s difficult, like I said, with Google now, it automatically does it for you.
[18:34] Jonny Clarke: Without you knowing.
[18:35] Julian Box: OpenAI released their browser called Atlas, which broadly does the same thing as what Gemini does when you do a Google search. Actually, one of the good things about all of this is that you want that competition there, because people do care about the environment. The big businesses know they care about the environment, so they are focused on doing it more efficiently. But that also makes economic sense: if you can do the same task at half the price, why wouldn’t you do that if you’re a business? Of course you’re going to do that, so they all actually help each other. Having good competition, having a thought process as a person, do I need to do that? Yes? No? It is hard these days, to be fair. But if you’re a business it’s far easier to make decisions about how you use AI within your processes at work, because you need to control those processes, so you can decide how much of each process should use AI. It’s less about sitting there and saying, I don’t want to use AI for that; it’s not as simple as that. When you’ve got a process, you can break it down and go, right, you don’t need AI for these four parts, but you need it for these two parts.
Whereas if you’re a consumer, you just ask ChatGPT. I’m no different than anybody else when I need something. Especially for me, because I’m dyslexic, it’s super handy. I can write something out in my ‘Julian language’, as it were, and ask it to change the tone. I’d never use it to actually create anything; I get it to change the language of what I’m saying. But that’s super useful for me, so I’m always going to do that. It’s very difficult to turn off something like that once you start using it.
[20:15] Jonny Clarke: I agree. I’m guilty now of pretty much all my Google searches are now just AI searches, far more convenient.
So, we’ve kind of touched upon the issue and some of the solutions, but how do we know that these innovations and solutions from the tech companies are actually working? As I understand it, most AI companies don’t really disclose the full extent of their energy consumption or emissions, and in particular water. Even the major players are inconsistent about their reporting. That lack of standardisation and transparency must surely be a barrier to evaluating the true cost of AI. So, how should companies measure the environmental impact of their AI systems? What metrics matter, and can we trust the vendors?
You said before about GPT, you asked it and it told you, but can we trust this?
[21:00] Julian Box: I think, like anything new, we’re always going to go through a period where ultimately it’s not quite how we want it to be, what good looks like as it were. But you’ve got to be a bit careful not to smother it, because if you do that then the innovation disappears as well.
So, there is quite a difficult balancing act, especially for something like this that has such an impact. And it’s not just the size of the impact, it’s the speed this is happening at, which we’re just not used to. We’ve been used to, especially in the last 20 years, multiple changes in tech, but this is happening at a pace no one’s ever seen before. I think transparency needs to become mandatory, personally, both in how a model is built and in all the different metrics: power, water, the model itself, what data was used to create that model. Because even most of that information is not available. We broadly know what they do, but we don’t actually know what they really do in detail. So, I think full transparency does need to happen, and I think over time it will, probably led by Europe because it tends to be a leader in this type of stuff. But people need to remember the big guys aren’t European, currently.
[22:11] Jonny Clarke: That’s the thing. In doing the research, again, Europe does seem to be pushing some disclosure, so there’s probably going to be stuff in the AI Act which touches on that. There’s also the Corporate Sustainability Reporting Directive; it’s not exclusively about AI, but it mandates companies to report on their environmental and social impact.
And you would assume that must include their AI systems and their impact as well. But you’re right, the big players really are in the US at the moment, so we need to be looking at what they’re doing.
[22:40] Julian Box: Yeah, and going back to where data centres are located, governments could start mandating that AI-based data centres need to be near sustainable energy sources. They could prescribe certain standards. At the end of the day, every time a building is built, whether it’s a home we live in or an office we work in, it has quite strict standards about lots of things. Data centres have certain standards, but they’re building standards, not compute standards, as it were. So I think there’s scope there that governments need to look at, or else you’re going to get to a point where you can’t have any data centres built because you haven’t got enough electricity. So it is a win-win: if you want to use this technology, then you need to start investing in how that technology is physically deployed, and have standards there so that deployments can be judged.
The other thing about standards, which is handy, is that if you’ve got a marker, then that marker is there to be beaten. At the moment we don’t really have a marker. We know we need to consume less electricity and less water, but that’s not really a marker. It’s just that it’s consuming so much that any reduction is probably going to be a massive win. But once we get down to a certain point, then we really do need to know: the industry standard is this, the data centre has to meet this, the AI process has to meet X, Y, Z standards. I genuinely think that type of stuff will happen, because of the burden this generates if you ignore it; it’s too dangerous to ignore. But as I said at the beginning, you’ve got to balance that with not stifling it too much, because we’re still in the infancy. What we are using now is still only two or three years old.
Agentic AI, which uses large language models, is only two years old. Large language models as we know them came out in 2022, the first ones that we could play around with.
[24:36] Jonny Clarke: No one’s going to disagree that making AI greener matters, and perhaps those standards will help, but we have to be honest about the real trade-off. AI does have a significant footprint, yet it also has the potential to help solve climate change much quicker. So the question is really whether the potential justifies that cost. Where is AI actually moving the needle on climate action? Are there any use cases where the benefits genuinely outweigh AI’s own footprint?
[24:58] Julian Box: Yeah. Well, an interesting thing about AI, one of the general things that AI is going to change for society, is the speed of innovation is going to change. I read a BBC report maybe about a year ago. It was to do with some research that had happened to try and find a better compound than lithium for batteries, because we all know it’s not a particularly nice element as it were.
So, something like that would typically take 20-odd years to do. But what they did was build a model from scratch that could analyse tens of thousands of different molecules and then make recommendations. It had taken about two years to build the model; then it ran for a few weeks and gave 12 compounds, having analysed thousands upon thousands. Twelve is a usable number for humans to then build those compounds and truly test them, because obviously this is a model, it’s not real. They found two of them had excellent potential to change the use of lithium in batteries.
That’s obviously still got to go through its phases, but the footer of that article, which was more interesting to me, noted that this took just over two years, when typically it would’ve taken 20-odd years. What they’re predicting is that with AI, the next 20 years of innovation will be the equivalent of 200 previous years. Then, when we’ve done that, the next five years of innovation will be another 200. So within the next 25 years, they’re talking about our level of innovation being the equivalent of 400 previous years of innovation.
So, if you apply that to this challenge, it’s 100% going to help us sort these problems out and not just these problems, but probably climate change as a whole, not just the challenges that AI has. So that’s why I have such a positive view about the technology. But there’s some amazing stuff in health and in the environment that’s going on in the world and AI is driving that innovation and that change.
When you look specifically at what we could do, for example, as individuals and companies right now, we should be pushing for that transparency piece. The more we scream and shout about it, the more likely it’s going to happen. I think also, we do have a choice of which AI platform we use, so why don’t we choose the one that’s more open, more transparent, greener. At the end of the day, if we choose a certain thing, businesses normally listen.
[27:35] Jonny Clarke: Absolutely. So, consumer pressure is going to drive that corporate transparency and innovation faster than regulation in many cases.
[27:41] Julian Box: Yeah, and we are going to see also that whole optimisation more and more, not just at a silicon level where the hardware is going to get optimised. We’re going to see small models everywhere. The big ones are still very useful, don’t get me wrong, but we’re going to see more of these smaller models, which become much more useful for us. They’re so much greener than the big models and they actually are more accurate in most cases as well.
[28:04] Jonny Clarke: Yeah. Choose the right models and be intentional about the use of that AI model as well.
[28:09] Julian Box: Then it’s down to us ultimately to make that final decision. I was reading another article on the BBC, about two months ago, on people who absolutely refuse to use AI, and I must admit I don’t understand that standpoint, because this is something that is going to change society.
You can’t change it, and just saying ‘I’m not going to use it’ doesn’t mean it’s not going to happen. I believe the positives of this technology massively outweigh its negatives, even the environmental ones, because I do think right now the corporate world, especially around AI, not the consumers of it but the ones that generate and supply it, they’ve already got a good eye on this.
You could argue it’s been driven by commercials and there’s a lot of that there, because as we’ve said already, if you can do it for half the price, you’re going to do it for half the price. You’re not going to pay double the price if you can do it for half the price. So, I do think there’s already a massive push to make this more efficient.
Going back to that first point that you raised about the fact that power will get to this unbelievable level by the end of this decade, if you just left everything like it is, that’s what would happen. But everybody knows there isn’t enough physical infrastructure to generate that right now in the world.
So, they’ve got no choice but to innovate and ensure that they don’t need to use all that power. I do think we’re at the beginning of the change around this, in the sense that we will start seeing big improvements in the green position. But how we judge that is still a question mark, because the transparency isn’t there.
[29:36] Jonny Clarke: That’s the thing. So hopefully we’ll have more transparency. Businesses and consumers can better analyse which models to use, more efficient ones, less cost, better reputation as well. So, Julian, thank you so much. This has been a really interesting conversation. The bottom line then, AI’s environmental impact is real and growing, but it’s not inevitable.
Solutions do definitely exist to reduce its impact. And if we can get this right, AI could actually have a net positive impact on the climate, which you think is a certainty anyway. It all depends on what we do next. Whoever you are, the choices you make definitely do matter. For listeners who want to learn more about Clairo and your approach to sustainable AI, where should they go?
[30:16] Julian Box: Just go on the web clairo.ai.
[30:18] Jonny Clarke: Perfect. Well, thanks again, Julian, for joining us today.
If you found this conversation valuable, share it with someone who needs to hear it and we’ll see you next time.
Thanks for listening to Jersey Heard.