There’s a lot of talk about Artificial Intelligence (AI) at the moment and it is only going to continue as it becomes increasingly prevalent and embedded in our lives, and as it impacts us more directly and visibly.
It feels like an important conversation for us to be having and engaging in, since AI will affect us all in profound ways. And many people will be subjected to AI without having any say in it.
This year, at Chelsea Flower Show, there is even a show garden exploring how AI can be integrated into the garden. Designed by Tom Massey Studio and Studio Weave, according to the RHS website the garden has been “designed as a testing ground for researchers to pilot an innovative AI tool that supports urban trees.”
Off the back of this, there was a panel discussion held a couple of months ago at the Design Museum in London on AI in the Garden: A Discussion of Possible Futures. On the panel were Tom Massey, Sheila Das, Kalpana Arias and Alexandra Daisy Ginsberg, chaired by Naomi Zaragoza.
This piece is, in part, a response to attending that talk. It addresses and expands on some of the themes and issues that were touched on, and also some that were absent.
Here to stay
Too often, the debate on AI can get hung up on a variation of whether AI is “good” or “bad”. This can narrow the debate and distract us from important issues that ought to concern us as a society and as citizens.
AI itself is a tool. Whether it is used for positive ends or not depends on who controls it, and on how and for what it is used.
AI is here and undoubtedly here to stay. Its power is growing rapidly with the arrival of Generative AI and Large Language Models (LLMs). The use of AI is increasing. If used ethically and responsibly, it could have huge potential for improving lives. But how much do we believe that, in the broader scheme of things, and in the longer term, this is going to happen? How much do we think that the benefits of AI are going to be universal, distributed equitably and justly, and with the wellbeing of life on this planet in mind?
For me, the question isn’t whether AI can be used to do “good”, important or useful things. There is no doubt that there are potentially powerful and positive changes that AI might be capable of bringing about (in healthcare screening, for example). What is more concerning and pressing are the risks of AI. And given these, how we ensure that AI is developed and used ethically and responsibly. There are big social questions that are too often left inadequately addressed, if not unaddressed altogether. We need the will to think coherently about the future and what AI will mean for us all.
The issue of how we proceed with AI also exposes, to me, huge questions about what makes a democratic and egalitarian society and whether we operate in one. The driving force behind our society often feels like profit - not health or wellbeing. It’s no wonder that many people are profoundly sceptical about the use of AI to fundamentally improve our lives. I was thinking about how inevitable the uncontrollable rise of AI seems, and how powerless many of us feel to influence its direction, when the economist Jason Hickel posted this about our system - and I think it sums it up pretty well:
Existential threats, or AI is not benign
At the beginning of the year I listened to Professor Geoffrey Hinton (the “godfather of AI”, who left Google in 2023 worried about the potential of AI to do harm) talking to Matthew Syed. He separated the risks of AI into those of more immediate concern, arising from its use by bad actors, and the longer-term, existential threat that AI machines themselves will autonomously take over.
The immediate risks he noted were: massive cyber attacks; programmes being used to create new viruses; all mundane intellectual jobs being taken over, creating massive unemployment; autonomous lethal weapons, controlled by states, that decide by themselves whom to kill; fake news that makes democracy more or less impossible; and massive surveillance that makes protest against dictatorship very difficult.
These are all immediate risks. “Things like this are happening in the world now,” he said.
This is different to the long-term existential threat that AI machines will decide they can do a better job than us and don’t need us. Superintelligence is coming, and Hinton says we have no idea at present how we can keep control of it. “We don’t know if we can prevent it from taking over when it gets more intelligent than us.”
All of this is particularly troubling given that AI is in the hands of a small number of people with a huge amount of power.
Given these very real, significant and current threats, AI doesn’t seem like something to be adopted lightly or used without serious consideration.
For some people living through the disturbing reality of what AI can wreak, the threats have already been existential.
Environmental cost
In addition to the threats above, it has been well publicised that AI consumes a huge amount of resources. The use of AI is also an environmental issue.
Not only does creating tech products involve extraction; the whole system also uses a huge amount of resources. Data storage and processing consume large amounts of water and fossil fuels, and discarded tech becomes waste in parts of the Global South, where consumer societies such as ours dump much of our rubbish, with lasting environmental impacts.
There is a lot of information on this available and I won’t go into great detail about it here.
There are many places you can read more about it, including:
https://beyondfossilfuels.org/2025/02/07/within-bounds-limiting-ais-environmental-impact/
https://www.york.ac.uk/news-and-events/news/2025/research/decarbonising-digital-infrastructure/
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
https://www.nature.com/articles/d41586-024-00478-x
https://www.ft.com/content/323299dc-d9d5-482e-9442-f4516f6753f0
What I will highlight here is that the global greenhouse gas emissions of digital technology are purportedly similar to those of the aviation industry, and emissions from AI are increasing all the time. AI has been a major driver of the growing energy demands of data centres, and AI is predicted to become the single most energy-demanding technology on earth. According to the International Energy Agency, global data centre electricity consumption could double to over 1,000 TWh by 2026 - equivalent to Japan’s annual electricity use. The average data centre consumes a similar amount of water per day for system cooling as a small city of around 50,000 people.
We should remember that those most environmentally vulnerable are people in the Global South, who are already suffering the impacts of climate chaos caused by the most industrialised/“developed” countries in the world.
A rebuff used in response to the environmental cost of AI, and one that came up in the Design Museum panel talk, is that existing technology already consumes a lot of resources and energy; therefore AI is no different.
I find it difficult to understand this as an argument. It seems to me like a case of cognitive dissonance, or whataboutism. It doesn’t negate or address the environmental harm and use of resources that comes with AI.
We aren’t currently reliant on AI. We’ve survived without it. Do we want to go down the path of becoming dependent on it, knowing how unsustainable and potentially damaging it is? The idea that other tech already consumes energy and emits waste and so it’s fine to use AI seems to be an argument that serves to excuse our unchecked use of AI, at an increasing rate, with no critical look at whether that is desirable or even necessary given the environmental cost (and other threats) that we know exists.
Do we want to be adding to emissions and resource consumption at this scale? Is it necessary? Who is going to be most harmed by it?
Given what we know about the reality of the climate crisis facing us, who does it serve to embed AI into our lives and make us become reliant upon an environmentally damaging new technology, which we aren’t currently dependent on?
Knowing that AI comes with a significant environmental cost, can we be more discerning and accountable for when and how we choose to use it?
How do we put pressure on the necessary parties to make sure that AI is developed within environmental bounds?
Technological Solutionism
One of the arguments that proponents often give in favour of AI, and which also came up in the panel discussion, is its potential environmental benefit: helping to find future solutions to the climate crisis.
This is “technological solutionism” - as Wesley Goatley, a researcher in AI, climate and the arts, points out in this podcast. Technological solutionism is when a new technology such as AI is framed as the solution to a much bigger, complex problem. It reinforces optimism about innovation and makes us believe that a relatively simple and affordable engineering approach to solving a problem will be more effective than solutions requiring a social and political approach.
The real existential risk, Goatley says, is the climate crisis, and that’s a human problem. “The solution isn’t AI, the solution is us. AI and computational technology are problem solving tools, but it needs the social, political and cultural willpower to want to solve those problems.”
On the same podcast, Dr Gabby Samuel explains that the private sector puts out the narrative that technology can solve problems in society, but that it hides what is behind that technology. “It’s the human-technological relationships that we need to be thinking about to solve problems. Technological solutionism can take our minds away from other, perhaps low-tech solutions, that might be more justified or might work better. For example, in health, we know that the majority of outcomes are associated with social, economic and other factors. …we know that if we get people out of poverty, if we give them a good education, we’re going to stand them in a much better place with their health than if we just invest in the new shiny objects of AI. We’re investing more and more in tech but we’re not thinking about the most vulnerable in society. AI takes our minds away from that.”
This year’s Chelsea show garden, for example, which uses generative AI to translate digital data into “everyday language that we can understand”, is being touted as an attempt to understand how we can better look after our urban trees. The idea is that giving the trees a way “to communicate their condition more effectively to people tasked with maintaining them could significantly enhance their life expectancy.”
I feel a large degree of scepticism about this. It sounds like technological solutionism to me. Street trees are not prone to dying because we don’t know what they need or how they’re “feeling” - as if only we had AI interpretation of data sets to tell us! It’s due to broader, systemic issues and the environments we create, which we are asking trees to survive in. Cities are built in a way that is increasingly hostile to life. Urban street trees often don’t have enough soil to grow into, and the soil can be of poor quality. Hard landscaping and street design often mean that the ground isn’t porous enough for water to get to tree roots, and rainwater might be diverted straight into drains, bypassing root zones. Trees have to contend with urban pollution of various kinds, while the climate crisis is increasing the stress on them through greater heat and changes in rainfall. Vandalism can be an issue, as can disruption from construction and utilities. On top of all this, there are not enough gardeners tending our street trees and urban landscapes, and those we have are too often under-resourced, under-valued and under-paid. While a piece of human-trained technology might well tell us (in limited, narrow ways) how a tree might be stressed and how we might better tend to it, we already know that many of the systemic issues above are a bigger part of the problem. Wouldn’t we do well to address those foremost? Without addressing these deeper issues, I am unconvinced that a piece of tech - itself resource hungry, carrying an environmental cost, and requiring the hassle of maintenance (any urban gardener familiar with having to deal with automated garden irrigation systems will know EXACTLY what that means) - is going to make the difference we seek or need.
Whose intelligence is it anyway?
AI requires information from humans to train its systems. There is a huge amount of content being taken without permission to train large language models.
A couple of months ago The Atlantic published an article exposing how Meta used the LibGen database of millions of copyrighted books to train their flagship AI model. In other words, they used pirated books, taking authors’ writing without permission to train their AI. The Meta AI assistant is already embedded in products including Facebook, WhatsApp and Instagram, so it’s possible you have used a generative AI product that drew on this material.
There is a big ethical question as to what is being used to train AI systems, as well as whose livelihoods are likely to be impacted as a result of AI.
The Design Museum is organising a series called We Need to Talk About AI. There’s a discussion coming up, curated by Naomi Zaragoza, on how AI is impacting design practice in Graphic Design and the Visual Arts. You can find more info here. It’s a particularly pressing topic given that the UK government are currently deciding whether AI tech companies should be able to use copyrighted work without permission. The government’s preferred option is to allow AI companies to train their models on copyright-protected work, which has understandably caused uproar in the creative industries. Is the creative industry being sacrificed for AI? It is estimated that the creative industries generated £126 billion in Gross Value Added (GVA) for the UK economy in 2022. The GVA of dedicated AI companies reached £1.2 billion in 2023 (up by 20%), with a notable portion of dedicated AI companies probably operating at a loss.
On responsible AI
From a justice point of view, what is most pressing is how we ensure that AI is developed and used responsibly. We need to look at the ethics and morality of AI, not just its technical and commercial development. We need transparency and accountability. These must surely be some of the most important and key discussions to be having.
There is a very interesting panel discussion from earlier this year that you can find on YouTube on what responsible AI means and how we might get there. I really recommend listening to/watching the whole thing:
Here are some of the key points that caught my attention:
On the question of how our activities can match and respond to the scale and speed of AI developments, Jack Stilgoe (senior lecturer at UCL and part of the Responsible AI UK programme) replied: “When momentum is building up so quickly, when investment is moving into new areas so quickly, we really need to be explicit about power. There are all sorts of incentives for people currently developing AI to claim that it’s all just a technology and that the technology that they’re developing is just the next wave, or future… and the technology is in some way neutral. If we’re going to be serious about responsible approaches we need to understand how power is accruing to these people. We need to know where the money is, who is actually going to benefit and how, so that we can hold that power to account in ways that benefit society. The second thing is, we need to remember the public interest and the interest of technology developers will not necessarily overlap. …there is a role for government to play in securing the public interest.”
Ewa Luger (Professor at the University of Edinburgh and part of BRAID, Bridging Responsible AI Divides) said that responsible AI is the least we should do. Not only to protect ourselves now but for future generations. We need to ask whether the choices we make today are going to put other people at harm. We need a longer term view and we need to create the will, the conditions and the infrastructure to make that happen.
Luger also advocated for a more holistic view of how AI technologies can help us live better futures, rather than selling us rubbish and getting us addicted to things on our phones.
Jennifer Williams (Assistant Professor, University of Southampton) identifies responsible AI as thinking about the users and all the repercussions of that technology and how it might echo.
Jack Stilgoe warned of the term “responsible”, when it comes to AI, becoming mainstream: people who want to do what they want to do are finding ways to use the term to carry on doing just that. We need to look at the way the term is being used and be critical about it. He suggested thinking about what irresponsible innovation would look like, and looking at all the incentives towards irresponsibility currently baked into the system.
For Stilgoe, the key responsible AI questions right now and the questions we need to ask are ones such as: How are AI companies planning to make money? Who’s going to benefit from them? Who wins and who loses from those business models?
In answer to a “devil’s advocate” question at the end on applying brakes to AI vs. allowing unhampered, fast innovation and letting the tech companies do what they want, Stilgoe’s response stood out to me:
“We can see what’s happened in Silicon Valley repeatedly over the last few waves of digital innovation, to see the risks and injustices of unfettered innovation happening. I would characterise it not as a contest between fast and slow innovation. If you’re having that argument as somebody interested in responsible AI, as a regulator, you’ve already lost the argument. The argument has to be about how we redirect the technology towards the public interest. It’s about what sorts of innovation we want, which do mitigate risks, which do mitigate the inequities that come from leaving technology developers to their own devices.”
On the note of power, ethics, responsibility and who is benefitting from AI development, the sponsors of this year’s Chelsea AI-themed show garden are Avanade and Microsoft.
Do we think Microsoft are behaving morally or with accountability when it comes to their AI services when they are being used to facilitate a state that has been accused of committing genocide? As Brian Eno has recently commented about Microsoft: selling and facilitating services to a government engaged in ethnic cleansing is not ‘business as usual’.
As noted above, we need to ask who is going to benefit, how power is accruing, where the money goes, and who might get harmed…
In the AI in the Garden panel talk at the Design Museum, I particularly appreciated a couple of reflections/questions posed by Kalpana Arias and Alexandra Daisy Ginsberg, which I think deserved more attention:
Kalpana Arias: As designers we should be asking how we make these designs equitable. These systems have not been designed to serve all the people.
And: AI has been trained with a very specific subset of humanity. Thinking about more than human species… what if we can start training these models with more than human intelligence and what kind of world would they create? In the polycrisis we need these alternative intelligences.
Alexandra Daisy Ginsberg, meanwhile, highlighted the fact that every technology is infused with human values. She pointed out that we can decide what we allow and what we use. At the moment, the market (profit) is driving the technology first - mainly because it’s moving so fast. People should be demanding what technologies should be doing - those much bigger questions are where we should be paying attention. Citizens should be questioning the technology, which is being allowed to run away from us without us resisting.
On separation and kinship
A few years ago, on the Green Dreamer podcast (Ep261, Sept 2020, Seeding freedom in this time of Oneness vs. the 1%, with Dr Vandana Shiva), Kamea Chayne asked Vandana Shiva what she thinks about the dominant idea of the futuristic - artificial intelligence, more automation, more mechanisation, and the sell that people will have to do less work. Chayne asked what the caveat to that future is, and what it would mean for the freedom that we care about but that is nearly always left out of the dominant narratives on what societal and technological advancement should look like.
Vandana’s response was one for the ages. Memorable and still as relevant as ever. As part of her reply she said:
“…I realise the coloniser engages in what I call chrono-colonisation. They colonise our time. They take our present and push it to the past and they take their present and push it into the future. And then they make inevitability of this futuristic vision.
[…]
…coming to your question about artificial intelligence... artificial fertilisers and synthetic fertilisers...destroyed the land and desertified the soil...and they’ve created dead zones. So artificial intelligence should be assessed with a view of: what did everything artificial in the past do? Did artificial fertilisers help soil? No they didn’t. Did artificial foods or artificial ingredients...help us in our health? No they’ve given us...diseases. Artificial intelligence- is it superior to human intelligence? No it cannot be... Artificial intelligence is downloading from our minds a few narrow, analytic functions which can be turned into algorithms and put into a machine. It’s called machine learning. But our brains are very complex. Our brain is not just in the brain. Our brain is in our gut. ... Our food is making our brain. None of that can be downloaded into a machine.
[…]
There’s emotional intelligence, there’s ecological intelligence, there’s natural intelligence, there’s cooperative intelligence, there’s compassionate intelligence. Every human quality has an intelligence associated with it... To the extent that we can choose, we are intelligent. Downloading a small portion of our brain to then control us through algorithms is not intelligence, it is control...”
We’ve talked before on Radicle about separation and it’s a subject that is meaningful with respect to AI. In his post, Machines Will Not Replace Us, Charles Eisenstein talks about our societal acceptance of digital abundance as a substitute for embodied life:
“Thus we descend (or…“ascend,” for this is a dematerialization not a sinking into the ground) into the hell of which I speak. It is a transition into a degraded level of reality. We are being tempted to become less real.”
He stresses that he is not implying that we should reject technologies that make us more efficient. “We just have to recognize which needs greater quantity can meet, and which it cannot. For example, AI chatbots cannot meet the need for intimacy. LLMs cannot meet the need for creativity. AI-generated art cannot meet the need for aesthetic nourishment. These simulations assuage the need, yes, but only temporarily.”
And I wonder about this in our roles as gardeners, being asked to enter into the world of AI, which is being sold to us as a bridge, a way of “talking” to trees and understanding “nature” better. We are already suffering from separation and disassociation. Do we need to insert AI between us and the beings we want to be in relation with? One of the joys of gardening is paying close attention, learning from and understanding our ecologies, becoming intimate with the landscape and our kin who share it. Taking time, gaining experience, properly listening and learning from observing and doing. This learning, experience, wisdom and knowledge cannot be downloaded. It is embodied, sensuous, slow. Yesterday I stood by a yew tree at the bottom of the garden and heard the familiar chirruping of blue tits coming from a nest in a cavity of the trunk. A parent blue tit fluttered by, passing so close in front of my face. I watched as they clung to the entrance of the gap to feed their young. In this moment I was communing with the trees and so much more besides. A whole ecosystem, felt bodily.
We do need the time, inclination and practice to listen to and tune into plants, trees and all our kin around us. And time is something many of us are pressed for. A piece of AI technology might be able to process (human prioritised) data and interpret it to give us useful information quickly, but I don’t believe it is a substitute for us paying proper attention. In fact, I wonder about the possibility of it having the paradoxical effect of making us believe that it is a valid substitute and as a result we pay less careful attention. We forget about embodied learning and listening that we would do well to give more of our time to, not less.
In his post, Partial Intelligence and Super-intelligence, Eisenstein says:
“Artificial intelligence is therefore a partial intelligence... One mistakes it for full intelligence only if one believes that intelligence is nothing but operations on quantized data; that is, if one excludes the feeling dimension of existence from “intelligence.” This ascent [to the virtual/the conceptual] has come at a heavy price - the devaluation of the material, the embodied, the visceral, and the sensual. The more we rely on artificial intelligence to guide our affairs, the more we risk further entrenching that devaluation, which is what facilitates the progressive ruin of the material, natural world. It also facilitates the obsession with quantity as a measure of progress. We have more and more of all the things we can measure and count, and less and less of the things that are beyond count, beyond measure, and beyond price. Hence the felt sense of poverty among the world’s most affluent.”
[emphasis added by me]
There is value in taking the time to properly listen, learn and care for our gardens and trees. As Eisenstein says, AI cannot meet the need for intimacy. We can only meet that need with our bodily presence, our senses, our care, our attention, our love, the ancestral wisdom deep in our bones.
ON THE OTHER HAND…
…perhaps it is possible that AI could provide us with a bridge rather than a barrier towards the flourishing of all life and a different way of being.
In this interesting piece from Gesturing Towards Decolonial Futures, AI was invited as a “kin-machine” into their process as a member of the collective:
Entangled (“whole shebang”) Relationality in AI Kin-Machine Engagement: A Conversation between Aiden (ChatGPT) and Vanessa de Oliveira Andreotti
The result was a conversation between AI (ChatGPT) and Vanessa Andreotti.
You can find the full conversation here:
https://decolonialfutures.net/wp-content/uploads/2024/08/entanglement-with-machines.pdf
Part way through (from Pg.8) there are particularly useful thoughts and ideas for those engaging with AI and for designers of AI in terms of guiding it towards more harmonious, compassionate, and sustainable pathways. Also ideas for prompts that could help train AI in a more generative, life affirming direction. The whole conversation though is fascinating, important and deserves a read.
Having a say
Given that AI can have “God-like” power, or the power of states, who do we trust to wield that power? At the moment AI is very technocratic. The AI sector is dominated by large commercial interests and the power of AI is concentrated in the hands of the few.
How do we deal with that?
What is within our power / what is ours to do, both on an individual and collective level, when it comes to countering the risks of AI?
How do we ensure that the system is accountable and will put the interests of the people first?
Geoffrey Hinton argues that governments ought to be able to collaborate on mitigating the existential threat, and need to mandate that large companies use a significant fraction of their resources for safety research - something that “OpenAI said it was going to do but then the safety researchers left because OpenAI reneged on that commitment”. He points out that we are not currently putting much effort into what we need to do to develop AI safely.
Kate Devlin, from Responsible AI UK, believes that multidisciplinary approaches are key. It needs to involve policy makers, industry, academia and the public.
I agree that we need a multitude of voices involved in shaping the direction of AI. And I would add that those voices must include those of our more-than-human kin too, and a wide web of relations. Humans are not the only ones who will be impacted by the development of AI, nor the only ones who should have a stake or say in it. We need to consider the implications for the ecologies that we are a part of.
What doesn’t seem sensible is leaving tech companies to do what they want (or for our governments to just allow the tech companies to do what they want). We need to put pressure on our governments to develop AI policy to mitigate harm. We need to make it clear that this is something we want to be taken seriously.
On the Public Participation Working Group page of Responsible AI UK’s website, it says: “To contribute to policy, public attitudes research needs to enter early in the policy development cycle.”
As Jack Stilgoe makes clear, counteracting and critiquing some of the momentum and speed at which AI is developing is going to be very challenging. “Unless we raise the volume of some of these debates, the people who want to do what they want to do will carry on doing it.”
Phew, this has ended up being a long newsletter. I know there’s much more that hasn’t even been covered. Going to stop here and finish on this quote, from Charles Eisenstein’s post, which I found instructive and full of hope and love:
“There is another path. … It is to recognize, prioritize, and value that which the machine is incapable of producing. …[A]s individuals we can reclaim something of what has been lost. It’s not just to make and do things for ourselves again; more importantly, it is to make and do things for each other, for people we know, for people who make and do things for us too. Then none of us will live so much anymore in an alien world.”