Hello again, and welcome to A Planet Beyond, hosted by me, John Ba Pitt. In recent months, we've all had a glimpse of the potential of AI. It promises to help solve many of the pressing issues we face, whether that is speeding up drug discovery or finding low-carbon ways to design buildings and infrastructure. Over the next two episodes, we will be exploring what the future of AI might look like. In our full episode later this month we'll discover some of the pressing challenges that AI could help us solve. But this potential technological transformation comes at a cost. AI systems currently use huge amounts of power, and fresh water for cooling. While AI could help propel us towards a net-zero world, it will in the process use a lot of energy. It's estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. So how sustainable is AI? That's what we plan to address with our guest today. Mérouane Debbah, could you start by introducing yourself?

Thank you first for this opportunity to explain a bit of the work that we're doing here in the UAE. I'm a professor at Khalifa University and director of the 6G Center. I'm also a senior advisor at the Technology Innovation Institute. I've been working for the last three years on what we call large language models, known to many people through ChatGPT, and more broadly within the realm of generative AI. Throughout my years of research I've been working at the interface of telecommunications, mathematics, statistics, signal processing and, of course, machine learning and AI.

In the following episode, we will explore some of the benefits of AI, so we won't cover all those again here. But I know you see another impact that AI could have: it could help us on the path to interplanetary exploration. Now, that does sound exciting. So why is AI important to this mission?

That's a very good question. Of course, I'm in academia, and in general, we look ahead.
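[Editor's note: as a back-of-envelope illustration of the "four to five times" figure above. The per-search energy and global search volume below are assumed, illustrative values, not figures from the episode.]

```python
# Back-of-envelope: extra daily energy if every web search used generative AI.
# All input figures are illustrative assumptions, not measurements.

WEB_SEARCH_WH = 0.3       # assumed energy of one conventional web search (Wh)
GENAI_MULTIPLIER = 4.5    # midpoint of the episode's "4 to 5 times" estimate
SEARCHES_PER_DAY = 8.5e9  # assumed global daily search volume

def extra_energy_gwh_per_day(per_search_wh, multiplier, searches):
    """Additional daily energy (GWh) versus conventional search."""
    extra_wh = per_search_wh * (multiplier - 1) * searches
    return extra_wh / 1e9  # Wh -> GWh

# With these assumptions, roughly 9 GWh of extra energy per day.
print(extra_energy_gwh_per_day(WEB_SEARCH_WH, GENAI_MULTIPLIER, SEARCHES_PER_DAY))
```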
When I say we look ahead, I mean we look at the ten-, twenty-, fifty-year time frame, and we're not looking at what I would call short-term applications. Today, the majority of people working on AI are, of course, looking at applications in terms of enhancing productivity, coming up with use cases in which you can basically increase the productivity of people. However, I think one of the main points that people are missing is that, as of today, we have of course a big increase in the capability of these systems. And one of the things that is also quite important is that this is the first time in history that we're arriving at a stage where exploration can go beyond humans, because of the conditions involved. The large language models that we've been building so far have this capability of being what we call autonomous. Autonomous meaning taking decisions on their own, using basically reasoning capability. And of course, if you look at how humanity has gone from Christopher Columbus onwards, we are in the end explorers. And today, one of the things we're facing is that it's no longer possible for us to go everywhere ourselves. We want to go to Mars, and we know that if we want to go to Mars, we need to have these robots basically starting to explore these new realms which people are going to inhabit, and basically starting to prepare the kind of colonies that we're going to be building. We can't rely on communication anymore, because the lapse of time is so long that whatever happens in such an expedition, it will be too late to react to the consequences.
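[Editor's note: the communication "lapse of time" the guest mentions can be made concrete. The distances below are approximate published Earth–Mars ranges; the sketch just divides distance by the speed of light.]

```python
# One-way radio delay between Earth and Mars at the speed of light.
# Distances are approximate; the Earth-Mars range varies with orbital positions.

C_KM_S = 299_792.458   # speed of light (km/s)
CLOSEST_KM = 54.6e6    # approximate minimum Earth-Mars distance (km)
FARTHEST_KM = 401e6    # approximate maximum Earth-Mars distance (km)

def one_way_delay_min(distance_km):
    """One-way signal travel time in minutes."""
    return distance_km / C_KM_S / 60

print(f"closest:  {one_way_delay_min(CLOSEST_KM):.1f} min")   # ~3 minutes
print(f"farthest: {one_way_delay_min(FARTHEST_KM):.1f} min")  # ~22 minutes
```

So even a simple question-and-answer exchange with a rover can take from six to over forty minutes round trip, which is why on-board autonomy matters.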
So we need, of course, devices, robots, machines which can take decisions on their own, exactly in places where we cannot go, starting to build basically the kind of environment that suits us.

So you're making a distinction between doing what human beings are currently doing, better, and being able to do things that are simply not possible unless we leverage AI.

Yeah. I think for many people the understanding of AI is always a question of being more efficient, being more productive and, in the end, basically replacing humans. I think the new vision has to start, at least for the people who are now looking at AI, with creating new opportunities. And when you create new opportunities, of course you create new jobs, you create new expertise for people and, of course, you also create a better world for many people. I think this realm has not been looked at so much.

Of course, all these things come with a cost. Can you put into perspective the carbon emissions from AI? I know you split these things into two key areas: training the models and serving them. Can you describe each of these stages, what is involved, and where the carbon costs come from?

That's a very important question, and I'll try to be like a professor in explaining what we're building. What we're building is a sort of brain. And when you build a brain, there are two things you have to do. The first, of course, is building it, which is what we call the training phase. To build a brain, basically, you need to train it, and the way you train it is exactly by feeding it with a lot of content, visual and text, and that is called data. You use routines called machine learning techniques, which are based on optimization techniques, and which basically keep a lot of computers crunching. When I say crunching, it means a lot of computers are running, basically, to make that machine learn all that data.
So that's what you need to build it. This is based on processors called GPUs, graphics processing units. Each of these is basically a sort of computer, and you need to map and connect thousands, hundreds of thousands of them together to be able to build brains which are much, much bigger and stronger. Whenever you start training, of course, these computers run, they are powered by electricity, and they start consuming energy, which of course has an impact on carbon emissions. The amount of energy required to train those models is huge. And in general, when we talk about these models, we talk about models of somewhere from a billion up to hundreds of billions of parameters, which are some kind of analogue to the number of neurons that you have in your head. Building at those sizes requires a lot of energy. And today, already, in many places around the world, some states are no longer welcoming these data centers, because they don't have the energy which is required to power them. A recent announcement by Meta, for example, which wants to build basically a huge data center with more than six hundred thousand GPUs, showcases this.

But that's only one part of the story. That's a massive amount of computing power, electricity and carbon just to build the model. What's involved in the second part of using AI, serving the model?

The second part of the story is that once you build a brain, well, you need to communicate with the brain. This is called serving the brain, or inference time, meaning asking questions and getting your response. For people who are familiar with ChatGPT, this is related to the fact that whenever you prompt the machine, meaning you write your query as text into the machine, it gives you an answer. But whenever you write a text, it processes information to provide you that answer. And depending on the length of your input and the length of the answer, you start consuming a lot.
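[Editor's note: the training cost the guest describes can be roughed out with the widely used rule of thumb that training takes about 6 FLOPs per parameter per token. Everything below, including the model size, token count, GPU throughput, utilization and power draw, is an illustrative assumption, not a figure from the episode.]

```python
# Rough training-energy sketch using the common ~6 * N * D FLOPs rule of thumb
# (N = parameters, D = training tokens). All inputs are illustrative assumptions.

def training_energy_mwh(params, tokens, gpu_flops=1e15, gpu_util=0.4, gpu_watts=700):
    """Very rough compute-only energy (MWh) to train a model; ignores cooling."""
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (gpu_flops * gpu_util)  # effective per-GPU throughput
    joules = gpu_seconds * gpu_watts                    # GPU power draw only
    return joules / 3.6e9                               # J -> MWh

# e.g. a hypothetical 70-billion-parameter model trained on 2 trillion tokens:
# roughly 400 MWh of GPU energy under these assumptions.
print(training_energy_mwh(70e9, 2e12))
```

Note that this counts only the GPUs themselves; real data-center totals are higher once cooling and other overheads are included, which is exactly the point the guest makes next.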
Today, if I take the case of OpenAI, you have roughly around a hundred million people using basic chat activity per day, which keeps running all this compute that is necessary to get the results, and which also has a huge impact in terms of energy. We talked about energy, but it's not only about powering. It's also about cooling, because when you run these machines, they get heated. And whenever they get heated, you also need energy to start cooling the system. And this also, of course, has a huge impact in terms of carbon emissions.

So these systems use a great deal of energy, and they also require an enormous amount of fresh water for cooling. A recent Cornell paper calculates that training GPT-3 would have used 700,000 liters of water. Overall, the researchers say AI systems may use as much as 6.6 billion cubic meters of water by 2027. That's half the UK's annual water use, and an even more drastic impact in countries like the UAE that face water shortages. My understanding is that part of the problem is that AIs currently use brute-force techniques, crunching every number until they hit the right solution.

Just to give you an example which is very simple: if you go back to school, you want to solve what we call an equation, x plus 1 equals 0, which is a classical example used to educate kids. Any kid at school knows that x is equal to minus 1, because he can solve that equation and has, of course, been taught how to solve it. Another way to solve it is what we call the brute-force technique. You test all the possible values of x. You say: if I replace x by 0, is 0 plus 1 equal to 0? No. If I replace x by 0.5, it doesn't work. And minus 1 turns out to work.
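[Editor's note: the guest's x + 1 = 0 example can be written out directly. The search range and step size in the brute-force version are arbitrary choices for illustration.]

```python
# The guest's example: solving x + 1 = 0 analytically versus by brute force.

def solve_analytic():
    """A kid at school rearranges the equation: x = -1."""
    return -1.0

def solve_brute_force(step=0.5, limit=10.0):
    """Test candidate values of x one by one until x + 1 == 0 holds."""
    tries = 0
    x = -limit
    while x <= limit:
        tries += 1
        if x + 1 == 0:
            return x, tries  # found the solution, after `tries` checks
        x += step
    return None, tries       # no candidate in range satisfied the equation

x, tries = solve_brute_force()
print(x, tries)          # finds -1.0, but only after many wasted checks
print(solve_analytic())  # one step, no search, no wasted work
```

The answer is identical either way; the difference is how much computation, and therefore energy, gets burned along the road to it.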
So these brute-force techniques turn out to work, and that's what we've been relying on quite a lot recently in our approaches, by taking energy for granted for some reason. But we're hitting a wall at the moment, and a lot of people are quite concerned with the carbon costs related to making these systems happen.

On the commercial side, are companies seeing market pressure to improve AI energy efficiency? Or are we in a bit of a free-for-all, in which companies are expanding their AI capabilities as fast as possible, without considering the environmental impact?

So today, when you look at the landscape of all the companies which are jumping on AI, there's a big hype. And unfortunately, not so many companies are really making revenues out of AI. Why? Because the cost of building these models, the cost of maintaining these models, the cost of running these models is still a bit high compared to the gain you're getting when you start putting them into production. And so the market, of course, is going to hit a point where you need to come up with a balanced approach, where the money that you're generating has to outweigh the cost of all the money that is injected to build those models. Today we're more in a hype in terms of investments because, of course, it's a new technology. Everybody's seeing a lot of potential, but the revenues that we're getting are not matching. And I think on the marketplace, we will get that pressure coming in, saying: hey guys, if you want these systems to be implemented, the economics need to make sense. And if the economics need to make sense, then you need to do your homework and come up, basically, with systems which are not so costly, so that we can make money out of them.

So is there hope out there right now that companies can both bring costs down and make their systems more energy efficient?
What are the solutions?

So what we're seeing today on the hardware side, as I was saying, is that the industry is now working on what we call hardware specifically built for AI. Today, in general, you use GPUs, which were basically built mostly for graphics, meaning games; people in the game industry know about these graphical processors, and that was basically their original aim. And, of course, one of the main driver companies in that field is NVIDIA. These processors are very general; they are not very specific to the kind of task you need. What we're seeing is that if you want to do training, you'll use one specific type of hardware, and if you want to do inference, or serving the model, you'll be using another type of hardware, whereas a couple of years ago it was the same kind of hardware, the GPU. The same thing we're seeing today is that, even within training, depending on how you do it, there are also going to be some hardware-specific architectures which will be built to make the system more efficient.

The second way the industry can become more efficient is basically by making sure that AI is being used not on specific, I would say, sub-problems, but end to end. The big beauty of AI is that, from my point of view, if a problem is well defined, you don't really need to use AI. In general, it's end-to-end problems which are very complex, in which, instead of using what we call model-driven approaches, where you can sit down and model the system, you're going to use what we call data-driven approaches, which are based on this data. And I think taking a more holistic point of view of how things are working is very important. So far, the industry has always been looking at one part of the problem, neglecting the end-to-end holistic approach. And this is not related only to AI.
You see it also with people who are working on cars and energy. In general, when you look at electric cars, when you look at that industry, you're always narrow. You're asking: okay, what's my carbon footprint in Paris when I start using an electric car versus a normal car? However, you should take the holistic approach: where the car was built, the transportation needed to bring it here, and also the whole process of what happens when the car is no longer working. This is where the industry has to play a role, because they know and have a good view of the whole process.

But are we capable as a modern society of taking this long-term view, or are we stuck focusing on the short term? What does long-term sustainability look like? Is nuclear an option?

I think what we're missing here, across all societies, is people who can have this long-term vision, saying: if I build something today, what's going to happen in a hundred years with my technology? Let's take nuclear. I start building nuclear plants. Great. What's going to happen, of course, with the waste from that nuclear plant in twenty years? So I need to think about the whole process, make the economics make sense, make sure also that in terms of carbon footprint and climate change it makes sense, and then you start building your solution. And I think you're totally right. We've been so narrowly focused that we now need to broaden our view. Of course, that requires a lot of interdisciplinary work, working with others. It requires, of course, exchanging with people that are not in your domain, which is not always easy. It also requires time. We are living in a world where the pace of innovation and the pace of productivity are so fast that people tend to neglect these aspects.

We see that a lot as the climate crisis bites, don't we? We need to change our way of thinking and get beyond the short-term view.
Do you think we'll ever be able to do that?

I have to admit that the new generation is starting to take that into account. I'm a teacher, a professor at a university, and more and more of our students now are much more, I would say, enlightened about these questions, and you can see it, by the way, in the career choices that a lot of the students are making. And I think this is a big change, from my point of view, because it's the generation that we're forming today which will be leading us in the future. That is how we make sure that the right people are taking control of the boat.

Thank you. AI has immense potential, but it's currently matched by its huge carbon and economic costs. Another statistic: a recent article in Nature shared an assessment suggesting that ChatGPT is already consuming the energy of 33,000 homes. As this technology develops, we need to keep the environmental costs front of mind, alongside the potential benefits. We need to approach the development of AI with a much more holistic vision of both its benefits and uses, as well as its costs and dangers. This is a powerful technology, but we need to deploy it efficiently, in a sustainable way, and where it can make the most difference.

Thank you for listening. Don't forget to tune into the episode later this month, where we will explore successful applications of AI in support of a safer and greener world. If you enjoyed this episode, please leave a rating or review in your podcast app, and help us reach a wider audience by sharing this episode with your network, your family and your friends. Until next time: be safe, be remarkable, be the difference.