Environment & Sustainability - AI News
https://www.artificialintelligence-news.com/categories/ai-and-us/environment-sustainability/
Fri, 06 Mar 2026 13:54:43 +0000

Exploring AI in the APAC retail sector
https://www.artificialintelligence-news.com/news/exploring-ai-in-the-apac-retail-sector/
Fri, 20 Feb 2026 17:19:04 +0000

AI in the APAC retail sector is transitioning from analytics and pilots into workflows and daily operations.

Dense urban stores, high labour churn, and competitive quick-commerce ecosystems are driving the uptake. A Q4 2025 survey by GlobalData found that 45 percent of consumers in Asia and Australasia are very or quite likely to purchase a product based on AI recommendations or endorsements.

Jaya Dandey, Consumer Analyst at GlobalData, said: “Whether shoppers realise it or not, machine-learning systems have long been deciding when to encourage consumers to make purchases, which products they can see, and what discounts they can avail.

“Now, agentic systems can also complete shopping-related tasks end-to-end.” 

Computer vision and store automation

Enterprises evaluating computer vision and machine learning can observe early implementations in the region.

Lawson, for example, introduced AI-enabled ‘Lawson Go’ stores in Japan during 2022. The retailer collaborated with technology provider CloudPick in 2025 to integrate AI, machine learning, and computer vision – an integration that eliminates checkout lines and cashiers to enhance the customer experience.

In South Korea, retail AI company Fainders.AI launched a compact and cashier-less MicroStore inside a gym in 2024. This deployment improved the accessibility of autonomous retail across different businesses.

AI also aids the forecasting and automation of retail replenishment—a capability that applies well to the APAC market, where store footprints are small and replenishment frequency is high.

Japanese food retail chain Coop Sapporo uses Sora-cam, a camera-based AI system developed by Soracom, to avoid overstocking and reduce unsold merchandise on store shelves. An in-house analytics team evaluates the generated images to determine the optimal shelf display ratio, and the system also alerts staff to apply discount labels to food items close to expiry, preventing wastage.

AI models track waste and markdown timing while improving promotion efficiency. In Southeast Asian (SEA) markets characterised by high price sensitivity, minor improvements in promotion efficiency increase profit margins.

AI-driven labour optimisation measures include scheduling, task priority lists, and workload balancing. These measures assist retailers in Japan and South Korea, which face structural labour shortages. They also provide efficiency benefits in high-growth SEA markets.

Agentic AI systems in retail are improving APAC consumer interaction

“In food retail, agentic AI is best understood as an AI ‘operator’ that can understand a goal, plan steps, stay within budget or allergen constraints, execute actions across systems, ask clarifying questions, and learn preferences over time,” says Dandey. 

Customers can bypass individual item searches by outlining their overall intent. A customer, for example, might request an AI agent to “Plan five dinners for a family of four, mostly Asian recipes, no shellfish, under 45 minutes.” The agent then generates recipes, builds a shopping cart, sizes quantities, and adds missing staples to the cart.
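A minimal sketch of the constraint-filtering step such an agent might perform, assuming a toy recipe catalogue (all recipe names, fields, and figures here are illustrative, not taken from any retailer's system):

```python
# Hypothetical recipe records; real agents would query a product/recipe catalogue.
RECIPES = [
    {"name": "Chicken teriyaki bowl", "cuisine": "Asian", "minutes": 30, "allergens": set()},
    {"name": "Prawn pad thai", "cuisine": "Asian", "minutes": 25, "allergens": {"shellfish"}},
    {"name": "Beef bulgogi", "cuisine": "Asian", "minutes": 40, "allergens": set()},
    {"name": "Lasagne", "cuisine": "Western", "minutes": 75, "allergens": set()},
    {"name": "Miso salmon", "cuisine": "Asian", "minutes": 35, "allergens": set()},
    {"name": "Tofu stir-fry", "cuisine": "Asian", "minutes": 20, "allergens": set()},
]

def plan_dinners(recipes, n, cuisine, excluded_allergens, max_minutes):
    """Return up to n recipes satisfying the shopper's stated constraints."""
    matches = [
        r for r in recipes
        if r["cuisine"] == cuisine
        and r["minutes"] <= max_minutes
        and not (r["allergens"] & excluded_allergens)  # no banned allergens
    ]
    return matches[:n]

plan = plan_dinners(RECIPES, n=5, cuisine="Asian",
                    excluded_allergens={"shellfish"}, max_minutes=45)
for recipe in plan:
    print(recipe["name"])
```

A production agent would layer budget tracking, quantity sizing, and cart assembly on top of this filtering step, but the constraint logic is the part that keeps an allergen out of the basket.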

This retail agentic AI capability aligns with regional behaviours, as many APAC households cook frequently and shop fresh. AI agents that recognise local cuisines – such as Korean banchan, Japanese bentos, and Indian spice bases – fit regional habits better than generic Western meal plans.

“In many APAC markets, shopping is already deeply integrated with digital wallets, messaging apps, ride-hailing, and delivery ecosystems, making it easier for agentic AI to plug into daily routines,” explains Dandey.

“Nevertheless, some key challenges need to be overcome: ensuring private data sharing consent, minimising hallucinations in terms of allergens and ingredients, and implementing proper localisation of the system with language nuance.”

See also: DBS pilots system that lets AI agents make payments for customers

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Exploring AI in the APAC retail sector appeared first on AI News.

How Formula E uses Google Cloud AI to meet net zero targets
https://www.artificialintelligence-news.com/news/how-formula-e-uses-google-cloud-ai-to-meet-net-zero-targets/
Mon, 26 Jan 2026 14:38:53 +0000

Formula E is using Google Cloud AI to meet its net zero targets by driving efficiency across its global logistics and commercial operations. As part of an expanded multi-year agreement, the electric racing series will integrate Gemini models into its ecosystem to support performance analysis, back-office workflows, and event logistics.

The collaboration demonstrates how sports organisations are utilising cloud infrastructure to drive tangible business outcomes, rather than just securing surface-level sponsorship. The partnership focuses on optimising business operations, ranging from race management to the fan experience.

Operational twins and carbon data to achieve net zero targets

While marketing visibility often drives sports partnerships, this agreement builds on a technical foundation first formalised in January 2025. The elevation to “Principal Partner” involves Formula E adopting Google Cloud technologies for business-critical functions.

The immediate application involves optimising the complex logistics of a global championship. Advanced AI modelling of the back office and the creation of race and event digital twins allow the organisation to simulate and optimise site builds virtually.

This application directly affects Scope 3 emissions. The capability to plan infrastructure virtually minimises the need for physical on-site reconnaissance and reduces the transport of heavy equipment.

For a championship that has been certified net zero carbon since its inception – the only sport with that distinction – maintaining this status requires finding efficiencies in the supply chain. The digital twin approach delivers a quantifiable reduction in the operational carbon footprint while maintaining performance.

Beyond logistical modelling, the Google Cloud AI partnership extends into the workforce productivity layer. Formula E is deploying Google Workspace with Gemini AI to enable greater agility and efficiency across its organisation.

The organisation intends to use these tools to accelerate performance and deliver faster operations. This reflects a broader trend where generative AI tools are provisioned to reduce administrative latency in distributed workforces.

The viability of these implementations to achieve net zero targets is supported by previous collaborative projects. Formula E recently utilised Google’s AI Studio and Gemini models to execute the ‘Mountain Recharge’ initiative.

Engineers used the models to map an optimal route for the GENBETA car during a mountain descent. The AI identified and analysed specific braking zones, calculating the regenerative braking required to harvest enough energy for the car to subsequently complete a full lap of the Monaco circuit.

This specific use case demonstrates how high-dimensional data – including topography, friction, and energy consumption – can be processed to define physical execution.
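As a rough illustration of the underlying arithmetic, the recoverable energy on a descent can be approximated from gravitational potential energy. All figures below (car mass, height drop, recovery efficiency, per-lap consumption) are assumptions for the sketch, not Formula E or Google data:

```python
def regen_energy_kwh(mass_kg, descent_m, recovery_efficiency, g=9.81):
    """Gravitational potential energy released on the descent,
    scaled by how much of it the powertrain can actually recover."""
    joules = mass_kg * g * descent_m * recovery_efficiency
    return joules / 3.6e6  # joules -> kilowatt-hours

# Illustrative assumptions: ~900 kg car, 1,500 m total descent, 60% recovery.
harvested = regen_energy_kwh(mass_kg=900, descent_m=1500, recovery_efficiency=0.6)
per_lap_kwh = 2.0  # assumed energy to complete one lap of a street circuit
print(f"Harvested ~{harvested:.2f} kWh, roughly {harvested / per_lap_kwh:.1f} laps' worth")
```

The real exercise is harder than this: the AI had to choose where to brake along the route given topography and friction, not just total the potential energy. But the sketch shows why a long descent can plausibly bank a lap's worth of charge.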

Using Google Cloud AI to enhance Formula E’s data product

The partnership also addresses the commercial requirement to retain and grow a digital audience. Formula E has integrated a ‘Strategy Agent’ into its live broadcasts. This tool processes real-time data to provide viewers with tailored insights and predictions regarding race strategy and driver performance.

Millions of viewers have utilised these insights, which explain complex race dynamics as they unfold. This mirrors the enterprise challenge of observability (i.e., taking vast streams of real-time technical data and synthesising them into understandable narratives for stakeholders).

Beyond helping to achieve net zero targets, the leadership at both organisations frames this expansion as a necessary evolution of their technical stack.

Jeff Dodds, CEO of Formula E, said: “Our expanded partnership with Google Cloud is a true game-changer for Formula E and for motorsport as a whole. We are already pushing the boundaries of technology in sport, and this Principal Partnership confirms our vision.

“The integration of Google Cloud’s AI capabilities will unlock a new dimension of real-time performance optimisation and strategic decision-making, both for the Championship and for our global broadcast audience. This collaboration will redefine how fans experience our races and set a new benchmark for technology integration in sport worldwide.”

Tara Brady, President of Google Cloud EMEA, added: “Formula E is a hub of innovation, where milliseconds can define success. This expanded partnership is a testament to the power of Google Cloud’s AI and data analytics, showing how our technology can deliver a competitive advantage in the most demanding scenarios.”

The progression from the initial partnership in January 2025 to this expanded scope suggests the pilot programs provided sufficient ROI to warrant a broader rollout. As organisations face pressure to balance performance with net zero targets, the use of virtual simulation to optimise physical deployment remains a high-value area for investment.

See also: Controlling AI agent sprawl: The CIO’s guide to governance

The post How Formula E uses Google Cloud AI to meet net zero targets appeared first on AI News.

Arm and the future of AI at the edge
https://www.artificialintelligence-news.com/news/arm-chips-and-the-future-of-ai-at-the-edge/
Tue, 23 Dec 2025 13:45:19 +0000

Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has been focused to date on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference tasks, is likely to be increasingly decentralised.

“The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded in these devices: in the last year alone, the company’s designs have been behind over 30 billion chips, placed in devices of every conceivable description all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means that power bills for running compute and cooling are lower, keeping the environmental footprint of the technology as small as possible.

Secondly, putting AI in local settings means latency is much lower (with latency determined by the distance between local operations and the site of the AI model). Arm points to uses like instant translation, dynamic scheduling of control systems, and features like the near-immediate triggering of safety functions – for instance in IIoT settings.
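The latency argument is easy to quantify at first order. A minimal sketch, considering only speed-of-light propagation in fibre and ignoring routing, queuing, and inference time (the distances and labels are illustrative assumptions):

```python
SPEED_OF_LIGHT_KM_S = 300_000
FIBRE_FACTOR = 2 / 3  # light in optical fibre travels at roughly two-thirds of c

def round_trip_ms(distance_km):
    """Best-case round-trip propagation delay over fibre, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

for label, km in [("on-device", 0), ("metro edge site", 50), ("remote cloud region", 5000)]:
    print(f"{label:>20}: {round_trip_ms(km):6.2f} ms round trip (propagation only)")
```

Even this best-case floor shows the gap: a continent-scale round trip costs tens of milliseconds before any computation happens, while on-device processing pays no network cost at all, which is what makes safety-triggering use cases feasible.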

Thirdly, ‘keeping it local’ means there’s no potentially sensitive data sent off-premise. The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface.

Arm silicon is optimised for power-constrained devices, making it well-suited for compute where it’s needed on the ground, the company says. The future may well be one where AI is found woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, with supply chain issues and concentrated dependencies still fresh in many policymakers’ memories from the COVID-19 pandemic.

Arm lobbies for workforce development, working at present with policy-makers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what the government there terms acceleration and innovation, while the EU leads on safety, privacy, security and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs, yet furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise to the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.

Of course, sectors already highly-regulated in terms of data practices are unlikely to experience relaxed governance in the future – the opposite is pretty much inevitable. All industries will be seeing more regulation and greater penalties for non-compliance in the years to come. However, to balance that, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe – and Scandinavia in particular – ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend that even the US hyperscalers are responding to: AWS’s low-cost, low-power Arm-based Graviton platforms exist to satisfy that exact demand.

Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may see in the next 12 to 18 months. Global AI exports, particularly from the US and the Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm can supply the big providers in these contexts (as part of their portfolios of offerings) and satisfy the rising demand for edge-based AI.

Jesaitis also sees edge AI as something of the hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been in low-power compute for mobile, it’s inherently ‘greener’. As enterprises hope to meet energy goals without sacrificing compute, Arm offers a way that combines performance with responsibility.

Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and – thanks to near-zero network latency – highly-responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”

(Image source: “Factory Floor” by danielfoster437 is licensed under CC BY-NC-SA 2.0.)


The post Arm and the future of AI at the edge appeared first on AI News.

Inside China’s push to apply AI across its energy system
https://www.artificialintelligence-news.com/news/inside-chinas-push-to-apply-ai-across-its-energy-system/
Tue, 23 Dec 2025 10:00:00 +0000

Under China’s push to clean up its energy system, AI is starting to shape how power is produced, moved, and used — not in abstract policy terms, but in day-to-day operations.

In Chifeng, a city in northern China, a renewable-powered factory offers a clear example. The site produces hydrogen and ammonia using electricity generated entirely from nearby wind and solar farms. Unlike traditional plants connected to the wider grid, this facility runs on its own closed system. That setup brings a problem as well as a benefit: renewable power is clean, but it rises and falls with the weather.

To keep production stable, the factory relies on an AI-driven control system built by its owner, Envision. Rather than following fixed schedules, the software continuously adjusts output based on changes in wind and sunlight. As reported by Reuters, Zhang Jian, Envision’s chief engineer for hydrogen energy, compared the system to a conductor, coordinating electricity supply and industrial demand in real time.

When wind speeds increase, production ramps up automatically to take full advantage of the available power. When conditions weaken, electricity use is quickly reduced to avoid strain. Zhang said the system allows the plant to operate at high efficiency despite the volatility of renewable energy.

Projects like this are central to China’s plans for hydrogen and ammonia, fuels seen as important for cutting emissions in sectors such as steelmaking and shipping. They also point to a broader strategy: using AI to manage complexity as the country adds more renewable power to its grid.

Researchers argue that AI could play a significant role in meeting China’s climate goals. Zheng Saina, an associate professor at Southeast University in Nanjing who studies low-carbon transitions, said AI can support tasks ranging from emissions tracking to forecasting electricity supply and demand. At the same time, she cautioned that AI itself is driving rapid growth in power consumption, particularly through energy-hungry data centres.

China now installs more wind and solar capacity than any other country, but absorbing that power efficiently remains a challenge. According to Cory Combs, associate director at Beijing-based research firm Trivium China, AI is increasingly seen as a way to make the grid more flexible and responsive.

That thinking was formalised in September, when Beijing introduced an “AI+ energy” strategy. The plan calls for deeper links between AI systems and the energy sector, including the development of multiple large AI models focused on grid operations, power generation, and industrial use. By 2027, the government aims to roll out dozens of pilot projects and test AI across more than 100 use cases. Within another three years, officials want China to reach what they describe as a world-leading level of AI integration in energy.

Combs said the focus is on highly specialised tools designed for specific jobs, such as managing wind farms, nuclear plants, or grid balancing, rather than general-purpose AI. This approach contrasts with the United States, where much of the investment has gone into building advanced large-language models, according to Hu Guangzhou, a professor at the China Europe International Business School in Shanghai.

One area where AI could have immediate impact is demand forecasting. Fang Lurui, an assistant professor at Xi’an Jiaotong-Liverpool University, said power grids must match supply and demand at every moment to avoid outages. Accurate forecasts of renewable output and electricity use allow operators to plan ahead, storing energy in batteries when needed and reducing reliance on coal-fired backup plants.
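A minimal sketch of that balancing decision, using a naive persistence forecast and illustrative figures (no actual grid-operator model is implied; real systems use far richer weather and demand models):

```python
def persistence_forecast(history):
    """Naive forecast: assume the next interval repeats the latest observation."""
    return history[-1]

def dispatch(demand_mw, renewables_mw):
    """Decide whether storage should cover a shortfall or absorb a surplus."""
    net_mw = demand_mw - renewables_mw
    if net_mw > 0:
        return ("discharge_battery", net_mw)  # shortfall: draw from storage
    return ("charge_battery", -net_mw)        # surplus: store it for later

# Illustrative trailing intervals: demand climbing as solar fades towards evening.
recent_demand_mw = [820, 860, 910]
recent_solar_mw = [300, 260, 150]

action, megawatts = dispatch(persistence_forecast(recent_demand_mw),
                             persistence_forecast(recent_solar_mw))
print(action, megawatts)
```

The point of better forecasting is precisely to improve on the persistence baseline here, so that storage is charged before the evening ramp rather than after a coal plant has already been spun up.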

Some cities are already experimenting. Shanghai has launched a citywide virtual power plant that links dozens of operators — including data centres, building systems, and electric vehicle chargers — into a single coordinated network. During a trial last August, the system reduced peak demand by more than 160 megawatts, roughly equivalent to the output of a small coal plant.

Combs said such systems matter because modern power generation is increasingly scattered and intermittent. “You need something very robust that is able to be predictive and account for new information very quickly,” he said.

Beyond the grid, China is also looking to apply AI to its national carbon market, which covers more than 3,000 companies in emissions-heavy industries such as power, steel, cement, and aluminium. These sectors together produce over 60% of the country’s carbon emissions. Chen Zhibin, a senior manager at Berlin-based think tank adelphi, said AI could help regulators verify emissions data, refine the allocation of free allowances, and give companies clearer insight into their production costs.

Still, the risks are growing alongside the opportunities. Studies suggest that by 2030, China’s AI data centres could consume more than 1,000 terawatt-hours of electricity each year — roughly the same as Japan’s current annual usage. Lifecycle emissions from the AI sector are projected to rise sharply and peak well after China’s 2030 emissions target.

Xiong Qiyang, a doctoral researcher at Renmin University of China who worked on one such study, said the results reflect the reality that coal still dominates China’s power mix. He warned that rapid AI expansion could complicate national climate goals if energy sources do not shift quickly enough.

In response, regulators have begun tightening rules. A 2024 action plan requires data centres to improve energy efficiency and increase their use of renewable power by 10% each year. Other initiatives encourage new facilities to be built in western regions, where wind and solar resources are more abundant.

Operators on the east coast are also testing new ideas. Near Shanghai, an underwater data centre is set to open, using seawater for cooling to cut energy and water use. The developer, Hailanyun, said the facility will draw most of its power from an offshore wind farm and could be replicated if the project proves viable.

Despite the growing energy demands of AI, Xiong argued that its overall impact on emissions could still be positive if applied carefully. Used to optimise heavy industry, power systems, and carbon markets, he said, AI may remain an essential part of China’s effort to cut emissions — even as it creates new pressures that policymakers must manage.

(Photo by Matthew Henry)

See also: Can China’s chip stacking strategy really challenge Nvidia’s AI dominance?

The post Inside China’s push to apply AI across its energy system appeared first on AI News.

Mining business learnings for AI deployment
https://www.artificialintelligence-news.com/news/mining-ai-gives-businesses-food-for-thought-in-real-life-deployments-of-oi/
Tue, 16 Dec 2025 12:31:59 +0000

Mining conglomerate BHP describes AI as the way it’s turning operational data into better day-to-day decisions. A blog post from the company highlights the analysis of data from sensors and monitoring systems to spot patterns and flag issues for plant machinery, giving choices to decision-makers that can improve efficiency and safety – plus reduce environmental impact.

For business leaders at BHP, the useful question was not “Where can we use AI?” but “Which decisions do we make repeatedly, and what information would improve them?”

Portfolio not showcase

BHP describes the end-to-end effects of AI on operations, or as it puts it, “from mineral extraction to customer delivery.” Leaders decided to move beyond pilot rollouts, treating AI as an operational capability, and started with a small set of problems that affected the company’s performance – places where change could be measured in results.

The company found it could avoid unplanned machinery downtime, and it tightened its energy and water use. Each use case – addressing a small but impactful problem – was given an owner and an accompanying KPI. Results were reviewed with the same regularity used for other operational performance monitoring elsewhere in the company.

Where BHP uses AI daily

In addition to focusing on areas such as predictive maintenance and energy optimisation, BHP considered using AI in more adventurous yet important operations, such as autonomous vehicles and real-time staff health monitoring. Such categories translate well to other asset-heavy environments across logistics, manufacturing, and heavy industry.

Predictive maintenance

Predictive maintenance is the process of planning repairs in scheduled downtime to reduce unexpected failures and costly, unplanned stoppages. Here, AI models analyse equipment data from on-board sensors and can anticipate maintenance needs. This cuts breakdown numbers and reduces equipment-related safety incidents. BHP runs predictive analytics across most of its load-and-haul fleets and its materials handling systems. A central maintenance centre provides real-time and longer-range indications of machine health and potential failure or degradation.

Prediction has become an integral part of its machinery-heavy operations; previously, such information was presented as ‘just another’ report that could get lost in company bureaucracy. Now, models define thresholds that trigger actions sent directly to maintenance planning teams.
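A hedged sketch of what threshold-to-action triggering can look like; the sensor names, limits, and work-order fields are assumptions for illustration, not BHP's actual system:

```python
# Illustrative per-sensor alert limits; real systems derive these from
# model output and equipment history rather than fixed constants.
THRESHOLDS = {
    "bearing_temp_c": 95.0,
    "vibration_mm_s": 7.1,
}

def evaluate(reading):
    """Return a work order for each sensor value that breaches its threshold."""
    orders = []
    for sensor, limit in THRESHOLDS.items():
        value = reading.get(sensor)
        if value is not None and value > limit:
            orders.append({
                "machine": reading["machine"],
                "sensor": sensor,
                "value": value,
                "action": "schedule_inspection",
            })
    return orders

print(evaluate({"machine": "haul-truck-42",
                "bearing_temp_c": 101.3,
                "vibration_mm_s": 4.0}))
```

The design point the article makes is that the output of `evaluate` goes straight into the planning queue, rather than into a report someone may or may not read.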

Energy and water optimisation

At its Escondida facilities in Chile, the company reports savings of more than three gigalitres of water and 118 gigawatt-hours of energy over two years, attributing the gains directly to AI. The technology gives operators real-time options and analytics that identify anomalies and automate corrective actions at multiple facilities, including concentrators and desalination plants.

The lesson it has learned is to place AI where decisions happen: when operators and control teams can act on recommendations in real time, improvements compound. Conversely, periodic reporting means decisions are only taken if staff both see the data and then decide action is necessary. The real-time nature of the analysis, and the use of triggers-to-action, means the difference quickly becomes apparent.

Autonomy and remote operations

BHP is also using more advanced technologies like AI-supported autonomous vehicles and machinery. These are higher-risk areas, and the tech has been found to reduce worker exposure to risk and cut the human error factor in incidents. Complex operational data flows from remote facilities through regional centres; without AI and analytics, staff could not optimise every decision in the way the software achieves.

The use of AI-integrated wearables is increasing in many industries, including engineering, utilities, manufacturing, and mining. BHP leads the way in protecting its staff, who often work in very challenging conditions. Wearables can monitor personal conditions, reading heart rate and fatigue indicators, and provide real-time alerts to supervisors. One example might be ‘smart’ hard-hat sensor technology, used by BHP at Escondida, which measures truck driver fatigue by analysing drivers’ brain waves.

A plan leaders can run

Regardless of industry, decision-makers can draw lessons from BHP's experience of deploying AI at the (literal) coal-face. The following plan could help leaders shape their own strategies for applying AI to operational problem areas:

  1. Choose one reliability problem and one resource-efficiency problem that operations teams already track, then attach a KPI.
  2. Map the workflow: who will see the output, and what action can they take?
  3. Put basic governance in place for data quality and model monitoring, then review performance alongside operational KPIs.
  4. Start with decision support in higher-risk processes, and automate only after teams validate controls.
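For teams that want to track such pilots formally, the four steps above can be sketched as a simple record. The field names and example values here are illustrative assumptions, not a standard schema:

```python
# Illustrative sketch of the four-step plan as a pilot-tracking record.
# Field names and example values are assumptions, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AIPilot:
    problem: str                  # step 1: a problem teams already track
    kpi: str                      # step 1: the KPI attached to it
    output_owner: str             # step 2: who sees the model output
    actions: list = field(default_factory=list)             # step 2: what they can do
    governance_checks: list = field(default_factory=list)   # step 3
    automated: bool = False       # step 4: decision support comes first

pilot = AIPilot(
    problem="unplanned conveyor stoppages",
    kpi="mean time between failures (hours)",
    output_owner="maintenance planning team",
    actions=["schedule inspection", "reorder spares"],
    governance_checks=["data quality audit", "monthly model drift review"],
)
print(pilot.automated)  # stays False until teams validate controls
```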

(Image source: “Shovel View at a Strip Mining Coal” by rbglasson is licensed under CC BY-NC-SA 2.0.)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Mining business learnings for AI deployment appeared first on AI News.

]]>
New data centre projects mark Anthropic’s biggest US expansion yet https://www.artificialintelligence-news.com/news/new-data-centre-projects-mark-anthropic-biggest-us-expansion/ Thu, 13 Nov 2025 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=110529 New US data centre projects in Texas and New York will receive $50 billion in new funding, part of a plan to grow US computing capacity for advanced AI work. The facilities, built with Fluidstack, are designed for Anthropic’s systems and will focus on power and efficiency needs that come with training and running large […]

The post New data centre projects mark Anthropic’s biggest US expansion yet appeared first on AI News.

]]>
New US data centre projects in Texas and New York will receive $50 billion in new funding, part of a plan to grow US computing capacity for advanced AI work. The facilities, built with Fluidstack, are designed for Anthropic’s systems and will focus on power and efficiency needs that come with training and running large models across these data centre sites.

Fluidstack provides large GPU clusters to companies such as Meta, Midjourney, and Mistral. The partnership reflects a wider push across the tech industry this year, as many firms increase spending on US infrastructure while the Trump administration urges companies to build and invest inside the country. These moves show how much demand there is for US data centre capacity as AI workloads grow.

In January, President Donald Trump instructed his administration to craft an AI Action Plan aimed at making “America the world capital in artificial intelligence.” Several firms later outlined major AI and energy spending plans during Trump’s tech and AI summit in July, many of which involved expanding US data centre operations or securing more compute across the country.

The new sites are expected to bring about 800 full-time jobs and 2,400 construction jobs. They are scheduled to come online in phases through 2026 and are meant to support the goals in the AI Action Plan by strengthening domestic compute resources. Company leaders say they want these projects to create stable jobs and improve America’s position in AI research by adding more US data centre capacity.

The investment also comes at a time when lawmakers are paying closer attention to where high-end compute capacity sits and how much of it stays in the US. Anthropic’s growing US data centre footprint places the company among the largest builders of physical AI infrastructure in the country, reinforcing the push to keep more AI development rooted in the US rather than overseas.

“We’re getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren’t possible before. Realising that potential requires infrastructure that can support continued development at the frontier,” said Dario Amodei, CEO and co-founder of Anthropic. “These sites will help us build more capable AI systems that can drive those breakthroughs, while creating American jobs.”

Anthropic’s move comes as OpenAI builds out its own network. The ChatGPT maker has secured more than $1.4 trillion in long-term commitments through partners such as Nvidia, Broadcom, Oracle, and major cloud platforms like Microsoft, Google, and Amazon. The scale of these plans has raised questions about whether the US power grid and related industries can support such rapid expansion, especially as more firms compete for space, energy, and equipment tied to US data centre growth.

Anthropic says its growth has been driven by its technical staff, its focus on safety work, and its research on alignment and interpretability. Claude is now used by more than 300,000 business customers, and the number of large accounts—those producing more than $100,000 in yearly revenue—has grown almost seven times over the past year.

Internal projections reported by The Wall Street Journal suggest the company expects to break even by 2028. OpenAI, by comparison, is said to be projecting $74 billion in operating losses that same year. To keep up with rising demand, Anthropic chose Fluidstack to build facilities tailored to its hardware needs, pointing to the company’s speed and its ability to deliver large-scale power capacity on tight timelines.

“We selected Fluidstack as our partner for its ability to move with exceptional agility, enabling rapid delivery of gigawatts of power,” an Anthropic leader said. Gary Wu, co-founder and CEO of Fluidstack, added: “Fluidstack was built for this moment. We’re proud to partner with frontier AI leaders like Anthropic to accelerate and deploy the infrastructure necessary to realise their vision.”

Anthropic says this level of spending is needed to support fast-rising usage while keeping its research momentum. The company also plans to focus on cost-efficient ways to scale.

Earlier this fall, the firm was valued at $183 billion. It is backed by Alphabet and Amazon, and a separate 1,200-acre data centre campus built for Anthropic by Amazon in Indiana is already in operation. That $11 billion site is running today, while many others in the sector are still in planning stages. Anthropic has also expanded its compute arrangement with Google by tens of billions of dollars.

These developments come as the federal government’s role in AI infrastructure funding becomes more contested. Last week, OpenAI asked the Trump administration to broaden a CHIPS Act tax credit so it would cover AI data centres and grid equipment such as transformers, according to a letter reported by Bloomberg. The request followed criticism over earlier comments from CFO Sarah Friar, who mentioned the possibility of a government “backstop” for the company’s compute deals. OpenAI has since stepped away from the idea, but the episode highlighted ongoing uncertainty over how America’s AI infrastructure will be financed — and who will pay for it.

(Photo by Scott Blake)

See also: Google reveals its own version of Apple’s AI cloud


The post New data centre projects mark Anthropic’s biggest US expansion yet appeared first on AI News.

]]>
How AI is changing the way we travel https://www.artificialintelligence-news.com/news/how-ai-is-changing-the-way-we-travel/ Tue, 07 Oct 2025 11:00:00 +0000 https://www.artificialintelligence-news.com/?p=109755 AI is reshaping how people plan and experience travel. From curated videos on Instagram Reels to booking engines that build entire itineraries in seconds, AI is becoming a powerful force in how journeys are imagined, booked, and lived. But this shift raises an important question: is AI giving travellers more freedom, or quietly steering their […]

The post How AI is changing the way we travel appeared first on AI News.

]]>
AI is reshaping how people plan and experience travel. From curated videos on Instagram Reels to booking engines that build entire itineraries in seconds, AI is becoming a powerful force in how journeys are imagined, booked, and lived. But this shift raises an important question: is AI giving travellers more freedom, or quietly steering their choices?

Speaking to AI News, Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit, believes AI can do both. In a wide-ranging conversation, he explained how AI is transforming travel discovery, personalisation, cultural exchange, and ethics—and why the industry must set clear guardrails as technology takes on a more active role.

AI as a travel companion

AI is changing how people discover destinations. Instead of generic travel lists, platforms now serve content that feels personal. “AI has turned travel discovery into a personal canvas,” Hamidaddin said. “Platforms like Instagram Reels no longer just show ‘where to go’; they curate journeys that feel tailor-made for each traveller.”

Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit

This shift is not just about convenience. By highlighting lesser-known destinations, AI can spread demand and ease pressure on crowded tourist spots. It can also introduce travellers to authentic local experiences that might otherwise remain hidden.

Hamidaddin sees the next phase as “agentic AI”—technology that doesn’t just make suggestions but takes action. He described a future where AI automatically rebooks flights disrupted by weather, adjusts itineraries, and reschedules reservations in real time. “That’s frictionless travel—where the logistics fade and the adventure takes centre stage,” he said.

AI personalisation vs. algorithmic influence in travel

AI-driven booking engines promise hyper-personalised recommendations, matching experiences to individual interests and budgets. This can make planning smoother and more inspiring, but it also comes with risks.

“They do both,” Hamidaddin said when asked whether AI empowers travellers or guides them without their knowledge. “AI can empower travellers like never before—matching experiences to passions, budgets, and even moods. But unchecked, algorithms can quietly narrow horizons, nudging people toward predictable options. This risk only increases with agentic AI, which will make decisions on travellers’ behalf. That’s why transparency and accountability are non-negotiable. AI should be a compass, not a cage, and travellers must always hold the final word.”

Trust and transparency

The balance between personalisation and privacy will shape the next era of travel. As AI systems collect vast amounts of personal data, travellers are more aware of how their preferences, clicks, and searches are used. Hamidaddin stressed that trust is the foundation.

“The era of hyper-personalisation must be built on trust. Travellers know their data is powerful, and they’re right to ask how it’s being used,” he said. The solution, in his view, is “radical transparency: explicit consent, clear explanations, and real opt-in choices.”

Agentic AI, which can act on a traveller’s behalf, makes this even more important. If algorithms are booking, adjusting, or cancelling plans automatically, travellers need clear ways to control and understand these actions. “True innovation doesn’t just customise the journey; it safeguards the traveller’s confidence and autonomy,” he added.

Setting standards through TOURISE

Hamidaddin will lead discussions on these topics at the inaugural TOURISE Summit in Riyadh this November. He sees the summit as a chance to shape global standards for AI use in travel, not just showcase technology.

“The TOURISE is designed to be more than an event—it’s the world’s first platform where government, business, and technology leaders unite to shape travel tech responsibly,” he said. His goals include creating a shared ethical framework for AI, encouraging partnerships to address privacy and workforce challenges, promoting sustainability, and training the global tourism workforce to thrive in an AI-driven industry.

“TOURISE must set a new benchmark: innovation with integrity,” he said.

Cultural exchange and economic growth

AI’s influence goes beyond logistics. It is also changing cultural exchange and economic development, particularly in Saudi Arabia. “AI is dissolving barriers—linguistic, cultural, and economic. It’s curating authentic connections that go beyond sightseeing into meaningful exchange,” Hamidaddin said.

He explained how Saudi Arabia is using AI to highlight cultural and historical treasures like AlUla and Diriyah, while supporting artisans, festivals, and small businesses. Agentic AI will help create smoother travel experiences that allow visitors to focus more on culture and less on planning.

“This isn’t just about more visitors; it’s about inclusive growth, mutual respect, and shared prosperity,” he said. By 2030, AI is expected to contribute $135 billion to Saudi Arabia’s GDP, with tourism playing a central role. But for Hamidaddin, the real impact is measured in “bonds between people.”

Ethical guardrails for AI in travel

As AI systems take on more responsibility, clear ethical standards become essential. Hamidaddin outlined several priorities: making AI usage clear to users, regularly auditing algorithms for bias, giving travellers control over their data, and designing systems that promote cultural diversity and accessibility.

“With agentic AI, the stakes rise: when an AI acts on a traveller’s behalf, we must ensure transparency, explainability, and accountability. Agency must never replace autonomy,” he said.

Innovation with ethics

The debate isn’t about whether to adopt AI, but how to do so responsibly. Hamidaddin argues that innovation should align with human values and environmental priorities. “It’s not about chasing every shiny new tool; it’s about aligning innovation with human values and planetary needs,” he said.

He believes governments, businesses, communities, and travellers must collaborate to agree on shared principles. Agentic AI makes this even more urgent, as decisions may increasingly be made by machines. “Our job is to ensure technology serves people, not the other way around,” he added.

A new era for travel

Hamidaddin is optimistic about what lies ahead. “What excites me most is that travel is becoming transformative again,” he said. He imagines a future where language barriers disappear, itineraries adapt in real time, and every trip supports local communities.

In Saudi Arabia, platforms like “Spirit of Saudi” are already using AI to showcase authentic experiences, from desert adventures to artisan workshops. The next step is agentic journeys, where AI travel companions handle logistics seamlessly, freeing travellers to focus on discovery and connection.

“At TOURISE, I believe we’re not simply shaping tourism’s future—we’re igniting a new era of connection and shared prosperity across the globe,” he said.

(Photo by S O C I A L . C U T)

See also: AI causes reduction in users’ brain activity – MIT


The post How AI is changing the way we travel appeared first on AI News.

]]>
Rising AI demands push Asia Pacific data centres to adapt, says Vertiv https://www.artificialintelligence-news.com/news/rising-ai-demands-push-asia-pacific-data-centres-to-adapt/ Tue, 30 Sep 2025 08:15:55 +0000 https://www.artificialintelligence-news.com/?p=109635 As more companies in Asia Pacific adopt artificial intelligence to boost their operations, the pressure on data centres is growing fast. Traditional facilities, built for earlier generations of computing, are struggling to keep up with the heavy energy use and cooling demands of modern AI systems. By 2030, GPU-driven workloads could push rack power densities […]

The post Rising AI demands push Asia Pacific data centres to adapt, says Vertiv appeared first on AI News.

]]>
As more companies in Asia Pacific adopt artificial intelligence to boost their operations, the pressure on data centres is growing fast. Traditional facilities, built for earlier generations of computing, are struggling to keep up with the heavy energy use and cooling demands of modern AI systems. By 2030, GPU-driven workloads could push rack power densities toward 1 MW, making incremental upgrades no longer enough. Instead, operators are now turning toward purpose-built “AI factory” data centres that are designed from the ground up.

AI News spoke with Paul Churchill, Vice President of Vertiv Asia, to better understand how the region is preparing for this shift and what kinds of infrastructure changes lie ahead.

Explosive market growth is setting the pace

The AI data-centre market is projected to surge from $236 billion in 2025 to nearly $934 billion by 2030. This growth is driven by rapid adoption of AI in industries like finance, healthcare, and manufacturing. These sectors rely on high-performance computing environments powered by dense GPU clusters, which require far more energy and cooling capacity than traditional servers.

In Asia Pacific, this demand is amplified by government investments in digitalisation, the expansion of 5G, and the rollout of cloud-native and generative AI applications. All of this is pushing compute needs higher at a pace the region has never seen before.

Churchill explained that meeting this demand requires more than just larger facilities. It calls for smarter infrastructure strategies that are scalable and sustainable. “Infrastructure leaders must move beyond piecemeal upgrades. A future-ready strategy involves adopting AI-optimised infrastructure that combines high-capacity power systems, advanced thermal management, and integrated, scalable designs,” he said.

Cooling and power challenges are rising

As rack densities increase from 40 kW to 130 kW, and potentially up to 250 kW by 2030, cooling and power delivery are becoming critical constraints. Traditional air cooling methods are no longer sufficient in these conditions.
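The arithmetic behind that claim is straightforward: the airflow needed to remove a rack's heat scales linearly with its power draw. A back-of-envelope sketch, using standard air properties and an assumed 12 K supply/return temperature difference:

```python
# Back-of-envelope airflow to air-cool a rack: Q = P / (rho * cp * dT).
# Air properties are standard approximations; the 12 K delta-T is an assumption.
def airflow_m3_per_s(power_w: float, delta_t_k: float = 12.0) -> float:
    rho = 1.2     # kg/m^3, air density at ~20 C
    cp = 1005.0   # J/(kg*K), specific heat of air
    return power_w / (rho * cp * delta_t_k)

for kw in (40, 130, 250):
    q = airflow_m3_per_s(kw * 1000)
    print(f"{kw:>3} kW rack needs roughly {q:.1f} m^3/s of air")
```

A 40 kW rack already needs several cubic metres of air per second; at 130 kW and beyond, the volumes become impractical to move through a rack, which is why direct-to-chip liquid cooling enters the picture.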

To address this, Vertiv is developing hybrid cooling systems that mix direct-to-chip liquid cooling with air-based solutions. These systems can adjust to changing workloads, reduce energy use, and maintain reliability. “Our coolant distribution units enable direct-to-chip liquid cooling while ensuring reliability and serviceability in high-density environments,” Churchill said.

Paul Churchill, Vice President of Vertiv Asia

Power delivery is also becoming more complex. AI workloads fluctuate rapidly, so infrastructure needs to react in real time. Vertiv is evolving its rack power distribution units and busway systems to handle higher voltages and improve load balancing. Intelligent monitoring helps operators manage loads more efficiently, reduce wasted capacity, and extend uptime – a key consideration in parts of Southeast Asia where power grids are less stable.

Data centres are being redesigned for AI

The rise of liquid-cooled GPU pods and 1 MW racks, like those planned by AMD and hyperscalers such as Microsoft, Google, and Meta, signals a deeper architectural shift. Instead of retrofitting older facilities, new data centres are being designed specifically to support AI.

“The future of data-centre architecture is hybrid, and these infrastructures require facilities to be built around liquid flow,” Churchill said. This includes new floor layouts, advanced coolant distribution, and more sophisticated power systems.

The next-generation facilities will integrate cooling, power, and monitoring from the chip level to the grid. For Asia Pacific, where hyperscale campuses are expanding rapidly, this kind of integrated design is essential to keep up with performance expectations and sustainability goals.

From incremental upgrades to AI factory data centres

By 2030, Asia Pacific is expected to overtake the US in data centre capacity, reaching almost 24 GW of commissioned power. To handle this growth, enterprises are moving away from ad hoc upgrades toward full-stack AI factory data centres.

Churchill said this transition should happen in stages. The first step is integrated planning, bringing together power, cooling, and IT management rather than treating them as separate systems. The approach simplifies deployment and provides a strong base for scaling.

The second step is to adopt modular and prefabricated systems. These allow companies to add capacity in phases without major disruptions. “Companies can deploy factory-tested modules alongside existing infrastructure, gradually migrating workloads to AI-ready capacity without disruptive overhauls,” he said.

Finally, sustainability must be built into every stage. This includes using lithium-ion energy storage, grid-interactive UPS systems, and higher-voltage distribution to improve efficiency and resilience.

DC power gains new relevance for AI data centres

Vertiv recently introduced PowerDirect Rack, a DC power shelf designed for AI and high-performance computing. Switching to DC power can cut energy losses by reducing the number of conversion steps between the grid and the server. It also aligns with renewable energy and battery storage systems, which are becoming more common in Asia Pacific.
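The efficiency case is multiplicative: every conversion stage between the grid and the server loses a few percent, and those losses compound. A rough sketch with an illustrative stage count and a 97 percent per-stage efficiency; these are assumptions for illustration, not Vertiv's measured figures:

```python
# Chained conversion efficiency: overall = product of stage efficiencies.
# Stage counts and the 97%-per-stage figure are illustrative assumptions.
def chain_efficiency(stages: int, per_stage: float = 0.97) -> float:
    return per_stage ** stages

ac_path = chain_efficiency(4)  # e.g. rectifier, inverter, PSU, VRM
dc_path = chain_efficiency(2)  # fewer conversion steps grid-to-server
print(f"AC path: {ac_path:.1%}, DC path: {dc_path:.1%}")
```

Cutting two stages at these figures recovers several percentage points of delivered power, which at gigawatt-scale campuses is a material saving.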

This is especially useful in energy-constrained markets like Vietnam and the Philippines. In these regions, flexible power solutions are essential to keep facilities running smoothly. As Churchill noted, DC power is “not just an efficiency play – it is a strategy for enabling sustainable scalability.”

Sustainability is becoming a central priority

With AI driving up energy use, data-centre operators are facing stricter regulations and rising grid constraints. This is particularly true in Southeast Asia, where power reliability and tariffs vary widely.

Vertiv is working with operators to integrate alternative energy sources like lithium-ion batteries, hybrid power systems, and microgrids. These can reduce dependence on the grid and improve resilience. There is also growing interest in solar-backed UPS systems and advanced energy storage technologies, which help balance loads and manage costs.

Cooling efficiency is another major focus. Hybrid liquid cooling systems can reduce both energy and water use compared to older methods. “Our focus is on delivering infrastructure that meets performance demands while aligning with ESG goals,” Churchill said. “We’re collaborating with our partners to ensure that AI-driven growth in the region remains responsible, sustainable, and aligned with long-term digital and environmental objectives.”

Modular solutions support rapid expansion

Many emerging economies in Asia Pacific face challenges like limited land, unstable power supply, and shortages of skilled labour. In these settings, modular and prefabricated data-centre systems offer a practical solution.

Prefabricated modules can cut deployment times by up to 50%, while improving energy efficiency and scalability. They allow operators to expand gradually, adding capacity as needed without heavy upfront investment. The flexibility is especially valuable for AI workloads, which can grow quickly and unpredictably.

By combining compact design with energy-efficient operation, modular systems give operators a way to build AI-ready capacity faster and with less risk – a crucial advantage as the region’s digital economies grow.

Preparing for a demanding future

The AI surge is reshaping how data centres are built and operated in Asia Pacific. As workloads intensify and sustainability pressures mount, companies can no longer rely on outdated infrastructure. The move toward AI factory data centres, powered by advanced cooling, DC power, and modular systems, reflects a shift in how the region is preparing for the next era of computing.

(Photo by İsmail Enes Ayhan)


The post Rising AI demands push Asia Pacific data centres to adapt, says Vertiv appeared first on AI News.

]]>
Shah Muhammad, Sweco: How AI is building the future of our cities https://www.artificialintelligence-news.com/news/shah-muhammad-sweco-how-ai-is-building-future-of-our-cities/ Thu, 31 Jul 2025 16:44:13 +0000 https://www.artificialintelligence-news.com/?p=107258 Shah Muhammad, who leads AI Innovation at the design and engineering firm Sweco, offers his insights into how AI is building the cities of the future. Ever been stuck in traffic and thought, “Surely, there’s a better way to design this city?” Or walked past a giant new building and wondered if it would be […]

The post Shah Muhammad, Sweco: How AI is building the future of our cities appeared first on AI News.

]]>
Shah Muhammad, who leads AI Innovation at the design and engineering firm Sweco, offers his insights into how AI is building the cities of the future.

Ever been stuck in traffic and thought, “Surely, there’s a better way to design this city?” Or walked past a giant new building and wondered if it would be an energy-guzzling monster?

For decades, building our towns and cities has been a slow, complicated process, often relying on educated guesswork. But what if we could give city planners superpowers? What if they could test-drive a dozen different futures before a single shovel hits the ground?

That’s exactly what’s starting to happen. And the secret ingredient is AI.

Shah Muhammad, who leads AI Innovation at the design and engineering firm Sweco

“AI is revolutionising urban design and infrastructure planning at Sweco by optimising processes, enhancing decision-making, and improving sustainability outcomes,” Shah explains. “It allows us to analyse vast amounts of data, simulate various scenarios, and create more efficient and resilient urban environments.”

Shah says AI gives his team the ability to ask the big questions that will shape people’s lives when designing the cities of the future: “What’s the smartest way to build this neighbourhood to cut down on traffic jams and pollution? How can we design a building that stays cool in a heatwave without huge electricity bills?” The AI can run the numbers on thousands of possibilities to find the best path forward.

Of course, the real world is messy. It’s not a neat and tidy computer simulation. It’s full of unpredictable weather, unexpected delays, and the beautiful chaos of human life. This is the number one headache.

“The biggest challenge in applying data-driven models to physical environments is the complexity and variability of real-world conditions,” Shah says. “Ensuring that models accurately represent these conditions and can adapt to changing conditions is crucial.”

So, how do they deal with that? They start with the basics. They get their house in order. Before they even think about AI, they make sure the information it learns from is rock-solid and trustworthy.

“To ensure data quality and interoperability across projects, we implement rigorous data governance practices, standardise data formats, and use interoperable software tools,” he says.

That might sound a bit technical, but think of it this way: they’re making sure everyone on the team is singing from the same hymn sheet. When all the different software tools can talk to each other and everyone trusts the information, the AI can do its job properly. It “enables seamless data exchange and collaboration among different teams and stakeholders.”

But of all the things AI can do, this next part might be the most hopeful when using it to design future cities. It shows that this technology can have a real heart.

“There are many projects where AI has made a measurable impact on sustainability, making it hard to single out one,” he reflects. “However, if I were to choose, I would highlight a project where AI was used to preserve biodiversity by identifying endangered species and providing this information to researchers.”

In this scenario, technology is giving nature a voice in the planning meeting. It’s like the AI raising its hand and saying, “Hang on, let’s be careful here, there’s a family of rare birds living in this area.” It allows us to build with respect for the world around us.

So, what’s the next chapter? According to Shah, it’s about turning that crystal ball into a real-time guide. 

“According to me, the biggest opportunity for AI in the AEC sector lies in predictive analytics and automation,” Shah explains. “By anticipating future trends, identifying potential issues early, and automating routine tasks, AI can greatly enhance efficiency, reduce costs, and improve the overall quality of projects.”

This could mean safer bridges, roads that need fewer repairs, and less disruption to our lives. It means freeing up talented people from the boring tasks to focus on building the cities of the future that are more in tune with the people who call them home.

Shah Muhammad is speaking at AI & Big Data Expo Europe in Amsterdam on 24-25 September 2025 where he will be hosting a presentation on ‘Leveraging Generative and Agentic AI for Intelligent Process Automation’. Find out more about the event and how to attend here.

See also: Zuckerberg outlines Meta’s AI vision for ‘personal superintelligence’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Shah Muhammad, Sweco: How AI is building the future of our cities appeared first on AI News.

]]>
Study finds AI can slash global carbon emissions https://www.artificialintelligence-news.com/news/study-finds-ai-slash-global-carbon-emissions/ Wed, 02 Jul 2025 16:01:40 +0000 https://www.artificialintelligence-news.com/?p=106998 A study from the London School of Economics and Systemiq suggests it’s possible to cut global carbon emissions without giving up modern comforts—with AI as our ally in the climate fight. According to the duo’s research, smart AI applications in just three industries could slash greenhouse gas emissions by 3.2-5.4 billion tonnes each year by […]

The post Study finds AI can slash global carbon emissions appeared first on AI News.

]]>
A study from the London School of Economics and Systemiq suggests it’s possible to cut global carbon emissions without giving up modern comforts—with AI as our ally in the climate fight.

According to the duo’s research, smart AI applications in just three industries could slash greenhouse gas emissions by 3.2-5.4 billion tonnes each year by 2035.

In contrast to much of what we’ve heard, these reductions would far outweigh the carbon that AI itself produces.

The study, ‘Green and intelligent: the role of AI in the climate transition,’ doesn’t just see AI as a tool for marginal improvements; it argues AI could help transform our entire economy into something sustainable and inclusive.

Net-zero as an opportunity, not a burden

The researchers suggest we should see the shift to a net-zero economy not as a burden but as “a great opportunity for innovation and sustainable, resilient, and inclusive economic growth.”

They focused on three of the major carbon culprits – power generation, meat and dairy production, and passenger vehicles – which together cause almost half of global emissions. The potential AI savings from just these sectors would more than cancel out the estimated 0.4 to 1.6 billion tonnes of annual emissions from running all those AI data centres.

As the authors put it, “the case for using AI for the climate transition is not only strong but imperative.”

Five big ways AI can help save our planet (and us)

1. Making complex systems smarter

Think about how our modern lives depend on intricate networks for energy, transport, and city living. AI can redesign these systems to work much more efficiently.

Remember those frustrating power outages when the wind stops blowing or clouds cover the sun? AI can help predict these fluctuations in renewable energy and balance them with real-time demand. DeepMind has already shown its AI can boost wind energy’s economic value by 20% by reducing the need for backup power sources.

2. Speeding up discovery and reducing waste

Almost half the emissions cuts needed to reach net-zero by 2050 will rely on technologies that are barely out of the lab today, and AI is turbocharging these breakthroughs.

Take Google DeepMind’s GNoME tool, which has already identified over two million new crystal structures that could revolutionise renewable energy and battery storage. Or consider how Amazon’s AI packaging algorithms have saved over three million metric tons of material since 2015.

3. Helping us make better choices

Our daily decisions – from what we eat, to how we travel – could drive up to 70% of emissions reductions by 2050. But making the right choice isn’t always easy.

AI can be our personal environmental coach, breaking down information barriers and offering tailored recommendations. Already using Google Maps’ fuel-efficient routes? That’s AI helping you cut emissions while saving on fuel costs. And those smart home systems like Nest use AI to optimise your heating and cooling, which could save millions of tonnes of CO2 if we all adopted them.

4. Predicting climate changes and policy effects

How do we plan for a changing climate? AI can process enormous datasets to forecast climate patterns with unprecedented accuracy.

Tools like IceNet (developed by the British Antarctic Survey and the Alan Turing Institute) are using AI to predict sea ice levels better than ever before, helping communities and businesses prepare. This capability also extends to helping governments design climate policies that actually work, by learning from countless case studies around the world.

5. Keeping us safe in extreme weather

As climate disasters intensify, early warning can save lives. AI-powered systems for floods and wildfires are becoming essential safety nets.

Google’s Flood Hub uses machine learning to provide flood forecasts up to five days in advance across more than 80 countries. That’s precious time for people to protect their homes and evacuate if necessary.

The numbers support AI cutting global carbon emissions

When researchers crunched the numbers, they found AI could:

  • Cut power sector emissions by 1.8 billion tonnes yearly by 2035 just by optimising renewable energy
  • Save between 0.9 and 3.0 billion tonnes annually by improving plant-based proteins to taste and feel more like meat
  • Reduce vehicle emissions by up to 0.6 billion tonnes each year through shared mobility and better battery technology
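Those three bullets line up with the study’s 3.2-5.4 billion tonne headline range. A quick back-of-the-envelope check (note: the 0.5 billion tonne lower bound for vehicles is an inference from the headline range, not a figure stated above):

```python
# Per-sector annual savings by 2035, in billions of tonnes of CO2e,
# as (low, high) bounds. The 0.5 lower bound for vehicles is an
# assumption inferred from the study's 3.2-5.4 headline range.
power = (1.8, 1.8)       # renewable-energy optimisation
food = (0.9, 3.0)        # plant-based protein improvements
vehicles = (0.5, 0.6)    # shared mobility and better batteries

low = power[0] + food[0] + vehicles[0]
high = power[1] + food[1] + vehicles[1]
print(f"Total savings: {low:.1f}-{high:.1f} billion tonnes/year")  # 3.2-5.4

# Net effect even after subtracting AI's own estimated footprint
ai_footprint = (0.4, 1.6)  # billion tonnes/year from data centres
print(f"Worst-case net saving: {low - ai_footprint[1]:+.1f} billion tonnes/year")  # +1.6
```

Even pairing the lowest savings estimate with the highest data-centre footprint leaves AI comfortably carbon-positive on these numbers.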

Here’s the catch: we can’t just sit back and let market forces determine how AI develops. The researchers call for an “active state” to ensure that AI benefits everyone and the planet.

“Governments have a critical role in ensuring that AI is deployed effectively to accelerate the transition equitably and sustainably,” they conclude.

What this means in practice is creating incentives for green AI research, regulating to minimise environmental impact, and investing in infrastructure so communities worldwide can share in the benefits.

By guiding innovation and working together internationally, we can unlock AI’s full potential to reduce global carbon emissions and tackle the climate crisis—and build a future where both people and the planet can thrive.

(Photo by Abhishek Mishra)

See also: Power play: Can the grid cope with AI’s growing appetite?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, Data Centre Expo, Digital Transformation Expo, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Power play: Can the grid cope with AI’s growing appetite? https://www.artificialintelligence-news.com/news/power-play-can-the-grid-cope-with-ais-growing-appetite/ Mon, 30 Jun 2025 15:53:38 +0000 https://www.artificialintelligence-news.com/?p=106959

The post Power play: Can the grid cope with AI’s growing appetite? appeared first on AI News.

As the AI Energy Council gathers, the question hanging in the air is: how do we power the future without blowing the grid?

The massive data centres needed to train and run the latest AI are thirsty for electricity. Data centre power use in the UK is on track to multiply six times over by 2034, at which point it could be sucking up nearly a third of the nation’s electricity. That’s a colossal strain to put on a system that was built for a completely different world, one with predictable, one-way power flows.

The AI Energy Council – a team-up of tech giants, energy firms, the Ofgem regulator, and the National Energy System Operator – has the critical job of trying to predict just how thirsty this AI beast will become. Their work is happening just as the government is pouring £2 billion into its AI Opportunities Action Plan, a grand vision for weaving AI into our hospitals, classrooms, and businesses.

UK Science and Technology Secretary Peter Kyle said: “Giving our researchers and innovators access to the processing power they need will not only maintain our standing as the world’s third-biggest AI power, but put British expertise at the heart of the AI breakthroughs which will improve our lives, modernise our public services, and spark the economic growth which is the cornerstone of our Plan for Change.

“We are clear-eyed though on the need to make sure we can power this golden era for British AI through responsible, sustainable energy sources. Today’s talks will help us drive forward that mission, delivering AI infrastructure which will benefit communities up and down the country for generations to come without ever compromising on our clean energy superpower ambitions.”

The sheer scale of the energy problem is hard to overstate. Globally, the electricity needed for data centres is expected to double in just five years, eventually demanding three times more power than the entire UK currently uses. And AI is the main culprit.

A single rack of AI servers can demand 120 kW of power, a massive leap from the 5-10 kW a normal rack needs. These aren’t steady sips of power, either. AI workloads can spike unpredictably, creating sudden, massive power surges that threaten the stability of the entire grid.

In response, the UK is planning a monumental overhaul. The centrepiece is the “Great Grid Upgrade,” a £58 billion investment designed to be a “once in a generation expansion” of the electricity network. This includes building a new high-capacity electrical superhighway running from north to south and expanding the offshore grid to bring in vast amounts of new wind power.

Ed Miliband, Secretary for Energy Security and Net Zero, commented: “We are making the UK a clean energy superpower, building the homegrown energy this country needs to get bills down for good and create new jobs as part of our Plan for Change.

“Bringing together the biggest players in AI and energy will help us discuss the role AI can play in building a new era of clean electricity for our country, and meeting the power demands of new technology as we build a clean power system for families and businesses.”

But there’s a huge roadblock. Even if we build the wind farms and solar panels, connecting them to the grid quickly enough to meet surging AI demand is another story. The current process is slow, leaving more than 600 renewable energy projects – worth billions – stuck in a queue. Some have been told they could be waiting for 15 years.

Urgent reforms are being pushed through to try and clear this backlog, a vital step if our AI future is to be powered by green energy. The government is also trying to speed things up by declaring data centres “critical national infrastructure” and setting up “AI Growth Zones” where planning and power connections can be fast-tracked.

The data centre industry is shifting from being just part of the problem to becoming part of the solution. Instead of just being passive power hogs, they are becoming active partners in the energy grid. Many are chasing Net Zero targets, investing in their own on-site renewable power, and taking part in “demand-side response” programs. This means they can intelligently pause non-urgent AI tasks when the grid is under stress and fire them up again when green energy is plentiful, helping to balance the whole system.

AI itself could also help. The same complex algorithms that demand so much power can also be used to make our grid smarter, predicting energy spikes and optimising power flow in real-time.

The way forward is clear, but it won’t be easy. The UK has the right ideas and is putting serious money on the table to address the power grid demands of AI, but everything depends on speed and execution. The grid connection jam must be broken, and the Great Grid Upgrade needs to happen at pace.

(Photo by Andreas Jabusch)

See also: Anthropic tests AI running a real business with bizarre results

Want to learn more about AI and big data from industry leaders? Check out Data Centre Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including AI & Big Data Expo, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Will the AI boom fuel a global energy crisis? https://www.artificialintelligence-news.com/news/will-the-ai-boom-fuel-a-global-energy-crisis/ Fri, 16 May 2025 16:09:38 +0000 https://www.artificialintelligence-news.com/?p=106473

The post Will the AI boom fuel a global energy crisis? appeared first on AI News.

AI’s thirst for energy is ballooning into a monster of a challenge. And it’s not just about the electricity bills. The environmental fallout is serious, stretching to guzzling precious water resources, creating mountains of electronic waste, and, yes, adding to those greenhouse gas emissions we’re all trying to cut.

As AI models get ever more complex and weave themselves into yet more parts of our lives, a massive question mark hangs in the air: can we power this revolution without costing the Earth?

The numbers don’t lie: AI’s energy demand is escalating fast

The sheer computing power needed for the smartest AI out there is on an almost unbelievable upward curve – some say it’s doubling roughly every few months. This isn’t a gentle slope; it’s a vertical climb that’s threatening to leave even our most optimistic energy plans in the dust.

To give you a sense of scale, AI’s future energy needs could soon gulp down as much electricity as entire countries like Japan or the Netherlands, or even large US states like California. When you hear stats like that, you start to see the potential squeeze AI could put on the power grids we all rely on.

2024 saw a record 4.3% surge in global electricity demand, and AI’s expansion was a big reason why, alongside the boom in electric cars and factories working harder. 

Wind back to 2022, and data centres, AI, and even cryptocurrency mining were already accounting for nearly 2% of all the electricity used worldwide – that’s about 460 terawatt-hours (TWh).

Jump to 2024, and data centres on their own use around 415 TWh, which is roughly 1.5% of the global total, and growing at 12% a year. AI’s direct share of that slice is still relatively small – about 20 TWh, or 0.02% of global energy use – but hold onto your hats, because that number is set to rocket upwards.

The forecasts? Well, they’re pretty eye-opening. By the end of 2025, AI data centres around the world could demand an extra 10 gigawatts (GW) of power. That’s more than the entire power capacity of a place like Utah.

And, by 2027, the global power hunger of AI data centres is tipped to reach 68 GW, which is almost what California had in total power capacity back in 2022. 

Towards the end of this decade, the figures get even more jaw-dropping. Global data centre electricity consumption is predicted to double to around 945 TWh by 2030, which is just shy of 3% of all the electricity used on the planet.
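Going from roughly 415 TWh in 2024 to a projected 945 TWh in 2030 implies growth somewhat faster than the current 12 percent a year. A quick sketch makes the implied rate concrete:

```python
# Implied compound annual growth rate (CAGR) from the two headline
# figures: ~415 TWh in 2024 rising to ~945 TWh in 2030.
start_twh, end_twh = 415, 945
years = 2030 - 2024

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # 14.7%

# Sanity check: the current 12%/year rate alone would undershoot
projected_at_12pct = start_twh * 1.12 ** years
print(f"At 12%/year, 2030 demand would be ~{projected_at_12pct:.0f} TWh")
```

In other words, doubling by 2030 assumes the current 12 percent growth rate itself accelerates.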

OPEC reckons data centre electricity use could even triple to 1,500 TWh by then. And Goldman Sachs? They’re saying global power demand from data centres could leap by as much as 165% compared to 2023, with those data centres specifically kitted out for AI seeing their demand shoot up by more than four times.

There are even suggestions that data centres could be responsible for up to 21% of all global energy demand by 2030 if you count the energy it takes to get AI services to us, the users.

When we talk about AI’s energy use, it mainly splits into two big chunks: training the AI, and then actually using it.

Training enormous models, like GPT-4, takes a colossal amount of energy. Just to train GPT-3, for example, it’s estimated they used 1,287 megawatt-hours (MWh) of electricity, and GPT-4 is thought to have needed a whopping 50 times more than that. 

While training is a power hog, it’s the day-to-day running of these trained models that can chew through over 80% of AI’s total energy. It’s reported that asking ChatGPT a single question uses about ten times more energy than a Google search (we’re talking roughly 2.9 Wh versus 0.3 Wh).
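Those per-query figures are tiny on their own; it’s the volume that matters. A rough illustration, assuming a purely hypothetical one billion AI queries a day:

```python
# Rough scale illustration of the per-query gap (2.9 Wh vs 0.3 Wh).
# The one-billion-queries-per-day volume is a hypothetical assumption
# for illustration, not a figure from any study cited here.
chatgpt_wh, search_wh = 2.9, 0.3
queries_per_day = 1_000_000_000

extra_wh_per_day = (chatgpt_wh - search_wh) * queries_per_day
extra_gwh_per_year = extra_wh_per_day * 365 / 1e9  # Wh -> GWh

print(f"Extra energy: ~{extra_gwh_per_year:,.0f} GWh per year")
```

Under those assumptions, a tenth-of-a-watt-hour difference per query compounds into hundreds of gigawatt-hours a year.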

With everyone jumping on the generative AI bandwagon, the race is on to build ever more powerful – and therefore more energy-guzzling – data centres.

So, can we supply energy for AI – and for ourselves?

This is the million-dollar question, isn’t it? Can our planet’s energy systems cope with this new demand? We’re already juggling a mix of fossil fuels, nuclear power, and renewables. If we’re going to feed AI’s growing appetite sustainably, we need to ramp up and diversify how we generate energy, and fast.

Naturally, renewable energy – solar, wind, hydro, geothermal – is a huge piece of the puzzle. In the US, for instance, renewables are set to go from 23% of power generation in 2024 to 27% by 2026. 

The tech giants are making some big promises; Microsoft, for example, is planning to buy 10.5 GW of renewable energy between 2026 and 2030 just for its data centres. AI itself could actually help us use renewable energy more efficiently, perhaps cutting energy use by up to 60% in some areas by making energy storage smarter and managing power grids better.

But let’s not get carried away. Renewables have their own headaches. The sun doesn’t always shine, and the wind doesn’t always blow, which is a real problem for data centres that need power around the clock, every single day. The batteries we have now to smooth out these bumps are often expensive and take up a lot of room. Plus, plugging massive new renewable projects into our existing power grids can be a slow and complicated business.

This is where nuclear power is starting to look more appealing to some, especially as a steady, low-carbon way to power AI’s massive energy needs. It delivers that crucial 24/7 power, which is exactly what data centres crave. There’s a lot of buzz around Small Modular Reactors (SMRs) too, because they’re potentially more flexible and have beefed-up safety features. And it’s not just talk; big names like Microsoft, Amazon, and Google are seriously looking into nuclear options.

Matt Garman, who heads up AWS, recently put it plainly to the BBC, calling nuclear a “great solution” for data centres. He said it’s “an excellent source of zero carbon, 24/7 power.” He also stressed that planning for future energy is a massive part of what AWS does.

“It’s something we plan many years out,” Garman mentioned. “We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that, particularly as we look 10 years out.”

Still, nuclear power isn’t a magic wand. Building new reactors takes a notoriously long time, costs a fortune, and involves wading through complex red tape. And let’s be frank, public opinion on nuclear power is still a bit shaky, often because of past accidents, even though modern reactors are much safer.

The sheer speed at which AI is developing also creates a bit of a mismatch with how long it takes to get a new nuclear plant up and running. This could mean we end up leaning even more heavily on fossil fuels in the short term, which isn’t great for our green ambitions. Plus, the idea of sticking data centres right next to nuclear plants has got some people worried about what that might do to electricity prices and reliability for everyone else.

Not just kilowatts: Wider environmental shadow of AI looms

AI’s impact on the planet goes way beyond just the electricity it uses. Those data centres get hot, and cooling them down uses vast amounts of water. Your average data centre sips about 1.7 litres of water for every kilowatt-hour of energy it burns through.

Back in 2022, Google’s data centres reportedly drank their way through about 5 billion gallons of fresh water – that’s a 20% jump from the year before. Some estimates suggest that for every kWh a data centre uses, it might need up to two litres of water just for cooling. Put another way, global AI infrastructure could soon be chugging six times more water than the entirety of Denmark.
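To put that litres-per-kWh figure in perspective, here is a rough sketch for a hypothetical 100 MW facility running around the clock (the facility size is an assumption for illustration only):

```python
# Water draw implied by the ~1.7 litres-per-kWh figure, for a
# hypothetical 100 MW facility running 24/7. The 100 MW size is
# an assumption for illustration, not a real facility.
litres_per_kwh = 1.7
facility_mw = 100
hours_per_year = 24 * 365

kwh_per_year = facility_mw * 1_000 * hours_per_year  # MW -> kW
litres_per_year = kwh_per_year * litres_per_kwh

print(f"~{litres_per_year / 1e9:.2f} billion litres of water per year")
```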

And then there’s the ever-growing mountain of electronic waste, or e-waste. Because AI tech – especially specialised hardware like GPUs and TPUs – moves so fast, old kit gets thrown out more often. AI could contribute to a data centre e-waste pile-up of five million tonnes a year by 2030. 

Even making the AI chips and all the other bits for data centres takes a toll on our natural resources and the environment. It means mining for critical minerals like lithium and cobalt, often using methods that aren’t exactly kind to the planet.

Just to make one AI chip can take over 1,400 litres of water and 3,000 kWh of electricity. This hunger for new hardware is also pushing for more semiconductor factories, which, guess what, often leads to more gas-powered energy plants being built.

And, of course, we can’t forget the carbon emissions. When AI is powered by electricity generated from burning fossil fuels, it adds to the climate change problem we’re all facing. It’s estimated that training just one big AI model can pump out as much CO2 as hundreds of US homes do in a year.

If you look at the environmental reports from the big tech companies, you can see AI’s growing carbon footprint. Microsoft’s yearly emissions, for example, went up by about 40% between 2020 and 2023, mostly because they were building more data centres for AI. Google also reported that its total greenhouse gas emissions have shot up by nearly 50% over the last five years, with the power demands of its AI data centres being a major culprit.

Can we innovate our way out?

It might sound like all doom and gloom, but a combination of new ideas could help.

A big focus is on making AI algorithms themselves more energy-efficient. Researchers are coming up with clever tricks like “model pruning” (stripping out unnecessary bits of an AI model), “quantisation” (using less precise numbers, which saves energy), and “knowledge distillation” (where a smaller, thriftier AI model learns from a big, complex one). Designing smaller, more specialised AI models that do specific jobs with less power is also a priority.
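Of those tricks, quantisation is the easiest to see in a few lines. A minimal NumPy sketch (real frameworks use per-channel scales and zero-points; this only shows the core idea):

```python
import numpy as np

# Minimal sketch of post-training quantisation: map float32 weights
# onto int8 using a single scale factor. Each weight drops from
# 4 bytes to 1, cutting memory (and, on suitable hardware, energy).
weights = np.random.randn(1000).astype(np.float32)

scale = np.abs(weights).max() / 127.0           # map range onto [-127, 127]
quantised = np.round(weights / scale).astype(np.int8)
dequantised = quantised.astype(np.float32) * scale

print(f"Memory: {weights.nbytes} -> {quantised.nbytes} bytes (4x smaller)")
print(f"Max rounding error: {np.abs(weights - dequantised).max():.4f}")
```

The price is a small rounding error per weight, which in practice large models tolerate well.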

Inside data centres, things like “power capping” (putting a lid on how much power hardware can draw) and “dynamic resource allocation” (shifting computing power around based on real-time needs and when renewable energy is plentiful) can make a real difference. Software that’s “AI-aware” can even shift less urgent AI jobs to times when energy is cleaner or demand on the grid is lower. AI can even be used to make the cooling systems in data centres more efficient.
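That “AI-aware” scheduling idea can be sketched in a few lines: defer flexible jobs to the hours with the cleanest forecast grid. The hourly carbon-intensity figures and job names below are invented for illustration:

```python
# Toy carbon-aware scheduler: assign deferrable jobs to the hours
# with the lowest forecast grid carbon intensity. The intensity
# figures (gCO2/kWh per hour) and job names are hypothetical.
forecast = {0: 250, 6: 180, 12: 90, 18: 210}   # hour -> gCO2/kWh
deferrable_jobs = ["batch-training", "embedding-refresh", "log-analysis"]

# Greedily pick the cleanest hours first
cleanest_hours = sorted(forecast, key=forecast.get)
schedule = dict(zip(deferrable_jobs, cleanest_hours))

for job, hour in schedule.items():
    print(f"{job}: run at {hour:02d}:00 ({forecast[hour]} gCO2/kWh)")
```

Production systems layer deadlines, priorities, and live grid data on top, but the principle is the same: move elastic work to when energy is cleanest.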

On-device AI could also help to reduce power consumption. Instead of sending data off to massive, power-hungry cloud data centres, the AI processing happens right there on your phone or device. This could slash energy use, as the chips designed for this prioritise being efficient over raw power.

And we can’t forget about rules and regulations. Governments are starting to wake up to the need to make AI accountable for its energy use and wider environmental impact.

Having clear, standard ways to measure and report AI’s footprint is a crucial first step. We also need policies that encourage companies to make hardware that lasts longer and is easier to recycle, to help tackle that e-waste mountain. Things like energy credit trading systems could even give companies a financial reason to choose greener AI tech.

It’s worth noting that the United Arab Emirates and the United States shook hands this week on a deal to build the biggest AI campus outside the US in the Gulf. While this shows just how important AI is becoming globally, it also throws a spotlight on why all these energy and environmental concerns need to be front and centre for such huge projects.

Finding a sustainable future for AI

AI has the power to do some amazing things, but its ferocious appetite for energy is a serious hurdle. The predictions for its future power demands are genuinely startling, potentially matching what whole countries use.

If we’re going to meet this demand, we need a smart mix of energy sources. Renewables are fantastic for the long run, but they have their wobbles when it comes to consistent supply and scaling up quickly. Nuclear power – including those newer SMRs – offers a reliable, low-carbon option that’s definitely catching the eye of big tech companies. But we still need to get our heads around the safety, cost, and how long they take to build.

And remember, it’s not just about electricity. AI’s broader environmental impact – from the water it drinks to cool data centres, to the growing piles of e-waste from its hardware, and the resources it uses up during manufacturing – is huge. We need to look at the whole picture if we’re serious about lessening AI’s ecological footprint.

The good news? There are plenty of promising ideas and innovations bubbling up. 

Energy-saving AI algorithms, clever power management in data centres, AI-aware software that can manage workloads intelligently, and the shift towards on-device AI all offer ways to cut down on energy use. Plus, the fact that we’re even talking about AI’s environmental impact more means that discussions around policies and rules to push for sustainability are finally happening.

Dealing with AI’s energy and environmental challenges needs everyone – researchers, the tech industry, and policymakers – to roll up their sleeves and work together, and fast.

If we make energy efficiency a top priority in how AI is developed, invest properly in sustainable energy, manage hardware responsibly from cradle to grave, and put supportive policies in place, we can aim for a future where AI’s incredible potential is unlocked in a way that doesn’t break our planet.

The race to lead in AI has to be a race for sustainable AI too.

(Photo by Nejc Soklič)

See also: AI tool speeds up government feedback, experts urge caution

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
