Manufacturing & Engineering AI - AI News
https://www.artificialintelligence-news.com/categories/ai-in-action/manufacturing-engineering-ai/

Google makes its industrial robotics AI play official–and this time, it means business
https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/
Wed, 04 Mar 2026 08:00:00 +0000

When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From Moonshot to Mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers, with code that varies from one robot to the next. Intrinsic’s answer is Flowstate–a web-based platform that allows users to build robotic applications without having to write thousands of lines of code.

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, pairing advanced reasoning, perception, and learning capabilities with industrial-grade robotics software. The goal is machines that interpret sensor data more accurately, adapt to dynamic environments, and execute complex tasks.

Intrinsic has also expanded through acquisitions–in 2022 it acquired Open Source Robotics Corp., the for-profit arm of the foundation behind the Robot Operating System (ROS). And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing.

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing — all within reach once Google’s infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Hitachi bets on industrial expertise to win the physical AI race
https://www.artificialintelligence-news.com/news/hitachi-physical-ai-industrial-expertise/
Mon, 23 Feb 2026 07:00:00 +0000

Physical AI – the branch of artificial intelligence that controls robots and industrial machinery in the real world – has a hierarchy problem. At the top, OpenAI and Google are scaling multimodal foundation models. In the middle, Nvidia is building the platforms and tools for physical AI development.

And then there is a third camp: industrial manufacturers like Hitachi and Germany’s Siemens, which are making the quieter but arguably more grounded argument that you cannot train machines to navigate the physical world without first understanding it.

That argument is now moving from boardroom strategy to factory floor deployment, as Hitachi revealed in a recent interview with Nikkei Asia.

Why Physical AI needs a better model

Kosuke Yanai, deputy director of Hitachi’s Centre for Technology Innovation-Artificial Intelligence, is direct about what separates viable physical AI from the theoretical kind. “Physical AI cannot be implemented in society without a systematic understanding that begins with foundational knowledge of physics and industrial equipment,” he told Nikkei.

Hitachi’s pitch is that it already holds much of that foundational knowledge – accumulated over decades of building railways, power infrastructure, and industrial control systems. The company has thermal fluid simulation technology that models the behaviour of gases and liquids, and signal-processing tools for monitoring equipment condition – what Yanai describes as the engineering foundation underpinning Hitachi’s ‘extensive knowledge of product design and control logic construction.’

Daikin and JR East

While Hitachi’s overarching physical AI architecture – the Integrated World Infrastructure Model (IWIM), which it describes as a mixture-of-experts system integrating multiple specialised models and data sets – remains in the concept verification stage, two real-world deployments signal that the underlying approach is already producing results.

In collaboration with Daikin Industries, Hitachi has deployed an AI system that diagnoses malfunctions in commercial air-conditioner manufacturing equipment. The system, trained on equipment maintenance records, procedure manuals, and design drawings, can now identify which component is likely failing when an anomaly is detected – the kind of operational intuition that previously existed only in the heads of experienced engineers.

With East Japan Railway (JR East), Hitachi has built an AI that identifies the root cause of malfunctions in the control devices running the Tokyo metropolitan area’s railway traffic management system, and then assists operators in formulating a response plan. In a network where delays ripple across millions of daily journeys, the ability to accelerate fault diagnosis carries real operational weight.

The R&D pipeline: Cutting development time

Hitachi’s physical AI push is also showing up in its research output. In December 2025, the company published findings from two projects presented at ASE 2025, a top-tier software engineering conference, that address a persistent bottleneck in industrial AI: the time and effort required to write and adapt control software.

In the automotive sector, Hitachi and its subsidiary Astemo developed a system that uses retrieval-augmented generation to automatically produce integration test scripts for vehicle electronic control units (ECUs) – pulling from hardware-specific API information and frontline engineering knowledge. In a pilot involving multi-core ECU testing, the technology reduced integration testing man-hours by 43% compared to manual execution.
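Hitachi has not published implementation details, but the retrieval half of such a pipeline can be sketched in miniature. Everything below–the API notes, the keyword-overlap scoring, the prompt format–is an illustrative assumption, not Hitachi’s or Astemo’s actual system:

```python
# Illustrative sketch of the retrieval step in a RAG pipeline for test-script
# generation. Corpus entries, scoring, and prompt format are hypothetical.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in a document (toy relevance score)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus snippets most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(requirement: str, corpus: list[str]) -> str:
    """Assemble an LLM prompt from the requirement plus retrieved context."""
    context = "\n".join(retrieve(requirement, corpus))
    return (f"Using only the API notes below, write an integration test.\n"
            f"API notes:\n{context}\n"
            f"Requirement: {requirement}\n")

# Hypothetical hardware-specific API notes for a multi-core ECU.
corpus = [
    "can_send(frame) transmits a CAN frame on the primary bus",
    "core_sync() blocks until both ECU cores reach the barrier",
    "read_sensor(channel) returns the raw ADC value for a channel",
]

prompt = build_prompt("verify both cores sync before CAN transmit", corpus)
print(prompt)
```

In a production system the keyword scorer would be replaced by embedding search over real API documentation, and the prompt would be sent to an LLM; grounding the generation in retrieved, hardware-specific context is what distinguishes this from free-form code generation.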

In logistics, the company developed variability management technology that modularises robot control software into reusable components structured around the Robot Operating System (ROS). By mapping out the environmental variables and operational requirements of different warehouse settings in advance, the system lets operators adapt robotic picking-and-placing workflows to new products or layouts without rewriting software from scratch.

Safety as a structural requirement

One thread that runs through all of Hitachi’s physical AI work is its emphasis on safety guardrails – not as a compliance checkbox, but as an engineering constraint baked into system design. Yanai told Nikkei that the company is integrating its control and reliability technology from social infrastructure development to prevent AI outputs from deviating from human-approved operating parameters.

This includes input validation to screen out data that models should not be trained on, output verification to ensure machine actions do not endanger people or property, and real-time monitoring of the AI model itself for operational anomalies.

The distinction matters: physical AI systems fail in the real world, not in a sandbox. The stakes for an AI controlling railway signalling or factory robotics are categorically different from those governing a chatbot.

Infrastructure to match ambition

On the infrastructure side, Hitachi Vantara – the group’s data and digital infrastructure arm – is positioning itself as an early adopter of Nvidia’s RTX PRO Servers, built on the RTX PRO 6000 Blackwell Server Edition GPU, designed to accelerate agentic and physical AI workloads. The hardware is being paired with Hitachi’s iQ platform and used to build digital twins – virtual replicas of physical systems – that can simulate everything from grid fluctuations to robotic motion at scale.

The IWIM concept, meanwhile, is designed to connect Nvidia’s open-source Cosmos physical AI development platform with specialised Japanese-language LLMs and visual language models via the model context protocol (MCP) – essentially a framework to stitch together the models, simulation tools, and industrial datasets that physical AI systems require.

The broader race in physical AI is far from settled. But Hitachi’s position – that domain expertise and operational data are as important as model architecture – is increasingly hard to dismiss, particularly as deployments with partners like Daikin and JR East begin to demonstrate what that expertise is actually worth in practice.

Sources: Nikkei Asia (Feb 21, 2026); Hitachi R&D (Dec 24, 2025); Hitachi Vantara Blog (Aug 27, 2025)

See also: Alibaba enters physical AI race with open-source robot model RynnBrain


PepsiCo is using AI to rethink how factories are designed and updated
https://www.artificialintelligence-news.com/news/pepsico-is-using-ai-to-rethink-how-factories-are-designed-and-updated/
Fri, 30 Jan 2026 10:00:00 +0000

For many large companies, the most useful form of AI right now has little to do with writing emails or answering questions. At PepsiCo, AI is being tested in places where mistakes are costly and changes are hard to undo — factory layouts, production lines, and physical operations.

That shift is visible in how PepsiCo is using AI and digital twins to model and adjust its manufacturing facilities before making changes in the real world. Rather than experimenting with chat interfaces or office tools, the company is applying AI to one of its core problems: how to configure factories faster, with less risk, and fewer disruptions.

Digital twins are virtual models of physical systems. In manufacturing, they can simulate equipment placement, material flow, and production speed. When combined with AI, these models can test thousands of scenarios that would be impractical — or expensive — to try on a live production line.
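Stripped to its essentials, that scenario-sweep idea looks like the sketch below. The line model, parameter ranges, and numbers are invented for illustration – a real digital twin would use a discrete-event or physics simulation calibrated to the actual facility:

```python
# Toy digital-twin sweep: evaluate many line configurations in simulation
# and keep the best. The throughput model below is invented for illustration.
import itertools

def simulate_throughput(stations: int, buffer_size: int, speed: float) -> float:
    """Crude line model: speed helps until buffers bottleneck it; adding
    stations helps with diminishing returns."""
    bottleneck = min(speed, buffer_size * 0.3)
    return bottleneck * stations / (1 + 0.05 * stations)

# Candidate configurations a live production line could never test exhaustively.
stations_opts = [4, 6, 8]
buffer_opts = [2, 5, 10]
speed_opts = [1.0, 1.5, 2.0]

best = max(
    itertools.product(stations_opts, buffer_opts, speed_opts),
    key=lambda cfg: simulate_throughput(*cfg),
)
print("best configuration:", best)
```

The value is not in any single run but in sweeping thousands of combinations cheaply: the simulation surfaces the bottlenecks and the most promising layouts before anyone touches the physical line.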

PepsiCo has been working with partners to apply AI-driven digital twins to parts of its manufacturing network, with early pilots focused on improving how facilities are designed and adjusted over time.

The goal is not automation for its own sake. It is cycle time. Instead of taking weeks or months to validate changes through physical trials, teams can test configurations virtually, identify problems earlier, and move faster when updates are needed.

From planning bottleneck to operational shortcut

In large consumer goods companies, factory changes tend to move slowly. Even small adjustments — a new line layout, different packaging flow, or equipment upgrade — can require long planning cycles, approvals, and staged testing. Each delay has knock-on effects on supply chains and product availability.

Digital twins offer a way around that bottleneck. By simulating production environments, teams can see how changes might affect throughput, safety, or downtime before touching the actual facility.

PepsiCo’s early pilots showed faster validation times and signs of throughput improvement at initial sites, though the company has not published detailed metrics yet. What matters more than the numbers is the pattern: AI is being used to compress decision cycles in physical operations, not to replace workers or remove human judgment.

This kind of use case fits a broader trend. Enterprises that move beyond pilot projects often focus on narrow, well-defined problems where AI can reduce friction in existing workflows. Manufacturing, logistics, and healthcare operations are showing more traction than open-ended knowledge work.

Why PepsiCo treats AI as operations engineering, not office productivity

PepsiCo’s approach also highlights a quieter shift in how AI programs are being justified inside large firms. The value is tied to operational outcomes — time saved, fewer disruptions, better planning — rather than general claims about productivity.

That distinction matters. Many enterprise AI efforts stall because they struggle to connect usage with measurable impact. Tools get deployed, but workflows stay the same.

Digital twins change that dynamic because they sit directly inside planning and engineering processes. If a simulated change cuts weeks off a factory upgrade, the benefit is visible. If it reduces downtime risk, operations teams can measure that over time.

This focus on process change, rather than tools, mirrors what is happening in other sectors. In healthcare, for example, Amazon is testing an AI assistant inside its One Medical app that uses patient history to reduce repetitive intake and support care interactions, according to comments from CEO Andy Jassy reported this week. The assistant is embedded in the care workflow, not offered as a standalone feature.

Both cases point to the same lesson: AI adoption moves faster when it fits into how work already gets done, instead of asking teams to invent new habits.

Why this matters for other enterprises

PepsiCo’s digital-twin work is unlikely to be unique for long. Large manufacturers across food, chemicals, and industrial goods face similar planning constraints and cost pressures. Many already use simulation software. AI adds speed and scale to those models.

What is more interesting is what this says about the next phase of enterprise AI adoption.

First, the centre of gravity is shifting away from broad, generic tools toward focused systems tied to specific decisions. Second, success depends less on model quality and more on data quality, process ownership, and governance. A digital twin is only as useful as the operational data feeding it.

Third, this kind of AI work tends to stay out of the spotlight. It does not generate flashy demos, but it can reshape how companies plan capital spending and manage risk.

That also explains why many firms remain cautious. Building and maintaining accurate digital twins takes time, cross-team coordination, and deep knowledge of physical systems. The payoff comes from repeated use, not one-off wins.

PepsiCo’s manufacturing AI work is a quiet signal worth watching

In AI coverage, it is easy to focus on new models, agents, or interfaces. Stories like PepsiCo’s point in a different direction. They show AI being treated as infrastructure — something that sits underneath daily decisions and gradually changes how work flows through an organisation.

For enterprise leaders, the takeaway is not to copy the technology stack. It is to look for places where planning delays, validation cycles, or operational risk slow the business down. Those friction points are where AI has the best chance of sticking.

PepsiCo’s digital-twin pilots suggest that the factory floor may be one of the most practical testing grounds for AI today — not because it is trendy, but because the impact is easier to see when time and mistakes have a clear cost.

(Photo by NIKHIL)

See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks


Bosch’s €2.9 billion AI investment and shifting manufacturing priorities
https://www.artificialintelligence-news.com/news/bosch-e2-9-billion-ai-investment-and-shifting-manufacturing-priorities/
Thu, 08 Jan 2026 10:00:00 +0000

Factories are producing more data than they can process, and companies like Bosch are using AI to close the gap. Cameras watch production lines, sensors track machines, and software records each step of the process. However, much of that information never translates into faster decisions or fewer breakdowns. For large manufacturing firms, the missed opportunity is pushing AI from small trials into core operations.

The shift helps explain why Bosch plans to invest about €2.9 billion in artificial intelligence by 2027, according to The Wall Street Journal. The spending is aimed at manufacturing, supply chain management, and perception systems, areas where the company sees AI as a way to improve how physical systems behave in real conditions.

How Bosch uses AI to catch manufacturing problems earlier

In manufacturing, delays and defects frequently start small. A minor variation in materials or machine settings can ripple through a production line. Bosch has been applying AI models to camera feeds and sensor data to detect quality issues earlier.

Instead of catching defects after products are finished, systems can flag problems while items are still on the line. That gives workers time to change operations before waste increases. For high-volume manufacturing, earlier detection can reduce scrap and limit the need for rework.

Equipment maintenance is another area under pressure. Many factories still rely on fixed schedules or manual inspections, which can miss early warning signs of errors or failure. AI models trained on vibration and temperature data can help predict when a machine is likely to fail.

This allows maintenance teams to plan repairs instead of reacting to breakdowns. The aim is to reduce unplanned downtime without replacing equipment too early. Over time, this approach can extend the working life of machines while keeping production more stable.
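Bosch has not detailed its models, but the underlying idea – compare recent sensor readings against a machine’s learned baseline and flag statistical drift – can be sketched simply. The data, units, and threshold below are hypothetical:

```python
# Toy predictive-maintenance check: flag a machine when recent vibration
# readings drift well above its historical baseline. Thresholds are invented.
import statistics

def needs_inspection(history: list[float], recent: list[float],
                     z_threshold: float = 3.0) -> bool:
    """Flag if the mean of recent readings is a z_threshold-sigma outlier
    relative to the machine's historical baseline."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    z = (statistics.mean(recent) - mu) / sigma
    return z > z_threshold

# Hypothetical vibration data (mm/s RMS): a stable baseline, then a drift.
baseline = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 2.1, 2.0]
healthy_week = [2.0, 2.1, 1.9]
worn_bearing_week = [3.2, 3.5, 3.4]

print(needs_inspection(baseline, healthy_week))       # expected: False
print(needs_inspection(baseline, worn_bearing_week))  # expected: True
```

Production systems would learn richer baselines (per machine, per operating mode, across vibration and temperature together) rather than a single z-score, but the planning benefit is the same: the flag arrives before the failure, not after.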

Making supply chains more adaptable

Supply chains are also part of the investment focus. Disruptions that became visible during the pandemic have not fully disappeared, and manufacturers are still dealing with shifting demand and transport delays.

AI systems can help forecast needs, track parts across sites, and adjust plans when conditions change. Even small improvements in planning accuracy can have a broad effect when applied across hundreds of factories and suppliers.

Bosch is also funding perception systems, which help machines understand their surroundings. These systems combine input from cameras, radar, and other sensors with AI models that can recognise objects, judge distance, or spot changes in the environment. They are used in areas like factory automation, driver assistance, and robotics, where machines must respond quickly and safely. In these environments, AI is reacting to real-world conditions as they happen.

Why edge computing matters on the factory floor

Much of this work takes place at the edge. In factories and vehicles, sending data to a distant cloud system and waiting for a response can add delay or create risk if connections fail. Running AI models locally allows systems to respond in real time and keep operating even when networks are unreliable.

It also limits how much sensitive data leaves a site. For industrial companies, that can matter as much as speed, especially when production processes are closely guarded.

Cloud systems still play a role, though mostly behind the scenes. Training models, managing updates, and analysing trends across locations mostly happen in central environments.

Many manufacturers are moving toward a hybrid setup, using cloud systems for coordination and learning, and edge systems for action. The pattern is becoming common in industrial firms, not just Bosch.
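One common shape for that hybrid split – decide locally, never block on the network, and sync telemetry to the cloud when connectivity allows – can be sketched as follows. All class and function names here are illustrative, not from Bosch’s stack:

```python
# Illustrative hybrid pattern: run inference at the edge, fall back to a
# safe default if no local model is loaded, and queue telemetry for later
# upload to the cloud. All names here are hypothetical.
from collections import deque

class EdgeNode:
    def __init__(self, local_model=None):
        self.local_model = local_model  # callable: reading -> decision
        self.pending_sync = deque()     # telemetry awaiting cloud upload

    def infer(self, reading: float) -> str:
        """Decide locally; never block on the network."""
        if self.local_model is not None:
            decision = self.local_model(reading)
        else:
            decision = "hold"           # safe default when no model is loaded
        self.pending_sync.append((reading, decision))
        return decision

    def sync(self, cloud_upload) -> int:
        """Flush queued telemetry when connectivity allows; returns count."""
        n = 0
        while self.pending_sync:
            cloud_upload(self.pending_sync.popleft())
            n += 1
        return n

# Hypothetical model: stop the line if a sensor reading crosses a limit.
node = EdgeNode(local_model=lambda r: "stop" if r > 80.0 else "run")
print(node.infer(42.0))  # decided locally, no network round-trip
print(node.infer(95.0))
uploaded = []
print(node.sync(uploaded.append), "records synced")
```

The cloud side would retrain models on the uploaded telemetry and push updated weights back down, but the safety-critical decision path never depends on the connection being up.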

Scaling AI beyond small trials

The scale of the investment matters: small AI pilots can show promise, but rolling them out across all operations takes funding, skilled staff, and long-term commitment.

Bosch executives have described AI as a way to support workers, not replace them, and as a tool to handle complexity that humans cannot manage alone. That view reflects a broader shift in industry, where AI is treated less as an experiment and more as basic infrastructure.

What Bosch’s manufacturing AI strategy shows in practice

Rising energy costs, labour shortages, and tighter margins leave less room for inefficiency. Automation alone no longer solves those problems. Companies are looking for systems that can adjust to changing conditions without constant manual input.

Bosch’s €2.9 billion commitment sits in that wider shift. Other large manufacturers are making similar moves, often without public fanfare, by upgrading factories and retraining staff. What stands out is the focus on operational use rather than customer-facing features.

Taken together, these efforts show how end-user companies are applying AI today. The work is less about bold claims and more about reducing waste, improving uptime, and making complex systems easier to manage. For industrial firms, that practical focus may define how AI delivers value over time.

(Photo by P. L.)

See also: Agentic AI scaling requires new memory architecture


Arm and the future of AI at the edge
https://www.artificialintelligence-news.com/news/arm-chips-and-the-future-of-ai-at-the-edge/
Tue, 23 Dec 2025 13:45:19 +0000

Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has been focused to date on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference tasks, is likely to be increasingly decentralised.

“The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded in them: in the last year alone, Arm designs have been the IP behind over 30 billion chips, placed in devices of every conceivable description all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means that power bills for compute and cooling are lower. That keeps the environmental footprint of the technology as small as possible.

Secondly, putting AI in local settings means latency is much lower, since latency is largely determined by the network distance between the device and the AI model. Arm points to uses like instant translation, dynamic scheduling of control systems, and features like the near-immediate triggering of safety functions – for instance in IIoT settings.

Thirdly, ‘keeping it local’ means there’s no potentially sensitive data sent off-premise. The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface.

Arm silicon, optimised for power-constrained devices, is well suited to delivering compute where it’s needed on the ground, the company says. The future may well be one where AI is woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, with the supply-chain disruptions and concentrated dependencies of the COVID-19 pandemic still fresh in many policymakers’ memories.

Arm lobbies for workforce development, and is currently working with policymakers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what the government there terms acceleration and innovation, while the EU leads on safety, privacy, security and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs, yet furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise in the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.

Of course, sectors already highly-regulated in terms of data practices are unlikely to experience relaxed governance in the future – the opposite is pretty much inevitable. All industries will be seeing more regulation and greater penalties for non-compliance in the years to come. However, to balance that, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe and Scandinavia, ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend that even the US hyperscalers are responding to: AWS’s Graviton range of low-cost, low-power Arm-based platforms is there to satisfy that exact demand.

Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may see in the next 12 to 18 months. Global AI exports, particularly from the US and the Middle East, mean local demand for AI can be satisfied by the big providers. Arm can both supply those big providers in these contexts (as part of their portfolios of offerings) and satisfy the rising demand for edge-based AI.

Jesaitis also sees edge AI as something of the hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been in low-power compute for mobile, it’s inherently ‘greener’. As enterprises hope to meet energy goals without sacrificing compute, Arm offers a way that combines performance with responsibility.

Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and – thanks to near-zero network latency – highly-responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”

(Image source: “Factory Floor” by danielfoster437 is licensed under CC BY-NC-SA 2.0.)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Arm and the future of AI at the edge appeared first on AI News.

]]>
Inside China’s push to apply AI across its energy system https://www.artificialintelligence-news.com/news/inside-chinas-push-to-apply-ai-across-its-energy-system/ Tue, 23 Dec 2025 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=111413 Under China’s push to clean up its energy system, AI is starting to shape how power is produced, moved, and used — not in abstract policy terms, but in day-to-day operations. In Chifeng, a city in northern China, a renewable-powered factory offers a clear example. The site produces hydrogen and ammonia using electricity generated entirely […]

The post Inside China’s push to apply AI across its energy system appeared first on AI News.

]]>
Under China’s push to clean up its energy system, AI is starting to shape how power is produced, moved, and used — not in abstract policy terms, but in day-to-day operations.

In Chifeng, a city in northern China, a renewable-powered factory offers a clear example. The site produces hydrogen and ammonia using electricity generated entirely from nearby wind and solar farms. Unlike traditional plants connected to the wider grid, this facility runs on its own closed system. That setup brings a problem as well as a benefit: renewable power is clean, but it rises and falls with the weather.

To keep production stable, the factory relies on an AI-driven control system built by its owner, Envision. Rather than following fixed schedules, the software continuously adjusts output based on changes in wind and sunlight. As reported by Reuters, Zhang Jian, Envision’s chief engineer for hydrogen energy, compared the system to a conductor, coordinating electricity supply and industrial demand in real time.

When wind speeds increase, production ramps up automatically to take full advantage of the available power. When conditions weaken, electricity use is quickly reduced to avoid strain. Zhang said the system allows the plant to operate at high efficiency despite the volatility of renewable energy.
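
The weather-following behaviour described above, ramping production up with the wind and shedding load when generation weakens, can be sketched as a simple control rule. Everything below (the function, limits, and numbers) is invented for illustration and is not Envision's actual system:

```python
# Illustrative sketch of weather-following load control; NOT Envision's software.
# Hypothetical electrolyser that runs stably between a 20 MW floor and 100 MW capacity.

MIN_LOAD_MW = 20.0   # minimum stable electrolyser load (assumed)
MAX_LOAD_MW = 100.0  # nameplate electrolyser capacity (assumed)

def set_electrolyser_load(available_renewable_mw: float) -> float:
    """Match hydrogen production to whatever wind/solar is producing right now."""
    if available_renewable_mw < MIN_LOAD_MW:
        # Below the stable floor: pause production rather than strain the system.
        return 0.0
    return min(available_renewable_mw, MAX_LOAD_MW)

# Strong wind: run flat out; moderate wind: throttle back; calm: stand down.
print(set_electrolyser_load(130.0))  # 100.0
print(set_electrolyser_load(55.0))   # 55.0
print(set_electrolyser_load(8.0))    # 0.0
```

A real controller would also smooth ramp rates and coordinate with downstream ammonia synthesis, but the core idea, continuously re-deciding load from live generation rather than a fixed schedule, is the same.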

Projects like this are central to China’s plans for hydrogen and ammonia, fuels seen as important for cutting emissions in sectors such as steelmaking and shipping. They also point to a broader strategy: using AI to manage complexity as the country adds more renewable power to its grid.

Researchers argue that AI could play a significant role in meeting China’s climate goals. Zheng Saina, an associate professor at Southeast University in Nanjing who studies low-carbon transitions, said AI can support tasks ranging from emissions tracking to forecasting electricity supply and demand. At the same time, she cautioned that AI itself is driving rapid growth in power consumption, particularly through energy-hungry data centres.

China now installs more wind and solar capacity than any other country, but absorbing that power efficiently remains a challenge. According to Cory Combs, associate director at Beijing-based research firm Trivium China, AI is increasingly seen as a way to make the grid more flexible and responsive.

That thinking was formalised in September, when Beijing introduced an “AI+ energy” strategy. The plan calls for deeper links between AI systems and the energy sector, including the development of multiple large AI models focused on grid operations, power generation, and industrial use. By 2027, the government aims to roll out dozens of pilot projects and test AI across more than 100 use cases. Within another three years, officials want China to reach what they describe as a world-leading level of AI integration in energy.

Combs said the focus is on highly specialised tools designed for specific jobs, such as managing wind farms, nuclear plants, or grid balancing, rather than general-purpose AI. This approach contrasts with the United States, where much of the investment has gone into building advanced large-language models, according to Hu Guangzhou, a professor at the China Europe International Business School in Shanghai.

One area where AI could have immediate impact is demand forecasting. Fang Lurui, an assistant professor at Xi’an Jiaotong-Liverpool University, said power grids must match supply and demand at every moment to avoid outages. Accurate forecasts of renewable output and electricity use allow operators to plan ahead, storing energy in batteries when needed and reducing reliance on coal-fired backup plants.
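
Fang's point can be made concrete with a toy hourly dispatch rule: store forecast surpluses in batteries and draw them down before calling on coal backup. All names and figures here are hypothetical, not any grid operator's real logic:

```python
# Toy one-hour dispatch step (treating MW over one hour as MWh).
# Hypothetical values throughout; real dispatch optimises over many hours and constraints.

def dispatch(renewables_mw, demand_mw, battery_mwh, battery_cap_mwh):
    """Return (new battery level, coal backup needed) for one forecast hour."""
    surplus = renewables_mw - demand_mw
    if surplus >= 0:
        # Store what the battery can absorb; any remainder is curtailed.
        return min(battery_cap_mwh, battery_mwh + surplus), 0.0
    deficit = -surplus
    from_battery = min(battery_mwh, deficit)
    # Coal backup covers only what the battery cannot.
    return battery_mwh - from_battery, deficit - from_battery

print(dispatch(500, 400, battery_mwh=50, battery_cap_mwh=120))  # surplus hour: (120, 0.0)
print(dispatch(300, 400, battery_mwh=20, battery_cap_mwh=120))  # deficit hour: (0, 80)
```

The value of accurate AI forecasts in this framing is simple: the better the inputs to a rule like this, the less coal backup has to be held in reserve "just in case".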

Some cities are already experimenting. Shanghai has launched a citywide virtual power plant that links dozens of operators — including data centres, building systems, and electric vehicle chargers — into a single coordinated network. During a trial last August, the system reduced peak demand by more than 160 megawatts, roughly equivalent to the output of a small coal plant.

Combs said such systems matter because modern power generation is increasingly scattered and intermittent. “You need something very robust that is able to be predictive and account for new information very quickly,” he said.

Beyond the grid, China is also looking to apply AI to its national carbon market, which covers more than 3,000 companies in emissions-heavy industries such as power, steel, cement, and aluminium. These sectors together produce over 60% of the country’s carbon emissions. Chen Zhibin, a senior manager at Berlin-based think tank adelphi, said AI could help regulators verify emissions data, refine the allocation of free allowances, and give companies clearer insight into their production costs.

Still, the risks are growing alongside the opportunities. Studies suggest that by 2030, China’s AI data centres could consume more than 1,000 terawatt-hours of electricity each year — roughly the same as Japan’s current annual usage. Lifecycle emissions from the AI sector are projected to rise sharply and peak well after China’s 2030 emissions target.

Xiong Qiyang, a doctoral researcher at Renmin University of China who worked on one such study, said the results reflect the reality that coal still dominates China’s power mix. He warned that rapid AI expansion could complicate national climate goals if energy sources do not shift quickly enough.

In response, regulators have begun tightening rules. A 2024 action plan requires data centres to improve energy efficiency and increase their use of renewable power by 10% each year. Other initiatives encourage new facilities to be built in western regions, where wind and solar resources are more abundant.

Operators on the east coast are also testing new ideas. Near Shanghai, an underwater data centre is set to open, using seawater for cooling to cut energy and water use. The developer, Hailanyun, said the facility will draw most of its power from an offshore wind farm and could be replicated if the project proves viable.

Despite the growing energy demands of AI, Xiong argued that its overall impact on emissions could still be positive if applied carefully. Used to optimise heavy industry, power systems, and carbon markets, he said, AI may remain an essential part of China’s effort to cut emissions — even as it creates new pressures that policymakers must manage.

(Photo by Matthew Henry)

See also: Can China’s chip stacking strategy really challenge Nvidia’s AI dominance?


The post Inside China’s push to apply AI across its energy system appeared first on AI News.

]]>
Mining business learnings for AI deployment https://www.artificialintelligence-news.com/news/mining-ai-gives-businesses-food-for-thought-in-real-life-deployments-of-oi/ Tue, 16 Dec 2025 12:31:59 +0000 https://www.artificialintelligence-news.com/?p=111343 Mining conglomerate BHP describes AI as the way it’s turning operational data into better day-to-day decisions. A blog post from the company highlights the analysis of data from sensors and monitoring systems to spot patterns and flag issues for plant machinery, giving choices to decision-makers that can improve efficiency and safety – plus reduce environmental […]

The post Mining business learnings for AI deployment appeared first on AI News.

]]>
Mining conglomerate BHP describes AI as the way it’s turning operational data into better day-to-day decisions. A blog post from the company highlights the analysis of data from sensors and monitoring systems to spot patterns and flag issues for plant machinery, giving choices to decision-makers that can improve efficiency and safety – plus reduce environmental impact.

For business leaders at BHP, the useful question was not “Where can we use AI?” but “Which decisions do we make repeatedly, and what information would improve them?”

Portfolio not showcase

BHP describes the end-to-end effects of AI on operations, or as it puts it, “from mineral extraction to customer delivery.” Leaders decided to move beyond pilot rollouts and treat AI as an operational capability, starting with a small set of problems that affected the company’s performance: places where change could be measured in results.

The company found it could avoid unplanned machinery downtime, and it tightened its energy and water use. Each use case, addressing a small but impactful problem, was given an owner and an accompanying KPI. Results were reviewed with the same regularity the company applies to other operational performance monitoring.

Where BHP uses AI daily

In addition to focusing on areas such as predictive maintenance and energy optimisation, BHP considered using AI in more adventurous yet important operations such as autonomous vehicles and real-time staff health monitoring. These categories translate well to other asset-heavy environments across logistics, manufacturing, and heavy industry.

Predictive maintenance

Predictive maintenance is the process of planning repairs in scheduled downtime to reduce unexpected failures and costly, unplanned stoppages. Here, AI models analyse equipment data from on-board sensors and can anticipate maintenance needs. This cuts breakdown numbers and reduces equipment-related safety incidents. BHP runs predictive analytics across most of its load-and-haul fleets and its materials handling systems. A central maintenance centre provides real-time and longer-range indications of machine health and potential failure or degradation.

Prediction has become an integral part of its machinery-heavy operations. Previously, such information arrived as ‘just another’ report, one that could get lost in company bureaucracy; now, BHP models and defines thresholds that trigger actions sent directly to maintenance-planning teams.
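
That threshold-to-action pattern might be sketched as follows; the sensor names, limits, and work-order format below are invented for illustration and do not reflect BHP's actual models:

```python
# Hypothetical health limits for a haul truck; real thresholds would be
# learned by models from fleet data, not hard-coded like this.
THRESHOLDS = {
    "engine_oil_temp_c": 110.0,
    "vibration_rms_mm_s": 7.1,
    "hydraulic_pressure_drop_pct": 15.0,
}

def check_asset(asset_id: str, readings: dict) -> list[str]:
    """Return work-order requests for any reading that crosses its threshold."""
    actions = []
    for sensor, limit in THRESHOLDS.items():
        value = readings.get(sensor)
        if value is not None and value > limit:
            # Route straight to the maintenance-planning queue, not into a report.
            actions.append(f"work_order:{asset_id}:{sensor}={value} (limit {limit})")
    return actions

print(check_asset("truck-042", {"engine_oil_temp_c": 118.2, "vibration_rms_mm_s": 5.0}))
```

The design point is the output type: a queued action with an owner, rather than a dashboard figure that someone may or may not read.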

Energy and water optimisation

Deploying predictive maintenance in this manner at its Escondida facility in Chile, the company reports savings of more than three gigalitres of water and 118 gigawatt-hours of energy over two years, attributing the gains directly to AI. The technology gives operators real-time options and analytics that identify anomalies and automate corrective actions at multiple facilities, including concentrators and desalination plants.

The lesson it has learned is to place AI where decisions happen: when operators and control teams can act on recommendations in real time, improvements compound. Conversely, periodic reporting means decisions are only taken if staff both see the data and then decide action is necessary. The real-time nature of data analysis and the use of triggers-to-action mean the difference quickly becomes apparent.

Autonomy and remote operations

BHP is also using more advanced technologies like AI-supported autonomous vehicles and machinery. These are higher-risk areas, and the technology has been found to reduce workers’ exposure to risk and cut the human-error factor in incidents. At the company, complex operational data flows from remote facilities through regional centres; without AI and analytics, staff could not optimise every decision the way the software does.

The use of AI-integrated wearables is increasing in many industries, including engineering, utilities, manufacturing, and mining. BHP leads the way in protecting its staff, who often work in very challenging conditions. Wearables can monitor personal conditions, reading heart rate and fatigue indicators, and provide real-time alerts to supervisors. One example might be ‘smart’ hard-hat sensor technology, used by BHP at Escondida, which measures truck driver fatigue by analysing drivers’ brain waves.

A plan leaders can run

Regardless of industry, decision-makers can draw lessons from BHP’s experiences deploying AI at the (literal) coal face. The following plan could help leaders build their own strategies for applying AI to operational problem areas:

  1. Choose one reliability problem and one resource-efficiency problem that operations teams already track, then attach a KPI.
  2. Map the workflow: who will see the output and what action they can take?
  3. Put basic governance in place for data quality and model monitoring, then review performance alongside operational KPIs.
  4. Start with decision support in higher-risk processes, and automate only after teams validate controls.
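
As an illustrative sketch of how steps 1 to 3 might be tracked, a minimal use-case register could pair each problem with an owner and a KPI. All entries below are invented examples, not BHP data:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str        # the tracked problem (step 1)
    owner: str       # who sees the output and acts on it (step 2)
    kpi: str
    baseline: float
    current: float

# Invented entries: one reliability problem, one resource-efficiency problem.
register = [
    UseCase("Unplanned haul-truck downtime", "Maintenance planning lead",
            "hours/month", baseline=40.0, current=31.0),
    UseCase("Concentrator water intensity", "Plant manager",
            "litres/tonne", baseline=0.85, current=0.79),
]

# Reviewed alongside other operational KPIs (step 3); lower is better for both here.
for uc in register:
    change = 100.0 * (uc.current - uc.baseline) / uc.baseline
    print(f"{uc.name} ({uc.owner}): {uc.kpi} {uc.baseline} -> {uc.current} ({change:+.1f}%)")
```

Even this much structure enforces the BHP pattern the article describes: no use case without an owner, and no owner without a number to review.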

(Image source: “Shovel View at a Strip Mining Coal” by rbglasson is licensed under CC BY-NC-SA 2.0.)


The post Mining business learnings for AI deployment appeared first on AI News.

]]>
Strong contractor belief in AI for industry-wide transformation https://www.artificialintelligence-news.com/news/construction-industry-ai-success-potential/ Tue, 16 Dec 2025 08:22:09 +0000 https://www.artificialintelligence-news.com/?p=111329 The construction industry generates colossal amounts of data, with much of it unused or locked in spreadsheets. AI is now changing this, enabling teams to accelerate decision-making, enhance margins, and improve project outcomes. According to new research from Dodge Construction Network (Dodge) and CMiC, the true transformative impact of AI is highlighted by contractors, with […]

The post Strong contractor belief in AI for industry-wide transformation appeared first on AI News.

]]>
The construction industry generates colossal amounts of data, much of it unused or locked in spreadsheets. AI is now changing this, enabling teams to accelerate decision-making, enhance margins, and improve project outcomes. According to new research from Dodge Construction Network (Dodge) and CMiC, contractors see AI as genuinely transformative, with 87% believing it will “meaningfully transform their business,” despite currently low adoption rates.

The research, entitled ‘AI for Contractors,’ found that automated proposal generation and progress tracking from site photos both reached a 92% effectiveness rating, while contract risk review achieved 85% effectiveness compared with previous, more traditional methods.

The report highlights how AI is allowing project managers to focus on strategic decisions rather than time-consuming administrative tasks. Finance teams are also benefiting from AI technology, shifting from historical reporting to predictive insights, while operations leaders are able to apply data-driven intelligence for improved project delivery. Rather than AI fully replacing human expertise, the report found it actually enhances human input.

“For decades, construction firms have lacked the tools to transform the data they’ve collected into action. AI-enabled solutions are changing that,” says Gord Rawlins, president and CEO of CMiC. “This research highlights the high-impact results contractors are achieving today.”

AI changing contractor roles

Surveyed contractors see AI as a catalyst in reshaping everyday aspects of their operations, enabling predictive insights rather than reacting to problems once they have occurred. This introduces wider benefits, like tighter cost controls, improved scheduling, and higher quality project delivery. In other words, improved overall outcomes.

A substantial 85% of contractors foresee less time spent on repetitive tasks, while 75% have faith that AI can help mine historical data to learn from previous projects. Rather than relying fully on AI, 70% said the technology helps them make better, more informed decisions thanks to insights that may otherwise not be present.

AI implementation remains low, but companies are preparing for wider adoption

Currently, AI adoption in the construction industry is low, despite awareness levels of 32% to 34%. This seems to be due to several reasons, including a lack of clear understanding, internal approvals, and software access. However, Dodge’s research discovered more than half of companies surveyed are strategically preparing for AI with pilot programmes and staff training for AI-related positions.

According to the report, 40% of companies have a set budget for AI, 38% are developing teams for implementation, 19% are adapting old workflows, and 51% are assessing AI changes.

Early adopters lead the way

Overall awareness of AI use in the industry is quite low, with just 20% to 50% of contractors knowing that certain management tasks already use AI, and very few actively using these functions. Nevertheless, early adopters of AI provided positive feedback: more than 70% said AI tools are hugely effective compared to more traditional methods, suggesting potential for rapid growth in AI use throughout the industry.

Security and accuracy lead concerns

The main concerns of adopting AI revolve around security and accuracy. The report reveals that 57% are worried about the accuracy of AI output, while 54% have doubts over the security of company data.

Internal resistance to change (44%) and implementation costs (41%) are also cited as key concerns, but perhaps surprisingly, just 21% expressed concern over job losses. 31% believe current data quality is not yet adequate to support AI analysis.

According to the report, larger contractors are likely to rely more on AI than smaller firms, thus are more concerned about data quality and reliability. For instance, 69% of larger contractors cited lack of reliability or accuracy of AI outputs as a major concern, compared to 54% of smaller or mid-size contractors.

Research data confirms that contractors are generally open to adopting AI, but the accuracy of AI outputs tend to stand in the way, as well as the desire for better tools, more information, and greater internal support.

17% of contractors said they do not sufficiently trust AI results, an issue that becomes more pronounced in sensitive areas. Distrust rises to 35% for AI handling payments, while 31% do not have faith in AI managing project budgets.

A major theme is the need for more understanding before using AI. On average, 21% of respondents said they want better insight into how AI works before considering using it, climbing to 31% for more complex tasks like safety risk assessments.

Contractors also believe they are limited by their current software capabilities, with an average of 19% reporting their software does not offer the AI functions they require. This increases to 33% for managing resources.

Internal approval remains a notable obstacle, with 22% saying their company has not yet approved the use of AI, despite personal interest. Another barrier is a lack of time or resources to effectively evaluate AI tools; 13% cited this as a main reason why AI has not yet been adopted.

Although there are obvious challenges to mass AI use in the construction industry – and therefore significant market opportunity – only 5% believe AI would not be beneficial or improve current methods. That indicates a resistance that stems from various concerns rather than a lack of perceived value.

Steve Jones, Senior Director, Industry Insights Analytics at Dodge, spoke on the findings.

“We designed this study to look at the use of AI in the digital tools already deployed by contractors because that may offer the best solution to the challenge of data quality. But it is also heartening to see that many contractors are aware of the key challenges and the need for a rigorous approach to successfully implementing these tools at their organisations,” said Jones.

Key interest in emerging AI functionalities

AI’s potential is clearly recognised, even if the industry’s readiness to adopt it isn’t quite matching the data. Certain areas are attracting the most attention when it comes to AI functions, like automated construction analysis, where 81% see potential benefits. 80% also show interest in intelligent permit submissions, while 79% believe in autonomous schedule and resource optimisation.

92% appreciate automated contract management and 76% recognise potential in AI-powered dynamic pricing. Although AI adoption remains limited, these strong numbers suggest the tide may soon be turning.

AI and the new age of the construction industry

The latest data suggests a strong openness, maybe even an eagerness, to AI adoption in the construction sector. However, better tools, clearer guidance, and more trustworthy outputs are just some of the areas that need to be addressed before interest becomes implementation.

“With high awareness, strong interest, and powerful validation from early adopters, contractors appear poised for significant expansion in their use of AI-enabled tools in meaningful ways,” said Steve Jones.

The industry is on a “tipping point for AI adoption” according to Jones. When companies start to provide clearer pathways for adoption, the move towards AI-powered construction workflows will undoubtedly accelerate rapidly, reshaping how projects are delivered forever.

(Image source: “Tianjin Construction Site.” by @yakobusan Jakob Montrasio is licensed under CC BY 2.0.)


The post Strong contractor belief in AI for industry-wide transformation appeared first on AI News.

]]>
AI in manufacturing set to unleash new era of profit https://www.artificialintelligence-news.com/news/ai-in-manufacturing-set-to-unleash-new-era-of-profit/ Wed, 03 Dec 2025 15:30:04 +0000 https://www.artificialintelligence-news.com/?p=111105 Manufacturing executives are wagering nearly half their modernisation budgets on AI, betting these systems will boost profit within two years. This aggressive capital allocation marks a definitive pivot. AI is now seen as the primary engine for financial performance. According to the Future-Ready Manufacturing Study 2025 by Tata Consultancy Services (TCS) and AWS, 88 percent […]

The post AI in manufacturing set to unleash new era of profit appeared first on AI News.

]]>
Manufacturing executives are wagering nearly half their modernisation budgets on AI, betting these systems will boost profit within two years.

This aggressive capital allocation marks a definitive pivot. AI is now seen as the primary engine for financial performance. According to the Future-Ready Manufacturing Study 2025 by Tata Consultancy Services (TCS) and AWS, 88 percent of manufacturers anticipate AI will capture at least five percent of operating margin. One in four expect returns exceeding 10 percent.

The money is there. The ambition is there. The plumbing, unfortunately, is not.

A disparity exists between financial forecasts and the reality of the factory floor. While spending on intelligent systems accelerates, the underlying data infrastructure remains brittle, and risk management strategies still rely on expensive manual buffers.

Pressure to extract value from AI for manufacturing

The pressure to extract cash value from tech stacks has never been higher. 75 percent of respondents expect AI to rank as a top-three contributor to operating margins by 2026. Consequently, organisations are funneling 51 percent of their transformation spending toward AI and autonomous systems over the next two years.

This spending eclipses other vital areas. Allocations for AI outpace workforce reskilling (19%) and cloud infrastructure modernisation (16%) by a wide margin. For CIOs, this imbalance signals a looming crisis: attempting to deploy advanced algorithms on shaky legacy foundations.

Anupam Singhal, President of Manufacturing at TCS, said: “Manufacturing is an industry defined by precision, reliability, and the relentless pursuit of performance. Today, that strength of foundation becomes multifold with AI in orchestrating decisions—delivering transformational business outcomes through greater predictability, stability, and control.

“At TCS, we see this as a defining opportunity to help manufacturers build resilient, adaptive, and future-ready enterprise ecosystems that can thrive in an era of intelligent autonomy.”

Analogue hedges in a digital era

Despite the heavy investment in predictive capabilities, operational behaviour betrays a lack of trust. When disruption hits, manufacturers aren’t leaning on the agility of their digital systems; they are reverting to physical safeguards.

Following recent disruptions, 61 percent of organisations increased their safety stock. Half opted for multisourcing logistics. Only 26 percent utilised scenario planning via digital twins to navigate volatility.

This is the disconnect. While AI promises dynamic inventory optimisation, a benefit cited by 49 percent of respondents, the prevailing instinct is to hoard inventory. Supply chain leaders are buying Ferraris but driving them like tractors. Bridging this gap requires moving from reactive safety measures to proactive and system-led responses.

Ozgur Tohumcu, General Manager of Automotive and Manufacturing at AWS, commented: “Manufacturers today are facing unprecedented pressure—from tight margins to volatile supply chains and workforce gaps. At AWS, we are revolutionising manufacturing through AI-powered autonomous operations, shifting from manual, reactive processes to intelligent, self-optimising systems that operate at scale.

“By embedding artificial intelligence into every layer of the operation and leveraging cloud-native architecture, manufacturers can move beyond simple automation to true autonomous decision-making where systems predict, adapt, and act independently with minimal human intervention. This enables not just faster response times, but fundamentally transforms operations with AI-driven predictability, resilience, and agility.”

Infrastructure debt

The primary obstacle to these financial returns isn’t the AI models; it’s the data they feed on. Only 21 percent of manufacturers claim to be “fully AI-ready” with clean, contextual, and unified data.

The majority (61%) operate with partial readiness, struggling with inconsistent quality across different plants. This fragmentation creates data silos that prevent algorithms from accessing the enterprise-wide inputs necessary for accurate decision-making.

Integration with legacy systems stands as the primary hurdle, cited by 54 percent of respondents. This “technical debt,” accumulated over decades of digitisation, makes it difficult to overlay modern autonomous agents on older operational technology.

Security also bites. Security and governance concerns top the list of plant-level obstacles at 52 percent. In an environment where a cyber-physical breach can halt production or cause physical harm, the risk appetite for autonomous intervention remains low.

The shift towards agentic AI in manufacturing

Despite the headwinds, the industry is charging toward agentic AI: systems capable of making decisions with limited human oversight.

Seventy-four percent of manufacturers expect AI agents to manage up to half of routine production decisions by 2028. More immediately, 66 percent of organisations already allow – or plan to allow within 12 months – AI agents to approve routine work orders without human sign-off.

This progression from “copilots” to independent agents capable of completing entire tasks fundamentally alters the workforce. While 89 percent of manufacturers expect AI-guided robotics to impact the workforce, the focus is on augmentation rather than displacement.

Productivity gains are currently concentrated in knowledge-intensive roles. Quality inspectors (49%) and IT support staff (44%) are seeing the fastest gains. Traditional production roles like maintenance technicians (29%) lag behind. Adoption is following a pattern of cognitive augmentation before addressing physical coordination.

As AI agents embed themselves across platforms, enterprise architects face a choice regarding orchestration. The market shows a strong aversion to vendor lock-in.

Sixty-three percent of manufacturers favour hybrid or multi-platform strategies over single-vendor solutions. Specifically, 33 percent plan to coordinate through multiple platform-native agents, while 30 percent prefer a hybrid model blending platform-native and custom orchestration. Only 13 percent are willing to anchor on a single foundational platform.

Converting the manufacturing industry’s AI investment to profit

To convert this massive capital outlay into actual profit, the C-suite needs to look past the hype.

First, fix the data. With only 21 percent of firms fully ready, the immediate priority must be modernisation rather than algorithm development. Without clean, unified data, high-value use cases in sustainability and predictive maintenance will fail to scale.

Second, leaders must bridge the AI trust gap. The reliance on safety stock indicates a lack of faith in digital signals. Staged autonomy is the answer – starting with administrative tasks like work orders, where 66 percent of manufacturers are already heading, before handing over complex supply chain decisions.
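Staged autonomy can be made concrete as a simple policy gate: the agent auto-approves only categories of work it is explicitly trusted with, and only below a cost ceiling, escalating everything else to a human. The sketch below is illustrative only – the categories, ceilings, and function names are hypothetical, not from any vendor's product:

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    order_id: str
    category: str          # e.g. "routine_maintenance", "supply_chain"
    cost_estimate: float   # in local currency

# Hypothetical policy: categories the agent may approve, each with a cost ceiling.
AUTONOMY_POLICY = {
    "routine_maintenance": 5_000.0,
    "consumables_reorder": 2_000.0,
}

def route_work_order(order: WorkOrder) -> str:
    """Auto-approve low-risk routine orders; escalate everything else."""
    ceiling = AUTONOMY_POLICY.get(order.category)
    if ceiling is not None and order.cost_estimate <= ceiling:
        return "auto_approved"
    return "escalate_to_human"
```

As trust builds, the policy table – not the code – is what expands: new categories and higher ceilings can be added gradually, keeping the hand-over auditable.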

Finally, avoid the monolithic trap. The data supports a multi-platform approach to maintain leverage and agility. Manufacturers are betting their future on AI, but realising those returns requires less focus on the “intelligence” of the models and more on the mundane work of cleaning data, integrating legacy equipment, and building workforce trust.

See also: Frontier AI research lab tackles enterprise deployment challenges


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI in manufacturing set to unleash new era of profit appeared first on AI News.

EY and NVIDIA to help companies test and deploy physical AI https://www.artificialintelligence-news.com/news/ey-and-nvidia-to-help-companies-test-and-deploy-physical-ai/ Wed, 03 Dec 2025 12:05:00 +0000 https://www.artificialintelligence-news.com/?p=111086
AI is moving deeper into the physical world, and EY is laying out a more structured way for companies to work with robots, drones, and other smart devices. The organisation is introducing a physical AI platform built with NVIDIA tools, opening a new EY.ai Lab in Georgia, and adding new leadership to guide its work in this field.

The platform uses NVIDIA Omniverse libraries, NVIDIA Isaac, and NVIDIA AI Enterprise software. EY says the setup gives organisations a clearer way to plan, test, and manage AI systems that operate in real environments, from factory robots to drones and edge devices.

Omniverse libraries support the creation of digital twins so firms can model and test systems before deployment. NVIDIA Isaac tools offer open models and simulation frameworks to design and validate AI-driven robots in detailed 3D settings. NVIDIA AI Enterprise provides the computing base needed to run heavier AI workloads.

EY describes the platform as built around three main areas:

  • AI-ready data: Synthetic data to mirror a wide range of physical scenarios.
  • Digital twins and robotics training: Tools that connect digital and physical systems, monitor performance in real time, and support operational continuity.
  • Responsible physical AI: Governance and controls that address safety, ethics, and compliance.

The platform is meant to support everything from early planning to long-term maintenance in sectors like industrials, energy, consumer, and health.
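The "AI-ready data" pillar – synthetic records that mirror a wide range of physical scenarios – can be illustrated with a minimal sketch. The field names and value ranges below are invented for illustration; a real pipeline would generate imagery and physics traces in simulation rather than flat records, but the principle of seeded, reproducible scenario generation is the same:

```python
import random

def synthetic_scenarios(n: int, seed: int = 42) -> list[dict]:
    """Generate simple synthetic 'physical scenario' records, e.g. to augment
    sparse real-world sensor data before robot training (illustrative fields)."""
    rng = random.Random(seed)  # seeded -> reproducible datasets
    scenarios = []
    for i in range(n):
        scenarios.append({
            "scenario_id": i,
            "ambient_temp_c": round(rng.uniform(5.0, 45.0), 1),
            "conveyor_speed_mps": round(rng.uniform(0.2, 2.0), 2),
            "obstacle_present": rng.random() < 0.15,  # rare-event injection
        })
    return scenarios
```

Seeding matters: it lets teams regenerate the exact dataset a model was trained on, which supports the governance and compliance controls in the third pillar.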

Raj Sharma, EY Global Managing Partner – Growth & Innovation, says physical AI is already “transforming how businesses in sectors operate and help create value,” adding that it brings more automation and can help lower operating costs. He expects the combination of EY’s industry experience and NVIDIA’s infrastructure to speed up how companies move “from experimentation to enterprise-scale deployment.”

NVIDIA’s John Fanelli notes that more enterprises are bringing robots and automation into real settings to address workforce changes and improve safety. He says the EY.ai Lab, supported by NVIDIA AI infrastructure, helps organisations “simulate, optimise and safely deploy robotics applications at enterprise scale,” which he views as part of the next phase of industrial AI.

New leadership and a dedicated physical AI lab

EY has also appointed Dr. Youngjun Choi as its Global Physical AI Leader. He will oversee robotics and physical AI work and help shape EY’s role as an advisor in this area.

Choi, who has nearly 20 years’ experience in robotics and AI, previously led the UPS Robotics AI Lab, where he worked on digital twins, robotics projects, and AI tools to modernise its network. Before that, he served as research faculty in Aerospace Engineering at the Georgia Institute of Technology, contributing to aerial robotics and autonomous systems.

A key part of his role is directing the newly opened EY.ai Lab in Alpharetta, Georgia – the first EY site focused on physical AI. The Lab includes robotics systems, sensors, and simulation tools so organisations can test ideas and build prototypes before deploying them at scale.

Joe Depa, EY Global Chief Innovation Officer, says his clients want better ways to use technology for decision-making and performance. He adds that physical AI requires strong data foundations and trust from the start. With Choi leading the Lab, Depa says EY teams are beginning to “get beyond the surface of what is possible” and set up the base for scalable operations.

At the Lab, organisations can:

  • Design and test physical AI systems in a virtual testbed.
  • Build solutions for humanoids, quadrupeds, and other next-generation robots.
  • Improve logistics, manufacturing, and maintenance with digital twins.

The new platform and Lab build on earlier collaboration between EY and NVIDIA, including an AI agent platform launched earlier this year. Both organisations plan to expand their physical AI work to areas like energy, health, and smart cities. They also aim to support automation projects that cut waste and help reduce environmental impact.

See also: Microsoft, NVIDIA, and Anthropic forge AI compute alliance



The post EY and NVIDIA to help companies test and deploy physical AI appeared first on AI News.

Manufacturing’s pivot: AI as a strategic driver https://www.artificialintelligence-news.com/news/manufacturings-pivot-ai-as-a-strategic-driver/ Tue, 25 Nov 2025 16:04:39 +0000 https://www.artificialintelligence-news.com/?p=110930
Manufacturers today are working against rising input costs, labour shortages, supply-chain fragility, and pressure to offer more customised products. AI is becoming an important part of a response to those pressures.

When enterprise strategy depends on AI

Most manufacturers seek to reduce cost while improving throughput and quality. AI supports these aims by predicting equipment failures, adjusting production schedules, and analysing supply-chain signals. A Google Cloud survey found that more than half of manufacturing executives are using AI agents in back-office areas like planning and quality. (https://cloud.google.com/transform/roi-ai-the-next-wave-of-ai-in-manufacturing)

The shift matters because the use of AI links directly to measurable business outcomes. Reduced downtime, lower scrap, better OEE (overall equipment effectiveness), and improved customer responsiveness all feed directly into enterprise strategy and competitiveness in the market.
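OEE has a standard definition – availability × performance × quality – so it makes a good, unambiguous KPI for AI initiatives. A minimal calculation (the example figures are illustrative):

```python
def oee(planned_time_min: float, run_time_min: float,
        ideal_cycle_time_min: float, total_count: int, good_count: int) -> float:
    """Overall Equipment Effectiveness = availability * performance * quality."""
    availability = run_time_min / planned_time_min              # uptime ratio
    performance = (ideal_cycle_time_min * total_count) / run_time_min  # speed ratio
    quality = good_count / total_count                          # first-pass yield
    return availability * performance * quality

# Example shift: 480 min planned, 400 min actually running, 0.5 min ideal cycle,
# 700 units produced of which 665 are good ->
# availability ~0.833, performance 0.875, quality 0.95, OEE ~0.693
```

Because each factor isolates a different loss (downtime, slow cycles, scrap), tracking them separately shows which lever an AI deployment is actually moving.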

What recent industry experience reveals

  1. Motherson Technology Services reported major gains – 25-30% maintenance-cost reduction, 35-45% downtime reduction, and 20-35% higher production efficiency after adopting agent-based AI, data-platform consolidation, and workforce-enablement initiatives.

  2. ServiceNow has described how manufacturers unify workflows, data, and AI on common platforms. It reported that just over half of advanced manufacturers have formal data-governance programmes in support of their AI initiatives.

These instances show the direction of travel: AI is being deployed inside operations – not in pilots, but in workflows.

What cloud and IT leaders should consider

Data architecture

Manufacturing systems depend on low-latency decisions, especially for maintenance and quality. Leaders must work out how to combine edge devices (often OT systems with supporting IT infrastructure) with cloud services. Microsoft’s maturity-path guidance highlights that data silos and legacy equipment remain a barrier, so standardising how data is collected, stored, and shared is often the first step for many future-facing manufacturing and engineering businesses.

Use-case sequencing

ServiceNow advises starting small and scaling AI roll-outs gradually. Focusing on two or three high-value use-cases helps teams avoid the “pilot trap”. Predictive maintenance, energy optimisation, and quality inspection are strong starting points because benefits are relatively easy to measure.
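Predictive maintenance is a good first use-case partly because a first-pass detector can be very simple – for example, flagging a sensor reading that drifts several standard deviations away from its recent history. The sketch below is a hedged illustration, not a production algorithm; the window size and threshold are arbitrary choices:

```python
from statistics import mean, stdev

def vibration_alert(readings: list[float], window: int = 20,
                    threshold: float = 3.0) -> bool:
    """Flag the latest reading if it deviates more than `threshold` standard
    deviations from the trailing window's mean (simple z-score check)."""
    history, latest = readings[-window - 1:-1], readings[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # flat signal: nothing to compare against
    return abs(latest - mu) / sigma > threshold
```

Benefits are easy to measure here: each alert either preceded a failure or it didn't, giving the precision/recall numbers needed to justify wider roll-out.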

Governance and security

Connecting operational technology equipment with IT and cloud systems increases cyber-risk, as some OT systems were not designed to be exposed to the wider internet. Leaders should define data-access rules and monitoring requirements carefully. In general, AI governance should not wait until later phases, but begin in the first pilot.

Workforce and skills

The human factor remains important. Operators’ trust in AI-supported systems cannot be taken for granted, and confidence in AI-underpinned tools has to be built deliberately. According to Automation.com, manufacturing faces persistent skilled-labour shortages, making upskilling programmes an integral part of modern deployments.

Vendor-ecosystem neutrality

The ecosystem of many manufacturing environments includes IoT sensors, industrial networks, cloud platforms, and workflow tools operating in the back office and on the facility floor. Leaders should prioritise interoperability and avoid lock-in to any one provider. The aim is not to adopt a single vendor’s approach but to build an architecture that supports long-term flexibility, honed to the individual organisation’s workflows.

Measuring impact

Manufacturers should define metrics – downtime hours, maintenance-cost reduction, throughput, yield – and monitor them continuously. The Motherson results provide realistic benchmarks and show the outcomes possible from careful measurement.

The realities: beyond the hype

Despite rapid progress, challenges remain. Skills shortages slow deployment, legacy machinery produces fragmented data, and costs are sometimes difficult to forecast. Sensors, connectivity, integration work, and data-platform upgrades all add up. Additionally, security issues grow as production systems become more connected. Finally, AI should coexist with human expertise; operators, engineers, and data scientists behind the scenes need to work together, not in parallel.

However, recent publications show these challenges are manageable with the right management and operational structures. Clear governance, cross-functional teams, and scalable architectures make AI easier to deploy and sustain.

Strategic recommendations for leaders

  1. Tie AI initiatives to business goals. Link work to KPIs like downtime, scrap, and cost per unit.
  2. Adopt a careful hybrid edge-cloud mix. Keep real-time inference close to machines while using cloud platforms for training and analytics.
  3. Invest in people. Mixed teams of domain experts and data scientists are important, and training should be offered for operators and management.
  4. Embed security early. Treat OT and IT as a unified environment, assuming zero-trust.
  5. Scale gradually. Prove value in one plant, then expand.
  6. Choose open ecosystem components. Open standards allow a company to remain flexible and avoid vendor lock-in.
  7. Monitor performance. Adjust models and workflows as conditions change, according to results measured against pre-defined metrics.

Conclusion

Internal AI deployment is now an important part of manufacturing strategy. Recent blog posts from Motherson, Microsoft, and ServiceNow show that manufacturers are gaining measurable benefits by combining data, people, workflows, and technology. The path is not simple, but with clear governance, the right architecture, an eye to security, business-focussed projects, and a strong focus on people, AI becomes a practical lever for competitiveness.

(Image source: “Jelly Belly Factory Floor” by el frijole is licensed under CC BY-NC-SA 2.0. )




The post Manufacturing’s pivot: AI as a strategic driver appeared first on AI News.

WorldGen: Meta reveals generative AI for interactive 3D worlds https://www.artificialintelligence-news.com/news/worldgen-meta-generative-ai-for-interactive-3d-worlds/ Fri, 21 Nov 2025 16:35:32 +0000 https://www.artificialintelligence-news.com/?p=110824
With its WorldGen system, Meta is shifting the use of generative AI for 3D worlds from creating static imagery to fully interactive assets.

The main bottleneck in creating immersive spatial computing experiences – whether for consumer gaming, industrial digital twins, or employee training simulations – has long been the labour-intensive nature of 3D modelling. The production of an interactive environment typically requires teams of specialised artists working for weeks.

WorldGen, according to a new technical report from Meta’s Reality Labs, is capable of generating traversable and interactive 3D worlds from a single text prompt in approximately five minutes.

While the technology is currently research-grade, the WorldGen architecture addresses specific pain points that have prevented generative AI from being useful in professional workflows: functional interactivity, engine compatibility, and editorial control.

Generative AI environments become truly interactive 3D worlds

The primary failing of many existing text-to-3D models is that they prioritise visual fidelity over function. Approaches such as Gaussian splatting create photorealistic scenes that look impressive in a video but often lack the underlying physical structure required for a user to interact with the environment. Assets lacking collision data or ramp physics hold little to no value for simulation or gaming.

WorldGen diverges from this path by prioritising “traversability”. The system generates a navigation mesh (navmesh) – a simplified polygon mesh that defines walkable surfaces – alongside the visual geometry. This ensures that a prompt such as “medieval village” produces not just a collection of houses, but a spatially-coherent layout where streets are clear of obstructions and open spaces are accessible.
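The traversability guarantee ultimately reduces to path-finding over walkable cells. A toy check – here a plain breadth-first search over a walkability grid, far simpler than a real polygonal navmesh but illustrating the same contract – verifies that one location can actually be reached from another:

```python
from collections import deque

def traversable(walkable: list[list[int]],
                start: tuple[int, int], goal: tuple[int, int]) -> bool:
    """BFS over a grid where 1 = walkable cell, 0 = blocked cell."""
    rows, cols = len(walkable), len(walkable[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and walkable[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

A generator that validates its output against a check like this can guarantee streets stay clear of obstructions; one that only optimises for appearance cannot.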

For enterprises, this distinction is vital. A digital twin of a factory floor or a safety training simulation for hazardous environments requires valid physics and navigation data.

Meta’s approach ensures the output is “game engine-ready,” meaning the assets can be exported directly into standard platforms like Unity or Unreal Engine. This compatibility allows technical teams to integrate generative workflows into existing pipelines without needing specialised rendering hardware that other methods, such as radiance fields, often demand.

The four-stage production line of WorldGen

Meta’s researchers have structured WorldGen as a modular AI pipeline that mirrors traditional development workflows for creating 3D worlds.

The process begins with scene planning. An LLM acts as a structural engineer, parsing the user’s text prompt to generate a logical layout. It determines the placement of key structures and terrain features, producing a “blockout” – a rough 3D sketch – that guarantees the scene makes physical sense.

The subsequent “scene reconstruction” phase builds the initial geometry. The system conditions the generation on the navmesh, ensuring that as the AI “hallucinates” details, it does not inadvertently place a boulder in a doorway or block a fire exit path.

“Scene decomposition,” the third stage, is perhaps the most relevant for operational flexibility. The system uses a method called AutoPartGen to identify and separate individual objects within the scene – distinguishing a tree from the ground, or a crate from a warehouse floor.

In many “single-shot” generative models, the scene is a single fused lump of geometry. By separating components, WorldGen allows human editors to move, delete, or modify specific assets post-generation without breaking the entire world.

For the last step, “scene enhancement” polishes the assets. The system generates high-resolution textures and refines the geometry of individual objects to ensure visual quality holds up when close.
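The four stages form a simple linear pipeline: each pass enriches the same scene before handing it on. The functions below are illustrative stand-ins for Meta's actual components – they operate on a toy scene dictionary, not real geometry – but they capture the modular hand-off the report describes:

```python
def plan_scene(prompt: str) -> dict:
    # Stage 1: LLM-style layout planning produces a rough "blockout".
    return {"prompt": prompt, "blockout": ["terrain", "structures"], "navmesh": []}

def reconstruct(scene: dict) -> dict:
    # Stage 2: build geometry conditioned on the navmesh so walkways stay clear.
    scene["navmesh"] = ["walkable_streets", "open_squares"]
    scene["geometry"] = "fused_mesh"
    return scene

def decompose(scene: dict) -> dict:
    # Stage 3: split the fused mesh into individually editable objects.
    scene["objects"] = ["house_01", "tree_01", "crate_01"]
    return scene

def enhance(scene: dict) -> dict:
    # Stage 4: high-resolution textures and per-object refinement.
    scene["textures"] = "high_res"
    return scene

def worldgen(prompt: str) -> dict:
    scene = plan_scene(prompt)
    for stage in (reconstruct, decompose, enhance):
        scene = stage(scene)
    return scene
```

The modularity is the point: because stage three yields separate objects, a human editor can intervene after any stage without regenerating the whole world.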

Screenshot of Meta WorldGen in action.

Operational realism of using generative AI to create 3D worlds

Implementing such technology requires an assessment of current infrastructure. WorldGen’s outputs are standard textured meshes. This choice avoids the vendor lock-in associated with proprietary rendering techniques. It means that a logistics firm building a VR training module could theoretically use this tool to prototype layouts rapidly, then hand them over to human developers for refinement.
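"Standard textured meshes" means plain interchange formats any engine can import – and the bar for those is low. As an illustration, a minimal Wavefront OBJ writer (vertex lines, then 1-indexed face lines) fits in a few lines; the helper below is a generic sketch, not part of WorldGen:

```python
def write_obj(path: str, vertices: list[tuple[float, float, float]],
              faces: list[tuple[int, int, int]]) -> None:
    """Write a minimal Wavefront OBJ file: 'v x y z' records followed by
    'f a b c' triangle records (OBJ face indices are 1-based)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a} {b} {c}\n")

# A single triangle on the ground plane:
# write_obj("tri.obj", [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)],
#           [(1, 2, 3)])
```

Because the format is open text, assets flow into Unity, Unreal, or a custom toolchain without proprietary runtimes – the vendor-neutrality point made above.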

Creating a fully textured, navigable scene takes roughly five minutes on sufficient hardware. For studios or departments accustomed to multi-day turnaround times for basic environment blocking, this efficiency gain is quite literally world-changing.

However, the technology does have limitations. The current iteration relies on generating a single reference view, which restricts the scale of the worlds it can produce. It cannot yet natively generate sprawling open worlds spanning kilometres without stitching multiple regions together, which risks visual inconsistencies.

The system also currently represents each object independently without reuse, which could lead to memory inefficiencies in very large scenes compared to hand-optimised assets where a single chair model is repeated fifty times. Future iterations aim to address larger world sizes and lower latency.

Comparing WorldGen against other emerging technologies

Evaluating this approach against other emerging AI technologies for creating 3D worlds offers clarity. World Labs, a competitor in the space, employs a system called Marble that uses Gaussian splats to achieve high photorealism. While visually striking, these splat-based scenes often degrade in quality when the camera moves away from the centre and can drop in fidelity just 3-5 metres from the viewpoint.

Meta’s choice to output mesh-based geometry positions WorldGen as a tool for functional application development rather than just visual content creation. It supports physics, collisions, and navigation natively – features that are non-negotiable for interactive software. Consequently, WorldGen can generate scenes spanning 50×50 metres that maintain geometric integrity throughout.

For leaders in the technology and creative sectors, the arrival of systems like WorldGen brings exciting new possibilities. Organisations should audit their current 3D workflows to identify where “blockout” and prototyping absorb the most resources. Generative tools are best deployed here to accelerate iteration, rather than attempting to replace final-quality production immediately.

Concurrently, technical artists and level designers will need to transition from placing every vertex manually to prompting and curating AI outputs. Training programmes should focus on “prompt engineering for spatial layout” and editing AI-generated assets for 3D worlds. Finally, while the output is standard, the generation process requires plenty of compute. Assessing on-premise versus cloud rendering capabilities will be necessary for adoption.

Generative 3D serves best as a force multiplier for structural layout and asset population rather than a total replacement for human creativity. By automating the foundational work of building a world, enterprise teams can focus their budgets on the interactions and logic that drive business value.

See also: How the Royal Navy is using AI to cut its recruitment workload




The post WorldGen: Meta reveals generative AI for interactive 3D worlds appeared first on AI News.
