Interviews - AI News
https://www.artificialintelligence-news.com/categories/features/interviews/

Apptio: Why scaling intelligent automation requires financial rigour
https://www.artificialintelligence-news.com/news/apptio-why-scaling-intelligent-automation-requires-financial-rigour/
Tue, 03 Feb 2026

Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.

The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.


“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.

This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”

The unit economics of scaling intelligent automation

Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.

“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”

Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.

To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
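The unit-economics check described above can be sketched in a few lines. All figures here are hypothetical and the structure is illustrative, not a real FinOps tool:

```python
# Sketch of the unit-economics check described above: flag automations whose
# cost per transaction rises as volume grows. All figures are hypothetical.

def unit_cost(total_cost: float, transactions: int) -> float:
    """Cost per transaction for a given period."""
    return total_cost / transactions

# Monthly (cost, transaction volume) samples for one automated process.
samples = [(12_000, 40_000), (18_000, 75_000), (30_000, 160_000)]

costs = [unit_cost(c, n) for c, n in samples]

# Effective scaling: unit cost should fall as volume grows.
scales_well = all(a > b for a, b in zip(costs, costs[1:]))
print(scales_well)  # True here: 0.30 -> 0.24 -> 0.1875
```

If `scales_well` were `False` for a growing customer base, that would be the flawed-business-model signal Holmes describes.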

Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”

However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”

Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.

“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
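The deploy-time governance idea can be sketched as a simple policy gate: a deployment request carries a cost estimate, and the policy is checked before anything is provisioned. The request fields and budget thresholds below are invented for illustration, not any real Apptio or Terraform API:

```python
# Sketch of a pre-deployment cost gate: a deployment request carries a cost
# estimate, and policy is enforced before anything is provisioned. The
# policy thresholds and request fields are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class DeployRequest:
    service: str
    environment: str
    monthly_cost_estimate: float  # e.g. produced by a plan-time cost tool

# Policy: per-environment monthly budget ceilings (hypothetical figures).
BUDGET_CEILINGS = {"dev": 500.0, "staging": 2_000.0, "prod": 10_000.0}

def approve(request: DeployRequest) -> bool:
    """Allow deployment only if the estimate fits the environment budget."""
    ceiling = BUDGET_CEILINGS.get(request.environment, 0.0)
    return request.monthly_cost_estimate <= ceiling

print(approve(DeployRequest("invoice-bot", "dev", 350.0)))      # True
print(approve(DeployRequest("invoice-bot", "prod", 14_000.0)))  # False
```

Running this check in a CI pipeline, before apply, is what turns cost management from whack-a-mole remediation into "deploying the right things at the right time".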

When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.

“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”

The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.
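The roll-up described above can be sketched as two mapping layers: resources to IT towers, and towers to business capabilities. The mappings and figures are invented for illustration and do not reproduce the actual TBM taxonomy:

```python
# Illustrative sketch of a TBM-style roll-up: technical resource costs map
# to IT towers, and towers map to business capabilities. The mappings and
# figures are invented for illustration.

resource_costs = {"vm-cluster": 40_000, "object-storage": 8_000, "dba-labour": 22_000}

resource_to_tower = {
    "vm-cluster": "Compute",
    "object-storage": "Storage",
    "dba-labour": "Data Management",
}
tower_to_capability = {
    "Compute": "Claims Processing",
    "Storage": "Claims Processing",
    "Data Management": "Customer Analytics",
}

def roll_up(costs, r2t, t2c):
    """Aggregate resource costs up through towers to business capabilities."""
    capability_cost = {}
    for resource, cost in costs.items():
        capability = t2c[r2t[resource]]
        capability_cost[capability] = capability_cost.get(capability, 0) + cost
    return capability_cost

print(roll_up(resource_costs, resource_to_tower, tower_to_capability))
# {'Claims Processing': 48000, 'Customer Analytics': 22000}
```

The business user sees only the capability-level bill; the taxonomy carries the detail of which underlying costs drive it.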

“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”

Addressing legacy debt and budgeting for the long-term

Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”

A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.

“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”

In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
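The comparison Holmes is making can be reduced to simple arithmetic: the legacy system's true TCO includes the automation wrappers around it. The line items and figures below are invented for illustration:

```python
# Sketch of the TCO comparison described above: the real cost of a legacy
# system includes the automation layers wrapped around it. Figures invented.

legacy_tco = {
    "licences": 120_000,
    "infrastructure": 60_000,
    "support_labour": 90_000,
    # The wrappers that keep the old system usable:
    "automation_layers": 110_000,
}

replacement_tco = {"licences": 150_000, "infrastructure": 70_000, "support_labour": 50_000}

keep_cost = sum(legacy_tco.values())          # 380000
replace_cost = sum(replacement_tco.values())  # 270000
print("retain" if keep_cost <= replace_cost else "modernise")  # modernise
```

Without the `automation_layers` line, retaining the legacy system would look cheaper; including it flips the decision, which is exactly the "suddenly realise" moment Holmes describes.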

Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.

Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.

“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.

Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.

IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.

See also: Klarna backs Google UCP to power AI agent payments


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Apptio: Why scaling intelligent automation requires financial rigour appeared first on AI News.

Franny Hsiao, Salesforce: Scaling enterprise AI
https://www.artificialintelligence-news.com/news/franny-hsiao-salesforce-scaling-enterprise-ai/
Wed, 28 Jan 2026

Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance.

Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.

The ‘pristine island’ problem of scaling enterprise AI

Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale.


“The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end-to-end governance from the start,” Hsiao explains.

“Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.”

When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable—and, more importantly, untrustworthy.”

Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”

Engineering for perceived responsiveness

As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency.

Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao.

“This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.”
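The general pattern behind progressive delivery is straightforward to sketch: emit partial output as it becomes available instead of blocking until the full answer is ready. This is a minimal illustration of the pattern, not any specific Salesforce or Agentforce API:

```python
# Minimal sketch of progressive response delivery: stream partial output to
# the user while a slow computation continues, instead of blocking until
# the full answer is ready. Illustrates the general pattern only.

from typing import Iterator

def reason_in_chunks(question: str) -> Iterator[str]:
    """Stand-in for a reasoning engine that yields tokens as they are ready."""
    for token in ["Checking ", "policy... ", "Answer: ", "approved."]:
        # In a real system each chunk arrives after some compute delay.
        yield token

def stream_to_user(question: str) -> str:
    shown = []
    for chunk in reason_in_chunks(question):
        shown.append(chunk)  # render immediately: perceived latency drops
        print(chunk, end="", flush=True)
    print()
    return "".join(shown)

full = stream_to_user("Can this claim be auto-approved?")
```

The user starts reading after the first chunk, so perceived latency is the time to first token rather than the time to the complete response.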

Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust.

“This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”

Offline intelligence at the edge

For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao.

Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength.

“A technician can photograph a faulty part, error code, or serial number while offline. An on-device LLM can then identify the asset or error and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao.

Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.”
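The offline-first pattern described here can be sketched as a local work queue that drains to the cloud on reconnect. Class and field names are illustrative, not Salesforce's implementation:

```python
# Sketch of the offline-first pattern described above: work is recorded
# locally while disconnected, then synced to the cloud once connectivity
# returns. Class and field names are illustrative.

class OfflineWorkQueue:
    def __init__(self):
        self.pending = []  # records captured while offline
        self.cloud = []    # stand-in for the cloud source of truth

    def record(self, item: dict) -> None:
        """Capture work locally regardless of connectivity."""
        self.pending.append(item)

    def sync(self) -> int:
        """On reconnect, push everything pending to the cloud."""
        synced = len(self.pending)
        self.cloud.extend(self.pending)
        self.pending.clear()
        return synced

queue = OfflineWorkQueue()
queue.record({"asset": "pump-7", "fault": "E42", "steps": "replaced seal"})
queue.record({"asset": "valve-3", "fault": "E07", "steps": "recalibrated"})
print(queue.sync())        # 2
print(len(queue.pending))  # 0
```

The key property is that `record` never depends on connectivity; only `sync` does, which is the "heavy lifting" deferred until a connection returns.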

Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”

High-stakes gateways

Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.”

Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”:

“This includes specific action categories, including any ‘CUD’ (Creating, Updating, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.”
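A high-stakes gateway reduces to a policy check before execution: certain action categories always pause for human confirmation. The category names follow the interview; the code structure is an illustrative sketch, not Salesforce's implementation:

```python
# Sketch of a 'high-stakes gateway' policy: certain action categories
# always require human confirmation before an agent may execute them.

HIGH_STAKES = {"create", "update", "delete", "customer_contact", "critical_decision"}

def requires_human(action_category: str) -> bool:
    """True if the agent must pause for human confirmation."""
    return action_category in HIGH_STAKES

def execute(action_category: str, human_approved: bool = False) -> str:
    if requires_human(action_category) and not human_approved:
        return "blocked: awaiting human confirmation"
    return "executed"

print(execute("read"))                         # executed
print(execute("delete"))                       # blocked: awaiting human confirmation
print(execute("delete", human_approved=True))  # executed
```

Low-stakes reads pass straight through; the gateway only inserts a human where the action is irreversible or exploitable.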

This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation.

Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic.

“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao.
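A turn-by-turn trace record of the kind described might look like the sketch below. The field names are assumptions drawn from the list in the quote, not the actual STDM schema:

```python
# Illustrative shape of a turn-by-turn trace record: one entry per agent
# step, capturing inputs, outputs, timing, and errors. Field names are
# assumptions, not the actual Session Tracing Data Model schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TurnTrace:
    session_id: str
    turn: int
    user_input: str
    planner_step: str
    tool_call: Optional[str] = None
    tool_output: Optional[str] = None
    latency_ms: int = 0
    error: Optional[str] = None

trace = TurnTrace(
    session_id="sess-001",
    turn=1,
    user_input="What is my order status?",
    planner_step="lookup_order",
    tool_call="orders.get",          # hypothetical tool name
    tool_output="shipped",
    latency_ms=420,
)
print(trace.error is None)  # True: a healthy turn records no error
```

Aggregating records like these per session is what makes the adoption, optimisation, and health-monitoring views possible.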

This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking.

“Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.

Standardising agent communication

As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need common language,” argues Hsiao.

Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent to Agent Protocol).

“We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.”

However, communication is useless if the agents interpret data differently. To solve for fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”

The future enterprise AI scaling bottleneck: agent-ready data

Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult.

Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.”

“Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes.

Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts.

See also: Databricks: Enterprise AI adoption shifts to agentic systems


The post Franny Hsiao, Salesforce: Scaling enterprise AI appeared first on AI News.

How Standard Chartered runs AI under privacy rules
https://www.artificialintelligence-news.com/news/how-standard-chartered-runs-ai-under-privacy-rules/
Wed, 28 Jan 2026

For banks trying to put AI into real use, the hardest questions often come before any model is trained. Can the data be used at all? Where is it allowed to be stored? Who is responsible once the system goes live? At Standard Chartered, these privacy-driven questions now shape how AI systems are built and deployed at the bank.

For global banks operating in many jurisdictions, these early decisions are rarely straightforward. Privacy rules differ by market, and the same AI system may face very different constraints depending on where it is deployed. At Standard Chartered, this has pushed privacy teams into a more active role in shaping how AI systems are designed, approved, and monitored in the organisation.

“Data privacy functions have become the starting point of most AI regulations,” says David Hardoon, Global Head of AI Enablement at Standard Chartered. In practice, that means privacy requirements shape the type of data that can be used in AI systems, how transparent those systems need to be, and how they are monitored once they are live.

Privacy shaping how AI runs

The bank is already running AI systems in live environments. The transition from pilots brings practical challenges that are easy to underestimate early on. In small trials, data sources are limited and well understood. In production, AI systems often pull data from many upstream platforms, each with its own structure and quality issues. “When moving from a contained pilot into live operations, ensuring data quality becomes more challenging with multiple upstream systems and potential schema differences,” Hardoon says.


Privacy rules add further constraints. In some cases, real customer data cannot be used to train models. Instead, teams may rely on anonymised data, which can affect how quickly systems are developed or how well they perform. Live deployments also operate at a much larger scale, increasing the impact of any gaps in controls. As Hardoon puts it, “As part of responsible and client-centric AI adoption, we prioritise adhering to principles of fairness, ethics, accountability, and transparency as data processing scope expands.”

Geography and regulation decide where AI works

Where AI systems are built and deployed is also shaped by geography. Data protection laws vary across regions, and some countries impose strict rules on where data must be stored and who can access it. These requirements play a direct role in how Standard Chartered deploys AI, particularly for systems that rely on client or personally identifiable information.

“Data sovereignty is often a key consideration when operating in different markets and regions,” Hardoon says. In markets with data localisation rules, AI systems may need to be deployed locally, or designed so that sensitive data does not cross borders. In other cases, shared platforms can be used, provided the right controls are in place. This results in a mix of global and market-specific AI deployments, shaped by local regulation, not a single technical preference.

The same trade-offs appear in decisions about centralised AI platforms versus local solutions. Large organisations often aim to share models, tools, and oversight across markets to reduce duplication. Privacy laws do not always block this approach. “In general, privacy regulations do not explicitly prohibit transfer of data, but rather expect appropriate controls to be in place,” Hardoon says.

There are limits: some data cannot move across borders at all, and certain privacy laws apply beyond the country where data was collected. The details can restrict which markets a central platform can serve and where local systems remain necessary. For banks, this often leads to a layered setup, with shared foundations combined with localised AI use cases where regulation demands it.

Human oversight remains central

As AI becomes more embedded in decision-making, questions around explainability and consent grow harder to avoid. Automation may speed up processes, but it does not remove responsibility. “Transparency and explainability have become more crucial than before,” Hardoon says. Even when working with external vendors, accountability remains internal. This has reinforced the need for human oversight in AI systems, particularly where outcomes affect customers or regulatory obligations.

People also play a larger role in privacy risk than technology alone. Processes and controls can be well designed, but they depend on how staff understand and handle data. “People remain the most important factor when it comes to implementing privacy controls,” Hardoon says. At Standard Chartered, this has pushed a focus on training and awareness, so teams know what data can be used, how it should be handled, and where the boundaries lie.

Scaling AI under growing regulatory scrutiny requires making privacy and governance easier to apply in practice. One approach the bank is taking is standardisation. By creating pre-approved templates, architectures, and data classifications, teams can move faster without bypassing controls. “Standardisation and re-usability are important,” Hardoon explains. Codifying rules around data residency, retention, and access helps turn complex requirements into clearer components that can be reused in AI projects.
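Codifying residency and access rules turns regulation into reusable components that project templates can check automatically. The sketch below illustrates the idea; the per-market rules are invented for illustration, not real regulation or Standard Chartered policy:

```python
# Sketch of codified residency rules as reusable components: each market's
# constraints become data that project templates can check automatically.
# Market rules here are invented for illustration, not real regulation.

RESIDENCY_RULES = {
    "SG": {"local_storage_required": False, "cross_border_allowed": True},
    "IN": {"local_storage_required": True, "cross_border_allowed": False},
}

def deployment_options(market: str) -> list[str]:
    """Return permissible deployment patterns for a market."""
    rules = RESIDENCY_RULES.get(market)
    if rules is None:
        return ["manual-review"]  # no codified rule yet: escalate to humans
    options = ["local"]
    if rules["cross_border_allowed"] and not rules["local_storage_required"]:
        options.append("shared-platform")
    return options

print(deployment_options("SG"))  # ['local', 'shared-platform']
print(deployment_options("IN"))  # ['local']
```

Teams consult the codified rule instead of re-deriving the regulation per project, which is what lets them move faster without bypassing controls.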

As more organisations move AI into everyday operations, privacy is not just a compliance hurdle. It is shaping how AI systems are built, where they run, and how much trust they can earn. In banking, that shift is already influencing what AI looks like in practice – and where its limits are set.


See also: The quiet work behind Citi’s 4,000-person internal AI rollout


The post How Standard Chartered runs AI under privacy rules appeared first on AI News.

Expereo: Enterprise connectivity amid AI surge with ‘visibility at the speed of life’
https://www.artificialintelligence-news.com/news/expereo-enterprise-connectivity-amid-ai-surge-with-visibility-at-the-speed-of-life/
Mon, 26 Jan 2026

AI continues to reshape technology and business, yet for the network, enterprise connectivity in the AI age means being always-on and extra vigilant about sovereignty and security.

This means that speed is not the only requirement. As Julian Skeels, chief digital officer at Expereo notes, it is more about ‘certainty.’ “AI workloads are distributed, they’re continuous, they’re incredibly latency-sensitive. Inference, monitoring, retrieval and remediation never stop, so that changes the network’s role,” says Skeels.

“In the world of AI, networking actually becomes a system dependency,” he adds. “When the network degrades, the application degrades immediately.

“An AI-ready network needs to make data movement deterministic. It’s not just about it being fast; it’s about it being predictable, and observable, and governable, and resilient – and to do all those things under continual change.”

Many CIOs, however, are struggling right now with what Skeels describes as ‘connectivity everywhere but visibility nowhere.’

“They’re dealing with hybrid networks, multiple clouds, multiple providers and portals that create a constant operational drag to their teams,” says Skeels. “What they want is clarity and control – not more tools.”

Skeels arrived at Expereo last year with a wealth of cross-industry experience in product and digital transformation initiatives. He found an industry ripe for accelerative change, and a company determined to lead the way and ensure that pricing global connectivity takes minutes rather than weeks.

“When I came to Expereo, I saw that global connectivity has, I would say, largely resisted real digital transformation for a long time,” notes Skeels. “Most customers will still experience it as slow, and manual, and opaque, and fragmented across the dozens of providers and portals they need to work with.

“We believe, though, that with emerging technologies such as agentic AI, that’s finally changing,” adds Skeels. “Our ambition here is to make global connectivity as simple, and immediate, and transparent as cloud computing is for our customers.”

Enabling such change for customers requires that mix of speed and visibility – and this is where the expereoOne platform comes in, to provide what the company calls ‘visibility at the speed of life’ and give customers a single, global view of what is being deployed, how it is performing, and what it costs. Beyond visibility, customers also need proactivity, as Skeels explains. “We’re deeply integrated into our customers’ order management, their ITSM, their ERP systems, which makes working with Expereo at scale absolutely seamless,” he says.

“The key point is that better visibility isn’t about more dashboards. It’s about connecting network behaviour to their business outcomes in terms of resilience, security experience, and cost.”

Skeels is speaking at the Digital Transformation Expo Global on February 4-5 on designing the AI-ready network – and his session promises to subvert the usual advice for those in attendance. “I want to challenge a few things,” notes Skeels. “I want to ask people to consider even unlearning things they’ve learned in the past.

“A lot of what we’ve taken for granted about networks no longer holds in an AI world.”



The post Expereo: Enterprise connectivity amid AI surge with ‘visibility at the speed of life’ appeared first on AI News.

Allister Frost: Tackling workforce anxiety for AI integration success
https://www.artificialintelligence-news.com/news/allister-frost-tackling-workforce-anxiety-for-ai-integration-success/
Tue, 13 Jan 2026

Navigating workforce anxiety remains a primary challenge for leaders as AI integration defines modern enterprise success.

For enterprise leaders, deploying AI is less a technical hurdle than a complex exercise in change management. The reality for many organisations is that, while algorithms offer efficiency, the human element dictates the speed of adoption.

Data from the TUC indicates that 51 percent of UK adults are concerned about the impact of AI and new technologies on their job. This anxiety creates a tangible risk to ROI; resistance halts the innovation leaders seek to foster.

Allister Frost, a former Microsoft leader and expert on business transformation, argues this friction stems from a misunderstanding of the technology’s capability.

Address the misconception of true intelligence

A common error in corporate strategy treats generative AI and Large Language Models (LLMs) as autonomous agents rather than data processors. This anthropomorphism drives the fear that machines will make human cognition obsolete.


“The greatest misconception is that AI is as intelligent as its name suggests and can perform human-like tasks,” Frost notes. He clarifies the reality: “AI is primarily pattern-matching at scale, offering opportunities to help people work smarter, innovate faster, and explore new pathways to growth.”

Communicating this distinction is essential. When employees view these tools as pattern-matchers rather than sentient replacements, the narrative changes from competition to utility. Frost emphasises that “AI doesn’t have the ability to replicate human intelligence, it exists to augment it.”

Some finance and operations leaders view AI integration primarily as a mechanism to reduce salary overheads. Yet stripping away experienced staff for automation often degrades institutional memory.

Frost warns against this tactic: “Too often, businesses see AI as a shortcut to headcount reduction, putting experienced workers at risk for short-term savings. This approach overlooks the enormous economic and societal cost of losing skilled staff.”

Data confirms the workforce is on edge regarding this scenario. Acas reports that 26 percent of British workers cite job losses as their biggest concern regarding AI at work. History suggests, however, that technological integration expands rather than contracts the labour market.

“The reality is that AI is not poised to eliminate jobs indiscriminately, but rather to evolve the nature of work,” states Frost.

Operationalising augmentation

Successful integration requires changing how AI use cases are identified. Rather than looking for roles to remove, enterprise leaders should identify high-volume, low-value tasks that bottleneck productivity.

“AI tools have the potential to automate mundane tasks and free up human labour to focus on creative and strategic aspects,” explains Frost.

This allows leaders to move staff toward high-touch areas where algorithms struggle.

“As AI handles repetitive tasks, it frees up time to allow staff to upskill and transition into more complex roles that require a higher level of critical thinking and emotional intelligence.”

These competencies – empathy, ethical decision-making, and complex strategy – remain outside the grasp of current computational models.

Resistance to AI is often a symptom of “change fatigue,” a common response to the pace of digital updates. With 14 percent of UK workers explicitly worried about AI’s impact on their current job, transparent governance is required.

Leaders must recognise that “resisting AI’s integration can hinder progress and limit opportunities for innovation.” Active engagement is the solution. “Engaging employees in discussions about AI’s role within the organisation can help demystify its functions and build trust,” Frost advises.

This requires moving beyond top-down mandates. It involves creating a culture where staff feel safe to experiment with new tools without the immediate fear of displacing their own roles.

“Once leaders have cultivated an environment of transparency and inclusion, businesses can alleviate anxieties, ensuring all team members are aligned and prepared to harness AI’s benefits.”

Adapting the workforce for successful AI integration

Enterprise technology advancements have always demanded adaptation, and AI – while a larger transformation than many technologies in recent decades – is no different.

“Throughout history people have been resistant to new technological advancements, yet history shows us humans have repeatedly risen to the challenge of integrating new technologies.”

For enterprise leaders, success involves investing in resilience and continuous learning. By framing AI as a transformative tool rather than a threat, organisations can protect their talent pipeline while modernising operations.

A summary of advice to ensure successful AI integration:

  • Reframe the narrative: Explicitly communicate AI as a “pattern-matching” tool for augmentation, not a sentient replacement, to lower cultural resistance.
  • Audit for augmentation: Identify the mundane and high-volume process bottlenecks for automation, specifically to free up staff for more rewarding creative work.
  • Invest in “human” skills: Allocate learning and development budgets toward critical thinking, empathy, and ethical decision-making, as these are the non-replicable assets in an AI-driven market.
  • Combat change fatigue: Ensure transparent and two-way dialogue regarding AI integration roadmaps and governance to build trust and mitigate the fear factor regarding job losses.

“My mission is to save one million working lives by showing that AI works best when it empowers humans, rather than replaces them,” Frost concludes.

See also: How Shopify is bringing agentic AI to enterprise commerce

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Allister Frost: Tackling workforce anxiety for AI integration success appeared first on AI News.

]]>
AI in 2026: Experimental AI concludes as autonomous systems rise https://www.artificialintelligence-news.com/news/ai-in-2026-experimental-ai-concludes-autonomous-systems-rise/ Fri, 12 Dec 2025 16:59:18 +0000 https://www.artificialintelligence-news.com/?p=111296 Generative AI’s experimental phase is concluding, making way for truly autonomous systems in 2026 that act rather than merely summarise. 2026 will lose the focus on model parameters and be about agency, energy efficiency, and the ability to navigate complex industrial environments. The next twelve months represent a departure from chatbots toward autonomous systems executing […]

The post AI in 2026: Experimental AI concludes as autonomous systems rise appeared first on AI News.

]]>
Generative AI’s experimental phase is concluding, making way for truly autonomous systems in 2026 that act rather than merely summarise.

In 2026, the focus will shift from model parameters to agency, energy efficiency, and the ability to navigate complex industrial environments. The next twelve months represent a departure from chatbots toward autonomous systems executing workflows with minimal oversight, forcing organisations to rethink infrastructure, governance, and talent management.

Autonomous AI systems take the wheel

Hanen Garcia, Chief Architect for Telecommunications at Red Hat, argues that while 2025 was defined by experimentation, the coming year marks a “decisive pivot towards agentic AI, autonomous software entities capable of reasoning, planning, and executing complex workflows without constant human intervention.”

Telecoms and heavy industry are the proving grounds. Garcia points to a trajectory toward autonomous network operations (ANO), moving beyond simple automation to self-configuring and self-healing systems. The business goal is to reverse commoditisation by “prioritising intelligence over pure infrastructure” and reduce operating expenditures.

Technologically, service providers are deploying multiagent systems (MAS). Rather than relying on a single model, these allow distinct agents to collaborate on multi-step tasks, handling complex interactions autonomously. However, increased autonomy introduces new threats.

Emmet King, Founding Partner of J12 Ventures, warns that “as AI agents gain the ability to autonomously execute tasks, hidden instructions embedded in images and workflows become potential attack vectors.” Security priorities must therefore shift from endpoint protection to “governing and auditing autonomous AI actions.”

As organisations scale these autonomous AI workloads, they hit a physical wall: power.

King argues energy availability, rather than model access, will determine which startups scale. “Compute scarcity is now a function of grid capacity,” King states, suggesting energy policy will become the de facto AI policy in Europe.

KPIs must adapt. Sergio Gago, CTO at Cloudera, predicts enterprises will prioritise energy efficiency as a primary metric. “The new competitive edge won’t come from the largest models, but from the most intelligent, efficient use of resources.”

Horizontal copilots lacking domain expertise or proprietary data will fail ROI tests as buyers measure real productivity. The “clearest enterprise ROI” will emerge from manufacturing, logistics, and advanced engineering—sectors where AI integrates into high-value workflows rather than consumer-facing interfaces.

AI ends the static app in 2026

Software consumption is changing too. Chris Royles, Field CTO for EMEA at Cloudera, suggests the traditional concept of an “app” is becoming fluid. “In 2026, AI will start to radically change the way we think about apps, how they function and how they’re built.”

Users will soon request temporary modules generated by code and a prompt, effectively replacing dedicated applications. “Once that function has served its purpose, it closes,” Royles explains, noting these “disposable” apps can be built and rebuilt in seconds.

Rigorous governance is required here; organisations need visibility into the reasoning processes used to create these modules to ensure errors are corrected safely.

Data storage faces a similar reckoning, especially as AI becomes more autonomous. Wim Stoop, Director of Product Marketing at Cloudera, believes the era of “digital hoarding” is ending as storage capacity hits its limit.

“AI-generated data will become disposable, created and refreshed on demand rather than stored indefinitely,” Stoop predicts. Verified, human-generated data will rise in value while synthetic content is discarded.

Specialist AI governance agents will pick up the slack. These “digital colleagues” will continuously monitor and secure data, allowing humans to “govern the governance” rather than enforcing individual rules. For example, a security agent could automatically adjust access permissions as new data enters the environment without human intervention.
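As a toy illustration of the kind of agent described above, the sketch below classifies a newly arrived dataset's sensitivity from its tags and sets access permissions without a human in the loop. The tag rules, group names, and hook function are invented for illustration, not drawn from any vendor's product:

```python
# Hypothetical governance-agent sketch: when a new dataset enters the
# environment, classify its sensitivity and adjust access automatically.
# Tags, group names, and policy rules are assumptions for illustration.

SENSITIVE_TAGS = {"pii", "payment", "health"}

def classify(dataset_tags):
    """Label a dataset 'restricted' if any sensitive tag is present."""
    return "restricted" if SENSITIVE_TAGS & set(dataset_tags) else "internal"

def on_new_dataset(name, tags, acl):
    """Agent hook: called whenever a dataset lands; updates the ACL in place."""
    level = classify(tags)
    acl[name] = {"restricted": ["data-governance"],
                 "internal": ["all-staff"]}[level]
    return level

acl = {}
on_new_dataset("q3_transactions", ["payment", "csv"], acl)
print(acl["q3_transactions"])   # access limited to the governance group
```

The human role here matches the "govern the governance" framing: people maintain `SENSITIVE_TAGS` and the policy table, while the agent enforces them on every new dataset.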

Sovereignty and the human element

Sovereignty remains a pressing concern for European IT. Red Hat’s survey data indicates 92 percent of IT and AI leaders in EMEA view enterprise open-source software as vital for achieving sovereignty. Providers will leverage existing data centre footprints to offer sovereign AI solutions, ensuring data remains within specific jurisdictions to meet compliance demands.

King adds that competitive advantage is moving from owning models to "controlling training pipelines and energy supply," with open-source advancements allowing more actors to run frontier-scale workloads.

Workforce integration is becoming personal. Nick Blasi, Co-Founder of Personos, argues tools ignoring human nuance – tone, temperament, and personality – will soon feel obsolete. By 2026, Blasi predicts “half of workplace conflict will be flagged by AI before managers know it exists.”

These systems will focus on “communication, influence, trust, motivation, and conflict resolution,” Blasi suggests, adding that personality science will become the “operating system” for the next generation of autonomous AI, offering grounded understanding of human individuality rather than generic recommendations.

The era of the “thin wrapper” is over. Buyers are now measuring real productivity, exposing tools built on hype rather than proprietary data. For the enterprise, competitive advantage will no longer come from renting access to a model, but from controlling the training pipelines and energy supply that power it.

See also: BBVA embeds AI into banking workflows using ChatGPT Enterprise


The post AI in 2026: Experimental AI concludes as autonomous systems rise appeared first on AI News.

]]>
Edge AI inside the human body: Cochlear’s machine learning implant breakthrough https://www.artificialintelligence-news.com/news/edge-ai-medical-devices-cochlear-implants/ Thu, 27 Nov 2025 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=110943 The next frontier for edge AI medical devices isn’t wearables or bedside monitors—it’s inside the human body itself. Cochlear’s newly launched Nucleus Nexa System represents the first cochlear implant capable of running machine learning algorithms while managing extreme power constraints, storing personalised data on-device, and receiving over-the-air firmware updates to improve its AI models over time. For AI […]

The post Edge AI inside the human body: Cochlear’s machine learning implant breakthrough appeared first on AI News.

]]>
The next frontier for edge AI medical devices isn’t wearables or bedside monitors—it’s inside the human body itself. Cochlear’s newly launched Nucleus Nexa System represents the first cochlear implant capable of running machine learning algorithms while managing extreme power constraints, storing personalised data on-device, and receiving over-the-air firmware updates to improve its AI models over time.

For AI practitioners, the technical challenge is staggering: build a decision-tree model that classifies five distinct auditory environments in real time, optimise it to run on a device with a minimal power budget that must last decades, and do it all while directly interfacing with human neural tissue.

Decision trees meet ultra-low power computing

At the core of the system’s intelligence lies SCAN 2, an environmental classifier that analyses incoming audio and categorises it as Speech, Speech in Noise, Noise, Music, or Quiet.

“These classifications are then input to a decision tree, which is a type of machine learning model,” explains Jan Janssen, Cochlear’s Global CTO, in an exclusive interview with AI News. “This decision is used to adjust sound processing settings for that situation, which adapts the electrical signals sent to the implant.”
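As a toy illustration of this pipeline, the sketch below hand-rolls a small decision tree that maps per-frame audio features to one of the five environments, then selects a processing preset. The feature names, thresholds, and preset values are invented for the sketch and are not Cochlear's actual model:

```python
# Hypothetical sketch of a SCAN-2-style pipeline: a small decision tree
# classifies a frame's audio features into one of five environments, and
# the result picks a sound-processing preset. All numbers are illustrative.

PRESETS = {
    "Quiet":           {"gain": 1.0, "noise_reduction": 0.0},
    "Speech":          {"gain": 1.2, "noise_reduction": 0.2},
    "Speech in Noise": {"gain": 1.2, "noise_reduction": 0.8},
    "Noise":           {"gain": 0.8, "noise_reduction": 1.0},
    "Music":           {"gain": 1.0, "noise_reduction": 0.1},
}

def classify(level, modulation, tonality):
    """Walk a small decision tree over three toy features (each in 0..1)."""
    if level < 0.2:
        return "Quiet"
    if modulation > 0.6:          # strong amplitude modulation -> speech-like
        return "Speech" if level < 0.7 else "Speech in Noise"
    return "Music" if tonality > 0.5 else "Noise"

def settings_for(level, modulation, tonality):
    """Environment label drives the processing settings, as in the quote above."""
    return PRESETS[classify(level, modulation, tonality)]

print(classify(0.05, 0.1, 0.2))      # low level -> Quiet
print(settings_for(0.9, 0.8, 0.3))   # loud, modulated -> Speech in Noise preset
```

A hand-written tree like this, rather than a trained model object, is closer to what fits an embedded power budget: a handful of comparisons per frame, fully interpretable for medical review.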

The model runs on the external sound processor, but here’s where it gets interesting: the implant itself participates in the intelligence through Dynamic Power Management. Data and power are interleaved between the processor and implant via an enhanced RF link, allowing the chipset to optimise power efficiency based on the ML model’s environmental classifications.

This isn't just smart power management; it's an edge AI medical device solving one of the hardest problems in implantable computing: how do you keep a device operational for 40+ years when you can't replace its battery?

The spatial intelligence layer

Beyond environmental classification, the system employs ForwardFocus, a spatial noise algorithm that uses inputs from two omnidirectional microphones to create target and noise spatial patterns. The algorithm assumes target signals originate from the front while noise comes from the sides or behind, then applies spatial filtering to attenuate background interference.

What makes this noteworthy from an AI perspective is the automation layer. ForwardFocus can operate autonomously, removing cognitive load from users navigating complex auditory scenes. The decision to activate spatial filtering happens algorithmically based on environmental analysis—no user intervention required.
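The front-target/rear-noise assumption described above can be sketched with a textbook two-microphone differential array: delaying the rear mic's signal by the inter-mic travel time and subtracting it forms a cardioid pattern that cancels arrivals from behind. This is a generic illustration of the principle, not Cochlear's ForwardFocus algorithm; the sample rate, mic spacing, and signal model are assumed:

```python
# Generic two-mic delay-and-subtract sketch of front-target spatial filtering.
# Parameters are assumptions for illustration, not device specifications.
import numpy as np

FS = 16_000                                # sample rate (Hz), assumed
SPACING = 0.0136                           # front/rear mic spacing (m), assumed
C = 343.0                                  # speed of sound (m/s)
DELAY = max(1, round(FS * SPACING / C))    # inter-mic travel time, in samples

def shift(x, k):
    """Delay signal x by k samples (zero-padded at the start)."""
    return np.concatenate([np.zeros(k), x[:-k]])

def cardioid(front_mic, rear_mic):
    """Delay-and-subtract: cancels plane waves arriving from behind."""
    return front_mic - shift(rear_mic, DELAY)

# A 440 Hz test tone, one second long.
s = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)

# Source behind the wearer: hits the rear mic first, the front mic DELAY later.
rear_out = cardioid(shift(s, DELAY), s)
# Source in front: hits the front mic first.
front_out = cardioid(s, shift(s, DELAY))

print(np.max(np.abs(rear_out)), np.max(np.abs(front_out)))
```

The rear arrival cancels exactly while the front arrival survives, though attenuated; a real device would also equalise the differential output's high-pass response, which this sketch omits.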

Upgradeability: The medical device AI paradigm shift

Here’s the breakthrough that separates this from previous-generation implants: upgradeable firmware in the implanted device itself. Historically, once a cochlear implant was surgically placed, the technology in the implant was fixed for life.

Existing patients could only benefit from innovation by upgrading their external sound processor every five to seven years—gaining access to new signal processing algorithms, improved ML models, and better noise reduction. But the implant itself? Static.

Now, with the Nucleus Nexa System, patients can benefit from technological advances through firmware upgrades to the implant itself, not just the external processor.

Jan Janssen, Chief Technology Officer, Cochlear Limited

The Nucleus Nexa Implant changes that equation. Using Cochlear’s proprietary short-range RF link, audiologists can deliver firmware updates through the external processor to the implant. Security relies on physical constraints—the limited transmission range and low power output require proximity during updates—combined with protocol-level safeguards.

“With the smart implants, we actually keep a copy [of the user’s personalised hearing map] on the implant,” Janssen explained. “So you lose this [external processor], we can send you a blank processor and put it on—it retrieves the map from the implant.”

The implant stores up to four unique maps in its internal memory. From an AI deployment perspective, this solves a critical challenge: how do you maintain personalised model parameters when hardware components fail or get replaced?
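The recovery flow Janssen describes can be modelled as a simple data-structure problem: the implant owns up to four map slots, and a blank replacement processor repopulates itself from them instead of requiring a clinic re-fit. The class names, slot limit enforcement, and map fields below are hypothetical:

```python
# Toy model of the map-recovery flow described above. Names, fields,
# and the sync mechanism are assumptions for illustration only.

MAX_MAPS = 4   # the article states the implant stores up to four maps

class Implant:
    def __init__(self):
        self.maps = {}                      # slot -> personalised map params

    def store_map(self, slot, params):
        if slot not in self.maps and len(self.maps) >= MAX_MAPS:
            raise ValueError("all map slots in use")
        self.maps[slot] = params

class Processor:
    def __init__(self):
        self.maps = {}                      # blank until synced

    def sync_from(self, implant):
        self.maps = dict(implant.maps)      # retrieve maps over the RF link

implant = Implant()
implant.store_map(0, {"thresholds": [120, 135], "comfort": [200, 210]})

replacement = Processor()                   # user lost their processor
replacement.sync_from(implant)
print(replacement.maps[0]["comfort"])       # personalised map recovered
```

The design point is that the implant, the component that never gets replaced, is the source of truth for personalised parameters, so any external hardware becomes interchangeable.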

From decision trees to deep neural networks

Cochlear’s current implementation uses decision tree models for environmental classification—a pragmatic choice given power constraints and interpretability requirements for medical devices. But Janssen outlined where the technology is headed: “Artificial intelligence through deep neural networks—a complex form of machine learning—in the future may provide further improvement in hearing in noisy situations.”

The company is also exploring AI applications beyond signal processing. “Cochlear is investigating the use of artificial intelligence and connectivity to automate routine check-ups and reduce lifetime care costs,” Janssen noted.

This points to a broader trajectory for edge AI medical devices: from reactive signal processing to predictive health monitoring, from manual clinical adjustments to autonomous optimisation.

The Edge AI constraint problem

What makes this deployment fascinating from an ML engineering standpoint is the constraint stack:

Power: The device must run for decades on minimal energy, with battery life measured in full days despite continuous audio processing and wireless transmission.

Latency: Audio processing happens in real-time with imperceptible delay—users can’t tolerate lag between speech and neural stimulation.

Safety: This is a life-critical medical device directly stimulating neural tissue. Model failures aren’t just inconvenient—they impact quality of life.

Upgradeability: The implant must support model improvements over 40+ years without hardware replacement.

Privacy: Health data processing happens on-device, with Cochlear applying rigorous de-identification before any data enters their Real-World Evidence program for model training across their 500,000+ patient dataset.

These constraints force architectural decisions you don’t face when deploying ML models in the cloud or even on smartphones. Every milliwatt matters. Every algorithm must be validated for medical safety. Every firmware update must be bulletproof.

The future of Bluetooth and connected implants

Looking ahead, Cochlear is implementing Bluetooth LE Audio and Auracast broadcast audio capabilities, which will require future firmware updates to its sound processors. Bluetooth LE Audio offers better audio quality than traditional Bluetooth while reducing power consumption, and Auracast broadcast audio opens greater access to assistive listening networks.

Auracast broadcast audio enables direct connection to audio streams in public venues, airports, and gyms, transforming the cochlear implant system from an isolated medical device into a connected edge AI medical device participating in ambient computing environments.

The longer-term vision includes connected totally implantable devices with integrated microphones and batteries, eliminating external components entirely. At that point, you’re talking about fully autonomous AI systems operating inside the human body—adjusting to environments, optimising power, streaming connectivity, all without user interaction.

The medical device AI blueprint

Cochlear’s deployment offers a blueprint for edge AI medical devices facing similar constraints: start with interpretable models like decision trees, optimise aggressively for power, build in upgradeability from day one, and architect for the 40-year horizon rather than the typical 2-3 year consumer device cycle.

As Janssen noted, the smart implant launching today “is actually the first step to an even smarter implant.” For an industry built on rapid iteration and continuous deployment, adapting to decade-long product lifecycles while maintaining AI advancement represents a fascinating engineering challenge.

The question isn’t whether AI will transform medical devices—Cochlear’s deployment proves it already has. The question is how quickly other manufacturers can solve the constraint problem and bring similarly intelligent systems to market.

For 546 million people with hearing loss in the Western Pacific Region alone, the pace of that innovation will determine whether AI in medicine remains a prototype story or becomes standard of care.

(Photo by Cochlear)

See also: FDA AI deployment: Innovation vs oversight in drug regulation


The post Edge AI inside the human body: Cochlear’s machine learning implant breakthrough appeared first on AI News.

]]>
APAC enterprises move AI infrastructure to edge as inference costs rise https://www.artificialintelligence-news.com/news/enterprises-are-rethinking-ai-infrastructure-as-inference-costs-rise/ Mon, 24 Nov 2025 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=110831 AI spending in Asia Pacific continues to rise, yet many companies still struggle to get value from their AI projects. Much of this comes down to the infrastructure that supports AI, as most systems are not built to run inference at the speed or scale real applications need. Industry studies show many projects miss their […]

The post APAC enterprises move AI infrastructure to edge as inference costs rise appeared first on AI News.

]]>
AI spending in Asia Pacific continues to rise, yet many companies still struggle to get value from their AI projects. Much of this comes down to the infrastructure that supports AI, as most systems are not built to run inference at the speed or scale real applications need. Industry studies show many projects miss their ROI goals even after heavy investment in GenAI tools because of this infrastructure shortfall.

The gap shows how much AI infrastructure influences performance, cost, and the ability to scale real-world deployments in the region.

Akamai is trying to address this challenge with Inference Cloud, built with NVIDIA and powered by the latest Blackwell GPUs. The idea is simple: if most AI applications need to make decisions in real time, then those decisions should be made close to users rather than in distant data centres. That shift, Akamai claims, can help companies manage cost, reduce delays, and support AI services that depend on split-second responses.

Jay Jenkins, CTO of Cloud Computing at Akamai, explained to AI News why this moment is forcing enterprises to rethink how they deploy AI and why inference, not training, has become the real bottleneck.

Why AI projects struggle without the right infrastructure

Jenkins says the gap between experimentation and full-scale deployment is much wider than many organisations expect. “Many AI initiatives fail to deliver on expected business value because enterprises often underestimate the gap between experimentation and production,” he says. Even with strong interest in GenAI, large infrastructure bills, high latency, and the difficulty of running models at scale often block progress.

Jay Jenkins, CTO of Cloud Computing at Akamai.

Most companies still rely on centralised clouds and large GPU clusters. But as use grows, these setups become too expensive, especially in regions far from major cloud zones. Latency also becomes a major issue when models have to run multiple steps of inference over long distances. “AI is only as powerful as the infrastructure and architecture it runs on,” Jenkins says, adding that latency often weakens the user experience and the value the business hoped to deliver. He also points to multi-cloud setups, complex data rules, and growing compliance needs as common hurdles that slow the move from pilot projects to production.

Why inference now demands more attention than training

Across Asia Pacific, AI adoption is shifting from small pilots to real deployments in apps and services. Jenkins notes that as this happens, day-to-day inference – not the occasional training cycle – is what consumes most computing power. With many organisations rolling out language, vision, and multimodal models in multiple markets, the demand for fast and reliable inference is rising faster than expected. This is why inference has become the main constraint in the region. Models now need to operate in different languages, regulations, and data environments, often in real time. That puts enormous pressure on centralised systems that were never designed for this level of responsiveness.

How edge infrastructure improves AI performance and cost

Jenkins says moving inference closer to users, devices, or agents can reshape the cost equation. Doing so shortens the distance data must travel and allows models to respond faster. It also avoids the cost of routing huge volumes of data between major cloud hubs.

Physical AI systems – robots, autonomous machines, or smart city tools – depend on decisions made in milliseconds. When inference runs in distant data centres, these systems don't work as expected.

The savings from more localised deployments can also be substantial. Jenkins says Akamai analysis shows enterprises in India and Vietnam see large reductions in the cost of running image-generation models when workloads are placed at the edge rather than in centralised clouds. Better GPU use and lower egress fees played a major role in those savings.
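The cost and latency argument above is straightforward arithmetic, and a back-of-the-envelope sketch makes the shape of it concrete. Every number here is assumed purely for illustration (round-trip times, egress prices, and workload volume are not figures from Akamai):

```python
# Back-of-the-envelope comparison of centralised vs edge inference.
# All inputs are assumed for illustration, not vendor or survey figures.

def latency_ms(steps, rtt_ms, compute_ms=20):
    """Total time for a chain of inference steps, each needing a round trip."""
    return steps * (rtt_ms + compute_ms)

def monthly_egress_usd(gb_per_day, usd_per_gb):
    """Monthly data-transfer cost for a steady daily volume (30-day month)."""
    return gb_per_day * 30 * usd_per_gb

# Assumed: a 5-step agentic workflow; 180 ms RTT to a distant cloud region
# vs 15 ms to a nearby edge site; 500 GB/day at $0.09/GB vs $0.02/GB egress.
central = (latency_ms(5, 180), monthly_egress_usd(500, 0.09))
edge    = (latency_ms(5, 15),  monthly_egress_usd(500, 0.02))

print(f"central: {central[0]} ms/request, ${central[1]:.0f}/month egress")
print(f"edge:    {edge[0]} ms/request, ${edge[1]:.0f}/month egress")
```

The multi-step structure is why chained inference punishes distance so harshly: the network round trip is paid once per step, so a five-step workflow pays it five times.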

Where edge-based AI is gaining traction

Early demand for edge inference is strongest from industries where even small delays can affect revenue, safety, or user engagement. Retail and e-commerce are among the first adopters because shoppers often abandon slow experiences. Personalised recommendations, search, and multimodal shopping tools all perform better when inference is local and fast.

Finance is another area where latency directly affects value. Jenkins says workloads like fraud checks, payment approval, and transaction scoring rely on chains of AI decisions that should happen in milliseconds. Running inference closer to where data is created helps financial firms move faster and keeps data inside regulatory borders.

Why cloud and GPU partnerships matter more now

As AI workloads grow, companies need infrastructure that can keep up. Jenkins says this has pushed cloud providers and GPU makers into closer collaboration. Akamai’s work with NVIDIA is one example, with GPUs, DPUs, and AI software deployed in thousands of edge locations.

The idea is to build an “AI delivery network” that spreads inference across many sites instead of concentrating everything in a few regions. This helps with performance, but it also supports compliance. Jenkins notes that almost half of large APAC organisations struggle with differing data rules across markets, which makes local processing more important. Emerging partnerships are now shaping the next phase of AI infrastructure in the region, especially for workloads that depend on low-latency responses.

Security is built into these systems from the start, Jenkins says. Zero-trust controls, data-aware routing, and protections against fraud and bots are becoming standard parts of the technology stacks on offer.

The infrastructure needed to support agentic AI and automation

Running agentic systems – which make many decisions in sequence – needs infrastructure that can operate at millisecond speeds. Jenkins believes the region’s diversity makes this harder but not impossible. Countries differ widely in connectivity, rules, and technical readiness, so AI workloads must be flexible enough to run where it makes the most sense. He points to research showing that most enterprises in the region already use public cloud in production, but many expect to rely on edge services by 2027. That shift will require infrastructure that can hold data in-country, route tasks to the closest suitable location, and keep functioning when networks are unstable.

What companies need to prepare for next

As inference moves to the edge, companies will need new ways to manage operations. Jenkins says organisations should expect a more distributed AI lifecycle, where models are updated across many sites. This requires better orchestration and strong visibility into performance, cost, and errors in core and edge systems.

Data governance becomes more complex but also more manageable when processing stays local. Half of the region’s large enterprises already struggle with the variance in regulations, so placing inference closer to where data is generated can help.

Security also needs more attention. While spreading inference to the edge can improve resilience, it also means every site must be secured. Firms need to protect APIs, data pipelines, and guard against fraud or bot attacks. Jenkins notes that many financial institutions already rely on Akamai’s controls in these areas.

(Photo by Igor Omilaev)


The post APAC enterprises move AI infrastructure to edge as inference costs rise appeared first on AI News.

]]>
Exclusive: Dubai’s Digital Government chief says speed trumps spending in AI efficiency race https://www.artificialintelligence-news.com/news/dubai-ai-government-efficiency-speed-exclusive/ Thu, 06 Nov 2025 17:00:00 +0000 https://www.artificialintelligence-news.com/?p=110386 When Dubai launched its State of AI Report in April 2025, revealing over 100 high-impact AI use cases, the emirate wasn’t just showcasing technological prowess—it was making a calculated bet that speed, not spending, would determine which cities win the global race for AI-powered governance. In an exclusive interview, Matar Al Hemeiri, Chief Executive of Digital Dubai […]

The post Exclusive: Dubai’s Digital Government chief says speed trumps spending in AI efficiency race appeared first on AI News.

]]>
When Dubai launched its State of AI Report in April 2025, revealing over 100 high-impact AI use cases, the emirate wasn’t just showcasing technological prowess—it was making a calculated bet that speed, not spending, would determine which cities win the global race for AI-powered governance.

In an exclusive interview, Matar Al Hemeiri, Chief Executive of Digital Dubai Government Establishment, revealed how Dubai’s approach to AI government efficiency differs fundamentally from both its regional competitors and established Asian tech hubs—and why the emirate believes its model of rapid deployment paired with binding ethical frameworks offers a blueprint other governments will eventually follow.

The DubaiAI advantage: 180 services, one virtual assistant

While neighbouring Abu Dhabi announced a $4.8 billion investment to become the world’s first fully AI-powered government by 2027, Dubai has taken a different path. “Abu Dhabi’s investment is focused on building an end-to-end AI-powered government infrastructure,” Al Hemeiri explained. “Dubai’s model is to embed AI ethics, interoperability, and explainability into a scalable governance framework.”

The results are already visible. DubaiAI, the citywide AI-powered virtual assistant, now provides information on more than 180 public services—a figure that represents one of the most comprehensive government AI chatbot deployments globally. The system handles 60% of routine government inquiries while cutting operational costs by 35%.

But Al Hemeiri pushed back against the narrative that AI automation inevitably means job losses. “Automation frees our workforce from repetitive, informational tasks,” he said. “Employees are being reskilled and redeployed into higher-value roles such as AI oversight, service design, and strategic policy work.”

The timing couldn’t be more critical. Dubai’s population growth has created an “immense spike in demand for government services,” according to Al Hemeiri, making AI-driven efficiency not just a competitive advantage but an operational necessity.

Speed as strategy: From pilot to deployment in months

What sets Dubai apart in AI government efficiency isn’t just what it builds—it’s how quickly it deploys. “In Dubai, once an AI initiative is announced, it is swiftly activated, moving from pilot to deployment within months, far faster than the global norm,” Al Hemeiri emphasised.

The numbers back this claim. In 2025, over 96% of government entities had adopted at least one AI solution, and 60% of surveyed users preferred AI-supported services. 

Dubai benchmarks itself against leading smart cities like Singapore, Berlin, Helsinki, and Tallinn, but argues its integration of AI ethics directly into procurement and deployment provides a decisive edge.

“Our competitive edge lies in the speed with which Dubai operationalises its ethics,” Al Hemeiri said, addressing a common criticism that AI governance frameworks are purely theoretical. “The AI Policy is not a theoretical framework; it is a binding set of principles and technical requirements applied to every AI deployment across government.”

This approach builds on the Ethical AI Toolkit launched in 2019, making Dubai one of the few cities globally where ethical compliance is embedded from procurement to performance evaluation.

Beyond chatbots: Healthcare, energy, and predictive services

While DubaiAI captures headlines, Al Hemeiri pointed to less-publicised implementations delivering measurable impact. AI models are now detecting chronic conditions such as diabetes at earlier stages, while predictive algorithms improve auditing systems within the Dubai Health Authority. 

In energy infrastructure, smart grids powered by real-time AI forecasting tools are optimising consumption and reducing environmental impact. The most ambitious project currently in development is Dubai’s predictive public services platform, which will use integrated data and AI to anticipate citizen needs—from automated license renewals to preventive healthcare notifications. 

“We have begun efforts on building this project, with full rollout targeted for the early 2030s,” Al Hemeiri revealed. Elements of this vision are already being tested through AI-enabled urban planning tools and citywide digital twins that simulate policy outcomes before implementation.

Data sovereignty: A hybrid model between China and GDPR

Dubai’s approach to data governance offers a middle path between China’s strict localisation requirements and the EU’s GDPR framework. “Dubai’s model offers a hybrid—anonymised citizen data remains within Dubai’s jurisdiction under robust sovereignty laws, but can be securely shared across entities with the user’s consent for government services, through the nation’s official digital identity platform: UAE PASS,” Al Hemeiri explained.

A key differentiator is Dubai’s embrace of synthetic data frameworks. “They allow us to develop and test AI systems at scale while preserving privacy and maintaining compliance with Dubai’s data sovereignty requirements,” he said. This approach enables faster innovation cycles while addressing privacy concerns that have hampered AI development in other jurisdictions.

The startup sandbox: Real integration, not just regulatory relief

Dubai positions itself as a testing ground for AI startups, but Al Hemeiri argued the emirate offers more than regulatory flexibility. “Dubai’s AI sandboxes combine regulatory flexibility with direct access to government datasets and real-world testing environments,” he said.

One healthcare diagnostics startup piloted within Dubai’s sandbox has already integrated its AI triage tool into Dubai Health Authority services. 

“Because our ecosystem operates as an interconnected digital operating system, startups in our sandboxes can test solutions that seamlessly integrate with other city services, from mobility innovations like the Dubai Loop and eVTOL air taxis to healthcare AI diagnostics,” Al Hemeiri explained.

Converting global attention into economic returns

Dubai AI Week 2025 attracted participants from 100 countries and partnerships with Meta, Google, Microsoft, and OpenAI. But Al Hemeiri insisted the emirate is focused on converting attention into tangible outcomes. 

“We have established post-event working groups with each of these partners to identify and accelerate joint projects,” he said, citing AI upskilling programs, R&D collaborations, and pilot deployments in healthcare, mobility, and urban planning.

These partnerships feed directly into Dubai’s D33 Economic Agenda, which aims to generate AED 100 billion annually from digital innovation. The State of AI Report projects AI could contribute over AED 235 billion to Dubai’s economy by 2030—a figure that represents nearly 20% of the emirate’s targeted economic expansion.

Quiet wins and future risks

When pressed about initiatives that deliver value without media fanfare, Al Hemeiri highlighted the UN Citiverse Challenge, co-led by Digital Dubai and global partners, which brings together innovators to design AI-powered solutions for inclusive public services and sustainability. 

He also pointed to Dubai Future Foundation’s autonomous delivery robot, already being piloted on Dubai streets to improve last-mile delivery efficiency while reducing congestion and emissions.

On risks, Al Hemeiri was direct: “The greatest risk is scaling without sufficient oversight.” Dubai mitigates this through continuous system audits and a requirement for explainability in all public sector AI. 

Al Hemeiri added that ensuring ROI “is crucial for us when deciding to build an AI use case. We calculate this when planning a project, and only move ahead once we are convinced we will be able to attain the expected ROI for the city.”

The five-year test

Asked what would constitute failure five years from now, Al Hemeiri said that it “would mean fragmented AI adoption without improving citizen trust, efficiency, or quality of life.”

Success, conversely, would be “when AI-powered public services are seamless, anticipatory, and inclusive, easing the lives of citizens and residents, and naturally becoming a blueprint replicated by other governments globally.”

It’s an ambitious vision—one that positions Dubai not just as a fast follower in AI government efficiency, but as a potential model for how cities can deploy transformative technology at speed without sacrificing ethical oversight or public trust.

Whether that model proves replicable beyond Dubai’s unique governance structure and resources remains the central question. But with 96% of government entities already adopting AI solutions and deployment timelines measured in months rather than years, Dubai is testing that hypothesis in real-time—and betting that in the race to build AI-powered governments, velocity matters as much as vision.

(Photo by David Rodrigo)

See also: UAE to teach its children AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Exclusive: Dubai’s Digital Government chief says speed trumps spending in AI efficiency race appeared first on AI News.

]]>
How AI is changing the way we travel https://www.artificialintelligence-news.com/news/how-ai-is-changing-the-way-we-travel/ Tue, 07 Oct 2025 11:00:00 +0000 https://www.artificialintelligence-news.com/?p=109755 AI is reshaping how people plan and experience travel. From curated videos on Instagram Reels to booking engines that build entire itineraries in seconds, AI is becoming a powerful force in how journeys are imagined, booked, and lived. But this shift raises an important question: is AI giving travellers more freedom, or quietly steering their […]

The post How AI is changing the way we travel appeared first on AI News.

]]>
AI is reshaping how people plan and experience travel. From curated videos on Instagram Reels to booking engines that build entire itineraries in seconds, AI is becoming a powerful force in how journeys are imagined, booked, and lived. But this shift raises an important question: is AI giving travellers more freedom, or quietly steering their choices?

Speaking to AI News, Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit, believes AI can do both. In a wide-ranging conversation, he explained how AI is transforming travel discovery, personalisation, cultural exchange, and ethics—and why the industry must set clear guardrails as technology takes on a more active role.

AI as a travel companion

AI is changing how people discover destinations. Instead of generic travel lists, platforms now serve content that feels personal. “AI has turned travel discovery into a personal canvas,” Hamidaddin said. “Platforms like Instagram Reels no longer just show ‘where to go’; they curate journeys that feel tailor-made for each traveller.”

Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit

This shift is not just about convenience. By highlighting lesser-known destinations, AI can spread demand and ease pressure on crowded tourist spots. It can also introduce travellers to authentic local experiences that might otherwise remain hidden.

Hamidaddin sees the next phase as “agentic AI”—technology that doesn’t just make suggestions but takes action. He described a future where AI automatically rebooks flights disrupted by weather, adjusts itineraries, and reschedules reservations in real time. “That’s frictionless travel—where the logistics fade and the adventure takes centre stage,” he said.

AI personalisation vs. algorithmic influence in travel

AI-driven booking engines promise hyper-personalised recommendations, matching experiences to individual interests and budgets. This can make planning smoother and more inspiring, but it also comes with risks.

“They do both,” Hamidaddin said when asked whether AI empowers travellers or guides them without their knowledge. “AI can empower travellers like never before—matching experiences to passions, budgets, and even moods. But unchecked, algorithms can quietly narrow horizons, nudging people toward predictable options. This risk only increases with agentic AI, which will make decisions on travellers’ behalf. That’s why transparency and accountability are non-negotiable. AI should be a compass, not a cage, and travellers must always hold the final word.”

Trust and transparency

The balance between personalisation and privacy will shape the next era of travel. As AI systems collect vast amounts of personal data, travellers are more aware of how their preferences, clicks, and searches are used. Hamidaddin stressed that trust is the foundation.

“The era of hyper-personalisation must be built on trust. Travellers know their data is powerful, and they’re right to ask how it’s being used,” he said. The solution, in his view, is “radical transparency: explicit consent, clear explanations, and real opt-in choices.”

Agentic AI, which can act on a traveller’s behalf, makes this even more important. If algorithms are booking, adjusting, or cancelling plans automatically, travellers need clear ways to control and understand these actions. “True innovation doesn’t just customise the journey; it safeguards the traveller’s confidence and autonomy,” he added.

Setting standards through TOURISE

Hamidaddin will lead discussions on these topics at the inaugural TOURISE Summit in Riyadh this November. He sees the summit as a chance to shape global standards for AI use in travel, not just showcase technology.

“The TOURISE is designed to be more than an event—it’s the world’s first platform where government, business, and technology leaders unite to shape travel tech responsibly,” he said. His goals include creating a shared ethical framework for AI, encouraging partnerships to address privacy and workforce challenges, promoting sustainability, and training the global tourism workforce to thrive in an AI-driven industry.

“TOURISE must set a new benchmark: innovation with integrity,” he said.

Cultural exchange and economic growth

AI’s influence goes beyond logistics. It is also changing cultural exchange and economic development, particularly in Saudi Arabia. “AI is dissolving barriers—linguistic, cultural, and economic. It’s curating authentic connections that go beyond sightseeing into meaningful exchange,” Hamidaddin said.

He explained how Saudi Arabia is using AI to highlight cultural and historical treasures like AlUla and Diriyah, while supporting artisans, festivals, and small businesses. Agentic AI will help create smoother travel experiences that allow visitors to focus more on culture and less on planning.

“This isn’t just about more visitors; it’s about inclusive growth, mutual respect, and shared prosperity,” he said. By 2030, AI is expected to contribute $135 billion to Saudi Arabia’s GDP, with tourism playing a central role. But for Hamidaddin, the real impact is measured in “bonds between people.”

Ethical guardrails for AI in travel

As AI systems take on more responsibility, clear ethical standards become essential. Hamidaddin outlined several priorities: making AI usage clear to users, regularly auditing algorithms for bias, giving travellers control over their data, and designing systems that promote cultural diversity and accessibility.

“With agentic AI, the stakes rise: when an AI acts on a traveller’s behalf, we must ensure transparency, explainability, and accountability. Agency must never replace autonomy,” he said.

Innovation with ethics

The debate isn’t about whether to adopt AI, but how to do so responsibly. Hamidaddin argues that innovation should align with human values and environmental priorities. “It’s not about chasing every shiny new tool; it’s about aligning innovation with human values and planetary needs,” he said.

He believes governments, businesses, communities, and travellers must collaborate to agree on shared principles. Agentic AI makes this even more urgent, as decisions may increasingly be made by machines. “Our job is to ensure technology serves people, not the other way around,” he added.

A new era for travel

Hamidaddin is optimistic about what lies ahead. “What excites me most is that travel is becoming transformative again,” he said. He imagines a future where language barriers disappear, itineraries adapt in real time, and every trip supports local communities.

In Saudi Arabia, platforms like “Spirit of Saudi” are already using AI to showcase authentic experiences, from desert adventures to artisan workshops. The next step is agentic journeys, where AI travel companions handle logistics seamlessly, freeing travellers to focus on discovery and connection.

“At TOURISE, I believe we’re not simply shaping tourism’s future—we’re igniting a new era of connection and shared prosperity across the globe,” he said.

(Photo by S O C I A L . C U T)

See also: AI causes reduction in users’ brain activity – MIT


The post How AI is changing the way we travel appeared first on AI News.

]]>
Rising AI demands push Asia Pacific data centres to adapt, says Vertiv https://www.artificialintelligence-news.com/news/rising-ai-demands-push-asia-pacific-data-centres-to-adapt/ Tue, 30 Sep 2025 08:15:55 +0000 https://www.artificialintelligence-news.com/?p=109635 As more companies in Asia Pacific adopt artificial intelligence to boost their operations, the pressure on data centres is growing fast. Traditional facilities, built for earlier generations of computing, are struggling to keep up with the heavy energy use and cooling demands of modern AI systems. By 2030, GPU-driven workloads could push rack power densities […]

The post Rising AI demands push Asia Pacific data centres to adapt, says Vertiv appeared first on AI News.

]]>
As more companies in Asia Pacific adopt artificial intelligence to boost their operations, the pressure on data centres is growing fast. Traditional facilities, built for earlier generations of computing, are struggling to keep up with the heavy energy use and cooling demands of modern AI systems. By 2030, GPU-driven workloads could push rack power densities toward 1 MW, leaving incremental upgrades insufficient. Instead, operators are now turning toward purpose-built “AI factory” data centres that are designed from the ground up.

AI News spoke with Paul Churchill, Vice President of Vertiv Asia, to better understand how the region is preparing for this shift and what kinds of infrastructure changes lie ahead.

Explosive market growth is setting the pace

The AI data-centre market is projected to surge from $236 billion in 2025 to nearly $934 billion by 2030. This growth is driven by rapid adoption of AI in industries like finance, healthcare, and manufacturing. These sectors rely on high-performance computing environments powered by dense GPU clusters, which require far more energy and cooling capacity than traditional servers.

In Asia Pacific, this demand is amplified by government investments in digitalisation, the expansion of 5G, and the rollout of cloud-native and generative AI applications. All of this is pushing compute needs higher at a pace the region has never seen before.

Churchill explained that meeting this demand requires more than just larger facilities. It calls for smarter infrastructure strategies that are scalable and sustainable. “Infrastructure leaders must move beyond piecemeal upgrades. A future-ready strategy involves adopting AI-optimised infrastructure that combines high-capacity power systems, advanced thermal management, and integrated, scalable designs,” he said.

Cooling and power challenges are rising

As rack densities increase from 40 kW to 130 kW, and potentially up to 250 kW by 2030, cooling and power delivery are becoming critical constraints. Traditional air cooling methods can no longer cope with these conditions.

To address this, Vertiv is developing hybrid cooling systems that combine direct-to-chip liquid cooling with air-based solutions. These systems can adjust to changing workloads, reduce energy use, and maintain reliability. “Our coolant distribution units enable direct-to-chip liquid cooling while ensuring reliability and serviceability in high-density environments,” Churchill said.

Paul Churchill, Vice President of Vertiv Asia

Power delivery is also becoming more complex. AI workloads fluctuate rapidly, so infrastructure needs to react in real time. Vertiv is evolving its rack power distribution units and busway systems to handle higher voltages and improve load balancing. Intelligent monitoring helps operators manage loads more efficiently, reduce wasted capacity, and extend uptime – a key consideration in parts of Southeast Asia where power grids are less stable.

Data centres are being redesigned for AI

The rise of liquid-cooled GPU pods and 1 MW racks, like those planned by AMD and hyperscalers such as Microsoft, Google, and Meta, signals a deeper architectural shift. Instead of retrofitting older facilities, new data centres are being designed specifically to support AI.

“The future of data-centre architecture is hybrid, and these infrastructures require facilities to be built around liquid flow,” Churchill said. This includes new floor layouts, advanced coolant distribution, and more sophisticated power systems.

The next-generation facilities will integrate cooling, power, and monitoring from the chip level to the grid. For Asia Pacific, where hyperscale campuses are expanding rapidly, this kind of integrated design is essential to keep up with performance expectations and sustainability goals.

From incremental upgrades to AI factory data centres

By 2030, Asia Pacific is expected to overtake the US in data centre capacity, reaching almost 24 GW of commissioned power. To handle this growth, enterprises are moving away from ad hoc upgrades toward full-stack AI factory data centres.

Churchill said this transition should happen in stages. The first step is integrated planning, bringing together power, cooling, and IT management rather than treating them as separate systems. The approach simplifies deployment and provides a strong base for scaling.

The second step is to adopt modular and prefabricated systems. These allow companies to add capacity in phases without major disruptions. “Companies can deploy factory-tested modules alongside existing infrastructure, gradually migrating workloads to AI-ready capacity without disruptive overhauls,” he said.

Finally, sustainability must be built into every stage. This includes using lithium-ion energy storage, grid-interactive UPS systems, and higher-voltage distribution to improve efficiency and resilience.

DC power gains new relevance for AI data centres

Vertiv recently introduced PowerDirect Rack, a DC power shelf designed for AI and high-performance computing. Switching to DC power can cut energy losses by reducing the number of conversion steps between the grid and the server. It also aligns with renewable energy and battery storage systems, which are becoming more common in Asia Pacific.

This is especially useful in energy-constrained markets like Vietnam and the Philippines. In these regions, flexible power solutions are essential to keep facilities running smoothly. As Churchill noted, DC power is “not just an efficiency play – it is a strategy for enabling sustainable scalability.”

Sustainability is becoming a central priority

With AI driving up energy use, data-centre operators are facing stricter regulations and rising grid constraints. This is particularly true in Southeast Asia, where power reliability and tariffs vary widely.

Vertiv is working with operators to integrate alternative energy sources like lithium-ion batteries, hybrid power systems, and microgrids. These can reduce dependence on the grid and improve resilience. There is also growing interest in solar-backed UPS systems and advanced energy storage technologies, which help balance loads and manage costs.

Cooling efficiency is another major focus. Hybrid liquid cooling systems can reduce both energy and water use compared to older methods. “Our focus is on delivering infrastructure that meets performance demands while aligning with ESG goals,” Churchill said. “We’re collaborating with our partners to ensure that AI-driven growth in the region remains responsible, sustainable, and aligned with long-term digital and environmental objectives.”

Modular solutions support rapid expansion

Many emerging economies in Asia Pacific face challenges like limited land, unstable power supply, and shortages of skilled labour. In these settings, modular and prefabricated data-centre systems offer a practical solution.

Prefabricated modules can cut deployment times by up to 50%, while improving energy efficiency and scalability. They allow operators to expand gradually, adding capacity as needed without heavy upfront investment. This flexibility is especially valuable for AI workloads, which can grow quickly and unpredictably.

By combining compact design with energy-efficient operation, modular systems give operators a way to build AI-ready capacity faster and with less risk – a crucial advantage as the region’s digital economies grow.

Preparing for a demanding future

The AI surge is reshaping how data centres are built and operated in Asia Pacific. As workloads intensify and sustainability pressures mount, companies can no longer rely on outdated infrastructure. The move toward AI factory data centres, powered by advanced cooling, DC power, and modular systems, reflects a shift in how the region is preparing for the next era of computing.

(Photo by İsmail Enes Ayhan)


The post Rising AI demands push Asia Pacific data centres to adapt, says Vertiv appeared first on AI News.

]]>
Ethical cybersecurity practice reshapes enterprise security in 2025 https://www.artificialintelligence-news.com/news/manageengine-ethical-cybersecurity-2025/ Fri, 26 Sep 2025 08:20:45 +0000 https://www.artificialintelligence-news.com/?p=109598 When ransomware attacks like Akira and Ryuk began crippling organisations worldwide, the cybersecurity industry’s first instinct was predictable: build bigger walls, deploy more aggressive automated responses, and lock down everything. But there was a different problem emerging, according to Romanus Prabhu Raymond, Director of Technology at ManageEngine. The company’s customers were demanding aggressive containment features, […]

The post Ethical cybersecurity practice reshapes enterprise security in 2025 appeared first on AI News.

]]>
When ransomware attacks like Akira and Ryuk began crippling organisations worldwide, the cybersecurity industry’s first instinct was predictable: build bigger walls, deploy more aggressive automated responses, and lock down everything. But there was a different problem emerging, according to Romanus Prabhu Raymond, Director of Technology at ManageEngine.

The company’s customers were demanding aggressive containment features, yet automatically quarantining a suspicious hospital computer or bank teller system might prove more devastating than the original threat. The dilemma – balancing rapid threat response with real-world consequences – exemplifies why ethical cybersecurity practices have become one of the defining challenges of 2025.

In our exclusive interview shortly before his presentation at the Cyber Security Expo in Amsterdam, Raymond revealed how leading organisations are breaking free from the traditional security-versus-privacy trade-off and why the companies embracing this “trust revolution” can reshape enterprise security.

For starters, the cybersecurity industry stands at an important juncture. High-profile breaches, evolving regulatory frameworks, and the rapid integration of AI into security systems have created new challenges that extend far beyond technical protection. Organisations now face difficult questions about how to balance innovation with responsibility, privacy with security, and automation with human oversight.

Defining ethical cybersecurity in the modern era

According to Raymond, ethical cybersecurity transcends traditional notions of defence. “Ethical cybersecurity goes beyond defending systems and data – it’s about applying security practices responsibly to protect organisations, individuals, and society at large,” he explained during our interview ahead of his presentation.

In 2025’s cloud-first environment, security isn’t a competitive differentiator, but a baseline expectation. What distinguishes organisations today is how ethically they handle data and implement security measures.

Raymond uses the analogy of security cameras installed in a neighbourhood: they should protect public spaces without intruding on private areas or peering into residents’ windows. Cybersecurity, he argues, must operate under the same principle.

ManageEngine has operationalised this philosophy through what Raymond calls an “ethical by design” approach, embedding fairness, transparency, and accountability into every product from conception. The company’s stance on customer data exemplifies this commitment: it neither monetises nor monitors customer data, maintaining that it belongs solely to the customer.

The innovation-risk paradox

The tension between innovation and risk management represents an important challenge for modern organisations. Push too hard for innovation without adequate safeguards and companies risk data breaches and compliance violations. Focus too heavily on risk mitigation, and organisations may find themselves unable to compete in evolving markets.

The “trust by design” philosophy embeds responsibility and accountability into every development stage, allowing rapid innovation while maintaining compliance and ethical standards. When deploying critical components like endpoint agents, the company ensures new functionality inherently complies with industry standards and security requirements.

This approach extends to the company’s global operations. ManageEngine maintains data centres worldwide that align with local privacy and regulatory demands, and trains every employee – from developers to support engineers – to treat customer data with integrity. The company’s “trans-localisation strategy” ensures local teams serve local customers, creating operational efficiency and cultural trust.

AI integration and human oversight

As artificial intelligence becomes increasingly central to cybersecurity operations, the ethical implications of AI-driven security solutions have become more complex. Raymond acknowledges that AI is evolving from purely assistive roles to more decisive functions, raising questions about accountability, transparency, and fairness.

Raymond outlined ManageEngine’s “SHE AI principles”: Secure AI, Human AI, and Ethical AI. Secure AI involves building robust protections against manipulation and adversarial attacks. Human AI ensures human oversight remains integral to critical security actions—for instance, if AI detects a suspicious endpoint, it escalates for human validation rather than automatically removing the device from the network.

This is particularly important in sensitive environments like hospitals or banks, where automatically blocking systems could have severe consequences.

The ethical AI component emphasises explainability. Rather than generating “black box” alerts, ManageEngine’s systems explain their reasoning. An alert might read: “The endpoint cannot log in at this time and is trying to connect to too many network devices.” This transparency is essential for compliance and building trust in AI-driven security systems.

Navigating privacy-security trade-offs

The balance between necessary security monitoring and privacy invasion represents one of the most delicate aspects of ethical cybersecurity practices. Raymond acknowledges that while proactive monitoring is essential for detecting threats early, over-monitoring risks creating a surveillance environment that treats employees as suspects rather than trusted partners.

ManageEngine uses principles that emphasise data minimisation, purpose-driven monitoring, anonymisation, and clear governance structures. The company collects only information necessary for security purposes, ensures every piece of data has a defined security use case, uses anonymised data for pattern analysis, and defines data access privileges and retention periods.

The framework demonstrates that security and privacy need not be mutually exclusive when guided by ethics, transparency, and accountability.

Industry leadership and future challenges

Raymond argues that technology vendors must act as custodians of digital ethics, earning trust rather than expecting it to be given blindly. ManageEngine says it contributes to industry standards through thought leadership, advocacy, and the embedding of compliance standards like ISO 27000 and GDPR into products from the start.

Raymond identifies AI-driven autonomous security and quantum computing as the biggest ethical challenges facing the industry. As security operations centres move toward full autonomy, questions of explainability and accountability become critical. Quantum computing’s ability to break traditional encryption threatens secure communication foundations, while technologies like biometrics raise privacy concerns if not managed carefully.

Practical implementation

For organisations seeking to integrate ethical considerations into their cybersecurity strategies, Raymond recommends three concrete steps: adopting a cybersecurity ethics charter at the board level, embedding privacy and ethics in technology decisions when selecting vendors, and operationalising ethics through comprehensive training and controls that explain not just what to do, but why it matters.

As the cybersecurity landscape evolves, the companies that will thrive are those that recognise ethical cybersecurity practices not as constraints on innovation, but as the foundation for sustainable, trusted technological advancement. Going forward, organisations will have to innovate responsibly while maintaining the human oversight and ethical principles that digital trust requires.

See also: CERTAIN drives ethical AI compliance in Europe



The post Ethical cybersecurity practice reshapes enterprise security in 2025 appeared first on AI News.

]]>