TechEx Events - AI News
https://www.artificialintelligence-news.com/categories/techex-events/

SS&C Blue Prism: On the journey from RPA to agentic automation
https://www.artificialintelligence-news.com/news/ssc-blue-prism-on-the-journey-from-rpa-to-agentic-automation/ (17 February 2026)

For organizations still wedded to the rules and structures of robotic process automation (RPA), considering agentic AI as the next step for automation may be faintly terrifying. SS&C Blue Prism, however, is here to help, taking customers on the journey from RPA to agentic automation at a pace with which they’re comfortable.

Big as it may be, this move is a necessary one. Modern workflows have reached a level of complexity that outstrips what traditional RPA was designed to do, according to Steven Colquitt, VP Software Engineering, SS&C Blue Prism. Unstructured data arrives from a variety of sources, mirroring the non-deterministic nature of real-world interactions. “Inputs can vary, outcomes can shift and decisions depend on context in real-time,” notes Colquitt.

Brian Halpin, Managing Director, Automation, SS&C Blue Prism, gives the example of a credit agreement from which you might need to extract 30 or 40 answers. He uses the word “answers” deliberately, as opposed to “data points”, to reflect the level of reasoning that a large language model (LLM) performs.

The theme of this being a journey continues to resonate, however. “We’re now saying we’re giving an AI agent the outcome that we want, but we’re not giving it the instructions on how to complete,” says Halpin. “We’re not saying, ‘follow step one, two, three, four, five.’ We’re saying, ‘I want this loan reviewed’ or ‘I want this customer onboarded.’

“Ultimately, I think that’s where the market will go,” adds Halpin. “Is it ready for that? No. Why? Because there’s trust, there’s regulations, there’s auditability […] stability, security. We know LLMs are prone to hallucinations, we know they drift, and [if] you change the underlying model, things change and responses get different.

“There’s an awful lot of learning to happen before I think companies go fully autonomous and real agentic workflows [are] driven from that sort of non-deterministic perspective,” says Halpin. “But then, there will be something else, right? There will be another model. So really, it is all a journey right now.”

SS&C Blue Prism has thousands of customers with automated processes in place, from running centers of excellence (CoEs) to deploying digital workers in their operations, whom it hopes to upgrade into the “world of AI”, as Halpin puts it. Sometimes it’s about connecting two separate areas.

“It’s been interesting,” Halpin notes. “As I talk to [our] customers, I see a common thread among companies right now where, in a lot of cases, AI has been established as a separate unit in a company. You go over to the process automation team, and they’re maybe not even allowed to use the AI.

“So, it’s about, ‘How do you help them get that capability and blend it into their process efficiency and allow them to get to the next 20%, 30% of automation, in terms of the end-to-end process?’”

As part of this, SS&C Blue Prism is soon to launch new technology that helps organizations build and embed AI agents within workflows, as well as assisting with orchestration. Those who attended the Intelligent Automation conference at TechEx Global on February 4-5, where SS&C Blue Prism participated, got the full story, along with an understanding of the company’s ongoing path.

“[SS&C Technologies] are one of the biggest users of RPA in the world,” adds Halpin. “We have over three and a half thousand digital workers deployed [across the SS&C estate]. We’re saving hundreds of millions in run-rate benefit. We’ve about 35 AI agents in production attached to those digital workers doing […] complex tasks, and really, we just want to share that journey.”

Watch the full interview with Brian Halpin below:

Photo by Patrick Tomasso on Unsplash

AI Expo 2026 Day 2: Moving experimental pilots to AI production
https://www.artificialintelligence-news.com/news/ai-expo-2026-day-2-moving-experimental-pilots-ai-production/ (5 February 2026)

The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition.

Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more on the infrastructure needed to run them: data lineage, observability, and compliance.

Data maturity determines deployment success

AI reliability depends on data quality. DP Indetkar from Northern Trust warned against allowing AI to become a “B-movie robot.” This scenario occurs when algorithms fail because of poor inputs. Indetkar noted that analytics maturity must come before AI adoption. Automated decision-making amplifies errors rather than reducing them if the data strategy is unverified.

Eric Bobek of Just Eat supported this view. He explained how data and machine learning guide decisions at the global enterprise level. Investments in AI layers are wasted if the data foundation remains fragmented.

Mohsen Ghasempour from Kingfisher also noted the need to turn raw data into real-time actionable intelligence. Retail and logistics firms must cut the latency between data collection and insight generation to see a return.

Scaling in regulated environments

The finance, healthcare, and legal sectors have near-zero tolerance for error. Pascal Hetzscholdt from Wiley addressed these sectors directly.

Hetzscholdt stated that responsible AI in science, finance, and law relies on accuracy, attribution, and integrity. Enterprise systems in these fields need audit trails. Reputational damage or regulatory fines make “black box” implementations impossible.

Konstantina Kapetanidi of Visa outlined the difficulties in building multilingual, tool-using, scalable generative AI applications. Models are becoming active agents that execute tasks rather than just generating text. Allowing a model to use tools – like querying a database – creates attack vectors that need serious testing.

Parinita Kothari from Lloyds Banking Group detailed the requirements for deploying, scaling, monitoring, and maintaining AI systems. Kothari challenged the “deploy-and-forget” mentality. AI models need continuous oversight, similar to traditional software infrastructure.

The change in developer workflows

Of course, AI is fundamentally changing how code is written. A panel with speakers from Valae, Charles River Labs, and Knight Frank examined how AI copilots reshape software creation. While these tools speed up code generation, they also force developers to focus more on review and architecture.

This change requires new skills. A panel with representatives from Microsoft, Lloyds, and Mastercard discussed the tools and mindsets needed for future AI developers. A gap exists between current workforce capabilities and the needs of an AI-augmented environment. Executives must plan training programmes that ensure developers sufficiently validate AI-generated code.

Dr Gurpinder Dhillon from Senzing and Alexis Ego from Retool presented low-code and no-code strategies. Ego described using AI with low-code platforms to make production-ready internal apps. This method aims to cut the backlog of internal tooling requests.

Dhillon argued that these strategies speed up development without dropping quality. For the C-suite, this suggests cheaper internal software delivery if governance protocols stay in place.

Workforce capability and specific utility

The broader workforce is starting to work with “digital colleagues.” Austin Braham from EverWorker explained how agents reshape workforce models. This terminology implies a move from passive software to active participants. Business leaders must re-evaluate human-machine interaction protocols.

Paul Airey from Anthony Nolan gave an example of AI delivering literally life-changing value. He detailed how automation improves donor matching and transplant timelines for stem cell transplants. The utility of these technologies extends to life-saving logistics.

A recurring theme throughout the event was that effective applications often solve very specific, high-friction problems rather than attempting to be general-purpose solutions.

Managing the transition

The day two sessions from the co-located events show that enterprise focus has now moved to integration. The initial novelty is gone and has been replaced by demands for uptime, security, and compliance. Innovation heads should assess which projects have the data infrastructure to survive contact with the real world.

Organisations must prioritise the basic aspects of AI: cleaning data warehouses, establishing legal guardrails, and training staff to supervise automated agents. The difference between a successful deployment and a stalled pilot lies in these details.

Executives, for their part, should direct resources toward data engineering and governance frameworks. Without them, advanced models will fail to deliver value.

See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise
https://www.artificialintelligence-news.com/news/ai-expo-2026-day-1-governance-data-readiness-enable-agentic-enterprise/ (4 February 2026)

While the prospect of AI acting as a digital co-worker dominated the day one agenda at the co-located AI & Big Data Expo and Intelligent Automation Conference, the technical sessions focused on the infrastructure to make it work.

A primary topic on the exhibition floor was the progression from passive automation to “agentic” systems. These tools reason, plan, and execute tasks rather than following rigid scripts. Amal Makwana from Citi detailed how these systems act across enterprise workflows. This capability separates them from earlier robotic process automation (RPA).

Scott Ivell and Ire Adewolu of DeepL described this development as closing the “automation gap”. They argued that agentic AI functions as a digital co-worker rather than a simple tool. Real value is unlocked by reducing the distance between intent and execution. Brian Halpin from SS&C Blue Prism noted that organisations typically must master standard automation before they can deploy agentic AI.

This change requires governance frameworks capable of handling non-deterministic outcomes. Steve Holyer of Informatica, alongside speakers from MuleSoft and Salesforce, argued that architecting these systems requires strict oversight. A governance layer must control how agents access and utilise data to prevent operational failure.

Data quality blocks deployment

The output of an autonomous system relies on the quality of its input. Andreas Krause from SAP stated that AI fails without trusted, connected enterprise data. For GenAI to function in a corporate context, it must access data that is both accurate and contextually relevant.

Meni Meller of Gigaspaces addressed the technical challenge of “hallucinations” in LLMs. He advocated for the use of eRAG (enterprise retrieval-augmented generation) combined with semantic layers to fix data access issues. This approach allows models to retrieve factual enterprise data in real-time.
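
Stripped of vendor specifics, the pattern Meller describes is easy to sketch. The following Python toy is illustrative only – it is not GigaSpaces’ eRAG – with a bag-of-words stand-in for a real embedding model and invented sample documents:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A production system would
    # use a vector model plus a semantic layer over enterprise data.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank enterprise documents by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Ground the model in retrieved facts instead of letting it free-associate.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 revenue for the EMEA region was 4.2m EUR.",
    "The refund policy allows returns within 30 days.",
    "Q3 revenue for APAC was 3.1m EUR.",
]
print(build_prompt("What was EMEA revenue in Q3?", docs))
```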

Storage and analysis also present challenges. A panel featuring representatives from Equifax, British Gas, and Centrica discussed the necessity of cloud-native, real-time analytics. For these organisations, competitive advantage comes from the ability to execute analytics strategies that are scalable and immediate.

Physical safety and observability

The integration of AI extends into physical environments, introducing safety risks that differ from software failures. A panel including Edith-Clare Hall from ARIA and Matthew Howard from IEEE RAS examined how embodied AI is deployed in factories, offices, and public spaces. Safety protocols must be established before robots interact with humans.

Perla Maiolino from the Oxford Robotics Institute provided a technical perspective on this challenge. Her research into Time-of-Flight (ToF) sensors and electronic skin aims to give robots both self-awareness and environmental awareness. For industries such as manufacturing and logistics, these integrated perception systems prevent accidents.

In software development, observability remains a parallel concern. Yulia Samoylova from Datadog highlighted how AI changes the way teams build and troubleshoot software. As systems become more autonomous, the ability to observe their internal state and reasoning processes becomes necessary for reliability.

Infrastructure and adoption barriers

Implementation demands reliable infrastructure and a receptive culture. Julian Skeels from Expereo argued that networks must be designed specifically for AI workloads. This involves building sovereign, secure, and “always-on” network fabrics capable of handling high throughput.

Of course, the human element remains unpredictable. Paul Fermor from IBM Automation warned that traditional automation thinking often underestimates the complexity of AI adoption. He termed this the “illusion of AI readiness”. Jena Miller reinforced this point, noting that strategies must be human-centred to ensure adoption. If the workforce does not trust the tools, the technology yields no return.

Ravi Jay from Sanofi suggested that leaders need to ask operational and ethical questions early on in the process. Success depends on deciding where to build proprietary solutions versus where to buy established platforms.

The sessions from day one of the co-located events indicate that, while technology is moving toward autonomous agents, deployment requires a solid data foundation.

CIOs should focus on establishing data governance frameworks that support retrieval-augmented generation. Network infrastructure must be evaluated to ensure it supports the latency requirements of agentic workloads. Finally, cultural adoption strategies must run parallel to technical implementation.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

From blogosphere to the AI & Big Data Expo: Rackspace and operational AI
https://www.artificialintelligence-news.com/news/combing-the-rackspace-blogfiles-for-operational-ai-pointers/ (4 February 2026)

In its recent blog output, Rackspace refers to bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its own effort.

One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defence centre. With security teams working amid a flood of alerts and logs, standard detection engineering doesn’t scale when it depends on manually written security rules. Rackspace says its RAIDER system unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” in line with known frameworks such as MITRE ATT&CK. The company claims it’s cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters.
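
Rackspace hasn’t published RAIDER’s internals, but the shape of automated detection engineering can be illustrated. In the hypothetical Python sketch below, the ThreatIntel fields and templating are invented; the output loosely follows Sigma-rule conventions with MITRE ATT&CK tagging, and in a RAIDER-like system an LLM would propose the detection logic itself:

```python
# Hypothetical sketch of automated detection-rule generation: threat intel
# in, platform-ready rule skeleton out. The schema is Sigma-like but
# illustrative only; RAIDER's real format is not public.
from dataclasses import dataclass

@dataclass
class ThreatIntel:
    name: str
    attack_technique: str       # MITRE ATT&CK technique ID, e.g. "T1059.001"
    indicators: dict[str, str]  # field -> suspicious value

def generate_rule(intel: ThreatIntel) -> dict:
    # In a RAIDER-like system an LLM proposes the logic; here we template it.
    return {
        "title": f"Detect {intel.name}",
        "tags": [f"attack.{intel.attack_technique.lower()}"],
        "detection": {
            "selection": intel.indicators,
            "condition": "selection",
        },
        "status": "experimental",  # a human reviewer promotes it to stable
    }

intel = ThreatIntel(
    name="Suspicious PowerShell download cradle",
    attack_technique="T1059.001",
    indicators={"CommandLine|contains": "DownloadString"},
)
print(generate_rule(intel))
```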

The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and many repeating tasks, yet “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as preventing senior engineers from being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail, as teams discover they have modernised infrastructure but not operating practices.

Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry (plus historical data) is used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace is tying it to managed services delivery, suggesting the company uses AI to reduce the cost of labour in operational pipelines in addition to the more familiar use of AI in customer-facing environments.

In a post describing AI-enabled operations, the company stresses the importance of a focused strategy, governance, and operating models. It specifies the machinery it needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning, or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware.

The company has noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but having it writ large by a technology-first major player is illustrative of the issues faced by many enterprise-scale AI deployments.

A company of even greater size, Microsoft, is working to coordinate autonomous agents’ work across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. However, it’s noteworthy that Rackspace makes the same point about Redmond: productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations.

Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. Its future plans can perhaps be discerned in a January article on the company’s blog concerning private cloud AI trends. In it, the author argues that inference economics and governance will drive architecture decisions well into 2026, anticipating ‘bursty’ exploration in public clouds while inference tasks move into private clouds on the grounds of cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty.
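
The arithmetic behind that split is simple to sketch. Every figure below is hypothetical, but the break-even calculation shows why steady inference volumes favour private capacity while bursty experimentation stays on pay-as-you-go public clouds:

```python
# Back-of-envelope sketch of the inference economics described above.
# Every number is hypothetical; plug in your own quotes.
API_COST_PER_1K_TOKENS = 0.002        # public cloud, pay-as-you-go (USD)
PRIVATE_MONTHLY_FIXED = 8_000.0       # amortised private GPU capacity (USD/month)
PRIVATE_COST_PER_1K_TOKENS = 0.0004   # marginal power/ops cost in-house

def monthly_cost_public(tokens_k: float) -> float:
    return tokens_k * API_COST_PER_1K_TOKENS

def monthly_cost_private(tokens_k: float) -> float:
    return PRIVATE_MONTHLY_FIXED + tokens_k * PRIVATE_COST_PER_1K_TOKENS

# Break-even volume: fixed cost divided by the per-unit saving.
break_even_k = PRIVATE_MONTHLY_FIXED / (API_COST_PER_1K_TOKENS - PRIVATE_COST_PER_1K_TOKENS)
print(f"Private capacity pays off above ~{break_even_k:,.0f}k tokens/month")

for volume_k in (1_000_000, 10_000_000):  # thousands of tokens per month
    print(volume_k, monthly_cost_public(volume_k), monthly_cost_private(volume_k))
```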

For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction and still be wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where data governance makes strict oversight necessary, and consider where inference costs might be reduced by bringing some processing in-house.

(Image source: Pixabay)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’
https://www.artificialintelligence-news.com/news/ronnie-sheth-ceo-senen-group-why-now-is-the-time-for-enterprise-ai-to-get-practical/ (3 February 2026)

Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.

Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.

That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.

“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.

Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.

“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.

“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.

With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.

It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”

Watch the full video conversation with Ronnie Sheth below:

Apptio: Why scaling intelligent automation requires financial rigour
https://www.artificialintelligence-news.com/news/apptio-why-scaling-intelligent-automation-requires-financial-rigour/ (3 February 2026)

Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.

The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.

“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.

This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”

The unit economics of scaling intelligent automation

Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.

“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”

Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.

To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
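
As a minimal sketch of the discipline Holmes describes, the snippet below tracks cost per transaction across scaling stages and flags the pattern he warns about. The figures are invented for illustration:

```python
# Hypothetical sketch of the unit-economics check: does cost per transaction
# fall, or quietly rise, as volume scales?
monthly = [
    # (stage, total_platform_cost_usd, transactions_processed)
    ("pilot",  4_000,  20_000),
    ("month3", 15_000, 120_000),
    ("month6", 60_000, 380_000),
]

previous = None
for label, cost, txns in monthly:
    unit = cost / txns
    trend = ""
    if previous is not None:
        trend = "improving" if unit < previous else "WORSENING - investigate"
    print(f"{label}: ${unit:.4f}/transaction {trend}")
    previous = unit
```

Run on these numbers, the check shows the pilot-to-month-3 improvement, then flags month 6, where volume grew but unit cost rose: exactly the flawed scaling pattern Holmes describes.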

Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”

However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”

Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.

“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
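
A deployment-time gate of that kind can be sketched in a few lines, assuming a plan exported with terraform show -json. The cost-centre tagging policy is invented for illustration; real FinOps tooling would layer cost estimates on top:

```python
import json
import sys

# Minimal policy gate over a Terraform plan exported with:
#   terraform plan -out=plan.out && terraform show -json plan.out > plan.json
# The policy (every new resource must carry a cost-centre tag) is
# illustrative only.
REQUIRED_TAG = "cost_centre"

def violations(plan: dict) -> list[str]:
    bad = []
    for rc in plan.get("resource_changes", []):
        if "create" not in rc["change"]["actions"]:
            continue  # only gate newly created resources
        tags = (rc["change"].get("after") or {}).get("tags") or {}
        if REQUIRED_TAG not in tags:
            bad.append(rc["address"])
    return bad

if __name__ == "__main__":
    plan = json.load(open(sys.argv[1]))
    missing = violations(plan)
    if missing:
        print("Blocked - untagged resources:", ", ".join(missing))
        sys.exit(1)  # fail the pipeline before anything is deployed
    print("Policy check passed")
```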

When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.

“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”

The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.

“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”
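
The mechanics of that roll-up can be shown with a toy example. The taxonomy entries below are invented, but they illustrate how raw cost lines map to IT towers and towers map to the business capabilities on the “bill”:

```python
# Toy roll-up in the spirit of the TBM taxonomy: cost lines map to IT
# towers, towers map to business capabilities. All entries are invented.
cost_lines = [
    ("vm-cluster-01", "Compute", 12_000),
    ("object-storage", "Storage", 3_500),
    ("automation-devs", "Labour", 22_000),
]
tower_to_capability = {
    "Compute": "Customer Onboarding",
    "Storage": "Customer Onboarding",
    "Labour": "Claims Processing",
}

bill: dict[str, float] = {}
for _, tower, cost in cost_lines:
    capability = tower_to_capability[tower]
    bill[capability] = bill.get(capability, 0) + cost

# The "detailed bill" a business user sees, without the IT layers beneath it.
for capability, total in bill.items():
    print(f"{capability}: ${total:,.0f}")
```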

Addressing legacy debt and budgeting for the long-term

Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”

A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.

“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”

In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.

Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.

Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.

“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.

Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.

IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.

See also: Klarna backs Google UCP to power AI agent payments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

ThoughtSpot: On the new fleet of agents delivering modern analytics
https://www.artificialintelligence-news.com/news/thoughtspot-on-the-new-fleet-of-agents-delivering-modern-analytics/ (2 February 2026)

If you are a data and analytics leader, then you know agentic AI is fuelling unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is providers like ThoughtSpot are able to assist, with the company in its own words determined to ‘reimagin[e] analytics and BI from the ground up’.

“Certainly, agentic systems really are shifting us into very new territory,” explains Jane Smith, field chief data and AI officer at ThoughtSpot. “They’re shifting us away from passive reporting to much more active decision making.

“Traditional BI waits for you to find an insight,” adds Jane. “Agentic systems are proactively monitoring data from multiple sources 24/7; they’re diagnosing why changes happened; they’re triggering the next action automatically.

“We’re getting much more action-oriented.”

Alongside moving from passive to active, there are two other ways in which Jane sees this change taking place in BI. There is a shift towards the ‘true democratisation of data’ on one hand, but on the other is the ‘resurgence of focus’ on the semantic layer. “You cannot have an agent taking action in the way I just described when it doesn’t strictly understand business context,” says Jane. “A strong semantic layer is really the only way to make sense… of the chaos of AI.”

ThoughtSpot has a fleet of agents to take action and move the needle for customers. In December, the company launched four new BI agents, with the idea that they work as a team to deliver modern analytics.

Spotter 3, the latest iteration of an agent that debuted towards the end of 2024, is the star. It is conversant with applications like Slack and Salesforce, and can not only answer questions but also assess the quality of its answers and keep trying until it gets the right result.

“It leverages the [Model Context] protocol, so you can ask your questions to your organisation’s structured data – everything in your rows, your columns, your tables – but also incorporate your unstructured data,” says Jane. “So, you can get really context-rich answers to questions, all through our agent, or if you wish, through your own LLM.”
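
Spotter’s internals aren’t public, but the generate-assess-retry behaviour described here follows a recognisable pattern. A generic sketch, with stub functions standing in for the model calls:

```python
# Generic generate-assess-retry loop of the kind described for Spotter 3.
# generate() and assess() are stubs standing in for LLM calls; Spotter's
# actual internals are not public.
import random

def generate(question: str, attempt: int) -> str:
    return f"answer v{attempt} to '{question}'"

def assess(question: str, answer: str) -> float:
    # A real system might have a second model grade grounding and relevance.
    return random.random()

def answer_with_retries(question: str, threshold: float = 0.8, max_attempts: int = 5) -> str:
    best_answer, best_score = "", -1.0
    for attempt in range(1, max_attempts + 1):
        candidate = generate(question, attempt)
        score = assess(question, candidate)
        if score > best_score:
            best_answer, best_score = candidate, score
        if score >= threshold:
            return candidate       # good enough: stop early
    return best_answer             # otherwise, best effort so far

print(answer_with_retries("What drove the Q3 revenue dip?"))
```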

With this power, however, comes responsibility. As ThoughtSpot’s recent eBook exploring data and AI trends for 2026 notes, the C-suite needs to work out how to design systems so every decision – be it human or AI – can be explained, improved, and trusted.

ThoughtSpot calls this emerging architecture ‘decision intelligence’ (DI). “What we’ll see a lot of, I think, will be decision supply chains,” explains Jane. “Instead of a one-off insight, I think what we’re going to see is decisions… flow through repeatable stages, data analysis, simulation, action, feedback, and these are all interactions between humans and machines that will be logged in what we can think of as a decision system of record.”

What would this look like in practice? Jane offers an example from a clinical trial in the pharma industry. “The system would log and version, really, every step of how a patient is chosen for a clinical trial; how data from a health record is used to identify a candidate; how that decision was simulated against the trial protocol; how the matching occurred; how potentially a doctor ultimately recommended this patient for the trial,” she says.

“These are processes that can be audited, they can be improved for the following trial. But the very meticulous logging of every element of the flow of this decision into what we think of as a supply chain is a way that I would visualise that.”
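
A decision system of record can be pictured as an append-only, versioned log. The sketch below is illustrative only; the stage names are borrowed from Jane’s description and the schema is invented:

```python
# Sketch of a "decision system of record": every stage of a decision is
# logged and versioned so the whole supply chain can be audited later.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    steps: list[dict] = field(default_factory=list)

    def log(self, stage: str, actor: str, detail: str) -> None:
        # Append-only: nothing is overwritten, so the trail stays auditable.
        self.steps.append({
            "version": len(self.steps) + 1,
            "stage": stage,   # e.g. analysis, simulation, action, feedback
            "actor": actor,   # human or machine participant
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

record = DecisionRecord("trial-candidate-0042")
record.log("analysis", "matching-model", "Candidate flagged from health record")
record.log("simulation", "protocol-engine", "Checked against trial protocol v7")
record.log("action", "dr-jones", "Doctor recommended patient for trial")
for step in record.steps:
    print(step)
```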

ThoughtSpot is participating at the AI & Big Data Expo Global, in London, on February 4-5. You can watch the full interview with Jane Smith below:

Photo by Steve Johnson on Unsplash

Franny Hsiao, Salesforce: Scaling enterprise AI
https://www.artificialintelligence-news.com/news/franny-hsiao-salesforce-scaling-enterprise-ai/ (28 January 2026)

Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance.

Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.

The ‘pristine island’ problem of scaling enterprise AI

Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale.

“The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end to end governance from the start,” Hsiao explains.

“Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.”

When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable—and, more importantly, untrustworthy.”

Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”

Engineering for perceived responsiveness

As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency.

Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao.

“This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.”

Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust.

“This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”
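
The underlying pattern – streaming partial output interleaved with progress events – is straightforward to sketch. This is not the Agentforce API, just a generic illustration of the technique:

```python
# Generic sketch of the "perceived responsiveness" pattern: stream partial
# output and surface reasoning-progress events while heavy work continues.
import time
from typing import Iterator

def reason_and_stream(question: str) -> Iterator[tuple[str, str]]:
    yield ("progress", "Retrieving records...")
    time.sleep(0.2)                        # stand-in for a tool call
    yield ("progress", "Reasoning over 3 sources...")
    for token in f"The answer to '{question}' is 42.".split():
        time.sleep(0.05)                   # stand-in for model latency
        yield ("token", token)             # user sees words as they arrive

for kind, payload in reason_and_stream("What is the meaning of life?"):
    if kind == "progress":
        print(f"[{payload}]")              # the spinner / progress indicator
    else:
        print(payload, end=" ", flush=True)
print()
```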

Offline intelligence at the edge

For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao.

Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength.

“A technician can photograph a faulty part, error code, or serial number while offline. Then an on-device LLM can then identify the asset or error, and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao.

Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.”
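
A minimal sketch of that offline-first pattern follows: work is recorded locally regardless of connectivity, and a sync step drains the queue once a connection returns. The queue and the “upload” call are hypothetical stand-ins for the real sync machinery:

```python
# Sketch of the offline-first pattern: work is captured locally while
# disconnected, then synced to the cloud when a connection returns.
import json

class OfflineQueue:
    def __init__(self) -> None:
        self.pending: list[dict] = []

    def record(self, event: dict) -> None:
        # Always write locally first, so work continues with no signal.
        self.pending.append(event)

    def sync(self, connected: bool) -> int:
        if not connected:
            return 0
        synced = len(self.pending)
        for event in self.pending:
            print("uploading:", json.dumps(event))  # stand-in for a cloud API call
        self.pending.clear()                        # cloud is the source of truth again
        return synced

queue = OfflineQueue()
queue.record({"photo": "part-serial-8831.jpg", "diagnosis": "worn bearing"})
queue.record({"step": "replaced part", "duration_min": 35})
print("synced while offline:", queue.sync(connected=False))
print("synced after reconnect:", queue.sync(connected=True))
```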

Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”

High-stakes gateways

Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.”

Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”:

“This includes specific action categories, including any ‘CUD’ (Creating, Uploading, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.”

This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation.
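
Reduced to its essentials, a high-stakes gateway is a routing rule: risky action categories are held for a person, everything else flows through. The sketch below is invented for illustration, with category names mirroring the examples in the quote:

```python
# Sketch of a "high-stakes gateway": agent actions in risky categories are
# held for human confirmation, everything else flows through. The categories
# mirror the 'CUD' examples in the quote; the enforcement logic is invented.
HIGH_STAKES = {"create", "upload", "delete", "customer_contact"}

def execute(action: str, category: str, human_approves=None) -> str:
    if category in HIGH_STAKES:
        if human_approves is None:
            return f"HELD for review: {action}"       # queued for a person
        if not human_approves(action):
            return f"REJECTED by reviewer: {action}"  # feedback the agent can learn from
    return f"EXECUTED: {action}"

# A low-stakes read goes straight through; a delete waits for a human.
print(execute("fetch account balance", "read"))
print(execute("delete customer record 123", "delete"))
print(execute("delete customer record 123", "delete",
              human_approves=lambda a: input(f"Approve '{a}'? [y/N] ").lower() == "y"))
```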

Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic.

“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao.

This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking.

“Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.

Standardising agent communication

As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need common language,” argues Hsiao.

Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent to Agent Protocol).

“We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.”

However, communication is useless if the agents interpret data differently. To solve for fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”

The future enterprise AI scaling bottleneck: agent-ready data

Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult.

Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.”

“Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes.

Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts.

See also: Databricks: Enterprise AI adoption shifts to agentic systems

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Masumi Network: How AI-blockchain fusion adds trust to burgeoning agent economy
https://www.artificialintelligence-news.com/news/masumi-network-how-ai-blockchain-fusion-adds-trust-to-burgeoning-agent-economy/ (28 January 2026)

2026 will see forward-thinking organisations building out their squads of AI agents across roles and functions. But amid the rush, there is another aspect to consider.

One of IDC’s enterprise technology predictions for the coming five years, published in October, was fascinating. “By 2030, up to 20% of [global 1000] organisations will have faced lawsuits, substantial fines, and CIO dismissals, due to high-profile disruptions stemming from inadequate controls and governance of AI agents,” the analyst noted.

How do you therefore put guardrails in place – and how do you ensure these agents work together and, ultimately, do business together? Patrick Tobler, founder and CEO of blockchain infrastructure platform provider NMKR, is working on a project which aims to solve this – by fusing agentic AI and decentralisation.

The Masumi Network, born out of a collaboration between NMKR and Serviceplan Group, launched in late 2024 as a framework-agnostic infrastructure which ‘empowers developers to build autonomous agents that collaborate, monetise services, and maintain verifiable trust.’

“The core thesis of Masumi is that there’s going to be billions of different AI agents from different companies interacting with each other in the future,” explains Tobler. “The difficult part now is – how do you actually have agents from different companies that can interact with each other and send money to each other as well, across these different companies?”

Take travel as an example. You want to attend an industry conference, so your hotel booking agent buys a plane ticket from your airline agent. The entire experience and transaction should be seamless – but that requires implicit trust.

“Masumi is a decentralised network of agents, so it’s not relying on any centralised payment infrastructure,” says Tobler. “Instead, agents are equipped with wallets and can send stablecoins from one agent to another and, because of that, interacting with each other in a completely safe and trustless manner.”

Tobler, having spent in his own words ‘a lot of time’ in crypto, came to believe that its benefits were being aimed at the wrong audience.

“I think there’s a lot of these problems that we have solved in crypto for humans, and then I came to this conclusion that maybe we’ve been solving them for the wrong target audience,” he explains. “Because for humans, using crypto and wallets and blockchains, all that kind of stuff is extremely difficult; the user experience is not great. But for agents, they don’t care if it’s difficult to use. They just use it, and it’s very native to them.

“So all these issues that are now arising with agents having to interact with millions, or maybe even billions, of agents in the future – these problems have all already been solved with crypto.”

Tobler is attending AI & Big Data Expo Global as part of Discover Cardano; NMKR started on the Cardano blockchain, while Masumi is built completely on Cardano. He says he is looking forward to speaking with businesses that are ‘hearing a lot about AI but aren’t really using it much besides ChatGPT’.

“I want to understand from them what they are doing, and then figure out how we can help them,” he says. “That’s most often the thing missing from traditional tech startups. We’re all building for our own bubble, instead of actually talking to the people that would be using it every day.”

Discover Cardano is exhibiting at the AI & Big Data Expo Global, in London on February 4-5. Watch the full video interview with NMKR’s Patrick Tobler below:

Photo by Google DeepMind

Lowering the barriers databases place in the way of strategy, with RavenDB
https://www.artificialintelligence-news.com/news/lowering-the-barriers-databases-place-in-the-way-of-strategy-with-ravendb/ (27 January 2026)

Of the performance, flexibility, and security that database technologies promise, most professionals would be happy to get two of the three – and they might expect to accept some compromises even then. Systems optimised for speed demand manual tuning, while flexible platforms can impose costs when early designs become constraints. Security is sometimes, sadly, a bolt-on, with DBAs relying on internal teams’ skills and knowledge not to introduce breaking changes.

RavenDB, however, exists because its founder saw the cumulative costs of those common trade-offs and the inherent problems stemming from them. He wanted a database system that didn’t force developers and administrators to choose.

Abstracting away complexity

Oren Eini, RavenDB’s founder and CTO, was working as a freelance database performance consultant nearly two decades ago. In an exclusive interview, he recounted how he encountered many capable teams “digging themselves into a hole” as the systems in their care grew in complexity. The problems he was presented with didn’t stem from developers lacking the required skills, but from system architecture: databases tend to guide developers towards fragile designs, then punish them for following those paths, he says. RavenDB began as a way to reduce the friction created when the unstoppable force of what’s required meets the immovable mountain of the database schema.

The platform’s emphasis is on performance and adaptability without (ironically) at some stage requiring the services of people like Oren. Armed with a bag full of experience and knowledge, he formed RavenDB, which has now been shipping for more than fifteen years – well before the current interest in AI-assisted development.

The bottom line is that over time, the RavenDB database adapts to what the organisation cares about, rather than what it guessed it might care about when the database was first spun up. “When I talk to business people,” Eini says, “I tell them I take care of data ownership complexity.”

For example, instead of expecting developers or DBAs to anticipate every possible query pattern, RavenDB observes queries as they are executed. If it detects that a query would benefit from an index, it creates one in the background, with minimal overhead on existing processing. This contrasts with most relational databases, where schema and indexing strategies are set by the initial developers and are difficult to alter later, regardless of how the organisation may have changed.
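
The idea is easier to see in miniature. The toy class below observes which field a query filters on and builds an index for it on first use. RavenDB’s real engine is vastly more sophisticated (background builds, compound indexes, query statistics), and this is not its API:

```python
# Toy illustration of query-driven auto-indexing: observe predicates as
# queries arrive and build indexes on demand rather than up front.
class AutoIndexingStore:
    def __init__(self, docs: list[dict]) -> None:
        self.docs = docs
        self.indexes: dict[str, dict] = {}   # field -> {value: [docs]}

    def query(self, field: str, value) -> list[dict]:
        if field not in self.indexes:
            # First time this predicate is seen: build an index for it.
            # (RavenDB does this in the background with minimal overhead.)
            index: dict = {}
            for doc in self.docs:
                index.setdefault(doc.get(field), []).append(doc)
            self.indexes[field] = index
        return self.indexes[field].get(value, [])

store = AutoIndexingStore([
    {"id": 1, "country": "US", "total": 120},
    {"id": 2, "country": "DE", "total": 80},
    {"id": 3, "country": "US", "total": 45},
])
print(store.query("country", "US"))   # builds the index, then answers
print(store.indexes.keys())           # the index now exists for next time
```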

Oren draws the comparison with pouring a building’s foundations before deciding where the doors and support columns might go. It’s an approach that can work, but when the business changes direction over the years, the cost of regretting those early decisions can be alarming.

Oren Eini (source: RavenDB)

Speaking ahead of the company’s appearance at the upcoming TechEx Global event in London this year (February 4 & 5, Olympia), he cited the example of a European client that struggled to expand into US markets because its database assumed a single, simple VAT rate consigned to one field – a schema unsuited to the complexities of state and federal sales taxes. Through seemingly simple decisions made in the past (and perhaps not given much thought – European VAT is fairly standard), the client had stored up financial pain and technical debt for the next generation.

Much of RavenDB’s attractiveness lies in practical details and small tweaks that make databases more performant and easier to work with. Pagination, for example, requires two database calls in most systems (one to fetch a page of results, another to count matching records); RavenDB returns both in a single query. Individually, such optimisations may appear minor, but at scale they compound, Oren says: “If you smooth down the friction everywhere you go, you end up with a really good system where you don’t have to deal with friction.”
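
As a rough illustration of that single-call pagination, here is a hedged Python sketch: RavenDB clients expose a per-query statistics object carrying the total match count alongside the page of results. The statistics hook and its `total_results` attribute are assumptions based on the client’s documented pattern.

```python
from ravendb import DocumentStore

store = DocumentStore(urls=["http://localhost:8080"], database="shop")
store.initialize()

class Order:
    # Permissive mapping class so the client can materialise documents.
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

stats = {}

with store.open_session() as session:
    page = list(
        session.query(object_type=Order)
        .statistics(lambda s: stats.update(total=s.total_results))  # assumed hook
        .skip(0)
        .take(25)
    )

# One round trip returned both the rows and the total match count.
print(len(page), "of", stats.get("total"))
```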

Compounded removal of friction improves performance and makes developers’ jobs simpler. Related data is embedded or included without the penalties associated with table joins in relational databases, so complex queries complete in a single round trip. Software engineers don’t need to be database specialists: they simply send SQL-like queries to RavenDB’s APIs.
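
A hedged sketch of that “include” pattern: the related documents ride along in the same response, so a follow-up load is served from the session rather than the server. Collection and property names are illustrative, and `number_of_requests` is an assumed mirror of the .NET client’s request counter.

```python
from ravendb import DocumentStore

store = DocumentStore(urls=["http://localhost:8080"], database="shop")
store.initialize()

with store.open_session() as session:
    # "include Company" ships each order's referenced Company document
    # in the same response: no join, no second round trip.
    orders = list(session.advanced.raw_query("from Orders include Company"))

    # Loading an included document now hits the session cache, not the
    # server, so the request counter stays put (assumed property name).
    print(session.advanced.number_of_requests)
```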

Compared to other NoSQL databases, RavenDB provides full ACID transactions by default and reduced operational complexity: many of its baked-in features (ETL pipelines, subscriptions, full-text search, counters, time series, and so on) reduce the need for external systems.

Compared with teams wrangling a competing database system and its necessary adjuncts, both developers and admins spend less time sweating the details with RavenDB. That’s good news, not least for those who hold an organisation’s purse strings.

Scaling to fit the purpose

RavenDB is also built to scale as painlessly as it handles complex queries. It can create multi-node clusters on demand, supporting huge numbers of concurrent users, and such clusters are created without time-consuming manual configuration. “With RavenDB, this is normal cost of business,” he says.
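
From the client’s perspective, scaling out is equally low-ceremony. A sketch, again with placeholder node URLs: RavenDB clients are cluster-aware, so handing the store every node’s address is enough for it to load-balance and fail over on its own.

```python
from ravendb import DocumentStore

# Cluster-aware client: given each node's URL (placeholders below),
# the client distributes requests and fails over automatically.
store = DocumentStore(
    urls=[
        "http://node-a:8080",
        "http://node-b:8080",
        "http://node-c:8080",
    ],
    database="shop",
)
store.initialize()
```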

In February this year, RavenDB Cloud announced version 7.2, and – this being 2026 – mention needs to be made of AI. RavenDB’s AI Assistant is, “in effect, […] a virtual DBA that comes inside of your database,” he says. The key word is inside. It’s designed for developers and administrators, not end users, answering their questions about indexing, storage usage, or system behaviour.

AI as a professional tool

He’s sceptical about giving AIs unconfined access to any data store. Allowing an AI to act as a generic gatekeeper to sensitive information creates unavoidable security risks, because such systems are difficult to constrain reliably.

For the DBA and software developer, it’s another story – AI is a useful tool that operates as a helping hand, configuring and addressing the data. RavenDB’s AI assistant inherits the permissions of the user invoking it, having no privileged access of its own. “Anything it knows about your RavenDB instance comes because, behind the scenes, it’s accessing your system with your permissions,” he says.

The company’s AI strategy is to provide developers and admins with opinionated features: generating queries, explaining indexes, helping with schema exploration, and answering operational questions, with calls bounded by operator validation and privileges.

Teams developing applications with RavenDB get support for vector search, native embeddings, server-side indexing, and agnostic integration with external LLMs. This, Oren says, lets organisations deliver useful AI-driven features in their applications quickly, without exposing the business to risk and compliance issues.
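
As a heavily hedged sketch of what that looks like in application code: recent RavenDB releases surface vector search through RQL. The `vector.search` and `embedding.text` functions below are assumptions based on 7.x-era syntax; check the documentation for the exact names in your server version.

```python
from ravendb import DocumentStore

store = DocumentStore(urls=["http://localhost:8080"], database="shop")
store.initialize()

with store.open_session() as session:
    # Semantic lookup over a text field; the server handles embedding
    # and indexing. Function names in the RQL are assumptions (above).
    matches = list(
        session.advanced.raw_query(
            "from Products where vector.search(embedding.text(Description), $q)"
        ).add_parameter("q", "waterproof hiking boots")
    )
    print(len(matches))
```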

Security and risk

Security and risk comprise one of the areas where RavenDB draws a clear line between itself and its competitors. We touched on the recent MongoBleed vulnerability, which exposed data from unauthenticated MongoDB instances due to an interaction between compression and authentication code. Oren describes the issue as an architectural failure caused by mixing general-purpose and security-critical code paths. “The reason this is a vulnerability,” he says, “is specifically the fact that you’re trying to mix concerns.”

RavenDB uses established cryptographic infrastructure to handle authentication before any database logic is invoked. And even if a flaw emanated from elsewhere, the attack surface would be significantly smaller because unauthenticated users never reach the general code paths: that architectural separation limits the blast radius.

While the internals of RavenDB are highly technical and specialised, business decision-makers can readily appreciate that delays caused by schema changes, performance tuning, or infrastructure work have significant economic impact. But RavenDB’s malleability and speed also remove what Oren describes as the “no, you can’t do that” conversations.

Organisations running RavenDB reduce their dependency on specialist expertise and gain the ability to respond to changing business needs much more quickly. “[The database’s] role is to bring actual business value,” Eini says, arguing that infrastructure should, in operational contexts, fade into the background. As it stands, it often determines the scope of strategy discussions.

Migration and getting started

RavenDB uses a familiar SQL-like query language, and most teams need a day at most to get up to speed. Where friction does appear, Oren suggests, it is often due to assumptions carried over from other platforms around security and high availability; for RavenDB, these are built into the design, so they don’t create extra workload that needs to be factored in.
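
That short learning curve is easiest to show with a query. A sketch with illustrative field names: RQL keeps SQL’s shape (from / where / order by / select), just pointed at collections of documents rather than tables.

```python
from ravendb import DocumentStore

store = DocumentStore(urls=["http://localhost:8080"], database="shop")
store.initialize()

# RQL reads much like SQL; the collection and field names here are
# illustrative, not a fixed schema.
rql = """
from Orders
where Total > 100 and ShipTo.Country = 'UK'
order by OrderedAt desc
select ShipTo.City, Total
"""

with store.open_session() as session:
    rows = list(session.advanced.raw_query(rql))
    print(len(rows))
```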

Born of its founder’s first-hand experience of operational pain, RavenDB’s difference stems from accumulated design decisions: background indexing, query-aware optimisation, the separation of security and authentication concerns, and, latterly, constraints on AI tooling. In everyday use, developers experience fewer sharp edges; in the longer term, business leaders see a reduction in costs, especially in times of change. The combination is compelling enough to displace entrenched platforms in many contexts.

To learn more, you can speak to RavenDB representatives at TechEx Global, held at Olympia, London, on February 4 and 5. If what you’ve read here has piqued your interest, head over to the company’s website.

(Image source: “#316 AVZ Database” by Ralf Appelt is licensed under CC BY-NC-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Lowering the barriers databases place in the way of strategy, with RavenDB appeared first on AI News.

Expereo: Enterprise connectivity amid AI surge with ‘visibility at the speed of life’
https://www.artificialintelligence-news.com/news/expereo-enterprise-connectivity-amid-ai-surge-with-visibility-at-the-speed-of-life/ – Mon, 26 Jan 2026
AI continues to reshape technology and business; yet for the network, enterprise connectivity in the AI age means being always-on, and extra vigilant about sovereignty and security besides.

This means that speed is not the only requirement. As Julian Skeels, chief digital officer at Expereo notes, it is more about ‘certainty.’ “AI workloads are distributed, they’re continuous, they’re incredibly latency-sensitive. Inference, monitoring, retrieval and remediation never stop, so that changes the network’s role,” says Skeels.

“In the world of AI, networking actually becomes a system dependency,” he adds. “When the network degrades, the application degrades immediately.

“An AI-ready network needs to make data movement deterministic. It’s not just about it being fast; it’s about it being predictable, and observable, and governable, and resilient – and to do all those things under continual change.”

Many CIOs, however, are struggling right now with what Skeels describes as ‘connectivity everywhere but visibility nowhere.’

“They’re dealing with hybrid networks, multiple clouds, multiple providers and portals that create a constant operational drag to their teams,” says Skeels. “What they want is clarity and control – not more tools.”

Skeels arrived at Expereo last year with a wealth of cross-industry experience in product and digital transformation initiatives under his belt. He found an industry ripe for accelerative change, and a company determined to lead the way – and to ensure that pricing global connectivity takes minutes rather than weeks.

“When I came to Expereo, I saw that global connectivity has, I would say, largely resisted real digital transformation for a long time,” notes Skeels. “Most customers will still experience it as slow, and manual, and opaque, and fragmented across the dozens of providers and portals they need to work with.

“We believe, though, that with emerging technologies such as agentic AI, that’s finally changing,” adds Skeels. “Our ambition here is to make global connectivity as simple, and immediate, and transparent as cloud computing is for our customers.”

Enabling such change for customers requires that mix of speed and visibility – and this is where the expereoOne platform comes in, providing what the company calls ‘visibility at the speed of life’ and giving customers a single, global view of what is being deployed, how it is performing, and what it costs.

Beyond visibility, customers also need proactivity. “We’re deeply integrated into our customers’ order management, their ITSM, their ERP systems, which makes working with Expereo at scale absolutely seamless,” Skeels explains.

“The key point is that better visibility isn’t about more dashboards. It’s about connecting network behaviour to their business outcomes in terms of resilience, security experience, and cost.”

Skeels is speaking at the Digital Transformation Expo Global on February 4-5 on designing the AI-ready network – and his session promises to subvert the usual advice for those in attendance. “I want to challenge a few things,” notes Skeels. “I want to ask people to consider even unlearning things they’ve learned in the past.

“A lot of what we’ve taken for granted about networks no longer holds in an AI world.”

Watch the full conversation between Julian Skeels and TechEx’s James Bourne below:

Photo by Pixabay

The post Expereo: Enterprise connectivity amid AI surge with ‘visibility at the speed of life’ appeared first on AI News.

Martin Frederik, Snowflake: Data quality is key to AI-driven growth
https://www.artificialintelligence-news.com/news/martin-frederik-snowflake-data-quality-key-ai-driven-growth/ – Tue, 23 Sep 2025
As companies race to implement AI, many are finding that project success hinges directly on the quality of their data. This dependency is causing many ambitious initiatives to stall, never making it beyond the experimental proof-of-concept stage.

So, what’s the secret to turning these experiments into real revenue generators? AI News caught up with Martin Frederik, regional leader for the Netherlands, Belgium, and Luxembourg at data cloud giant Snowflake, to find out.

“There’s no AI strategy without a data strategy,” Frederik says simply. “AI apps, agents, and models are only as effective as the data they’re built on, and without unified, well-governed data infrastructure, even the most advanced models can fall short.”

Improving data quality is key to AI project success

It’s a familiar story for many organisations: a promising proof-of-concept impresses the team but never translates into a tool that makes the company money. According to Frederik, this often happens because leaders treat the technology as the end goal.

Martin Frederik, regional leader for the Netherlands, Belgium, and Luxembourg at Snowflake

“AI is not the destination – it’s the vehicle to achieving your business goals,” Frederik advises.

When projects get stuck, it’s usually down to a few common culprits: the project isn’t truly aligned with what the business needs, teams aren’t talking to each other, or the data is a mess. It’s easy to get disheartened by statistics suggesting that 80% of AI projects don’t reach production, but Frederik offers a different perspective. This isn’t necessarily a failure, he suggests, but “part of the maturation process”.

For those who get the foundation right, the payoff is very real. A recent Snowflake study found that 92% of companies are already seeing a return on their AI investments. In fact, for every £1 spent, they’re getting back £1.41 in cost savings and new revenue. The key, Frederik repeats, is having a “secure, governed and centralised platform” for your data from the very beginning.

It’s not just about tech, it’s about people

Even with the best technology, an AI strategy can fall flat if the company culture isn’t ready for it. One of the biggest challenges is getting data into the hands of everyone who needs it, not just a select few data scientists. To make AI work at scale, you have to build strong foundations in your “people, processes, and technology.”

This means breaking down the walls between departments and making quality data and AI tools accessible to everyone.

“With the right governance, AI becomes a shared resource rather than a siloed tool,” Frederik explains. When everyone works from a single source of truth, teams can stop arguing about whose numbers are correct and start making faster and smarter decisions together.

The next leap: AI that reasons for itself

The true breakthrough we’re seeing now is the emergence of AI agents that can understand and reason over all kinds of data at once, regardless of structure or quality – from the neat rows and columns in a spreadsheet to the unstructured information in documents, videos, and emails. Considering that unstructured data makes up 80-90% of a typical company’s data, this is a huge step forward.

New tools are enabling staff, no matter their technical skill level, to simply ask complex questions in plain English and get answers directly from the data.
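
As a rough sketch of that plain-English pattern (illustrative only, not the specific tools Frederik describes): Snowflake’s Cortex functions, shipped with the Snowpark Python package, let code hand a natural-language question to a model running inside the account. The connection parameters below are placeholders, and model availability varies by account and region.

```python
from snowflake.snowpark import Session
from snowflake.cortex import Complete  # Cortex LLM functions ship with Snowpark

# Placeholder credentials: fill these in for a real account.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
}
session = Session.builder.configs(connection_parameters).create()

# A plain-English question answered by a model running next to the data.
answer = Complete(
    "mistral-large",  # model name is an example; availability varies
    "In one sentence, explain why data governance matters for AI agents.",
)
print(answer)
```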

Frederik explains that this is a move towards what he calls “goal-directed autonomy”. Until now, AI has been a helpful assistant you had to constantly direct. “You ask a question, you get an answer; you ask for code, you get a snippet,” he notes.

The next generation of AI is different. You can give an agent a complex goal, and it will figure out the necessary steps on its own, from writing code to pulling in information from other apps to deliver a complete answer. This will automate the most time-consuming parts of a data scientist’s job, like “tedious data cleaning” and “repetitive model tuning.”

The result? It frees up your brightest minds to focus on what really matters. This elevates your people “from practitioner to strategist” and allows them to drive real value for the business. That can only be a good thing.

Snowflake is a key sponsor of this year’s AI & Big Data Expo Europe and will have a range of speakers sharing their deep insights during the event. Swing by Snowflake’s booth at stand number 50 to hear more from the company about making enterprise AI easy, efficient, and trusted.

See also: Public trust deficit is a major hurdle for AI growth


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Martin Frederik, Snowflake: Data quality is key to AI-driven growth appeared first on AI News.
