Workforce & HR AI - AI News
https://www.artificialintelligence-news.com/categories/ai-in-action/workforce-hr-ai/ (Fri, 13 Feb 2026)

How e& is using HR to bring AI into enterprise operations
https://www.artificialintelligence-news.com/news/how-e-is-using-hr-to-bring-ai-into-enterprise-operations/ (Fri, 13 Feb 2026)

For many enterprises, the first real test of AI is not customer-facing products or flashy automation demos. It is the quiet machinery that runs the organisation itself. Human resources, with its mix of routine workflows, compliance needs, and large volumes of structured data, is emerging as one of the earliest areas where companies are pushing AI into day-to-day operations.

That shift is visible in how large employers are rethinking workforce systems. The telecommunications group e& began moving its human resources operations to what it describes as an AI-first model, covering roughly 10,000 employees across its organisation. The transition is built on Oracle Fusion Cloud Human Capital Management (HCM), running in an Oracle Cloud Infrastructure dedicated region. Details of the deployment were outlined in a recent Oracle announcement.

The change is less about introducing a single AI feature and more about restructuring how HR processes are handled. Automated and AI-driven tools are expected to help HR departments with recruitment screening, interview coordination, and employee learning recommendations. The stated goal is to standardise processes across regions and provide managers with faster access to workforce data and insights.

HR as an enterprise AI proving ground

From an enterprise perspective, HR is a logical entry point. Many HR tasks follow repeatable patterns: candidate matching, onboarding documentation, leave management, and training assignments. These workflows produce consistent data trails, which makes them easier to model and automate than loosely defined knowledge work. Moving such functions onto AI-supported systems allows organisations to test reliability, governance, and user acceptance in a controlled environment before expanding into more sensitive areas.

The infrastructure choice also indicates how enterprises are balancing innovation with compliance. Oracle claims that the system is deployed in a dedicated cloud region designed to address data sovereignty and regulatory requirements. For multinational corporations, workforce data sits at the intersection of privacy law, employment regulation, and corporate governance. Running AI tools in a controlled environment is part of how companies are trying to contain risk while experimenting with automation.

Governance, compliance, and internal risk management

The e& rollout reflects a broader pattern in enterprise AI adoption: internal transformation is often more achievable than external disruption. Customer-facing AI systems attract attention, but they introduce reputational and operational risk if they fail. HR platforms, by contrast, operate behind the scenes. Errors can still carry consequences, yet they are easier to monitor, audit, and correct within existing governance structures.

Industry research supports the idea that internal operations are becoming a primary testing ground. Deloitte’s 2026 State of AI in the Enterprise report found that organisations are increasingly shifting AI projects from pilot stages into production environments, with productivity and workflow automation cited as early areas of return. The report is based on a survey of more than 3,000 senior leaders involved in AI initiatives, including respondents in Southeast Asia. While the study spans multiple business functions, administrative and operational processes were repeatedly identified as practical entry points for scaled deployment.

Workforce systems also provide a natural setting for AI agents and assistants. HR teams handle frequent employee queries about policies, benefits, and training options. Embedding conversational tools into these workflows may reduce manual workload while giving employees faster access to information. According to Oracle’s description of the deployment, e& plans to introduce digital assistants designed to support candidate engagement and employee development tasks. Whether such tools deliver consistent value will depend on accuracy, oversight, and how well they integrate with existing HR processes.

Scaling AI inside the organisation

The lesson is not that HR automation is new, but that AI is changing the scope of what can be automated. Traditional HR software focused on record-keeping and workflow management. AI layers add predictive matching, pattern analysis, and decision support. That expansion raises familiar governance questions: data quality, bias, auditability, and employee trust.

There is also a workforce dimension. Automating parts of HR does not eliminate the need for human oversight; it changes where effort is concentrated. HR professionals may spend less time on routine coordination and more on policy interpretation, employee engagement, and exception handling. Enterprises adopting AI-driven systems will need clear escalation paths and review processes to avoid over-reliance on automated outputs.

What makes the current moment different is scale. Deployments that cover thousands of employees turn AI from an experiment into operational infrastructure. They force organisations to confront issues of reliability, training, and change management in real time. The systems must work consistently across jurisdictions, languages, and regulatory frameworks.

As enterprises look for low-risk entry points into AI, workforce operations are likely to remain high on the list. They combine structured data, repeatable workflows, and measurable outcomes — conditions that suit automation while still allowing room for human judgement. The experience of early adopters will shape how quickly other internal functions, from finance to procurement, follow a similar path.

(Photo by Zulfugar Karimov)

See also: Barclays bets on AI to cut costs and boost returns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events; click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post How e& is using HR to bring AI into enterprise operations appeared first on AI News.

The quiet work behind Citi’s 4,000-person internal AI rollout
https://www.artificialintelligence-news.com/news/the-quiet-work-behind-citi-4000-person-internal-ai-rollout/ (Wed, 21 Jan 2026)

For many large companies, artificial intelligence still lives in side projects. Small teams test tools, run pilots, and present results that struggle to spread beyond a few departments. Citi has taken a different path, where instead of keeping AI limited to specialists, the bank has spent the past two years pushing the technology into daily work in the organisation.

That effort has resulted in an internal AI workforce of roughly 4,000 employees, drawn from roles that range from technology and operations to risk and customer support. The figure was first reported by Business Insider, which detailed how Citi built its “AI Champions” and “AI Accelerators” programmes to encourage participation rather than central control.

The scale of integration is notable, as Citi employs around 182,000 people globally, and more than 70% of them now use firm-approved AI tools in some form, according to the same report. That level of use places Citi ahead of many peers that still restrict AI access to technical teams or innovation labs.

From central pilots to team-level adoption

Rather than start with tools, Citi focused on people. The bank invited employees to volunteer as AI Champions, giving them access to training, internal resources, and early versions of approved AI systems. These employees then supported colleagues in their own teams, acting as local points of contact rather than formal trainers.

The approach reflects a practical view of adoption. New tools often fail not because they lack features, but because staff do not know when or how to use them. By embedding support inside teams, Citi reduced the gap between experimentation and routine work.

Training played a central role. Employees could earn internal badges by completing courses or demonstrating how they used AI to improve their own tasks. The badges did not come with promotions or pay rises, but they helped create visibility and credibility in the organisation. According to Business Insider, this peer-driven model helped AI spread faster than top-down mandates.

Everyday use, with guardrails

Citi’s leadership has framed the effort as a response to scale, not novelty. With operations spanning retail banking, investment services, compliance, and customer support, small efficiency gains can add up quickly. AI tools are being used to summarise documents, draft internal notes, analyse data sets, and assist with software development. None of these uses are new on their own, but the difference lies in how they are applied.

The focus on everyday tasks also shapes Citi’s risk posture. The bank has limited employees to firm-approved tools, with guardrails around what data can be used and how outputs are handled. That constraint has slowed some experiments, but it has also made managers more comfortable allowing broader access. In regulated industries, trust often matters more than speed.

What Citi’s approach shows about scaling AI

The structure of Citi’s programme suggests a lesson for other large enterprises. AI adoption does not require every employee to become an expert. It requires enough people to understand the tools well enough to apply them responsibly and explain them to others. By training thousands instead of dozens, Citi reduced its reliance on a small group of specialists.

There is also a cultural signal at play. Encouraging employees from non-technical roles to participate sends a message that AI is not only for engineers or data scientists. It becomes part of how work gets done, similar to spreadsheets or presentation software in earlier decades.

That shift aligns with broader industry trends. Surveys from firms like McKinsey have shown that many companies struggle to move AI projects into production, often citing talent gaps and unclear ownership. Citi’s model sidesteps some of those issues by distributing ownership across teams, while keeping governance centralised.

Still, the approach is not without limits. Peer-led adoption depends on sustained interest, and not all teams move at the same pace. There is also the risk that informal support networks become uneven, with some groups benefiting more than others. Citi has tried to address this by rotating Champions and updating training content as tools change.

What stands out is the bank’s willingness to treat AI as infrastructure, not innovation. Instead of asking whether AI could transform the business, Citi asked where it could remove friction from existing work. That framing makes progress easier to measure and reduces pressure to produce dramatic results.

The experience also challenges a common assumption that AI adoption must start at the top. Citi’s senior leadership supported the effort, but much of the momentum came from employees who volunteered time to learn and teach. In large organisations, that bottom-up energy can be hard to generate, yet it often determines whether new technology sticks.

As more companies move from pilots to production, Citi’s experiment offers a useful case study. It shows that scale does not come from buying more tools, but from helping people feel confident using the ones they already have. For enterprises wondering why AI progress feels slow, the answer may lie less in strategy decks and more in how work actually gets done, one team at a time.

(Photo by Declan Sun)

See also: JPMorgan Chase treats AI spending as core infrastructure

The post The quiet work behind Citi’s 4,000-person internal AI rollout appeared first on AI News.

McKinsey tests AI chatbot in early stages of graduate recruitment
https://www.artificialintelligence-news.com/news/mckinsey-tests-ai-chatbot-in-early-stages-of-graduate-recruitment/ (Thu, 15 Jan 2026)

Hiring at large firms has long relied on interviews, tests, and human judgment. That process is starting to change. McKinsey has begun using an AI chatbot as part of its graduate recruitment process, signalling a shift in how professional services organisations evaluate early-career candidates.

The chatbot is being used during the initial stages of recruitment, where applicants are asked to interact with it as part of their assessment. Rather than replacing interviews or final hiring decisions, the tool is intended to support screening and evaluation earlier in the process. The move reflects a wider trend across large organisations: AI is no longer limited to research or client-facing tools, but is increasingly shaping internal workflows.

Why McKinsey is using AI in graduate hiring

Graduate recruitment is resource-heavy. Every year, large firms receive tens of thousands of applications, many of which must be assessed in short hiring cycles. Screening candidates for basic fit, communication skills, and problem-solving ability can take a long time, even before interviews begin.

Using AI at this stage offers a way to manage volume. A chatbot can interact with every applicant, ask consistent questions and collect organised responses. Human recruiters can then review that data, rather than requiring staff to manually screen every application from scratch.

For McKinsey, the chatbot is part of a larger assessment process that includes interviews and human judgment. According to the company, the tool helps in gathering more information early on, rather than making recruiting judgments on its own.
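The mechanics described above — a chatbot that asks every applicant the same questions and hands recruiters organised responses — can be pictured in a short sketch. This is a hypothetical illustration only: the question text, field names, and `run_screening` helper are invented for this example and are not McKinsey's system.

```python
from dataclasses import dataclass, field

# Hypothetical screening flow: fixed questions, structured answers.
# Nothing here reflects McKinsey's actual tool or criteria.
QUESTIONS = [
    ("motivation", "Why do you want to join the firm?"),
    ("problem_solving", "Describe a problem you broke into parts to solve."),
    ("communication", "Summarise a complex idea for a non-expert."),
]

@dataclass
class ScreeningRecord:
    """Organised responses a human recruiter can review later."""
    applicant_id: str
    answers: dict = field(default_factory=dict)

    def complete(self) -> bool:
        # Only fully answered records move to recruiter review.
        return all(key in self.answers for key, _ in QUESTIONS)

def run_screening(applicant_id: str, respond) -> ScreeningRecord:
    """Ask each question in a fixed order; `respond` stands in for the chat exchange."""
    record = ScreeningRecord(applicant_id)
    for key, question in QUESTIONS:
        record.answers[key] = respond(question)
    return record
```

The point of the sketch is the data shape, not the conversation: because every applicant answers the same keyed questions, recruiters review comparable records rather than free-form transcripts.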

Shifting the role of recruiters

Introducing AI into recruitment alters how hiring teams operate. Rather than focusing on early screening, recruiters can devote more time to assessing prospects who have already passed initial tests. In theory, that allows for more thoughtful interviews and deeper evaluation later in the process.

At the same time, it raises questions about oversight. Recruiters need to understand how the chatbot evaluates responses and what signals it prioritises. Without that visibility, there is a risk that decisions could lean too heavily on automated outputs, even if the tool is meant to assist rather than decide.

Professional services firms are typically wary of such adjustments. Their reputations rely heavily on talent quality, and any perception of unfair or flawed hiring practices carries risk. As a result, recruitment serves as a testing ground for AI use, as well as an area where controls are important.

Concerns around fairness and bias

Using AI in hiring is not without controversy. Critics have raised concerns that automated systems can reflect biases present in their training data or in how questions are framed. If not monitored closely, those biases can affect who progresses through the hiring process.

McKinsey has said it is mindful of these risks and that the chatbot is used alongside human review. Still, the move highlights a broader challenge for organisations adopting AI internally: tools must be tested, audited, and adjusted over time.

In recruitment, that includes checking whether certain groups are disadvantaged by how questions are asked or how responses are interpreted. It also means giving candidates clear information about how AI is used and how their data is handled.

How McKinsey’s AI hiring move fits a wider enterprise trend

The use of AI in graduate hiring is not unique to consulting. Large employers in finance, law, and technology are also testing AI tools for screening, scheduling interviews, and analysing written responses. What stands out is how quickly these tools are moving from experiments to real processes.

In many cases, AI enters organisations through small, contained use cases. Hiring is one of them. It sits inside the company, affects internal efficiency, and can be adjusted without changing products or services offered to clients.

That pattern mirrors how AI adoption is unfolding more broadly. Instead of sweeping transformations, many firms are adding AI to specific workflows where the benefits and risks are easier to manage.

What this signals for enterprises

McKinsey’s use of an AI chatbot in recruitment points to a practical shift in enterprise thinking. AI is becoming a tool for routine internal decisions, not just analysis or automation behind the scenes.

For other organisations, the lesson is less about copying the tool and more about approach. Introducing AI into sensitive areas like hiring requires clear boundaries, human oversight, and a willingness to review outcomes over time.

It also requires communication. Candidates need to know when they are interacting with AI and how that interaction fits into the overall hiring process. Transparency helps build trust, especially as AI becomes more common in workplace decisions.

As professional services firms continue to test AI in their own operations, recruitment offers an early view of how far they are willing to go. The technology may help manage scale and consistency, but responsibility for decisions still rests with people. How well companies balance those two will shape how AI is accepted inside the enterprise.

(Photo by Resume Genius)

See also: Allister Frost: Tackling workforce anxiety for AI integration success

The post McKinsey tests AI chatbot in early stages of graduate recruitment appeared first on AI News.

From cloud to factory – humanoid robots coming to workplaces
https://www.artificialintelligence-news.com/news/from-cloud-to-factory-humanoid-robots-coming-to-workplaces/ (Fri, 09 Jan 2026)

The Microsoft-Hexagon partnership may mark a turning point in the acceptance of humanoid robots in the workplace, as prototypes become operational realities.

The partnership announced this week between Microsoft and Hexagon Robotics marks an inflection point in the commercialisation of humanoid, AI-powered robots for industrial environments. The two companies will combine Microsoft’s cloud and AI infrastructure with Hexagon’s expertise in robotics, sensors, and spatial intelligence to advance the deployment of physical AI systems in real-world settings.

At the centre of the collaboration is AEON, Hexagon’s industrial humanoid robot, a device designed to operate autonomously in environments like factories, logistics hubs, engineering plants, and inspection sites.

The partnership will focus on multimodal AI training, imitation learning, real-time data management, and integration with existing industrial systems. Initial target sectors include automotive, aerospace, manufacturing, and logistics, the companies say. It’s in these industries where labour shortages and operational complexity are already constraining financial growth.

The announcement is a sign of a maturing ecosystem: the convergence of cloud platforms, physical AI, and robotics engineering is making humanoid automation commercially viable.

Humanoid robots out of the research lab

While humanoid robots have long been the subject of research at institutions and proud demonstrations at technology events, the last five years have seen a move to practical deployment in real-world working environments. The main change has been the combination of improved perception, advances in reinforcement and imitation learning, and the availability of scalable cloud infrastructure.

One of the most visible examples is Agility Robotics’ Digit, a bipedal humanoid robot designed for logistics and warehouse operations. Digit has been piloted in live environments by companies like Amazon, where it performs material-handling tasks including tote movement and last-metre logistics. Such deployments tend to focus on augmenting human workers rather than replacing them, with Digit handling more physically demanding tasks.

Similarly, Tesla’s Optimus programme has moved out of the phase where concept videos were all that existed, and is now undergoing factory trials. Optimus robots are being tested on structured tasks like part handling and equipment transport inside Tesla’s automotive manufacturing facilities. While still limited in scope, these pilots demonstrate the pattern of humanoid-like machines chosen over less anthropomorphic form-factors so they can operate in human-designed and -populated spaces.

Inspection, maintenance, and hazardous environments

Industrial inspection is emerging as one of the earliest commercially viable use cases for humanoid and quasi-humanoid robots. Boston Dynamics’ Atlas, while not yet a general-purpose commercial product, has been used in live industrial trials for inspection and disaster-response environments. It can navigate uneven terrain, climb stairs, and manipulate tools in places considered unsafe for humans.

Toyota Research Institute has deployed humanoid robotics platforms for remote inspection and manipulation tasks in similar settings. Toyota’s systems rely on multimodal perception and human-in-the-loop control, the latter reinforcing an industry trend: early deployments prioritise reliability and traceability, so require human oversight.

Hexagon’s AEON aligns closely with this trend. Its emphasis on sensor fusion and spatial intelligence is relevant for inspection and quality assurance tasks, where precise understanding of physical environments is more valuable than the conversational abilities most associated with everyday use of AIs.

Cloud platforms central to robotics strategy

A defining feature of the Microsoft-Hexagon partnership is the use of cloud infrastructure in the scaling of humanoid robots. Training, updating, and monitoring physical AI systems generate large quantities of data, including video, force feedback from on-device sensors, spatial mapping (such as that derived from LIDAR), and operational telemetry. Managing this data locally has historically been a bottleneck, due to storage and processing constraints.

By using platforms like Azure and Azure IoT Operations, plus real-time intelligence services in the cloud, humanoid robots can be trained as fleets rather than as isolated units. This opens up possibilities for shared learning, iterative improvement, and greater consistency. For board-level buyers, these IT architecture shifts mean humanoid robots become viable entities that can be treated, in terms of IT requirements, more like enterprise software than machinery.

Labour shortages drive adoption

The demographic trends in manufacturing, logistics, and asset-intensive industries are increasingly unfavourable. Ageing workforces, declining interest in manual roles, and persistent skills shortages create gaps that conventional automation cannot fully address – at least, not without rebuilding entire facilities to be more suited to a robotic workforce. Fixed robotic systems excel in repetitive, predictable tasks but struggle in dynamic, human environments.

Humanoid robots occupy a middle ground. Rather than requiring workflows to be redesigned around them, they can stabilise operations where human availability is uncertain. Case studies show early value in night shifts, periods of peak demand, and tasks deemed too hazardous for humans.

What boards should evaluate before investing

For decision-makers considering investment in next-generation workplace robots, several issues to note have emerged from existing, real-world deployments:

Task specificity matters more than general intelligence, with the more successful pilots focusing on well-defined activities. Data governance and security must remain front and centre when robots are put into play, especially when it’s necessary to connect them to cloud platforms.

At a human level, workforce integration can be more challenging than sourcing, installing, and running the technology itself. Yet human oversight remains essential at this stage of AI maturity, for safety and regulatory acceptance.

A measured but irreversible shift

Humanoid robots won’t replace the human workforce, but a growing body of evidence from live deployments and prototyping shows such devices are moving into the workplace. Humanoid, AI-powered robots can already perform economically valuable tasks, and integration with existing industrial systems is increasingly practical. For boards with the appetite to invest, the question may be less whether the technology works and more when competitors will deploy it responsibly and at scale.

(Image source: Hexagon Robotics)

The post From cloud to factory – humanoid robots coming to workplaces appeared first on AI News.

AI in Human Resources: the real operational impact
https://www.artificialintelligence-news.com/news/hr-ai-in-human-resources-the-real-operational-impact/ (Thu, 18 Dec 2025)

Human Resources is an area in many organisations where AI can have significant operational impact. The technology is now being embedded into day-to-day operations, in activities like answering employees’ questions and supporting training. The clearest impact appears where organisations can measure the tech’s outcomes, typically in time saved and the numbers of queries successfully resolved.

Fewer tickets, more first-time answers

IBM’s internal virtual agent, AskHR, was built to handle employee queries and automate routine HR actions. IBM says AskHR automates more than 80 internal HR tasks and has engaged in over two million conversations with employees every year. It uses a two-tier approach, where AI resolves routine issues, and human advisers handle more complex cases.

The company reports some operational benefits: a 94% success rate in answering commonly-asked questions, a 75% reduction in the number of lodged support tickets since 2016, and – the headline figure – a 40% reduction in HR operational costs over four years.

It’s important to note that IBM’s AI does not simply route queries to existing materials. The automation is capable of completing the transaction itself, reducing the need to hand off queries to human staff.
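The two-tier pattern described above can be pictured as a small triage function. This is a minimal, hypothetical sketch of the general pattern, not IBM's implementation: the task names, confidence score, and threshold are invented for illustration.

```python
# Hypothetical two-tier HR helpdesk, loosely modelled on the pattern IBM
# describes for AskHR: an automated tier completes routine transactions,
# and anything it cannot handle confidently escalates to a human adviser.
ROUTINE_TASKS = {"update_address", "request_leave", "payslip_copy"}

def handle_query(task: str, confidence: float, threshold: float = 0.9) -> str:
    """Tier 1 completes routine, high-confidence tasks; everything else escalates."""
    if task in ROUTINE_TASKS and confidence >= threshold:
        return f"completed:{task}"   # AI completes the transaction itself
    return f"escalated:{task}"       # Tier 2: a human adviser takes over
```

The design choice worth noting is that escalation is the default: only a known routine task with high model confidence is completed automatically, which is one way to keep humans handling the complex or ambiguous cases.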

Recruitment and onboarding efficiencies

Vodafone’s 2024 annual report describes an internal platform it calls ‘Grow with Vodafone‘. The company says it has reduced time-to-hire from 50 days to 48 days, made the job application process simpler, and added personalised skills-based job recommendations for applicants. That has led to a 78% reduction in questions posed by potential applicants and those onboarding into new roles.

The company also has a global headcount planning tool that reduces the manual work needed to assemble necessary data, plus there’s an AI-powered global HR ‘data lake’ that standardises dashboards and reduces the need for manual reporting – stakeholders can dive into the data themselves and surface the insights they need.

Training and internal support

Big employers face challenges getting new staff up to speed quickly: so-called time-to-competence. Bank of America’s newsroom describes how its onboarding and professional development organisation, ‘The Academy’, uses AI for interactive coaching, with employees completing over a million simulations in a year.

The organisation operates ‘Erica for Employees‘, an internal assistant that handles topics like health benefits, payroll, and tax forms for employees. It’s used by over 90% of employees. For the IT service desk, having Erica triage situations is impactful, with a reduction of more than 50% in incoming calls.

Such tools reduce hidden work (searching, repeating questions, waiting for answers) and its associated costs. Plus, a shorter time-to-competence is especially valuable in regulated and customer-facing environments.

Frontline work at big employers

Walmart’s June 2025 corporate update describes rolling out AI tools via its associates’ app, including a workflow tool that prioritises and recommends work tasks. It was early days at the time of publication, but based on early results, Walmart says team leads and store managers are beginning to see shift planning times fall from 90 to 30 minutes.

As an employer of a diverse workforce, the app’s real-time translation across 44 languages is invaluable. The company is currently upgrading its associates’ software with AI to turn its internal process guides into multilingual instructions. More than 900,000 employees use the system every week, and more than three million queries per day go through the associates’ conversational AI platform.

Workforce efficiency at Walmart’s scale is impressive, but businesses of every size stand to gain from giving employees faster guidance and better support across multilingual teams. Beyond the immediate cost savings, simple and effective software of this type improves retention, safety standards, and service quality.

Governance and human safety nets

HSBC’s publication “Transforming HSBC with AI” describes over 600 AI use cases in operation at the multinational bank, and says colleagues have access to an LLM-based productivity tool for tasks like translation and document analysis. In an environment where governance and data security are paramount, the bank ensures that all automated systems abide by existing codes, enforced by dedicated AI Review Councils and AI lifecycle management frameworks.

In HR this matters regardless of vertical. Governance decisions should shape what can be automated, how people data is handled, and how accountability is maintained over the long term. HR data is often personally identifiable, so the highest standards – and their ongoing maintenance – are critically important.

Operational trade-offs

Operational impact is about trust as well as speed and efficiency. A self-service agent that answers confidently but incorrectly creates rework and escalations, and erodes confidence in the tool. A pragmatic pattern for reducing risk is to keep humans in the loop, especially for complex decisions.

IBM’s two-tier model, Vodafone’s tailored job recommendations, and Walmart and HSBC’s data governance and security all bring oversight. Hybrid service models, data discipline, and oversight are what enable AI to scale without undermining employee confidence or fairness.
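A minimal sketch of that human-in-the-loop pattern, assuming the model exposes a confidence score. The threshold, names, and `Answer` shape are illustrative assumptions, not any company’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # calibrated score in 0..1 (assumed available)

def route(answer: Answer, threshold: float = 0.8) -> str:
    """Serve confident answers directly; escalate the rest to a person."""
    if answer.confidence >= threshold:
        return f"auto: {answer.text}"
    return "escalated to human agent"

# A routine query is answered automatically; an ambiguous one is escalated.
print(route(Answer("Payroll runs on the 25th.", 0.95)))
print(route(Answer("Possibly eligible, unclear.", 0.40)))
```

The design choice is that the system never guesses on low-confidence or high-impact questions; it hands them to a human, which is what keeps a wrong-but-confident agent from creating rework.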

Where this is heading

Across these large enterprises, the pattern of successful operational deployment in HR has been consistent. Each started with high-volume questions and repetitive transactions, expanded into hiring and training, and then pushed AI to the frontline where it saves time. The biggest gains come when AI turns HR from a service queue into a faster, more consistent function.

(Image source: “Business Meetings” by thinkpanama is licensed under CC BY-NC 2.0.)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI in Human Resources: the real operational impact appeared first on AI News.

Wall Street’s AI gains are here — banks plan for fewer people https://www.artificialintelligence-news.com/news/wall-street-ai-gains-are-here-banks-plan-for-fewer-people/ Thu, 18 Dec 2025 11:00:00 +0000 https://www.artificialintelligence-news.com/?p=111374 By December 2025, AI adoption on Wall Street had moved past experiments inside large US banks and into everyday operations. Speaking at a Goldman Sachs financial-services conference in New York on 9 December, bank executives described AI—particularly generative AI—as an operational upgrade already lifting productivity across engineering, operations, and customer service. The same discussion also […]

By December 2025, AI adoption on Wall Street had moved past experiments inside large US banks and into everyday operations. Speaking at a Goldman Sachs financial-services conference in New York on 9 December, bank executives described AI—particularly generative AI—as an operational upgrade already lifting productivity across engineering, operations, and customer service.

The same discussion also surfaced a harder reality. If banks can produce more with the same teams, some roles may no longer be required at current levels once demand stabilises.

How Wall Street banks say AI is delivering results today

JPMorgan: operational gains begin to compound

Marianne Lake, chief executive of consumer and community banking at JPMorgan, said productivity in areas using AI has risen to around 6%, up from roughly 3% before deployment. She added that operations roles could eventually see productivity gains of 40% to 50% as AI becomes part of routine work.

Those gains rest on deliberate choices rather than broad experimentation. JPMorgan has focused on secure internal access to large language models, targeted changes to workflows, and tight controls on how data is used. The bank has described its internal “LLM Suite” as a controlled setting where staff can draft and summarise content using large language models.

Wells Fargo: output rising ahead of staffing changes

Wells Fargo CEO Charlie Scharf said the bank has not reduced headcount because of AI so far, but noted that it is “getting a lot more done.” He said management expects to find areas where fewer people are needed as productivity improves.

In comments reported the same day, Scharf said the bank’s internal budgets already point to a smaller workforce by 2026, even before factoring in AI’s full impact. He also flagged higher severance costs, suggesting preparations for future adjustments are under way.

PNC: AI speeds up a long-running shift

PNC CEO Bill Demchak positioned AI as an accelerator rather than a new direction. He said the bank’s headcount has stayed largely flat for about a decade, even as the business expanded. That stability, he said, came from automation and branch optimisation, with AI likely to push the trend further.

Citigroup: gains in software and customer support

Citi’s incoming CFO Gonzalo Luchetti said the bank has recorded a 9% productivity improvement in software development. That mirrors a broader pattern across large firms adopting AI copilots to support coding work.

He also pointed to two customer service areas where AI is helping: improving self-service so fewer calls reach agents, and supporting agents in real time when customers do need to speak with a person.

Goldman Sachs: workflow changes paired with hiring restraint

According to Reuters, Goldman Sachs’ internal “OneGS 3.0” programme has focused on using AI to improve sales processes and client onboarding. It has also targeted process-heavy functions such as lending workflows, regulatory reporting, and vendor management.

These changes are unfolding alongside job cuts and a slower pace of hiring, linking workflow redesign directly to staffing decisions.

Where Wall Street banks see the earliest AI productivity gains

Across banks, the clearest gains are showing up in work that relies heavily on documents, follows repeatable steps, and operates within defined rules. Generative AI can shorten the time needed to search for information, summarise material, draft content, and move work through approval chains—especially when paired with structured processes and human checks.

Common areas seeing early impact include:

  • Operations: drafting responses, summarising cases, and resolving exceptions more quickly
  • Software development: generating code, writing tests, refactoring, and producing documentation
  • Customer service: stronger self-service combined with real-time support for agents
  • Sales support and onboarding: pulling data from documents, filling forms, and speeding up client setup
  • Regulatory reporting: assembling narratives and evidence faster, under strict review and controls

Why governance shapes the pace of adoption

For banks, enthusiasm is not the main constraint. Control is. US regulators have long required strong oversight of models, and those expectations extend to AI systems. Guidance such as the Federal Reserve and OCC’s SR 11-7 sets standards for model development, validation, and ongoing review. A 2025 report from the US Government Accountability Office noted that existing model risk management principles already apply to AI, including testing and independent oversight.

In practice, this pushes banks toward designs that can be examined and traced. AI use is often limited in how independently it can act. Prompts and outputs are logged, performance is monitored for drift, and humans remain responsible for high-impact decisions such as lending, dispute handling, and official reporting.
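The controls described here (logged prompts and outputs, ongoing monitoring, human accountability) can be sketched as a thin wrapper around any model call. This is a hypothetical illustration, not any bank’s system; the function names and record fields are assumptions:

```python
import time

audit_log = []  # in production this would be an append-only, access-controlled store

def logged_call(model_fn, prompt: str, user: str) -> str:
    """Call a model and record an auditable trace of the exchange."""
    output = model_fn(prompt)
    audit_log.append({
        "ts": time.time(),   # when the call happened
        "user": user,        # who is accountable for the request
        "prompt": prompt,    # what was asked
        "output": output,    # what the model produced
    })
    return output

def echo_model(prompt: str) -> str:
    """Stand-in for a real model endpoint."""
    return f"summary of: {prompt}"

result = logged_call(echo_model, "Summarise case #123", user="analyst7")
```

Because every exchange is captured with its requester, reviewers can later examine exactly what the model was asked and what it returned, which is the traceability regulators expect.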

Productivity rises, but employment questions remain

The comments from bank leaders point to a phased shift. The first phase looks like stable headcount paired with higher output as AI tools spread across teams. The second phase begins once those gains become consistent enough to influence staffing plans, through attrition, role changes, or targeted cuts.

Signals from Wells Fargo around 2026 headcount planning and severance costs suggest some banks are approaching that second stage.

At a broader level, institutions such as the International Monetary Fund have warned that AI could affect a large share of jobs worldwide, with different mixes of automation and augmentation depending on role and region. The World Economic Forum’s Future of Jobs Report 2025 also projects substantial job movement as companies adopt AI and adjust skill needs.

What AI means for Wall Street bank strategy beyond 2025

Banks that gain the most from AI are likely to focus on three areas at once: redesigning workflows rather than layering on chat tools, building strong data foundations, and putting governance in place that supports speed without eroding trust.

Research firms argue the financial stakes are high. McKinsey estimates that generative AI could deliver between $200 billion and $340 billion in annual value for the banking sector, largely through productivity improvements.

The open question is no longer whether AI can deliver results in banking. It is how quickly banks can make those gains routine while preserving audit trails, security, and customer safeguards—and how they manage the workforce changes that follow.

(Photo by Lo Lo)

See also: BNP Paribas introduces AI tool for investment banking


The post Wall Street’s AI gains are here — banks plan for fewer people appeared first on AI News.

How the Royal Navy is using AI to cut its recruitment workload https://www.artificialintelligence-news.com/news/how-the-royal-navy-is-using-ai-to-cut-recruitment-workload/ Thu, 20 Nov 2025 17:41:37 +0000 https://www.artificialintelligence-news.com/?p=110813 The Royal Navy is handing the first line of its recruitment operations to a real-time AI avatar called Atlas. Atlas is powered by a large language model and has been deployed to field questions from prospective submariners. The deployment shows how AI can support a shift from slow text-based triage to fast and immersive automated […]

The Royal Navy is handing the first line of its recruitment operations to a real-time AI avatar called Atlas.

Atlas is powered by a large language model and has been deployed to field questions from prospective submariners. The deployment shows how AI can support a shift from slow text-based triage to fast and immersive automated support.

Public sector IT projects often suffer from bloated timelines and vague deliverables, but the Navy’s latest deployment is grounded in hard operational metrics. The launch of Atlas follows a specific business case: the need to filter and support candidates for one of the service’s most demanding roles while reducing the administrative burden on human staff.

The data behind the deployment

The Royal Navy, working with WPP Media’s Wavemaker, has spent years refining its automated entry points. Before the avatar, there was a text-based assistant.

That initial system, which was recently upgraded to a full LLM and retrieval-augmented generation (RAG) solution, proved the efficacy of the model. It fielded over 460,000 queries from more than 165,000 users and logged a 93 percent satisfaction rate.

More importantly for the bottom line, the text-based system slashed the workload for live-agent teams by 76 percent. It also generated 89,000 expressions of interest, proving that automation could widen the funnel without overwhelming the recruiting officers. Atlas is effectively the visual evolution of those successes, designed to arrest the attention of a younger demographic that engages differently with digital channels.
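That LLM-plus-RAG design follows a common pattern: retrieve the most relevant passages from a vetted knowledge base, then pass them to the model as grounding context. Below is a minimal sketch of the retrieval step, using crude word-overlap scoring in place of real embeddings; the documents and scoring function are illustrative assumptions, not the Navy’s implementation:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, stripped of punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Crude relevance: shared words (real systems use embedding similarity)."""
    return len(tokens(query) & tokens(doc))

knowledge_base = [
    "Submariners serve patrols of around three months underwater.",
    "The Royal Navy offers engineering apprenticeships.",
    "Fitness standards vary by role and age group.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the top-k documents to include in the model's prompt."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

context = retrieve("How long are submarine patrols?", knowledge_base)
prompt = f"Answer using only this context:\n{context[0]}\nQuestion: How long are submarine patrols?"
```

Grounding the model in retrieved passages is what lets an assistant like this answer from an approved knowledge base rather than from the model’s open-ended training data.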

Under the hood of the AI recruitment avatar

The architecture relies on a multi-vendor ecosystem rather than a single-source solution. Wavemaker led the strategic direction and conversational design, ensuring the “brain” of the operation was trained on the correct knowledge base. Voxly Digital built the front and back end, supported by Great State, the Navy’s digital agency.

Functionally, Atlas does more than recite policy. It uses a conversational interface that is multimedia-enabled. If a candidate asks about life on a submarine – a notorious pain point for recruitment conversion due to the unique lifestyle – Atlas can respond with spoken answers, on-screen captions, and relevant videos or quotes from serving personnel.

The goal is to keep the user in the ecosystem longer. Atlas will be trialled at events and linked directly to the NavyReady app and the Enterprise Customer Relationship Management (e-CRM) programme, ensuring data continuity.

Augmentation, not replacement

Despite the high degree of automation, the Royal Navy frames this AI avatar as a workforce augmentation tool for recruitment.

Paul Colley, Head of Marketing at the Royal Navy, was explicit about the boundaries of the technology: “When it comes to AI, our focus is on how we can use it responsibly and strategically to better arm the teams we have. It’s not about replacing human support. It’s about giving the best support we can wherever and whenever candidates need it.

“We’re excited to launch Atlas and see if it can provide a new, different kind of support for those who would be considering the submarine service but need some more time to explore and discuss.”

Caroline Scott, Head of e-CRM and Innovation, added: “By trialling new interfaces and adopting a test-and-learn mindset, the Royal Navy can be better equipped to understand how these technologies can transform the way people connect, apply for roles, and engage with us, while also creating more meaningful digital experiences.” 

For business leaders, the Atlas pilot illustrates a mature approach to generative AI adoption. The Navy didn’t start with the avatar; they started with the data and a simpler text interface. Only after securing a 76 percent efficiency gain did they scale up to the more complex and resource-intensive visual medium.

The end result is an AI-assisted recruitment system that filters low-value queries at scale, allowing human recruiters to focus on the serious candidates.

See also: Lightweight LLM powers Japanese enterprise AI deployments



The post How the Royal Navy is using AI to cut its recruitment workload appeared first on AI News.

Are AI chatbots really changing the world of work? https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/ Fri, 02 May 2025 09:54:32 +0000 https://www.artificialintelligence-news.com/?p=106266 We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now. Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far. Researchers Anders Humlum (University of Chicago) […]

We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
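The comparison behind these numbers (users versus non-users, before versus after ChatGPT’s arrival) is a difference-in-differences design. A schematic version with invented figures, not the study’s data:

```python
# Average outcomes for each group and period (invented figures for illustration).
users_before, users_after = 10.00, 10.05        # chatbot users
nonusers_before, nonusers_after = 9.80, 9.85    # comparison group

# Difference-in-differences: the users' change minus the non-users' change.
# Subtracting the non-users' trend removes changes that affected everyone.
did = (users_after - users_before) - (nonusers_after - nonusers_before)
print(f"estimated effect: {did:.3f}")  # near zero, mirroring the "precise zeros"
```

When both groups move by the same amount over time, the estimated effect of the tool is zero even though everyone’s outcomes changed, which is exactly the pattern the researchers report.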

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools


The post Are AI chatbots really changing the world of work? appeared first on AI News.

AI Action Summit: Leaders call for unity and equitable development https://www.artificialintelligence-news.com/news/ai-action-summit-leaders-call-for-unity-equitable-development/ Mon, 10 Feb 2025 13:07:09 +0000 https://www.artificialintelligence-news.com/?p=104258 As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI. Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish […]

As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI.

Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.  

AI Action Summit is ‘a wake-up call’

French President Emmanuel Macron has described the summit as “a wake-up call for Europe,” emphasising the need for collective action in the face of AI’s transformative potential. This comes as the US has committed $500 billion to AI infrastructure.

The UK, meanwhile, has unveiled its Opportunities Action Plan ahead of the full implementation of the UK AI Act. Ahead of the AI Summit, UK tech minister Peter Kyle told The Guardian the AI race must be led by “western, liberal, democratic” countries.

These developments signal a renewed global dedication to harnessing AI’s capabilities while addressing its risks.  

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI’s potential and its practical implementation.

“Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape,” he said.  

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks.

“Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance,” he explained.

“With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes.”  

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit’s agenda.

“First, AI governance needs urgent clarity,” he said. “We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won’t work; we need unity to build trust and drive long-term progress.”

Cox also emphasised the need for a future-ready workforce.

“Employers and governments must invest in upskilling the workforce for an AI-driven world,” he said. “This isn’t just about automation replacing jobs; it’s about creating opportunities through education and training that genuinely prepare people for the future of work.”  

Finally, Cox called for democratising AI’s benefits.

“AI must be fair and democratic both now and in the future,” he said. “The benefits can’t be limited to a select few. We must ensure that AI’s power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive.”  

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University’s Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life.

“For us in civil society, it’s essential that we bring imaginaries about AI into the everyday,” she said. “From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue.”  

Neff also pushed back against big tech’s dominance in AI development.

“I’ll be taking this spirit of public interest into the Summit and pushing back against big tech’s push for hyperscaling. Thinking about AI as something we’re building together – like we do our cities and local communities – puts us all in a better place.”

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems.

“Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems,” he said.

Leslie called for a renewed focus on public interest AI.

“The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm,” he explained. “A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding.”  

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations.

“This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels,” he said. “Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods.”  

Systematic evaluation  

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems.

“AI has the potential to make public service more efficient and accessible,” she said. “But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need.”  

Liakata called for a flexible and systematic approach to AI evaluation.

“We must remain agile and listen to the voices of all stakeholders,” she said. “This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI.”  

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts Cancer Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare.

“The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation,” he said. “I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole.”  

Singh called for clear frameworks for international cooperation.

“A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment,” he said.  

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI’s benefits are shared equitably.

As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future.

(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI Action Summit: Leaders call for unity and equitable development appeared first on AI News.

]]>
Understanding AI’s impact on the workforce https://www.artificialintelligence-news.com/news/understanding-ai-impact-on-the-workforce/ Fri, 08 Nov 2024 10:11:03 +0000 https://www.artificialintelligence-news.com/?p=16459 The Tony Blair Institute (TBI) has examined AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead. “Technology has a long history of profoundly reshaping the world of work,” the report begins. From the agricultural revolution to the digital age, each […]

The post Understanding AI’s impact on the workforce appeared first on AI News.

]]>
The Tony Blair Institute (TBI) has published a report examining AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.

“Technology has a long history of profoundly reshaping the world of work,” the report begins.

From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.

Economic opportunities

The TBI report estimates that AI, when fully adopted by UK firms, could significantly increase productivity. It suggests that AI could save “almost a quarter of private-sector workforce time,” equivalent to the annual output of 6 million workers.

Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.

The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.

While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries. 

The report expects that these job losses can be offset by new job creation. Technology has historically spurred new employment opportunities, as innovation leads to the development of new products and services.

Shaping future generations

AI’s potential extends into education, where it could assist both teachers and students.

The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.

Health and wellbeing

Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing welfare costs.

The report highlights AI’s role in accelerating medical research, enabling preventive healthcare, and helping people with disabilities re-enter the workforce.

Workplace transformation

The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”

To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.

The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.

Preparing for an AI-powered future

In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.

The report concludes that while AI presents risks, the potential gains are too significant to ignore.

Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.

(Photo by Mimi Thian)

See also: Anthropic urges AI regulation to avoid catastrophes


The post Understanding AI’s impact on the workforce appeared first on AI News.

]]>
JPMorgan CEO: AI will be used for ‘every single process’ https://www.artificialintelligence-news.com/news/jpmorgan-ceo-ai-will-be-used-for-every-single-process/ Tue, 03 Oct 2023 14:20:44 +0000 https://www.artificialintelligence-news.com/?p=13664 In a recent Bloomberg interview, JPMorgan CEO Jamie Dimon unveiled his AI-driven vision for the financial industry. Dimon expressed his belief that AI has the potential to revolutionise every aspect of JPMorgan’s operations, from trading and hedging to research and error detection. He described AI as a “living, breathing thing,” capable of transforming traditional processes […]

The post JPMorgan CEO: AI will be used for ‘every single process’ appeared first on AI News.

]]>
In a recent Bloomberg interview, JPMorgan CEO Jamie Dimon unveiled his AI-driven vision for the financial industry.

Dimon expressed his belief that AI has the potential to revolutionise every aspect of JPMorgan’s operations, from trading and hedging to research and error detection. He described AI as a “living, breathing thing,” capable of transforming traditional processes and augmenting human capabilities.

Dimon’s enthusiasm for AI is grounded in its current applications within JPMorgan. He revealed that AI is already extensively used in equity hedging, idea generation, and large language models.

Despite the ongoing debate about the impact of AI on employment, Dimon remains pragmatic. He acknowledged that AI will replace certain jobs, but emphasised that technology has historically led to job displacement and that this evolution is a natural part of progress.

One of Dimon’s main concerns about AI technology revolves around its potential misuse by malicious actors, especially in cyberspace. He stressed the importance of establishing legal safeguards to prevent the misuse of AI.

Despite these concerns, Dimon remains optimistic about the positive impact of AI on the workforce and society. He highlighted the benefits of other technological breakthroughs, many of which can be further enhanced using AI.

“Your children will live to 100 and not have cancer because of technology, and they’ll probably be working three days a week. So technology’s done unbelievable things for mankind,” said Dimon.

Dimon outlined JPMorgan’s proactive approach to potential job displacement caused by AI implementation. He expressed the firm’s commitment to supporting employees who might be affected, stating that they plan to redeploy displaced workers in local branches or different functions within the company.

Dimon’s forward-thinking approach highlights the transformative power of AI in shaping the future of finance and other industries. However, it also underlines the need to consider how to minimise negative impacts such as job displacement.

(Image Credit: Stuart Isett/Fortune Global Forum under CC BY-NC-ND 2.0 DEED license)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime


The post JPMorgan CEO: AI will be used for ‘every single process’ appeared first on AI News.

]]>
Universities want to ensure staff and students are ‘AI-literate’ https://www.artificialintelligence-news.com/news/universities-ensure-staff-and-students-ai-literate/ Tue, 04 Jul 2023 12:48:07 +0000 https://www.artificialintelligence-news.com/?p=13251 In a joint statement published today, the 24 Vice Chancellors of the Russell Group of universities have pledged their commitment to ensuring the ethical and responsible use of generative AI and new technologies like ChatGPT. Universities are increasingly recognising the importance of equipping their students and staff with AI literacy skills to leverage the opportunities […]

The post Universities want to ensure staff and students are ‘AI-literate’ appeared first on AI News.

]]>
In a joint statement published today, the 24 Vice Chancellors of the Russell Group of universities have pledged their commitment to ensuring the ethical and responsible use of generative AI and new technologies like ChatGPT.

Universities are increasingly recognising the importance of equipping their students and staff with AI literacy skills to leverage the opportunities presented by technological advancements in teaching and learning. 

Sheila Flavell CBE, Chief Operating Officer at FDM Group, commented: “With businesses crying out for new hires equipped with the latest tech skills and analytics capabilities, providing students with a fully rounded education and qualifications in this area is critical for building a dynamic workforce, fit for the future ahead.”

Developed in collaboration with AI and educational experts, the new principles acknowledge both the risks and opportunities associated with generative AI. The statement emphasises the role of Russell Group universities in cultivating AI leaders who can navigate an AI-enabled world effectively and responsibly.

The five principles outlined in the joint statement are as follows:

  1. AI Literacy Support: Universities will support students and staff in developing AI literacy skills, enabling them to comprehend and engage with AI effectively.
  2. Faculty Training: Staff members will be equipped with the necessary knowledge and skills to assist students in utilising generative AI tools appropriately and effectively within their learning experiences.
  3. Ethical Integration: Universities will adapt their teaching and assessment methods to incorporate the ethical use of generative AI, ensuring equal access to its benefits.
  4. Academic Rigour: Academic integrity and rigour will be upheld as universities embrace the transformative power of AI in education.
  5. Collaborative Best Practices: Universities will collaborate and share best practices as the technology and its applications in education evolve.

This announcement closely follows the UK Government’s launch of a consultation on the use of generative AI in education in England. By issuing this joint statement, the Russell Group universities aim to foster a shared understanding of the values and considerations surrounding AI in education.

Ross Sleight, Chief Strategy Officer, EMEA at CI&T, said:

“Education is still yet to be transformed by AI. It’s centuries old in how it’s done, but that doesn’t mean change isn’t on the horizon.

“Exams and essays can risk regurgitation over critical thinking. Institutions must ask themselves, what is the most effective way to facilitate and consolidate knowledge, and can new technology better support this?

“Technology such as ChatGPT is here to stay, and while it does pose challenges for the education sector, fighting against it is a losing battle. Institutions need to work with it and use it to their advantage. Great innovation can come from it.”

Dr Tim Bradshaw, Chief Executive of the Russell Group, highlighted the significance of AI breakthroughs in reshaping work dynamics and stressed the importance of preparing students with the skills required for successful careers. Furthermore, he emphasised the need to support university staff as they explore the potential of AI to enhance teaching methods and engage students effectively.

As the field of AI continues to advance rapidly, the joint statement of principles serves as a testament to the commitment of Russell Group universities to harnessing the transformative opportunities presented by AI.

John Kirk, Group Deputy CEO at ITG, commented: “The reality is that this technology is here to stay and deployed correctly can enhance our creative industries and help businesses transform marketing and customer interactions for the long term.

“With the digital skills shortfall still causing headaches for many companies, having systems in place to better understand such a high-impact technology is a step in the right direction.”

By prioritising the welfare of students and staff and safeguarding the integrity of education, the principles will help to ensure that AI adoption in universities is guided by clear and understood values.

Prof Michael Grove, deputy pro-vice chancellor (education policy and standards) at the University of Birmingham, said: “The rapid rise of generative AI will mean we need to continually review and re-evaluate our assessment practices, but we should view this as an opportunity rather than a threat.”

You can find the full principles on the use of AI in education here (PDF).

(Photo by Suad Kamardeen on Unsplash)

See also: UK will host global AI summit to address potential risks


The post Universities want to ensure staff and students are ‘AI-literate’ appeared first on AI News.

]]>