Computer Vision - AI News
https://www.artificialintelligence-news.com/categories/how-it-works/computer-vision/
Fri, 09 Jan 2026 13:15:33 +0000

From cloud to factory – humanoid robots coming to workplaces
https://www.artificialintelligence-news.com/news/from-cloud-to-factory-humanoid-robots-coming-to-workplaces/
Fri, 09 Jan 2026 13:06:00 +0000

The Microsoft-Hexagon partnership may mark a turning point in the acceptance of humanoid robots in the workplace, as prototypes become operational realities.

The partnership announced this week between Microsoft and Hexagon Robotics marks an inflection point in the commercialisation of humanoid, AI-powered robots for industrial environments. The two companies will combine Microsoft’s cloud and AI infrastructure with Hexagon’s expertise in robotics, sensors, and spatial intelligence to advance the deployment of physical AI systems in real-world settings.

At the centre of the collaboration is AEON, Hexagon’s industrial humanoid robot, a device designed to operate autonomously in environments like factories, logistics hubs, engineering plants, and inspection sites.

The partnership will focus on multimodal AI training, imitation learning, real-time data management, and integration with existing industrial systems. Initial target sectors include automotive, aerospace, manufacturing, and logistics, the companies say: industries where labour shortages and operational complexity are already constraining growth.

The announcement is a sign of a maturing ecosystem: the convergence of cloud platforms, physical AI, and robotics engineering is making humanoid automation commercially viable.

Humanoid robots out of the research lab

While humanoid robots have long been developed at research institutions and demonstrated proudly at technology events, the last five years have seen a move to practical deployment in real-world working environments. The main change has been the combination of improved perception, advances in reinforcement and imitation learning, and the availability of scalable cloud infrastructure.

One of the most visible examples is Agility Robotics’ Digit, a bipedal humanoid robot designed for logistics and warehouse operations. Digit has been piloted in live environments by companies like Amazon, where it performs material-handling tasks including tote movement and last-metre logistics. Such deployments tend to focus on augmenting human workers rather than replacing them, with Digit handling more physically demanding tasks.

Similarly, Tesla’s Optimus programme has moved out of the phase where concept videos were all that existed, and is now undergoing factory trials. Optimus robots are being tested on structured tasks like part handling and equipment transport inside Tesla’s automotive manufacturing facilities. While still limited in scope, these pilots demonstrate the pattern of humanoid-like machines chosen over less anthropomorphic form-factors so they can operate in human-designed and -populated spaces.

Inspection, maintenance, and hazardous environments

Industrial inspection is emerging as one of the earliest commercially viable use cases for humanoid and quasi-humanoid robots. Boston Dynamics’ Atlas, while not yet a general-purpose commercial product, has been used in live industrial trials for inspection and disaster response. It can navigate uneven terrain, climb stairs, and manipulate tools in places considered unsafe for humans.

Toyota Research Institute has deployed humanoid robotics platforms for remote inspection and manipulation tasks in similar settings. Toyota’s systems rely on multimodal perception and human-in-the-loop control, the latter reinforcing an industry trend: early deployments prioritise reliability and traceability, and therefore require human oversight.

Hexagon’s AEON aligns closely with this trend. Its emphasis on sensor fusion and spatial intelligence is relevant for inspection and quality assurance tasks, where precise understanding of physical environments is more valuable than the conversational abilities most associated with everyday use of AIs.

Cloud platforms central to robotics strategy

A defining feature of the Microsoft-Hexagon partnership is the use of cloud infrastructure in the scaling of humanoid robots. Training, updating, and monitoring physical AI systems generates large quantities of data, including video, force feedback from on-device sensors, spatial mapping (such as that derived from LIDAR), and operational telemetry. Managing this data locally has historically been a bottleneck, due to storage and processing constraints.

By using platforms like Azure and Azure IoT Operations, plus real-time intelligence services in the cloud, humanoid robots can be trained fleet-wide rather than as isolated units. This opens up shared learning, iterative improvement, and greater consistency across deployments. For board-level buyers, these IT architecture shifts mean humanoid robots can be treated, in terms of IT requirements, more like enterprise software than machinery.

Labour shortages drive adoption

The demographic trends in manufacturing, logistics, and asset-intensive industries are increasingly unfavourable. Ageing workforces, declining interest in manual roles, and persistent skills shortages create gaps that conventional automation cannot fully address, at least not without rebuilding entire facilities to suit a robotic workforce. Fixed robotic systems excel at repetitive, predictable tasks but struggle in dynamic, human environments.

Humanoid robots occupy a middle ground. Rather than replacing entire workflows, they can stabilise operations where human availability is uncertain. Case studies show early value in night shifts, periods of peak demand, and tasks deemed too hazardous for humans.

What boards should evaluate before investing

For decision-makers considering investment in next-generation workplace robots, several issues to note have emerged from existing, real-world deployments:

Task specificity matters more than general intelligence, with the more successful pilots focusing on well-defined activities. Data governance and security must remain front and centre when robots are put into play, especially when it’s necessary to connect them to cloud platforms.

At a human level, workforce integration can be more challenging than sourcing, installing, and running the technology itself. And human oversight remains essential at this stage of AI maturity, both for safety and for regulatory acceptance.

A measured but irreversible shift

Humanoid robots won’t replace the human workforce, but a growing body of evidence from live deployments and prototyping shows such devices are moving into the workplace. Humanoid, AI-powered robots can now perform economically valuable tasks, and integration with existing industrial systems is increasingly practical. For boards with the appetite to invest, the question is when, not whether, competitors will deploy the technology responsibly and at scale.

(Image source: Hexagon Robotics)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

ClinCheck Live brings AI planning to Invisalign dental treatments
https://www.artificialintelligence-news.com/news/clincheck-live-brings-ai-planning-to-invisalign-dental-treatments/
Tue, 04 Nov 2025 11:37:13 +0000

Align Technology, a medical device company that designs, manufactures, and sells the Invisalign system of clear aligners, exocad CAD/CAM software, and iTero intra-oral scanners, has unveiled ClinCheck Live Plan, a new feature in its Invisalign digital dental treatment planning.

ClinCheck Live Plan is designed to automate the creation of an initial Invisalign treatment plan that’s ready for a practitioner to review and approve, cutting treatment planning cycles from days down to just 15 minutes. The goal is to help patients get the treatment they need faster.

The latest plan follows Align’s range of new treatment planning tools and automation features launched in recent years, like cloud-based ClinCheck Pro 6.0 software, the automated Invisalign Personalised Plan templates, and the one-page Flex Rx prescription form for simplified workflows. Each new feature has been designed to improve consistency, dentist control, and speed.

ClinCheck Live Plan is built on decades of Align’s data and algorithm development, along with insights from dentists and orthodontists who have treated over 21 million Invisalign patients globally.

Dentists will be able to create and adjust treatment plans and, once an eligible case has been submitted using the Flex Rx system, receive a personalised ClinCheck treatment plan in approximately 15 minutes.

Invisalign specialists can review their patients’ teeth and the planned adjustments while the patient is still present, helping improve service. Once an Invisalign clinician submits a new case with an iTero intra-oral scan and a completed Flex Rx prescription, the ClinCheck Live Plan system generates a 3D treatment plan. Ultimately, a faster process should help clinics operate more efficiently and enhance their patients’ experiences.

Invisalign-trained specialists who currently use the ClinCheck preferences template and Flex Rx form will gain access to ClinCheck Live Plan when it becomes available in their region. A worldwide rollout is set to start in the first quarter of 2026.

(Image source: “Visiting the dentist in SL” by Daniel Voyager is licensed under CC BY 2.0.)


AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way
https://www.artificialintelligence-news.com/news/ai-redaction-that-puts-privacy-first-caseguard-studio-leading-the-way/
Wed, 08 Oct 2025 09:07:44 +0000

Law enforcement, law firms, hospitals, and financial institutions are asked every day to release records, which can contain highly sensitive details – including addresses, social security numbers, medical diagnoses, evidence footage, and children’s identities.

To meet compliance and security requirements, staff spend hundreds of hours manually redacting sensitive information, yet when that process goes wrong, there can be costly consequences. Last year, healthcare company Advanced was fined £6 million for losing patient records that, among other details, contained information about how to gain entry to the homes of 890 care receivers. Even the smallest oversights can create unpleasant headlines and catastrophic fines.

This is the reality of modern data handling: leaks can be catastrophic, and compliance frameworks like GDPR, HIPAA, and FERPA, plus FOIA requests, require more vigilance than manual redaction can provide. What organizations need is not more staff to ensure proper redaction, but tools that achieve it quickly, reliably, and securely.

CaseGuard Studio, a US-based AI redaction and investigation platform, has built software that automates this manual work with 98% accuracy. It can process thousands of files in minutes, handling any file type, including video, audio, documents, and images, with all data kept securely on-premises.

Why Manual Redaction No Longer Works

Redaction is not new, but the tools most people reach for were not built for the complexity of today’s compliance requirements. Adobe Acrobat, for example, offers text redaction but requires manual work on each document. Adobe Premiere’s video editing software requires frame-by-frame subject tracking for video redaction, which is slow and impractical. These solutions provide only limited capability and were never designed for departments that process large volumes of redactions every week.

CaseGuard Studio, by contrast, was purpose-built for just this challenge. It can detect 12 categories of PII (personally identifiable information) in video and images, including faces, license plates, and notepads. It tracks and redacts all PII without needing manual frame-by-frame intervention.

For audio and documents, CaseGuard Studio supports over 30 PII types, like names, phone numbers, and addresses. Custom keywords, phrases, or sentences can be auto-detected and redacted directly from thousands of documents and transcripts, streamlining compliance in ways manual tools can’t match. It transcribes recordings with high accuracy and can translate to and from 100+ languages, so it can redact sensitive terms in multilingual content.
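As a rough illustration of how keyword- and pattern-based text redaction works (a simplified sketch, not CaseGuard’s proprietary models; the patterns, labels, and example text below are hypothetical):

```python
import re

# Illustrative only: a minimal pattern-based redactor. Production tools like
# CaseGuard Studio use trained models; these regexes are simplified examples.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text, custom_keywords=()):
    """Replace matched PII patterns and custom keywords with [REDACTED-<type>]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    for kw in custom_keywords:
        # Escape the keyword so it is matched literally, case-insensitively.
        text = re.sub(re.escape(kw), "[REDACTED-KEYWORD]", text, flags=re.IGNORECASE)
    return text

print(redact("Call 555-123-4567 or email jane@example.com about Project Falcon.",
             custom_keywords=["Project Falcon"]))
```

Real systems pair patterns like these with trained language models, since regexes alone miss context-dependent PII such as personal names.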

What once took days of human labor can now happen in minutes. CaseGuard Studio automates redaction work with 98% accuracy, up to 30 times faster than manual methods, and because it runs fully on-premise, data never leaves the device.

What to Ask When Choosing Redaction Software

For organizations evaluating redaction software, the decision often comes down to a handful of critical questions that determine whether a platform can deliver on both compliance and efficiency. The following questions are central to making the right choice.

  • Can the software handle every file type we work with? From scanned forms and handwritten notes to video, audio, and still images, organizations in sensitive sectors deal with more than PDFs.
  • Is the platform fully automated? If redaction still means blacking out text with a Sharpie or scrubbing video frame by frame, the process is slow and prone to error. Full automation ensures accuracy and frees staff for higher-impact work.
  • Does the software ensure data never leaves your environment? On-premise deployment means sensitive files are processed locally, so nothing is exposed to third-party servers or cloud risks.
  • Does the pricing stay predictable as you scale? Per-file or per-minute pricing quickly becomes unsustainable as workloads grow. Look for a flat subscription with unlimited redaction, so costs stay predictable no matter how much data you process.

Evaluating CaseGuard Studio Against the Four Redaction Essentials

When assessed against these requirements, CaseGuard Studio was the only platform in our evaluation that consistently delivered across all four redaction essentials.

1. Auto-redact files from any source

From text documents and scanned forms to video, audio, images, and even handwriting, redaction has to cover every format where sensitive information might appear. Missing a single identifiable feature, such as a face in a crowd or an un-redacted license plate, can be the difference between full compliance and a lawsuit. CaseGuard Studio automatically detects and redacts sensitive information across all these file types within a single platform.

2. Automated bulk redaction at speed and scale

Thousands of files can be redacted in bulk, turning weeks of manual effort into minutes of processing. CaseGuard Studio handles workloads up to 32x faster than manual methods, with 98% accuracy, giving organizations the speed and scalability to meet growing compliance demands.

3. Your data, your control

CaseGuard Studio runs fully on-premise, within your secure environment, including air-gapped systems that are completely isolated from external networks. This ensures organizations retain full control of their data, with nothing exposed to third-party servers or cloud risks.

4. Unlimited redaction, no pay-per-file fees

Pay-per-file pricing quickly adds up, making every additional redaction more expensive. CaseGuard Studio offers predictable pricing under a flat subscription with unlimited redaction, so costs remain the same no matter how heavy the redaction load is.

Final Thoughts

Over the course of our evaluation, we compared methods and platforms ranging from manual redaction and legacy PDF editors to newer AI-driven tools that have appeared in the last few years. Most delivered partial solutions: some handled written documents well but failed on audio, while others blurred faces in video but weren’t practical to use at scale. Cloud-only options raised sovereignty and compliance concerns that, for many users, would rule them out entirely.

CaseGuard Studio was the only platform that consistently met all four requirements detailed above. It supports the widest range of file types, from body-cam video to scanned or handwritten forms.

Audio and video are probably the most difficult formats to redact, especially at scale. Here, CaseGuard wins our vote with its AI-powered smarts. It runs fully on-premise, keeps sensitive files under organizational control, and its local AI models are refined with each version release.

At a time when many cloud redaction software licensing models drive up costs as workloads grow, CaseGuard’s flat pricing offers a refreshing change — predictable, transparent, and sustainable.

For any organization facing rising compliance demands and ever-larger volumes of sensitive data, CaseGuard Studio is well worth a closer look. Click here to book a consultation.


UK deploys AI to boost Arctic security amid growing threats
https://www.artificialintelligence-news.com/news/uk-deploys-ai-to-boost-arctic-security-amid-growing-threats/
Tue, 27 May 2025 14:39:13 +0000

The UK is deploying AI to keep a watchful eye on Arctic security threats from hostile states amid growing geopolitical tensions. This will be underscored by Foreign Secretary David Lammy during his visit to the region, which kicks off today.

The deployment is seen as a signal of the UK’s commitment to leveraging technology to navigate an increasingly complex global security landscape. For Britain, what unfolds in the territories of two of its closest Arctic neighbours – Norway and Iceland – has direct and profound implications.

The national security of the UK is linked to stability in the High North. The once remote and frozen expanse is changing, and with it, the security calculus for the UK.

Foreign Secretary David Lammy said: “The Arctic is becoming an increasingly important frontier for geopolitical competition and trade, and a key flank for European and UK security. 

“We cannot bolster the UK’s defence and deliver the Plan for Change without greater security in the Arctic. This is a region where Russia’s shadowfleet operates, threatening critical infrastructure like undersea cables to the UK and Europe, and helping fund Russia’s aggressive activity.”

British and Norwegian naval vessels conduct vital joint patrols in the Arctic. These missions are at the sharp end of efforts to detect, deter, and manage the increasing subsea threats that loom over vital energy supplies, national infrastructure, and broader regional security.

Russia’s Northern Fleet, in particular, presents a persistent challenge in these icy waters.

This high-level engagement follows closely on the heels of the Prime Minister’s visit to Norway earlier this month for a Joint Expeditionary Force meeting, where further support for Ukraine was a key talking point with allies from the Baltic and Scandinavian states.

During the Icelandic stop of his tour, Lammy will unveil a UK-Iceland tech partnership to boost Arctic security. This new scheme is designed to harness AI technologies for monitoring hostile activity across this vast and challenging region. It’s a forward-looking strategy, acknowledging that as the Arctic opens up, so too do the opportunities for those who might seek to exploit its vulnerabilities.

As global temperatures climb and the ancient ice caps continue their retreat, previously impassable shipping routes are emerging. This is not just a matter for climate scientists; it’s redrawing geopolitical maps. The Arctic is fast becoming an arena of increased competition, with nations eyeing newly accessible reserves of gas, oil, and precious minerals. Unsurprisingly, this scramble for resources is cranking up security concerns.

Adding another layer of complexity, areas near the Arctic are being actively used by Russia’s fleet of nuclear-powered icebreakers. Putin’s vessels are crucial to his “High North” strategy, carving paths for tankers that, in turn, help to bankroll his illegal war in Ukraine.

Such operations cast a long shadow, threatening not only maritime security but also the delicate Arctic environment. Reports suggest Putin has been forced to rely on “dodgy and decaying vessels,” which frequently suffer breakdowns and increase the risk of devastating oil spills.

The UK’s defence partnership with Norway is deeply rooted, with British troops undertaking vital Arctic training in the country for over half a century. This enduring collaboration is now being elevated through an agreement to fortify the security of both nations.

“It’s more important than ever that we work with our allies in the High North, like Norway and Iceland, to enhance our ability to patrol and protect these waters,” added Lammy.

“That’s why we have today announced new UK funding to work more closely with Iceland, using AI to bolster our ability to monitor and detect hostile state activity in the Arctic.”

Throughout his Arctic tour, the Foreign Secretary will be emphasising the UK’s role in securing NATO’s northern flank. This includes the often unseen but hugely significant task of protecting the region’s critical undersea infrastructure – the cables and pipelines that are the lifelines for stable energy supplies and telecoms for the UK and much of Europe.

These targeted Arctic security initiatives are part and parcel of a broader, robust enhancement of the UK’s overall defence posture. Earlier this year, the Prime Minister announced the most significant sustained increase in defence spending since the Cold War. This will see UK defence expenditure climb to 2.5% of GDP by April 2027, with a clear ambition to reach 3% in the next Parliament, contingent on economic and fiscal conditions.

The significance of maritime security and the Arctic is also recognised in the UK’s ambitious new Security and Defence Partnership with the EU, agreed last week. This pact commits both sides to closer collaboration to make Europe a safer place.

In today’s interconnected world, security, climate action, and international collaboration are inextricably linked. The turn to AI isn’t just a tech upgrade; it’s a strategic necessity.

(Photo by Annie Spratt)

See also: Thales: AI and quantum threats top security agendas


Congress pushes GPS tracking for every exported semiconductor
https://www.artificialintelligence-news.com/news/congress-pushes-gps-tracking-for-every-exported-semiconductor/
Fri, 16 May 2025 12:17:30 +0000

America’s quest to protect its semiconductor technology from China has taken increasingly dramatic turns over the past few years—from export bans to global restrictions—but the latest proposal from Congress ventures into unprecedented territory. 

Lawmakers are now pushing for mandatory GPS-style tracking embedded in every AI chip exported from the United States, essentially turning advanced semiconductors into devices that report their location back to Washington.

On May 15, 2025, a bipartisan group of eight House representatives introduced the Chip Security Act, which would require companies like Nvidia to embed location verification mechanisms in their processors before export. 

This represents perhaps the most invasive approach yet in America’s technological competition with China, moving far beyond restricting where chips can go to actively monitoring where they end up.

The mechanics of AI chip surveillance

Under the proposed Chip Security Act, AI chip surveillance would become mandatory for all “covered integrated circuit products”—including those classified under Export Control Classification Numbers 3A090, 3A001.z, 4A090, and 4A003.z. Companies like Nvidia would be required to embed location verification mechanisms in their AI chips before export, reexport, or in-country transfer to foreign nations.

Representative Bill Huizenga, the Michigan Republican who introduced the House bill, stated that “we must employ safeguards to help ensure export controls are not being circumvented, allowing these advanced AI chips to fall into the hands of nefarious actors.” 

His co-lead, Representative Bill Foster—an Illinois Democrat and former physicist who designed chips during his scientific career—added, “I know that we have the technical tools to prevent powerful AI technology from getting into the wrong hands.”

The legislation goes far beyond simple location tracking. Companies would face ongoing surveillance obligations, required to report any credible information about chip diversion, including location changes, unauthorized users, or tampering attempts. 

This creates a continuous monitoring system that extends indefinitely beyond the point of sale, fundamentally altering the relationship between manufacturers and their products.

Cross-party support for technology control

Perhaps most striking about this AI chip surveillance initiative is its bipartisan nature. The bill enjoys broad support across party lines, co-led by House Select Committee on China Chairman John Moolenaar and Ranking Member Raja Krishnamoorthi. Other cosponsors include Representatives Ted Lieu, Rick Crawford, Josh Gottheimer, and Darin LaHood.

Moolenaar said that “the Chinese Communist Party has exploited weaknesses in our export control enforcement system—using shell companies and smuggling networks to divert sensitive US technology.” 

The bipartisan consensus on AI chip surveillance reflects how deeply the China challenge has penetrated American political thinking, transcending traditional partisan divisions.

The Senate has already introduced similar legislation through Senator Tom Cotton, suggesting that semiconductor surveillance has broad congressional support. Coordination between chambers indicates that some form of AI chip surveillance may become law regardless of which party controls Congress.

Technical challenges and implementation questions

The technical requirements for implementing AI chip surveillance raise significant questions about feasibility, security, and performance. The bill mandates that chips implement “location verification using techniques that are feasible and appropriate” within 180 days of enactment, but provides little detail on how such mechanisms would work without compromising chip performance or introducing new vulnerabilities.
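The bill does not prescribe a mechanism, but one technique researchers have publicly floated for this kind of check is delay-based geolocation: the chip signs a challenge from a trusted server, and because the reply cannot travel faster than light, the measured round-trip time puts a hard upper bound on how far away the chip can be. A back-of-the-envelope sketch of that bound (the numbers here are illustrative, not drawn from the legislation):

```python
# Illustrative sketch of delay-based geolocation, one technique researchers
# have proposed for chip location verification. Not from the bill text.
SPEED_OF_LIGHT_KM_S = 299_792.458  # km/s in vacuum; signals in fibre travel at roughly 2/3 of this

def max_distance_km(rtt_seconds):
    """Upper bound on chip-to-server distance implied by a round-trip time.

    The signed reply can cover at most c * rtt / 2 one way, so a chip that
    answers a challenge in rtt_seconds cannot be farther away than this.
    """
    return SPEED_OF_LIGHT_KM_S * rtt_seconds / 2

# A 40 ms round trip bounds the chip to within roughly 6,000 km of the
# server, enough to distinguish, say, a US data centre from one overseas.
print(round(max_distance_km(0.040)))
```

In practice the bound is looser still, since measured latency includes routing and processing delays on top of propagation time, which is why this technique can only establish coarse, country-scale location rather than a precise position.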

For industry leaders like Nvidia, implementing mandatory surveillance technology could fundamentally alter product design and manufacturing processes. Each chip would need embedded capabilities to verify its location, potentially requiring additional components, increased power consumption, and processing overhead that could impact performance—precisely what customers in AI applications cannot afford.

The bill also grants the Secretary of Commerce broad enforcement authority to “verify, in a manner the Secretary determines appropriate, the ownership and location” of exported chips. This creates a real-time surveillance system where the US government could potentially track every advanced semiconductor worldwide, raising questions about data sovereignty and privacy.

Commercial surveillance meets national security

The AI chip surveillance proposal represents an unprecedented fusion of national security imperatives with commercial technology products. Unlike traditional export controls, which simply restrict destinations, this approach creates ongoing monitoring obligations that blur the lines between private commerce and state surveillance.

Representative Foster’s background as a physicist lends technical credibility to the initiative, but it also highlights how scientific expertise can be enlisted in geopolitical competition. The legislation reflects a belief that technical solutions can solve political problems—that embedding surveillance capabilities in semiconductors can prevent their misuse.

Yet the proposed law raises fundamental questions about the nature of technology export in a globalized world. Should every advanced semiconductor become a potential surveillance device? How will mandatory AI chip surveillance affect innovation in countries that rely on US technology? What precedent does this set for other nations seeking to monitor their technology exports?

Accelerating technological decoupling

The mandatory AI chip surveillance requirement could inadvertently accelerate the development of alternative semiconductor ecosystems. If US chips come with built-in tracking mechanisms, countries may intensify efforts to develop domestic alternatives or source from suppliers without such requirements.

China, already investing heavily in semiconductor self-sufficiency following years of US restrictions, may view these surveillance requirements as further justification for technological decoupling. The irony is striking: efforts to track Chinese use of US chips may ultimately reduce their appeal and market share in global markets.

Meanwhile, allied nations may question whether they want their critical infrastructure dependent on chips that can be monitored by the US government. The legislation’s broad language suggests that AI chip surveillance would apply to all foreign countries, not just adversaries, potentially straining relationships with partners who value technological sovereignty.

The future of semiconductor governance

As the Trump administration continues to formulate its replacement for Biden’s AI Diffusion Rule, Congress appears unwilling to wait. The Chip Security Act represents a more aggressive approach than traditional export controls, moving from restriction to active surveillance in ways that could reshape the global semiconductor industry.

This evolution reflects deeper changes in how nations view technology exports in an era of great power competition. The semiconductor industry, once governed primarily by market forces and technical standards, increasingly operates under geopolitical imperatives that prioritize control over commerce.

Whether AI chip surveillance becomes law depends on congressional action and industry response. But the bipartisan support suggests that some form of semiconductor monitoring may be inevitable, marking a new chapter in the relationship between technology, commerce, and national security.

Conclusion: The end of anonymous semiconductors from America?

The question facing the industry is no longer whether the US will control technology exports, but how extensively it will monitor them after they leave American shores. In this emerging paradigm, every chip becomes a potential intelligence asset, and every export a data point in a global surveillance network.

The semiconductor industry now faces a critical choice: adapt to a future where products carry their own tracking systems, or risk being excluded from the US market entirely. 

As Congress pushes for mandatory AI chip surveillance, we may be witnessing the end of anonymous semiconductors and the beginning of an era where every processor knows exactly where it belongs—and reports back accordingly.

See also: US-China tech war escalates with new AI chips export controls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Congress pushes GPS tracking for every exported semiconductor appeared first on AI News.

]]>
Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation https://www.artificialintelligence-news.com/news/dame-wendy-hall-ai-council-shaping-ai-with-ethics-diversity-and-innovation/ Mon, 31 Mar 2025 10:54:40 +0000 https://www.artificialintelligence-news.com/?p=105089 Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council […]

The post Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation appeared first on AI News.

]]>
Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.


The post Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation appeared first on AI News.

]]>
Lighthouse AI for Review enhances document eDiscovery https://www.artificialintelligence-news.com/news/lighthouse-ai-for-review-enhances-document-ediscovery/ Wed, 26 Mar 2025 12:02:32 +0000 https://www.artificialintelligence-news.com/?p=105013 In an increasing number of industries, eDiscovery of regulation and compliance documents can make trading (across state borders in the US, for example) less complex. In an industry like pharmaceutical, and its often complex supply chains, companies have to be aware of the mass of changing rules and regulations emanating from different legislatures at local […]

The post Lighthouse AI for Review enhances document eDiscovery appeared first on AI News.

]]>
In an increasing number of industries, eDiscovery of regulation and compliance documents can make trading (across state borders in the US, for example) less complex.

In an industry like pharmaceutical, and its often complex supply chains, companies have to be aware of the mass of changing rules and regulations emanating from different legislatures at local and federal levels. It’s no surprise, therefore, that it’s in regulated supply chain compliance that AI can be hugely beneficial. Given that AIs excel at reading and parsing documentation and images, service providers like Lighthouse AI use the technology in its different forms to comb through existing and new documentation that governs the industry.

The company’s latest suite, Lighthouse AI for Review, combines several variations of machine learning — predictive and generative AI, image recognition and OCR, plus linguistic modelling — to handle use cases in large-volume, time-sensitive settings.

Predictive AI is used for classification of documents and generative AI helps with the review process for better, more defensible, downstream results. The company claims that the linguistic modelling element of the suite refines the platform’s accuracy to levels normally “beyond AI’s capabilities.”

eDiscovery – the broad term

Lighthouse AI is currently six years old, and has analysed billions of documents since 2019, but predictive AI remains important to the software, despite – it might be said – generative AI grabbing most of the headlines in the last 18 months. Fernando Delgado, Director of AI and Analytics at Lighthouse, said, “While much attention has been rightly paid to the impact of GenAI recently, the power and relevancy of predictive AI cannot be overlooked. They do different things, and there is often real value in combining them to handle different elements in the same workflow.”
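
The combined workflow Delgado describes can be sketched in a few lines. This is a hypothetical illustration only — the function names, threshold, and stand-in models are invented for this sketch and are not Lighthouse’s actual API:

```python
# Hypothetical two-stage review pipeline: a predictive classifier scores
# each document for responsiveness, then a generative model summarises
# only the documents that pass the threshold, cutting reviewer workload.
def review(documents, classify, summarise, threshold=0.7):
    results = []
    for doc in documents:
        score = classify(doc)  # predictive AI: relevance score in 0..1
        if score >= threshold:
            results.append((doc, score, summarise(doc)))  # generative AI
    return results

# Toy stand-ins for the two models:
docs = ["contract amendment", "lunch menu", "supplier compliance memo"]
classify = lambda d: 0.9 if "compliance" in d or "contract" in d else 0.1
summarise = lambda d: f"Summary of: {d}"

for doc, score, summary in review(docs, classify, summarise):
    print(doc, score, summary)
```

The design point is the one Delgado makes: the two model types handle different elements of the same workflow, with the cheaper predictive stage filtering what reaches the more expensive generative stage.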

Given that the blanket term ‘the pharmaceutical industry’ includes concerns as disparate as medical technology, drug research, and production, right through to dispensing stores, the compliance requirements for an individual company in the sector can be wildly varied. “Rather than a one-size-fits-all approach, we’ve been able to shape the technology to fit our unique needs – turning our ideas into real, impactful solutions,” says Christian Mahoney, Counsel at Cleary Gottlieb Steen & Hamilton.

Lighthouse AI for Review covers use cases including AI for Responsive Review, AI for Privilege Review, AI for Privilege Analysis, and AI for PII/PHI/PCI Identification. Lighthouse claims that users of the AI for Responsive Review feature see up to a 40% reduction in the volume of documents requiring classification and summary, with less training required by the LLM before it begins to create ROI. AI for Privilege Review is also “60% more accurate than keyword-based models,” Lighthouse says.

AI’s acuity with visual data is handled by AI for Image Analysis, which uses GenAI to analyse images and, for example, produce text descriptions of media, presenting results in the same interface users interact with for other tasks.

Lighthouse’s AI for PII/PHI/PCI Identification automates the mapping of relationships between entities, and can reduce the need for manual reviews. “The new offerings are highly differentiated and designed to provide the most impact for the volume, velocity, and complexity of eDiscovery,” said Lighthouse CEO, Ron Markezich.

(Image source: “Basel – Roche Building 1” by corno.fulgur75 is licensed under CC BY 2.0.)

See also: Hugging Face calls for open-source focus in the AI Action Plan


The post Lighthouse AI for Review enhances document eDiscovery appeared first on AI News.

]]>
LG EXAONE Deep is a maths, science, and coding buff https://www.artificialintelligence-news.com/news/lg-exaone-deep-maths-science-and-coding-buff/ Tue, 18 Mar 2025 12:49:26 +0000 https://www.artificialintelligence-news.com/?p=104905 LG AI Research has unveiled EXAONE Deep, a reasoning model that excels in complex problem-solving across maths, science, and coding. The company highlighted the global challenge in creating advanced reasoning models, noting that currently, only a handful of organisations with foundational models are actively pursuing this complex area. EXAONE Deep aims to compete directly with […]

The post LG EXAONE Deep is a maths, science, and coding buff appeared first on AI News.

]]>
LG AI Research has unveiled EXAONE Deep, a reasoning model that excels in complex problem-solving across maths, science, and coding.

The company highlighted the global challenge in creating advanced reasoning models, noting that currently, only a handful of organisations with foundational models are actively pursuing this complex area. EXAONE Deep aims to compete directly with these leading models, showcasing a competitive level of reasoning ability.

LG AI Research has focused its efforts on dramatically improving EXAONE Deep’s reasoning capabilities in core domains. The model also demonstrates a strong ability to understand and apply knowledge across a broader range of subjects.

The performance benchmarks released by LG AI Research are impressive:

  • Maths: The EXAONE Deep 32B model outperformed a competing model, despite being only 5% of its size, in a demanding mathematics benchmark. Furthermore, the 7.8B and 2.4B versions achieved first place in all major mathematics benchmarks for their respective model sizes.
  • Science and coding: In these areas, the EXAONE Deep models (7.8B and 2.4B) have secured the top spot across all major benchmarks.
  • MMLU (Massive Multitask Language Understanding): The 32B model achieved a score of 83.0 on the MMLU benchmark, which LG AI Research claims is the best performance among domestic Korean models.

The capabilities of the EXAONE Deep 32B model have already garnered international recognition.

Shortly after its release, it was included in the ‘Notable AI Models’ list by US-based non-profit research organisation Epoch AI. This listing places EXAONE Deep alongside its predecessor, EXAONE 3.5, making LG the only Korean entity with models featured on this prestigious list in the past two years.

Maths prowess

EXAONE Deep has demonstrated exceptional mathematical reasoning skills across its various model sizes (32B, 7.8B, and 2.4B). In assessments based on the 2025 academic year’s mathematics curriculum, all three models outperformed global reasoning models of comparable size.

The 32B model achieved a score of 94.5 in a general mathematics competency test and 90.0 in the American Invitational Mathematics Examination (AIME) 2024, a qualifying exam for the US Mathematical Olympiad.

In the AIME 2025, the 32B model matched the performance of DeepSeek-R1—a significantly larger 671B model. This result showcases EXAONE Deep’s efficient learning and strong logical reasoning abilities, particularly when tackling challenging mathematical problems.

The smaller 7.8B and 2.4B models also achieved top rankings in major benchmarks for lightweight and on-device models, respectively. The 7.8B model scored 94.8 on the MATH-500 benchmark and 59.6 on AIME 2025, while the 2.4B model achieved scores of 92.3 and 47.9 in the same evaluations.

Science and coding excellence

EXAONE Deep has also showcased remarkable capabilities in professional science reasoning and software coding.

The 32B model scored 66.1 on the GPQA Diamond test, which assesses problem-solving skills in doctoral-level physics, chemistry, and biology. In the LiveCodeBench evaluation, which measures coding proficiency, the model achieved a score of 59.5, indicating its potential for high-level applications in these expert domains.

The 7.8B and 2.4B models continued this trend of strong performance, both securing first place in the GPQA Diamond and LiveCodeBench benchmarks within their respective size categories. This achievement builds upon the success of the EXAONE 3.5 2.4B model, which previously topped Hugging Face’s LLM Leaderboard in the edge division.

Enhanced general knowledge

Beyond its specialised reasoning capabilities, EXAONE Deep has also demonstrated improved performance in general knowledge understanding.

The 32B model achieved an impressive score of 83.0 on the MMLU benchmark, positioning it as the top-performing domestic model in this comprehensive evaluation. This indicates that EXAONE Deep’s reasoning enhancements extend beyond specific domains and contribute to a broader understanding of various subjects.

LG AI Research believes that EXAONE Deep’s reasoning advancements represent a leap towards a future where AI can tackle increasingly complex problems and contribute to enriching and simplifying human lives through continuous research and innovation.

See also: Baidu undercuts rival AI models with ERNIE 4.5 and ERNIE X1


The post LG EXAONE Deep is a maths, science, and coding buff appeared first on AI News.

]]>
From punch cards to mind control: Human-computer interactions https://www.artificialintelligence-news.com/news/from-punch-cards-to-mind-control-human-computer-interactions/ Wed, 05 Mar 2025 15:22:07 +0000 https://www.artificialintelligence-news.com/?p=104721 The way we interact with our computers and smart devices is very different from previous years. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now extended reality-based AI agents that can converse with us in the same way as we do with friends. With each […]

The post From punch cards to mind control: Human-computer interactions appeared first on AI News.

]]>
The way we interact with our computers and smart devices is very different from what it once was. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now extended reality-based AI agents that can converse with us in the same way as we do with friends.

With each advance in human-computer interfaces, we’re getting closer to the goal of seamless, natural interaction with machines, making computers more accessible and better integrated into our lives.

Where did it all begin?

Modern computers emerged in the first half of the 20th century and relied on punch cards to feed data into the system and enable binary computations. The cards had a series of punched holes, and light was shone at them. If the light passed through a hole and was detected by the machine, it represented a “one”. Otherwise, it was a “zero”. As you can imagine, it was extremely cumbersome, time-consuming, and error-prone.
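
As a toy illustration of that encoding — light through a hole reads as a one, blocked light as a zero — a punched row can be interpreted as a binary number. The card layout below is invented for this sketch; real formats such as IBM’s 80-column cards used per-column zone and digit punches rather than plain binary:

```python
# Toy model of punch-card reading: a hole ('O') lets light through and
# reads as 1; solid card ('.') blocks it and reads as 0, MSB first.
def read_row(row: str) -> int:
    bits = ''.join('1' if ch == 'O' else '0' for ch in row)
    return int(bits, 2)  # parse the bit string as base-2

card = [
    "O.O.",  # holes in positions 0 and 2 -> binary 1010 -> 10
    "..OO",  # binary 0011 -> 3
]
values = [read_row(r) for r in card]
print(values)  # [10, 3]
```

A single mispunched or torn hole flips a bit, which is one reason the medium was so error-prone.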

That changed with the arrival of ENIAC, or Electronic Numerical Integrator and Computer, widely considered to be the first “Turing-complete” device that could solve a variety of numerical problems. Instead of punch cards, operating ENIAC involved manually setting a series of switches and plugging patch cords into a board to configure the computer for specific calculations, while data was inputted via a further series of switches and buttons. It was an improvement over punch cards, but not nearly as dramatic as the arrival of the modern QWERTY electronic keyboard in the early 1950s.

Keyboards, adapted from typewriters, were a game-changer, allowing users to input text-based commands more intuitively. But while they made programming faster, accessibility was still limited to those with knowledge of the highly technical programming commands required to operate computers.

GUIs and touch

The most important development in terms of computer accessibility was the graphical user interface or GUI, which finally opened computing to the masses. The first GUIs appeared in the late 1960s and were later refined by companies like IBM, Apple, and Microsoft, replacing text-based commands with a visual display made up of icons, menus, and windows.

Alongside the GUI came the iconic “mouse“, which enabled users to “point-and-click” to interact with computers. Suddenly, these machines became easily navigable, allowing almost anyone to operate one. With the arrival of the internet a few years later, the GUI and the mouse helped pave the way for the computing revolution, with computers becoming commonplace in every home and office.

The next major milestone in human-computer interfaces was the touchscreen, which first appeared in the late 1990s and did away with the need for a mouse or a separate keyboard. Users could now interact with their computers by tapping icons on the screen directly, pinching to zoom, and swiping left and right. Touchscreens eventually paved the way for the smartphone revolution that started with the arrival of the Apple iPhone in 2007 and, later, Android devices.

With the rise of mobile computing, the variety of computing devices evolved further, and in the late 2000s and early 2010s, we witnessed the emergence of wearable devices like fitness trackers and smartwatches. Such devices are designed to integrate computers into our everyday lives, and it’s possible to interact with them in newer ways, like subtle gestures and biometric signals. Fitness trackers, for instance, use sensors to keep track of how many steps we take or how far we run, and can monitor a user’s pulse to measure heart rate.

Extended reality & AI avatars

In the last decade, we also saw the first artificial intelligence systems, with early examples being Apple’s Siri and Amazon’s Alexa. AI chatbots use voice recognition technology to enable users to communicate with their devices using their voice.

As AI has advanced, these systems have become increasingly sophisticated and better able to understand complex instructions or questions, and can respond based on the context of the situation. With more advanced chatbots like ChatGPT, it’s possible to engage in lifelike conversations with machines, eliminating the need for any kind of physical input device.

AI is now being combined with emerging augmented reality and virtual reality technologies to further refine human-computer interactions. With AR, we can insert digital information into our surroundings by overlaying it on top of our physical environment. This is enabled by AR and VR devices like the Oculus Rift, HoloLens, and Apple Vision Pro, and further pushes the boundaries of what’s possible.

So-called extended reality, or XR, is the latest take on the technology, replacing traditional input methods with eye-tracking and gestures and providing haptic feedback, enabling users to interact with digital objects in physical environments. Instead of being restricted to flat, two-dimensional screens, our entire world becomes a computer through a blend of virtual and physical reality.

The convergence of XR and AI opens the doors to more possibilities. Mawari Network is bringing AI agents and chatbots into the real world through the use of XR technology. It’s creating more meaningful, lifelike interactions by streaming AI avatars directly into our physical environments. The possibilities are endless – imagine an AI-powered virtual assistant standing in your home or a digital concierge that meets you in the hotel lobby, or even an AI passenger that sits next to you in your car, directing you on how to avoid the worst traffic jams. Through its decentralised DePIN infrastructure, it’s enabling AI agents to drop into our lives in real time.

The technology is nascent but it’s not fantasy. In Germany, tourists can call on an avatar called Emma to guide them to the best spots and eateries in dozens of German cities. Other examples include digital popstars like Naevis, which is pioneering the concept of virtual concerts that can be attended from anywhere.

In the coming years, we can expect to see this XR-based spatial computing combined with brain-computer interfaces, which promise to let users control computers with their thoughts. BCIs use electrodes placed on the scalp to pick up the electrical signals generated by our brains. Although it’s still in its infancy, this technology promises to deliver the most effective human-computer interactions possible.

The future will be seamless

The story of the human-computer interface is still under way, and as our technological capabilities advance, the distinction between digital and physical reality will become more blurred.

Perhaps one day soon, we’ll be living in a world where computers are omnipresent, integrated into every aspect of our lives, similar to Star Trek’s famed holodeck. Our physical realities will be merged with the digital world, and we’ll be able to communicate, find information, and perform actions using only our thoughts. This vision would have been considered fanciful only a few years ago, but the rapid pace of innovation suggests it’s not nearly so far-fetched. Rather, it’s something that the majority of us will live to see.

(Image source: Unsplash)

The post From punch cards to mind control: Human-computer interactions appeared first on AI News.

]]>
DeepSeek ban? China data transfer boosts security concerns https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/ Fri, 07 Feb 2025 17:44:01 +0000 https://www.artificialintelligence-news.com/?p=104228 US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part […]

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.
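The mention of “keystroke patterns” refers to keystroke dynamics: the timing signature of how a person types, which is distinctive enough to act as a behavioural fingerprint. A minimal sketch of the kind of features such a collector could derive (purely illustrative; the event format is invented, and none of this is DeepSeek’s actual code):

```python
# Illustrative sketch of keystroke dynamics: the kind of timing signature
# a "keystroke pattern" collector could derive. The event format is
# invented: (key, press_time_ms, release_time_ms).
events = [("h", 0, 95), ("i", 180, 260), ("!", 420, 510)]

def timing_features(events):
    """Dwell time: how long each key is held down.
    Flight time: gap between releasing one key and pressing the next."""
    dwell = [release - press for _key, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"dwell_ms": dwell, "flight_ms": flight}

print(timing_features(events))
# {'dwell_ms': [95, 80, 90], 'flight_ms': [85, 160]}
```

Because dwell and flight times are stable per user, collecting them allows re-identification across sessions and devices, which is why privacy researchers treat this category of data as especially sensitive.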

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance capabilities or even enable economic manipulation.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


EU AI Act: What businesses need to know as regulations go live https://www.artificialintelligence-news.com/news/eu-ai-act-what-businesses-need-know-regulations-go-live/ Fri, 31 Jan 2025 12:52:49 +0000 https://www.artificialintelligence-news.com/?p=17015 Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect. While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across […]

The post EU AI Act: What businesses need to know as regulations go live appeared first on AI News.

Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  
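To put the 7% cap in concrete terms, a quick worked example (the turnover figure is invented for illustration):

```python
# EU AI Act: prohibited-practice violations can draw fines of up to 7%
# of global annual turnover. Worked example with a hypothetical turnover.
turnover_eur = 2_000_000_000                 # hypothetical €2bn global annual turnover
max_penalty_eur = turnover_eur * 7 // 100    # 7% cap, integer arithmetic
print(f"Maximum exposure: €{max_penalty_eur:,}")  # Maximum exposure: €140,000,000
```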

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”


Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are assuredly not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.


“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.


“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.
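Evans’s suggested first step (build an inventory of AI use, then screen each use case against the prohibitions above) can be sketched as a simple triage pass. The category labels below compress the Act’s wording and are illustrative, not legal definitions:

```python
# Illustrative triage of an internal AI inventory against the Act's
# prohibited-practice categories (simplified labels, not legal text).
PROHIBITED = {
    "social_scoring",
    "emotion_recognition_workplace",
    "untargeted_face_scraping",
    "realtime_remote_biometric_id",
}

inventory = [
    {"system": "CV screening assistant", "category": "recruitment"},
    {"system": "Office mood tracker", "category": "emotion_recognition_workplace"},
]

def triage(inventory):
    """Split the inventory into items needing urgent legal review
    and items that proceed to ordinary AI Act risk classification."""
    flagged = [item for item in inventory if item["category"] in PROHIBITED]
    cleared = [item for item in inventory if item["category"] not in PROHIBITED]
    return flagged, cleared

flagged, cleared = triage(inventory)
print([item["system"] for item in flagged])  # ['Office mood tracker']
```

In practice this is only a first pass: systems that clear the prohibitions still need to be classified under the Act’s risk tiers, and borderline cases need legal review rather than a keyword match.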

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents https://www.artificialintelligence-news.com/news/yiannis-antoniou-lab49-openai-operator-era-browser-ai-agents/ Fri, 24 Jan 2025 14:03:14 +0000 https://www.artificialintelligence-news.com/?p=16963 OpenAI has unveiled Operator, a tool that integrates seamlessly with web browsers to perform tasks autonomously. From filling out forms to ordering groceries, Operator promises to simplify repetitive online activities by interacting directly with websites through clicks, typing, and scrolling. Designed around a new model called the Computer-Using Agent (CUA), Operator combines GPT-4o’s vision recognition […]

The post Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents appeared first on AI News.

OpenAI has unveiled Operator, a tool that integrates seamlessly with web browsers to perform tasks autonomously. From filling out forms to ordering groceries, Operator promises to simplify repetitive online activities by interacting directly with websites through clicks, typing, and scrolling.

Designed around a new model called the Computer-Using Agent (CUA), Operator combines GPT-4o’s vision recognition with advanced reasoning capabilities—allowing it to function as a virtual “human-in-the-browser.” Yet, for all its innovation, industry experts see room for refinement.

Yiannis Antoniou, Head of AI, Data, and Analytics at specialist consultancy Lab49, shared his insights on Operator’s significance and positioning in the competitive landscape of agent AI systems.

Agentic AI through a familiar interface

“OpenAI’s announcement of Operator, its latest foray into the agentic AI wars, is both fascinating and incomplete,” said Antoniou, who has over two decades of experience designing AI systems for financial services firms.

Headshot of Yiannis Antoniou, Head of AI, Data, and Analytics at specialist consultancy Lab49, for an article on how OpenAI operator is kickstarting the era of browser AI agents.

“Clearly influenced by Anthropic Claude’s Computer Use system, introduced back in October, Operator streamlines the experience by removing the need for complex infrastructure and focusing on a familiar interface: the browser.”

By designing Operator to operate within an environment users already understand, the web browser, OpenAI sidesteps the need for bespoke APIs or integrations.

“By leveraging the world’s most popular interface, OpenAI enhances the user experience and captures immediate interest from the general public. This browser-centric approach creates significant potential for widespread adoption, something Anthropic – despite its early-mover advantage – has struggled to achieve.”

Unlike some competing systems that may feel technical or niche in their application, Operator’s browser-focused framework lowers the barrier to entry and is a step forward in OpenAI’s efforts to democratise AI.

Unique take on usability and security

One of the hallmarks of Operator is its emphasis on adaptability and security, implemented through human-in-the-loop protocols. Antoniou acknowledged these thoughtful usability features but noted that more work is needed.

“Architecturally, Operator’s browser integration closely mirrors Claude’s system. Both involve taking screenshots of the user’s browser and sending them for analysis, as well as controlling the screen via virtual keystrokes and mouse movements. However, Operator introduces thoughtful usability touches. 

“Features like custom instructions for specific websites add a layer of personalisation, and the emphasis on human-in-the-loop safeguards against unauthorised actions – such as purchases, sending emails, or applying for jobs – demonstrate OpenAI’s awareness of potential security risks posed by malicious websites, but more work is clearly needed to make this system widely safe across a variety of scenarios.”

OpenAI has implemented a multi-layered safety framework for Operator, including takeover mode for secure inputs, user confirmations prior to significant actions, and monitoring systems to detect adversarial behaviour. Furthermore, users can delete browsing data and manage privacy settings directly within the tool.
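The screenshot-driven loop described above (observe the screen, let the model choose an action, execute it via virtual input, and gate significant actions behind human confirmation) can be sketched abstractly. This is a hypothetical outline of the control flow, not OpenAI’s implementation; every name below is invented:

```python
# Abstract sketch of a browser-agent loop of the kind described:
# observe, reason, act, with a human-in-the-loop gate before anything
# consequential. All names and the Action shape are hypothetical.
from dataclasses import dataclass

SIGNIFICANT = {"purchase", "send_email", "apply_for_job"}

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "scroll", "purchase", "done"
    payload: str = ""

def run_agent(capture, next_action, execute, confirm, max_steps=50):
    """Generic observe-reason-act loop.

    capture():      returns a screenshot of the browser
    next_action(s): the model; maps a screenshot to the next Action
    execute(a):     drives the virtual keyboard/mouse
    confirm(a):     the human-in-the-loop gate for significant actions
    """
    for _ in range(max_steps):
        screenshot = capture()
        action = next_action(screenshot)
        if action.kind == "done":
            return action.payload
        if action.kind in SIGNIFICANT and not confirm(action):
            continue  # user declined; let the model plan a different step
        execute(action)
    raise TimeoutError("step budget exhausted")
```

The confirmation gate is the key design choice: the model can browse freely, but purchases, emails, and job applications stall until a human approves, which is exactly where Antoniou argues more hardening is still needed.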

However, Antoniou emphasised that these measures are still evolving—particularly as Operator encounters complex or sensitive tasks. 

OpenAI Operator further democratises AI

Antoniou also sees the release of Operator as a pivotal moment for the consumer AI landscape, albeit one that is still in its early stages. 

“Overall, this is an excellent first attempt at building an agentic system for everyday users, designed around how they naturally interact with technology. As the system develops – with added capabilities and more robust security controls – this limited rollout, priced at $200/month, will serve as a testing ground. 

“Once matured and extended to lower subscription tiers and the free version, Operator has the potential to usher in the era of consumer-facing agents, further democratising AI and embedding it into daily life.”

Designed initially for Pro users at a premium price point, Operator provides OpenAI with an opportunity to learn from early adopters and refine its capabilities.

Antoniou noted that while $200/month might not yet justify the system’s value for most users, investment in making Operator more powerful and accessible could lead to significant competitive advantages for OpenAI in the long run.

“Is it worth $200/month? Perhaps not yet. But as the system evolves, OpenAI’s moat will grow, making it harder for competitors to catch up. Now, the challenge shifts back to Anthropic and Google – both of whom have demonstrated similar capabilities in niche or engineering-focused products – to respond and stay in the game,” he concludes.

As OpenAI continues to fine-tune Operator, the potential to revolutionise how people interact with technology becomes apparent. From collaborations with companies like Instacart, DoorDash, and Uber to use cases in the public sector, Operator aims to balance innovation with trust and safety.

While early limitations and pricing may deter widespread adoption for now, these hurdles might only be temporary as OpenAI commits to enhancing usability and accessibility over time.

See also: OpenAI argues against ChatGPT data deletion in Indian court

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

