Education AI - AI News
https://www.artificialintelligence-news.com/categories/ai-in-action/education-ai/

OpenAI targets AI skills gap with new certification standards
https://www.artificialintelligence-news.com/news/openai-targets-ai-skills-gap-with-new-certification-standards/
Tue, 09 Dec 2025 14:45:06 +0000

Adoption of generative AI has outpaced workforce capability, prompting OpenAI to target the skills gap with new certification standards.

While it’s safe to say OpenAI’s tools have reached mass adoption, organisations struggle to convert this usage into reliable output. To address this, OpenAI has announced ‘AI Foundations,’ a structured initiative designed to standardise how employees learn and apply the technology.

OpenAI’s initiative marks a necessary evolution in the vendor ecosystem, indicating a departure from the “move fast” phase of experimental deployment toward a focus on verifiable competence. OpenAI explicitly states its intention to certify 10 million Americans by 2030.

Workers and employers have an incentive to close the AI skills gap

The economic case for AI training and certification is rooted in wage and productivity data. Workers possessing AI skills earn approximately 50 percent more than those without them. However, CIOs often find that productivity gains on paper fail to materialise in practice. OpenAI notes that gains “only materialise when people have the skills to use the technology.”

Without guidance, widespread access can create operational risk. OpenAI admits the technology is “disruptive, leaving many people unsure which skills matter most.” By defining a standard curriculum, OpenAI aims to help organisations capture the efficiency gains promised by their software investments.

The delivery method for AI Foundations differs from traditional corporate LMS (Learning Management System) modules. The course sits directly inside ChatGPT, allowing the platform to act as “tutor, the practice space, and the feedback loop” simultaneously. This integration allows learners to execute real tasks and receive context-aware corrections to help close the AI skills gap, rather than just watching passive video content.

Completing the programme yields a badge verifying “job-ready AI skills”. This credential serves as a stepping stone toward a full OpenAI Certification. To ensure these badges carry weight in the labour market, OpenAI has engaged Coursera, ETS, and Credly by Pearson to validate the psychometric rigour and design of the assessments.

Operational pilots for the AI certification and improving the hiring pipeline

A consortium of large-scale employers and public-sector bodies will test the curriculum before a wider rollout. Pilot partners include Walmart, John Deere, Lowe’s, Boston Consulting Group, Russell Reynolds Associates, Upwork, Elevance Health, and Accenture. The Office of the Governor of Delaware is also participating, signalling interest at the level of state administration.

These partners span industries with heavy operational footprints (including retail, agriculture, and healthcare), suggesting the training targets core business functions rather than just technical roles. OpenAI plans to use the next few months to refine the course based on data from these pilots to ensure that it can effectively close the AI skills gap.

OpenAI’s initiative extends into recruitment. The company is developing an ‘OpenAI Jobs Platform’ to connect certified workers with employers. Partnerships with Indeed and Upwork support this mechanism, aiming to make it easier for businesses to identify candidates with verified technical expertise.

For hiring managers, this offers a potential solution to the difficulty of vetting AI literacy. A standardised AI certification could reduce the reliance on self-reported skills, providing “portable evidence” of a candidate’s development.

Academic alignment to seed future AI talent

While the enterprise focus is immediate, OpenAI is also seeding the future talent pipeline. A ‘ChatGPT Foundations for Teachers’ course has launched on Coursera. With three in five teachers already using AI tools to save time and personalise materials, this stream aims to formalise existing habits.

Simultaneously, pilots with Arizona State University and the California State University system are creating pathways for students to certify their skills before entering the job market. This ensures that the next wave of graduates arrives with the “job-ready” verification that enterprise employers are beginning to demand.

Organisations must now decide whether to rely on vendor-supplied certification or continue developing proprietary training. The involvement of firms like Boston Consulting Group and Accenture implies that major players see value in a standardised external benchmark.

As OpenAI moves to certify millions of people and close the AI skills gap, the certification badge may become a baseline expectation for knowledge workers, much like office suite proficiency in previous decades.

See also: Instacart pilots agentic commerce by embedding in ChatGPT


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post OpenAI targets AI skills gap with new certification standards appeared first on AI News.

WorldGen: Meta reveals generative AI for interactive 3D worlds
https://www.artificialintelligence-news.com/news/worldgen-meta-generative-ai-for-interactive-3d-worlds/
Fri, 21 Nov 2025 16:35:32 +0000

With its WorldGen system, Meta is shifting the use of generative AI for 3D worlds from creating static imagery to fully interactive assets.

The main bottleneck in creating immersive spatial computing experiences – whether for consumer gaming, industrial digital twins, or employee training simulations – has long been the labour-intensive nature of 3D modelling. The production of an interactive environment typically requires teams of specialised artists working for weeks.

WorldGen, according to a new technical report from Meta’s Reality Labs, is capable of generating traversable and interactive 3D worlds from a single text prompt in approximately five minutes.

While the technology is currently research-grade, the WorldGen architecture addresses specific pain points that have prevented generative AI from being useful in professional workflows: functional interactivity, engine compatibility, and editorial control.

Generative AI environments become truly interactive 3D worlds

The primary failing of many existing text-to-3D models is that they prioritise visual fidelity over function. Approaches such as gaussian splatting create photorealistic scenes that look impressive in a video but often lack the underlying physical structure required for a user to interact with the environment. Assets lacking collision data or ramp physics hold little-to-no value for simulation or gaming.

WorldGen diverges from this path by prioritising “traversability”. The system generates a navigation mesh (navmesh) – a simplified polygon mesh that defines walkable surfaces – alongside the visual geometry. This ensures that a prompt such as “medieval village” produces not just a collection of houses, but a spatially-coherent layout where streets are clear of obstructions and open spaces are accessible.

For enterprises, this distinction is vital. A digital twin of a factory floor or a safety training simulation for hazardous environments requires valid physics and navigation data.
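To make the traversability idea concrete, here is a toy check over a grid-style walkability map: every walkable cell must be reachable from every other, so a generated layout with a blocked street would fail. (This is an illustration of the concept only; the grid representation, function name, and logic are not from Meta's system, which operates on polygon navmeshes.)

```python
from collections import deque

def is_fully_traversable(navmesh):
    """Check that every walkable cell in a grid 'navmesh' is reachable
    from every other one, i.e. the layout has no isolated pockets.

    navmesh: 2D list of bools, True = walkable surface.
    """
    walkable = [(r, c) for r, row in enumerate(navmesh)
                for c, cell in enumerate(row) if cell]
    if not walkable:
        return False

    # Breadth-first search outward from an arbitrary walkable cell.
    seen = {walkable[0]}
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(navmesh) and 0 <= nc < len(navmesh[0])
                    and navmesh[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))

    # Traversable iff the search reached every walkable cell.
    return len(seen) == len(walkable)
```

An open plaza passes the check; a map split in two by an impassable wall fails, which is exactly the kind of defect WorldGen's navmesh conditioning is designed to prevent.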

Meta’s approach ensures the output is “game engine-ready,” meaning the assets can be exported directly into standard platforms like Unity or Unreal Engine. This compatibility allows technical teams to integrate generative workflows into existing pipelines without needing specialised rendering hardware that other methods, such as radiance fields, often demand.

The four-stage production line of WorldGen

Meta’s researchers have structured WorldGen as a modular AI pipeline that mirrors traditional development workflows for creating 3D worlds.

The process begins with scene planning. An LLM acts as a structural engineer, parsing the user’s text prompt to generate a logical layout. It determines the placement of key structures and terrain features, producing a “blockout” – a rough 3D sketch – that guarantees the scene makes physical sense.

The subsequent “scene reconstruction” phase builds the initial geometry. The system conditions the generation on the navmesh, ensuring that as the AI “hallucinates” details, it does not inadvertently place a boulder in a doorway or block a fire exit path.

“Scene decomposition,” the third stage, is perhaps the most relevant for operational flexibility. The system uses a method called AutoPartGen to identify and separate individual objects within the scene—distinguishing a tree from the ground, or a crate from a warehouse floor.

In many “single-shot” generative models, the scene is a single fused lump of geometry. By separating components, WorldGen allows human editors to move, delete, or modify specific assets post-generation without breaking the entire world.

For the last step, “scene enhancement” polishes the assets. The system generates high-resolution textures and refines the geometry of individual objects to ensure visual quality holds up when close.
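Taken together, the four stages form a linear hand-off, each consuming the previous stage's output. The shape of such a modular pipeline can be sketched as follows; the stage names mirror the report, but every function body here is an illustrative stand-in rather than Meta's code:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    prompt: str
    blockout: list = field(default_factory=list)   # rough structure placement
    objects: list = field(default_factory=list)    # separated, editable assets
    enhanced: bool = False

def plan(scene):
    # Stage 1: an LLM-style planner turns the prompt into a coarse layout.
    scene.blockout = [{"type": "terrain"}, {"type": "structure"}]
    return scene

def reconstruct(scene):
    # Stage 2: build initial geometry conditioned on the walkable layout.
    scene.objects = [dict(b, mesh="coarse") for b in scene.blockout]
    return scene

def decompose(scene):
    # Stage 3: split the fused scene into individually editable parts.
    for obj in scene.objects:
        obj["editable"] = True
    return scene

def enhance(scene):
    # Stage 4: refine textures and geometry of each separated object.
    for obj in scene.objects:
        obj["mesh"] = "refined"
    scene.enhanced = True
    return scene

def generate_world(prompt):
    scene = Scene(prompt)
    for stage in (plan, reconstruct, decompose, enhance):
        scene = stage(scene)
    return scene
```

The value of this structure is that each stage can be swapped or rerun independently, which is what allows human editors to intervene after decomposition without regenerating the whole world.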

(Screenshot: Meta’s WorldGen using generative AI to create an interactive 3D world.)

Operational realism of using generative AI to create 3D worlds

Implementing such technology requires an assessment of current infrastructure. WorldGen’s outputs are standard textured meshes. This choice avoids the vendor lock-in associated with proprietary rendering techniques. It means that a logistics firm building a VR training module could theoretically use this tool to prototype layouts rapidly, then hand them over to human developers for refinement.

Creating a fully textured, navigable scene takes roughly five minutes on sufficient hardware. For studios or departments accustomed to multi-day turnaround times for basic environment blocking, this efficiency gain is quite literally world-changing.

However, the technology does have limitations. The current iteration relies on generating a single reference view, which restricts the scale of the worlds it can produce. It cannot yet natively generate sprawling open worlds spanning kilometres without stitching multiple regions together, which risks visual inconsistencies.

The system also currently represents each object independently without reuse, which could lead to memory inefficiencies in very large scenes compared to hand-optimised assets where a single chair model is repeated fifty times. Future iterations aim to address larger world sizes and lower latency.
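The reuse problem the researchers describe is what game engines usually solve with instancing: identical geometry is stored once and referenced many times. A rough sketch of the idea, with hypothetical names throughout:

```python
import hashlib

class AssetLibrary:
    """Deduplicate identical geometry: store one mesh, place many instances."""

    def __init__(self):
        self.meshes = {}      # content hash -> mesh data, stored once
        self.instances = []   # (content hash, transform) pairs

    def place(self, mesh_bytes, transform):
        key = hashlib.sha256(mesh_bytes).hexdigest()
        if key not in self.meshes:
            self.meshes[key] = mesh_bytes
        self.instances.append((key, transform))

# Fifty identical chairs occupy the memory of one mesh plus fifty transforms.
chair = b"chair-geometry"
library = AssetLibrary()
for i in range(50):
    library.place(chair, transform=(i, 0, 0))
```

A generator that emitted content-addressed assets like this could repeat objects cheaply; WorldGen currently generates each object independently, which is the inefficiency the report flags.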

Comparing WorldGen against other emerging technologies

Evaluating this approach against other emerging AI technologies for creating 3D worlds offers clarity. World Labs, a competitor in the space, employs a system called Marble that uses Gaussian splats to achieve high photorealism. While visually striking, these splat-based scenes often degrade in quality when the camera moves away from the centre and can drop in fidelity just 3-5 metres from the viewpoint.

Meta’s choice to output mesh-based geometry positions WorldGen as a tool for functional application development rather than just visual content creation. It supports physics, collisions, and navigation natively—features that are non-negotiable for interactive software. Consequently, WorldGen can generate scenes spanning 50×50 metres that maintain geometric integrity throughout.

For leaders in the technology and creative sectors, the arrival of systems like WorldGen brings exciting new possibilities. Organisations should audit their current 3D workflows to identify where “blockout” and prototyping absorb the most resources. Generative tools are best deployed here to accelerate iteration, rather than attempting to replace final-quality production immediately.

Concurrently, technical artists and level designers will need to transition from placing every vertex manually to prompting and curating AI outputs. Training programmes should focus on “prompt engineering for spatial layout” and editing AI-generated assets for 3D worlds. Finally, while the output is standard, the generation process requires plenty of compute. Assessing on-premise versus cloud rendering capabilities will be necessary for adoption.

Generative 3D serves best as a force multiplier for structural layout and asset population rather than a total replacement for human creativity. By automating the foundational work of building a world, enterprise teams can focus their budgets on the interactions and logic that drive business value.

See also: How the Royal Navy is using AI to cut its recruitment workload


The post WorldGen: Meta reveals generative AI for interactive 3D worlds appeared first on AI News.

Lightweight LLM powers Japanese enterprise AI deployments
https://www.artificialintelligence-news.com/news/lightweight-llm-enterprise-deployment-single-gpu/
Thu, 20 Nov 2025 12:00:00 +0000

Enterprise AI deployment faces a fundamental tension: organisations need sophisticated language models but baulk at the infrastructure costs and energy consumption of frontier systems.

NTT’s recent launch of tsuzumi 2, a lightweight large language model (LLM) running on a single GPU, demonstrates how businesses are resolving this constraint – with early deployments showing performance matching larger models and running at a fraction of the operational cost.

The business case is straightforward. Traditional large language models require dozens or hundreds of GPUs, creating electricity consumption and operational cost barriers that make AI deployment impractical for many organisations.

(GPU Cost Comparison)

For enterprises operating in markets with constrained power infrastructure or tight operational budgets, these requirements eliminate AI as a viable option. NTT’s press release illustrates the practical considerations driving lightweight LLM adoption with Tokyo Online University’s deployment.

The university operates an on-premise platform keeping student and staff data in its campus network – a data sovereignty requirement common in educational institutions and regulated industries.

After validating that tsuzumi 2 handles complex context understanding and long-document processing at production-ready levels, the university deployed it for course Q&A enhancement, teaching material creation support, and personalised student guidance.

The single-GPU operation means the university avoids both capital expenditure for GPU clusters and ongoing electricity costs. More significantly, on-premise deployment addresses data privacy concerns that prevent many educational institutions from using cloud-based AI services that process sensitive student information.

Performance without scale: The technical economics

NTT’s internal evaluation for financial-system inquiry handling showed tsuzumi 2 matching or exceeding leading external models despite dramatically smaller infrastructure requirements. The performance-to-resource ratio determines AI adoption feasibility for enterprises where the total cost of ownership drives decisions.

The model delivers what NTT characterises as “world-top results among models of comparable size” in Japanese language performance, with particular strength in business domains prioritising knowledge, analysis, instruction-following, and safety.

For enterprises operating primarily in Japanese markets, this language optimisation reduces the need to deploy larger multilingual models requiring significantly more computational resources.

Reinforced knowledge in financial, medical, and public sectors – developed based on customer demand – enables domain-specific deployments without extensive fine-tuning.

The model’s RAG (Retrieval-Augmented Generation) and fine-tuning capabilities allow efficient development of specialised applications for enterprises with proprietary knowledge bases or industry-specific terminology where generic models underperform.
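The RAG pattern itself is simple to illustrate: retrieve the most relevant internal documents for a query, then ground the model's prompt in them. The toy sketch below uses plain word overlap in place of the embedding search a production system would use, and none of the names reflect tsuzumi 2's actual interface:

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query -- a stand-in for
    the vector-similarity search a real RAG system would perform."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    # Ground the model in retrieved context rather than its own recall.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Loan applications require two forms of identification.",
    "The cafeteria opens at nine.",
    "Identification documents must be issued within five years.",
]
prompt = build_prompt("What identification do loan applications need?", corpus)
```

Because the model only sees retrieved passages, proprietary knowledge bases stay on-premise and answers can be traced back to source documents.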

Data sovereignty and security as business drivers

Beyond cost considerations, data sovereignty drives lightweight LLM adoption in regulated industries. Organisations handling confidential information face risk exposure when processing data through external AI services subject to foreign jurisdiction.

NTT positions tsuzumi 2 as a “purely domestic model” developed from scratch in Japan, operating on-premises or in private clouds. This addresses concerns prevalent in Asia-Pacific markets about data residency, regulatory compliance, and information security.

FUJIFILM Business Innovation’s partnership with NTT DOCOMO BUSINESS demonstrates how enterprises combine lightweight models with existing data infrastructure. FUJIFILM’s REiLI technology converts unstructured corporate data – contracts, proposals, mixed text and images – into structured information.

Integrating tsuzumi 2’s generative capabilities enables advanced document analysis without transmitting sensitive corporate information to external AI providers. This architectural approach – combining lightweight models with on-premise data processing – represents a practical enterprise AI strategy balancing capability requirements with security, compliance, and cost constraints.

Multimodal capabilities and enterprise workflows

tsuzumi 2 includes built-in multimodal support, handling text, images, and voice in enterprise applications. That matters for business workflows that require AI to process multiple data types without deploying separate specialised models.

Manufacturing quality control, customer service operations, and document processing workflows typically involve text, images, and sometimes voice inputs. Single models handling all three reduce integration complexity compared to managing multiple specialised systems with different operational requirements.

Market context and implementation considerations

NTT’s lightweight approach contrasts with hyperscaler strategies emphasising massive models with broad capabilities. For enterprises with substantial AI budgets and advanced technical teams, frontier models from OpenAI, Anthropic, and Google provide cutting-edge performance.

However, this approach excludes organisations lacking these resources – a significant portion of the enterprise market, particularly in Asia-Pacific regions with varying infrastructure quality.

Regional considerations matter. Power reliability, internet connectivity, data centre availability, and regulatory frameworks vary significantly across markets. Lightweight models enabling on-premise deployment accommodate these variations better than approaches requiring consistent cloud infrastructure access.

Organisations evaluating lightweight LLM deployment should consider several factors:

Domain specialisation: tsuzumi 2’s reinforced knowledge in financial, medical, and public sectors addresses specific domains, but organisations in other industries should evaluate whether available domain knowledge meets their requirements.

Language considerations: Optimisation for Japanese language processing benefits Japanese-market operations but may not suit multilingual enterprises requiring consistent cross-language performance.

Integration complexity: On-premise deployment requires internal technical capabilities for installation, maintenance, and updates. Organisations lacking these capabilities may find cloud-based alternatives operationally simpler despite higher costs.

Performance tradeoffs: While tsuzumi 2 matches larger models in specific domains, frontier models may outperform in edge cases or novel applications. Organisations should evaluate whether domain-specific performance suffices or whether broader capabilities justify higher infrastructure costs.

The practical path forward?

NTT’s tsuzumi 2 deployment demonstrates that sophisticated AI implementation doesn’t require hyperscale infrastructure – at least for organisations whose requirements align with lightweight model capabilities. Early enterprise adoptions show practical business value: reduced operational costs, improved data sovereignty, and production-ready performance for specific domains.

As enterprises navigate AI adoption, the tension between capability requirements and operational constraints increasingly drives demand for efficient, specialised solutions rather than general-purpose systems requiring extensive infrastructure.

For organisations evaluating AI deployment strategies, the question isn’t whether lightweight models are “better” than frontier systems – it’s whether they’re sufficient for specific business requirements while addressing cost, security, and operational constraints that make alternative approaches impractical.

The answer, as Tokyo Online University and FUJIFILM Business Innovation deployments demonstrate, is increasingly yes.

See also: How Levi Strauss is using AI for its DTC-first business model


The post Lightweight LLM powers Japanese enterprise AI deployments appeared first on AI News.

What Europe’s AI education experiments can teach a business
https://www.artificialintelligence-news.com/news/ai-education-the-european-experience/
Wed, 19 Nov 2025 12:30:36 +0000

We’re all chasing talent. It’s become as crucial to success as building amazing products, and a lot of businesses are feeling the squeeze. The problem is that demand for people with AI skills is skyrocketing, but the supply isn’t keeping up. The OECD points this out – lots of us need AI expertise, but very few job postings actually require it.

But there’s a promising trend emerging, and it’s happening in Europe. On the continent and in the UK, some things are happening in AI education – experiments that use AI to change how people learn. These are glimpses into the future workforce, showing us how the next generation will approach problem-solving and collaboration in a world increasingly using AI.

Let’s take a look at a few examples, and examine how they can help businesses rethink their approach to talent.

Training teachers to work with AI – the Manchester story

The University of Manchester is integrating generative AI into how it prepares future educators, training them to use the tools critically, creatively, and thoughtfully, and to combine AI’s suggestions with their students’ knowledge and experience.

That suggests a future where employees aren’t just consumers of training but are comfortable co-creating with AI. Future generations will expect AI assistance in their day-to-day tasks, and the real competitive edge won’t be whether people use AI, but how they use it responsibly and ethically. UNESCO’s take is spot-on, highlighting the enhancement of human capabilities, not their replacement.

Building AI skills from the ground up: AI-ENTR4YOUTH

AI-ENTR4YOUTH is a programme bringing together Junior Achievement Europe and partners in ten European countries. Here AI is embedded in entrepreneurship education, where students use AI tools to tackle real-world problems, with a focus on innovation and European values.

This develops practical AI literacy early on, linking AI with the entrepreneurial mindset: the ability to spot opportunities and test new ideas. Importantly, it’s broadening the pool of AI talent by reaching students who might choose business rather than technical degrees.

The skills gap can be solved. Companies that complain about a lack of AI talent should ask: How can we actively support or emulate programmes like AI-ENTR4YOUTH to build the workforce we need?

Personalised learning & impact: The Social Tides perspective

Social Tides champions education innovators in Europe. Its work highlights projects that use AI to create more tailored learning experiences, particularly for students who need extra support or have diverse learning styles. AI is helping to personalise content, act as a mentor, and build communities around students.

The common thread is human oversight. AI gives recommendations and insight, but humans are still very much in the loop, making judgements and offering support. This aligns with best AI business practice, as leaders try to make learning an integral part of the working day.

Key questions for leaders

What does this mean for decision-makers? Here are a few questions to consider:

  • Learning architecture: Are we embracing AI-assisted, personalised learning paths internally?
  • Talent & pipeline: Are we shaping the future talent pool through partnerships with local schools and universities?
  • Governance & ethics: Do we have clear guidelines for AI use in training, ensuring fairness and transparency?
  • Vendor choices: Are we selecting AI tools that align with our values and pertinent regulations?

Although these educational programmes could be termed experiments, they are a signal of how the future of work might be shaped. Companies that pay attention now will be the ones to secure better talent and build more adaptable, learning-driven organisations.

(Image source: “Laboratory” by ♔ Georgie R is licensed under CC BY-ND 2.0.)



The post What Europe’s AI education experiments can teach a business appeared first on AI News.

AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way
https://www.artificialintelligence-news.com/news/ai-redaction-that-puts-privacy-first-caseguard-studio-leading-the-way/
Wed, 08 Oct 2025 09:07:44 +0000

Law enforcement, law firms, hospitals, and financial institutions are asked every day to release records, which can contain highly sensitive details – including addresses, social security numbers, medical diagnoses, evidence footage, and children’s identities.

To meet compliance and security requirements, staff spend hundreds of hours manually redacting sensitive information, yet when that process goes wrong, there can be costly consequences. Last year, healthcare company Advanced was fined £6 million for losing patient records that, among other details, contained information about how to gain entry to the homes of 890 care receivers. Even the smallest oversights can create unpleasant headlines and catastrophic fines.

This is the reality of modern data handling: leaks can be catastrophic, and compliance frameworks like GDPR, HIPAA, and FERPA, plus FOIA requests, require more vigilance than manual redaction can provide. What organizations need is not more staff to ensure proper redaction, but tools that achieve it quickly, reliably, and securely.

CaseGuard Studio, a US-based AI redaction & investigation platform, has built software that automates this manual work with 98% accuracy. It can process thousands of files in minutes, working on data that’s kept securely on-premises of any file type, including video, audio, documents, and images.

Why Manual Redaction No Longer Works

Redaction is not new, but the tools most people reach for were not built for the complexity of today’s compliance requirements. Adobe Acrobat, for example, offers text redaction but requires manual work on each document. Video editors such as Adobe Premiere demand frame-by-frame subject tracking for video redaction, which is slow and impractical. These solutions provide only limited capability and were never designed for departments that process a multitude of redactions on a weekly basis.

CaseGuard Studio, by contrast, was purpose-built for just this challenge. It can detect 12 categories of PII (personally-identifiable information) in video and images, such as faces, license plates, notepads, and more. It tracks and redacts all PII without needing manual frame-by-frame intervention.

For audio and documents, CaseGuard Studio supports over 30 PII types, like names, phone numbers, and addresses. Custom keywords, phrases, or sentences can be auto-detected and redacted directly from thousands of documents and transcripts, streamlining compliance in ways manual tools can’t match. It transcribes recordings with high accuracy and can translate to and from 100+ languages, so it can redact sensitive terms in multilingual content.
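CaseGuard’s detection pipeline is proprietary, but the general idea behind keyword- and pattern-based text redaction can be sketched in a few lines of Python. This is a simplified illustration only – the patterns, labels, and `redact` helper below are assumptions for demonstration, not the product’s actual logic, which layers trained models on top of rules like these:

```python
import re

# Simplified sketch of keyword- and pattern-based text redaction.
# The patterns and labels below are illustrative assumptions, not
# the detection logic of any real product.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text, keywords=None):
    """Replace matched PII patterns and custom keywords with markers."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    for kw in keywords or []:
        text = re.sub(re.escape(kw), "[REDACTED:keyword]", text,
                      flags=re.IGNORECASE)
    return text

print(redact("Call John Doe at 555-867-5309, SSN 123-45-6789.",
             keywords=["John Doe"]))
# -> Call [REDACTED:keyword] at [REDACTED:phone], SSN [REDACTED:ssn].
```

Regex rules alone miss context-dependent PII such as names or addresses, which is why production systems pair patterns with trained entity-recognition models and human review.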

What once took days of human labor can now happen in minutes. CaseGuard Studio automates redaction work with 98% accuracy, up to 30 times faster than manual methods, and because it runs fully on-premises, data never leaves the device.

What to Ask When Choosing Redaction Software

For organizations evaluating redaction software, the decision often comes down to a handful of critical questions that determine whether a platform can deliver on both compliance and efficiency. The following questions are central to making the right choice.

  • Can the software handle every file type we work with? From scanned forms and handwritten notes to video, audio, and still images, organizations in sensitive sectors deal with more than PDFs.
  • Is the platform fully automated? If redaction still means blacking out text with a Sharpie or scrubbing video frame by frame, the process is slow and prone to error. Full automation ensures accuracy and frees staff for higher-impact work.
  • Does the software ensure data never leaves your environment? On-premise deployment means sensitive files are processed locally, so nothing is exposed to third-party servers or cloud risks.
  • Does the pricing stay predictable as you scale? Per-file or per-minute pricing quickly becomes unsustainable as workloads grow. Look for a flat subscription with unlimited redaction, so costs stay predictable no matter how much data you process.

Evaluating CaseGuard Studio Against the Four Redaction Essentials

When assessed against these requirements, CaseGuard Studio was the only platform in our evaluation that consistently delivered across all four redaction essentials.

1. Auto-redact files from any source

From text documents and scanned forms to video, audio, images, and even handwriting, redaction has to cover every format where sensitive information might appear. Miss one identifiable feature – a face in a crowd or an unredacted license plate – and a single oversight can be the difference between full compliance and a lawsuit. CaseGuard Studio automatically detects and redacts sensitive information across all these file types within a single platform, supporting full compliance.

2. Automated bulk redaction at speed and scale

Thousands of files can be redacted in bulk, turning weeks of manual effort into minutes of processing. CaseGuard Studio handles workloads up to 32x faster than manual methods, with 98% accuracy, giving organizations the speed and scalability to meet growing compliance demands.

3. Your data, your control

CaseGuard Studio runs fully on-premises, within your secure environment, including air-gapped systems that are completely isolated from external networks. This ensures organizations retain full control of their data, with nothing exposed to third-party servers or cloud risks.

4. Unlimited redaction, no pay-per-file fees

Pay-per-file pricing quickly adds up, making every additional redaction more expensive. CaseGuard Studio offers predictable pricing under a flat subscription with unlimited redaction, so costs remain the same no matter how heavy the redaction load is.

Final Thoughts

Over the course of our evaluation, we compared methods and platforms ranging from manual redaction and legacy PDF editors to newer AI-driven tools that have appeared in the last few years. Most delivered partial solutions: some handled written documents well but failed on audio, while others blurred faces in video but weren’t practical to use at scale. Cloud-only options raised sovereignty and compliance concerns that, for many users, would count them out of the running entirely.

CaseGuard Studio was the only platform that consistently met all four requirements detailed above. It supports the widest range of file types, from body-cam video to scanned or handwritten forms.

Audio and video are probably the most difficult formats to redact, especially at scale. Here, CaseGuard wins our vote with its AI-powered smarts. It runs fully on-premises, keeps sensitive files under organizational control, and its local AI models are refined with each version release.

At a time when many cloud redaction software licensing models drive up costs as workloads grow, CaseGuard’s flat pricing offers a refreshing change – predictable, transparent, and sustainable.

For any organization facing rising compliance demands and ever-larger volumes of sensitive data, CaseGuard Studio is well worth a closer look. Click here to book a consultation.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way appeared first on AI News.

Teachers in England given the green-light to use AI https://www.artificialintelligence-news.com/news/teachers-in-england-given-the-green-light-to-use-ai/ Wed, 11 Jun 2025 14:38:26 +0000 https://www.artificialintelligence-news.com/?p=106772 Teachers in England have been given the all-clear to use AI to help them in low-level tasks that are part of their duties, the BBC reports. Guidance from the Department for Education (DfE) says AI can be used by school teachers in England, but it should only be for ‘low stakes’ tasks, such as writing […]

The post Teachers in England given the green-light to use AI appeared first on AI News.

Teachers in England have been given the all-clear to use AI to help them in low-level tasks that are part of their duties, the BBC reports.

Guidance from the Department for Education (DfE) says AI can be used by school teachers in England, but it should only be for ‘low stakes’ tasks, such as writing letters to parents and marking homework.

The decision to approve the use of the technology follows the results of a survey of teachers in 2023, undertaken on behalf of the DfE. In it, a majority of respondents were said to be “broadly optimistic” about using AI in the course of their jobs. At the time, a spokesperson from Teacher Tapp (the company behind the software used to conduct the survey) said: “It’s really quite normal now as a maths teacher, that you don’t mark maths homework any more … because we have such chronic shortages of maths teachers that you know nobody really feels aggrieved.”

Responses to the 2023 survey quoted teachers saying AI can be quite useful when they need to source appropriate teaching materials, and in the course of writing reports to parents on the performance and behaviour of their children.

As part of today’s announcement, the DfE said that teachers using AI will help reduce the burden of unpaid overtime teachers work, and can lead to improved work-life balance and job satisfaction.

By allowing staff to use AI tools, it’s hoped that the statistics around teachers’ mental health in general should improve (36% of teachers have experienced ‘burn-out’ according to the charity Education Support [PDF]), and will have the effect of attracting more graduates to the profession.

Part of the daily stress many teachers suffer is caused by a shortage of qualified teachers, a situation that the use of AI may help to ease. Although the UK government has pointed to a greater number of teachers being employed across the UK than a decade ago, the ratio of pupils to teachers continues to widen as the population grows. Teaching classes of 33 or more is commonplace in English state schools, and over a million pupils in the UK are taught in classes of more than 30.

The attrition rate for qualified teachers in the UK is around 8.8% according to SecEd, an industry website aimed at teachers working in secondary schools (the 11-18 age group). SecEd has also stated that the number of open positions in the sector climbed from three to six per 1,000 teachers in the 12 months from 2022.

Due to budgetary constraints on local authorities and schools, open teaching positions are often filled by short-term supply (substitute) teachers sourced through employment agencies, a practice that costs schools significantly more than paying permanent salaried staff.

In line with today’s announcement, a post on the Education Hub blog published by the UK government states that “teachers can use AI to help with things like planning lessons, creating resources, marking work, giving feedback, and handling administrative tasks.” It also gives the proviso of it being up to the individual teacher to “check that anything AI generates is accurate and appropriate – the final responsibility always rests with them and their school or college.”

The DfE has also given the government’s seal of approval for the use of AI by companies that conduct curriculum and assessment reviews of UK schools, the outcomes of which feed into schools’ rankings in the so-called league tables, alongside the classifications given to schools by Ofsted (Office for Standards in Education) such as ‘special measures’, ‘good’, or ‘outstanding’. The approval for the use of AI in this context comes despite opposition from teaching unions.

The longer-term issue that has pervaded the English school system for several decades is not the sector’s use of technology, but its chronic under-funding. The NAHT (National Association of Head Teachers) states that between the 2009-10 and 2021-22 school years, capital spending on schools saw an inflation-adjusted reduction of 29%. The Institute for Fiscal Studies has said that school spending per pupil in England saw a real-terms decrease of 9% over the same period.

Equipping teaching professionals with technology tools may help teachers with some of the burden of administration placed on them, although whether marking homework can be considered what the Department for Education terms ‘low stakes’ is debatable.

Investment in school-age children in the form of education budget increases is expensive, while subscriptions to AI models can be as little as a few dollars a month. On paper, the lure of AI helping teachers manage their workloads a little more efficiently must be attractive to DfE officials. But what is apparent is the consistently low value placed on childhood education by successive UK governments.

Deciding to allow AI to help staff in a criminally under-funded education sector is largely irrelevant and will have little impact on the quality of education offered to another generation of English children.

(Image source: “Village School Classroom” by Thomas Galvez is licensed under CC BY 2.0.)

UAE to teach its children AI https://www.artificialintelligence-news.com/news/uae-to-teach-its-children-ai/ Wed, 07 May 2025 15:40:19 +0000 https://www.artificialintelligence-news.com/?p=106353 The United Arab Emirates looks set to integrating AI education in its schools’ curricula, meaning all children from kindergarten to high school will learn about the technology, how it can be used day-to-day, and the best ways to implement the various types of models. There will also be classes covering the ethics of AI, something […]

The post UAE to teach its children AI appeared first on AI News.

The United Arab Emirates looks set to integrate AI education into its schools’ curricula, meaning all children from kindergarten to high school will learn about the technology, how it can be used day-to-day, and the best ways to implement the various types of models.

There will also be classes covering the ethics of AI, something that the country’s young might eventually teach to the world, according to OpenAI’s Sam Altman, who once termed the UAE the world’s ‘sandbox’ in which issues around AI such as governance could be thrashed out, and from which the rest of the world can make its regulatory models.

The new curriculum will include areas such as data and algorithms, software applications, the ethics of AI, real-world applications of the technology, policies, and social engagement. All modules have been designed to be age-appropriate, and will be incorporated into the standard curriculum, rather than being taught after-hours.

The rollout into schools is part of the country’s National Artificial Intelligence Strategy 2031, which aims to position the UAE as a global leader in AI capabilities – including education – and is part of wider efforts to diversify the country’s economy away from its core business of oil production and sale.

In addition to the changes to school timetables, the National AI Strategy also includes funds to promote AI in research, with institutions like the American University of Sharjah and United Arab Emirates University committed to the technology’s use in the higher and postgraduate sectors. There are also public awareness and learning initiatives springing up as the country builds momentum in its bid to become the nation most committed to the possibilities of AI in the modern world.

An AI investment fund is expected to reach a value of $100 billion in the next few years, according to people familiar with the project. The country also plans to spend around $1.4 trillion in the US over the next ten years on energy generation, semiconductor manufacture, and AI infrastructure. Investments abroad will help ensure the UAE retains positive relations with elements of its AI-related supply chain.

US President Trump is thought to be considering easing tariffs and other restrictions on exports of Nvidia hardware to the UAE, and has planned a visit to the region later this month, when he will also visit Saudi Arabia and Qatar.

The UAE has actively encouraged investment in infrastructure from Chinese manufacturers such as Huawei, and is seen as something of an impartial middle-ground in the ongoing trade war between the Western and Eastern industrial and technology blocs. The wider region is home to some of the most affluent nations, so any curbs on trade tend to have negative effects on vendors based on both sides of the divide.

See also: Conversations with AI – Education

(Image source: “Dubai” by Eugene Kaspersky is licensed under CC BY-NC-SA 2.0.)

Conversations with AI: Education https://www.artificialintelligence-news.com/news/conversations-with-ai-education-implications-and-future/ Thu, 01 May 2025 10:27:00 +0000 https://www.artificialintelligence-news.com/?p=106152 How can AI be used in education? An ethical debate, with an AI

The post Conversations with AI: Education appeared first on AI News.

The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI, as arguably the most powerful tool humanity has created in the last few years, is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are features of an education system, its nuts and bolts. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and humans can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it felt were its clear advantages, which seemed to be systemic, rather than revolutionary. The aspect of personalised learning intrigued us – after all, doing things fast and at scale is what software and computers are good at.

We asked: How much data is needed to personalise learning effectively?

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher is able to spend more time guiding, mentoring, and cultivating critical thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom. In broad terms, from part-time administrator, part-time teacher, to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, critical thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. This is our opportunity to modernise classrooms – and to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI in education: Balancing promises and pitfalls https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/ Mon, 28 Apr 2025 12:27:09 +0000 https://www.artificialintelligence-news.com/?p=106158 The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges. There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated […]

The post AI in education: Balancing promises and pitfalls appeared first on AI News.

The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are now using AI, staff often aren’t getting the training they need.

Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together to ensure that AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude


Red Hat on open, small language models for responsible, practical AI https://www.artificialintelligence-news.com/news/red-hat-on-open-small-language-models-for-responsible-practical-ai/ Tue, 22 Apr 2025 07:49:15 +0000 https://www.artificialintelligence-news.com/?p=105184 As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise. The expectations of results from AI are balanced at present with real-world […]

The post Red Hat on open, small language models for responsible, practical AI appeared first on AI News.

As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise.

Expectations of what AI can deliver are currently being weighed against real-world results. There also remains a good deal of suspicion about the technology, balanced by those who are embracing it even at this nascent stage. Meanwhile, the closed nature of the well-known LLMs is being challenged by more open models like Llama, DeepSeek, and Baidu’s recently-released Ernie X1.

In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for “responsible AI”: a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics. 

As the company that’s demonstrated the viability of an economically-sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, the CTO for EMEA at Red Hat, about the organisation’s efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that’s responsible, sustainable, and as transparent as possible. 

Julio underlined how much education is still needed in order for us to more fully understand AI, stating, “Given the significant unknowns about AI’s inner workings, which are rooted in complex science and mathematics, it remains a ‘black box’ for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments.”

There are also issues with language (European and Middle-Eastern languages are very much under-served), data sovereignty, and fundamentally, trust. “Data is an organisation’s most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies.” 

The Red Hat response 

Red Hat’s response to global demand for AI has been to pursue what it feels will bring most benefit to end-users, and remove many of the doubts and caveats that are quickly becoming apparent when the de facto AI services are deployed. 

One answer, Julio said, is small language models, running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. There are smaller cloud providers that can be utilised to offload some compute, but the key is having the flexibility and freedom to choose to keep business-critical information in-house, close to the model, if desired. That’s important, because information in an organisation changes rapidly. “One challenge with large language models is they can get obsolete quickly because the data generation is not happening in the big clouds. The data is happening next to you and your business processes,” he said. 

There’s also the cost. “Your customer service querying an LLM can present a significant hidden cost – before AI, you knew that when you made a data query, it had a limited and predictable scope. Therefore, you could calculate how much that transaction could cost you. In the case of LLMs, they work on an iterative model. So the more you use it, the better its answer can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that before was a single transaction can now become a hundred, depending on who and how is using the model. When you are running a model on-premise, you can have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query.”
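Julio’s point can be made concrete with a back-of-the-envelope comparison: per-query pricing scales linearly with usage, while on-premises serving is dominated by a fixed infrastructure cost. All figures below are hypothetical assumptions for illustration, not real vendor pricing:

```python
# Illustrative cost comparison between pay-per-query LLM access and
# fixed on-premises serving. All figures are hypothetical assumptions.
def api_cost(queries, cost_per_query=0.02):
    """Per-query pricing: cost scales linearly with usage."""
    return queries * cost_per_query

def on_prem_cost(queries, monthly_infra=2000.0):
    """On-premises serving: cost dominated by fixed infrastructure."""
    return monthly_infra  # roughly flat regardless of query volume

for q in (10_000, 100_000, 1_000_000):
    print(f"{q:>9} queries/month: API ${api_cost(q):>10,.2f} "
          f"vs on-prem ${on_prem_cost(q):>8,.2f}")
```

With these assumed numbers the two approaches break even at around 100,000 queries a month; beyond that, the fixed-cost model wins – which is the predictability Julio describes.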

Organisations needn’t brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat’s current work is optimising models (in the open, of course) to run on more standard hardware. It’s possible because the specialist models that many businesses will use don’t need the huge, general-purpose data corpus that has to be processed at high cost with every query. 

“A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge or in the cloud,” Julio said. 
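vLLM serves models behind an OpenAI-compatible HTTP API, so a deployed model can be queried with a plain JSON POST. The sketch below only builds the request body; the model name is a placeholder, and the endpoint shown in the comment is vLLM’s default for a locally running server:

```python
# Build a request body for vLLM's OpenAI-compatible /v1/completions
# endpoint. The model name is a placeholder for whatever you serve.

import json

def completion_request(model: str, prompt: str, max_tokens: int = 64) -> str:
    """Serialise an OpenAI-style completion request for a vLLM server."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return json.dumps(payload)

body = completion_request("my-org/pruned-slm", "Summarise today's tickets:")
print(body)
# POST this to http://localhost:8000/v1/completions on a running vLLM server.
```

Because the API shape is standard, the same client code works whether the model runs locally, at the edge, or in a cloud, which is the portability Julio highlights.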

Keeping it small 

Using and referencing local data pertinent to the user means that the outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn’t be viable using the English-centric household name LLMs. 

There are a couple of other issues, too, that early-adopter organisations have found in practical, day-to-day use of LLMs. The first is latency, which can be problematic in time-sensitive or customer-facing contexts. Having focused resources and relevantly tailored results just a network hop or two away makes sense. 

Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. “It is going to be critical for everybody,” Julio said. “We are building capabilities to democratise AI, and that’s not only publishing a model, it’s giving users the tools to be able to replicate them, tune them, and serve them.” 

Red Hat recently acquired Neural Magic to help enterprises scale AI more easily, improve inference performance, and provide even greater choice and accessibility in how they build and deploy AI workloads, with the vLLM project for open model serving. Red Hat, together with IBM Research, also released InstructLab to open the door to would-be AI builders who aren’t data scientists but who have the right business knowledge. 

There’s a great deal of speculation about whether, or when, the AI bubble might burst, but such conversations tend to gravitate towards the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI has a future in a use case-specific and inherently open source form, a technology that will make business sense and that will be available to all. To quote Julio’s boss, Matt Hicks (CEO of Red Hat), “The future of AI is open.” 

Supporting Assets: 

Tech Journey: Adopt and scale AI

The post Red Hat on open, small language models for responsible, practical AI appeared first on AI News.

Web3 tech helps instil confidence and trust in AI https://www.artificialintelligence-news.com/news/web3-tech-helps-instil-confidence-and-trust-in-ai/ Wed, 09 Apr 2025 13:47:57 +0000 https://www.artificialintelligence-news.com/?p=105268

The promise of AI is that it’ll make all of our lives easier. And with great convenience comes the potential for serious profit. The United Nations thinks AI could be a $4.8 trillion global market by 2033 – about as big as the German economy.

But forget about 2033: in the here and now, AI is already fueling transformation in industries as diverse as financial services, manufacturing, healthcare, marketing, agriculture, and e-commerce. Whether it’s autonomous algorithmic ‘agents’ managing your investment portfolio or AI diagnostics systems detecting diseases early, AI is fundamentally changing how we live and work.

But cynicism is snowballing around AI – we’ve seen Terminator 2 enough times to be extremely wary. The question worth asking, then, is how do we ensure trust as AI integrates deeper into our everyday lives?

The stakes are high: A recent report by Camunda highlights an inconvenient truth: most organisations (84%) attribute regulatory compliance issues to a lack of transparency in AI applications. If companies can’t view algorithms – or worse, if the algorithms are hiding something – users are left completely in the dark. Add the factors of systemic bias, untested systems, and a patchwork of regulations and you have a recipe for mistrust on a large scale.

Transparency: Opening the AI black box

For all their impressive capabilities, AI algorithms are often opaque, leaving users ignorant of how decisions are reached. Is that AI-powered loan request being denied because of your credit score – or due to an undisclosed company bias? Without transparency, AI can pursue its own goals, or those of its owner, while the user remains unaware, still believing it’s doing their bidding.

One promising solution would be to put the processes on the blockchain, making algorithms verifiable and auditable by anyone. This is where Web3 tech comes in. We’re already seeing startups explore the possibilities. Space and Time (SxT), an outfit backed by Microsoft, offers tamper-proof data feeds built on a verifiable compute layer, so SxT can ensure that the information on which AI relies is real, accurate, and untainted by a single entity.

Space and Time’s novel Proof of SQL prover guarantees queries are computed accurately against untampered data, proving computations over blockchain histories much faster than state-of-the-art zkVMs and coprocessors. In essence, SxT helps establish trust in AI’s inputs without dependence on a centralised power.
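The underlying idea of tamper-evident inputs can be illustrated with a toy hash commitment. This is a deliberate simplification, not Space and Time’s Proof of SQL prover: publish a digest of the data an AI consumes, and anyone can later detect whether the inputs were altered:

```python
# Toy tamper-evidence sketch: commit to a dataset's hash so later
# alterations are detectable. Real systems prove the *query* as well,
# which this simplification does not.

import hashlib

def commit(rows: list[str]) -> str:
    """Return a SHA-256 commitment over the dataset."""
    return hashlib.sha256("\n".join(rows).encode()).hexdigest()

data = ["alice,720", "bob,680"]
published = commit(data)  # in a Web3 setting this digest would live on-chain

assert commit(["alice,720", "bob,680"]) == published  # untampered: verifies
assert commit(["alice,500", "bob,680"]) != published  # tampered: detected
print("commitment verified")
```

A hash commitment only proves the data didn’t change; systems like Proof of SQL go further and prove the computation over that data was performed correctly.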

Proving AI can be trusted

Trust isn’t a one-and-done deal; it’s earned over time, analogous to a restaurant maintaining standards to retain its Michelin star. AI systems must be assessed continually for performance and safety, especially in high-stakes domains like healthcare or autonomous driving. A second-rate AI prescribing the wrong medication or hitting a pedestrian is more than a glitch – it’s a catastrophe.

This is the beauty of open-source models and on-chain verification via immutable ledgers, with built-in privacy protections assured by cryptography such as Zero-Knowledge Proofs (ZKPs). Trust isn’t the only consideration, however: users must know what AI can and can’t do in order to set their expectations realistically. If a user believes AI is infallible, they’re more likely to trust flawed output.

To date, the AI education narrative has centred on its dangers. From now on, we should try to improve users’ knowledge of AI’s capabilities and limitations, to ensure they are empowered, not exploited.

Compliance and accountability

As with cryptocurrency, the word compliance comes up often when discussing AI. AI doesn’t get a pass under the law and various regulations. How should a faceless algorithm be held accountable? The answer may lie in the modular blockchain protocol Cartesi, which ensures AI inference happens on-chain.

Cartesi’s virtual machine lets developers run standard AI libraries – like TensorFlow, PyTorch, and Llama.cpp – in a decentralised execution environment, making it suitable for on-chain AI development. In other words, a blend of blockchain transparency and computational AI.

Trust through decentralisation

The UN’s recent Technology and Innovation Report shows that while AI promises prosperity and innovation, its development risks “deepening global divides.” Decentralisation could be the answer, one that helps AI scale and instils trust in what’s under the hood.

(Image source: Unsplash)

The post Web3 tech helps instil confidence and trust in AI appeared first on AI News.

Navigating the EU AI Act: Implications for UK businesses https://www.artificialintelligence-news.com/news/navigating-the-eu-ai-act-implications-for-uk-businesses/ Mon, 07 Apr 2025 07:10:00 +0000 https://www.artificialintelligence-news.com/?p=105005

The EU AI Act, which came into effect on August 1, 2024, marks a turning point in the regulation of artificial intelligence. Aimed at governing the use and development of AI, it imposes rigorous standards for organisations operating within the EU or providing AI-driven products and services to its member states. Understanding and complying with the Act is essential for UK businesses seeking to compete in the European market.

The scope and impact of the EU AI Act

The EU AI Act introduces a risk-based framework that classifies AI systems into four categories: minimal, limited, high, and unacceptable risk. High-risk systems, which include AI used in healthcare diagnostics, autonomous vehicles, and financial decision-making, face stringent regulations. This risk-based approach ensures that the level of oversight corresponds to the potential impact of the technology on individuals and society.
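The four-tier framework can be sketched as a simple lookup. The category assignments below are illustrative examples drawn from this article, not a legal assessment; a real classification must follow the Act’s full criteria:

```python
# Hedged sketch of the EU AI Act's four-tier risk classification.
# Use cases and their tiers are illustrative, not legal advice.

RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

HIGH_RISK_USES = {  # examples named in the article
    "healthcare diagnostics",
    "autonomous vehicles",
    "credit scoring",
}

def classify(use_case: str) -> str:
    """Map a use case to a risk tier (toy default: minimal)."""
    if use_case in HIGH_RISK_USES:
        return "high"
    return "minimal"  # a real assessment applies the Act's full criteria

print(classify("credit scoring"))   # → high
print(classify("spam filtering"))   # → minimal
```

The practical value of even a toy mapping like this is forcing an organisation to enumerate its AI use cases before the regulator does.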

For UK businesses, non-compliance with these rules is not an option. Organisations must ensure their AI systems align with the Act’s requirements or risk hefty fines, reputational damage, and exclusion from the lucrative EU market. The first step is to evaluate how their AI systems are classified and adapt operations accordingly. For instance, a company using AI to automate credit scoring must ensure its system meets transparency, fairness, and data privacy standards.

Preparing for the UK’s next steps

While the EU AI Act directly affects UK businesses trading with the EU, the UK is also likely to implement its own AI regulations. The recent King’s Speech highlighted the government’s commitment to AI governance, focusing on ethical AI and data protection. Future UK legislation will likely mirror aspects of the EU framework, making it essential for businesses to proactively prepare for compliance in multiple jurisdictions.

The role of ISO 42001 in ensuring compliance

International standards like ISO 42001 provide a practical solution for businesses navigating this evolving regulatory landscape. As the global benchmark for AI management systems, ISO 42001 offers a structured framework to manage the development and deployment of AI responsibly.

Adopting ISO 42001 enables businesses to demonstrate compliance with EU requirements while fostering trust among customers, partners, and regulators. Its focus on continuous improvement ensures that organisations can adapt to future regulatory changes, whether from the EU, UK, or other regions. Moreover, the standard promotes transparency, safety, and ethical practices, which are essential for building AI systems that are not only compliant but also aligned with societal values.

Using AI as a catalyst for growth

Compliance with the EU AI Act and ISO 42001 isn’t just about avoiding penalties; it’s an opportunity to use AI as a sustainable growth and innovation driver. Businesses prioritising ethical AI practices can gain a competitive edge by enhancing customer trust and delivering high-value solutions.

For example, AI can revolutionise patient care in the healthcare sector by enabling faster diagnostics and personalised treatments. By aligning these technologies with ISO 42001, organisations can ensure their tools meet the highest safety and privacy standards. Similarly, financial firms can harness AI to optimise decision-making processes while maintaining transparency and fairness in customer interactions.

The risks of non-compliance

Recent incidents, such as AI-driven fraud schemes and cases of algorithmic bias, highlight the risks of neglecting proper governance. The EU AI Act directly addresses these challenges by enforcing strict guidelines on data usage, transparency, and accountability. Failure to comply risks significant fines and undermines stakeholder confidence, with long-lasting consequences for an organisation’s reputation.

The MOVEit and Capita breaches serve as stark reminders of the vulnerabilities associated with technology when governance and security measures are lacking. For UK businesses, robust compliance strategies are essential to mitigate such risks and ensure resilience in an increasingly regulated environment.

How UK businesses can adapt

1. Understand the risk level of AI systems: Conduct a comprehensive review of how AI is used within the organisation to determine risk levels. This assessment should consider the impact of the technology on users, stakeholders, and society.

2. Update compliance programs: Align data collection, system monitoring, and auditing practices with the requirements of the EU AI Act.

3. Adopt ISO 42001: Implementing the standard provides a scalable framework to manage AI responsibly, ensuring compliance while fostering innovation.

4. Invest in employee education: Equip teams with the knowledge to manage AI responsibly and adapt to evolving regulations.

5. Leverage advanced technologies: Use AI itself to monitor compliance, identify risks, and improve operational efficiency.
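Steps 1 and 2 above might start with something as simple as an internal register of AI systems and their audit status. Everything here (names and fields alike) is a hypothetical sketch, not a prescribed compliance format:

```python
# Illustrative AI-system register: record each system's risk tier and
# audit status so compliance gaps are visible at a glance.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk: str        # minimal / limited / high / unacceptable
    last_audit: str  # ISO date, "" if never audited

def needs_review(systems: list[AISystem]) -> list[str]:
    """High-risk systems with no recorded audit need attention first."""
    return [s.name for s in systems if s.risk == "high" and not s.last_audit]

register = [
    AISystem("credit-scoring", "high", ""),
    AISystem("chat-faq", "limited", "2024-11-02"),
]
print(needs_review(register))  # → ['credit-scoring']
```

A register like this also feeds the auditing and monitoring practices that the EU AI Act expects for high-risk systems.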

The future of AI regulation

As AI becomes an integral part of business operations, regulatory frameworks will continue to evolve. The EU AI Act will likely inspire similar legislation worldwide, creating a more complex compliance landscape. Businesses that act now to adopt international standards and align with best practices will be better positioned to navigate these changes.

The EU AI Act is a wake-up call for UK businesses to prioritise ethical AI practices and proactive compliance. By implementing tools like ISO 42001 and preparing for future regulations, organisations can turn compliance into an opportunity for growth, innovation, and resilience.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Navigating the EU AI Act: Implications for UK businesses appeared first on AI News.
