Field of Interest

Kea - a curious and intelligent parrot from the South Island of New Zealand. © J. Frochte

My research connects machine learning methodology with real-world applicability. A recurring theme is supporting organisations in their digital transformation — whether through AI methods that grow with changing requirements, models that explain their decisions, data-driven teaching innovation, or practical AI deployment in industry. Published papers, including preprints, are provided here; information and code concerning my books can be found here.

Applied AI

LLMs · Industry 4.0 · Digital Twins · Robotics · AI for SMEs

Bringing AI into practice — including where standard approaches fall short. We work on locally hosted Large Language Models for data sovereignty, AI-driven optimisation in manufacturing, and autonomous systems from visual SLAM to LLM-based navigation. Many real-world problems involve limited data, ill-posed formulations, or a need for domain knowledge; we develop methods such as Physics-Informed Neural Networks, patch-based regression, and one-shot identification specifically for these scenarios.
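To illustrate the Physics-Informed idea in miniature, here is a hedged NumPy sketch (a toy, not taken from our papers): a candidate solution is scored by a data-fit term plus the residual of the governing equation, here the simple ODE u′ = −u, with the derivative approximated by finite differences instead of automatic differentiation.

```python
import numpy as np

def pinn_style_loss(u, x, data_x, data_u, w_phys=1.0):
    """Composite loss in the spirit of Physics-Informed Neural Networks:
    a data-fit term plus the residual of u'(x) = -u(x), approximated
    with finite differences on the grid x."""
    du = np.gradient(u(x), x)            # numerical derivative on the grid
    physics_residual = du + u(x)         # residual of u' + u = 0
    data_residual = u(data_x) - data_u   # misfit at the (few) observation points
    return np.mean(data_residual**2) + w_phys * np.mean(physics_residual**2)

x = np.linspace(0.0, 2.0, 200)
data_x = np.array([0.0, 0.5, 1.0])       # sparse observations, as in low-data settings
data_u = np.exp(-data_x)

loss_true  = pinn_style_loss(lambda t: np.exp(-t), x, data_x, data_u)
loss_wrong = pinn_style_loss(lambda t: 1.0 - 0.5 * t, x, data_x, data_u)
print(loss_true, loss_wrong)
```

The physics term lets the loss reject candidates that fit the few data points tolerably but violate the equation — exactly the lever that makes low-data regimes tractable.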

Green AI

Continual Learning · Resource-Efficient Models

How can AI models learn new tasks without retraining from scratch? Continual Learning avoids catastrophic forgetting and reduces computational cost, CO₂ footprint, and the need to store old training data — directly relevant for GDPR compliance. Our methods range from sparse network expansion (SECL/eSECL) to gradient boosting with neural networks (MANN) and tree-structured architectures. The goal: AI systems that grow with changing requirements instead of being rebuilt every time.
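To make catastrophic forgetting concrete, a toy sketch (assuming a one-parameter linear model and a generic EWC-style quadratic penalty — not SECL or MANN themselves): naive fine-tuning on a new task destroys performance on the old one, while anchoring the update to the previous weights retains it, without storing the old data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task A: y = 2x; Task B: y = -x (a conflicting objective).
x_a = rng.uniform(-1, 1, 50); y_a = 2.0 * x_a
x_b = rng.uniform(-1, 1, 50); y_b = -1.0 * x_b

def ridge_towards(x, y, w_anchor, lam):
    """Least squares with a quadratic penalty lam * (w - w_anchor)**2,
    an EWC-style regulariser that discourages drifting from old weights."""
    return (x @ y + lam * w_anchor) / (x @ x + lam)

w_a     = ridge_towards(x_a, y_a, 0.0, 0.0)    # learn task A first (plain least squares)
w_naive = ridge_towards(x_b, y_b, w_a, 0.0)    # fine-tune on B: forgets A
w_cl    = ridge_towards(x_b, y_b, w_a, 100.0)  # penalised update: retains most of A

err_a = lambda w: np.mean((w * x_a - y_a) ** 2)  # error on the old task
print(err_a(w_naive), err_a(w_cl))
```

The penalty weight trades plasticity against stability; our actual methods sidestep this trade-off differently, e.g. by expanding sparse capacity for new tasks.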

Responsible & Trustworthy AI

Explainable AI · Interpretability · EU AI Act

AI systems that make decisions need to explain them — increasingly also by law, as the EU AI Act raises the bar for transparency and accountability. We develop architectures where explainability is built in from the start, not added as an afterthought. Capsule networks, additive neural network ensembles, and sparse architectures provide inherent transparency. This work is tightly linked to our Continual Learning research: many of our CL methods are explainable by design. We also study the gap in explainability across application domains, from industrial AI to educational data mining.
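The inherent transparency of additive architectures can be shown with a minimal sketch (a plain linear model standing in for our additive neural network ensembles; the data is synthetic): because the prediction is a sum of per-feature terms, every single prediction decomposes exactly into feature contributions — no post-hoc explainer needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the target depends additively on two features.
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1]

# Fit the additive (here: linear) model by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def explain(x):
    """Per-feature contributions; their sum IS the prediction."""
    return w * x

sample = X[0]
contrib = explain(sample)
print("contributions:", contrib, "prediction:", contrib.sum())
```

In the ensemble case each summand is a small network rather than a single weight, but the decomposition of a prediction into named, inspectable parts works the same way.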

Educational Data Mining & Learning Analytics

Student Success Prediction · Gamification · Learning Design

Data-driven approaches to improve higher education outcomes: predicting dropout risks from anonymised exam data, identifying success factors in e-learning exercises, and designing gamified math environments that measurably increase motivation. A key concern is the lack of explainability in modern EDM — most deep-learning-based approaches remain black boxes despite ethical and regulatory demands. Our work combines practical tool development (open-source, LMS-integrable) with research on making educational AI more trustworthy and actionable for counsellors.
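As a sketch of the interpretable end of this spectrum (synthetic data and hypothetical feature names, not our actual tooling): a logistic regression whose coefficients state directly how each exam-related feature shifts the predicted dropout odds — the kind of model a counsellor can actually interrogate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical anonymised features: [failed_exams, grade_gap]; label: dropout.
n = 400
X = np.c_[rng.poisson(1.0, n), rng.normal(0.0, 1.0, n)]
true_logits = 1.2 * X[:, 0] + 0.5 * X[:, 1] - 1.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logits))).astype(float)

# Interpretable logistic regression trained by plain gradient descent:
# each coefficient is the additive effect of a feature on the log-odds.
Xb = np.c_[X, np.ones(n)]   # append a bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xb @ w)))
    w -= 0.1 * Xb.T @ (p - y) / n

print("coefficients (failed_exams, grade_gap, bias):", w)
```

A positive coefficient on failed_exams reads as "each additional failed exam raises the dropout log-odds by about that amount" — precisely the kind of statement a black-box deep model cannot offer directly.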

Research Network

Selected academic collaborators I have published with: