Building trustworthy AI Agents in the EU AI Act era

AI is finding its way into almost every part of business – and nowhere more obviously than in customer service. It can solve problems faster, free up teams, and improve customer experiences. But as adoption of AI grows, so does the need for safety, transparency, and compliance with new regulations like the EU AI Act.

To understand what that looks like in practice, we caught up with Felicia Ekener from our AI team, for a closer look at the process behind creating reliable AI Agents.

By now, most people have heard of the EU AI Act. But for anyone who isn’t familiar – what is it, and how does it apply to services like Ebbot?

Felicia:

"The EU AI Act is a new regulation that sets the rules for how AI should be developed and used across Europe. The aim is to make sure AI is safe, transparent, and respectful of people’s rights.

In short the EU AI Act divides AI into four different risk categories – from minimal to high risk – depending on what the AI is used for. Ebbot falls under the limited-risk category. That means our technology is considered safe for everyday business use, but we still need to meet certain requirements.

For us, that translates into that we have a really thorough process before any AI agent ever reaches a customer. For example we constantly evaluate new open-source foundation models and compare them with standard benchmarks, and also against our own internal metrics that reflect the qualities we care about most.

And when we find a model that looks promising, we fine-tune it and test it continuously – both automatically with metrics and manually with real-world customer scenarios. And only once a model has gone through all those steps, usually several rounds of iteration, do we consider it ready.

It’s a long process, but it’s what ensures we can stand behind our AI with confidence.”

👉 Want to dive deeper into what the EU AI Act means for businesses? Check out our blog on How the EU AI Act Will Shape the Future of Service Automation.

Customer service often involves sensitive information. How do you make sure that data stays protected?

Felicia:

"You’re absolutely right – data protection is critical when you’re using AI in customer service.

The first thing that you need to know is that our models are never trained on customer data. That means the model itself doesn’t actually ‘know’ anything about individual customers or their information.

Instead, when someone interacts with the AI, it uses tools like RAG (Retrieval-Augmented Generation) to pull just the specific data needed to answer that request. You can think of it as two separate parts: the AI agent, which acts as the brain, and the customer data, which it only borrows for a moment to generate the right response. Nothing ever gets stored inside the model itself.

The data always stays under the customer’s control, and anything stored is kept within the EU. And we also have built-in functions that can automatically mask or remove sensitive information.”

The EU AI Act also highlights fairness and non-discrimination as key principles. How do we ensure Ebbot’s AI agents live up to that?

Felicia:

“Yeah, bias is one of the biggest concerns when it comes to AI. And for good reason. That’s why every model we build starts with bias mitigation in mind.

That means we don’t just look at how well a model performs, but also what kind of data it was trained on and how that might influence its answers. We test for this early on, and it’s one of the key factors we evaluate before a model is ever used in production.

And once a model goes live, we keep monitoring it. Through our AI Insights tool, customers can actually see how the AI has answered in different situations, review the analysis behind it, and even create their own evaluation sets to track potential bias in their specific domain.

That way, if something doesn’t look right, they have full visibility and the control to fine-tune the AI so it behaves the way they want.”

And what about hallucinations? You know, when an AI sounds confident but gives the wrong answer?

Felicia:

“Yeah, hallucinations are definitely one of the trickiest parts of working with generative AI.

At Ebbot we work against hallucinations in several ways. First, we test and benchmark our models specifically for how prone they are to hallucinate, and we fine-tune the ones that perform best. But the reality is, you can’t completely remove hallucinations – it’s part of how these models generate language.

That’s why we add extra safeguards that can flag answers that look unreliable or off-topic. And just as important, we make sure the model always has clear, accurate data to rely on. The better the context, the less likely it is to make something up.

In practice, we can really see the difference — our fine-tuned models perform noticeably better than the raw base models when it comes to reducing hallucinations and keeping answers grounded in real information.”

And after all that testing, how do you keep the AI agent safe when it’s actually running with customers?

Felicia:

“We’ve built in several layers of protection to make sure our AI agents operate safely once they’re live.

For example, we use prompt guards that detect and block harmful or unusual inputs before they even reach the model. Then we have content guards that review the AI’s responses before they’re shown to the user – kind of like a final quality check.

On top of that, each customer setup is tailored with its own configuration – usually through custom personas, prompts, and settings that fit their specific use case.

The goal isn’t just to prevent something from going wrong – it’s to make sure the agent stays genuinely helpful, accurate, and trustworthy in every single interaction.”

Wrapping up

The EU AI Act is raising the bar for how AI should be built – and that’s something we welcome here at Ebbot. With a strong in-house team of AI engineers and machine learning experts, we can continuously test, refine, and secure our models to meet these new standards.

About Ebbot

Ebbot is an Agentic AI platform designed for large-scale service automation. Built to meet the needs of regulated industries, Ebbot is trusted by more than 200 companies to deliver outstanding service experiences for both customers and employees.

👉 Want to see Ebbot’s AI Agents in action?