Jan 12, 2026

Third-party AI in dental clinics: what cannot be outsourced?
Dental clinics are increasingly using AI features provided by external parties – often as embedded parts of record systems, communication tools, or operational systems.

When AI comes "ready-made" from a supplier, it is easy to assume that responsibility comes with it. In practice, confusion arises precisely at the intersection between the supplier's product responsibility and the clinic's responsibility for use, control, and documentation.

This article explains what can typically be delegated to third parties, what cannot be outsourced, and which governance principles should be in place for use to remain verifiable over time.

What does third-party AI use in dental health involve?

In dental health, third-party AI rarely arrives as a standalone "AI solution." More often, AI functionality is packaged into services the clinic already uses, or purchased as add-ons that provide automation and support.

Typical third-party deliveries may include:

  • Text and documentation support (drafts, summaries, structuring, language adaptation)

  • Workflow and prioritization (sorting of inquiries, task suggestions, capacity support)

  • Quality and deviation signals (alerts about deficiencies, inconsistency, or patterns)

  • Patient communication (standard texts, response suggestions, translations)

In all cases, the core remains the same: AI affects the decision basis, formulations, or priorities in the clinic's work. Responsibility can therefore not be understood solely as a product question; it is a matter of how the clinic governs and uses the functionality in practice.

Where does the responsibility issue arise?

The responsibility issue becomes especially evident when the clinic uses third-party AI without simultaneously establishing clear frameworks for use, control, and documentation.

Some recurring situations are:

  1. When AI is used as a "standard function" without local clarification
    Embedded functions can be activated through updates or default settings. If the clinic does not have a decision point for what is an acceptable use area, responsibility becomes unclear and person-dependent.

  2. When the supplier's documentation does not meet the clinic's governance needs
    The supplier may document the product, but the clinic must still be able to explain its own practice: which use areas are approved, who controls, and how deviations are handled.

  3. When the data chain becomes complex (subcontractors and cloud services)
    Third-party AI often involves multiple links. For personal data, the fundamental principle is that the entity determining the purpose and means of processing (the data controller) cannot "outsource" its responsibility by using a data processor.

  4. When "human control" is assumed but not operationalized
    The clinic may assume that professionals always evaluate and intervene when necessary, but without concrete control points and roles, this becomes variable practice. The EU AI Act describes obligations for deployers of high-risk AI, including duties to ensure competent human oversight and to use the system according to its instructions for use.

  5. When something goes wrong, and the clinic cannot reconstruct the event history
    In practice, "responsibility" often becomes a matter of inquiry: What was the active function? What suggestion did the system provide? Who approved? What was changed? Without traceability, it becomes difficult to learn, correct, and explain – regardless of the supplier.

Common misconceptions

"Responsibility follows the supplier"

Suppliers are responsible for their product and documentation, but the clinic is responsible for how the solution is used in its own operations: use areas, roles, training, control points, and deviation handling. This becomes particularly evident in the EU AI Act, which distinguishes between the obligations of providers (suppliers) and deployers of high-risk AI.

"If we have a contract, we have outsourced the risk"

A contract can allocate deliveries and duties, but it does not replace the clinic's need for actual management. For personal data, organizations can outsource processing activities, but not their responsibility and obligations under the GDPR.

"The supplier's certification or 'compliance' is enough"

Certifications and standards can be relevant signals, but they say little about how the solution functions in the clinic's specific usage pattern. The responsibility issue often arises not in the supplier's design, but in the clinic's workflow: what is AI used for, when should one stop, and how is control documented?

"This is just an IT and procurement question"

Third-party AI involves procurement and operations, but the most demanding clarifications are organizational: who approves use areas, who exercises professional control, and who owns deviations and changes. When governance is reduced to IT/procurement, professional ownership often becomes unclear.

"If the system is embedded, it's not 'our AI'"

Embedded functions can feel like "just part of the system," but for responsibility and verifiability, it is still the clinic's practice that is decisive: who uses it, for what, and within which frameworks. This is often where a gap arises between actual use and documented governance.

What should be in place in practice?

When AI is provided by a third party, the clinic should in practice be able to show that it governs the use – not just that it has acquired a tool. A pragmatic minimum can be structured as follows:

  • Mapped actual AI usage in the organization
    Overview of which systems have AI functions (including "embedded" functions), what they are used for, and which processes they affect. 

  • Clear roles, responsibilities, and decision authority
    Define who can activate new functions, who approves use areas, and who owns follow-up. In small clinics, these roles may be concentrated in a few individuals, but they should be explicit.

  • Human control and possibility for override
    The control must be concrete: what should be verified before signing/sending, what the stop criteria are, and who has the mandate to pause use in case of deviations. For deployers of high-risk AI, the EU AI Act describes duties related to competent human oversight, use according to instructions, and continuous monitoring.

  • Documentation and traceability
    The clinic should be able to explain afterward: which use area, which function, who approved, and which control mechanisms applied. For high-risk AI, the EU AI Act also requires deployers to keep the logs generated by the system for at least six months, insofar as the logs are under the deployer's control.

  • Explainability to patients and supervisory authorities
    The clinic should be able to describe practically what AI is used for and what limitations and control points exist. This is about verifiability, not technical details. 

  • Third-party management that actually covers the data chain
    Where personal data is processed, the clinic must have an overview of the roles and the chain (data processors and subcontractors) and be able to document that this is managed as part of its accountability. The EDPB emphasizes that accountability lies primarily with the controller, even when processing is outsourced, and that the controller must retain control through instructions and documentation.

Actera's role in this

Actera was established to give dental health organizations structure for responsible AI use.

We do not work with technology development or clinical decisions but with governance structure, lines of responsibility, and documentation – so that AI can be used in a safe, predictable, and verifiable manner. 

Final consideration

Third-party AI makes it possible to use advanced functionality without building it yourself. At the same time, it does not change a fundamental fact: the clinic owns the use in its own context and must be able to explain how AI affects practice.

What cannot be outsourced is therefore not "the technology," but the governance: decision points, control mechanisms, traceability, and lines of responsibility that withstand verification. When this is in place, supplier usage also becomes more robust over time – because the clinic has a framework that handles changes, deviations, and new functions without the responsibility becoming unclear.
Related articles

AI in documentation: responsibility and record-keeping

AI in dental clinics: who is responsible?

AI and Patient Information: Traceability and Accountability

Business Intelligence in Dental Care: Data as Decision Support

Deviations in AI at Dental Clinic: Responsibility and Follow-up

Third-party AI in dental clinics: what cannot be outsourced?

Documentation of AI Use in Dentistry

Human oversight in the use of AI in dental clinics

Responsible AI in Dental Clinics: Which Roles Need to be Clear?