AI Disclosure
- Last updated: 10 May 2026
- Version: 1.0
This page explains how Loopify Independent Media uses Artificial Intelligence (AI) tools, what humans review, and how we comply with the EU AI Act, national rules, and the content policies of the platforms on which we publish.
What AI tools we use
| Function | Example tools | Human role |
|---|---|---|
| Research consolidation | Large language models (e.g. Gemini) | Editor selects topic, verifies sources, rewrites where needed |
| Script drafting | Large language models | Editor reads in full, edits hooks and claims, approves or rejects |
| Narration audio | Text-to-speech (e.g. ElevenLabs) | Editor selects voice and reviews output |
| Imagery | Text-to-image generators (e.g. Imagen) | Editor reviews each image; replaces unsuitable ones |
| Subtitles | Speech-to-text (e.g. Whisper) | Editor reviews and corrects subtitle timing or transcription |
| Production layout | Custom pipeline | Editor configures parameters and reviews output |
What AI tools do not do
- AI does not decide what to publish. The editor decides.
- AI does not impersonate real persons. We do not generate synthetic likenesses of identifiable individuals without consent.
- AI is not used to fabricate quotations or events. Where AI output contains a quotation, the editor verifies it against the named source or removes it.
- AI is not used to deceive. Output is framed as commentary or news, never as a verbatim reproduction of someone else's speech.
- AI does not auto-publish. No item leaves the system without human approval.
Editorial responsibility exemption (EU AI Act Article 50(4))
Article 50(4) of the EU AI Act (Regulation (EU) 2024/1689) provides that the obligation to label AI-generated or AI-modified content does not apply where the content has undergone human review or editorial control and where a natural or legal person holds editorial responsibility for its publication.
Loopify Independent Media relies on this exemption. The required documentation is in place:
- Identifiable editor. Identified on the Editor & Publisher page.
- Documented review procedure. Set out on the Editorial Policy page.
- Per-item review records. Audit log with reviewer identity, timestamps, actions, and approval signature retained for each item.
- Embedded metadata. Approval information embedded as XMP fields (Creator, Publisher, Rights, Identifier, History Action) in the file at publication.
Where a destination platform's own policy requires a label despite this exemption, we apply the platform's label.
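For readers interested in what the embedded approval metadata looks like, the sketch below assembles a minimal XMP packet carrying the fields listed above (dc:creator, dc:publisher, dc:rights, dc:identifier, and an xmpMM:History resource event for the approval action). This is an illustrative sketch only, not our production pipeline: the `build_xmp_packet` helper is hypothetical, and a real workflow would merge these properties into the file's existing XMP (for example with a tool such as exiftool) rather than emit a standalone packet.

```python
from datetime import datetime, timezone
from xml.sax.saxutils import escape


def build_xmp_packet(creator, publisher, rights, identifier, history_action):
    """Assemble a minimal XMP packet with approval metadata.

    Hypothetical helper for illustration; the namespaces and the
    xpacket id are the standard Adobe/Dublin Core values.
    """
    when = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"""<?xpacket begin="\ufeff" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:xmpMM="http://ns.adobe.com/xap/1.0/mm/"
    xmlns:stEvt="http://ns.adobe.com/xap/1.0/sType/ResourceEvent#">
   <dc:creator><rdf:Seq><rdf:li>{escape(creator)}</rdf:li></rdf:Seq></dc:creator>
   <dc:publisher><rdf:Bag><rdf:li>{escape(publisher)}</rdf:li></rdf:Bag></dc:publisher>
   <dc:rights><rdf:Alt><rdf:li xml:lang="x-default">{escape(rights)}</rdf:li></rdf:Alt></dc:rights>
   <dc:identifier>{escape(identifier)}</dc:identifier>
   <xmpMM:History>
    <rdf:Seq>
     <rdf:li rdf:parseType="Resource">
      <stEvt:action>{escape(history_action)}</stEvt:action>
      <stEvt:when>{when}</stEvt:when>
     </rdf:li>
    </rdf:Seq>
   </xmpMM:History>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>"""
```

The same five values also appear in the per-item audit log, so the embedded metadata and the review record can be cross-checked against each other.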
National rules
We follow national rules in the jurisdictions where we operate or publish. In particular:
- Italy — Law No. 132/2025. We comply with Italian rules on the use of AI in editorial and commercial communication. We do not produce or distribute deceptive synthetic media of real persons. Material that uses AI for substantial creative generation is identified to the destination platform per its own policy.
- Germany — Digitale-Dienste-Gesetz (DDG) and TDDDG (the successors to the Telemediengesetz and TTDSG) and editorial duties. Where applicable, the imprint and editorial responsibility are clearly stated and easily accessible.
- Other EU member states. We follow generally applicable national press, transparency, and consumer protection rules.
Platform-specific labelling
We apply each platform's required AI label or "made with AI" toggle where the platform's own rules require it, regardless of the EU exemption. Specifics for each platform are summarised on the Platform Compliance page. As a brief overview:
- YouTube: "Altered or synthetic content" disclosure used on items with realistic synthetic content (per YouTube policy effective May 2025).
- TikTok: "AI-generated content" toggle enabled on items with substantially AI-generated visuals or audio.
- Meta (Facebook / Instagram / Threads): "AI Info" tag applied where required; we do not contest C2PA-based auto-labels.
- X (Twitter): AI use disclosed where the platform's policies require it.
- LinkedIn: Disclosure of AI-assisted content where the platform expects it.
What we will not publish
- Deepfakes of real persons made without their consent;
- Synthetic content depicting children or minors in any sexualised context;
- AI-generated medical, legal, or financial advice presented as professional advice;
- Synthetic crisis-event imagery (fake disasters, fake breaking news);
- Content that fabricates statements by named real persons;
- Content designed to manipulate elections or undermine civic processes;
- Content that incites violence, hate, or discrimination on protected characteristics.
Asking about a specific item
If you would like to know how AI was used in a specific item, please write to support@loopify.pro with the URL of the item. We will respond and, where appropriate, update or annotate the item.