Editorial Policy
- Last updated: 10 May 2026
- Version: 1.0
This page describes how Loopify Independent Media reviews and approves content before publication. The process is the basis for our claim of editorial responsibility under Article 50(4) of the EU AI Act and underpins our compliance with platform-specific content policies.
Who reviews
Editorial review is performed by the Editor & Publisher, Serhii Hrytsyshyn, identified on the Editor & Publisher page. Reviewer identity, role, and contact details are recorded as a snapshot at the time of each review and stored with the audit log for that item.
Review process
Each item proceeds through the following stages before publication:
- Topic selection. The editor selects topics that are newsworthy, of public interest, and aligned with the channel's purpose.
- Draft production. AI tools assist with research consolidation, draft writing, narration synthesis, image generation, and subtitle generation. AI never makes the publication decision.
- Editorial review. The editor reads the full script, evaluates the opening line (hook), checks the substantive claims against named sources, and reviews the visual material for accuracy and appropriateness.
- Edits. Where the editor identifies issues — factual ambiguity, weak sourcing, unclear framing, inappropriate visuals — the item is edited or rejected.
- Approval. Once the editor is satisfied, the item is approved with a timestamped action log and a SHA-256 signature derived from the project identifier, reviewer name, completion timestamp, and number of recorded actions.
- Metadata embedding. Approval information — including editor name, organisation, and approval signature — is embedded as XMP metadata in the published video file before distribution.
- Distribution. Only after approval is the item released to a distribution platform.
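The approval signature described above can be sketched as a simple hash over the four recorded fields. The field order and the "|" delimiter below are illustrative assumptions; the policy specifies only which fields feed the SHA-256 digest, not how they are serialised.

```python
import hashlib

def approval_signature(project_id: str, reviewer: str,
                       completed_at: str, action_count: int) -> str:
    """Derive a SHA-256 approval signature from the four recorded fields.

    Serialisation (field order, "|" delimiter) is a hypothetical choice
    for illustration; any fixed, documented scheme would serve.
    """
    payload = "|".join([project_id, reviewer, completed_at, str(action_count)])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical example values:
sig = approval_signature("loopify-2026-0142", "Serhii Hrytsyshyn",
                         "2026-05-10T14:32:05Z", 7)
```

Because the signature is deterministic, recomputing it from the stored audit-log fields lets anyone verify that the logged approval data has not been altered since publication.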
What "substantial human review" means here
Substantial review under this policy requires, at minimum:
- Reading the full script in the language in which it will be published;
- Verifying that the claims attributed to specific sources actually appear in those sources;
- Approving or rewriting the opening line (hook) so it accurately frames the item;
- Reviewing each generated image for accuracy, appropriateness, and absence of misleading depiction;
- Confirming that the item complies with the destination platform's content policies;
- Recording each of the above as a discrete entry in the audit log.
Light proofreading, grammar correction, or a casual scan does not qualify as substantial review under this policy.
Standards we apply
Accuracy
We aim for factual accuracy. Where a claim cannot be verified against a credible named source, the claim is rewritten as opinion or removed, or the item is rejected.
Sourcing
We name our sources. We avoid anonymous claims unless attribution is impossible and the public interest is significant. Specific source attribution standards are described on the Source Policy page.
Fairness and balance
On contested or political topics, we represent the principal perspectives without advocacy. Where we express opinion, it is identified as opinion.
Originality
Content is original to Loopify Independent Media. Quotations from third-party sources are kept short, attributed, and used in the context of commentary or news reporting. We do not republish substantial passages from other publishers.
No deception
We do not impersonate real persons, fabricate quotations, generate synthetic likenesses of identifiable individuals without consent, or create content designed to mislead viewers about its origin or factual basis.
Privacy and dignity
We avoid content that exposes private individuals without a clear public interest, content directed at minors that could be inappropriate, and content that demeans groups on protected characteristics.
Audit log
For each item, the audit log records:
- Project identifier and channel;
- Review submission, start, and completion timestamps;
- Reviewer identity (snapshot);
- Each review action with a timestamp and structured details;
- The decision (approved or rejected) and, if rejected, the reason;
- The SHA-256 approval signature.
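The record structure above can be sketched as a pair of data classes. All field names here are illustrative assumptions mapping one-to-one onto the bullets in this section, not a published schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewAction:
    """One discrete review action, as required by the substantial-review checklist."""
    timestamp: str   # ISO 8601 timestamp of the action (illustrative format)
    action: str      # e.g. "script_read", "source_check" (hypothetical labels)
    details: dict    # structured details of the action

@dataclass
class AuditLog:
    """Per-item audit log; field names are illustrative, not a published schema."""
    project_id: str
    channel: str
    submitted_at: str
    review_started_at: str
    review_completed_at: str
    reviewer_snapshot: dict                   # identity, role, contact at review time
    actions: list = field(default_factory=list)
    decision: str = "pending"                 # "approved" or "rejected"
    rejection_reason: Optional[str] = None    # set only when decision == "rejected"
    approval_signature: Optional[str] = None  # SHA-256 hex digest once approved
```

A log laid out this way serialises directly to JSON, which makes producing it on request to a competent authority, or re-checking it during self-audit, straightforward.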
Audit logs are retained for the operational lifetime of the published item and at minimum for the period required by applicable law. Logs are produced on request to competent authorities and are available to the editor for self-audit.
Distribution platform compliance
Approved items are released only to platforms whose content policies they comply with. Where a platform requires explicit AI labelling at upload (for example, the YouTube "Altered or synthetic" disclosure or the TikTok AI-generated content label), the editor applies the required label. Platform-specific commitments are summarised on the Platform Compliance page.
Corrections
Where a published item contains a material error, we publish a correction or retraction in line with the Corrections policy. The audit log of the affected item is updated to record the correction.
Updates to this policy
This policy is reviewed periodically. Material changes are noted at the top of this page. The version label and last-updated date allow you to identify which policy version applied at any given time.