Editorial Policy

Last updated: 10 May 2026
Version: 1.0

This page describes how Loopify Independent Media reviews and approves content before publication. The process is the basis for our claim of editorial responsibility under Article 50(4) of the EU AI Act and underpins our compliance with platform-specific content policies.

Core principle. Every item published under Loopify Independent Media is reviewed in full by an identifiable human editor before it is released. AI tools assist; humans decide and are accountable.

Who reviews

Editorial review is performed by the Editor & Publisher, Serhii Hrytsyshyn, identified on the Editor & Publisher page. Reviewer identity, role, and contact details are recorded as a snapshot at the time of each review and stored with the audit log for that item.

Review process

Each item proceeds through the following stages before publication:

  1. Topic selection. The editor selects topics that are newsworthy, of public interest, and aligned with the channel's purpose.
  2. Draft production. AI tools assist with research consolidation, draft writing, narration synthesis, image generation, and subtitle generation. AI never makes the publication decision.
  3. Editorial review. The editor reads the full script, evaluates the opening line (hook), checks the substantive claims against named sources, and reviews the visual material for accuracy and appropriateness.
  4. Edits. Where the editor identifies issues — factual ambiguity, weak sourcing, unclear framing, inappropriate visuals — the item is edited or rejected.
  5. Approval. Once the editor is satisfied, the item is approved with a timestamped action log and a SHA-256 signature derived from the project identifier, reviewer name, completion timestamp, and number of recorded actions.
  6. Metadata embedding. Approval information — including editor name, organisation, and approval signature — is embedded as XMP metadata in the published video file before distribution.
  7. Distribution. Only after approval is the item released to a distribution platform.
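The approval signature in step 5 can be sketched as follows. This is a minimal illustration only: the field order, the separator, and the function name are assumptions for the sketch, not the production format.

```python
import hashlib

def approval_signature(project_id: str, reviewer: str,
                       completed_at: str, action_count: int) -> str:
    """Illustrative SHA-256 approval signature over the four inputs
    named in the policy. The join format ("|" separator, this field
    order) is an assumption, not the exact implementation."""
    payload = "|".join([project_id, reviewer, completed_at, str(action_count)])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Example with hypothetical values:
sig = approval_signature("proj-0001", "Serhii Hrytsyshyn",
                         "2026-05-10T12:00:00Z", 14)
print(sig)  # 64-character hexadecimal digest
```

Because the digest is deterministic over its inputs, the same project identifier, reviewer, timestamp, and action count always reproduce the same signature, which is what lets the embedded metadata be checked against the audit log later.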

What "substantial human review" means here

Substantial review under this policy requires, at minimum:

  - reading the full script from start to finish;
  - checking each substantive claim against a credible named source;
  - reviewing all visual and audio material for accuracy and appropriateness.

Light proofreading, grammar correction, or a casual scan does not qualify as substantial review under this policy.

Standards we apply

Accuracy

We aim for factual accuracy. Where a claim cannot be verified against a credible named source, the claim is rewritten as clearly labelled opinion, the claim is removed, or the item is rejected.

Sourcing

We name our sources. We avoid anonymous claims unless attribution is impossible and the public interest is significant. Specific source attribution standards are described on the Source Policy page.

Fairness and balance

On contested or political topics, we represent the principal perspectives without advocacy. Where we express opinion, it is identified as opinion.

Originality

Content is original to Loopify Independent Media. Quotations from third-party sources are kept short, attributed, and used in the context of commentary or news reporting. We do not republish substantial passages from other publishers.

No deception

We do not impersonate real persons, fabricate quotations, generate synthetic likenesses of identifiable individuals without consent, or create content designed to mislead viewers about its origin or factual basis.

Privacy and dignity

We avoid content that exposes private individuals without a clear public interest, content directed at minors that could be inappropriate, and content that demeans groups on protected characteristics.

Audit log

For each item, the audit log records:

  - a snapshot of the reviewer's identity, role, and contact details at the time of review;
  - timestamped review and edit actions;
  - the approval timestamp and the SHA-256 approval signature;
  - any subsequent corrections or retractions.

Audit logs are retained for the operational lifetime of the published item and at minimum for the period required by applicable law. Logs are produced on request to competent authorities and are available to the editor for self-audit.
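As a hedged illustration, a single audit-log record under this policy might look like the following. All field names and values here are hypothetical, grounded only in the reviewer snapshot, timestamped actions, and approval signature this policy describes.

```python
import json

# Hypothetical audit-log record. Field names are illustrative
# assumptions, not the actual log schema.
record = {
    "project_id": "proj-0001",
    "reviewer": {
        "name": "Serhii Hrytsyshyn",
        "role": "Editor & Publisher",
    },
    "actions": [
        {"at": "2026-05-10T11:40:00Z", "action": "script_reviewed"},
        {"at": "2026-05-10T11:55:00Z", "action": "visuals_reviewed"},
        {"at": "2026-05-10T12:00:00Z", "action": "approved"},
    ],
    # SHA-256 hex digest produced at the approval step (placeholder here).
    "approval_signature": "<sha256-hex-digest>",
}

print(json.dumps(record, indent=2))
```

A structured record like this is what makes the retention and self-audit commitments above practical: each approval can be re-checked field by field against the metadata embedded in the published file.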

Distribution platform compliance

Approved items are released only to platforms whose content policies they satisfy. Where a platform requires explicit AI labelling at upload (for example, the YouTube "Altered or synthetic" disclosure or the TikTok AI-generated content label), the editor applies the required label. Platform-specific commitments are summarised on the Platform Compliance page.

Corrections

Where a published item contains a material error, we publish a correction or retraction in line with the Corrections policy. The audit log of the affected item is updated to record the correction.

Updates to this policy

This policy is reviewed periodically. Material changes are noted at the top of this page. The version label and last-updated date allow you to identify which policy version applied at any given time.