Launched March 4, 2026, Apple Music's four AI-disclosure tags travel inside the metadata Apple ingests from aggregators. element59 sets them at upload from your DDEX 4.3.1 disclosure so they ship with every track — no separate Apple-side form.
Source: AppleInsider — Apple Music rolls out AI transparency tags (March 4, 2026).
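To make the data flow concrete, here is a minimal sketch of the disclosure record that could ship with a track at upload. This is an assumption-heavy illustration: the type and field names are invented, not actual DDEX 4.3.1 elements or element59's internal schema; only the five origin values come from the disclosure spec described at the end of this section.

```typescript
// Illustrative only: names are assumptions, not real DDEX 4.3.1 elements
// or element59's schema. The five origin values are from the DDEX 4.3.1
// disclosure described below.
type AiOrigin =
  | "FULLY_HUMAN"
  | "HUMAN_WITH_AI_ASSIST"
  | "AI_WITH_HUMAN_MOD"
  | "AI_WITH_HUMAN_GUIDE"
  | "FULLY_AI";

interface TrackAiDisclosure {
  origin: AiOrigin;        // five-state origin picked during upload
  vocals: boolean;         // AI generated or transformed the vocal performance
  instrumental: boolean;   // AI-generated instrumentation or production elements
  songwriting: boolean;    // AI co-wrote melodic/harmonic/structural decisions
  postProduction: boolean; // AI mastering, stem separation, denoising, etc.
}
```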
The four tags and what triggers each (modeled as a type after the list):

- Vocals: AI generated or transformed the vocal performance, including voice cloning, text-to-singing models like Suno's vocal stem, and AI vocal tuning beyond standard pitch correction. The tag fires whether the vocal is fully synthetic or a real performance pushed through an AI vocal-style model.
- Instrumentation: AI generated the instrumental performance or significant production elements, such as drums, bass, synth lines, or guitar from a generative model. Standard digital instruments (sampled VSTs, drum machines) don't trigger the tag; only generative-model output does.
- Songwriting: AI co-wrote or proposed the song's melodic, harmonic, or structural decisions. A human songwriter using a chatbot for lyric ideas usually doesn't trigger this tag; an AI that proposed the chord progression or song structure does.
- Post-production: AI handled mastering, stem separation, vocal isolation, denoising, or other post-tracking processing using a generative or learned model, not just a static algorithm.
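As a compact reference for the mapping sketch below, the four tags can be modeled as a union type. The identifiers are invented for illustration; this section doesn't quote Apple's actual tag names.

```typescript
// Identifiers invented for illustration; Apple's real tag names may differ.
type AppleAiTag =
  | "AI_VOCALS"            // synthetic or AI-transformed vocal performance
  | "AI_INSTRUMENTATION"   // generative-model instruments or production elements
  | "AI_SONGWRITING"       // AI-proposed melody, harmony, or song structure
  | "AI_POST_PRODUCTION";  // learned-model mastering, stem separation, denoising
```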
The five-state DDEX 4.3.1 origin you pick during upload (FULLY_HUMAN, HUMAN_WITH_AI_ASSIST, AI_WITH_HUMAN_MOD, AI_WITH_HUMAN_GUIDE, FULLY_AI) combines with the four per-tag checkboxes in the disclosure wizard to produce the Apple tag set, and the upload preview shows the resulting set live. We don't infer tags from your audio; the artist always controls the final set.
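Putting the two sketches together, here is one plausible reading of how the origin state and the four checkboxes could combine into the tag set shown in the upload preview. The actual combination rule isn't spelled out above, so treat the FULLY_HUMAN gate and the direct checkbox-to-tag mapping as assumptions.

```typescript
// One plausible mapping, assuming the checkboxes select tags directly and
// a FULLY_HUMAN origin yields an empty set. Not element59's actual rule.
function appleTagSet(d: TrackAiDisclosure): AppleAiTag[] {
  if (d.origin === "FULLY_HUMAN") return []; // no AI involvement declared
  const tags: AppleAiTag[] = [];
  if (d.vocals) tags.push("AI_VOCALS");
  if (d.instrumental) tags.push("AI_INSTRUMENTATION");
  if (d.songwriting) tags.push("AI_SONGWRITING");
  if (d.postProduction) tags.push("AI_POST_PRODUCTION");
  return tags;
}

// Example: human-written song with an AI synth line and AI mastering.
const preview = appleTagSet({
  origin: "HUMAN_WITH_AI_ASSIST",
  vocals: false,
  instrumental: true,
  songwriting: false,
  postProduction: true,
});
// preview === ["AI_INSTRUMENTATION", "AI_POST_PRODUCTION"]
```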