Why Apple still lets malformed media files reach decoders – and how to stop it

By: jamweba

Proposed: a memory-safe, pre-decoder validator layer for media inputs (MP4, MOV, etc.) that Apple could deploy without changing existing decoders.

Eliminates a class of zero-click exploits. No format breakage. No decoder patching required.

https://jam2we5b3a.medium.com/this-is-the-future-apple-should-already-be-shipping-054c69d78e50

By: jamweba

4 weeks ago

Most media decoders still process unvalidated files — which keeps zero-click attack surfaces wide open.

This write-up outlines a minimal architectural fix: a structural validator that intercepts files before decoding begins.

    It needs no decoder rewrites

    It's format-agnostic (MP4, MOV, PNG, etc.); a rough sketch of the interception point is at the end of this comment

    It works with existing delivery paths (AirDrop, Mail, Safari)

    And it could be deployed today

Curious what others think: Why hasn’t this already been adopted? Would Apple (or anyone) ship it?
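
To make the format-agnostic point above concrete, here is a rough sketch of what the interception point could look like: sniff the container family from its magic bytes, then hand the bytes to a per-format structural walk before any decoder runs. This is illustrative Swift only; none of these names exist as an Apple API, and the per-format walkers are stubbed.

    import Foundation

    // Hypothetical interception point: identify the container family from its
    // magic bytes and run the matching structural check before any decoder runs.
    enum ContainerFamily {
        case isoBMFF   // MP4 / MOV / HEIF: an "ftyp" box at the start of the file
        case png       // fixed 8-byte PNG signature
        case unknown
    }

    func sniffContainer(_ data: Data) -> ContainerFamily {
        let pngMagic: [UInt8] = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]
        if data.count >= 8, Array(data.prefix(8)) == pngMagic { return .png }
        // ISO BMFF files start with a box whose type (bytes 4..7) is "ftyp".
        if data.count >= 8, data[4] == 0x66, data[5] == 0x74, data[6] == 0x79, data[7] == 0x70 {
            return .isoBMFF
        }
        return .unknown
    }

    // Per-format structural walkers, stubbed here; a real one parses the box or
    // chunk tree and checks declared sizes against actual bounds.
    func validateISOBMFF(_ data: Data) -> Bool { return false }
    func validatePNGChunks(_ data: Data) -> Bool { return false }

    func structurallyValid(_ data: Data) -> Bool {
        switch sniffContainer(data) {
        case .isoBMFF: return validateISOBMFF(data)
        case .png:     return validatePNGChunks(data)
        case .unknown: return false   // fail closed: never hand unknown structure to a decoder
        }
    }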

By: solardev

4 weeks ago

Doesn't this just move the validation step from each decoder into a sort of universal validation app (maintained and audited by whom?)? And wouldn't it require every app to pipe its documents through it first, or else an OS-level change to make it an integral part of the "open file" workflow? It's almost like an anti-virus program.

If validating media integrity is as simple as checking a few bytes in the header, the decoder could already do that on its own.

Presumably there are attacks in media that look valid but cause subtle decoding bugs that then escalate into more serious things. How would this proposal catch those without an in-depth understanding of each codec and version's possible failure modes, per operating system and hardware combo? The people who typically know that best are already on decoder or security teams, and this just moves their work to a separate project where they'd have to integrate their checks and preventive measures alongside every other format's. Seems like a lot of work?

By: jamweba

3 weeks ago

Sorry, I didn't see your reply earlier. Let me address each of your points.

First, your question about shifting work to a universal validator: the point isn't to create one monolithic parser for all formats; it's to enforce a structural validation layer before any decoder is allowed to operate. Think byte-level box/frame/atom parsing for formats like MP4, MOV, and PNG, where the container format is well-defined and modular. The validator isn't decoding media; it's checking that structure matches declared length/type bounds, that box trees are sane, and that forbidden segments aren't present. This can live at the OS level, just like Apple's existing XProtect and AMFI, but for structured media rather than binaries.
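
To make that concrete, here is roughly what I mean by a structural walk over an ISO BMFF (MP4/MOV) container. It's a sketch, not Apple's code: it skips the 64-bit largesize variant, and it only checks that every declared box size covers its own header, stays inside its parent's bounds, and that nesting depth stays bounded.

    import Foundation

    struct BoxValidationError: Error { let reason: String }

    // Read a big-endian UInt32; the caller guarantees offset + 4 <= data.count.
    func readUInt32BE(_ data: Data, at offset: Int) -> UInt32 {
        return data.subdata(in: offset..<offset + 4)
            .reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    }

    // Boxes whose payload is itself a sequence of child boxes.
    let containerTypes: Set<String> = ["moov", "trak", "mdia", "minf", "stbl", "edts", "dinf"]

    // Walk the box tree in data[start..<end]; throw on any structural violation.
    // Offsets are relative to the start of `data` (a whole file, not a slice).
    func validateBoxes(_ data: Data, from start: Int, to end: Int, depth: Int = 0) throws {
        guard depth <= 16 else { throw BoxValidationError(reason: "box tree nested too deep") }
        var offset = start
        while offset < end {
            guard end - offset >= 8 else {
                throw BoxValidationError(reason: "truncated box header at offset \(offset)")
            }
            var size = Int(readUInt32BE(data, at: offset))
            let type = String(bytes: data.subdata(in: offset + 4..<offset + 8), encoding: .ascii) ?? "????"
            if size == 1 {
                // 64-bit "largesize" form; a real validator must handle it, omitted here.
                throw BoxValidationError(reason: "largesize box not handled in this sketch")
            }
            if size == 0 { size = end - offset }   // "box extends to end of file" form
            // The declared size must cover its own header and stay inside the parent.
            guard size >= 8, offset + size <= end else {
                throw BoxValidationError(reason: "box '\(type)' overflows its enclosing range")
            }
            if containerTypes.contains(type) {
                try validateBoxes(data, from: offset + 8, to: offset + size, depth: depth + 1)
            }
            offset += size
        }
    }

A top-level call would be validateBoxes(fileData, from: 0, to: fileData.count); anything that throws never reaches a decoder.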

Can’t decoders already do this themselves? Theoretically yes — but in practice, media decoders are huge, legacy-tangled, performance-optimized, and frequently cross-platform. Asking each one to reliably gate input based on structural sanity is like asking libc to do bounds-checking. Sandboxes help, but they’re coarse — we’re talking about a clean, minimal contract: don’t decode unless the container structure is provably valid. It’s the same logic behind memory-safe preprocessing layers.
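
In code terms, the contract is just a thin gate in front of whatever decode entry point already exists; the decoder itself isn't modified. Again, a sketch with made-up names, not an actual Apple interface:

    import Foundation

    enum GateError: Error { case structurallyInvalid(String) }

    // The gate owns a per-format structural check (e.g. the box walk above) and
    // refuses to hand bytes to the decoder unless that check passes.
    struct PreDecodeGate {
        let validate: (Data) throws -> Void

        // Wraps any existing decode entry point without changing it.
        // Usage: try gate.decode(fileData, with: existingDecodeFunction)
        func decode<Output>(_ data: Data, with decoder: (Data) throws -> Output) throws -> Output {
            do { try validate(data) }
            catch { throw GateError.structurallyInvalid(String(describing: error)) }
            return try decoder(data)   // only reached for structurally valid input
        }
    }

The idea would be that the OS-level "open file" path constructs one gate per container family and routes everything through it; apps and the decoders themselves wouldn't need to change.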

What about subtle decoder bugs in seemingly valid media? True — this doesn’t solve all decoder vulnerabilities. But it dramatically cuts risk by stripping malformed, truncated, recursive, or structurally deviant files before they reach the decoder logic. You can’t prevent every logic bug in a decoder, but you can gate execution to files that pass structural integrity — just like we gate executable code through signing and entitlement checks.
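
As a quick illustration of the truncated or structurally deviant case (this assumes the validateBoxes sketch above): a buffer whose first box declares 64 bytes when only 16 are present never reaches a decoder, because the walk rejects it first.

    import Foundation

    // A 16-byte buffer whose leading box claims to be 64 bytes long.
    var bad = Data([0x00, 0x00, 0x00, 0x40])       // declared box size: 64
    bad.append(contentsOf: Array("ftyp".utf8))     // box type
    bad.append(Data(count: 8))                     // only 8 more bytes of payload

    do {
        try validateBoxes(bad, from: 0, to: bad.count)
        print("structurally valid; safe to pass to a decoder")
    } catch {
        print("rejected before decoding: \(error)")  // this branch runs here
    }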

Thanks for the thoughtful reply!

Jamweba