
Decoder metadata interface #2672

Open
197g wants to merge 33 commits into main from decoder-metadata-interface

Conversation

@197g
Member

@197g 197g commented Nov 26, 2025

See #2245, the intended ImageDecoder changes.

This changes the ImageDecoder trait to fix some underlying issues. The main change is a clarification to the responsibilities; the trait is an interface from an implementor towards the image library. That is, the protocol established from its interface should allow us to drive the decoder into our buffers and our metadata. It is not optimized to be used by an external caller which should prefer the use of ImageReader and other inherent methods instead.

This is a work in progress; the list below motivates the changes and discusses open points.

  • ImageDecoder::peek_layout encourages decoders to read headers after the constructor. This fixes the inherent problem we had with communicating limits. The sequence for internal use is roughly:
    let mut decoder = …;
    decoder.set_limits(); // Other global configuration we have?
    
    { // Potentially multiple times:
      let layout_info = decoder.peek_layout()?;
      let mut buffer = allocate_for(&layout_info);
      decoder.assign_metadata()?;
      decoder.read_image(&mut buffer)?;
    }
    
    // … for sequences, start again from `peek_layout()`
  • ImageDecoder::read_image(&mut self) no longer consumes self. We no longer need the additional boxed method and its trait workaround; the trait is now dyn-compatible.
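The driving loop sketched above could look roughly like this once the trait is dyn-compatible. Everything here is an illustrative stand-in, not the crate's actual API: the `Layout` struct, the `String` error type, and the `drive`/`DummyDecoder` names are made up.

```rust
// Hypothetical stand-ins for the proposed trait shape.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Layout {
    width: u32,
    height: u32,
    bytes_per_pixel: u32,
}

trait ImageDecoder {
    fn set_limits(&mut self, max_alloc: u64);
    fn peek_layout(&mut self) -> Result<Layout, String>;
    // Takes `&mut self`, so the trait stays dyn-compatible.
    fn read_image(&mut self, buf: &mut [u8]) -> Result<(), String>;
}

// The whole sequence can be driven through a trait object.
fn drive(decoder: &mut dyn ImageDecoder, max_alloc: u64) -> Result<Vec<u8>, String> {
    decoder.set_limits(max_alloc);
    let layout = decoder.peek_layout()?;
    let len =
        u64::from(layout.width) * u64::from(layout.height) * u64::from(layout.bytes_per_pixel);
    if len > max_alloc {
        return Err(format!("{len} bytes exceeds the allocation limit"));
    }
    let mut buffer = vec![0u8; len as usize];
    decoder.read_image(&mut buffer)?;
    Ok(buffer)
}

// A trivial decoder just to exercise the protocol.
#[derive(Default)]
struct DummyDecoder {
    header_read: bool,
}

impl ImageDecoder for DummyDecoder {
    fn set_limits(&mut self, _max_alloc: u64) {}
    fn peek_layout(&mut self) -> Result<Layout, String> {
        self.header_read = true;
        Ok(Layout { width: 2, height: 2, bytes_per_pixel: 3 })
    }
    fn read_image(&mut self, buf: &mut [u8]) -> Result<(), String> {
        if !self.header_read {
            return Err("header not read".into());
        }
        buf.fill(0xff);
        Ok(())
    }
}
```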

Discussion

  • Maybe peek_layout should return the full layout information in a single struct. We have a similar open issue for png in its own crate, and the related work for tiff is in the pipeline, where its BufferLayoutPreference already exists to be extended with said information.
    • Review limits and remove their size bounds insofar as they can be checked against the communicated bounds in the metadata step by the image side. See: Replace ImageDecoder::set_limits with ImageDecoder::set_allocation_limit #2709, Add an atomically shared allocation limit #2708
    • Idea: If a decoder supports builtin transforms (e.g. YCbCr -> Rgb conversion, grayscale, thumbnailing) that are more efficient than post-processing then there could be a negotiation phase here where information is polled twice / multiple times by different methods. The design should leave this negative space to be added in 1.1, but it's not highly critical.
  • Fix the sequence decoder to use the new API
  • Tests for reading an image with read_image then switching to a sequence reader. But that is supposed to become mainly an adapter that implements the iterator protocol.
  • Remove remnants of the dyn-compatibility issue.
  • Adapt to the possibility of fetching metadata after the image. This includes changing ImageReader with a new interface to return some of it. That may be better suited for a separate PR though.
    • Extract the CICP part of the metadata as CicpRgb and apply it to a decoded DynamicImage.
    • Ensure that this is supported by all the bindings.
  • Deal with limits: Decoder metadata interface #2672 (comment)

Cleanup

  • Better, more consistent errors from peek_layout after read_image
    • avif
    • tga
    • pnm
    • tiff
    • dxt
    • qoi
    • dds
    • gif
  • Make sure that read_image is 'destructive' in all decoders, i.e. re-reading an image and reading an image before init should never access an incorrect part of the underlying stream but instead return an error. Affects pnm and qoi for instance where the read will interpret bytes based on the dimensions and color, which would be invalid before reading the header and only valid for one read.
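The "destructive read" contract from the last point could be sketched like this. All names are hypothetical (this is not the actual pnm/qoi code): the decoder tracks its own phase so that a read before the header, or a second read of the same image, fails instead of interpreting stream bytes with stale dimensions and color info.

```rust
// Phases a decoder moves through for one image; illustrative only.
#[derive(PartialEq)]
enum Phase {
    BeforeHeader,
    Ready,
    Consumed,
}

struct PnmLikeDecoder {
    phase: Phase,
}

impl PnmLikeDecoder {
    fn new() -> Self {
        Self { phase: Phase::BeforeHeader }
    }

    /// Reads the header on first use; afterwards a cached layout is returned.
    fn peek_layout(&mut self) -> Result<(u32, u32), String> {
        match self.phase {
            Phase::BeforeHeader => {
                self.phase = Phase::Ready; // pretend we parsed "2x2" from the stream
                Ok((2, 2))
            }
            Phase::Ready => Ok((2, 2)),
            Phase::Consumed => Err("no further image in the stream".into()),
        }
    }

    /// Destructive: valid exactly once per successful `peek_layout`.
    fn read_image(&mut self, buf: &mut [u8]) -> Result<(), String> {
        if self.phase != Phase::Ready {
            // Without a parsed header the bytes would be interpreted with
            // stale dimensions/color info, so fail instead of misreading.
            return Err("read_image called out of sequence".into());
        }
        buf.fill(0xff);
        self.phase = Phase::Consumed;
        Ok(())
    }
}
```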

@mstoeckl
Contributor

mstoeckl commented Nov 27, 2025

The main change is a clarification to the responsibilities; the trait is an interface from an implementor towards the image library. That is, the protocol established from its interface should allow us to drive the decoder into our buffers and our metadata. It is not optimized to be used by an external caller which should prefer the use of ImageReader and other inherent methods instead.

With this framing, I think Limits::max_image_width and Limits::max_image_height no longer need to be communicated to or handled by the ImageDecoder trait, because the external code can check ImageDecoder::dimensions() before invoking ImageDecoder::read_image(); only the memory limit (Limits::max_alloc) is essential. That being said, the current way Limits are handled by ImageDecoder isn't that awkward to implement, so to reduce migration costs keeping the current ImageDecoder::set_limits() API may be OK.
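The caller-side check described here might look roughly like this; the field names follow the `Limits` fields mentioned above, but the `check_dimensions` helper itself is hypothetical.

```rust
// Illustrative limits; only `max_alloc` would need to reach the decoder.
struct Limits {
    max_image_width: u32,
    max_image_height: u32,
    max_alloc: u64,
}

/// Rejects oversized images before any buffer is allocated; on success the
/// required buffer size is returned for the actual allocation.
fn check_dimensions(
    limits: &Limits,
    width: u32,
    height: u32,
    bytes_per_pixel: u64,
) -> Result<u64, String> {
    if width > limits.max_image_width || height > limits.max_image_height {
        return Err(format!("{width}x{height} exceeds the dimension limits"));
    }
    let bytes = u64::from(width) * u64::from(height) * bytes_per_pixel;
    if bytes > limits.max_alloc {
        return Err(format!("{bytes} bytes exceeds the allocation limit"));
    }
    Ok(bytes)
}
```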

@fintelia
Contributor

A couple thoughts...

I do like the idea of handling animation decoding with this same trait. To make sure I understand: are you thinking of "sequences" as being animations, or also stuff like the multiple images stored in a TIFF file? Even just handling animation has some tricky cases though. For instance in PNG, the default image that you get if you treat the image as non-animated may be different from the first frame of the animation. We might need both a read_image and a read_frame method.

The addition of an init method doesn't seem like it gains us much. The tricky part of our current new+set_limits API is that you get to look at the image dimensions and total output size in bytes when deciding what decoding limits to set. Requiring init (and by extension set_limits) to be called before reading the dimensions makes it basically the same as just having a with_limits constructor.

@197g
Member Author

197g commented Nov 27, 2025

Requiring init (and by extension set_limits) to be called before reading the dimensions makes it basically the same as just having a with_limits constructor.

It's a dyn-compatible way that achieves the goal of the constructor so it is actually an abstraction.

The tricky part of our current new+set_limits API is that you get to look at the image dimensions and total output size in bytes when deciding what decoding limits to set.

What do you mean by this? The main problem in png that I'm aware of is the lack of configured limits for reading the header in the ImageReader path, which motivated the extra constructor in the first place. With png we cannot modify the limits after the fact, but we also don't really perform any large size-dependent allocation within the decoder.

I'm also not suggesting that calling set_limits after the layout inspection would be disallowed, but whether that 'frees' additional capacity is obviously decoder dependent. I guess whether that is sufficient remains to be seen? When we allocate a buffer (with applied allocator limits) that allows forwarding the remaining buffer size to the decoder. Or, set aside a different buffer allowance for metadata vs. image data. Whatever change is necessary in png just comes on top anyways; the init flow just allows us to abstract this and thus apply it with an existing Box<dyn ImageDecoder> so we don't have to do it all before. Indeed, as the comment on size alludes to, we may want two different limit structs: one user facing that we use in ImageReader and one binding-facing that is passed to ImageDecoder::set_limits. Then settings just need to be

@197g
Member Author

197g commented Dec 4, 2025

@fintelia This now includes the other changes including to ImageReader as a draft of what I meant in #2679 (comment). In short:

  • The file guessing routines and the construction of the boxed decoder are split into a separate type, ImageFile, which provides the previous methods of ImageReader but also an into_reader for the mutable, stateful interface.
  • ImageReader:
    • features all accessors for metadata considering that some formats fill said metadata (or a pointer to it) after an image is decoded.
    • has viewbox as a re-imagining of the previous ImageDecoderRect trait but split into two responsibilities: the trait does the efficiency decision on an image-by-image basis with an interface that allows a partial application of the viewbox (in jpeg and tiff we would decode whole tiles); then the reader takes care of translating that into an exact layout. Note that another type of image buffer with offset+rowpitch information could do that adjustment zerocopy—I still want to get those benefits of the type erased buffer/image-canvas someday and this fits in.
  • The code also retrieves the CICP from the color profile and annotates the DynamicImage with it where available. For sanity's sake the moxcms integration was rewritten to allow a smaller dependency to be used here, I'll split these off the PR if we decide to go that route.
  • Conceivably there's a gain_map (or similar) that may be queried similarly to the metadata methods. For that to be more ergonomic I'd like to seriously consider read_plane for, in tiff lingo, planar images as well as associated and non-associated mask data; and more speculatively other extra samples that are bump maps? uv? true cmyk?. While that does not necessarily all go into 1.* for any output that is not quite neatly statically sorted and sized as an Rgba 4-channel-homogeneous-host-order, I imagine it will be much simpler for a decoder to provide its data successively in multiple calls instead of a contiguous large byte slice. Similar to viewbox we'd allow this where ImageReader provides the compatibility to re-layout the image for the actual user—except where explicitly instructed. Adjusting ImageReader::decode to that effect should be no problem in principle.

@RunDevelopment
Member

I can't speak about image metadata, but I really don't like the new ImageDecoder interface as both an implementor of the interface and a potential user of it. Right now, it's just not clear to me at all how decoders should behave. My problems are:

  1. init. This is just two-phase initialization and opens so many questions.
    • Do users have to call it? The docs say "should" not "must" be called before read_image.
    • Are users allowed to call it multiple times? If so, the decoder has to keep track of whether the header has already been read.
    • Since init returns a layout, what's the point of dimensions() and color_type()? And what if they disagree?
    • What should dimensions and co do before init is called?
    • If init fails, what should happen to methods like dimensions and read_image? When called, should they panic, return an error, return default values?
    • After calling read_image, do you have to re-init before calling read_image again?
  2. viewbox makes it more difficult to implement decoders.
    • Now they always have to internally keep track of the viewbox rect if rect decoding is supported.
    • After calling viewbox, what should dimensions be? If they should be the viewbox size, should they reflect the new viewbox even before calling init?
    • It's not clear what should happen if viewbox returns ok, but init errors.
    • What should happen if users supply a viewbox outside the bounds of the image?
  3. When calling viewbox, is the offset of the rect relative to the (0,0) of the full image or the last set viewbox?
  4. What should happen if read_image is called twice? Should it read the same image again, error, read the next image in the sequence? The docs don't say.
    • If the intended behavior is "read the same image again", then those semantics would force all decoders to require Seek for the reader (or keep an in memory copy of the image for subsequent reads). Not an unreasonable requirement, but it should be explicitly documented.

Regarding rectangle decoding, I think it would be better if we force decoders to support arbitrary rects. That's because the current interface is actually less efficient by allowing decoders to support only certain rects. To read a specific rect that is not supported as is, ImageReader has to read a too-large rect and then crop the read image, allocating the memory for the too-large image only to throw it away. It is forced to do this because of the API.

However, most image formats are based on lines of blocks (macro pixels). So we can do a trick: decode a line according to the too-large rect, and then only copy the pixels in the real rect to the output buffer. This reduces the memory overhead for unsupported rects from O(width*height) to O(width*block_height). Supported rects don't need this dance and can decode into the output buffer directly. I.e. that's kinda what DDS does.

And if a format can't do the line-based trick for unsupported rects, then decoders should just allocate a temp buffer for the too-large rect and then crop (=copy what is needed). This is still just as efficient as the best ImageReader can do.
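A minimal sketch of that line-based trick, with made-up names and a callback standing in for the decoder's row decoding: scratch memory is one row of the supported rect rather than the whole too-large image, and only the requested pixels are copied out.

```rust
/// Decodes the requested rect via a wider supported rect, one row at a time.
/// `decode_row` stands in for the format-specific row decoder: it fills the
/// scratch buffer with row `y` of the supported rect (absolute coordinates).
fn crop_rows(
    supported_x: u32,
    supported_w: u32,
    req_x: u32,
    req_y: u32,
    req_w: u32,
    req_h: u32,
    bpp: usize,
    mut decode_row: impl FnMut(u32, &mut [u8]),
    out: &mut [u8],
) {
    // Scratch memory: one row of the supported rect, O(width * 1) here.
    let mut row = vec![0u8; supported_w as usize * bpp];
    for dy in 0..req_h {
        decode_row(req_y + dy, &mut row);
        // Copy only the requested pixels into the tightly packed output.
        let src = (req_x - supported_x) as usize * bpp;
        let dst = dy as usize * req_w as usize * bpp;
        out[dst..dst + req_w as usize * bpp]
            .copy_from_slice(&row[src..src + req_w as usize * bpp]);
    }
}
```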

For use cases where users can use rowpitch to ignore the excess parts of the too-large rect, we could just have a method that gives back a preferred rect, which can be decoded very efficiently.

So the API could look like this:

trait ImageDecoder {
    // ...

    /// Returns a viewbox that contains all pixels of the given rect but can
    /// potentially be decoded more efficiently. If rect decoding is not
    /// supported or no more efficient rect exists, the given rect is
    /// returned as is.
    fn preferred_viewbox(&self, viewbox: Rect) -> Rect {
        viewbox // default impl
    }

    fn read_image_rect(&mut self, buf: &mut [u8], viewbox: Rect) -> ImageResult<()> {
        Err(ImageError::Decoding(Decoding::RectDecodingNotSupported)) // or similar
    }
}

This API should make rect decoding easier to use, easier to implement, and allow for more efficient implementations.

@197g 197g force-pushed the decoder-metadata-interface branch from 86c9194 to cdc0363 Compare December 7, 2025 18:22
@197g
Member Author

197g commented Dec 7, 2025

  1. init. This is just two-phase initialization and opens so many questions.

That was one of the open questions; the argument you're presenting makes it clear it should return the layout and that's it. Renamed to next_layout accordingly. I'd like to remove the existing dimensions()/color_type methods from the trait as well. There's no point using separate method calls for communicating them.


  • For use cases where users can use rowpitch, […]

    That is ultimately the crux of the problem. I'd say it's pretty much the only problem, even though that does not appreciate the complexity. A lot of what you put forth is overly specific to solving one instance of it, obviously focusing on DDS. That's not bad, but take a step back to the larger picture. There's no good way to communicate all kinds of layouts that the caller could handle: tiled, planar, depths, sample types …. With the information being exchanged right now, no one can find a best match between the requirements of image's data types (and Limits) and what the decoder can provide. This won't be solved by moving complexity into the decoders; we need to get structured information out of them first, then make that decision and handle the resulting byte data in image's code.

    1. viewbox makes it more difficult to implement decoders.

    The point of the default implementation in this PR is that it is purely opt-in. Don't implement the method for decoders that cannot provide viewbox decoding and everything works correctly. The documentation seems to be confusing, point taken. We're always going to have inefficiencies; I'm for working through the distinct alternative layouts that allow an optimization one-by-one. More important for this PR immediately is what outcome a caller may want and what interface would give it to them—in this case I've worked on the use-case of extracting part of an atlas.

  • However, most image formats are based on lines of block (macro pixels). So we can do a trick.

    I'm not designing anything in this interface around a singular "trick", that's the wrong way around. That is how we got here. That's precisely what created ImageDecoderRect, almost to the dot. Falsehoods programmers assume about image decoding will lead to this breaking down and becoming horrible to maintain. The trick you mention should live in the decoder's trait impl and nowhere else, and we can bring it back where appropriate and possible. (Note that if you do it for a specific format, some formats will be even more efficient and not require you to decode anything line-by-line but to skip ahead, do tiles, … That's just to drive home the point that you do not want to do this above the decoder abstraction but below it, in the ImageDecoder impl.)

  • It is forced to do this, because of the API.

    A decoder impl is only forced to do anything if we force it via an interface—this PR does not; read_image_rect(&mut self, buf, viewbox) does force a decoder to be able to handle all possible viewboxes—this PR does not. I'm definitely taking worse short-term efficiency over code maintenance problems—the latter won't get us efficiency in the long run either.


When calling viewbox, is the offset of the rect relative to the (0,0) of the full image or the last set viewbox?

It's supposed to be relative to the full image. Yeah, that needs more documentation and pointers to the proper implementation.

@RunDevelopment

This comment was marked as outdated.

@197g

This comment was marked as resolved.

@RunDevelopment

This comment was marked as resolved.

@197g 197g force-pushed the decoder-metadata-interface branch 2 times, most recently from 1a114c3 to 306c6d2 Compare December 16, 2025 18:40
@197g
Member Author

197g commented Dec 16, 2025

Resolving the naming question as peek_layout, hopefully satisfactory for now.

@197g 197g force-pushed the decoder-metadata-interface branch 7 times, most recently from e8d2713 to 4325060 Compare December 22, 2025 17:54
@197g
Member Author

197g commented Dec 22, 2025

@fintelia I understand this is too big for a code-depth review but I'd be interested in directional input. Is the merging of 'animations' and simple images, as well as the optimization hint methods, convincing enough? Is the idea of returning data from read_image something that works for you? The struct is meant to be Default-able and fills in the information for into_frames(), but I'll sketch out some way of putting the metadata indicators in there (i.e. should you poll xmp for this frame, or wait until the end, or is it constant for the file).

As an aside, in wondermagick we basically find that sequence encoding is a missing API to match imagemagick. We can currently only do this with gif, despite tiff and webp having absolutely no conceptual problems with it (avif too, but imagemagick behaves oddly, does not match libavif's decoding, and the rust libraries don't provide it). It would be nice to make those traits symmetric so the direction here influences the encoding, too.

@197g 197g marked this pull request as ready for review December 22, 2025 22:17
@197g 197g force-pushed the decoder-metadata-interface branch from f6720de to c677c88 Compare December 29, 2025 16:30
Motivated by attempting integration with wondermagick. This is part of
the metadata group available after decoding and does, by definition, not
influence the layout. This placement also makes it impossible to be
interpreted that way. In the future the decoder may return a chain of
transformations that it undertook, this being (part of) the base state.
This whole chain would obviously only be available afterwards.
@197g
Member Author

197g commented Mar 8, 2026

@Shnatsel Sketch for the integration with wondermagick is here. Unfortunately does not compile yet since the integration crates depend on the crates.io version and not the git version—so they don't automatically work.

@Shnatsel
Member

Shnatsel commented Mar 9, 2026

Looking at the wondermagick sketch, why are we creating a luma8 image? That looks really odd:

    let mut pixels = DynamicImage::new_luma8(0, 0);

If this is a way to create a blank placeholder DynamicImage, then maybe add a convenience method to decode into a DynamicImage without having API users create a luma8 image first?

Member Author

@197g 197g left a comment


Continued as a comment since it's easier to read. @Shnatsel

Member

@Shnatsel Shnatsel left a comment


The shape of the public API looks good now. I left some nits but they're minor.

There's no option to decode without metadata, but given that you need metadata to correctly display an image anyway (Exif for orientation, ICC for color profiles), I think it's fair not to provide such an API.

What happens if decoding metadata returns an error? Does the whole decoding error out, do we silently ignore it and keep going with decoding the pixels, or something else? Do the decoding plugins have to do anything to match the desired behavior or does image handle it for them?

@197g 197g force-pushed the decoder-metadata-interface branch from 79e52c9 to cf98540 Compare March 14, 2026 04:12
@197g
Member Author

197g commented Mar 14, 2026

What happens if decoding metadata returns an error?

The error is propagated for all non-Unsupported error kinds. The unsupported category is silently ignored for the metadata filled by ImageReader itself. Some metadata categories can be retried; I also think there should eventually be a method to explicitly access metadata after peek_layout (for per-image data), but that can be added later. For metadata that is AfterImage (some in gif) there is no really good solution, neither in main nor this PR, but the other planned extension is:

  • Bring seek support to gif so we can access this efficiently.
  • Add ImageDecoder::finish for that metadata category and a method on ImageDecodedMetadata for indicating the intent of fetching the after-image metadata and implying not to fetch more images.
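The propagation rule from the first paragraph could be sketched like this; the error-kind enum here is an assumed simplification, not the crate's actual ImageError kinds.

```rust
// Simplified stand-in for the error classification; illustrative only.
#[derive(Debug, PartialEq)]
enum ErrorKind {
    Unsupported,
    Decoding,
    Io,
}

/// Turns a metadata fetch result into the described propagation behavior:
/// `Unsupported` is swallowed, all other kinds bubble up to the caller.
fn fill_optional_metadata<T>(fetched: Result<T, ErrorKind>) -> Result<Option<T>, ErrorKind> {
    match fetched {
        Ok(value) => Ok(Some(value)),
        Err(ErrorKind::Unsupported) => Ok(None), // silently skipped
        Err(other) => Err(other),                // e.g. I/O or parse failures
    }
}
```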

@Shnatsel
Member

The error is propagated for all non-Unsupported error kinds.

So if decoding the image succeeds but decoding one of the metadata fields fails, the whole decode call also fails? I think this may cause e.g. malformed auxiliary chunks in PNG to fail the entire decoding process, something that was specifically undesirable for Chromium.

Can we easily provide a generic, high-level "ignore failed parts of metadata" method that implements that behavior once and doesn't push it onto every implementer? Maybe decode_into() should accept an enum describing how the metadata should be handled, e.g. Discard/DiscardOnError/Require?
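The enum floated here, sketched out; `MetadataPolicy` and the `apply_policy` helper are purely hypothetical, not an API the crate exposes.

```rust
// Hypothetical knob for how metadata failures are treated per decode call.
#[derive(Clone, Copy)]
enum MetadataPolicy {
    Discard,        // never fetch/keep metadata
    DiscardOnError, // keep it when it parses, drop it on failure
    Require,        // any metadata error fails the whole decode
}

/// Applies the policy once, generically, instead of in every implementer.
fn apply_policy<T>(
    policy: MetadataPolicy,
    metadata: Result<T, String>,
) -> Result<Option<T>, String> {
    match (policy, metadata) {
        (MetadataPolicy::Discard, _) => Ok(None),
        (MetadataPolicy::DiscardOnError, Err(_)) => Ok(None),
        (MetadataPolicy::Require, Err(e)) => Err(e),
        (_, Ok(value)) => Ok(Some(value)),
    }
}
```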

@197g
Member Author

197g commented Mar 15, 2026

We could use the strictness configuration for this, if you think SpecCompliance::Lenient (which is also the default) is the right way to put that?

@Shnatsel
Member

No, I don't think it's the same knob. IIRC the spec compliance knob was originally created for interpreting pixel data more leniently. So those seem like two unrelated concepts to me.

@197g 197g force-pushed the decoder-metadata-interface branch from f47284c to 84c3143 Compare March 16, 2026 01:39
@Shnatsel
Member

I like the API added in Delay all metadata errors 👍

I was thinking about it on my own and had the same idea. It provides even more flexibility than a "how to handle metadata" enum, and doesn't require any additional knobs.

@197g
Member Author

197g commented Mar 16, 2026

It was easier to write as well. (At least in terms of testing). @fintelia I will polish the documentation a bit, intending to merge this soon. No more rebasing, the rest will be merges if any conflict arises like #2862, #2867. I'd appreciate if we can merge this one first though.

@fintelia
Contributor

Haven't been following this closely, but will try to give another round of feedback this week

Contributor

@fintelia fintelia left a comment


Left a bunch of comments. Haven't had a chance to fully read/consider the image_reader_type.rs changes, but I like the direction this is going

///
/// The layout returned by an implementation of [`ImageDecoder::peek_layout`] must match the
/// buffer expected in [`ImageDecoder::read_image`].
fn peek_layout(&mut self) -> ImageResult<crate::ImageLayout>;
Contributor


I think I'd prefer to call this layout rather than peek_layout. Since if we make calling it optional (see other comments) then it just basically becomes a getter like any of the others.

Comment on lines +63 to +65
/// This must be called before a call to [`Self::read_image`] to ensure that the initial
/// metadata has been read. In contrast to a constructor it can be called after configuring
/// limits and context which avoids resource issues for formats that buffer metadata.
Contributor


Could we make it optional to call this method, and just say that a call to read_image implicitly reads the initial metadata if necessary? Readers will usually need it to allocate the buffer, but some use cases might transfer the info out-of-band.

Member Author

@197g 197g Mar 23, 2026


I think there should be some method here that necessarily precedes the other metadata calls because it makes the contract rather easy to describe. It may be awkward to surface all kinds of errors in the metadata methods especially considering that the other kinds of metadata (may) get silent failure models. Having this method allows better error ergonomics, I hope.

And the basic layout requirements are always convenient to have, so in terms of API ergonomics it seems prudent to just include them. All that said, maybe we should not return ImageLayout but another wrapper here. If we ever add fields intended for the decoder to communicate per-image analogues of format_attributes (e.g. information about how to negotiate decoding color conversion) then it would be odd to add them to ImageLayout.

Contributor


I'm concerned that the contract won't be obvious to someone reading the code. Many people aren't going to consult our docs and neither peek_layout nor the names of the other metadata methods make it clear that there's an order dependency between them.

And if we do have order requirements between methods, it also becomes important that all decoders enforce those requirements. It would be very unfortunate for someone to test their code with one/several formats and then discover at runtime that other formats don't work because the API was being misused. And since calling methods in the wrong order is a bug, the right error handling strategy is probably to panic!...

Before going down this route, I'd like to understand a bit more about how this improves error handling from metadata methods. I/O errors are still going to be possible from any of them, right? Is the idea that bad magic bytes or issues like that would only be triggered by peek_layout and not read_image or any of the metadata methods?

Member Author

@197g 197g Mar 25, 2026


And if we do have order requirements between methods […].

Such is the nature of a protocol. That holds for TCP sockets and yet bind and read are both defined on a file descriptor. Maybe prepare_layout then? And the error strategy is probably a proper error, not panic.

I'm not too concerned about the caller requirements, honestly. The concern that other external direct users of the trait may be confused is minor to me. I have yet to see any evidence that this happens at all, and I'll reiterate that the interface is supposed to be almost unidirectional, from some supplier of an ImageDecoder to image (ImageReader).

Before going down this route, I'd like to understand a bit more about how this improves error handling from metadata methods.

The method would be responsible for returning an appropriate error when more_frames was not checked, for instance. That makes a lot more sense than demanding it from all metadata methods when the metadata is InHeader (for PerImage, maybe). Overall, the complication that the position of metadata in the file may be completely unrelated to the position of what constitutes an 'image' means I'd rather avoid any hard sequencing dependence between those calls—on the other hand the layout is definitely always per-image by definition.

Member


And since calling methods in the wrong order is a bug

Typestate lets us statically enforce ordering so that doesn't happen.
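A minimal typestate sketch of that idea, with illustrative types: `read_image` only exists on the state returned by `peek_layout`, so calling it out of order fails to compile rather than at runtime.

```rust
// States of a hypothetical decoder, encoded as types.
struct Configured; // limits set, header not yet read
struct Prepared {
    width: u32,
    height: u32,
}

impl Configured {
    /// Consumes the configured state; only the returned `Prepared` state
    /// offers `read_image`, so mis-ordering is a compile error.
    fn peek_layout(self) -> Result<Prepared, String> {
        // A real decoder would parse the header here.
        Ok(Prepared { width: 4, height: 2 })
    }
}

impl Prepared {
    fn read_image(&mut self, buf: &mut [u8]) -> Result<(), String> {
        if buf.len() != (self.width * self.height) as usize {
            return Err("buffer does not match the announced layout".into());
        }
        buf.fill(0);
        Ok(())
    }
}
```

The usual trade-off applies: typestate does not mix well with `Box<dyn Trait>`, which is exactly the unidirectional plugin use the trait is meant for, so runtime errors may still be the pragmatic choice there.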

the interface is supposed to be almost unidirectional, from some supplier of an ImageDecoder to image (ImageReader).

So only when implementing decoders for the plugin interface? Yeah, in that case relying on a bit of documentation doesn't sound too bad to me.

Member Author

@197g 197g Mar 29, 2026


Just about every method on ImageReader just calls peek_layout right at the start so it doesn't seem like there's anything (other than setting limits) that you can do on a decoder without it.

Some methods do not call it, only those that interact with a 'current frame'. Obviously format_attributes does not call it but for instance some uses of calling metadata also do not call it. Reconfiguring the limits and/or reconfiguring strictness (not included in this PR) would also never call it.

The call in animation_attributes is admittedly confusing. Its main use currently demands it to be available with into_frames and the model is copied from the current code where we assume metadata to be available after the header (both of these were already the case). It's not perfect for animations yet, only forward compatible without being a regression to the current model. The method would probably be better moved like the other metadata retrievals but that can also be a future addition where we do not rely on Frames<'_> as much.

There's also one additional use in into_frames which uses the call to detect an end-of-image through NoMoreData. That's necessary as otherwise more_images is ill-defined, as it does not claim anything about the current state (note that, in contrast, has_image could not be defaulted).

So, its intended use is to synchronize the decoder's current state and our external view of it, especially when the decoder is passed as a (boxed) value. This also explains why it is called so often at the start of the exposed methods. That's going to combat the need to cram functionality into monolithic methods with redundant implementations. If we do incremental decoding I'd propose adding attributes to its return value that indicate the current position of decoding for restarting—so we can restart even without exfiltrating that progress from an error return and remembering it redundantly as a sibling field to the decoder when it's clearly stored in the decoder, too. (And we can diagnose the error that a from_decoder had an initial state in a partially decoded image.) I think that'll set us on a much better path to read_rect and incremental reading. It's a coroutine control flow rather than read_with_callback and that composes much better.

That we do not require the decoder to be in any initial state is a happy little side effect of the synchronization role that I will definitely want to cash in on for the previously noted restart-after-WouldBlock that BMP, png, tiff want.

Contributor

Sorry if I'm being dense, but what precisely does the peek_layout method do? The docs say it consumes the image header, but clearly for something like PNG it does far more than just read the IHDR. Should I interpret it as "read until the next instance of pixel data then return the layout for those pixels"?

Member Author

@197g 197g Mar 29, 2026

Pretty much? "Put yourself into a state where the next unit of pixel data can be consumed and tell me the buffer to do that". (Or tell me "how" to do that if we want to extend it with another mechanism than simple read_image later on).

Member Author

Crucially, it should be idempotent: barring modifications made via the other methods, it must be safe to call multiple times in a row, and doing so should result in equivalent descriptions. I'll add that to the documentation.
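To make that contract concrete, here is a minimal toy sketch of the driving protocol; ToyDecoder, Layout and the error type are illustrative stand-ins, not the real trait. peek_layout stays idempotent until read_image advances the decoder, and the driver re-peeks for each image in a sequence:

```rust
// Stand-in for the layout information peek_layout would return.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Layout { width: u32, height: u32, bytes_per_pixel: usize }

impl Layout {
    fn buffer_len(&self) -> usize {
        self.width as usize * self.height as usize * self.bytes_per_pixel
    }
}

// Toy decoder with a fixed number of remaining images.
struct ToyDecoder { frames_left: u32, layout: Layout }

impl ToyDecoder {
    // Idempotent: repeated calls without an intervening read_image
    // return the same layout; None signals end-of-sequence.
    fn peek_layout(&mut self) -> Option<Layout> {
        if self.frames_left > 0 { Some(self.layout) } else { None }
    }

    fn read_image(&mut self, buf: &mut [u8]) -> Result<(), &'static str> {
        if buf.len() != self.layout.buffer_len() {
            return Err("buffer size mismatch");
        }
        buf.fill(0xAB); // pretend to decode pixel data
        self.frames_left -= 1;
        Ok(())
    }
}

fn main() {
    let mut dec = ToyDecoder {
        frames_left: 2,
        layout: Layout { width: 2, height: 2, bytes_per_pixel: 3 },
    };
    let mut decoded = 0;
    // Driver loop: peek, allocate, read; then start over for the next image.
    while let Some(layout) = dec.peek_layout() {
        assert_eq!(dec.peek_layout(), Some(layout)); // idempotent peek
        let mut buf = vec![0u8; layout.buffer_len()];
        dec.read_image(&mut buf).unwrap();
        decoded += 1;
    }
    assert_eq!(decoded, 2);
    println!("decoded {decoded} frames");
}
```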

Member Author

One good alternative name for this might be prepare_image. The documentation would then have us use the following sequence of calls:

/// ```text,bnf
/// decoding sequence = configure, { decode image }, "finish", { metadata }
///
/// decode image =
///    "prepare_image", { metadata | "prepare_image" }, "read_image"
///
/// configure = "set_limits"
///
/// metadata = "xmp_metadata" | "icc_profile" | "exif_metadata" | "iptc_metadata"
/// ```

Also, I'd then move ImageLayout into a layout field of a DecoderPreparedImage struct, and if we need to communicate more data than the layout itself we'd extend the latter rather than the former. (I do want ImageLayout to describe the shape of DynamicImage in any case; we're dearly missing that in a bunch of APIs, for instance when passing multiple parameters to the encoder.)
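As a rough illustration of that split (field names are hypothetical, only the type names follow the naming above): the layout type stays minimal while the prepared-image struct is the extension point.

```rust
// Minimal layout description, reusable by encoders as well.
struct ImageLayout { width: u32, height: u32, bytes_per_pixel: usize }

// Wrapper returned per prepared image; per-image data beyond the
// layout (frame position, timing, ...) would be added here instead
// of growing ImageLayout itself.
struct DecoderPreparedImage {
    layout: ImageLayout,
}

// Hypothetical helper mirroring allocate_for from the PR description.
fn allocate_for(prepared: &DecoderPreparedImage) -> Vec<u8> {
    let l = &prepared.layout;
    vec![0u8; l.width as usize * l.height as usize * l.bytes_per_pixel]
}

fn main() {
    let prepared = DecoderPreparedImage {
        layout: ImageLayout { width: 4, height: 2, bytes_per_pixel: 4 },
    };
    let buf = allocate_for(&prepared);
    assert_eq!(buf.len(), 32);
    println!("allocated {} bytes", buf.len());
}
```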

Comment on lines +205 to +210
/// The x-coordinate of the top-left rectangle of the image relative to canvas indicated by the
/// sequence of frames.
pub x: u32,
/// The y-coordinate of the top-left rectangle of the image relative to canvas indicated by the
/// sequence of frames.
pub y: u32,
Contributor

Does this mean that decoders are now expected to return raw frames rather than compositing them? At the moment we have a mixture of approaches.

I've thought about trying to centralize all the compositing logic into this crate, but the big downside I see is that it makes the underlying backend crates much more annoying to use for animations. There are also edge cases, like the handling of background colors, that might need more attention.
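For reference, the raw-frame model boils down to a blit at the frame's (x, y) offset onto a shared canvas. A minimal sketch with single-byte pixels and plain replacement; real dispose and blend handling is where the complexity mentioned above comes in:

```rust
// Blit a raw frame (frame_w wide) onto a canvas (canvas_w wide) at
// offset (x, y), replacing the covered pixels. One byte per pixel
// keeps the sketch short; alpha blending and dispose modes are omitted.
fn composite(
    canvas: &mut [u8],
    canvas_w: usize,
    frame: &[u8],
    frame_w: usize,
    x: usize,
    y: usize,
) {
    let frame_h = frame.len() / frame_w;
    for row in 0..frame_h {
        let dst = (y + row) * canvas_w + x;
        canvas[dst..dst + frame_w]
            .copy_from_slice(&frame[row * frame_w..(row + 1) * frame_w]);
    }
}

fn main() {
    let mut canvas = vec![0u8; 4 * 4]; // 4x4 canvas
    let frame = vec![9u8; 2 * 2];      // 2x2 raw frame
    composite(&mut canvas, 4, &frame, 2, 1, 1);
    assert_eq!(canvas[1 * 4 + 1], 9); // inside the frame rectangle
    assert_eq!(canvas[2 * 4 + 2], 9);
    assert_eq!(canvas[0], 0);         // outside stays untouched
    println!("{canvas:?}");
}
```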


Member

Displaying animations needs compositing, but image editors need raw frames. I think we should expose both in the long run; this is going to come up as a requirement sooner or later. But that doesn't have to be part of this PR or even of the next release.

> I've thought about trying to centralize all the compositing logic into this crate, but the big downside I see is that it makes the underlying backend crates much more annoying to use for animations.

I think the ideal scenario would be making a standalone crate in the vein of https://crates.io/crates/gif-dispose that doesn't depend on image and implements GIF, APNG and WebP compositing. Then it can be used either from the format decoder crates directly or by image to expose the composited API.

Right now we have a separate compositing implementation for each format, so fixes and optimizations have to be applied separately to each; this is something I have long wanted to change.

Member Author

@197g 197g Mar 29, 2026

Some formats have compositing within a single frame of displayed information, too. JPEG XL refers to those zero-length constructions as layers; creating one composite still image from them is an intended use case. The reference decoder composites them unless requested otherwise.

Edit: and a somewhat random thought: GIF's Plain Text Extension block is supposed to be composited onto the image, but no one to my knowledge is reckless enough to implement this. That would be an extremely big step up in the complexity of blend modes. One could also regard SVG as a complicated stack of blend modes. We probably want to steer very clear of that complexity here but still provide some practical subset.

/// [`ImageDecoder::read_image`] with kind set to [`None`](crate::io::SequenceControl::None),
/// which is also treated as end of stream. This may be used by decoders which can not
/// determine the number of images in advance.
pub fn into_frames(mut self) -> Frames<'stream> {
Contributor

We should be clear about whether this also applies to image sequences.

Comment on lines +374 to +379
/// Result of [`ImageReader::decode_into`] that provides access to metadata.
pub struct DecodedImageMetadata<'reader> {
inner: &'reader mut (dyn ImageDecoder + 'reader),
attributes: &'reader DecodedImageAttributes,
metadata_buffers: &'reader mut MetadataBuffers,
}
Contributor

I haven't had a chance to think it through in detail, but it might make sense to have this be a flat struct containing the metadata:

pub struct DecodeImageMetadata {
    pub orientation: Option<Vec<u8>>,
    pub exif: Option<Vec<u8>>,
    ...
}


@197g 197g force-pushed the decoder-metadata-interface branch from 68b3037 to 0d413c2 on March 24, 2026 12:25
@197g 197g force-pushed the decoder-metadata-interface branch from 0d413c2 to 278bb47 on March 30, 2026 01:00
/// ```
pub fn decode_into(&mut self, buffer: &mut [u8]) -> ImageResult<DecodedImageMetadata<'_>> {
let layout = self.inner.peek_layout()?;
self.fill_header_metadata_if_any();
Contributor

I've thought about this more, and I don't think it is reasonable to say that info like the image color space or the orientation might just not be available when decoding individual frames. Incremental, frame-at-a-time decoding isn't very useful if we can get to the end and then say "oh, by the way, make sure to rotate the animation before displaying it". If we do that, users are effectively required to buffer the entire animation in memory before they can display it.

Especially since most users are going to be operating on a byte slice or a File object, both of which easily allow jumping back and forth within the file.

Member Author

Alright, that is very reasonable; I've removed AfterFinish then. This will require patches to gif, png, and webp, but that is just fixing a bug those decoders currently have anyway.

197g added 3 commits April 12, 2026 08:07
Consolidates the variants so that all supported types of metadata must
guarantee that the data is actually present. This merely requires the
decoder to be able to seek, which is usually already the case and
reasonably implementable.

This will require support in: gif, png, webp
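A sketch of why seeking is sufficient here (std-only; the helper name and offsets are hypothetical): the decoder can save its position, jump to metadata stored elsewhere in the stream, read it, and resume decoding where it left off.

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

// Read `len` bytes of metadata located at `offset`, then restore the
// stream position so decoding can continue undisturbed.
fn read_trailing_metadata<R: Read + Seek>(
    r: &mut R,
    offset: u64,
    len: usize,
) -> std::io::Result<Vec<u8>> {
    let saved = r.stream_position()?;
    r.seek(SeekFrom::Start(offset))?;
    let mut buf = vec![0u8; len];
    r.read_exact(&mut buf)?;
    r.seek(SeekFrom::Start(saved))?; // resume where we left off
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    // "pixels" followed by a 4-byte metadata blob at offset 6.
    let mut stream = Cursor::new(b"pixelsEXIF".to_vec());
    stream.seek(SeekFrom::Start(3))?; // pretend we are mid-decode
    let meta = read_trailing_metadata(&mut stream, 6, 4)?;
    assert_eq!(meta, b"EXIF");
    assert_eq!(stream.stream_position()?, 3); // position restored
    Ok(())
}
```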