Add MaDs for Apache Avro #21754

Draft

jacknojo wants to merge 7 commits into main from jacknojo/add_llm_generated_mads_for_avro

Conversation

@jacknojo
Contributor

@jacknojo jacknojo commented Apr 24, 2026

Add Models as Data for the Java version of Apache Avro. This is based on this subfolder/commit.

This is entirely LLM-generated, and the output has undergone a voting procedure. It is not meant to fully cover the library. I am curious to hear any feedback on this. We also need to decide whether the provenance is OK or whether there is a better name.

This PR contains #21751 for the qlpack.yml wildcard, needed for a DCA run.
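For readers unfamiliar with the Models as Data format, a `sinkModel` row follows a fixed column layout. Below is a minimal sketch of the extension-file shape, with the column names taken from the CodeQL MaD documentation; the Avro row itself is illustrative of what this PR adds:

```yaml
extensions:
  - addsTo:
      pack: codeql/java-all
      extensible: sinkModel
    # Columns: [package, type, subtypes, name, signature, ext, input, kind, provenance]
    data:
      - ["org.apache.avro.data", "Json$ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```

Here `Argument[1]` marks the second parameter (the `Decoder`) as the sink, and the `provenance` column is where the `ai-generated` naming question below comes in.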

Copilot AI review requested due to automatic review settings April 24, 2026 11:37
@jacknojo jacknojo requested a review from a team as a code owner April 24, 2026 11:37
Contributor

Copilot AI left a comment


Copilot wasn't able to review any files in this pull request.

@github-actions github-actions Bot added the Java label Apr 24, 2026
@jacknojo jacknojo requested a review from michaelnebel April 24, 2026 11:38
@jacknojo
Contributor Author

I am not necessarily asking for a full review: this is machine-generated, and I have looked at each row myself individually. Obviously my CodeQL knowledge is still limited, so I may very well have made mistakes.

I am open to discussing how to approach getting this reviewed/merged.

@jacknojo jacknojo removed the request for review from michaelnebel April 27, 2026 13:24
@jacknojo jacknojo marked this pull request as draft April 27, 2026 13:24
@jacknojo jacknojo changed the base branch from main to jacknojo/move_java_generated_mads April 27, 2026 13:42
Contributor

@owen-mc owen-mc left a comment


I think the output format needs to be updated to correctly deal with nested classes.

```yaml
pack: codeql/java-all
extensible: sinkModel
data:
  - ["org.apache.avro.data", "ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```
Contributor


I tried to look up the docs for this and found that it's actually a nested class. I believe we would need the below syntax to correctly specify it. (You can see this is used for java.io.ObjectInputFilter.Config, for example.)

Suggested change:

```diff
-  - ["org.apache.avro.data", "ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
+  - ["org.apache.avro.data", "Json$ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```

```yaml
pack: codeql/java-all
extensible: sinkModel
data:
  - ["org.apache.avro.data", "ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```
Contributor


Also, Gemini 3.1 Pro doesn't think this should be a sink.

Reasoning: Unsafe deserialization vulnerabilities occur when an application deserializes untrusted data into arbitrary Java objects (allowing an attacker to trigger malicious gadget chains). However, Json.ObjectReader is designed to strictly read Avro-encoded data matching the specific Json.SCHEMA internal to Apache Avro.

If you examine its implementation, it maps incoming primitive tokens directly to basic, safe Jackson JsonNode types (like LongNode, DoubleNode, TextNode, ArrayNode, and ObjectNode) and then unwraps them into basic Java structures (Map, List, String, Long, etc.). Since it does not perform polymorphic deserialization or resolve arbitrary class names from the data stream, it is structurally immune to unsafe class instantiation and does not act as a deserialization sink.
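For contrast, the textbook CWE-502 sink that the `unsafe-deserialization` kind is designed for is Java's native serialization, where the byte stream itself (not a fixed schema) decides which classes get instantiated. A minimal stdlib sketch of that pattern (the class name `Cwe502Demo` is mine, not from the PR; the round-trip uses a harmless value, but with attacker-controlled bytes `readObject()` is the sink):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class Cwe502Demo {
    // Classic unsafe-deserialization sink: ObjectInputStream.readObject()
    // instantiates whatever serializable class the byte stream names,
    // which is what enables gadget-chain attacks on untrusted input.
    static Object readUntrusted(byte[] bytes) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Round-trip a benign value to exercise the sink.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject("hello");
        }
        System.out.println(readUntrusted(buf.toByteArray()));
    }
}
```

The distinction Gemini draws is exactly this: Avro's reader maps tokens to a fixed set of JsonNode types, whereas `readObject()` resolves arbitrary class names from the stream.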

Contributor Author


Interesting. Out of curiosity, how did you ask the model? Did you provide the source code? The tool (currently) does not.
My impression is that you can ask it multiple times and get different results, which is why I would like to know what input it was given.

Contributor


> Which arguments of org.apache.avro.data.Json.ObjectReader.read, if any, should be "unsafe-deserialization" sinks for CodeQL?

It looks like it searched GitHub and found the source code. In the past I have given it the javadocs that I've found online, but I was lazy this time.

Contributor Author

@jacknojo jacknojo Apr 30, 2026


FYI, I am trying to understand the reasoning of the model (Opus 4.6). It seems to reach the same conclusion in its internal monologue (copy-pasted verbatim):

Object read(Object reuse, Decoder in) in org.apache.avro.data.Json$ObjectReader

  • This method deserializes data from a Decoder parameter, which reads from an external source like a stream or network. While the Decoder is passed in by the caller, the method itself performs the actual deserialization of untrusted data, making it a potential source for tainted data.
  • Sink analysis: The Decoder in parameter is where the potentially malicious data comes from. When deserializing untrusted data through this decoder, there's a risk of CWE-502 (Deserialization of Untrusted Data). The decoder provides the data being deserialized, so it's the actual source of the taint.

For deserialization vulnerabilities, the method performing the deserialization is the sink, and the decoder carrying the untrusted data is what we're concerned about. In Avro's case, the JSON deserialization is schema-driven, which provides some protection by limiting what types can be instantiated, but I should still consider whether this creates a meaningful vulnerability. a security perspective, we should consider this as a potential deserialization concern. But looking closer, Avro's schema-based deserialization doesn't instantiate arbitrary classes like Java's native serialization—it just reads structured data according to a fixed schema, so this isn't really a CWE-502 vulnerability. The Json$ObjectReader is parsing Avro-encoded data into JSON objects, which is more of a data parsing operation than dangerous deserialization.

For Avro specifically, this doesn't constitute a dangerous deserialization sink since it produces JSON data structures rather than arbitrary Java objects, operates schema-driven rather than type-driven, and doesn't invoke arbitrary constructors or methods. I'll skip reporting this as a CWE-502 sink.

As for whether this is a source—the method reads from a Decoder that could represent external data, but the Decoder is already provided by the caller, meaning the actual I/O boundary was crossed when the decoder was created. The method itself is processing data through the Decoder abstraction, which typically wraps an InputStream and calls read methods on it.

Given all this, I don't think these methods clearly qualify as sources or sinks from a security standpoint—they're just performing schema-based data parsing without the dangerous characteristics that would trigger CWE-502 concerns. Actually, let me reconsider one more time. The read method does perform deserialization - it takes serialized data from a Decoder and produces Java objects. Even though Avro's deserialization is schema-based and generally safer than Java's native serialization, it's still deserialization. In a conservative security analysis, we might want to flag this.

But looking at it more carefully - Avro's JSON reader produces JsonNode objects (from Jackson) or similar simple data structures. It doesn't instantiate arbitrary Java classes based on the input data. The security risk of CWE-502 specifically relates to being able to instantiate arbitrary classes or invoke methods through deserialization gadgets. Avro's schema-based approach doesn't have this vulnerability.

I'll not report these as sinks or sources. Now I'm thinking through what Json$ObjectReader.read actually does - it's an inner class in the Avro package that reads JSON from a Decoder and returns a JsonNode. For the in parameter, the Decoder carries the serialized data, so that could be considered a source if we're tracking external input, though typically the application creates the Decoder from a known source rather than the method opening connections itself.

However, it still ends up reporting it as a sink (this is from the LLM output):

Object read(Object reuse, Decoder in) - This method performs deserialization: it reads serialized data from an Avro Decoder and produces Java objects. The Decoder (argument 1) carries the potentially untrusted serialized data that is being deserialized into objects. This is a deserialization operation that could be vulnerable to CWE-502 if the input is untrusted.

As we discussed yesterday, it's not clear to me where we need to draw the line. I have explicitly told it to be conservative in its analysis (including sources/sinks when in doubt).

Edit: Having run the same query multiple times now, there is a strong leaning towards the method being a sink. The run above is not typical; usually the LLM is convinced that this is a potential security issue (assuming input made with ill intent).

@jacknojo jacknojo force-pushed the jacknojo/add_llm_generated_mads_for_avro branch from 5d78705 to ec10873 Compare April 28, 2026 11:24
@jacknojo jacknojo changed the base branch from jacknojo/move_java_generated_mads to main April 28, 2026 11:24
@owen-mc
Contributor

owen-mc commented Apr 28, 2026

In the past, when we have generated too many models to check manually, we have validated them by running QA and checking that the alert changes are reasonable (not too many, and not too many FPs). It turns out that if you have 100 incorrect models, most of them won't cause any problems, because they simply don't lead to any extra data flow paths. Some, however, do cause a lot of extra data flow paths, and those produce lots of FPs, which are pretty obvious when you look at the QA results.

This PR doesn't introduce that many models, so it might be worth reviewing them manually. Also, the above procedure won't work for sources that belong to non-default threat models. It may well be possible to run QA with a particular threat model enabled; I haven't done this before, so I've asked around. If so, we should definitely make sure to add `file` to the default remote threat model, so that those models are exercised.
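For reference, enabling a non-default threat model is itself done with a data extension. A hedged sketch based on the CodeQL threat-model documentation (the `local` group, which includes `file`-type sources, is shown here as an example; I have not verified this against a QA run):

```yaml
extensions:
  - addsTo:
      pack: codeql/threat-models
      extensible: threatModelConfiguration
    # Columns: [kind, enabled, priority]
    data:
      - ["local", true, 0]
```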
