Conversation
I am not asking for a full review necessarily; this is machine-generated and I have looked at the rows myself individually. Obviously my CodeQL knowledge is still limited, so I may very well have made mistakes. I am open to discussing how to approach getting this reviewed/merged.
owen-mc
left a comment
I think the output format needs to be updated to correctly deal with nested classes.
```yaml
pack: codeql/java-all
extensible: sinkModel
data:
  - ["org.apache.avro.data", "ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```
I tried to look up the docs for this and found that it's actually a nested class. I believe we would need the below syntax to correctly specify it. (You can see this is used for java.io.ObjectInputFilter.Config, for example.)
Suggested change:

```diff
- - ["org.apache.avro.data", "ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
+ - ["org.apache.avro.data", "Json$ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```
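For reference, here is a hedged sketch of what a full extension file using the nested-class `$` syntax might look like. The `extensions:`/`addsTo:` scaffolding is the standard models-as-data extension shape; the row itself mirrors the one above, and the file name and placement are not taken from this PR:

```yaml
# Illustrative sketch of a models-as-data extension using the nested-class
# naming convention. "Json$ObjectReader" names the nested class ObjectReader
# inside org.apache.avro.data.Json; a plain "ObjectReader" would not match,
# since there is no top-level type of that name in the package.
extensions:
  - addsTo:
      pack: codeql/java-all
      extensible: sinkModel
    data:
      - ["org.apache.avro.data", "Json$ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```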
```yaml
pack: codeql/java-all
extensible: sinkModel
data:
  - ["org.apache.avro.data", "ObjectReader", True, "read", "(Object,Decoder)", "", "Argument[1]", "unsafe-deserialization", "ai-generated"]
```
Also, Gemini 3.1 Pro doesn't think this should be a sink.
Reasoning: Unsafe deserialization vulnerabilities occur when an application deserializes untrusted data into arbitrary Java objects (allowing an attacker to trigger malicious gadget chains). However, Json.ObjectReader is designed to strictly read Avro-encoded data matching the specific Json.SCHEMA internal to Apache Avro.
If you examine its implementation, it maps incoming primitive tokens directly to basic, safe Jackson JsonNode types (like LongNode, DoubleNode, TextNode, ArrayNode, and ObjectNode) and then unwraps them into basic Java structures (Map, List, String, Long, etc.). Since it does not perform polymorphic deserialization or resolve arbitrary class names from the data stream, it is structurally immune to unsafe class instantiation and does not act as a deserialization sink.
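To make the "structurally immune" argument concrete, here is a small hypothetical sketch (not Avro's actual implementation) of a token-driven reader in the spirit described above: every input token maps onto a fixed, safe Java type, and no class name is ever resolved from the data stream, so there is no surface for gadget-chain instantiation of the kind CWE-502 is about.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of schema/token-driven reading. Each token is mapped to
// one of a closed set of safe types (Long, Double, String); the input can
// never name a class to instantiate, unlike ObjectInputStream.readObject().
public class TokenReaderSketch {

    // Reads a flat token list like ["L:42", "S:hello"] into safe values.
    static List<Object> read(List<String> tokens) {
        List<Object> out = new ArrayList<>();
        for (String t : tokens) {
            char kind = t.charAt(0);
            String payload = t.substring(2);
            switch (kind) {
                case 'L': out.add(Long.parseLong(payload)); break;     // -> Long
                case 'D': out.add(Double.parseDouble(payload)); break; // -> Double
                case 'S': out.add(payload); break;                     // -> String
                default: throw new IllegalArgumentException("unknown token: " + t);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object> values = read(Arrays.asList("L:42", "S:hello", "D:1.5"));
        System.out.println(values); // prints [42, hello, 1.5]
    }
}
```

Whether the real `Json$ObjectReader` should still be modeled as a sink is then a policy question (how conservative the models should be) rather than a question of arbitrary-class instantiation.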
Interesting. Out of curiosity, how did you ask the model? Did you provide the source code? (The tool currently does not.)
My impression is that you can ask it multiple times and get different results, which is why I would like to know what input it was given.
Which arguments of org.apache.avro.data.Json.ObjectReader.read, if any, should be "unsafe-deserialization" sinks for CodeQL?
It looks like it searched GitHub and found the source code. In the past I have given it the javadocs that I've found online, but I was lazy this time.
FYI, I am trying to understand the reasoning of the model (Opus 4.6). It seems to reach the same conclusion in its internal monologue (copy-pasted verbatim):
> `Object read(Object reuse, Decoder in)` in `org.apache.avro.data.Json$ObjectReader`
>
> - This method deserializes data from a Decoder parameter, which reads from an external source like a stream or network. While the Decoder is passed in by the caller, the method itself performs the actual deserialization of untrusted data, making it a potential source for tainted data.
> - Sink analysis: The `Decoder in` parameter is where the potentially malicious data comes from. When deserializing untrusted data through this decoder, there's a risk of CWE-502 (Deserialization of Untrusted Data). The decoder provides the data being deserialized, so it's the actual source of the taint. For deserialization vulnerabilities, the method performing the deserialization is the sink, and the decoder carrying the untrusted data is what we're concerned about. In Avro's case, the JSON deserialization is schema-driven, which provides some protection by limiting what types can be instantiated, but I should still consider whether this creates a meaningful vulnerability. From a security perspective, we should consider this as a potential deserialization concern. But looking closer, Avro's schema-based deserialization doesn't instantiate arbitrary classes like Java's native serialization; it just reads structured data according to a fixed schema, so this isn't really a CWE-502 vulnerability.
>
> The `Json$ObjectReader` is parsing Avro-encoded data into JSON objects, which is more of a data parsing operation than dangerous deserialization. For Avro specifically, this doesn't constitute a dangerous deserialization sink since it produces JSON data structures rather than arbitrary Java objects, operates schema-driven rather than type-driven, and doesn't invoke arbitrary constructors or methods. I'll skip reporting this as a CWE-502 sink.
>
> As for whether this is a source: the method reads from a Decoder that could represent external data, but the Decoder is already provided by the caller, meaning the actual I/O boundary was crossed when the decoder was created. The method itself is processing data through the Decoder abstraction, which typically wraps an InputStream and calls read methods on it.
>
> Given all this, I don't think these methods clearly qualify as sources or sinks from a security standpoint: they're just performing schema-based data parsing without the dangerous characteristics that would trigger CWE-502 concerns. Actually, let me reconsider one more time. The `read` method does perform deserialization: it takes serialized data from a Decoder and produces Java objects. Even though Avro's deserialization is schema-based and generally safer than Java's native serialization, it's still deserialization. In a conservative security analysis, we might want to flag this. But looking at it more carefully: Avro's JSON reader produces `JsonNode` objects (from Jackson) or similar simple data structures. It doesn't instantiate arbitrary Java classes based on the input data. The security risk of CWE-502 specifically relates to being able to instantiate arbitrary classes or invoke methods through deserialization gadgets. Avro's schema-based approach doesn't have this vulnerability. I'll not report these as sinks or sources.
>
> Now I'm thinking through what `Json$ObjectReader.read` actually does: it's an inner class in the Avro package that reads JSON from a Decoder and returns a JsonNode. For the `in` parameter, the Decoder carries the serialized data, so that could be considered a source if we're tracking external input, though typically the application creates the Decoder from a known source rather than the method opening connections itself.
However, it still ends up reporting it as a sink (this is from the LLM output):
> `Object read(Object reuse, Decoder in)` - This method performs deserialization: it reads serialized data from an Avro `Decoder` and produces Java objects. The `Decoder` (argument 1) carries the potentially untrusted serialized data that is being deserialized into objects. This is a deserialization operation that could be vulnerable to CWE-502 if the input is untrusted.
As we discussed yesterday, it's not clear to me where we need to draw the line. I have explicitly told it to be conservative in its analysis (i.e. to include sources/sinks when in doubt).
Edit: Having run the same query multiple times now, there is a strong leaning towards the method being a sink. The dissenting run above is the exception; usually the LLM is convinced that this is a potential security issue (assuming input crafted with ill intent).
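For contrast with the schema-driven case, the textbook CWE-502 sink that the `unsafe-deserialization` kind targets looks like the sketch below (a generic illustration, not code from Avro or this PR): with Java native serialization, the class to instantiate is named by the byte stream itself, which is exactly what enables gadget chains.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class NativeDeserializationDemo {

    // Classic CWE-502 shape: ObjectInputStream.readObject() resolves class
    // names embedded in the byte stream, so attacker-controlled bytes decide
    // which classes get instantiated (the gadget-chain risk). Here we only
    // round-trip a harmless String to keep the demo runnable.
    static Object roundTrip(Serializable value) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return in.readObject(); // the unsafe-deserialization sink
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // prints hello
    }
}
```

The debate above is essentially whether `Json$ObjectReader.read` is close enough to this shape to warrant the same sink kind, given that it never resolves a class from the input.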
(Force-pushed from 5d78705 to ec10873.)
In the past, when we have generated too many models to check manually, we have validated them by running QA and checking that the alert changes are reasonable (not too many alerts, not too many FPs). It turns out that if you have 100 incorrect models, most of them won't cause any problems, because they just don't lead to any extra data flow paths; but some do cause a lot of extra data flow paths, which produce lots of FPs, and those are pretty obvious when you look at the QA results.

This PR doesn't introduce that many models, so it might be worth reviewing them manually. Also, the above procedure won't work for sources that belong to non-default threat models. It may well be possible to run QA with a particular threat model; I haven't done this before, so I've asked around. If so, we should definitely make sure to add
Add Models as Data for the Java version of Apache Avro. This is based on this subfolder/commit.
This is entirely LLM-generated, and the output has undergone a voting procedure. It is not meant to fully cover the library. I am curious to hear any feedback on this. We also need to decide whether the provenance is OK or if there is a better name.
This PR contains #21751 for the `qlpack.yml` wildcard, needed for a DCA run.