Conversation
```c
#define htobe16t(x) htons(x)
#define htobe32t(x) htonl(x)
#define htobe64t(x) htonll(x)
#define be16toht(x) ntohs(x)
#define be32toht(x) ntohl(x)
#define be64toht(x) ntohll(x)
```
The 16-bit variants aren't used in this implementation.
I would rather leave them in to minimize differences from H5Zlz4.c.
supportApp/lz4hdf5Src/lz4hdf5.c
Outdated
```c
/* This file implements LZ4 compression/decompression where the data
 * has a 12-byte header that specifies the block size and other information.
 * It differs from the LZ4 compression in ADSupport, which does not have this header.
 * This file is needed because Dectris Stream2 sends LZ4 data with a smaller block size than
 * the default, so the plain LZ4 decompression won't work.
 *
 * This code is modified from H5Zlz4.c in the HDF5 library.
 */
```
I think it would be nice to document that it's not only the 12-byte header: there is also framing around each block that records the size of the compressed data in that block.
Maybe with a link to the documentation of the data format? e.g. https://github.com/dectris/HDF5Plugin/blob/master/HDF5_LZ4.pdf
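To make the layout being discussed concrete, here is a sketch of the on-stream format as described in that HDF5_LZ4.pdf document: an 8-byte big-endian total uncompressed size, a 4-byte big-endian block size, and then each block prefixed by its own 4-byte big-endian compressed length. The helper names below are illustrative, not part of the actual lz4hdf5.c API.

```c
/* Layout of the HDF5 LZ4 filter stream (per the Dectris HDF5_LZ4.pdf):
 *   8 bytes, big-endian: total uncompressed size
 *   4 bytes, big-endian: block size used during compression
 *   then, for each block:
 *     4 bytes, big-endian: compressed size of this block
 *     <that many> bytes of LZ4-compressed data
 */
#include <stdint.h>

/* Portable big-endian reads; avoids relying on htonll/ntohll availability. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

static uint64_t read_be64(const unsigned char *p)
{
    return ((uint64_t)read_be32(p) << 32) | (uint64_t)read_be32(p + 4);
}
```

A decompressor would call `read_be64` on the first 8 bytes, `read_be32` on the next 4, and then walk the blocks using each 4-byte length prefix.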
```c
    blockSize = nbytes;
}
nBlocks = (nbytes-1)/blockSize + 1;
maxDestSize = LZ4_compressBound((int)nbytes) + 4 + 8 + nBlocks*4;
```
I don't think this is accurate, per https://github.com/lz4/lz4/blob/9da37b2eebf082bfab6e57c49be71cc41119a40d/lib/lz4.h#L215 , if we do intend to support multiple blocks in the output. Because each block is compressed independently, each one can expand to LZ4_compressBound(blockSize) in the worst case, so I think it should be 4 + 8 + nBlocks * (4 + LZ4_compressBound(blockSize)).
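A quick numeric check of the two bounds illustrates the concern. The macro below mirrors lz4.h's `LZ4_COMPRESSBOUND` formula so the sketch is self-contained; real code should link against liblz4 and call `LZ4_compressBound`. Function names are illustrative only.

```c
#include <stddef.h>

/* Mirror of lz4.h's LZ4_COMPRESSBOUND(isize) = isize + isize/255 + 16
 * (ignoring the max-input-size guard, which does not matter here). */
#define LZ4_COMPRESSBOUND(isize) ((size_t)(isize) + ((size_t)(isize) / 255) + 16)

/* Bound as currently written in H5Zlz4.c: one compressBound over the whole
 * input, plus the 12-byte header and 4 bytes of framing per block. */
static size_t bound_current(size_t nbytes, size_t nBlocks)
{
    return LZ4_COMPRESSBOUND(nbytes) + 4 + 8 + nBlocks * 4;
}

/* Bound suggested in this review: each block can independently expand to
 * LZ4_compressBound(blockSize), and each carries a 4-byte length prefix. */
static size_t bound_suggested(size_t blockSize, size_t nBlocks)
{
    return 4 + 8 + nBlocks * (4 + LZ4_COMPRESSBOUND(blockSize));
}
```

For example, with nbytes = 10000 split into 10 blocks of 1000 bytes, the per-block bound exceeds the current one, showing the current formula can undercount the worst case when there are many blocks.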
I took my code directly from the HDF5 source here, which has maxDestSize defined as I do:
https://github.com/nexusformat/HDF5-External-Filter-Plugins/blob/49e3b65eca772bca77af13ba047d8b577673afba/LZ4/src/H5Zlz4.c#L154
That seems like an oversight... I will open an issue about it there, then.
supportApp/lz4hdf5Src/lz4hdf5.c
Outdated
```c
    return ret_value;
}

epicsShareFunc size_t compress_lz4hdf5(const char *inbuf, char *outbuf, size_t nbytes, size_t maxOutputSize)
```
Wouldn't it make sense to expose blockSize in the function parameters? But use the default if set to 0. That could even be exposed in NDPluginCodec...
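As a sketch of the suggestion, the signature could gain a `blockSize` parameter with 0 meaning "use the default". Everything below is hypothetical, not the actual ADSupport API; the default value is assumed to match H5Zlz4.c's.

```c
#include <stddef.h>

/* Assumed default, matching H5Zlz4.c's 1 GiB DEFAULT_BLOCK_SIZE. */
#define DEFAULT_BLOCK_SIZE ((size_t)1 << 30)

/* Resolve the block size a caller requested: 0 selects the default,
 * and the result is clamped so a block never exceeds the input size. */
static size_t effective_block_size(size_t requested, size_t nbytes)
{
    size_t blockSize = (requested == 0) ? DEFAULT_BLOCK_SIZE : requested;
    if (blockSize > nbytes)
        blockSize = nbytes;
    return blockSize;
}

/* Hypothetical extended signature:
 * epicsShareFunc size_t compress_lz4hdf5(const char *inbuf, char *outbuf,
 *     size_t nbytes, size_t maxOutputSize, size_t blockSize);
 */
```

NDPluginCodec could then pass a user-configurable block size straight through, defaulting to 0.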
I've been trying to distribute all of the EPICS modules/dependencies via RPM packages at our beamlines, and in the spirit of system packages, whenever possible I try to use already existing versions of libraries from the default package manager repos. As such I've been able to build areaDetector with just system packages, omitting ADSupport. Do you know by any chance if the upstream of this new codec is available in any of the standard HDF5 library packages for the major distros? Are there any modifications made here that are required for the ADEiger functionality to work correctly that are not available in the upstream version? I haven't had a chance to look through the code in detail; I'll do that once I'm back in the office tomorrow.
I don't think this codec is available in any distro. It is basically a wrapper around the lz4 plugin that breaks the data into blocks. This enables parallel compression and supports larger overall datasets than plain LZ4 allows. I only know of 2 implementations:
My new lz4hdf5 codec is different from the other libraries in ADSupport because it is not available from any distro, or even in source code form anywhere else. One option would be to put the lz4hdf5 code in ADCore, rather than in ADSupport. Perhaps that would be a good solution?
I thought about this some more, and I think that ADSupport is the correct location. The reason is that ADSupport is where the shareable libraries that are dynamically loaded by HDF5 applications and PVA clients are located. We use PVA to send JPEG-compressed images to ImageJ clients, saving 10X on network bandwidth. That requires that ImageJ dynamically load libdecompressJPEG.so, which is not available in any distro. The same is now true of lz4hdf5: clients need to find liblz4hdf5.so to decompress LZ4 images from ADEiger.

The libraries for HDF5 applications are found using the environment variable HDF5_PLUGIN_PATH. If you don't build ADSupport, then how are you building those libraries and setting HDF5_PLUGIN_PATH? For ImageJ to find the libraries, LD_LIBRARY_PATH must include XXX/ADSupport/lib/linux-x86_64 on Linux, and PATH must include XXX/ADSupport/bin/windows-x64 on Windows. You can build the required libraries for ImageJ and HDF5 in ADSupport and still use system distros for the underlying packages.
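A minimal Linux environment setup along the lines described above might look like the following; the install prefix is an assumption you would replace with your own areaDetector top directory.

```shell
# Hypothetical areaDetector checkout location (assumption; adjust to yours).
AD_TOP=/opt/epics/areaDetector

# HDF5 applications locate dynamically loaded filter plugins via this variable.
export HDF5_PLUGIN_PATH=$AD_TOP/ADSupport/lib/linux-x86_64

# ImageJ and other PVA clients find libdecompressJPEG.so / liblz4hdf5.so
# through the normal dynamic-linker search path.
export LD_LIBRARY_PATH=$AD_TOP/ADSupport/lib/linux-x86_64:$LD_LIBRARY_PATH
```

On Windows the equivalent is adding `ADSupport/bin/windows-x64` to `PATH`.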
This adds code for a new lz4hdf5 codec. It differs from the LZ4 codec in that it can compress in blocks and carries a header recording the original size and the block size. It is used by ADEiger with the Stream2 interface.