fix(azure): prepend bucket prefix to all Azure Blob storage paths #14561

Open

Ricardo-M-L wants to merge 3 commits into infiniflow:main from Ricardo-M-L:fix/azure-blob-missing-bucket-prefix
Conversation

@Ricardo-M-L
Contributor

What

The Azure SPN (rag/utils/azure_spn_conn.py) and Azure SAS (rag/utils/azure_sas_conn.py) storage backends ignored the bucket parameter and wrote every file flat under the container using only fnm. Files with the same name uploaded from different datasets (different kb_ids) silently overwrote each other in Azure Blob storage.

Why

The MinIO and S3 backends already prepend the bucket as a path prefix (<bucket>/<fnm>) for logical isolation between datasets. The Azure backends should follow the same contract.

Change

Applies the f"{bucket}/{fnm}" prefix pattern to all path-taking operations in both Azure backends:

| Method | Before | After |
|---|---|---|
| put | create_file(fnm) / upload_blob(name=fnm, ...) | create_file(f"{bucket}/{fnm}") / upload_blob(name=f"{bucket}/{fnm}", ...) |
| get | get_file_client(fnm) / download_blob(fnm) | …(f"{bucket}/{fnm}") |
| rm | delete_file(fnm) / delete_blob(fnm) | …(f"{bucket}/{fnm}") |
| obj_exist | get_file_client(fnm) / get_blob_client(fnm) | …(f"{bucket}/{fnm}") |
| get_presigned_url | get_presigned_url("GET", bucket, fnm, expires) | get_presigned_url("GET", f"{bucket}/{fnm}", expires) |
| health | create_file(fnm) / upload_blob(name=fnm, ...) | …(f"{bucket}/{fnm}") (renamed _bucket → bucket since it is now used) |

Tests

Adds test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py (12 tests, all passing) covering every modified method on both backends, plus a dedicated regression test that verifies two datasets uploading the same filename produce two distinct storage paths.

$ pytest test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py -v
============================== 12 passed in 0.02s ==============================

The tests stub the azure SDK and the common.settings module so they run without Azure credentials or live connectivity.
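The stubbing technique can be sketched as below (the module name "some_sdk" is a placeholder for illustration; the actual test file injects its own fakes for the azure and common.settings modules):

```python
# Sketch of stubbing an unavailable SDK before importing the module under
# test. "some_sdk" is a placeholder; the real tests stub the azure
# packages and common.settings.
import sys
import types
from unittest.mock import MagicMock

def install_stub(name):
    mod = types.ModuleType(name)
    # PEP 562 module __getattr__: any attribute lookup yields a MagicMock,
    # so the module under test can import anything from the stub.
    mod.__getattr__ = lambda attr: MagicMock(name=f"{name}.{attr}")
    sys.modules[name] = mod
    return mod

stub = install_stub("some_sdk")
import some_sdk  # resolves to the stub, no real package needed
client = some_sdk.BlobServiceClient  # any attribute yields a MagicMock
```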

Notes

  • For get_presigned_url, the Azure SPN/SAS implementations were already non-functional (the underlying Azure SDK clients don't expose this method). This change keeps the path argument consistent with the rest of the API surface, as requested in the issue, in case the method is reworked later.
  • No backward-compat shim is included — the previous behavior was buggy (collisions), and the issue description explicitly endorses this fix.

Fixes #14159

The Azure SPN (`azure_spn_conn.py`) and Azure SAS (`azure_sas_conn.py`)
storage backends ignored the `bucket` parameter and stored every file
flat under the container using only `fnm`. As a result, files with the
same name uploaded from different datasets (different `kb_id`s) silently
overwrote each other in Azure Blob storage.

The MinIO and S3 backends already prepend the bucket as a path prefix
(`<bucket>/<fnm>`) for logical isolation. This change applies the same
pattern to both Azure backends in `put`, `get`, `rm`, `obj_exist`,
`get_presigned_url`, and the `health` probe so all operations use
consistent paths.

Adds unit tests under `test/unit_test/rag/utils/` covering each method
and a regression case verifying that two datasets uploading the same
filename produce two distinct storage paths.

Fixes infiniflow#14159

Signed-off-by: Ricardo-M-L <ricardoporsche001@icloud.com>
@dosubot dosubot Bot added the size:S This PR changes 10-29 lines, ignoring generated files. label May 3, 2026
@coderabbitai
Contributor

coderabbitai Bot commented May 3, 2026

Review Change Stack
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: af4a34cd-720b-47aa-9cf2-8eca3d4a17f2

📥 Commits

Reviewing files that changed from the base of the PR and between 215a807 and 4695bcf.

📒 Files selected for processing (2)
  • rag/utils/azure_sas_conn.py
  • rag/utils/azure_spn_conn.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • rag/utils/azure_spn_conn.py
  • rag/utils/azure_sas_conn.py

📝 Walkthrough

Walkthrough

Azure SPN and SAS connectors now consistently prefix bucket to filenames ("{bucket}/{fnm}") across health, put, get, rm, obj_exist, and get_presigned_url. Unit tests with stubs verify the prefixing and that identical filenames in different buckets do not collide.

Changes

Azure Blob Path Prefix Alignment

| Layer / File(s) | Summary |
|---|---|
| Core Implementation (SPN) — health & put: rag/utils/azure_spn_conn.py | health() and put() create/write using create_file(f"{bucket}/{fnm}") and blob = f"{bucket}/{fnm}". |
| Core Implementation (SPN) — read/delete/existence/URL: rag/utils/azure_spn_conn.py | get(), rm(), obj_exist(), get_presigned_url() use get_file_client(blob), delete_file(blob), return the client URL or call client.exists(). |
| Core Implementation (SAS) — health & put: rag/utils/azure_sas_conn.py | health() and put() upload with upload_blob(name=f"{bucket}/{fnm}", ...); compute blob_name = f"{bucket}/{fnm}". |
| Core Implementation (SAS) — delete/existence/URL: rag/utils/azure_sas_conn.py | rm(), obj_exist(), get_presigned_url() use delete_blob(blob_name), get_blob_client(blob_name).exists(), and return get_blob_client(blob_name).url. |
| Validation Tests: test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py | New tests inject stubs for the common/azure SDKs, import the connectors in isolation, instantiate them with a mocked .conn, and assert all methods use "bucket/filename" paths; regression tests ensure the same filename in different buckets yields distinct storage paths. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

ci, size:S

Suggested reviewers

  • Magicbook1108

Poem

🐇 I stitch each path with bucket and name,
No two datasets share the same fame.
SPN and SAS hop in a line,
Tests confirm each file stays fine,
A rabbit's cheer — no collisions this time!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 10.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The PR title clearly and concisely describes the main change: prepending a bucket prefix to Azure Blob storage paths to prevent file collisions. |
| Description check | ✅ Passed | The PR description comprehensively covers the problem (bucket parameter ignored), the solution (applying the f"{bucket}/{fnm}" pattern consistently), the tests added, and notes on the non-functional get_presigned_url. |
| Linked Issues check | ✅ Passed | All coding requirements from issue #14159 are met: bucket prefix applied to put(), get(), rm(), obj_exist(), get_presigned_url() in both azure_spn_conn.py and azure_sas_conn.py, plus comprehensive unit tests. |
| Out of Scope Changes check | ✅ Passed | All changes are directly related to the stated objective: modifying the Azure backends to prepend the bucket prefix and adding tests to validate the fix. No unrelated modifications detected. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@dosubot dosubot Bot added 🐞 bug Something isn't working, pull request that fix bug. 🧪 test Pull requests that update test cases. labels May 3, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
rag/utils/azure_spn_conn.py (1)

71-82: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fix premature exit in retry loop in put.

Line 81 returns from inside except, so the for _ in range(3) loop never retries. This nullifies transient-failure recovery on writes.

Proposed fix
     def put(self, bucket, fnm, binary, tenant_id=None):
         for _ in range(3):
             try:
                 f = self.conn.create_file(f"{bucket}/{fnm}")
                 f.append_data(binary, offset=0, length=len(binary))
                 return f.flush_data(len(binary))
             except Exception:
                 logging.exception(f"Fail put {bucket}/{fnm}")
                 self.__open__()
                 time.sleep(1)
-                return None
         return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rag/utils/azure_spn_conn.py` around lines 71 - 82, The put method's retry
loop is broken because the except block returns immediately, preventing further
retries; update put (method put in class using self.conn.create_file and
f.append_data/f.flush_data) to remove the premature return inside the except so
that on exception it logs the error, calls self.__open__(), sleeps, and then
continues the for _ in range(3) loop to retry; after the loop finishes without
success, return None (preserve existing successful return of f.flush_data when
no exception).
🧹 Nitpick comments (1)
test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py (1)

109-186: ⚡ Quick win

Add health() coverage since health path construction changed in this PR.

Current tests validate put/get/rm/obj_exist/get_presigned_url but not health() for either connector. A small assertion on the called path/name would close that gap.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py` around lines 109 -
186, Add a new test in both TestAzureSpnBucketPrefix and
TestAzureSasBucketPrefix that calls the connector health() (on the
RAGFlowAzureSpnBlob and RAGFlowAzureSasBlob instances) with a bucket like "kb_a"
and asserts the underlying mocked connection was invoked with a path containing
the bucket prefix; for the SPN test call spn.health("kb_a") and assert any
recorded call in spn.conn.call_args_list has a first positional arg that starts
with "kb_a/" (or contains "kb_a"), and for the SAS test call sas.health("kb_a")
and assert any sas.conn.upload/download/delete/get call_args/kwargs include a
"name" or positional arg that starts with "kb_a/". Ensure you reference the
instance names spn and sas, the health() method, and the conn mock when adding
the assertions.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f0b953d4-2300-4db3-b28e-23038f1208e4

📥 Commits

Reviewing files that changed from the base of the PR and between 24af087 and 215a807.

📒 Files selected for processing (3)
  • rag/utils/azure_sas_conn.py
  • rag/utils/azure_spn_conn.py
  • test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py

Comment thread rag/utils/azure_sas_conn.py Outdated

     def rm(self, bucket, fnm):
         try:
-            self.conn.delete_blob(fnm)
+            self.conn.delete_blob(f"{bucket}/{fnm}")
Contributor


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Mirror a legacy-path fallback here as well for upgrade safety.

Like the SPN connector, non-write methods now only address "{bucket}/{fnm}". Without fallback to legacy fnm, pre-existing blobs written before this PR can become inaccessible.

Also applies to: 72-72, 82-82, 90-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rag/utils/azure_sas_conn.py` at line 65, The delete/get/exists/read methods
currently call self.conn using the new path format f"{bucket}/{fnm}" (e.g., the
self.conn.delete_blob(f"{bucket}/{fnm}") call) but must fall back to the legacy
plain fnm if the new path is not found; update each occurrence (the delete_blob,
get_blob/read operations and existence checks at the same spots) to first try
the f"{bucket}/{fnm}" call and on a not-found/error response attempt the legacy
fnm value before failing, returning the successful result or logging/raising the
original error only if both attempts fail.

Comment thread rag/utils/azure_sas_conn.py Outdated
Comment on lines 80 to 84

     def obj_exist(self, bucket, fnm):
         try:
-            return self.conn.get_blob_client(fnm).exists()
+            return self.conn.get_blob_client(f"{bucket}/{fnm}").exists()
         except Exception:
             logging.exception(f"Fail put {bucket}/{fnm}")
Contributor


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix misleading log verb in obj_exist exception path.

obj_exist currently logs "Fail put ...". This should be existence-specific to keep storage diagnostics accurate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rag/utils/azure_sas_conn.py` around lines 80 - 84, The obj_exist method logs
"Fail put ..." on exceptions which is misleading; change the logging.exception
call inside obj_exist to a existence-specific message like "Fail checking
existence of {bucket}/{fnm}" (include the bucket/fnm) and keep logging.exception
so the exception details are recorded; also ensure obj_exist returns a boolean
on error (e.g., return False) so callers of obj_exist (which calls
self.conn.get_blob_client(...).exists()) receive a consistent result.

Comment thread rag/utils/azure_spn_conn.py Outdated
Comment on lines 86 to 87

             self.conn.delete_file(f"{bucket}/{fnm}")
         except Exception:
Contributor


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add legacy-path fallback for non-write operations to avoid post-deploy data invisibility.

After this change, get/rm/obj_exist/get_presigned_url only target "{bucket}/{fnm}". Any objects previously written as plain fnm become unreachable after rollout. Please add a fallback read/delete/existence/presign attempt to legacy fnm (prefixed path first, legacy path second).

Also applies to: 93-94, 104-105, 113-113

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rag/utils/azure_spn_conn.py` around lines 86 - 87, Modify the non-write
operations to try the new prefixed path first and fall back to the legacy plain
filename on failure: for the delete flow using
self.conn.delete_file(f"{bucket}/{fnm}") (and similarly for
get/get_presigned_url/obj_exist), catch exceptions or a false/None result and
then attempt the same operation with the legacy path self.conn.delete_file(fnm)
(or self.conn.get_object(fnm), self.conn.get_presigned_url(fnm),
self.conn.object_exists(fnm) as appropriate); ensure both attempts preserve
return values/errors so callers receive the correct success/failure outcome and
avoid raising the first-path exception when the legacy attempt succeeds.

Comment thread rag/utils/azure_spn_conn.py Outdated
Comment on lines 102 to 107

     def obj_exist(self, bucket, fnm):
         try:
-            client = self.conn.get_file_client(fnm)
+            client = self.conn.get_file_client(f"{bucket}/{fnm}")
             return client.exists()
         except Exception:
             logging.exception(f"Fail put {bucket}/{fnm}")
Contributor


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Correct the exception log message in obj_exist.

The obj_exist error path logs "Fail put ...", which is misleading during incident triage. Use an existence-specific message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rag/utils/azure_spn_conn.py` around lines 102 - 107, The obj_exist method
currently logs a misleading message "Fail put ..." on exception; update the
exception logging in obj_exist to use an existence-specific message such as
"Fail checking existence for {bucket}/{fnm}" (use the same f-string pattern) so
logs accurately reflect the operation and include the bucket/fnm details; modify
the logging.exception call inside obj_exist to that new message.

Comment on lines +31 to +33

    def _install_stubs():
        """Replace heavyweight runtime modules so the connection modules can be
        imported in isolation without the full ragflow runtime or the real
Contributor


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Isolate sys.modules stubbing to avoid cross-test pollution.

These fixtures replace global module entries but never restore them. That can make unrelated tests fail depending on run order. Please switch to monkeypatch.setitem (or restore snapshot on teardown) so patches are automatically reverted.

Also applies to: 74-83, 86-97

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py` around lines 31 -
33, The helper _install_stubs currently mutates sys.modules directly and never
restores entries, causing cross-test pollution; change the stubbing to use
pytest's monkeypatch.setitem to insert the fake modules (e.g., replace
sys.modules["ragflow.runtime"] etc.) so patches are automatically reverted after
each test, and update the three other stub blocks (the ones creating fake
modules around the same area) to use monkeypatch.setitem instead of direct
sys.modules assignment or add explicit teardown that restores originals if
monkeypatch is unavailable; locate uses in function _install_stubs and the other
stub creation sites and replace sys.modules[...] = fake_module with
monkeypatch.setitem(sys.modules, key, fake_module).
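A minimal sketch of the monkeypatch-based approach (fixture name and stubbed module names are illustrative, not the test file's actual identifiers):

```python
# Sketch of isolating sys.modules stubs with pytest's monkeypatch so
# every patch is reverted after the test. Names are illustrative.
import sys
import types
from unittest.mock import MagicMock

import pytest

@pytest.fixture
def azure_stub(monkeypatch):
    fake = types.ModuleType("azure")
    fake.storage = MagicMock()
    # setitem records the prior sys.modules entry (or its absence) and
    # restores it automatically at teardown, avoiding cross-test pollution.
    monkeypatch.setitem(sys.modules, "azure", fake)
    return fake
```

Outside a fixture, the same mechanism is available via pytest.MonkeyPatch() with an explicit undo().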

@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:S This PR changes 10-29 lines, ignoring generated files. labels May 8, 2026
@JinHai-CN JinHai-CN marked this pull request as draft May 8, 2026 08:48
@JinHai-CN JinHai-CN marked this pull request as ready for review May 8, 2026 08:48
@JinHai-CN JinHai-CN added the ci Continue Integration label May 8, 2026
@JinHai-CN
Contributor

=================================== FAILURES ===================================
______ TestAzureSpnBucketPrefix.test_get_presigned_url_uses_bucket_prefix ______
test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py:137: in test_get_presigned_url_uses_bucket_prefix
    spn.conn.get_presigned_url.assert_called_once_with("GET", "kb_a/doc.pdf", 3600)
/home/alice/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/unittest/mock.py:960: in assert_called_once_with
    raise AssertionError(msg)
E   AssertionError: Expected 'get_presigned_url' to be called once. Called 0 times.
______ TestAzureSasBucketPrefix.test_get_presigned_url_uses_bucket_prefix ______
test/unit_test/rag/utils/test_azure_blob_bucket_prefix.py:177: in test_get_presigned_url_uses_bucket_prefix
    sas.conn.get_presigned_url.assert_called_once_with("GET", "kb_a/doc.pdf", 3600)
/home/alice/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/unittest/mock.py:960: in assert_called_once_with
    raise AssertionError(msg)
E   AssertionError: Expected 'get_presigned_url' to be called once. Called 0 times.
=========================== short test summary info ============================

@Ricardo-M-L Would you please fix the unit test?


Labels

🐞 bug Something isn't working, pull request that fix bug. ci Continue Integration size:M This PR changes 30-99 lines, ignoring generated files. 🧪 test Pull requests that update test cases.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Azure Blob SPN storage: files from different datasets can overwrite each other due to missing bucket prefix

2 participants