[PECOBLR-2409] Add CallableStatement support with IN parameters #1371
gopalldb wants to merge 33 commits into databricks:main
Conversation
Implement DatabricksCallableStatement extending DatabricksPreparedStatement
to support stored procedure invocation via Connection.prepareCall().
- IN parameters work via inherited setXXX(int, value) methods
- {call proc(?)} JDBC escape syntax converted to CALL proc(?)
- OUT/INOUT params throw SQLFeatureNotSupportedException
- Named parameters throw SQLFeatureNotSupportedException
- {? = call ...} return-value syntax rejected at construction time
- Wire up all 3 prepareCall() overloads in DatabricksConnection
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
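The construction-time rejection of the `{? = call ...}` return-value form could be detected with a pattern check like the following. This is a hedged sketch only — the class name, method name, and regex are illustrative assumptions, not the driver's actual internals:

```java
import java.util.regex.Pattern;

// Illustrative sketch: detect the {? = call ...} return-value escape form,
// which the callable statement rejects at construction time. The pattern
// and names here are assumptions for demonstration, not driver source.
public class CallSyntaxSketch {
  private static final Pattern RETURN_VALUE_CALL =
      Pattern.compile("(?i)^\\s*\\{\\s*\\?\\s*=\\s*call\\b.*\\}\\s*$");

  static boolean isReturnValueCall(String sql) {
    return RETURN_VALUE_CALL.matcher(sql).matches();
  }

  public static void main(String[] args) {
    System.out.println(isReturnValueCall("{? = call add_numbers(?, ?)}")); // true
    System.out.println(isReturnValueCall("{call add_numbers(?, ?)}"));     // false
  }
}
```

A statement whose SQL matches this shape would fail fast with `SQLFeatureNotSupportedException` rather than at execution time.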
- Add explicit Java 8+ SQLType overrides for registerOutParameter (6) and setObject(String, Object, SQLType) (2) to ensure consistent Databricks-specific error messages
- Split throw helpers into void/generic variants to avoid autoboxing fragility
- Add exhaustive test coverage: all getXXX(int), getXXX(String), setXXX(String) methods, SQLType overloads, Calendar variants, deprecated getBigDecimal(int,int), null SQL, closed connection, executeQuery behavior, batch operations, getParameterMetaData
- Test count: 18 → 26
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
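The void/generic throw-helper split mentioned above might look like the sketch below (names are illustrative, not the driver's). A single generic helper returning `T` would route primitive-returning callers (`getInt`, `getBoolean`, ...) through autoboxing; a separate void variant sidesteps that:

```java
import java.sql.SQLFeatureNotSupportedException;

// Sketch of the void/generic throw-helper pattern. All names are
// illustrative assumptions, not the actual driver implementation.
public class ThrowHelperSketch {
  // Generic variant: usable in return position of any method.
  static <T> T throwUnsupported(String method) throws SQLFeatureNotSupportedException {
    throw new SQLFeatureNotSupportedException(method + " is not supported");
  }

  // Void variant: avoids boxing an inferred wrapper type at void call sites.
  static void throwUnsupportedVoid(String method) throws SQLFeatureNotSupportedException {
    throw new SQLFeatureNotSupportedException(method + " is not supported");
  }

  // Example call sites:
  public int getInt(int parameterIndex) throws SQLFeatureNotSupportedException {
    return throwUnsupported("getInt(int)"); // T inferred as Integer, unboxed to int
  }

  public void registerOutParameter(int idx, int sqlType)
      throws SQLFeatureNotSupportedException {
    throwUnsupportedVoid("registerOutParameter(int, int)"); // no boxing involved
  }
}
```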
CallableStatement constructor now calls setEscapeProcessing(true) so
that {call proc(?)} JDBC escape syntax is automatically converted to
CALL proc(?). This only affects callable statement instances — the
global DEFAULT_ESCAPE_PROCESSING (false) remains unchanged for
Statement and PreparedStatement.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
- Override shouldReturnResultSet=true in CallableStatement constructor
so executeQuery() works and execute() returns true (procedures can
return result sets)
- Override setEscapeProcessing(false) to be rejected — escape processing
is required for {call ...} → CALL ... conversion
- Add tests: executeQuery works, execute returns true, executeUpdate
throws, setEscapeProcessing(false) rejected, parameter counting in
{call} syntax, no-arg procedure parameter count
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
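The parameter counting tested above can be sketched as a linear scan that skips `?` markers inside single-quoted literals. This is assumed logic for illustration only — the driver's actual implementation is not shown here, and the review notes below point out that its regex can false-positive on string literals:

```java
// Illustrative sketch: count '?' parameter markers in a call statement,
// ignoring markers inside single-quoted string literals.
public class ParameterCountSketch {
  static int countParameters(String sql) {
    int count = 0;
    boolean inLiteral = false;
    for (char c : sql.toCharArray()) {
      if (c == '\'') {
        inLiteral = !inLiteral;            // toggle literal state on each quote
      } else if (c == '?' && !inLiteral) {
        count++;                           // count markers only outside literals
      }
    }
    return count;
  }

  public static void main(String[] args) {
    System.out.println(countParameters("{call proc(?, ?, 'not a ? marker')}"));
  }
}
```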
Convert {call proc(?)} to CALL proc(?) at construction time via
convertCallEscapeSyntax(), independent of the escapeProcessing flag.
setEscapeProcessing controls only standard JDBC escapes ({d}, {t},
{ts}, {fn}, {oj}) while {call} conversion always happens for
callable statements.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
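The `{call ...}` → `CALL ...` rewrite described above can be sketched with a regex, sharing the documented limitation that nested braces in the arguments are not handled. The pattern and method body are assumptions for illustration — `convertCallEscapeSyntax()` in the driver may differ:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of converting the JDBC {call ...} escape to native CALL syntax.
// The regex is an illustrative assumption, not the driver's CALL_ESCAPE_SYNTAX.
public class CallEscapeSketch {
  private static final Pattern CALL_ESCAPE =
      Pattern.compile("(?is)^\\s*\\{\\s*call\\s+(.+?)\\s*\\}\\s*$");

  static String convertCallEscapeSyntax(String sql) {
    Matcher m = CALL_ESCAPE.matcher(sql);
    // Native CALL statements (no braces) pass through unchanged.
    return m.matches() ? "CALL " + m.group(1) : sql;
  }

  public static void main(String[] args) {
    System.out.println(convertCallEscapeSyntax("{call proc(?, ?)}")); // CALL proc(?, ?)
    System.out.println(convertCallEscapeSyntax("CALL proc(?)"));      // unchanged
  }
}
```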
- Document nested braces limitation in CALL_ESCAPE_SYNTAX Javadoc (consistent with StringUtil's existing limitation)
- Document executeUpdate() behavior in class Javadoc: use execute() for DML procedures, check getUpdateCount() afterward
- Add test for native CALL syntax (no JDBC escape braces) confirming execute() returns true and shouldReturnResultSet works via CALL_PATTERN
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Add LOGGER.debug() calls to every method implementation, following the same pattern used in DatabricksStatement and DatabricksPreparedStatement (logging the public method signature at entry).
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
- Drop Windows from cache warmer matrix — Windows runners in
databricks-protected-runner-group lack bash (command not found)
- Remove runner.os from cache key — Maven JARs/POMs are platform-
independent, so one cache entry serves both Linux and Windows
- Cache key is now: maven-deps-{hash(pom.xml)}
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The cache was missing maven-toolchains-plugin (and potentially other plugins like spotless, jacoco) because mvn install only resolves plugins needed for the install lifecycle. Plugins activated by specific goals or profiles (used in PR unit-test and formatting workflows) were not cached, causing 401 errors for forked PRs.
Fix: after install, also run dependency:resolve-plugins and trigger spotless/jacoco plugin downloads to ensure all PR workflow dependencies are cached.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
## Summary
The cache warmer was not caching `surefire-junit-platform:3.1.2` because surefire resolves its provider JAR lazily at test execution time, not at plugin initialization. The previous approach (`-Dtest=NoSuchTest`) failed before surefire downloaded the provider.
Fix: run a real lightweight test (`DatabricksParameterMetaDataTest#testInitialization`) to force full surefire provider resolution.
### Verified locally
All 4 previously missing artifacts are now in the local cache after running the test:
- `surefire-junit-platform:3.1.2` (blocked unit-tests + integration tests)
- `maven-toolchains-plugin:3.2.0` (blocked unit-tests)
- `maven-clean-plugin:3.2.0` (blocked coverage)
- `exec-maven-plugin:1.2.1` (blocked coverage)
The 5 maven-metadata.xml warnings are non-blocking (Maven falls back to cached plugin JARs).
## Test plan
- [x] Verified locally: `mvn -pl jdbc-core test -Dtest="DatabricksParameterMetaDataTest#testInitialization"` passes and resolves all missing plugins
- [ ] After merge: re-trigger cache warmer on main, then re-run PR #1371 CI
NO_CHANGELOG=true
This pull request was AI-assisted by Isaac.
---------
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…d artifacts (#1389)
## Summary
Even with artifacts in the local cache (~350MB restored successfully), Maven checks the remote repository for POM metadata updates. For forked PRs the JFrog mirror has no credentials, so these update checks get 401 errors — causing `maven-toolchains-plugin:3.2.0` resolution to fail even though the JAR is cached.
Fix: for forked PRs only, configure repository and pluginRepository with `updatePolicy=never` in an active profile. This tells Maven to use cached artifacts without contacting the remote for updates. The non-fork path (JFrog OIDC with full credentials) is unchanged.
## Test plan
- [ ] Re-run PR #1371 CI after merge — toolchains plugin should resolve from cache without 401
NO_CHANGELOG=true
This pull request was AI-assisted by Isaac.
---------
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The updatePolicy=never approach didn't work because Maven plugin
resolution goes through the mirror directly, ignoring repository
update policies.
Fix: for forked PRs, point the mirror to file://{local-repo}. This
forces Maven to resolve everything from ~/.m2/repository (the restored
cache) without any network requests. Same-repo PRs continue using
JFrog OIDC unchanged.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Maven tracks which repository ID an artifact was downloaded from. The cache warmer uses mirror ID 'jfrog-central', so cached artifacts have _remote.repositories entries pointing to jfrog-central. Using a different mirror ID (local-cache) causes Maven to re-verify artifacts, which fails for some plugins (maven-clean-plugin).
Fix: use the same ID 'jfrog-central' for the file:// mirror so Maven recognizes cached artifacts without re-verification.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The file:// mirror approach failed because Maven's file:// protocol doesn't properly serve artifacts from the local repo layout. The real issue: _remote.repositories marker files in ~/.m2/repository track which remote repo each artifact came from. Maven re-verifies artifacts against the remote, which fails without credentials.
Fix: delete _remote.repositories files from the restored cache and use an empty settings.xml. Without markers, Maven treats all cached artifacts as locally installed — no remote verification needed.
Verified locally: BUILD SUCCESS with zero network requests.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Empty settings.xml caused Maven to fall back to Maven Central (blocked on runners). Need both: remove _remote.repositories markers (prevent re-verification) AND file:// mirror with jfrog-central ID (prevent fallback to Maven Central).
Verified locally: BUILD SUCCESS with zero network requests.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
file:// mirror doesn't work for Maven plugin resolution (can't find JARs despite them being in the local repo). Instead, use Maven offline mode (-o via .mvn/maven.config) combined with _remote.repositories removal. This prevents all network requests while resolving everything from the local cache.
Verified locally: BUILD SUCCESS with zero network requests.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The _remote.repositories files track which repo ID (jfrog-central) each artifact was downloaded from. When forked PRs restore the cache and use offline mode, Maven refuses artifacts because they were "not downloaded from central". Removing markers at restore time was too late.
Fix: remove _remote.repositories in the warmer BEFORE saving the cache. The saved cache is clean — forked PRs get artifacts with no repo ID association, so offline mode works.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
…PRs (#1391)
## Summary
Two fixes for forked PR dependency resolution:
1. **Warmer**: Remove `_remote.repositories` marker files before saving the cache. These files track which repo ID (`jfrog-central`) each artifact was downloaded from. When forked PRs restore the cache in offline mode, Maven refuses artifacts because they were "not downloaded from central".
2. **Fork composite action**: Use Maven offline mode (`-o` via `.mvn/maven.config`) + empty `settings.xml`. With clean cache (no repo ID markers), offline mode resolves everything from `~/.m2/repository` with zero network requests.
Also deleted the existing stale cache entry to force a fresh save.
## Test plan
- [x] Verified locally: `mvn -o` with removed `_remote.repositories` → BUILD SUCCESS, zero network requests
- [ ] After merge: re-trigger cache warmer, then re-run PR #1371 CI
NO_CHANGELOG=true
This pull request was AI-assisted by Isaac.
---------
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The non-fork path in the composite action was using actions/cache (save+restore), which saved dirty cache entries WITH _remote.repositories files. These overwrote the clean cache saved by the warmer, breaking offline mode for forked PRs.
Fix: change non-fork path to actions/cache/restore (restore-only). Only the warmMavenCache workflow saves the cache (with _remote.repositories stripped).
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
## Summary
The non-fork path in the composite action used `actions/cache` (save+restore), which saved dirty cache entries WITH `_remote.repositories` files. These overwrote the clean cache saved by the warmer, breaking offline mode for forked PRs.
Fix: change to `actions/cache/restore` (restore-only). Only the `warmMavenCache` workflow saves the cache (with `_remote.repositories` stripped). Also deleted all existing stale cache entries.
## Test plan
- [ ] After merge: trigger cache warmer, then re-run PR #1371 CI
NO_CHANGELOG=true
This pull request was AI-assisted by Isaac.
---------
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Deleting _remote.repositories was not enough — Maven 3.9+ still refused cached artifacts in offline mode. The root cause: Maven checks that the artifact was "downloaded from central", but the _remote.repositories files referenced "jfrog-central" (the warmer's mirror ID).
Fix: in the warmer, sed-replace 'jfrog-central' with 'central' in all _remote.repositories files before saving the cache. This makes Maven's offline mode accept cached artifacts as coming from 'central' (the default Maven Central repo ID).
Verified locally: all 5 CI commands pass with replaced repo IDs + offline mode (spotless, unit tests, jacoco, packaging, test-compile).
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
## Summary
Deleting _remote.repositories was not enough — Maven 3.9+ still refused cached artifacts in offline mode with "has not been downloaded from it before". The root cause: Maven checks that the artifact was downloaded from "central", but the _remote.repositories files referenced "jfrog-central" (the warmer mirror ID).
Fix: in the warmer, sed-replace "jfrog-central" with "central" in all _remote.repositories files before saving the cache. This makes Maven offline mode accept cached artifacts as coming from "central" (the default repo ID).
Verified locally: ALL 5 CI commands pass with replaced repo IDs + offline mode:
- spotless:check → BUILD SUCCESS
- unit tests → BUILD SUCCESS
- jacoco:report → BUILD SUCCESS
- packaging install → BUILD SUCCESS
- test-compile → BUILD SUCCESS
## After merge
1. Delete stale cache entries
2. Trigger cache warmer
3. Re-run PR #1371 CI
NO_CHANGELOG=true
This pull request was AI-assisted by Isaac.
---------
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
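The warmer's repo-ID rewrite is done in CI with find + sed; the same logic can be sketched in Java for clarity. This is an illustrative equivalent (class and method names are assumptions), replacing "jfrog-central" with "central" in every `_remote.repositories` file under a local Maven repository:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Sketch of the cache warmer's marker rewrite: walk the local repo and
// rewrite the repo ID recorded in each _remote.repositories marker file.
public class RepoIdRewriteSketch {
  static void rewriteRepoIds(Path m2Repository) throws IOException {
    try (Stream<Path> files = Files.walk(m2Repository)) {
      files.filter(p -> p.getFileName().toString().equals("_remote.repositories"))
          .forEach(p -> {
            try {
              String content = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
              // Offline mode accepts artifacts recorded as coming from "central".
              Files.write(p, content.replace("jfrog-central", "central")
                  .getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
              throw new UncheckedIOException(e);
            }
          });
    }
  }
}
```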
…ven-plugin (#1394) ## Summary maven-clean-plugin:3.2.0 and exec-maven-plugin:1.2.1 were missing from the dependency cache because the warmer ran `mvn install` and `mvn test` without the `clean` lifecycle phase. The CI workflows use `mvn clean test`, which requires these plugins. Fix: add `clean` to every warmer step to match CI commands exactly. ## Test plan - [ ] Trigger cache warmer from this PR branch (workflow_dispatch) - [ ] Verify maven-clean-plugin and exec-maven-plugin appear in warmer logs - [ ] Re-run PR #1371 CI after merge NO_CHANGELOG=true This pull request was AI-assisted by Isaac. Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
## Summary
Two fixes for the forked PR dependency cache:
1. **Add Windows runner to warmer matrix.** GitHub Actions cache is
OS-scoped — a cache saved on Linux cannot be restored on Windows. The
warmer must run on both operating systems.
2. **Timestamp-based cache keys.** GitHub caches are immutable (can't
overwrite). Previously required manual deletion before re-running
warmer. Now uses key format `maven-deps-{timestamp}-{hash}`, so each run
creates a new entry. The restore step uses prefix `maven-deps-` to match
the latest. Old entries auto-expire after 7 days.
Also runs the exact same Maven commands as the PR CI workflows (8 steps) to
ensure all plugins are cached.
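The key scheme above can be sketched as simple string construction: each warmer run emits a unique key, while restores match on the shared prefix to pick the newest entry. Names below are illustrative only:

```java
// Illustrative sketch of the timestamp-based cache key scheme: unique keys
// per warmer run, prefix matching on restore.
public class CacheKeySketch {
  // Unique per run, so each warmer execution saves a fresh cache entry.
  static String cacheKey(long epochSeconds, String pomHash) {
    return "maven-deps-" + epochSeconds + "-" + pomHash;
  }

  // Restore matches any entry with the shared prefix (latest wins).
  static boolean restoreMatches(String key) {
    return key.startsWith("maven-deps-");
  }
}
```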
## Test plan
- [ ] Trigger warmer from this branch — verify both Linux and Windows
jobs pass
- [ ] After merge: trigger warmer on main, re-run PR #1371 CI
NO_CHANGELOG=true
This pull request was AI-assisted by Isaac.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Coverage was 84.91% due to DatabricksCallableStatement's throw-only methods having unreachable bytecode that JaCoCo counts as missed. Added targeted tests to close the gap:
- CallableStatementTest: cover 6 remaining typed setXXX(String, Blob/Clob/NClob/SQLXML/RowId/URL) overloads
- ConnectionTest: cover getMetaData, beginRequest, endRequest, and prepareCall with valid holdability
- ParameterMetaDataTest: cover isNullable, isSigned, getPrecision, getScale
- WildcardUtilTest: cover isNullOrWildcard, jdbcPatternToHive(null)
Coverage: 84.91% -> 85.05% (+32 instruction buffer)
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Keep main's 8-step approach that mirrors exact CI commands.
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Code Review:
| Severity | # | Key Items |
|---|---|---|
| Critical | 1 | Constructor validation ordering — validate before super() |
| Major | 2 | Missing closed-connection check on 1-arg/2-arg prepareCall; executeUpdate() always throws |
| Minor | 2 | Regex false positive on string literals; nested brace limitation (documented) |
| Nit | 2 | Closed-connection test coverage; debug log content |
The critical item (constructor ordering) should be fixed before merge. The rest are improvements that could go in a follow-up.
Test coverage is strong — construction, execution, escape processing independence, batch operations, parameter metadata, exhaustive OUT/named parameter rejection, and error message validation are all well covered. Nice work. 👏
This comment was generated with GitHub MCP.
Summary
- `DatabricksCallableStatement` extending `DatabricksPreparedStatement` to support stored procedure invocation via `Connection.prepareCall()`
- IN parameters work via inherited `setXXX(int, value)` methods
- `{call proc(?)}` JDBC escape syntax is converted to `CALL proc(?)` at construction time, independent of the `escapeProcessing` flag
- OUT/INOUT parameters, named parameters, and `{? = call ...}` return-value syntax throw `SQLFeatureNotSupportedException` with clear messages
- All 3 `prepareCall()` overloads wired up in `DatabricksConnection` with proper resultSetType/holdability validation
- `shouldReturnResultSet` is always `true` -- use `execute()` or `executeQuery()` to invoke procedures
- Java 8+ `SQLType`-based overrides explicitly implemented for consistent error messages
Known limitations
- Nested braces in `{call ...}` arguments are not supported -- consistent with `StringUtil.convertJdbcEscapeSequences`. Use native SQL syntax instead.
- `{? = call ...}` return-value syntax is rejected at construction time
- `executeUpdate()` always throws for callable statements since `shouldReturnResultSet=true`. Use `execute()` for DML procedures.
Test plan
- `DatabricksCallableStatementTest` -- 30 tests
- `DatabricksConnectionTest` -- 42 tests pass
- `DatabricksPreparedStatementTest` -- 48 tests, 0 regressions
This pull request was AI-assisted by Isaac.