[PECOBLR-2409] Add CallableStatement support with IN parameters #1371

Open
gopalldb wants to merge 33 commits into databricks:main from gopalldb:feature/callable-statement-in-params

Conversation

@gopalldb commented on Apr 7, 2026

Summary

  • Implement DatabricksCallableStatement extending DatabricksPreparedStatement to support stored procedure invocation via Connection.prepareCall()
  • IN parameters work via inherited setXXX(int, value) methods
  • {call proc(?)} JDBC escape syntax is converted to CALL proc(?) at construction time, independent of the escapeProcessing flag
  • OUT/INOUT parameters, named parameters, and {? = call ...} return-value syntax throw SQLFeatureNotSupportedException with clear messages
  • Wire up all 3 prepareCall() overloads in DatabricksConnection with proper resultSetType/holdability validation
  • shouldReturnResultSet is always true -- use execute() or executeQuery() to invoke procedures
  • Java 8+ SQLType-based overrides explicitly implemented for consistent error messages
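The {call proc(?)} to CALL proc(?) rewrite described above can be sketched with a small regex. This is an illustrative stand-in, not the driver's actual convertCallEscapeSyntax implementation; the class and pattern names are hypothetical.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of the {call ...} -> CALL ... rewrite described in the
// PR summary. The real driver's conversion logic may differ in detail.
public class CallEscapeDemo {
  // Matches "{ call proc(...) }" (case-insensitive), capturing the body.
  private static final Pattern CALL_ESCAPE =
      Pattern.compile("^\\s*\\{\\s*call\\s+([^}]*)\\}\\s*$", Pattern.CASE_INSENSITIVE);

  static String convert(String sql) {
    Matcher m = CALL_ESCAPE.matcher(sql);
    // Native CALL syntax (no braces) passes through unchanged.
    return m.matches() ? "CALL " + m.group(1).trim() : sql;
  }

  public static void main(String[] args) {
    System.out.println(convert("{call my_proc(?, ?)}")); // CALL my_proc(?, ?)
    System.out.println(convert("CALL my_proc(?)"));      // unchanged
  }
}
```

Note the `[^}]*` body, which is why nested escape sequences inside the argument list are out of scope, as listed under known limitations.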

Known limitations

  • Nested JDBC escape sequences inside {call ...} arguments are not supported -- consistent with StringUtil.convertJdbcEscapeSequences. Use native SQL syntax instead.
  • {? = call ...} return-value syntax is rejected at construction time
  • executeUpdate() always throws for callable statements since shouldReturnResultSet=true. Use execute() for DML procedures.

Test plan

  • DatabricksCallableStatementTest -- 30 tests
  • DatabricksConnectionTest -- 42 tests pass
  • DatabricksPreparedStatementTest -- 48 tests, 0 regressions
  • Full core test suite -- 3208 tests, 0 failures

This pull request was AI-assisted by Isaac.

gopalldb added 2 commits April 7, 2026 10:56
Implement DatabricksCallableStatement extending DatabricksPreparedStatement
to support stored procedure invocation via Connection.prepareCall().

- IN parameters work via inherited setXXX(int, value) methods
- {call proc(?)} JDBC escape syntax converted to CALL proc(?)
- OUT/INOUT params throw SQLFeatureNotSupportedException
- Named parameters throw SQLFeatureNotSupportedException
- {? = call ...} return-value syntax rejected at construction time
- Wire up all 3 prepareCall() overloads in DatabricksConnection

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
- Add explicit Java 8+ SQLType overrides for registerOutParameter (6)
  and setObject(String, Object, SQLType) (2) to ensure consistent
  Databricks-specific error messages
- Split throw helpers into void/generic variants to avoid autoboxing
  fragility
- Add exhaustive test coverage: all getXXX(int), getXXX(String),
  setXXX(String) methods, SQLType overloads, Calendar variants,
  deprecated getBigDecimal(int,int), null SQL, closed connection,
  executeQuery behavior, batch operations, getParameterMetaData
- Test count: 18 → 26

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
@gopalldb force-pushed the feature/callable-statement-in-params branch from edfd0f5 to 8a783bc on April 7, 2026 06:14
gopalldb added 3 commits April 7, 2026 11:46
CallableStatement constructor now calls setEscapeProcessing(true) so
that {call proc(?)} JDBC escape syntax is automatically converted to
CALL proc(?). This only affects callable statement instances — the
global DEFAULT_ESCAPE_PROCESSING (false) remains unchanged for
Statement and PreparedStatement.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
- Override shouldReturnResultSet=true in CallableStatement constructor
  so executeQuery() works and execute() returns true (procedures can
  return result sets)
- Override setEscapeProcessing(false) to be rejected — escape processing
  is required for {call ...} → CALL ... conversion
- Add tests: executeQuery works, execute returns true, executeUpdate
  throws, setEscapeProcessing(false) rejected, parameter counting in
  {call} syntax, no-arg procedure parameter count

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Convert {call proc(?)} to CALL proc(?) at construction time via
convertCallEscapeSyntax(), independent of the escapeProcessing flag.
setEscapeProcessing controls only standard JDBC escapes ({d}, {t},
{ts}, {fn}, {oj}) while {call} conversion always happens for
callable statements.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
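The flag dependence described in this commit can be illustrated with a simplified stand-in for StringUtil.convertJdbcEscapeSequences: a standard escape such as {d ...} is only rewritten when escape processing is on, while the {call ...} rewrite happens unconditionally. The rewrite rule and class name below are sketches, not the driver's actual code.

```java
import java.util.regex.Pattern;

// Hedged illustration: only the standard {d ...} escape is gated on the
// escapeProcessing flag. The pattern here is a simplified stand-in.
public class EscapeFlagDemo {
  private static final Pattern DATE_ESCAPE =
      Pattern.compile("\\{\\s*d\\s+('[^']*')\\s*\\}", Pattern.CASE_INSENSITIVE);

  static String process(String sql, boolean escapeProcessing) {
    if (!escapeProcessing) {
      return sql; // {d ...} left untouched when the flag is off
    }
    return DATE_ESCAPE.matcher(sql).replaceAll("DATE $1");
  }

  public static void main(String[] args) {
    String sql = "SELECT {d '2026-04-07'}";
    System.out.println(process(sql, true));  // SELECT DATE '2026-04-07'
    System.out.println(process(sql, false)); // SELECT {d '2026-04-07'}
  }
}
```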
@gopalldb force-pushed the feature/callable-statement-in-params branch from 0691183 to e639e0f on April 7, 2026 07:56
- Document nested braces limitation in CALL_ESCAPE_SYNTAX Javadoc
  (consistent with StringUtil's existing limitation)
- Document executeUpdate() behavior in class Javadoc: use execute()
  for DML procedures, check getUpdateCount() afterward
- Add test for native CALL syntax (no JDBC escape braces) confirming
  execute() returns true and shouldReturnResultSet works via CALL_PATTERN

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
@gopalldb changed the title from "Add CallableStatement support with IN parameters" to "[PECOBLR-2409] Add CallableStatement support with IN parameters" on Apr 7, 2026
gopalldb and others added 7 commits April 7, 2026 16:33
Add LOGGER.debug() calls to every method implementation following the
same pattern used in DatabricksStatement and DatabricksPreparedStatement
(logging the public method signature at entry).

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
- Drop Windows from cache warmer matrix — Windows runners in
  databricks-protected-runner-group lack bash (command not found)
- Remove runner.os from cache key — Maven JARs/POMs are platform-
  independent, so one cache entry serves both Linux and Windows
- Cache key is now: maven-deps-{hash(pom.xml)}

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The cache was missing maven-toolchains-plugin (and potentially other
plugins like spotless, jacoco) because mvn install only resolves
plugins needed for the install lifecycle. Plugins activated by specific
goals or profiles (used in PR unit-test and formatting workflows) were
not cached, causing 401 errors for forked PRs.

Fix: after install, also run dependency:resolve-plugins and trigger
spotless/jacoco plugin downloads to ensure all PR workflow dependencies
are cached.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
@gopalldb requested a review from a team as a code owner on April 8, 2026 07:12
@gopalldb requested a review from deeksha-db on April 8, 2026 07:12
gopalldb added a commit that referenced this pull request Apr 8, 2026
## Summary
The cache warmer was not caching `surefire-junit-platform:3.1.2` because
surefire resolves its provider JAR lazily at test execution time, not at
plugin initialization. The previous approach (`-Dtest=NoSuchTest`)
failed before surefire downloaded the provider.

Fix: run a real lightweight test
(`DatabricksParameterMetaDataTest#testInitialization`) to force full
surefire provider resolution.

### Verified locally
All 4 previously missing artifacts are now in the local cache after
running the test:
- `surefire-junit-platform:3.1.2` (blocked unit-tests + integration
tests)
- `maven-toolchains-plugin:3.2.0` (blocked unit-tests)
- `maven-clean-plugin:3.2.0` (blocked coverage)
- `exec-maven-plugin:1.2.1` (blocked coverage)

The 5 maven-metadata.xml warnings are non-blocking (Maven falls back to
cached plugin JARs).

## Test plan
- [x] Verified locally: `mvn -pl jdbc-core test
-Dtest="DatabricksParameterMetaDataTest#testInitialization"` passes and
resolves all missing plugins
- [ ] After merge: re-trigger cache warmer on main, then re-run PR #1371
CI

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

---------

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
gopalldb added a commit that referenced this pull request Apr 8, 2026
…d artifacts (#1389)

## Summary
Even with artifacts in the local cache (~350MB restored successfully),
Maven checks the remote repository for POM metadata updates. For forked
PRs the JFrog mirror has no credentials, so these update checks get 401
errors — causing `maven-toolchains-plugin:3.2.0` resolution to fail even
though the JAR is cached.

Fix: for forked PRs only, configure repository and pluginRepository with
`updatePolicy=never` in an active profile. This tells Maven to use
cached artifacts without contacting the remote for updates. The non-fork
path (JFrog OIDC with full credentials) is unchanged.

## Test plan
- [ ] Re-run PR #1371 CI after merge — toolchains plugin should resolve
from cache without 401

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

---------

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
gopalldb and others added 4 commits April 8, 2026 16:14
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The updatePolicy=never approach didn't work because Maven plugin
resolution goes through the mirror directly, ignoring repository
update policies.

Fix: for forked PRs, point the mirror to file://{local-repo}. This
forces Maven to resolve everything from ~/.m2/repository (the restored
cache) without any network requests. Same-repo PRs continue using
JFrog OIDC unchanged.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Maven tracks which repository ID an artifact was downloaded from. The
cache warmer uses mirror ID 'jfrog-central', so cached artifacts have
_remote.repositories entries pointing to jfrog-central. Using a
different mirror ID (local-cache) causes Maven to re-verify artifacts,
which fails for some plugins (maven-clean-plugin).

Fix: use the same ID 'jfrog-central' for the file:// mirror so Maven
recognizes cached artifacts without re-verification.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
gopalldb added 4 commits April 8, 2026 16:40
The file:// mirror approach failed because Maven's file:// protocol
doesn't properly serve artifacts from the local repo layout. The
real issue: _remote.repositories marker files in ~/.m2/repository
track which remote repo each artifact came from. Maven re-verifies
artifacts against the remote, which fails without credentials.

Fix: delete _remote.repositories files from the restored cache and
use an empty settings.xml. Without markers, Maven treats all cached
artifacts as locally installed — no remote verification needed.

Verified locally: BUILD SUCCESS with zero network requests.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Empty settings.xml caused Maven to fall back to Maven Central (blocked
on runners). Need both: remove _remote.repositories markers (prevent
re-verification) AND file:// mirror with jfrog-central ID (prevent
fallback to Maven Central).

Verified locally: BUILD SUCCESS with zero network requests.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
file:// mirror doesn't work for Maven plugin resolution (can't find
JARs despite them being in the local repo). Instead, use Maven offline
mode (-o via .mvn/maven.config) combined with _remote.repositories
removal. This prevents all network requests while resolving everything
from the local cache.

Verified locally: BUILD SUCCESS with zero network requests.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
The _remote.repositories files track which repo ID (jfrog-central)
each artifact was downloaded from. When forked PRs restore the cache
and use offline mode, Maven refuses artifacts because they were
"not downloaded from central". Removing markers at restore time was
too late.

Fix: remove _remote.repositories in the warmer BEFORE saving the
cache. The saved cache is clean — forked PRs get artifacts with no
repo ID association, so offline mode works.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
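The cleanup step this commit describes — strip _remote.repositories markers from the repository before the cache is saved — can be sketched in Java (the real warmer presumably does this in shell; the class name and repository path are stand-ins for ~/.m2/repository):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative sketch: walk a local Maven repository and delete every
// _remote.repositories marker file, so that offline mode later treats the
// cached artifacts as locally installed (no repo-ID re-verification).
public class MarkerCleaner {
  static long deleteMarkers(Path repoRoot) throws IOException {
    try (Stream<Path> files = Files.walk(repoRoot)) {
      List<Path> markers = files
          .filter(f -> f.getFileName().toString().equals("_remote.repositories"))
          .collect(Collectors.toList());
      for (Path p : markers) {
        Files.delete(p);
      }
      return markers.size();
    }
  }
}
```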
gopalldb added a commit that referenced this pull request Apr 8, 2026
…PRs (#1391)

## Summary
Two fixes for forked PR dependency resolution:

1. **Warmer**: Remove `_remote.repositories` marker files before saving
the cache. These files track which repo ID (`jfrog-central`) each
artifact was downloaded from. When forked PRs restore the cache in
offline mode, Maven refuses artifacts because they were "not downloaded
from central".

2. **Fork composite action**: Use Maven offline mode (`-o` via
`.mvn/maven.config`) + empty `settings.xml`. With clean cache (no repo
ID markers), offline mode resolves everything from `~/.m2/repository`
with zero network requests.

Also deleted the existing stale cache entry to force a fresh save.

## Test plan
- [x] Verified locally: `mvn -o` with removed `_remote.repositories` →
BUILD SUCCESS, zero network requests
- [ ] After merge: re-trigger cache warmer, then re-run PR #1371 CI

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

---------

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The non-fork path in the composite action was using actions/cache
(save+restore) which saved dirty cache entries WITH _remote.repositories
files. These overwrote the clean cache saved by the warmer, breaking
offline mode for forked PRs.

Fix: change non-fork path to actions/cache/restore (restore-only).
Only the warmMavenCache workflow saves the cache (with _remote.repositories
stripped).

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
gopalldb added a commit that referenced this pull request Apr 8, 2026
## Summary
The non-fork path in the composite action used `actions/cache`
(save+restore), which saved dirty cache entries WITH
`_remote.repositories` files. These overwrote the clean cache saved by
the warmer, breaking offline mode for forked PRs.

Fix: change to `actions/cache/restore` (restore-only). Only the
`warmMavenCache` workflow saves the cache (with `_remote.repositories`
stripped).

Also deleted all existing stale cache entries.

## Test plan
- [ ] After merge: trigger cache warmer, then re-run PR #1371 CI

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

---------

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Deleting _remote.repositories was not enough — Maven 3.9+ still
refused cached artifacts in offline mode. The root cause: Maven
checks that the artifact was "downloaded from central" but the
_remote.repositories files referenced "jfrog-central" (the warmer's
mirror ID).

Fix: in the warmer, sed-replace 'jfrog-central' with 'central' in
all _remote.repositories files before saving the cache. This makes
Maven's offline mode accept cached artifacts as coming from 'central'
(the default Maven Central repo ID).

Verified locally: all 5 CI commands pass with replaced repo IDs +
offline mode (spotless, unit tests, jacoco, packaging, test-compile).

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
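The repo-ID rewrite this commit performs with sed can be shown as a plain string substitution. The marker line format ("file>repoId=") follows Maven Resolver's _remote.repositories convention; treat the exact contents here as illustrative.

```java
// Sketch of the sed-replace described above, done as a Java string rewrite:
// retag cached artifacts from the warmer's mirror ID to Maven's default
// "central" so offline mode accepts them.
public class RepoIdRewrite {
  static String rewrite(String markerContent) {
    return markerContent.replace(">jfrog-central=", ">central=");
  }

  public static void main(String[] args) {
    String marker = "lib-1.0.jar>jfrog-central=\nlib-1.0.pom>jfrog-central=\n";
    System.out.println(RepoIdRewrite.rewrite(marker));
  }
}
```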
gopalldb added a commit that referenced this pull request Apr 8, 2026
## Summary
Deleting _remote.repositories was not enough — Maven 3.9+ still refused
cached artifacts in offline mode with "has not been downloaded from it
before". The root cause: Maven checks that the artifact was downloaded
from "central" but the _remote.repositories files referenced
"jfrog-central" (the warmer mirror ID).

Fix: in the warmer, sed-replace "jfrog-central" with "central" in all
_remote.repositories files before saving the cache. This makes Maven
offline mode accept cached artifacts as coming from "central" (the
default repo ID).

Verified locally: ALL 5 CI commands pass with replaced repo IDs +
offline mode:
- spotless:check → BUILD SUCCESS
- unit tests → BUILD SUCCESS
- jacoco:report → BUILD SUCCESS
- packaging install → BUILD SUCCESS
- test-compile → BUILD SUCCESS

## After merge
1. Delete stale cache entries
2. Trigger cache warmer
3. Re-run PR #1371 CI

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

---------

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
gopalldb added a commit that referenced this pull request Apr 8, 2026
…ven-plugin (#1394)

## Summary
maven-clean-plugin:3.2.0 and exec-maven-plugin:1.2.1 were missing from
the dependency cache because the warmer ran `mvn install` and `mvn test`
without the `clean` lifecycle phase. The CI workflows use `mvn clean
test`, which requires these plugins.

Fix: add `clean` to every warmer step to match CI commands exactly.

## Test plan
- [ ] Trigger cache warmer from this PR branch (workflow_dispatch)
- [ ] Verify maven-clean-plugin and exec-maven-plugin appear in warmer
logs
- [ ] Re-run PR #1371 CI after merge

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
gopalldb added a commit that referenced this pull request Apr 9, 2026
## Summary
Two fixes for the forked PR dependency cache:

1. **Add Windows runner to warmer matrix.** GitHub Actions cache is OS-scoped: a cache saved on Linux cannot be restored on Windows, so the warmer must run on both operating systems.

2. **Timestamp-based cache keys.** GitHub caches are immutable (existing entries cannot be overwritten), which previously meant manually deleting the old entry before re-running the warmer. The key now uses the format `maven-deps-{timestamp}-{hash}`, so each run creates a fresh entry; the restore step matches on the prefix `maven-deps-` to pick up the latest, and old entries auto-expire after 7 days.

Also runs exact same Maven commands as PR CI workflows (8 steps) to
ensure all plugins are cached.

## Test plan
- [ ] Trigger warmer from this branch — verify both Linux and Windows
jobs pass
- [ ] After merge: trigger warmer on main, re-run PR #1371 CI

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
gopalldb and others added 4 commits April 9, 2026 09:54
Coverage was 84.91% due to DatabricksCallableStatement's throw-only
methods having unreachable bytecode that JaCoCo counts as missed.
Added targeted tests to close the gap:

- CallableStatementTest: cover 6 remaining typed setXXX(String, Blob/
  Clob/NClob/SQLXML/RowId/URL) overloads
- ConnectionTest: cover getMetaData, beginRequest, endRequest, and
  prepareCall with valid holdability
- ParameterMetaDataTest: cover isNullable, isSigned, getPrecision,
  getScale
- WildcardUtilTest: cover isNullOrWildcard, jdbcPatternToHive(null)

Coverage: 84.91% -> 85.05% (+32 instruction buffer)

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Keep main's 8-step approach that mirrors exact CI commands.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
@vikrantpuppala

Code Review: DatabricksCallableStatement — CallableStatement with IN Parameters

Overall this is a solid implementation. The design choice to extend DatabricksPreparedStatement is the right call — it follows the JDBC hierarchy and gets all setXXX(int, value) / execution / batch functionality for free. The test coverage is thorough (30 tests covering construction, execution, OUT param rejection, named param rejection, and error messages). The {call ...} escape conversion at construction time is a clever solution to ensure shouldReturnResultSet works correctly regardless of escapeProcessing.

A few issues to address before merge:


Critical: Constructor validates return-value syntax AFTER calling super()

DatabricksCallableStatement.java:70-71

super(connection, convertCallEscapeSyntax(sql));   // line 70
validateNoReturnValueSyntax(sql);                   // line 71

If a user passes {? = call my_func()}, the CALL_ESCAPE_SYNTAX pattern (\{\s*call\s+...) does not match {? = call ...} (because ?\s*=\s* precedes call), so convertCallEscapeSyntax returns the original string unchanged. The parent constructor then receives {? = call my_func()} as-is, which:

  1. Won't match CALL_PATTERN (^CALL\b) in shouldReturnResultSetWithConfig, so shouldReturnResultSet is set to false (wrong for a callable statement)
  2. Partially initializes the parent object before the validation on line 71 throws

The statement gets added to statementSet in DatabricksConnection.prepareCall() only after the constructor returns, so there's no leak — but the ordering is still incorrect.

Fix: Move validation before the super() call using a static factory method:

public DatabricksCallableStatement(DatabricksConnection connection, String sql)
    throws SQLException {
  super(connection, validateAndConvert(sql));
  LOGGER.debug("Created DatabricksCallableStatement for SQL: {}", sql);
}

private static String validateAndConvert(String sql) throws SQLException {
  if (sql != null && RETURN_VALUE_SYNTAX.matcher(sql).find()) {
    throw new DatabricksSQLFeatureNotSupportedException(
        "Return value syntax {? = call ...} is not supported. "
            + "Use {call proc(...)} and retrieve results via the ResultSet.");
  }
  return convertCallEscapeSyntax(sql);
}

Major: Missing closed-connection check in prepareCall(String) and prepareCall(String, int, int)

DatabricksConnection.java

The 3-arg prepareCall(sql, type, concurrency, holdability) correctly checks isClosed(), but the 1-arg and 2-arg overloads do not. I note that prepareStatement(String) also lacks this check (pre-existing), so this is consistent with the existing pattern — but since you're adding the check in the 3-arg overload, it would be good to be consistent across all three prepareCall overloads.

The 2-arg overload delegates to the 1-arg, so adding the check to prepareCall(String) covers both:

@Override
public CallableStatement prepareCall(String sql) throws SQLException {
  throwExceptionIfConnectionIsClosed();
  LOGGER.debug(...);
  DatabricksCallableStatement statement = new DatabricksCallableStatement(this, sql);
  statementSet.add(statement);
  return statement;
}

Severity note: This is a pre-existing inconsistency (prepareStatement has the same gap), so I wouldn't block the PR on it. But since you're already touching these methods, it's a good opportunity to fix.


Major: executeUpdate() always throws for callable statements

Since CALL_PATTERN matches, shouldReturnResultSet is true, and DatabricksPreparedStatement.executeUpdate() throws when shouldReturnResultSet is true. This means executeUpdate() always fails for callable statements, even for DML procedures.

The Javadoc correctly documents the workaround ("use execute() and check getUpdateCount()"), but per JDBC spec Section 13.2, executeUpdate() should be usable for stored procedures that don't return result sets.

This is acceptable as a known limitation for v1, but consider:

  1. Overriding executeUpdate() in DatabricksCallableStatement with a clearer error message specific to callable statements (rather than the generic prepared statement error)
  2. Adding this to the "Known limitations" section of the PR description ✅ (already done)

Minor: RETURN_VALUE_SYNTAX pattern doesn't account for string literals

DatabricksCallableStatement.java:43

The pattern \{\s*\?\s*=\s*call\b performs a naive search of the full SQL string. A (contrived) SQL like:

CALL proc('{? = call foo}')

would be incorrectly rejected because the pattern appears inside a string literal argument. This is an unlikely edge case for callable statements, but worth a Javadoc note.
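The false positive can be checked directly; the pattern below mirrors the one quoted from DatabricksCallableStatement, and the SQL is the contrived example above (the class name is hypothetical).

```java
import java.util.regex.Pattern;

// Demonstrates the reviewer's point: the naive pattern matches inside a
// string literal, so this SQL would be incorrectly rejected.
public class ReturnValueFalsePositive {
  static final Pattern RETURN_VALUE_SYNTAX =
      Pattern.compile("\\{\\s*\\?\\s*=\\s*call\\b", Pattern.CASE_INSENSITIVE);

  public static void main(String[] args) {
    String sql = "CALL proc('{? = call foo}')";
    System.out.println(RETURN_VALUE_SYNTAX.matcher(sql).find()); // true
  }
}
```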


Minor: CALL_ESCAPE_SYNTAX uses [^}]* which doesn't handle nested braces

DatabricksCallableStatement.java:52

Already documented in the Javadoc as a known limitation consistent with StringUtil.convertJdbcEscapeSequences. No action needed — just confirming the documentation is accurate. 👍


Nit: Test testPrepareCallOnClosedConnectionThrows only tests the 3-arg overload

DatabricksCallableStatementTest.java:148-156

The closed-connection test only verifies the 3-arg prepareCall(sql, type, concurrency, holdability). If the closed-connection check is added to the 1-arg overload (per the suggestion above), add a test for that path too.


Nit: Debug log messages are method signatures

Throughout DatabricksCallableStatement, debug log messages are method signatures like "public void registerOutParameter(int parameterIndex, int sqlType)". This is consistent with the parent class's logging style, so not specific to this PR, but including actual parameter values would be more useful for debugging.


Summary

| Severity | # | Key Items |
| --- | --- | --- |
| Critical | 1 | Constructor validation ordering — validate before super() |
| Major | 2 | Missing closed-connection check on 1-arg/2-arg prepareCall; executeUpdate() always throws |
| Minor | 2 | Regex false positive on string literals; nested brace limitation (documented) |
| Nit | 2 | Closed-connection test coverage; debug log content |

The critical item (constructor ordering) should be fixed before merge. The rest are improvements that could go in a follow-up.

Test coverage is strong — construction, execution, escape processing independence, batch operations, parameter metadata, exhaustive OUT/named parameter rejection, and error message validation are all well covered. Nice work. 👏


This comment was generated with GitHub MCP.
