feat(backend/kernel): wire TSparkParameter through to kernel bind_param#789
Open
vikrantpuppala wants to merge 1 commit into
Lifts the NotSupportedError that execute_command currently raises
for parametrized queries. The kernel-side PyO3 binding for
Statement.bind_param landed in databricks-sql-kernel#18; this
commit wires the connector's TSparkParameter shape through to it.
Implementation:
- New `bind_tspark_params(kernel_stmt, parameters)` in
type_mapping.py forwards each TSparkParameter to the kernel as
`(ordinal, value.stringValue, type)`. ordinal is the 1-based
position in the parameters list; the connector's `ordinal: bool`
flag is checked only to reject named bindings (kernel v0 doesn't
accept them on the wire).
- execute_command no longer raises on `parameters=[...]`. The
query_tags branch stays — that's a separate gap.
Tests:
- 6 new unit tests in tests/unit/test_kernel_type_mapping.py for
the mapper:
- positional forwarding preserves ordering and (ordinal, value, type)
- None value forwards as SQL NULL
- VOID passes through verbatim (kernel parser ignores value for VOID)
- named bindings raise NotSupportedError with a pointed message
- missing TSparkParameter.type defaults to STRING (defensive)
- empty parameters list is a no-op
- 3 new e2e tests in tests/e2e/test_kernel_backend.py against
dogfood:
- mixed-type round-trip (INT, STRING, BOOLEAN) via the
connector's native IntegerParameter/StringParameter/BooleanParameter
- None parameter (VoidParameter → SQL NULL)
- DECIMAL parameter with precision/scale carried in the SQL type
string (auto-inferred — explicit-arg path has a pre-existing
bug in native.py where format-args are swapped)
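The DECIMAL auto-inference the e2e test exercises can be sketched roughly as below. This is a hypothetical illustration, not the connector's actual `native.py` code: `infer_decimal_type` is an invented name, shown only to make the precision/scale-in-the-type-string idea (and the format-arg ordering the swapped-args bug gets wrong) concrete.

```python
# Hypothetical sketch: derive a "DECIMAL(p,s)" SQL type string from a
# decimal.Decimal, as the auto-inferred parameter path would need to.
from decimal import Decimal

def infer_decimal_type(d: Decimal) -> str:
    sign, digits, exponent = d.as_tuple()
    scale = max(0, -exponent)            # digits after the decimal point
    precision = max(len(digits), scale)  # total significant digits
    # Precision must come first; swapping these two format args is the
    # shape of the pre-existing native.py bug mentioned above.
    return f"DECIMAL({precision},{scale})"
```

For example, `Decimal("123.45")` has digits `(1, 2, 3, 4, 5)` and exponent `-2`, so it infers as `DECIMAL(5,2)`.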
Unit pass: 45/45 (was 39). Full unit suite: 625 pass.
Live e2e: 14/14 (was 11).
Co-authored-by: Isaac
Signed-off-by: Vikrant Puppala <vikrant.puppala@databricks.com>
Summary
Lifts the `NotSupportedError` the kernel backend currently raises for parametrized queries. The kernel-side PyO3 binding for `Statement.bind_param` lands in databricks-sql-kernel#18; this PR wires the connector's `TSparkParameter` shape through to it.
Before this PR, calling `cursor.execute("SELECT ?", [IntegerParameter(42)])` with `use_sea=True` raised:
```
NotSupportedError: Parameter binding is not yet supported on the kernel backend
```
After: positional parameter binding works end-to-end.
What it does
`bind_tspark_params(kernel_stmt, parameters)` in `type_mapping.py` forwards each `TSparkParameter` to the kernel as `(ordinal, value.stringValue, type)`, where the ordinal is the parameter's 1-based position in the list.
The connector's `ordinal: bool` flag is checked only to reject named bindings — kernel v0 doesn't accept named bindings on the SEA wire, so we surface that at the connector layer with a pointed `NotSupportedError` rather than letting the server reject the request.
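The forwarding logic can be sketched as below. This is a self-contained sketch, not the real `type_mapping.py` code: `TSparkParam`, `TSparkParamValue`, and `FakeKernelStatement` are stand-ins for the connector's Thrift structs and the kernel's PyO3 `Statement`, reduced to the fields this path touches.

```python
# Sketch of bind_tspark_params. All class names here are hypothetical
# stand-ins modeled on the shapes described in this PR.
from dataclasses import dataclass
from typing import Optional

class NotSupportedError(Exception):
    pass

@dataclass
class TSparkParamValue:
    stringValue: Optional[str] = None

@dataclass
class TSparkParam:
    value: Optional[TSparkParamValue] = None
    type: Optional[str] = None
    ordinal: bool = True  # False means a named binding

class FakeKernelStatement:
    """Records (ordinal, value, type) triples instead of binding for real."""
    def __init__(self):
        self.bound = []
    def bind_param(self, ordinal, value, sql_type):
        self.bound.append((ordinal, value, sql_type))

def bind_tspark_params(kernel_stmt, parameters):
    for i, p in enumerate(parameters, start=1):  # ordinal is 1-based
        if not p.ordinal:
            # Kernel v0 has no named bindings on the wire; fail at the
            # connector layer with a pointed error.
            raise NotSupportedError(
                "Named parameter bindings are not supported on the kernel backend"
            )
        sql_type = p.type if p.type is not None else "STRING"  # defensive default
        value = p.value.stringValue if p.value is not None else None  # None => SQL NULL
        kernel_stmt.bind_param(i, value, sql_type)
```

Under these stand-in types, an `IntegerParameter(42)` would arrive here as roughly `TSparkParam(value=TSparkParamValue("42"), type="INT")` and forward as `(1, "42", "INT")`; an empty parameter list is a no-op.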
What's supported
Every type the connector's native parameter classes emit, including INT, STRING, BOOLEAN, DECIMAL, and VOID (None).
What's not supported (known gaps, raises `NotSupportedError`): named parameter bindings; the `query_tags` branch is a separate gap and still raises.
Test plan
6 new unit tests in `tests/unit/test_kernel_type_mapping.py` — positional forwarding, None / VOID, named binding rejection, missing-type defaulting, empty list. 45/45 pass.
Pre-existing unit suite still green: 625 pass.
3 new live e2e tests in `tests/e2e/test_kernel_backend.py` against dogfood:
14/14 pass.
`cargo test` / `cargo clippy` / `cargo fmt --check` clean on the kernel side (databricks-sql-kernel#18).
This pull request and its description were written by Isaac.