Implement GCP container deployments #275
Conversation
Warning: Rate limit exceeded

To keep reviews running without waiting, you can enable the usage-based add-on for your organization. This allows additional reviews beyond the hourly cap; account admins can enable it under billing.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

📝 Walkthrough

This PR introduces Google Cloud container deployment support via a strategy-based architecture and updates request ID resolution across language runtimes to prefer the x-cloud-trace-context trace header (falling back to function-execution-id).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor User
participant GCP as GCP Orchestrator
participant Strategy as Deployment Strategy
participant CloudAPI as Google Cloud API
participant Registry as Artifact Registry
participant Container as Cloud Run/Functions
User->>GCP: deploy_function(GCPFunction)
alt Deployment Type: Container
GCP->>GCP: select RunContainerStrategy
GCP->>Strategy: create_function(function, image_uri)
Strategy->>Registry: push_image(container_uri)
Registry-->>Strategy: digest@sha256:...
Strategy->>CloudAPI: Cloud Run service create/update
CloudAPI->>Container: deploy service
Container-->>CloudAPI: service ready
CloudAPI-->>Strategy: status
Strategy-->>GCP: deployment result
else Deployment Type: Gen1/Gen2
GCP->>GCP: select CloudFunctionGen1Strategy
GCP->>Strategy: create_function(function)
Strategy->>CloudAPI: Cloud Functions API v1
CloudAPI->>Container: provision function
Container-->>CloudAPI: deployed
CloudAPI-->>Strategy: status
Strategy-->>GCP: deployment result
end
GCP-->>User: function deployed
User->>GCP: invoke_function(function)
alt Deployment Type: Container
GCP->>Container: HTTP POST (Cloud Run)
Container->>Container: execute handler
Container-->>GCP: response + x-cloud-trace-context
else Deployment Type: Gen1
GCP->>Container: Cloud Functions library call
Container->>Container: execute handler
Container-->>GCP: result + function-execution-id
end
GCP-->>User: invocation result
```

```mermaid
sequenceDiagram
participant Handler as Language Handler
participant CloudRuntime as Cloud Runtime
participant TraceHeader as x-cloud-trace-context
participant ExecHeader as function-execution-id
CloudRuntime->>Handler: HTTP Request
Handler->>Handler: extract request_id
Handler->>TraceHeader: check header
alt TraceHeader present
TraceHeader-->>Handler: trace_id
Handler->>Handler: normalize_request_id(trace_id)
else TraceHeader absent
Handler->>ExecHeader: check fallback header
ExecHeader-->>Handler: exec_id
Handler->>Handler: use exec_id as request_id
end
Handler->>Handler: execute function logic
Handler->>Handler: build response with request_id
Handler-->>CloudRuntime: HTTP Response + metadata
```
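The request-ID flow in this second diagram boils down to a small resolution helper. The following is an illustrative sketch only; the function name and fallback default are assumptions, not the PR's actual wrapper code in benchmarks/wrappers/gcp/python/handler.py:

```python
def resolve_request_id(headers: dict) -> str:
    """Prefer the Cloud Run/Functions trace header, fall back to the Gen1 execution id."""
    trace = headers.get("x-cloud-trace-context")
    if trace:
        # Header looks like "TRACE_ID/SPAN_ID;o=OPTIONS"; keep only the trace id part.
        return trace.split("/")[0]
    # Gen1 invocations expose the execution id instead of a trace context.
    return headers.get("function-execution-id", "unknown")
```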
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
36a431c to 32a401e (Compare)
We should wait for deployment after each creation and update; no need to do extra checks
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
sebs/aws/aws.py (1)
637-649: ⚠️ Potential issue | 🟡 Minor
Docstring lags the new signature; runtime breakage tracked at root cause.

The `function` parameter is now a `Function` per the new signature, but the docstring still says `func_name: Name of the function to delete`. Please refresh.

The actual failure (`function.name` on a dict passed by `System.cleanup_functions`) is a downstream effect of the change in `sebs/faas/system.py` — see the comment there for the recommended fix.

Also, the broad `except Exception` collapses every failure mode (IAM denied, throttled, network) into "Function does not exist!", which can hide real cleanup bugs. Consider catching `self.client.exceptions.ResourceNotFoundException` specifically and re-raising/logging others.
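A minimal sketch of that narrowed error handling, assuming a boto3 Lambda client on `self.client` (the surrounding class and logger follow sebs/aws/aws.py; this is illustrative, not the exact patch):

```python
def delete_function(self, function: Function) -> None:
    """Delete the Lambda function backing the given Function object."""
    try:
        self.client.delete_function(FunctionName=function.name)
    except self.client.exceptions.ResourceNotFoundException:
        # Expected when the function was already removed.
        self.logging.info(f"Function {function.name} does not exist.")
    except Exception:
        # Let IAM, throttling, and network failures surface instead of masking them.
        self.logging.error(f"Could not delete function {function.name}.")
        raise
```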
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@sebs/aws/aws.py` around lines 637 - 649, Update delete_function to reflect the new Function parameter and to handle AWS errors explicitly: change the docstring to document that it accepts a Function object (use function.name) and avoid catching all exceptions; catch self.client.exceptions.ResourceNotFoundException to log "does not exist" and let other exceptions propagate or log/re-raise them so IAM/network errors aren't masked. Ensure callers (e.g., System.cleanup_functions) pass a Function instance (or adapt here to accept dicts defensively) and reference the delete_function method and Function.name in your changes.

sebs/azure/azure.py (1)
671-708: ⚠️ Potential issue | 🔴 Critical

`function.name` crashes with `AttributeError` because the caller passes a serialized dict, not a Function object.

`System.cleanup_functions` (sebs/faas/system.py L742–744) does:

```python
for name, func in functions.items():
    if not dry_run:
        self.delete_function(func)
    deleted.append(name)
```
`get_all_functions` returns `Dict[str, Any]` (serialized dicts), not Function instances. However, all three providers now expect `delete_function(self, function: Function)`:

- sebs/aws/aws.py:637 — tries `function.name` immediately (L644), crashes on dict
- sebs/azure/azure.py:671 — tries `function.name` immediately (L677), crashes on dict (Azure then deserializes at L692 as a fallback, but the error occurs first)
- sebs/gcp/gcp.py:1978 — tries `cast(GCPFunction, function)` and accesses `.name` (L1991), crashes on dict

All three also have stale docstrings documenting a `func_name` parameter that no longer exists.

Fix: Make `cleanup_functions` deserialize before delegating (sebs/faas/system.py):

```diff
 for name, func in functions.items():
     if not dry_run:
-        self.delete_function(func)
+        self.delete_function(self.function_type().deserialize(func))
     deleted.append(name)
```

Then remove the redundant deserialize blocks from each provider (azure.py L685–692, and any similar logic in aws.py and gcp.py). Update all three docstrings to document `function: Function` instead of `func_name`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/azure/azure.py` around lines 671 - 708, The delete flow crashes because System.cleanup_functions passes serialized dicts from get_all_functions to provider.delete_function, but delete_function implementations (sebs/azure/azure.py: delete_function, sebs/aws/aws.py: delete_function, sebs/gcp/gcp.py: delete_function) expect a Function object and access function.name immediately; fix by changing System.cleanup_functions to deserialize each func entry (use the provider's function_type().deserialize or the shared deserialization helper) before calling delete_function so providers always receive a Function instance, then remove the now-redundant deserialize fallback logic inside each provider's delete_function (e.g., the Azure block that casts/deserializes the cached entry) and update the docstrings on all three delete_function methods to accept and document the parameter as function: Function instead of the stale func_name description.

sebs/faas/system.py (1)
722-744: ⚠️ Potential issue | 🔴 Critical
Type mismatch: `cleanup_functions` passes a dict to `delete_function(self, function: Function)`.

`self.cache_client.get_all_functions(...)` returns `Dict[str, Any]` (serialized dicts). The loop at lines 742–744 passes these dicts directly to `delete_function`, which expects a `Function` object. All three implementations immediately access `function.name` (aws.py:643, azure.py:678, gcp.py:1993), causing `AttributeError: 'dict' object has no attribute 'name'` at runtime. Fix by deserializing before delegation:

🐛 Proposed fix

```diff
 def cleanup_functions(self, dry_run: bool) -> List[str]:
     functions = self.cache_client.get_all_functions(self.name())
     deleted = []
     for name, func in functions.items():
         if not dry_run:
-            self.delete_function(func)
+            self.delete_function(self.function_type().deserialize(func))
         deleted.append(name)
```

Also fix the docstring at line 726—it incorrectly lists the parameter as `func_name` when the actual parameter is `function: Function`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/faas/system.py` around lines 722 - 744, The cleanup_functions method is passing serialized dicts from cache_client.get_all_functions(...) into delete_function(function: Function), causing AttributeError when implementations read function.name; update cleanup_functions to deserialize each dict into a Function instance before calling delete_function (e.g., convert the dict returned by cache_client.get_all_functions(self.name()) into a Function object) and ensure the loop passes that Function object to self.delete_function; also correct the delete_function docstring parameter name from func_name to function: Function so it matches the signature and behavior.

sebs/gcp/gcp.py (2)
1920-1976: ⚠️ Potential issue | 🟡 Minor
Docstring drift: `container_uri` is not a parameter of this method.

`update_function_configuration` takes `(function, code_package, env_variables)` only. The docstring lists a `container_uri` arg that doesn't exist.

```diff
         code_package: Benchmark package with configuration requirements
         env_variables: Additional environment variables to set
-        container_uri: Container image URI (for container deployments)

     Returns:
         Version ID of the updated function
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/gcp.py` around lines 1920 - 1976, The docstring for update_function_configuration is out of sync: it lists a non-existent parameter container_uri; update the docstring for the function update_function_configuration to remove any reference to container_uri (or replace it with the actual parameters: function, code_package, env_variables) and ensure the Args section, Returns, and Raises descriptions correctly match the function signature and behavior (including that it returns a version/id int and may raise RuntimeError).
2003-2030: ⚠️ Potential issue | 🟠 Major
Fragile dispatch: `"-" in function_name` is not a reliable proxy for deployment type.

Today this works because `format_function_name` strips hyphens for Gen1 and `default_function_name` appends `-docker` for container deployments. But:

- Any future renaming convention that introduces a hyphen into Gen1 names will silently route Gen1 functions to `RunContainerStrategy.download_execution_metrics` and report zero metrics.
- A user-overridden function name containing `-` already breaks this.

The function objects in `requests`/elsewhere already carry `deployment_type`. Either accept a `Function` (the way `delete_function`/`is_deployed` were updated) or cache-look-up the function by name to recover the type.

🛠️ Sketch of a more robust dispatch

```diff
-        strategy = (
-            self.run_container_strategy
-            if "-" in function_name
-            else self.cloud_function_gen1_strategy
-        )
+        # Look up the cached function to recover its deployment type
+        cached_function = cast(
+            Optional[GCPFunction],
+            self.cache_client.get_function(deployment="gcp", benchmark=None, name=function_name),
+        )
+        is_container = (
+            cached_function is not None
+            and cached_function.deployment_type.is_container
+        ) or "-" in function_name  # fallback heuristic
+        strategy = (
+            self.run_container_strategy if is_container else self.cloud_function_gen1_strategy
+        )
```

(Adapt to the cache-lookup API actually exposed by `Cache`.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/gcp.py` around lines 2003 - 2030, The dispatch in download_metrics currently uses the fragile "-" in function_name check; change it to use the authoritative deployment_type instead: update download_metrics (or its callers) to accept a Function object (or perform a Cache lookup by function_name to retrieve the Function) and read function.deployment_type (or equivalent) to select between run_container_strategy and cloud_function_gen1_strategy; then call strategy.download_execution_metrics(function_name, start_time, end_time, requests) as before. Locate the download_metrics method and replace the "-" check with a conditional on the Function.deployment_type (or cache-retrieved deployment_type) so routing is robust to name changes and user-supplied names.
🧹 Nitpick comments (8)
benchmarks/wrappers/gcp/python/setup.py (1)
13-13: `package_data` glob may pick up unintended files.

`glob('**', recursive=True)` is evaluated at the build cwd and will include `setup.py` itself and any incidental files (e.g. `__pycache__`, build leftovers, `.pyc`). Consider scoping the pattern or, simpler, rely on `MANIFEST.in`/`include_package_data=True`.
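A hedged sketch of the scoped alternative; the package name `function` and the file patterns are assumptions about this wrapper's layout, not the actual setup.py:

```python
from setuptools import setup

setup(
    name="function",
    packages=["function"],
    # Scope data files to the package instead of glob("**", recursive=True),
    # or drop package_data entirely and rely on MANIFEST.in + include_package_data=True.
    package_data={"function": ["*.txt", "*.json"]},
)
```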
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@benchmarks/wrappers/gcp/python/setup.py` at line 13, The package_data entry in setup.py currently uses glob('**', recursive=True) which will capture unintended files (including setup.py, __pycache__, .pyc); change this to either scope the pattern to the package directory (e.g., only include files under the "function" package via a pattern like 'function/**') or remove package_data and set include_package_data=True combined with a proper MANIFEST.in so only intended files are packaged; update the package_data line in setup.py (and ensure MANIFEST.in exists if using include_package_data) accordingly.

dockerfiles/gcp/python/Dockerfile.build (1)
10-12: `gpg-agent` is now unused.

It was needed for the deadsnakes PPA key, which has been removed. Since base images switched to `python:<ver>-slim`, no third-party apt repositories are added anymore. Safe to drop and shrink the build image.

♻️ Proposed cleanup

```diff
 RUN apt-get update\
-    && apt-get install -y --no-install-recommends gcc build-essential libxml2 libxml2-dev zlib1g-dev gpg-agent zip\
+    && apt-get install -y --no-install-recommends gcc build-essential libxml2 libxml2-dev zlib1g-dev zip\
     && apt-get purge -y --auto-remove
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dockerfiles/gcp/python/Dockerfile.build` around lines 10 - 12, Remove the now-unused package gpg-agent from the RUN apt-get install line in the Dockerfile build stage (the RUN command that installs gcc build-essential libxml2 libxml2-dev zlib1g-dev gpg-agent zip) so the image no longer installs gpg-agent; update that RUN invocation to only install the needed packages (gcc, build-essential, libxml2, libxml2-dev, zlib1g-dev, zip) and leave the apt-get purge/auto-remove step intact to shrink the final build image.

sebs/gcp/function.py (2)
128-135: Optional: serialize the enum value explicitly.
`self.deployment_type` is a str-Enum so `json.dumps` will emit its value, but the cached dict in memory will still hold the Enum object until serialization. Using `.value` makes the round-trip with `FunctionDeploymentType.deserialize(cached_config["deployment_type"])` symmetric and avoids surprises if the cache implementation ever bypasses JSON.

```diff
         out = {
             **super().serialize(),
             "container-uri": self._container_uri,
             "bucket": self.bucket,
-            "deployment_type": self.deployment_type,
+            "deployment_type": self.deployment_type.value,
             "deployment_config": self._deployment_config.serialize(),
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/function.py` around lines 128 - 135, The serialized dict currently stores the Enum object under the "deployment_type" key; update the Function.serialize() output to store the enum's primitive by replacing the current "deployment_type": self.deployment_type entry with its value (use self.deployment_type.value) so that cached dicts hold the same primitive string that FunctionDeploymentType.deserialize(cached_config["deployment_type"]) expects and to keep in-memory cache behavior symmetric with JSON round-trips.
85-95: Docstring nit: `container-gen1` is not a real value, and parameter order is wrong.

`FunctionDeploymentType` only has `FUNCTION_GEN1`, `FUNCTION_GEN2`, and `CONTAINER`. Also, in the docstring `bucket` is listed before `deployment_config` but the signature has `deployment_config` before `bucket`.

```diff
-        deployment_type: Type of deployment (function-gen1, container-gen1, etc.)
-        bucket: Optional Cloud Storage bucket name for code storage
+        deployment_type: Type of deployment (function-gen1, function-gen2, container)
         deployment_config: Deployment-specific configuration
+        bucket: Optional Cloud Storage bucket name for code storage
+        container_uri: Optional container image URI for container deployments
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/function.py` around lines 85 - 95, The docstring for the GCP Cloud Function initializer misstates valid deployment types and lists parameters out of order; update the docstring on the __init__ (or the function that initializes the Cloud Function instance) to list the correct FunctionDeploymentType values (FUNCTION_GEN1, FUNCTION_GEN2, CONTAINER) instead of "container-gen1" and reorder the parameter list in the docstring to match the actual signature (put deployment_config before bucket) while preserving existing descriptions for name, benchmark, code_package_hash, cfg, deployment_type, deployment_config, and bucket.

sebs/gcp/container.py (1)
138-152: Preserve the original exception chain on push failure.

When the docker login/push fails, the re-raised `RuntimeError` masks the underlying `DockerException`. Use `raise ... from e` so callers and tracebacks retain the cause (also satisfies Ruff B904).

```diff
         except docker.errors.DockerException as e:
             self.logging.error(f"Failed to push the image to registry {repository_uri}.")
-            self.logging.error(f"Error: {str(e)}")
-            raise RuntimeError("Couldn't push to registry.")
+            self.logging.error(f"Error: {e}")
+            raise RuntimeError("Couldn't push to registry.") from e
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/container.py` around lines 138 - 152, In push_image, the except block catches docker.errors.DockerException as e and raises a new RuntimeError which loses the original exception chain; change the re-raise to use "raise RuntimeError(\"Couldn't push to registry.\") from e" so the original DockerException (e) is preserved in the traceback and meets Ruff B904; update the raise in push_image's except handler to include "from e" while keeping the existing log statements and error message.

sebs/gcp/triggers.py (1)
192-205: Optional: avoid creating a fresh `ThreadPoolExecutor` per `async_invoke` call.
HTTPTrigger.async_invokefollows the same pattern, so a fix should likely cover both.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/triggers.py` around lines 192 - 205, The async_invoke method currently creates a new ThreadPoolExecutor on every call (pool = concurrent.futures.ThreadPoolExecutor()) and never shuts it down, leaking threads; change this to reuse a single executor by moving the executor to a longer-lived scope (e.g., a class-level attribute on this trigger class or a module-level executor) and update async_invoke to submit to that shared executor (still calling self.sync_invoke(payload)); apply the same change to HTTPTrigger.async_invoke so both triggers share the reuse pattern, and ensure you provide a shutdown/close method or document lifecycle semantics so the executor can be cleanly shutdown when the application/trigger is torn down.dockerfiles/gcp/python/Dockerfile.function (1)
1-45: Container runs as root (Trivy DS-0002).Functions Framework / gunicorn does not need root at runtime; consider creating and switching to a non-root user before
CMDto reduce blast radius. This is hardening, not blocking.🛡️ Example hardening
ENV PYTHONPATH="/" + +RUN useradd --create-home --uid 1001 app && chown -R app /ff_app.py /function +USER 1001 CMD ["sh", "-c", "exec gunicorn --bind :${PORT:-8080} --workers ${GUNICORN_WORKERS:-1} --threads ${GUNICORN_THREADS:-8} --timeout 0 ff_app:app"]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dockerfiles/gcp/python/Dockerfile.function` around lines 1 - 45, The container runs as root; create a non-root user and switch to it before CMD to harden runtime: in the Dockerfile add a RUN that creates a user/group (e.g., nonroot), ensure ownership of installed files (the installed packages and the function/ directory and /ff_app.py) is changed to that user (chown), set a suitable HOME/WORKDIR if needed, and then add a USER nonroot line before the existing CMD so gunicorn/functions-framework run as the non-root user instead of root.sebs/gcp/gcp.py (1)
1016-1016: Cloud Run create/patch/update bypasses_execute_with_retry.The Gen1 strategy consistently funnels every API call through
_execute_with_retry, but the Cloud Run strategy uses barereq.execute()forcreate,patch(update_code), andpatch(update_config). Cloud Run's admin API is just as susceptible to transient 429/503s. Wrapping these would make the two strategies' resilience symmetric.- self._operation_response = create_req.execute() + self._operation_response = self._execute_with_retry(self.logging, create_req)Also applies to: 1055-1055, 1088-1088
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/gcp.py` at line 1016, The Cloud Run strategy is calling create_req.execute() (and similarly the patch calls in update_code and update_config) directly, bypassing the retry logic; replace those direct execute() calls with calls to the existing _execute_with_retry wrapper (e.g., call self._execute_with_retry(create_req) and likewise for the patch requests) so create, patch (update_code) and patch (update_config) all funnel through _execute_with_retry and gain the same 429/503 retry resilience as the Gen1 strategy.
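For illustration, a retry helper along these lines is what both strategies would funnel requests through; the real `_execute_with_retry` in sebs/gcp/gcp.py may differ in signature and backoff policy, so treat this as a sketch rather than the PR's code:

```python
import time

from googleapiclient.errors import HttpError


def execute_with_retry(logging, request, attempts: int = 5, base_delay: float = 2.0):
    """Execute a googleapiclient request, retrying transient 429/503 responses with backoff."""
    for attempt in range(attempts):
        try:
            return request.execute()
        except HttpError as err:
            if err.resp.status in (429, 503) and attempt + 1 < attempts:
                delay = base_delay * (2 ** attempt)
                logging.info(f"Transient HTTP {err.resp.status}; retrying in {delay:.0f}s.")
                time.sleep(delay)
            else:
                raise
```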
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/wrappers/gcp/python/setup.py`:
- Around line 1-3: Replace the deprecated distutils import with setuptools:
change the import line that currently reads "from distutils.core import setup"
to import setup from setuptools instead, ensure any uses of setup() continue to
call the setuptools.setup function, and keep/continue using
pkg_resources.parse_requirements (or switch to setuptools' utilities if desired)
so the module doesn't depend on distutils and will work on Python 3.12; update
any packaging-related comments or metadata to reflect the setuptools usage.
In `@configs/python.json`:
- Around line 70-96: The configuration has mismatched concurrency settings: in
the "configuration" object the "function-gen2" block sets "gcp-concurrency": 80
and "worker-concurrency": 1 while the "container" block sets "gcp-concurrency":
1 and "worker-concurrency": 80, which will cap Container instances to a single
request regardless of worker threads; decide whether the intent is per-instance
request concurrency (gcp-concurrency) or internal worker count
(worker-concurrency) and make them consistent—either set both blocks to
"gcp-concurrency": 80 and "worker-concurrency": 80 to compare raw throughput, or
set both to "gcp-concurrency": 1 and "worker-concurrency": 1 if you want
single-request-per-instance behavior; update the "function-gen2" and "container"
entries accordingly so gcp-concurrency and worker-concurrency align with the
intended performance model.
In `@configs/systems.json`:
- Around line 291-295: The configs/systems.json exposes "arm64" entries but
deploy_function (sebs/gcp/gcp.py) still raises RuntimeError("GCP does not
support arm64 deployments"), and the arm64 images are duplicates without
platform args; update either the config or the code: if arm64 support isn't
implemented yet, remove or restrict "arm64" from the architecture listings in
configs/systems.json (where "arm64" appears for Python versions) so only "x64"
is advertised; otherwise implement real arm64 support by removing the
RuntimeError in deploy_function (sebs/gcp/gcp.py), ensure the
Dockerfile.function build path uses the manylinux_2_17_aarch64 branch and passes
--platform=linux/arm64 (or sets buildx/args) so images are actually built for
arm64, and add tests/validation to the deploy flow to avoid advertising
unsupported architectures.
In `@dockerfiles/gcp/python/Dockerfile.function`:
- Around line 14-35: The RUN block that checks TARGET_ARCHITECTURE uses the
Bash-only [[ ... == ... ]] syntax which fails under the default /bin/sh in the
base images; update the conditional expressions in that RUN (the checks that set
PLATFORM_ARG and the subsequent if that tests for requirements files) to POSIX
shell form using [ "$TARGET_ARCHITECTURE" = "arm64" ] and similar single-bracket
tests (replace == with = and [[ with [ and ensure proper spacing and closing ]),
or alternatively add a SHELL directive to use /bin/bash before the RUN; adjust
the conditional references to TARGET_ARCHITECTURE and PLATFORM_ARG accordingly
so the build works under /bin/sh.
In `@sebs/gcp/config.py`:
- Around line 484-505: The create_container_repository function currently polls
Artifact Registry in an unbounded while True loop (using operation and op_name)
which can hang; add a total timeout (e.g., 300 seconds) and polling deadline
logic: record a start timestamp before the loop, on each iteration check elapsed
time against the timeout, log and raise a timeout-specific Exception if
exceeded, and otherwise continue the existing polling flow and sleep(2). Make
sure to reference the existing variables operation, op_name and op when adding
the timeout check and error path so behavior and logging remain consistent.
- Around line 346-353: GCPConfiguration.initialize currently assumes keys exist
and unconditionally calls GCPFunctionGen1Config.deserialize,
GCPFunctionGen2Config.deserialize and GCPContainerConfig.deserialize which
raises KeyError for partial configs; change the method to use
dct.get("function-gen1", {}) / dct.get("function-gen2", {}) /
dct.get("container", {}) and only call the respective .deserialize(...) when the
returned dict is truthy, otherwise leave config._function_gen1_config,
config._function_gen2_config, and config._container_config at their constructor
defaults; reference GCPConfiguration.initialize and the three _* _config
attributes plus the GCPFunctionGen1Config.deserialize,
GCPFunctionGen2Config.deserialize, and GCPContainerConfig.deserialize calls to
locate where to apply this conditional logic.
- Around line 301-313: GCPContainerConfig.deserialize currently uses strict dict
indexing which raises KeyError for missing keys; change it to mirror the
permissive behavior of GCPFunctionGen2Config.deserialize by using dct.get(...)
with sensible defaults for environment, vcpus, gcp-concurrency,
worker-concurrency, min-instances, max-instances, cpu-boost, and cpu-throttle
(keep worker-threads as-is), or delegate to a shared helper that provides
default values so cached/partial configs don't error; update the
GCPContainerConfig.deserialize method to read each field with .get and
appropriate default values consistent with the parent class.
- Around line 469-483: In check_container_repository_exists, change the log
level used when catching a 404 from self.logging.error to a non-error level
(self.logging.info or self.logging.debug) because a missing repo is expected
(get_container_repository/create_container_repository handle creation); update
the message there accordingly (e.g., "Container repository does not exist.") and
keep raising other HttpError cases unchanged so callers like
get_container_repository can proceed without spurious error logs.
In `@sebs/gcp/container.py`:
- Around line 54-78: In find_image, fix three bugs: replace the strict check
("latest" in image["tags"] and image_tag in image["tags"]) with a simple
membership test using image.get("tags", []) and only check for image_tag (i.e.,
use image_tag in image.get("tags", [])) to avoid KeyError on untagged images and
to honor the method contract; and implement pagination by iterating through all
pages returned by
self.ar_client.projects().locations().repositories().dockerImages().list(...)
using the client’s list_next() or nextPageToken loop so every page of
dockerImages is searched before returning False (keep the existing HttpError
handling and GCPCredentials usage intact).
In `@sebs/gcp/gcp.py`:
- Around line 1451-1458: The docstring for the container_client property
incorrectly references AWS/ECR; update the docstring to describe the
GCP/GCR-specific container manager instead (mention GCRContainer and that it
returns the GCR container manager via self.gcr_client), adjust the summary and
Returns section to reference GCP/GCR rather than AWS/ECR, and ensure the return
type description matches the annotated type GCRContainer | None and the actual
returned attribute self.gcr_client.
- Around line 1016-1123: The current approach stores operation state on
self._operation_response in create(), update_code(), and update_config(), which
is brittle for reused/shared RunContainerStrategy; modify create(),
update_code(), and update_config() (symbols: create, update_code, update_config)
to return the operation name (extracted from req.execute() response e.g.
response["name"]) instead of mutating self, and change wait_for_deployment to
accept an op_name parameter (rename to wait_for_deployment(op_name, func_name)
or overload) so it reads the operation via
operations().wait(name=op_name).execute() without relying on hasattr/delattr;
remove uses of self._operation_response and update callers (e.g.,
enforce_cold_start) to thread the returned op_name through to
wait_for_deployment to avoid stale/overwritten state.
- Around line 458-462: The code sets self._expected_version in update_code but
never uses it; either remove the dead assignment or thread it into the
deployment-wait path—modify update_code to pass the expected version to
wait_for_deployment/_wait_for_active_status (or have wait_for_deployment read
self._expected_version) so the stored int(res["metadata"]["versionId"]) is
actually used; reference symbols: update_code, wait_for_deployment,
_wait_for_active_status, and _expected_version.
In `@sebs/gcp/triggers.py`:
- Around line 132-151: LibraryTrigger.sync_invoke currently raises
NotImplementedError for FunctionDeploymentType.CONTAINER and FUNCTION_GEN2 while
GCP.create_function unconditionally attaches a LibraryTrigger to every function;
update GCP.create_function to only add a LibraryTrigger when deployment_type ==
FunctionDeploymentType.FUNCTION_GEN1 (skip attaching for CONTAINER and
FUNCTION_GEN2), and also harden LibraryTrigger.sync_invoke by adding a graceful
fallback: if self._deployment_type is CONTAINER or FUNCTION_GEN2, locate the
function's HTTP trigger (e.g., an HTTPTrigger instance or http URL on the
function object) and forward the call to that HTTP trigger (or perform an HTTP
POST to the function URL) instead of raising NotImplementedError, otherwise keep
using _invoke_gen1_function for FUNCTION_GEN1.
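To make the last item above concrete, here is a hedged sketch of the graceful fallback. The attribute `self.function.triggers` is an assumption introduced for illustration; only `FunctionDeploymentType`, `HTTPTrigger`, `_deployment_type`, and `_invoke_gen1_function` come from the code under review:

```python
def sync_invoke(self, payload: dict):
    # Gen1 deployments keep using the Cloud Functions client library.
    if self._deployment_type == FunctionDeploymentType.FUNCTION_GEN1:
        return self._invoke_gen1_function(payload)
    # Containers and Gen2 have no library invocation path: forward to an HTTP trigger.
    http_trigger = next(
        (t for t in self.function.triggers if isinstance(t, HTTPTrigger)), None
    )
    if http_trigger is None:
        raise RuntimeError("No HTTP trigger available for container/Gen2 invocation")
    return http_trigger.sync_invoke(payload)
```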
---
Outside diff comments:
In `@sebs/aws/aws.py`:
- Around line 637-649: Update delete_function to reflect the new Function
parameter and to handle AWS errors explicitly: change the docstring to document
that it accepts a Function object (use function.name) and avoid catching all
exceptions; catch self.client.exceptions.ResourceNotFoundException to log "does
not exist" and let other exceptions propagate or log/re-raise them so
IAM/network errors aren't masked. Ensure callers (e.g.,
System.cleanup_functions) pass a Function instance (or adapt here to accept
dicts defensively) and reference the delete_function method and Function.name in
your changes.
In `@sebs/azure/azure.py`:
- Around line 671-708: The delete flow crashes because System.cleanup_functions
passes serialized dicts from get_all_functions to provider.delete_function, but
delete_function implementations (sebs/azure/azure.py: delete_function,
sebs/aws/aws.py: delete_function, sebs/gcp/gcp.py: delete_function) expect a
Function object and access function.name immediately; fix by changing
System.cleanup_functions to deserialize each func entry (use the provider's
function_type().deserialize or the shared deserialization helper) before calling
delete_function so providers always receive a Function instance, then remove the
now-redundant deserialize fallback logic inside each provider's delete_function
(e.g., the Azure block that casts/deserializes the cached entry) and update the
docstrings on all three delete_function methods to accept and document parameter
as function: Function instead of the stale func_name description.
In `@sebs/faas/system.py`:
- Around line 722-744: The cleanup_functions method is passing serialized dicts
from cache_client.get_all_functions(...) into delete_function(function:
Function), causing AttributeError when implementations read function.name;
update cleanup_functions to deserialize each dict into a Function instance
before calling delete_function (e.g., convert the dict returned by
cache_client.get_all_functions(self.name()) into a Function object) and ensure
the loop passes that Function object to self.delete_function; also correct the
delete_function docstring parameter name from func_name to function: Function so
it matches the signature and behavior.
In `@sebs/gcp/gcp.py`:
- Around line 1920-1976: The docstring for update_function_configuration is out
of sync: it lists a non-existent parameter container_uri; update the docstring
for the function update_function_configuration to remove any reference to
container_uri (or replace it with the actual parameters: function, code_package,
env_variables) and ensure the Args section, Returns, and Raises descriptions
correctly match the function signature and behavior (including that it returns a
version/id int and may raise RuntimeError).
- Around line 2003-2030: The dispatch in download_metrics currently uses the
fragile "-" in function_name check; change it to use the authoritative
deployment_type instead: update download_metrics (or its callers) to accept a
Function object (or perform a Cache lookup by function_name to retrieve the
Function) and read function.deployment_type (or equivalent) to select between
run_container_strategy and cloud_function_gen1_strategy; then call
strategy.download_execution_metrics(function_name, start_time, end_time,
requests) as before. Locate the download_metrics method and replace the "-"
check with a conditional on the Function.deployment_type (or cache-retrieved
deployment_type) so routing is robust to name changes and user-supplied names.
---
Nitpick comments:
In `@benchmarks/wrappers/gcp/python/setup.py`:
- Line 13: The package_data entry in setup.py currently uses glob('**',
recursive=True) which will capture unintended files (including setup.py,
__pycache__, .pyc); change this to either scope the pattern to the package
directory (e.g., only include files under the "function" package via a pattern
like 'function/**') or remove package_data and set include_package_data=True
combined with a proper MANIFEST.in so only intended files are packaged; update
the package_data line in setup.py (and ensure MANIFEST.in exists if using
include_package_data) accordingly.
In `@dockerfiles/gcp/python/Dockerfile.build`:
- Around line 10-12: Remove the now-unused package gpg-agent from the RUN
apt-get install line in the Dockerfile build stage (the RUN command that
installs gcc build-essential libxml2 libxml2-dev zlib1g-dev gpg-agent zip) so
the image no longer installs gpg-agent; update that RUN invocation to only
install the needed packages (gcc, build-essential, libxml2, libxml2-dev,
zlib1g-dev, zip) and leave the apt-get purge/auto-remove step intact to shrink
the final build image.
In `@dockerfiles/gcp/python/Dockerfile.function`:
- Around line 1-45: The container runs as root; create a non-root user and
switch to it before CMD to harden runtime: in the Dockerfile add a RUN that
creates a user/group (e.g., nonroot), ensure ownership of installed files (the
installed packages and the function/ directory and /ff_app.py) is changed to
that user (chown), set a suitable HOME/WORKDIR if needed, and then add a USER
nonroot line before the existing CMD so gunicorn/functions-framework run as the
non-root user instead of root.
In `@sebs/gcp/container.py`:
- Around line 138-152: In push_image, the except block catches
docker.errors.DockerException as e and raises a new RuntimeError which loses the
original exception chain; change the re-raise to use "raise
RuntimeError(\"Couldn't push to registry.\") from e" so the original
DockerException (e) is preserved in the traceback and meets Ruff B904; update
the raise in push_image's except handler to include "from e" while keeping the
existing log statements and error message.
In `@sebs/gcp/function.py`:
- Around line 128-135: The serialized dict currently stores the Enum object
under the "deployment_type" key; update the Function.serialize() output to store
the enum's primitive by replacing the current "deployment_type":
self.deployment_type entry with its value (use self.deployment_type.value) so
that cached dicts hold the same primitive string that
FunctionDeploymentType.deserialize(cached_config["deployment_type"]) expects and
to keep in-memory cache behavior symmetric with JSON round-trips.
- Around line 85-95: The docstring for the GCP Cloud Function initializer
misstates valid deployment types and lists parameters out of order; update the
docstring on the __init__ (or the function that initializes the Cloud Function
instance) to list the correct FunctionDeploymentType values (FUNCTION_GEN1,
FUNCTION_GEN2, CONTAINER) instead of "container-gen1" and reorder the parameter
list in the docstring to match the actual signature (put deployment_config
before bucket) while preserving existing descriptions for name, benchmark,
code_package_hash, cfg, deployment_type, deployment_config, and bucket.
In `@sebs/gcp/gcp.py`:
- Line 1016: The Cloud Run strategy is calling create_req.execute() (and
similarly the patch calls in update_code and update_config) directly, bypassing
the retry logic; replace those direct execute() calls with calls to the existing
_execute_with_retry wrapper (e.g., call self._execute_with_retry(create_req) and
likewise for the patch requests) so create, patch (update_code) and patch
(update_config) all funnel through _execute_with_retry and gain the same 429/503
retry resilience as the Gen1 strategy.
In `@sebs/gcp/triggers.py`:
- Around line 192-205: The async_invoke method currently creates a new
ThreadPoolExecutor on every call (pool =
concurrent.futures.ThreadPoolExecutor()) and never shuts it down, leaking
threads; change this to reuse a single executor by moving the executor to a
longer-lived scope (e.g., a class-level attribute on this trigger class or a
module-level executor) and update async_invoke to submit to that shared executor
(still calling self.sync_invoke(payload)); apply the same change to
HTTPTrigger.async_invoke so both triggers share the reuse pattern, and ensure
you provide a shutdown/close method or document lifecycle semantics so the
executor can be cleanly shutdown when the application/trigger is torn down.
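A minimal sketch of the shared-executor pattern described in the item above; the class shape and worker count are illustrative assumptions, while the method names and trigger class come from the review:

```python
import concurrent.futures


class LibraryTrigger:
    # One pool shared by all invocations instead of a fresh executor per call.
    _executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)

    def sync_invoke(self, payload: dict):
        ...  # existing synchronous invocation logic

    def async_invoke(self, payload: dict) -> concurrent.futures.Future:
        # Callers still receive a Future, but worker threads are reused.
        return self._executor.submit(self.sync_invoke, payload)

    @classmethod
    def shutdown_executor(cls) -> None:
        """Optional teardown hook so the pool can be closed deterministically."""
        cls._executor.shutdown(wait=False)
```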
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6436cd96-f011-44ae-a99d-0f8c01aecf19
📒 Files selected for processing (15)
benchmarks/wrappers/gcp/python/handler.py, benchmarks/wrappers/gcp/python/setup.py, configs/python.json, configs/systems.json, dockerfiles/gcp/nodejs/Dockerfile.build, dockerfiles/gcp/python/Dockerfile.build, dockerfiles/gcp/python/Dockerfile.function, sebs/aws/aws.py, sebs/azure/azure.py, sebs/faas/system.py, sebs/gcp/config.py, sebs/gcp/container.py, sebs/gcp/function.py, sebs/gcp/gcp.py, sebs/gcp/triggers.py
Actionable comments posted: 7
♻️ Duplicate comments (2)
sebs/gcp/gcp.py (2)
1015-1018: ⚠️ Potential issue | 🟠 Major
Remove shared `_operation_response` state from `RunContainerStrategy`.

State stored on the strategy instance is brittle for reused/interleaved calls (overwrite/stale wait risks). Thread the operation name explicitly from `create`/`update_*` into `wait_for_deployment`.

💡 Proposed direction

```diff
-    def create(...) -> None:
+    def create(...) -> str:
         ...
-        self._operation_response = create_req.execute()
+        op = self._execute_with_retry(self.logging, create_req)
+        return op["name"]

-    def wait_for_deployment(self, func_name: str) -> None:
-        op_name = self._operation_response["name"]
+    def wait_for_deployment(self, func_name: str, op_name: str) -> None:
         op_res = self.run_client.projects().locations().operations().wait(name=op_name).execute()
-        delattr(self, "_operation_response")
```

Also applies to: 1054-1057, 1087-1090, 1099-1123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/gcp.py` around lines 1015 - 1018, The RunContainerStrategy currently stores create/update operation state in self._operation_response which is brittle for concurrent or reused instances; modify create and the update_* methods (e.g., create, update_service, update_traffic) to return or expose the operation name/string (from create_req.execute() or equivalent) and change wait_for_deployment to accept that operation name as an explicit parameter instead of reading self._operation_response; remove uses of self._operation_response and update all call sites (including where logging happens around operation creation at the create block and the blocks around lines referenced) to pass the operation name into wait_for_deployment so no shared mutable operation state remains on the strategy instance.
461-462: ⚠️ Potential issue | 🟡 Minor
`_expected_version` is still dead state.

The assignment is not consumed in the deployment wait path, so the version-pinning intent is currently unused.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/gcp.py` around lines 461 - 462, The assignment to self._expected_version in the deployment code is unused; update the deployment wait path (wait_for_deployment) to consume this pin by reading the resource's metadata/versionId during its polling loop and comparing it to self._expected_version (or accept expected_version as a parameter) so the waiter only returns when the observed version matches the pinned value; locate references to self._expected_version, the method wait_for_deployment, and the place that sets _expected_version and thread them together (pass the expected value into the poll or check it inside the loop) and ensure timeouts/errors still behave correctly if the version never matches.
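One hedged way to consume the pin, assuming the Gen1 get-function response exposes `status` and `versionId` as in the Cloud Functions v1 API; this is a sketch only, to be adapted to the strategy's actual waiter:

```python
import time


def wait_for_version(functions_client, full_name: str, expected_version: int,
                     timeout_s: float = 600.0, poll_s: float = 5.0) -> None:
    """Poll until the function reports ACTIVE status at (or beyond) the expected version."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        res = functions_client.get(name=full_name).execute()
        if res.get("status") == "ACTIVE" and int(res.get("versionId", -1)) >= expected_version:
            return
        time.sleep(poll_s)
    raise RuntimeError(f"{full_name} never reached version {expected_version} within {timeout_s:.0f}s")
```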
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dockerfiles/gcp/java/Dockerfile.function`:
- Around line 37-46: The final Docker stage runs as root because no USER is set;
update the Dockerfile.function final stage to create an unprivileged user (e.g.,
add a non-root group/user), chown /workspace (where function.jar and
java-function-invoker.jar are copied) to that user, and then set USER before the
CMD so the java command (the CMD invoking java -jar
/workspace/java-function-invoker.jar --classpath /workspace/function.jar
--target org.serverlessbench.Handler --port ${PORT:-8080}) runs unprivileged;
ensure any ENV (JAVA_TOOL_OPTIONS) remains accessible to the non-root user.
In `@dockerfiles/gcp/python/Dockerfile.function`:
- Around line 11-38: Remove the explicit "USER root" line and rely on the base
image's default unprivileged user; in the Dockerfile remove the USER root
instruction and ensure the subsequent steps (the RUN lines that call "pip
install --target .", "touch function/__init__.py", and the "printf ... >
/ff_app.py" creation) run as the base image user and write into directories that
are writable by that user (adjust working directory or file/directory ownership
beforehand if necessary) while leaving the CMD that launches gunicorn
(ff_app:app) unchanged.
In `@sebs/faas/function.py`:
- Around line 330-337: The shared parser is truncating
ExecutionResult.request_id by splitting on "/" which can corrupt non-GCP IDs and
loses the original value; remove the in-place normalization (the if "/" in
self.request_id: ... block that mutates self.request_id) and instead perform any
slash-truncation only in the GCP-specific flow that prepares queries or create a
separate field (e.g., request_id_query or trace_id_normalized) to hold the
truncated value while leaving ExecutionResult.request_id intact for
debugging/correlation; update callers that rely on the truncated value to use
the new field or the GCP flow transformation.
In `@sebs/gcp/gcp.py`:
- Around line 1010-1011: Different transforms are being applied to container
service names across lifecycle operations (creation, trigger lookup, deployment
checks, deletion), causing mismatched service IDs; fix by centralizing name
normalization: add a single helper (e.g., normalize_service_name) and call it
once to produce a canonical service_id used everywhere instead of repeatedly
transforming func_name locally (replace usages like serviceId=func_name and
other spots around the service_body assignments and the blocks at the referenced
locations 1160-1162, 1231-1234, 1251-1254, 1542-1548 with the canonical
service_id).
- Around line 501-504: The code is double-waiting after calling
CloudFunctionGen1Strategy.update_config because update_config already waits for
ACTIVE and returns the updated version; remove or conditionalize the
unconditional call to strategy.wait_for_deployment(function.name) so we don't
force a build wait on config-only updates. Specifically, in the deployment flow
that calls update_config (CloudFunctionGen1Strategy.update_config) and then
calls strategy.wait_for_deployment(function.name), either drop the
wait_for_deployment call or only call it when update_config returns no
version/indicates a build was triggered; use the returned value from
update_config (or _wait_for_active_status/function version) to decide whether to
skip strategy.wait_for_deployment.
- Around line 336-340: The current exception handling in function_exists
swallows all HttpError instances and returns False; change it to catch HttpError
as e and inspect the HTTP status: if the response status is 404 return False,
otherwise re-raise the exception so permission/transient errors bubble up.
Locate the try/except around self._execute_with_retry(self.logging, get_req) in
function_exists and replace the bare except HttpError with except HttpError as
e: check e.resp.status (or e.status_code) for 404 then return False, else raise.
Apply the same fix to the other identical HttpError handling block later in the
file that follows the same pattern.
- Line 1422: The initialize function currently declares a mutable default for
parameter config (config: Dict[str, str] = {}), which can lead to shared state;
change the signature to use an immutable default (e.g., config:
Optional[Dict[str, str]] = None) and inside the initialize function normalize it
with something like if config is None: config = {} so callers get a fresh dict
and the anti-pattern is avoided; update any references to initialize and its
config parameter accordingly.
---
Duplicate comments:
In `@sebs/gcp/gcp.py`:
- Around line 1015-1018: The RunContainerStrategy currently stores create/update
operation state in self._operation_response which is brittle for concurrent or
reused instances; modify create and the update_* methods (e.g., create,
update_service, update_traffic) to return or expose the operation name/string
(from create_req.execute() or equivalent) and change wait_for_deployment to
accept that operation name as an explicit parameter instead of reading
self._operation_response; remove uses of self._operation_response and update all
call sites (including where logging happens around operation creation at the
create block and the blocks around lines referenced) to pass the operation name
into wait_for_deployment so no shared mutable operation state remains on the
strategy instance.
- Around line 461-462: The assignment to self._expected_version in the
deployment code is unused; update the deployment wait path (wait_for_deployment)
to consume this pin by reading the resource's metadata/versionId during its
polling loop and comparing it to self._expected_version (or accept
expected_version as a parameter) so the waiter only returns when the observed
version matches the pinned value; locate references to self._expected_version,
the method wait_for_deployment, and the place that sets _expected_version and
thread them together (pass the expected value into the poll or check it inside
the loop) and ensure timeouts/errors still behave correctly if the version never
matches.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 533791a9-2c8a-4aeb-a33e-d75bcf042054
📒 Files selected for processing (16)
benchmarks/wrappers/gcp/java/src/main/java/org/serverlessbench/Handler.java, benchmarks/wrappers/gcp/nodejs/handler.js, benchmarks/wrappers/gcp/python/handler.py, benchmarks/wrappers/gcp/python/setup.py, configs/python.json, configs/systems.json, dockerfiles/gcp/java/Dockerfile.function, dockerfiles/gcp/nodejs/Dockerfile.build, dockerfiles/gcp/nodejs/Dockerfile.function, dockerfiles/gcp/python/Dockerfile.build, dockerfiles/gcp/python/Dockerfile.function, sebs/aws/aws.py, sebs/azure/azure.py, sebs/faas/function.py, sebs/faas/system.py, sebs/gcp/gcp.py
✅ Files skipped from review due to trivial changes (1)
- benchmarks/wrappers/gcp/nodejs/handler.js
🚧 Files skipped from review as they are similar to previous changes (3)
- benchmarks/wrappers/gcp/python/setup.py
- benchmarks/wrappers/gcp/python/handler.py
- configs/systems.json
```dockerfile
FROM $BASE_IMAGE

WORKDIR /workspace

COPY --from=builder /workspace/function.jar /workspace/function.jar
COPY --from=builder /workspace/java-function-invoker.jar /workspace/java-function-invoker.jar

ENV JAVA_TOOL_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"

CMD ["sh", "-c", "exec java -jar /workspace/java-function-invoker.jar --classpath /workspace/function.jar --target org.serverlessbench.Handler --port ${PORT:-8080}"]
```
Switch the final image away from root.
The final stage never sets a non-root USER, so the deployed container will run with full privileges. Please switch to an unprivileged runtime user in the last stage.
🔧 Suggested fix
```diff
 FROM $BASE_IMAGE
 WORKDIR /workspace
 COPY --from=builder /workspace/function.jar /workspace/function.jar
 COPY --from=builder /workspace/java-function-invoker.jar /workspace/java-function-invoker.jar
 ENV JAVA_TOOL_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
+RUN useradd --system --create-home --uid 10001 appuser
+USER appuser
 CMD ["sh", "-c", "exec java -jar /workspace/java-function-invoker.jar --classpath /workspace/function.jar --target org.serverlessbench.Handler --port ${PORT:-8080}"]
```
CMD ["sh", "-c", "exec java -jar /workspace/java-function-invoker.jar --classpath /workspace/function.jar --target org.serverlessbench.Handler --port ${PORT:-8080}"]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```dockerfile
FROM $BASE_IMAGE
WORKDIR /workspace
COPY --from=builder /workspace/function.jar /workspace/function.jar
COPY --from=builder /workspace/java-function-invoker.jar /workspace/java-function-invoker.jar
ENV JAVA_TOOL_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
RUN useradd --system --create-home --uid 10001 appuser
USER appuser
CMD ["sh", "-c", "exec java -jar /workspace/java-function-invoker.jar --classpath /workspace/function.jar --target org.serverlessbench.Handler --port ${PORT:-8080}"]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dockerfiles/gcp/java/Dockerfile.function` around lines 37 - 46, The final
Docker stage runs as root because no USER is set; update the Dockerfile.function
final stage to create an unprivileged user (e.g., add a non-root group/user),
chown /workspace (where function.jar and java-function-invoker.jar are copied)
to that user, and then set USER before the CMD so the java command (the CMD
invoking java -jar /workspace/java-function-invoker.jar --classpath
/workspace/function.jar --target org.serverlessbench.Handler --port
${PORT:-8080}) runs unprivileged; ensure any ENV (JAVA_TOOL_OPTIONS) remains
accessible to the non-root user.
```dockerfile
USER root

RUN pip install --no-cache-dir functions-framework==3.10.1 gunicorn==25.3.0 \
    && pip cache purge

RUN touch function/__init__.py \
    && if test -f "function/requirements.txt.${PYTHON_VERSION}"; then \
        pip install --no-cache-dir ${PLATFORM_ARG} --target . \
            -r function/requirements.txt \
            -r function/requirements.txt.${PYTHON_VERSION} \
            function/ && \
        pip cache purge; \
    else \
        pip install --no-cache-dir ${PLATFORM_ARG} --target . \
            -r function/requirements.txt \
            function/ && \
        pip cache purge; \
    fi

RUN printf '%s\n' \
    'from functions_framework import create_app' \
    'app = create_app(target="handler", source="function/handler.py")' \
    > /ff_app.py

ENV PYTHONPATH="/"

CMD ["sh", "-c", "exec gunicorn --bind :${PORT:-8080} --workers ${GUNICORN_WORKERS:-1} --threads ${GUNICORN_THREADS:-8} --timeout 0 ff_app:app"]
```
🧩 Analysis chain
Repository: spcl/serverless-benchmarks

- 🏁 Script executed: `cat -n dockerfiles/gcp/python/Dockerfile.function` (Length of output: 1422)
- 🏁 Script executed: `rg "BASE_IMAGE" --type dockerfile --type txt --type yaml -B 2 -A 2 | head -50` (Length of output: 105)
- 🏁 Script executed: `rg "BASE_IMAGE" -B 2 -A 2 | head -100` (Length of output: 5261)
- 🏁 Script executed: `rg "gcp.*python" -A 5 -B 5 | grep -i "base\|image" | head -50` (Length of output: 387)
- 🏁 Script executed: `find . -name "*.yaml" -o -name "*.yml" -o -name "*.json" | xargs grep -l "gcp" | head -5` (Length of output: 207)
- 🏁 Script executed: `rg "gcp.*python" configs/ -B 3 -A 10` (Length of output: 52)
- 🏁 Script executed: `find configs -name "*.json" -exec cat {} \; | grep -A 10 -B 10 "gcp" | head -100` (Length of output: 2159)
- 🏁 Script executed: `rg "python.*3\." sebs/ -B 3 -A 3 | grep -i "gcp\|base\|image" | head -50` (Length of output: 52)
- 🏁 Script executed: `find sebs -name "*.py" | xargs grep -l "gcp.*python" | head -3` (Length of output: 89)
- 🏁 Script executed: `rg "python" sebs/faas/gcp.py -B 3 -A 3 | head -50` (Length of output: 127)
- 🏁 Script executed: `rg "base_image" sebs/faas/ -B 2 -A 2` (Length of output: 890)
- 🏁 Script executed: `rg "benchmark_base_images" sebs/ -B 3 -A 10 | head -80` (Length of output: 1825)
- 🏁 Script executed: `rg "benchmark_base_images" sebs/config.py -A 30` (Length of output: 1380)
- 🏁 Script executed: `find . -name "*.yaml" -o -name "*.yml" -o -name "*.json" | xargs grep -l "base_images" | head -5` (Length of output: 93)
- 🏁 Script executed: `rg "gcp" ./configs/systems.json -B 3 -A 20 | grep -A 20 "python"` (Length of output: 908)
- 🏁 Script executed: `rg "USER" dockerfiles/ | grep -v "root"` (Length of output: 1874)
- 🏁 Script executed: `rg "USER root" dockerfiles/ -B 5 -A 10` (Length of output: 6699)
- 🏁 Script executed: `cat dockerfiles/gcp/java/Dockerfile.function | tail -20` (Length of output: 934)
- 🏁 Script executed: `cat dockerfiles/gcp/nodejs/Dockerfile.function | tail -20` (Length of output: 552)
Remove USER root and rely on the base image's default unprivileged user.
The operations in this Dockerfile do not require root privileges. The pip install --target . command installs to the current directory without elevation, and touch and printf operations succeed without root. The GCP Java function runtime follows this pattern without an explicit USER root statement. Keeping the root user unnecessarily weakens the container security boundary established by the Google-provided base image.
🧰 Tools
🪛 Trivy (0.69.3)
[error] 11-11: Image user should not be 'root'
Last USER command in Dockerfile should not be 'root'
Rule: DS-0002
(IaC/Dockerfile)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dockerfiles/gcp/python/Dockerfile.function` around lines 11 - 38, Remove the
explicit "USER root" line and rely on the base image's default unprivileged
user; in the Dockerfile remove the USER root instruction and ensure the
subsequent steps (the RUN lines that call "pip install --target .", "touch
function/__init__.py", and the "printf ... > /ff_app.py" creation) run as the
base image user and write into directories that are writable by that user
(adjust working directory or file/directory ownership beforehand if necessary)
while leaving the CMD that launches gunicorn (ff_app:app) unchanged.
serviceId=func_name,
body=service_body,  # type: ignore[arg-type]
Normalize container service names once and reuse everywhere.
Creation, trigger lookup, deployment checks, and deletion currently apply different name transforms. That can make lifecycle calls point at different service IDs.
💡 Proposed fix
+    @staticmethod
+    def _normalize_service_name(name: str) -> str:
+        return name.replace("_", "-").replace(".", "-").lower()
...
-    serviceId=func_name,
+    serviceId=self._normalize_service_name(func_name),
...
-    service_id = func_name.lower()
+    service_id = self._normalize_service_name(func_name)
...
-    service_name = func_name.replace("_", "-").lower()
+    service_name = self._normalize_service_name(func_name)

Also applies to: 1160-1162, 1231-1234, 1251-1254, 1542-1548
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sebs/gcp/gcp.py` around lines 1010 - 1011, Different transforms are being
applied to container service names across lifecycle operations (creation,
trigger lookup, deployment checks, deletion), causing mismatched service IDs;
fix by centralizing name normalization: add a single helper (e.g.,
normalize_service_name) and call it once to produce a canonical service_id used
everywhere instead of repeatedly transforming func_name locally (replace usages
like serviceId=func_name and other spots around the service_body assignments and
the blocks at the referenced locations 1160-1162, 1231-1234, 1251-1254,
1542-1548 with the canonical service_id).
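For illustration, a minimal standalone sketch of the centralized helper. The class name and the sample function name below are hypothetical; only the replacement rules come from the proposed fix.

class GCPServiceNaming:
    """Hypothetical stand-in showing a single, shared normalization rule."""

    @staticmethod
    def normalize_service_name(name: str) -> str:
        # Fold underscores and dots into dashes and lowercase the result,
        # mirroring the transforms currently scattered across the call sites.
        return name.replace("_", "-").replace(".", "-").lower()


# Create, trigger lookup, status checks, and delete should all derive their
# service ID from the same helper so they target the same Cloud Run service.
func_name = "benchmark_110.dynamic_html"
service_id = GCPServiceNaming.normalize_service_name(func_name)
print(service_id)  # benchmark-110-dynamic-html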
def initialize(
    self,
    config: Dict[str, str] = {},
🧩 Analysis chain
🏁 Scripts executed (repository: spcl/serverless-benchmarks; output length in parentheses):

# First, let's check the file structure and locate the initialize method
fd "gcp.py" | grep -E "sebs/gcp/gcp\.py"  (86)

# Read the file around line 1422 to see the initialize method
wc -l sebs/gcp/gcp.py  (91)

# Get context around line 1422
sed -n '1415,1445p' sebs/gcp/gcp.py  (1185)

# Read more of the initialize method
sed -n '1419,1460p' sebs/gcp/gcp.py  (1596)

# Check if the config parameter is used anywhere else in the initialize method, or if it's dead code
sed -n '1419,1450p' sebs/gcp/gcp.py | grep -i "config"  (406)

# Check the parent class or interface to see if initialize() is defined elsewhere; first, see what class this is
sed -n '1,100p' sebs/gcp/gcp.py | grep -E "^class|def initialize"  (106)

# Search for the class definition and parent class
rg "class.*GCP.*:" sebs/gcp/gcp.py -A 3  (237)

# Find the System class definition to check if initialize() is defined there
rg "class System" sebs/ -A 5  (926)

# Search for where System class initializes with config to understand the pattern
rg "def initialize" sebs/gcp/gcp.py sebs/core/system.py -B 2 -A 5  (466)

# Find the System base class definition
rg "class System" sebs/ -A 10 | head -50  (1575)

# Check other cloud implementations to see the pattern
fd "\.py$" sebs/aws/ sebs/azure/ | head -5  (169)

# Look at AWS implementation to see if it has the same pattern
rg "def initialize" sebs/aws/ -B 2 -A 8  (2740)

# Check the full AWS initialize method to see if config is used there
sed -n '/def initialize/,/^ def /p' sebs/aws/aws.py | head -40  (1083)

# Check if the config parameter is ever used/mutated in the GCP initialize method; look for any assignments or modifications to config
rg "config\s*=" sebs/gcp/gcp.py -B 2 -A 2 | grep -A 5 "def initialize" | head -20  (52)

# Check other implementations to see if they use the config parameter
rg "def initialize.*config" sebs/ -A 15  (1911)

# Check if initialize() is called anywhere with the config parameter
rg "\.initialize\(" sebs/ -B 2 -A 2  (14960)

# Look at the base class definition to see if initialize is defined there
sed -n '/class System/,/^class /p' sebs/faas/system.py | grep -A 20 "def initialize"  (1579)

# Verify if the config parameter is actually used anywhere in the GCP initialize method; get the full method implementation
sed -n '1419,1450p' sebs/gcp/gcp.py  (1289)

# Check if config is referenced anywhere in the method body
sed -n '1419,1450p' sebs/gcp/gcp.py | grep -i "config"  (406)

# Confirm the parameter is completely unused by checking if it's referenced in the method
awk '/def initialize\(/,/^ def /' sebs/gcp/gcp.py | grep -n "config" | head -20  (52)

# Check the base class to see if initialize() accepts config in the abstract definition
sed -n '/class System/,/^class /p' sebs/faas/system.py | grep -A 15 "def initialize"  (1347)

# Check other cloud providers (Azure, AWS) for comparison
rg "def initialize" sebs/azure/azure.py sebs/aws/aws.py -A 25 | grep -E "def initialize|config" | head -20  (572)

# Verify if there's any subclass that actually uses the config parameter
rg "def initialize" sebs/ -B 2 -A 20 | grep -A 10 "config\s*=" | head -40  (2228)
Use an immutable default for initialize(config=...).
The config parameter uses a mutable default argument {}, which is a Python anti-pattern that can cause shared state across method calls. Although the parameter is currently unused in this implementation, the anti-pattern should still be corrected.
💡 Proposed fix
-    def initialize(
-        self,
-        config: Dict[str, str] = {},
+    def initialize(
+        self,
+        config: Optional[Dict[str, str]] = None,
         resource_prefix: Optional[str] = None,
         quiet: bool = False,
     ) -> None:
+        config = config or {}

🧰 Tools
🪛 Ruff (0.15.12)
[warning] 1422-1422: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
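For readers unfamiliar with the pitfall, a small standalone demonstration of how a mutable default leaks state between calls; the function names and fields are illustrative and not taken from sebs/gcp/gcp.py:

def configure(settings: dict = {}) -> dict:
    # The same dict object is reused by every call that relies on the default.
    settings.setdefault("calls", 0)
    settings["calls"] += 1
    return settings

print(configure())  # {'calls': 1}
print(configure())  # {'calls': 2}  <- state leaked from the first call


def configure_safe(settings: dict | None = None) -> dict:
    # A fresh dict is created on each call when no argument is given.
    settings = {} if settings is None else settings
    settings.setdefault("calls", 0)
    settings["calls"] += 1
    return settings

print(configure_safe())  # {'calls': 1}
print(configure_safe())  # {'calls': 1}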
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sebs/gcp/gcp.py` at line 1422, The initialize function currently declares a
mutable default for parameter config (config: Dict[str, str] = {}), which can
lead to shared state; change the signature to use an immutable default (e.g.,
config: Optional[Dict[str, str]] = None) and inside the initialize function
normalize it with something like if config is None: config = {} so callers get a
fresh dict and the anti-pattern is avoided; update any references to initialize
and its config parameter accordingly.
♻️ Duplicate comments (1)
sebs/gcp/container.py (1)
86-119: ⚠️ Potential issue | 🟡 Minor
find_image still lacks pagination support. The method only examines the first page of results. The Artifact Registry API returns paginated responses (default ~100 images per page). For repositories with many images, this could return False even when the image exists, causing unnecessary rebuilds.
🛠️ Suggested fix with pagination
     def find_image(self, repository_name, image_tag) -> bool:
         try:
             credentials = cast(GCPCredentials, self.config.credentials)
             parent = (
                 f"projects/{credentials.project_name}"
                 f"/locations/{self.config.region}"
                 f"/repositories/{repository_name}"
             )
-            response = (
+            request = (
                 self.ar_client.projects()
                 .locations()
                 .repositories()
                 .dockerImages()
                 .list(parent=parent)
-                .execute()
             )
-            if "dockerImages" in response:
-                for image in response["dockerImages"]:
-                    if image_tag in image.get("tags", []):
-                        return True
+            while request is not None:
+                response = request.execute()
+                for image in response.get("dockerImages", []):
+                    if image_tag in image.get("tags", []):
+                        return True
+                request = (
+                    self.ar_client.projects()
+                    .locations()
+                    .repositories()
+                    .dockerImages()
+                    .list_next(previous_request=request, previous_response=response)
+                )
         except HttpError as e:
             if e.resp.status == 404:
                 return False
             raise e
         return False
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/container.py` around lines 86 - 119, The find_image method only checks the first page from Artifact Registry; update it to handle pagination by looping over pages from ar_client.projects().locations().repositories().dockerImages().list(...).execute(), checking each page's "dockerImages" for the image_tag, and using the response's "nextPageToken" to request subsequent pages (pass pageToken to .list(...)). Preserve the existing HttpError handling (including the 404 case) and return True if any page contains the tag, otherwise False after exhausting pages.
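The AI prompt above describes an equivalent page-token loop. A rough standalone sketch of that variant follows; it assumes the discovery-based Artifact Registry client (ar_client) used elsewhere in this file, passes the project/region/repository explicitly instead of reading them from self, and is illustrative rather than a drop-in replacement for the list_next-based fix above.

from googleapiclient.errors import HttpError


def image_tag_exists(ar_client, project: str, region: str, repository: str, image_tag: str) -> bool:
    """Walk every page of dockerImages.list and look for the requested tag."""
    parent = f"projects/{project}/locations/{region}/repositories/{repository}"
    images = ar_client.projects().locations().repositories().dockerImages()
    page_token = None
    while True:
        try:
            response = images.list(parent=parent, pageToken=page_token).execute()
        except HttpError as e:
            if e.resp.status == 404:  # repository does not exist yet
                return False
            raise
        for image in response.get("dockerImages", []):
            if image_tag in image.get("tags", []):
                return True
        page_token = response.get("nextPageToken")
        if not page_token:
            return False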
🧹 Nitpick comments (5)
sebs/gcp/function.py (1)
207-221: Consider using a property setter instead of a separate method.
Using container_uri() as a getter method while having set_container_uri() as a setter creates an asymmetric API. The existing pattern in this class uses properties (e.g., deployment_type, deployment_config).
♻️ Suggested refactor
-    def container_uri(self) -> str | None:
-        """Return the container image URI if this function uses one.
-
-        Returns:
-            Container image URI, or ``None`` for non-container deployments.
-        """
-        return self._container_uri
-
-    def set_container_uri(self, container_uri: str | None) -> None:
-        """Update the container image URI for this function.
-
-        Args:
-            container_uri: Container image URI to store, or ``None``.
-        """
-        self._container_uri = container_uri
+    @property
+    def container_uri(self) -> str | None:
+        """Return the container image URI if this function uses one.
+
+        Returns:
+            Container image URI, or ``None`` for non-container deployments.
+        """
+        return self._container_uri
+
+    @container_uri.setter
+    def container_uri(self, container_uri: str | None) -> None:
+        """Update the container image URI for this function.
+
+        Args:
+            container_uri: Container image URI to store, or ``None``.
+        """
+        self._container_uri = container_uri
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/function.py` around lines 207 - 221, Replace the asymmetric pair container_uri() and set_container_uri() with a single property: mark the existing container_uri method with `@property` and convert set_container_uri into a `@container_uri.setter` that assigns self._container_uri; update the docstring to remain on the property and remove the separate set_container_uri API so callers use instance.container_uri and assignment (instance.container_uri = value) consistently with other properties like deployment_type and deployment_config.
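If the property form is adopted, call sites switch from method calls to attribute access. A stripped-down, self-contained sketch of the resulting usage; the class body below is a stand-in for sebs/gcp/function.py and the image URI is a placeholder, not a value produced by this PR:

class GCPFunction:
    """Reduced stand-in showing only the container_uri property."""

    def __init__(self) -> None:
        self._container_uri: str | None = None

    @property
    def container_uri(self) -> str | None:
        return self._container_uri

    @container_uri.setter
    def container_uri(self, container_uri: str | None) -> None:
        self._container_uri = container_uri


fn = GCPFunction()
# Callers now assign and read an attribute instead of calling set_container_uri()/container_uri().
fn.container_uri = "europe-west1-docker.pkg.dev/my-project/sebs/benchmark:latest"
print(fn.container_uri)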
sebs/gcp/config.py (1)
287-306: Inconsistent deserialize pattern between Gen1 and Gen2 configs.
GCPFunctionGen1Config.deserialize uses .get() with defaults (lines 215-217), but GCPFunctionGen2Config.deserialize uses strict dict access. If new fields are added in the future or if cached configs from older versions are loaded, this will raise KeyError. Consider aligning with Gen1's pattern for forward/backward compatibility:
♻️ Suggested fix
     @staticmethod
     def deserialize(dct: Dict) -> GCPFunctionGen2Config:
         return GCPFunctionGen2Config(
-            vcpus=dct["vcpus"],
-            gcp_concurrency=dct["gcp-concurrency"],
-            worker_concurrency=dct["worker-concurrency"],
-            worker_threads=dct["worker-threads"],
-            min_instances=dct["min-instances"],
-            max_instances=dct["max-instances"],
-            cpu_boost=dct["cpu-boost"],
-            cpu_throttle=dct["cpu-throttle"],
+            vcpus=dct.get("vcpus", 1),
+            gcp_concurrency=dct.get("gcp-concurrency", 80),
+            worker_concurrency=dct.get("worker-concurrency", 80),
+            worker_threads=dct.get("worker-threads", 8),
+            min_instances=dct.get("min-instances", 0),
+            max_instances=dct.get("max-instances", 20),
+            cpu_boost=dct.get("cpu-boost", False),
+            cpu_throttle=dct.get("cpu-throttle", True),
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/config.py` around lines 287 - 306, GCPFunctionGen2Config.deserialize currently uses strict dct[...] access which can raise KeyError for missing/old fields; change GCPFunctionGen2Config.deserialize to mirror GCPFunctionGen1Config.deserialize by using dct.get("key", default) for each field (vcpus, gcp-concurrency, worker-concurrency, worker-threads, min-instances, max-instances, cpu-boost, cpu-throttle) so missing keys fall back to sensible defaults (same defaults used by GCPFunctionGen1Config.deserialize or the Gen2 constructor defaults) to ensure forward/backward compatibility.
sebs/gcp/container.py (1)
231-234: Use exception chaining for clearer tracebacks.
Per static analysis hint B904, exceptions raised in except blocks should use raise ... from err to preserve the original traceback context.
♻️ Suggested fix
         except docker.errors.DockerException as e:
             self.logging.error(f"Failed to push the image to registry {repository_uri}.")
-            self.logging.error(f"Error: {str(e)}")
-            raise RuntimeError("Couldn't push to registry.")
+            self.logging.error(f"Error: {e!s}")
+            raise RuntimeError("Couldn't push to registry.") from e
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/container.py` around lines 231 - 234, The except block catching docker.errors.DockerException should preserve the original traceback by using exception chaining; in the method handling image push (the except that references docker.errors.DockerException, self.logging and repository_uri) change the final raise to "raise RuntimeError(\"Couldn't push to registry.\") from e" so the original exception is chained while keeping the existing error logs intact.
sebs/gcp/triggers.py (2)
315-326: Same ThreadPoolExecutor concern applies to HTTPTrigger.async_invoke.
This has the same resource accumulation pattern as LibraryTrigger.async_invoke.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/triggers.py` around lines 315 - 326, The async_invoke implementation creates a new ThreadPoolExecutor on every call, leaking threads; change HTTPTrigger.async_invoke to reuse a long-lived executor (e.g., an instance attribute like self._executor or a shared module-level executor) and submit sync_invoke to that executor instead of creating a new one each time; also ensure the executor is created lazily and provide a shutdown/close path (e.g., HTTPTrigger.close or __del__) to clean up the executor when the trigger is disposed.
221-234: ThreadPoolExecutor is created but never shut down.
Each async_invoke call creates a new ThreadPoolExecutor. Python's ThreadPoolExecutor manages a pool of threads that persist until explicitly shut down or garbage collected. Under heavy invocation patterns, this could accumulate threads. Consider using a context manager or a shared executor instance:
♻️ Option 1: Shut the pool down immediately after submit
     def async_invoke(self, payload: Dict) -> concurrent.futures.Future:
-        pool = concurrent.futures.ThreadPoolExecutor()
-        fut = pool.submit(self.sync_invoke, payload)
-        return fut
+        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
+        fut = pool.submit(self.sync_invoke, payload)
+        pool.shutdown(wait=False)
+        return fut
♻️ Option 2: Reuse a module-level or instance-level executor
# At module level or as class attribute
_ASYNC_EXECUTOR = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def async_invoke(self, payload: Dict) -> concurrent.futures.Future:
    return _ASYNC_EXECUTOR.submit(self.sync_invoke, payload)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sebs/gcp/triggers.py` around lines 221 - 234, The current async_invoke creates a new ThreadPoolExecutor each call (using ThreadPoolExecutor) and never shuts it down; change async_invoke to submit tasks to a shared executor (e.g., a module-level or instance-level _ASYNC_EXECUTOR) instead of instantiating a new executor per call so threads are reused, and ensure you provide a shutdown path (e.g., call _ASYNC_EXECUTOR.shutdown() on cleanup or process exit); keep async_invoke calling _ASYNC_EXECUTOR.submit(self.sync_invoke, payload) and remove per-call ThreadPoolExecutor construction.
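One way to realize the lazily created, instance-level executor described in the prompt above; this is a reduced, self-contained sketch (the sync_invoke body is a placeholder, not the real HTTP invocation in sebs/gcp/triggers.py):

import concurrent.futures
from typing import Dict, Optional


class HTTPTrigger:
    """Reduced stand-in showing only the executor handling."""

    def __init__(self) -> None:
        self._executor: Optional[concurrent.futures.ThreadPoolExecutor] = None

    def sync_invoke(self, payload: Dict) -> Dict:
        # Placeholder for the real HTTP invocation of the deployed function.
        return {"payload": payload}

    def async_invoke(self, payload: Dict) -> concurrent.futures.Future:
        # Lazily create one pool per trigger and reuse it across invocations.
        if self._executor is None:
            self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)
        return self._executor.submit(self.sync_invoke, payload)

    def close(self) -> None:
        # Explicit shutdown path so worker threads are released deterministically.
        if self._executor is not None:
            self._executor.shutdown(wait=False)
            self._executor = None


trigger = HTTPTrigger()
future = trigger.async_invoke({"request-id": "demo"})
print(future.result())
trigger.close()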
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@sebs/gcp/container.py`:
- Around line 86-119: The find_image method only checks the first page from
Artifact Registry; update it to handle pagination by looping over pages from
ar_client.projects().locations().repositories().dockerImages().list(...).execute(),
checking each page's "dockerImages" for the image_tag, and using the response's
"nextPageToken" to request subsequent pages (pass pageToken to .list(...)).
Preserve the existing HttpError handling (including the 404 case) and return
True if any page contains the tag, otherwise False after exhausting pages.
---
Nitpick comments:
In `@sebs/gcp/config.py`:
- Around line 287-306: GCPFunctionGen2Config.deserialize currently uses strict
dct[...] access which can raise KeyError for missing/old fields; change
GCPFunctionGen2Config.deserialize to mirror GCPFunctionGen1Config.deserialize by
using dct.get("key", default) for each field (vcpus, gcp-concurrency,
worker-concurrency, worker-threads, min-instances, max-instances, cpu-boost,
cpu-throttle) so missing keys fall back to sensible defaults (same defaults used
by GCPFunctionGen1Config.deserialize or the Gen2 constructor defaults) to ensure
forward/backward compatibility.
In `@sebs/gcp/container.py`:
- Around line 231-234: The except block catching docker.errors.DockerException
should preserve the original traceback by using exception chaining; in the
method handling image push (the except that references
docker.errors.DockerException, self.logging and repository_uri) change the final
raise to "raise RuntimeError(\"Couldn't push to registry.\") from e" so the
original exception is chained while keeping the existing error logs intact.
In `@sebs/gcp/function.py`:
- Around line 207-221: Replace the asymmetric pair container_uri() and
set_container_uri() with a single property: mark the existing container_uri
method with `@property` and convert set_container_uri into a `@container_uri.setter`
that assigns self._container_uri; update the docstring to remain on the property
and remove the separate set_container_uri API so callers use
instance.container_uri and assignment (instance.container_uri = value)
consistently with other properties like deployment_type and deployment_config.
In `@sebs/gcp/triggers.py`:
- Around line 315-326: The async_invoke implementation creates a new
ThreadPoolExecutor on every call, leaking threads; change
HTTPTrigger.async_invoke to reuse a long-lived executor (e.g., an instance
attribute like self._executor or a shared module-level executor) and submit
sync_invoke to that executor instead of creating a new one each time; also
ensure the executor is created lazily and provide a shutdown/close path (e.g.,
HTTPTrigger.close or __del__) to clean up the executor when the trigger is
disposed.
- Around line 221-234: The current async_invoke creates a new ThreadPoolExecutor
each call (using ThreadPoolExecutor) and never shuts it down; change
async_invoke to submit tasks to a shared executor (e.g., a module-level or
instance-level _ASYNC_EXECUTOR) instead of instantiating a new executor per call
so threads are reused, and ensure you provide a shutdown path (e.g., call
_ASYNC_EXECUTOR.shutdown() on cleanup or process exit); keep async_invoke
calling _ASYNC_EXECUTOR.submit(self.sync_invoke, payload) and remove per-call
ThreadPoolExecutor construction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 94523065-f11f-4453-9dcf-cebd0456b9d5
📒 Files selected for processing (9)
- README.md
- configs/cpp.json
- configs/java.json
- configs/nodejs.json
- sebs/gcp/config.py
- sebs/gcp/container.py
- sebs/gcp/function.py
- sebs/gcp/gcp.py
- sebs/gcp/triggers.py
✅ Files skipped from review due to trivial changes (1)
- README.md
This PR enables running SeBS with the --container-deployment flag on GCP. I also changed the base image. This shouldn't be a problem for zip deployments, as the new image is also Debian-based. With GCP Functions v2, the base images will then change to the platform-native ones.
Summary by CodeRabbit
New Features
Bug Fixes
Chores