NO-JIRA: Add a dev container setup and a basic MCP server #3476
openshift-merge-bot[bot] merged 8 commits into openshift:main
Conversation
Pipeline controller notification: For optional jobs, comment. This repository is configured in: automatic mode
@smg247: This pull request explicitly references no jira issue.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
Walkthrough: Adds a new FastMCP server "sippy-dev" with multiple tools for managing local Sippy services, registers Cursor skills to call those tools, and introduces a Podman-based VS Code devcontainer (Dockerfile, init/post-create scripts, env template), related docs, and logging/configuration helpers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer (Cursor/CLI)
    participant MCP as sippy-dev MCP Server
    participant Proc as Local Process
    participant DB as Postgres
    participant Cache as Redis
    Dev->>MCP: call_mcp_tool(tool, args)
    MCP->>Proc: inspect running processes / logs
    alt already running
        Proc-->>MCP: pid / listen hint / log path
    else start required
        MCP->>Proc: spawn command (stdout/stderr -> sippy-dev-logs/)
        Proc-->>MCP: pid / start status
        MCP->>DB: perform readiness check if needed
        MCP->>Cache: perform readiness check if needed
    end
    MCP-->>Dev: return status, log tail, listen hints
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 14 | ❌ 3
❌ Failed checks (3 warnings)
✅ Passed checks (14 passed)
Actionable comments posted: 12
🧹 Nitpick comments (5)
.devcontainer/Dockerfile (1)
1-3: Pin the devcontainer toolchain instead of following moving targets.
`registry.access.redhat.com/ubi9/ubi:latest` and `gotestsum@latest` make rebuilds non-reproducible; the same commit can produce a different image later. Please pin both to explicit versions/digests.
Also applies to: 45-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/Dockerfile around lines 1 - 3, Replace moving targets with fixed versions: change the base image reference used in the FROM line (registry.access.redhat.com/ubi9/ubi:latest) to a specific, immutable tag or digest and pin any tool installs referenced elsewhere (e.g., gotestsum@latest) to explicit released versions or digests; update or add ARGs like GOLANGCI_LINT_VERSION only if they are set to a known stable version (keep ARG GOLANGCI_LINT_VERSION but ensure its value is a fixed release), and verify the gotestsum install command at the location around the gotestsum@latest occurrence is updated to a specific version so rebuilds are reproducible.
.devcontainer/init-services.sh (1)
6-19: Validate reused service containers before starting them.
`podman start` will reuse any existing `sippy-postgres`/`sippy-redis` container by name, even if it was created with a different network, image, or env. In that state the devcontainer can come up pointed at stale or unreachable services.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/init-services.sh around lines 6 - 19, Ensure existing containers are validated before reuse: for each named container (sippy-postgres and sippy-redis) call podman inspect and verify required properties (network sippy-net, expected image quay.io/enterprisedb/postgresql or redis:7-alpine, and needed env vars like POSTGRES_PASSWORD/POSTGRES_HOST_AUTH_METHOD) and only run podman start if they match; otherwise remove the stale container (podman rm -f <name>) and recreate it with the podman run command shown. Use podman inspect --format or jq to check .Config.Image, .NetworkSettings.Networks, and .Config.Env for mismatches, then conditionally start vs. remove+run for sippy-postgres and sippy-redis.
mcp/README.md (1)
64-67: Add a caution note to avoid repeated `run_e2e` executions.
Since this section defines the default test order, it should explicitly note that `run_e2e` is expensive and should be run sparingly (ideally once per request/session).
Based on learnings: Never run `make e2e` more than once per request to avoid expensive BigQuery queries and long test times.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp/README.md` around lines 64 - 67, Add a short caution under the sippy-dev-tests test order (the line listing run_lint → run_test → run_e2e) stating that run_e2e is expensive and should be executed sparingly (ideally once per request/session), and call out that running make e2e repeatedly will trigger costly BigQuery queries and long test times; update the README section that lists sippy-dev-tests to include this note so developers see it alongside the default test order.
.devcontainer/post-create.sh (2)
22-25: Differentiate “not installed” vs “not authenticated” for `gcloud`.
Right now both cases end with the same message. A quick `command -v gcloud` check would make troubleshooting clearer.
💡 Proposed refactor
```diff
 echo "==> Checking GCP auth..."
-if ! gcloud auth application-default print-access-token >/dev/null 2>&1; then
-  echo "  GCP credentials not found. Run 'gcloud auth application-default login' to authenticate."
+if ! command -v gcloud >/dev/null 2>&1; then
+  echo "  gcloud CLI not found in PATH."
+elif ! gcloud auth application-default print-access-token >/dev/null 2>&1; then
+  echo "  GCP credentials not found. Run 'gcloud auth application-default login' to authenticate."
 fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/post-create.sh around lines 22 - 25, The script currently treats missing gcloud binary and missing application-default credentials the same; add an explicit existence check using command -v gcloud before running gcloud auth application-default print-access-token and emit different messages: if command -v gcloud fails, print a “gcloud not installed” message suggesting installation, otherwise run gcloud auth application-default print-access-token and on failure print the existing “GCP credentials not found. Run 'gcloud auth application-default login'” guidance; update the block around the gcloud auth application-default print-access-token invocation to branch on the command -v gcloud result.
5-7: Pin Go tool versions to ensure reproducible devcontainer builds.
Lines 5-7 use `@latest` for `gopls`, `dlv`, and `staticcheck`, which causes non-deterministic installs and can break setups on future rebuilds.
♻️ Proposed refactor
```diff
+GOPLS_VERSION="${GOPLS_VERSION:-v0.16.2}"
+DLV_VERSION="${DLV_VERSION:-v1.24.0}"
+STATICCHECK_VERSION="${STATICCHECK_VERSION:-2024.1.1}"
+
 echo "==> Installing Go IDE tools..."
-go install golang.org/x/tools/gopls@latest
-go install github.com/go-delve/delve/cmd/dlv@latest
-go install honnef.co/go/tools/cmd/staticcheck@latest
+go install golang.org/x/tools/gopls@"${GOPLS_VERSION}"
+go install github.com/go-delve/delve/cmd/dlv@"${DLV_VERSION}"
+go install honnef.co/go/tools/cmd/staticcheck@"${STATICCHECK_VERSION}"
```

This approach allows overriding versions via environment variables while maintaining consistent defaults across rebuilds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/post-create.sh around lines 5 - 7, The install commands using `@latest` for gopls, dlv and staticcheck are non-deterministic; change them to use pinned versions (e.g., GOPLS_VERSION, DLV_VERSION, STATICCHECK_VERSION variables with sensible defaults) and update the three install lines (golang.org/x/tools/gopls, github.com/go-delve/delve/cmd/dlv, honnef.co/go/tools/cmd/staticcheck) to reference those variables so builds are reproducible while still allowing overrides via environment variables.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.cursor/skills/sippy-dev-frontend/SKILL.md:
- Line 14: The wording in SKILL.md is ambiguous about the open_browser default;
update the sentence that mentions "Defaults and `open_browser`: `mcp/server.py`"
to clearly state that default configuration values are defined in mcp/server.py
and that the `open_browser` option is a boolean which defaults to false (e.g.,
"Default values live in mcp/server.py; `open_browser` is a boolean flag that
defaults to false"). Ensure you reference the `open_browser` flag and
mcp/server.py so readers know where to find and change the default.
In @.cursor/skills/sippy-dev-regression-cache/SKILL.md:
- Line 16: The documentation incorrectly lists "mcp/server.py" as an argument to
the regression_cache call contract; remove that reference and any similar "Other
args" mention so the SKILL.md only documents actual public inputs (e.g.,
bigquery_credentials_file and the SIPPY_BIGQUERY_CREDENTIALS_FILE /
GOOGLE_APPLICATION_CREDENTIALS env vars) for the regression_cache skill, and do
not present implementation details of the MCP server (mcp/server.py) as a
call_mcp_tool argument.
In @.cursor/skills/sippy-dev-serve/SKILL.md:
- Line 13: The documentation line in SKILL.md incorrectly lists "mcp/server.py"
as an "Other args" parameter; remove that reference so the documented tool
parameters for bigquery_credentials_file only mention args accepted by
sippy_serve. Update the sentence containing "Other args: **`mcp/server.py`**" to
either delete the clause or replace it with a note that call_mcp_tool should
pass only sippy_serve args, and ensure mentions of call_mcp_tool and sippy_serve
remain to guide agents toward valid calls (do not document mcp/server.py as a
tool argument).
In @.cursor/skills/sippy-dev-tests/SKILL.md:
- Around line 11-17: Update the SKILL.md workflow description to call out
in-container limitations: explicitly note that run_lint requires CI=true to run
correctly inside the devcontainer and that run_e2e currently does not work from
within the devcontainer, so users should not expect the full run_lint → run_test
→ run_e2e flow to succeed in-container; reference the existing symbols
(call_mcp_tool, run_lint, run_test, run_e2e, bigquery_credentials_file,
GCS_SA_JSON_PATH, and sippy-dev-logs/run_*.log) and add a short caveat paragraph
explaining these constraints and any recommended workaround (e.g., run lint/e2e
from CI or an external environment).
In @.devcontainer/devcontainer.json:
- Line 25: The post-create script call (post-create.sh) runs pip to install
mcp/requirements.txt without the --ignore-scripts flag; update the pip install
invocation inside the post-create.sh script so the venv python install command
(the line that runs something like "<venv>/bin/python -m pip install -r
mcp/requirements.txt") includes the --ignore-scripts option (i.e., add
--ignore-scripts before -r) to match the npm install behavior and suppress
package install scripts.
In @.devcontainer/Dockerfile:
- Line 52: The Dockerfile RUN invoking the Claude install script is unpinned and
causes non-deterministic builds; update the RUN line that currently uses "curl
-fsSL https://claude.ai/install.sh | sh" to call the installer with an explicit
version argument via the installer's "-s <VERSION>" form (e.g., a numeric
version or "stable") so the build always installs a known release; ensure the
new RUN uses the same curl invocation but pipes to "sh -s <VERSION>" and commit
the chosen version string.
In @.devcontainer/init-services.sh:
- Around line 21-26: The init script (.devcontainer/init-services.sh) currently
loops 30 times running podman exec sippy-postgres pg_isready but always prints
"Services ready." and exits 0 even when the checks fail; update the for-loop
logic to detect timeout (e.g., track success via the pg_isready exit status or a
flag) and if the loop completes without a successful pg_isready, print a clear
error message and exit with a non-zero status instead of printing "Services
ready."; modify the block around the for loop and the echo "Services ready." to
only run on success and call exit 1 (or another non-zero code) on failure so the
devcontainer init fails fast.
In @.devcontainer/README.md:
- Around line 69-71: Add a language identifier to the fenced code block that
begins with the snippet `Cmd+Shift+P > "Dev Containers: Attach to Running
Container" > sippy-dev`; update that triple-backtick fence to include a language
tag such as `text` (e.g., ```text) so markdownlint rule MD040 is satisfied and
the block is explicitly marked as plain text.
In `@mcp/requirements.txt`:
- Line 1: Update the dependency specification for fastmcp in requirements to pin
a safe minimum version: replace the bare "fastmcp" entry with a constrained
specifier such as "fastmcp>=3.2.4" so pip will install a release that pulls
patched transitive dependencies (this change targets the fastmcp entry in the
requirements list).
In `@mcp/server.py`:
- Around line 73-80: Wrap the subprocess.run call that assigns r in migrate_db
with a try/except catching subprocess.TimeoutExpired; when a timeout occurs,
build a fallback CompletedProcess-like result (or set r to an object with
stdout/stderr containing a clear timeout message and a non-zero returncode) so
the code returns a usable status and log output path instead of raising, and
ensure you include the original timeout exception message in the stderr text;
reference the subprocess.run invocation that assigns r and the
subprocess.TimeoutExpired exception for locating the change.
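A minimal Python sketch of the suggested handling, using a simulated slow command in place of the real migrate invocation (`run_migrate` and the command are illustrative, not sippy's actual code; note that `TimeoutExpired.stdout` can be bytes even in text mode, so it is normalized defensively):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

log_path = Path(tempfile.mkdtemp()) / "migrate.log"

def _as_text(v):
    # TimeoutExpired may carry bytes even when run() used text=True; normalize.
    if v is None:
        return ""
    return v.decode("utf-8", "replace") if isinstance(v, (bytes, bytearray)) else v

def run_migrate(cmd, timeout):
    """Run cmd, always leaving something useful in log_path, even on timeout."""
    try:
        r = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        log_path.write_text(_as_text(r.stdout) + _as_text(r.stderr), encoding="utf-8")
        return f"migrate exited {r.returncode}. log: {log_path}"
    except subprocess.TimeoutExpired as e:
        partial = (_as_text(e.stdout) + _as_text(e.stderr)).strip()
        log_path.write_text(
            (partial + "\n") if partial else "(timed out with no output)\n",
            encoding="utf-8",
        )
        return f"migrate timed out after {timeout}s. log: {log_path}"

# Simulate a migration that prints progress and then hangs past the timeout.
status = run_migrate(
    [sys.executable, "-c",
     "print('applying migration 1', flush=True); import time; time.sleep(30)"],
    timeout=2,
)
print(status)
```

Either branch guarantees the returned log path actually exists with some content, which is what the prompt is asking for.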
- Around line 339-346: The spawned background process in sippy_serve() currently
inherits the parent stdin; update the subprocess.Popen call that creates proc so
it detaches stdin (set stdin to subprocess.DEVNULL) to prevent the child from
keeping the MCP session transport open or reading from the parent stdin; keep
other args (cwd=REPO_ROOT, env=os.environ.copy(), stdout=logf,
stderr=subprocess.STDOUT, start_new_session=True) unchanged when making this
change.
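A small sketch of the suggested `Popen` change, shown with a stand-in command rather than sippy's serve command; the child confirms its stdin is detached:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

log_file = Path(tempfile.mkdtemp()) / "serve.log"

with open(log_file, "w") as logf:
    proc = subprocess.Popen(
        [sys.executable, "-c",
         "import sys; print('stdin is empty:', sys.stdin.read() == '')"],
        stdin=subprocess.DEVNULL,   # detach the child from the parent's stdin
        stdout=logf,
        stderr=subprocess.STDOUT,
        start_new_session=True,     # child gets its own session, as in sippy_serve
    )
    proc.wait()

print(log_file.read_text().strip())  # stdin is empty: True
```

With `stdin=subprocess.DEVNULL` the child immediately sees EOF instead of holding the parent's stdin (and, in the MCP case, the session transport) open.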
- Around line 21-25: The _repo_path function currently allows absolute paths
and .. escapes which can let writable args overwrite files outside the repo; add
a new validator function named _repo_output_path(p: str) -> Path that expands
and resolves p but enforces the path is inside REPO_ROOT (reject/raise if
path.is_absolute() and not under REPO_ROOT after resolve or if resolved
path.relative_to(REPO_ROOT) would fail), create parent dirs as needed, and
return the resolved Path; then replace uses of writable output args (notably the
log_file call sites in regression_cache, sippy_serve, and sippy_ng_start) to
call _repo_output_path instead of _repo_path so writable outputs cannot escape
the workspace.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 02714681-5d29-4292-abb1-cad56781673d
📒 Files selected for processing (19)
- .cursor/mcp.json
- .cursor/skills/sippy-dev-app/SKILL.md
- .cursor/skills/sippy-dev-frontend/SKILL.md
- .cursor/skills/sippy-dev-migrate/SKILL.md
- .cursor/skills/sippy-dev-regression-cache/SKILL.md
- .cursor/skills/sippy-dev-serve/SKILL.md
- .cursor/skills/sippy-dev-tests/SKILL.md
- .devcontainer/.env.example
- .devcontainer/Dockerfile
- .devcontainer/README.md
- .devcontainer/devcontainer.json
- .devcontainer/init-services.sh
- .devcontainer/post-create.sh
- .gitignore
- AGENTS.md
- DEVELOPMENT.md
- mcp/README.md
- mcp/requirements.txt
- mcp/server.py
@coderabbitai[bot]: changing LGTM is restricted to collaborators.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.devcontainer/init-services.sh:
- Around line 15-19: The Redis startup sequence currently just starts or runs a
container (sippy-redis) but never verifies Redis is accepting connections, so
the script can falsely report success once PostgreSQL is ready; update the
sippy-redis startup block to fail fast: after the existing podman start/run for
sippy-redis, run a readiness probe (e.g. attempt TCP connect to localhost:6379
or run redis-cli PING inside the sippy-redis container) with a short timeout and
retry loop, and if the probe fails after N attempts log an error and exit
non-zero so the script does not proceed to the "services ready" success path
tied to PostgreSQL readiness. Ensure the probe references the same container
name sippy-redis and port 6379 and that any podman run/start failure also
triggers an immediate exit.
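The TCP readiness probe suggested here can be sketched in Python against a throwaway local listener (in the real script this would be a shell retry loop or `redis-cli PING` inside the container):

```python
import socket
import time

def wait_for_port(host, port, attempts=30, delay=0.1):
    """Probe host:port until a TCP connect succeeds or attempts run out."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(delay)
    return False

# Demo against a local listener instead of a real Redis container.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

print("ready:", wait_for_port("127.0.0.1", port))
server.close()
```

A failed probe after N attempts is the signal to log an error and exit non-zero rather than falling through to the success path.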
In `@mcp/server.py`:
- Around line 390-391: The docstring claims absolute log paths outside the repo
are allowed, but the actual enforcement in _safe_log_path confines log files to
the repo root; update the docstring for the surrounding function/class to state
that log_file paths are normalized and validated to remain within the repository
root (absolute paths outside the repo are rejected), and mention CRA default
port 3000 and that starting is skipped if a matching npm/react-scripts process
is running to keep behavior description accurate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: fd8a6226-c33b-4a42-990f-205e7a94e501
📒 Files selected for processing (10)
- .cursor/skills/sippy-dev-frontend/SKILL.md
- .cursor/skills/sippy-dev-regression-cache/SKILL.md
- .cursor/skills/sippy-dev-serve/SKILL.md
- .cursor/skills/sippy-dev-tests/SKILL.md
- .devcontainer/README.md
- .devcontainer/init-services.sh
- .devcontainer/post-create.sh
- mcp/README.md
- mcp/requirements.txt
- mcp/server.py
✅ Files skipped from review due to trivial changes (3)
- .cursor/skills/sippy-dev-serve/SKILL.md
- .cursor/skills/sippy-dev-regression-cache/SKILL.md
- .cursor/skills/sippy-dev-frontend/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (2)
- .devcontainer/post-create.sh
- mcp/README.md
@coderabbitai[bot]: changing LGTM is restricted to collaborators.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@coderabbitai, please close out issues that have been fixed once you do your next review |
✅ Actions performed: Full review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
.devcontainer/init-services.sh (1)
15-19: Consider adding fail-fast error handling for Redis startup.
The Redis container startup (lines 15-19) doesn't explicitly check if the `podman start` or `podman run` command succeeds. While the readiness loop at lines 35-47 will eventually catch failures, adding explicit error handling would provide faster, clearer feedback:
♻️ Suggested improvement
```diff
-podman start sippy-redis 2>/dev/null || \
-  podman run -d --name sippy-redis \
+if ! (podman start sippy-redis 2>/dev/null || \
+  podman run -d --name sippy-redis \
     --network sippy-net \
     -p 6379:6379 \
-    redis:7-alpine
+    redis:7-alpine); then
+  echo "ERROR: Failed to start Redis container." >&2
+  exit 1
+fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/init-services.sh around lines 15 - 19, The Redis startup block calls "podman start sippy-redis" and falls back to "podman run -d --name sippy-redis ..." without checking return codes; add immediate fail-fast checks after each command (check the exit status of podman start and podman run) and if either fails, print a clear error including the failing command and any stderr output and exit non-zero so the script stops early instead of relying only on the later readiness loop; update the podman start / podman run sequence to capture and log errors from those commands before continuing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mcp/server.py`:
- Around line 85-101: The migrate_db function's subprocess.run timeout path
returns before writing the log file so log_path is missing; catch
subprocess.TimeoutExpired in the except block and write the captured output to
log_path before returning (use the exception's stdout/stderr attributes if
available), or switch to streaming output to the file like regression_cache
does; update the except handler around the subprocess.run call (the block that
currently returns "migrate timed out after 600s. log: {log_path}") to write the
combined stdout/stderr to log_path (ensuring trailing newline) and then return
the timeout message.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 3dd6b3ef-bb31-49ad-b36a-dec29b386caf
📒 Files selected for processing (19)
- .cursor/mcp.json
- .cursor/skills/sippy-dev-app/SKILL.md
- .cursor/skills/sippy-dev-frontend/SKILL.md
- .cursor/skills/sippy-dev-migrate/SKILL.md
- .cursor/skills/sippy-dev-regression-cache/SKILL.md
- .cursor/skills/sippy-dev-serve/SKILL.md
- .cursor/skills/sippy-dev-tests/SKILL.md
- .devcontainer/.env.example
- .devcontainer/Dockerfile
- .devcontainer/README.md
- .devcontainer/devcontainer.json
- .devcontainer/init-services.sh
- .devcontainer/post-create.sh
- .gitignore
- AGENTS.md
- DEVELOPMENT.md
- mcp/README.md
- mcp/requirements.txt
- mcp/server.py
✅ Files skipped from review due to trivial changes (11)
- .cursor/mcp.json
- AGENTS.md
- .cursor/skills/sippy-dev-migrate/SKILL.md
- DEVELOPMENT.md
- .gitignore
- .cursor/skills/sippy-dev-frontend/SKILL.md
- mcp/README.md
- .devcontainer/README.md
- .cursor/skills/sippy-dev-app/SKILL.md
- .cursor/skills/sippy-dev-serve/SKILL.md
- .cursor/skills/sippy-dev-regression-cache/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (1)
- .devcontainer/post-create.sh
|
@coderabbitai[bot]: changing LGTM is restricted to collaborators.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Actionable comments posted: 2
♻️ Duplicate comments (2)
.devcontainer/Dockerfile (1)
52-52: ⚠️ Potential issue | 🟠 Major
Pin Claude installer input to avoid non-deterministic builds.
Current install pulls whatever the endpoint serves at build time. Please pass an explicit channel/version (for example `stable` or a fixed version string).
Proposed diff
```diff
+ARG CLAUDE_CODE_VERSION=stable
@@
-RUN curl -fsSL https://claude.ai/install.sh | sh
+RUN curl -fsSL https://claude.ai/install.sh | sh -s -- "${CLAUDE_CODE_VERSION}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/Dockerfile at line 52, The Dockerfile RUN line that executes the remote installer (the command starting with "RUN curl -fsSL https://claude.ai/install.sh | sh") must be made deterministic by pinning the installer channel or version; change the curl URL to include an explicit channel or version (for example append a query or path such as ?channel=stable or /v1.2.3/install.sh) or fetch a specific release artifact and verify it (e.g., download https://claude.ai/install.sh?channel=stable or a fixed-version URL and optionally validate a checksum) so builds no longer depend on whatever the endpoint returns at build time.
mcp/server.py (1)
94-97: ⚠️ Potential issue | 🟡 Minor
Persist timeout output before returning from `migrate_db`.
On timeout, the function returns a log path but does not write timeout/partial output there, which can leave callers with no usable log context.
🛠️ Proposed fix
```diff
-    except subprocess.TimeoutExpired:
-        return f"migrate timed out after 600s. log: {log_path}"
+    except subprocess.TimeoutExpired as e:
+        partial = ((e.stdout or "") + (e.stderr or "")).strip()
+        log_path.write_text(
+            (partial + "\n") if partial else "(timed out with no output)\n",
+            encoding="utf-8",
+        )
+        return f"migrate timed out after 600s. log: {log_path}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp/server.py` around lines 94 - 97, When migrate_db catches subprocess.TimeoutExpired it currently returns without persisting partial output; update the except block to capture the exception (e.g., except subprocess.TimeoutExpired as e), extract any partial output from e.stdout and e.stderr, assemble a string that includes a clear timeout message plus the partial output, write that string to log_path using log_path.write_text(..., encoding="utf-8") (matching the existing newline handling), and then return the timeout message; reference the migrate_db function, the subprocess.TimeoutExpired exception, the log_path variable and e.stdout/e.stderr (or r.stdout/r.stderr semantics) so the timeout log is persisted before returning.
🧹 Nitpick comments (3)
.cursor/skills/sippy-dev-tests/SKILL.md (1)
15-15: Polish wording for the e2e caveat.
Consider replacing “requires nested Podman/Docker” with “requires Podman/Docker-in-Docker support” for clearer phrasing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.cursor/skills/sippy-dev-tests/SKILL.md at line 15, Update the caveat wording for the run_e2e step: find the sentence in SKILL.md that reads “e2e requires nested Podman/Docker and does not work inside the devcontainer — run it on the host instead” (reference: the run_e2e bullet) and replace “requires nested Podman/Docker” with “requires Podman/Docker-in-Docker support” so the full caveat reads clearly that Podman/Docker-in-Docker support is needed and it still does not work inside the devcontainer.
.devcontainer/README.md (1)
10-10: Make the devcontainer CLI install step platform-neutral.
`brew install devcontainer` is macOS-centric; adding an OS-agnostic pointer to official install methods would reduce onboarding friction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/README.md at line 10, Replace the macOS-specific install instruction "brew install devcontainer" with a platform-neutral pointer to the official Dev Containers CLI installation docs (e.g., change the line under "devcontainer CLI" to link to the official install page or add text like "See the official Dev Containers CLI installation guide for platform-specific instructions"). Update the entry that currently reads "devcontainer CLI: `brew install devcontainer`" to reference the generic install doc URL and optionally list OS-agnostic alternatives (package manager doc, installer, or GitHub releases) so onboarding is not macOS-centric.
.devcontainer/devcontainer.json (1)
21-24: Avoid hardcoded DSN/Redis values in tracked config when `--env-file` is already used.
`runArgs` already loads `.devcontainer/.env` (Line 9), so keeping credentials/connection strings in `containerEnv` is redundant and can override local values unexpectedly.
💡 Proposed simplification
```diff
-  "containerEnv": {
-    "SIPPY_DATABASE_DSN": "postgresql://postgres:password@sippy-postgres:5432/postgres",
-    "REDIS_URL": "redis://sippy-redis:6379"
-  },
+  // SIPPY_DATABASE_DSN and REDIS_URL are provided via --env-file (.devcontainer/.env)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/devcontainer.json around lines 21 - 24, The containerEnv block currently hardcodes SIPPY_DATABASE_DSN and REDIS_URL which will override values loaded via runArgs (--env-file .devcontainer/.env); remove the SIPPY_DATABASE_DSN and REDIS_URL entries from the containerEnv object so the container uses the .devcontainer/.env provided by runArgs instead, leaving containerEnv for only non-sensitive defaults if needed and keeping runArgs and .devcontainer/.env as the single source of truth.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.devcontainer/init-services.sh:
- Around line 6-19: The host port mappings expose Postgres and Redis on all
interfaces; update the podman run commands for sippy-postgres and sippy-redis to
bind to loopback only by changing the -p flags to include 127.0.0.1 (e.g.,
replace -p 5432:5432 and -p 6379:6379 with -p 127.0.0.1:5432:5432 and -p
127.0.0.1:6379:6379 respectively) so the containers remain reachable from
localhost but not from external hosts.
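The loopback-only variant the comment describes would look roughly like this (a sketch: container names and port mappings come from the review text, while the image tags and the dry-run wrapper are illustrative assumptions):

```shell
# Sketch: start Postgres and Redis bound to loopback only, so they are
# reachable from localhost but not from external hosts. Image tags are
# illustrative assumptions. `run` echoes each command (dry run).
run() { echo "$@"; }

run podman run -d --name sippy-postgres \
  -e POSTGRES_PASSWORD=password \
  -p 127.0.0.1:5432:5432 \
  postgres:16

run podman run -d --name sippy-redis \
  -p 127.0.0.1:6379:6379 \
  redis:7
```

Dropping the `echo` inside `run` executes the commands for real; as written, the script only prints them.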
In `@mcp/server.py`:
- Around line 275-276: The fallback in sippy_ng_start currently returns any PID
that matches "react-scripts/scripts/start.js" via _pgrep_pids, which can include
processes from other repos; update the logic in _pgrep_pids (or immediately
after its call in sippy_ng_start) to filter returned PIDs by verifying each
PID's working directory or command path matches this repository (e.g., compare
/proc/<pid>/cwd or ps/pwdx output to the repo root or os.getcwd()). Only include
PIDs whose cwd/command path equals the project path so the function returns only
local CRA processes.
---
Duplicate comments:
In @.devcontainer/Dockerfile:
- Line 52: The Dockerfile RUN line that executes the remote installer (the
command starting with "RUN curl -fsSL https://claude.ai/install.sh | sh") must
be made deterministic by pinning the installer channel or version; change the
curl URL to include an explicit channel or version (for example append a query
or path such as ?channel=stable or /v1.2.3/install.sh) or fetch a specific
release artifact and verify it (e.g., download
https://claude.ai/install.sh?channel=stable or a fixed-version URL and
optionally validate a checksum) so builds no longer depend on whatever the
endpoint returns at build time.
In `@mcp/server.py`:
- Around line 94-97: When migrate_db catches subprocess.TimeoutExpired it
currently returns without persisting partial output; update the except block to
capture the exception (e.g., except subprocess.TimeoutExpired as e), extract any
partial output from e.stdout and e.stderr, assemble a string that includes a
clear timeout message plus the partial output, write that string to log_path
using log_path.write_text(..., encoding="utf-8") (matching the existing newline
handling), and then return the timeout message; reference the migrate_db
function, the subprocess.TimeoutExpired exception, the log_path variable and
e.stdout/e.stderr (or r.stdout/r.stderr semantics) so the timeout log is
persisted before returning.
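The persist-partial-output-on-timeout pattern the prompt describes can be sketched like this (a standalone illustration, not the repo's actual `migrate_db`; the function and message format are assumptions):

```python
import subprocess
from pathlib import Path


def run_with_timeout_logged(cmd: list[str], log_path: Path, timeout: float) -> str:
    """Run `cmd`, writing output to `log_path` even when it times out.

    On TimeoutExpired, e.stdout/e.stderr hold whatever partial output was
    captured before the kill (as bytes, even in text mode), so decode and
    persist it before returning the timeout message.
    """
    def _text(chunk) -> str:
        if isinstance(chunk, bytes):
            return chunk.decode("utf-8", errors="replace")
        return chunk or ""

    try:
        r = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired as e:
        msg = f"command timed out after {timeout}s"
        log_path.write_text(msg + "\n" + _text(e.stdout) + _text(e.stderr),
                            encoding="utf-8")
        return msg
    log_path.write_text((r.stdout or "") + (r.stderr or ""), encoding="utf-8")
    return "ok" if r.returncode == 0 else f"exited with code {r.returncode}"
```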
---
Nitpick comments:
In @.cursor/skills/sippy-dev-tests/SKILL.md:
- Line 15: Update the caveat wording for the run_e2e step: find the sentence in
SKILL.md that reads “e2e requires nested Podman/Docker and does not work inside
the devcontainer — run it on the host instead” (reference: the run_e2e bullet)
and replace “requires nested Podman/Docker” with “requires
Podman/Docker-in-Docker support” so the full caveat reads clearly that
Podman/Docker-in-Docker support is needed and it still does not work inside the
devcontainer.
In @.devcontainer/devcontainer.json:
- Around line 21-24: The containerEnv block currently hardcodes
SIPPY_DATABASE_DSN and REDIS_URL which will override values loaded via runArgs
(--env-file .devcontainer/.env); remove the SIPPY_DATABASE_DSN and REDIS_URL
entries from the containerEnv object so the container uses the
.devcontainer/.env provided by runArgs instead, leaving containerEnv for only
non-sensitive defaults if needed and keeping runArgs and .devcontainer/.env as
the single source of truth.
In @.devcontainer/README.md:
- Line 10: Replace the macOS-specific install instruction "brew install
devcontainer" with a platform-neutral pointer to the official Dev Containers CLI
installation docs (e.g., change the line under "devcontainer CLI" to link to the
official install page or add text like "See the official Dev Containers CLI
installation guide for platform-specific instructions"). Update the entry that
currently reads "devcontainer CLI: `brew install devcontainer`" to reference the
generic install doc URL and optionally list OS-agnostic alternatives (package
manager doc, installer, or GitHub releases) so onboarding is not macOS-centric.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: e7c604a5-9ad2-4b3d-a68c-0044908d8aa1
📒 Files selected for processing (19)
- .cursor/mcp.json
- .cursor/skills/sippy-dev-app/SKILL.md
- .cursor/skills/sippy-dev-frontend/SKILL.md
- .cursor/skills/sippy-dev-migrate/SKILL.md
- .cursor/skills/sippy-dev-regression-cache/SKILL.md
- .cursor/skills/sippy-dev-serve/SKILL.md
- .cursor/skills/sippy-dev-tests/SKILL.md
- .devcontainer/.env.example
- .devcontainer/Dockerfile
- .devcontainer/README.md
- .devcontainer/devcontainer.json
- .devcontainer/init-services.sh
- .devcontainer/post-create.sh
- .gitignore
- AGENTS.md
- DEVELOPMENT.md
- mcp/README.md
- mcp/requirements.txt
- mcp/server.py
|
@coderabbitai[bot]: changing LGTM is restricted to collaborators
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
Actionable comments posted: 1
♻️ Duplicate comments (1)
.devcontainer/Dockerfile (1)
52-54: ⚠️ Potential issue | 🟠 Major
`CLAUDE_INSTALLER_VERSION` still doesn't pin the installed release. The build arg never reaches the installer, so rebuilds still consume whatever release the script serves that day. Please pass the chosen version/channel through so the image is reproducible.
Suggested fix
```diff
 ARG CLAUDE_INSTALLER_VERSION=latest
-RUN curl -fsSL https://claude.ai/install.sh | sh
+RUN curl -fsSL https://claude.ai/install.sh | sh -s -- "${CLAUDE_INSTALLER_VERSION}"
```
Does the official Claude Code install script support selecting a release via `sh -s -- <version>` or a channel like `stable`, and what is the documented syntax?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.devcontainer/Dockerfile around lines 52 - 54, The Dockerfile declares ARG CLAUDE_INSTALLER_VERSION but never passes it to the installer, so make the build reproducible by forwarding that argument into the install script invocation: keep ARG CLAUDE_INSTALLER_VERSION (default "latest") and change the RUN that pipes curl into sh to call the script with sh -s -- "$CLAUDE_INSTALLER_VERSION" (or the documented installer syntax if different) so the chosen version/channel is actually supplied to the installer.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mcp/server.py`:
- Around line 50-74: _resolve_bigquery_creds currently forces both explicit and
env-derived credential paths through _repo_path, which blocks legitimate
external credential locations; change it so only paths meant to be repo-local
use _repo_path. Specifically, for the explicit parameter and the env keys
("SIPPY_BIGQUERY_CREDENTIALS_FILE", "GOOGLE_APPLICATION_CREDENTIALS") skip
calling _repo_path and treat them as normal filesystem paths (expand user/home
if needed) when checking .is_file(); leave _repo_path usage only for values that
must be confined to the repo. Ensure error messages still return the original
path and clear context when file not found or when explicit is malformed.
---
Duplicate comments:
In @.devcontainer/Dockerfile:
- Around line 52-54: The Dockerfile declares ARG CLAUDE_INSTALLER_VERSION but
never passes it to the installer, so make the build reproducible by forwarding
that argument into the install script invocation: keep ARG
CLAUDE_INSTALLER_VERSION (default "latest") and change the RUN that pipes curl
into sh to call the script with sh -s -- "$CLAUDE_INSTALLER_VERSION" (or the
documented installer syntax if different) so the chosen version/channel is
actually supplied to the installer.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 93cf168c-86ca-4c9a-83a0-3017e9002e98
📒 Files selected for processing (5)
- .cursor/skills/sippy-dev-tests/SKILL.md
- .devcontainer/Dockerfile
- .devcontainer/README.md
- .devcontainer/init-services.sh
- mcp/server.py
✅ Files skipped from review due to trivial changes (1)
- .cursor/skills/sippy-dev-tests/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (1)
- .devcontainer/README.md
|
Actionable comments posted: 1
♻️ Duplicate comments (1)
mcp/server.py (1)
52-69: ⚠️ Potential issue | 🟠 Major
Stop repo-confining BigQuery credential inputs.
`_resolve_bigquery_creds()` still routes explicit and env-derived credential paths through `_repo_path()`, so common locations like `~/.config/...` or `/var/run/secrets/...` are rejected. That unnecessarily breaks `regression_cache()`, `sippy_serve()`, and `run_e2e()` unless the secret is copied into the repo.
Possible fix
```diff
 def _resolve_bigquery_creds(explicit: str | None) -> tuple[Path | None, str | None]:
     if explicit:
-        try:
-            p = _repo_path(explicit)
-        except ValueError as e:
-            return None, str(e)
+        p = Path(explicit).expanduser().resolve()
         if not p.is_file():
             return None, f"BigQuery credentials file not found: {p}"
         return p, None
     for key in ("SIPPY_BIGQUERY_CREDENTIALS_FILE", "GOOGLE_APPLICATION_CREDENTIALS"):
         v = os.environ.get(key)
         if v:
-            try:
-                p = _repo_path(v)
-            except ValueError as e:
-                return None, f"{key}: {e}"
+            p = Path(v).expanduser().resolve()
             if p.is_file():
                 return p, None
             return None, f"{key} is set but file not found: {p}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp/server.py` around lines 52 - 69, The function _resolve_bigquery_creds wrongly forces both explicit and environment-sourced paths through _repo_path(), rejecting valid external locations and breaking callers like regression_cache, sippy_serve, and run_e2e; change it to accept raw filesystem paths for explicit and env values instead of calling _repo_path() — for example, for explicit (explicit param) convert to a Path, expanduser()/resolve(), check .is_file() and return (Path, None) or (None, error) and do the same for environment keys ("SIPPY_BIGQUERY_CREDENTIALS_FILE","GOOGLE_APPLICATION_CREDENTIALS") so only existence is validated (and keep the existing error messages format) while leaving _repo_path() usage only where repo-confinement is actually required.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mcp/server.py`:
- Around line 507-522: The make subprocess must be started in its own process
group and detached from MCP stdin so children are killed on timeout; replace the
subprocess.run(...) in this block with subprocess.Popen(...) using
stdin=subprocess.DEVNULL and platform-appropriate group creation
(preexec_fn=os.setsid on POSIX or creationflags=CREATE_NEW_PROCESS_GROUP on
Windows), wait with Popen.wait(timeout=tout) and on TimeoutExpired kill the
whole group (os.killpg(proc.pid, signal.SIGTERM)/signal.CTRL_BREAK_EVENT then
os.killpg(..., SIGKILL) as needed), close the process and then call
_tail_file(log_path, 80) and return the same timeout message (use the existing
symbols make_target, REPO_ROOT, run_env, tool_label, timeout_seconds, tout,
log_path and function _tail_file).
---
Duplicate comments:
In `@mcp/server.py`:
- Around line 52-69: The function _resolve_bigquery_creds wrongly forces both
explicit and environment-sourced paths through _repo_path(), rejecting valid
external locations and breaking callers like regression_cache, sippy_serve, and
run_e2e; change it to accept raw filesystem paths for explicit and env values
instead of calling _repo_path() — for example, for explicit (explicit param)
convert to a Path, expanduser()/resolve(), check .is_file() and return (Path,
None) or (None, error) and do the same for environment keys
("SIPPY_BIGQUERY_CREDENTIALS_FILE","GOOGLE_APPLICATION_CREDENTIALS") so only
existence is validated (and keep the existing error messages format) while
leaving _repo_path() usage only where repo-confinement is actually required.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: e35a55d9-2f19-4a84-8ca0-83c67a57f1c2
📒 Files selected for processing (1)
mcp/server.py
|
|
Scheduling required tests: |
This makes it difficult for my work for |
Hm, this is only available to cursor. If we merge #3472 and you use .apm/ we can make the rules available to everyone including Claude Code
I am skeptical about the value of an MCP server over just telling the coding assistants how to do this, but I'm willing to try it.
Some of the MCP commands are overly simplistic. I thought they might get more complex as we went on and supported other workflows. The real goal was to have a single place to define the skills though that many AI platforms would use. I think your APM stuff satisfies that, and we can drop the simple ones.
|
/hold cancel |
|
sosiouxme left a comment
Just passing on the highlights from /code-review:pr:
- Zero test coverage for all new source code
- No test framework infrastructure for Python in the repository
Perhaps out of the scope of this PR :)
3. DRY Compliance
High Priority Duplication
1. Server Credentials Resolution (repeated 3 times: lines 98-103, 291-296, 511-513)
# This pattern appears in regression_cache, sippy_serve, run_e2e:
creds_path, err = _resolve_bigquery_creds(bigquery_credentials_file)
if err:
return err
dsn = database_dsn or _default_database_dsn()
redis = redis_url or os.environ.get("REDIS_URL", "redis://localhost:6379")
Impact: Changes to credential resolution require 3 edits
Recommendation: Extract to _resolve_server_config() helper
2. Process Startup Validation (repeated 2 times: lines 367-375, 430-437)
- Identical post-spawn validation in sippy_serve and sippy_ng_start
- Magic numbers: 0.75 (startup delay), 4000 (tail length)
Recommendation: Extract to _validate_process_started() helper
(maybe we don't need to chase individual items like this, but can improve prompting to try to avoid redundancy / be aware of DRY?)
4. SOLID Principles
1. Single Responsibility Principle (SRP) - Medium
- regression_cache() function does 6 things: validation, path resolution, arg building, subprocess
execution, timeout handling, log formatting
- Similar pattern in sippy_serve() and sippy_ng_start()
2. Open/Closed Principle (OCP) - High
- Adding new tools requires copying 80+ lines and modifying validation logic
- No abstraction for the common "dev command execution" pattern
- All 4 tool functions follow nearly identical patterns
Recommendation: Create base execution pattern or command class
3. Interface Segregation Principle (ISP) - Medium
- regression_cache() has 8 parameters
- Most callers will use defaults, hard to remember parameter order
4. Dependency Inversion Principle (DIP) - Medium
- Hard-coded paths: "config/views.yaml", "./cmd/sippy"
- Hard-coded binaries: "stdbuf", "pgrep", "lsof"
- Cannot test with mock commands
again not sure these really need attention individually, but probably ought to be part of our dev harness. the redundancy in the tool functions at least sounds worthy of attention here.
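As an illustration of the "command class" idea (all names here are hypothetical, not from `mcp/server.py`): each tool becomes a data object plus one shared runner, so adding a tool no longer means copying 80 lines.

```python
import os
import subprocess
from dataclasses import dataclass, field


@dataclass
class DevCommand:
    """One tool = one description: name, argv, timeout, extra env."""
    name: str
    argv: list[str]
    timeout: float = 600.0
    env: dict[str, str] = field(default_factory=dict)


def run_dev_command(cmd: DevCommand) -> str:
    """Shared runner: every tool funnels through the same execution,
    timeout, and reporting path."""
    r = subprocess.run(
        cmd.argv,
        env={**os.environ, **cmd.env},
        capture_output=True,
        text=True,
        timeout=cmd.timeout,
    )
    return "ok" if r.returncode == 0 else f"{cmd.name} failed: exit {r.returncode}"
```

Injecting the binary paths via `DevCommand.argv` also addresses the DIP point: tests can swap in a mock command.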
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
|
This overall worked really well for me, nice addition |
|
/lgtm It'd be great to see a demo of this during a team meeting |
|
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: coderabbitai[bot], smg247, stbenjam. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
|
@smg247: all tests passed! Full PR test history. Your PR dashboard.
Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
This isn't meant to be perfect, but it gives us a basic dev container setup and MCP server to iterate on. The main, glaring issue right now is the lack of ability to run `lint` and `e2e` from inside the dev container. I played around with nested podman, but that had some snags that I didn't want to spend additional time working through right now. I think rewriting the e2e script to be used without podman is probably the proper approach. Lint actually can work fine within the container by simply passing `CI=true`.

Summary by CodeRabbit
New Features
Documentation
Chores