CONSOLE-5212: Set up Playwright e2e test infrastructure #16320
stefanonardo wants to merge 2 commits into openshift:main from
Conversation
The hoisted @types/js-yaml v4 (from @kubernetes/client-node) removed safeLoad/safeDump/safeLoadAll from the type definitions, causing TS2305 errors even though the runtime js-yaml is still v3. Pin @types/js-yaml to ^3.12.7 and add type assertions to all safeLoad/safeDump call sites to match the patterns from OCPBUGS-78980. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
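The pin described in the commit message would typically land in the frontend package.json as a devDependency plus a Yarn resolution that overrides the hoisted copy. A sketch (the surrounding file contents and exact file location are assumptions, not taken from the PR):

```json
{
  "devDependencies": {
    "@types/js-yaml": "^3.12.7"
  },
  "resolutions": {
    "@types/js-yaml": "^3.12.7"
  }
}
```

The `resolutions` entry is what prevents `@kubernetes/client-node` from re-hoisting the v4 type definitions.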
@stefanonardo: This pull request references CONSOLE-5212 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.

Details

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: stefanonardo. The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
📝 Walkthrough

Adds a Playwright end-to-end test framework for the frontend: new Playwright config, global setup/teardown, worker and per-test fixtures (including a Kubernetes client and a cleanup fixture), a BasePage helper, and a smoke test. Introduces a Kubernetes e2e client for token/kubeconfig handling and resource/namespace lifecycle management, plus an …

Important: Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 warning, 2 inconclusive)
✅ Passed checks (9 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
frontend/packages/knative-plugin/src/utils/create-channel-utils.ts (1)
170-174: ⚠️ Potential issue | 🟠 Major: Add error handling for YAML parsing and complete optional chaining in channel config access

The `safeLoad()` call can return a string or undefined (not just objects), and the type assertion masks this. Additionally, the property access chain is incomplete: `cfg?.namespaceDefaults[namespace].kind` and `cfg?.clusterDefault.kind` lack optional chaining on the nested properties, which will throw if those keys are missing. Finally, `safeLoad()` throws a `YAMLException` on malformed YAML and is currently unhandled in this hook path, which would crash the `ChannelForm` component.

Suggested fix

```diff
-  const cfg = safeLoad(configMap.data?.['default-ch-config']) as Record<string, any>;
+  let cfg: Record<string, any> = {};
+  try {
+    const parsed = safeLoad(configMap.data?.['default-ch-config']);
+    cfg = _.isPlainObject(parsed) ? (parsed as Record<string, any>) : {};
+  } catch {
+    cfg = {};
+  }
   defaultConfiguredChannel = _.hasIn(cfg?.namespaceDefaults, namespace)
-    ? cfg?.namespaceDefaults[namespace].kind
-    : cfg?.clusterDefault.kind;
+    ? cfg?.namespaceDefaults?.[namespace]?.kind ?? EVENTING_IMC_KIND
+    : cfg?.clusterDefault?.kind ?? EVENTING_IMC_KIND;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/knative-plugin/src/utils/create-channel-utils.ts` around lines 170 - 174: handle YAML parse errors from safeLoad and avoid unsafe property access: wrap the safeLoad(configMap.data?.['default-ch-config']) call in a try/catch and on error log/return early so ChannelForm won't crash; assert the result is an object (e.g., check typeof cfg === 'object' && cfg !== null) before using it; replace direct accesses cfg?.namespaceDefaults[namespace].kind and cfg?.clusterDefault.kind with fully guarded access (e.g., cfg?.namespaceDefaults?.[namespace]?.kind and cfg?.clusterDefault?.kind guarded by cfg?.clusterDefault) and provide a safe fallback if neither exists before assigning defaultConfiguredChannel. Ensure these changes are made in create-channel-utils.ts where safeLoad, cfg, and defaultConfiguredChannel are used.

frontend/packages/helm-plugin/src/components/forms/HelmChartRepository/CreateHelmChartRepository.tsx (1)
92-101: ⚠️ Potential issue | 🟠 Major: Guard YAML result before using it as a Kubernetes resource object.

A syntactically valid non-object YAML document can pass parse and then fail later when building `referenceFor(...)` / `modelFor(...)`.

Suggested validation guard

```diff
   if (values.editorType === EditorType.YAML) {
     try {
-      HelmChartRepositoryRes = safeLoad(values.yamlData) as HelmChartRepositoryType;
+      const parsed = safeLoad(values.yamlData);
+      const isObject = parsed && typeof parsed === 'object' && !Array.isArray(parsed);
+      if (!isObject) {
+        actions.setStatus({
+          submitSuccess: '',
+          submitError: t('helm-plugin~Invalid YAML - expected a Kubernetes resource object.'),
+        });
+        return null;
+      }
+      HelmChartRepositoryRes = parsed as HelmChartRepositoryType;
       if (
         HelmChartRepositoryRes &&
         HelmChartRepositoryRes.kind === 'ProjectHelmChartRepository' &&
         !HelmChartRepositoryRes?.metadata?.namespace
       ) {
+        HelmChartRepositoryRes.metadata = HelmChartRepositoryRes.metadata ?? ({} as any);
         HelmChartRepositoryRes.metadata.namespace = namespace;
       }
     } catch (err) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/helm-plugin/src/components/forms/HelmChartRepository/CreateHelmChartRepository.tsx` around lines 92 - 101: The parsed YAML (safeLoad) may yield non-object values which later break calls like referenceFor(...) / modelFor(...); update the block that assigns HelmChartRepositoryRes (from safeLoad) to first verify it's a plain object (e.g., typeof === 'object' && HelmChartRepositoryRes !== null && !Array.isArray(HelmChartRepositoryRes)) and that required resource fields exist (kind and metadata) before mutating or passing it to referenceFor/modelFor; if the guard fails, handle it by returning/throwing a validation error or setting form errors so downstream code (including the namespace assignment and any referenceFor/modelFor calls) never operates on non-object YAML.

frontend/packages/knative-plugin/src/components/add/brokers/AddBroker.tsx (1)
37-67: ⚠️ Potential issue | 🔴 Critical: Fix `createResources` return type inconsistency blocking error handling on invalid YAML

Line 40 declares return type `Promise<K8sResourceKind | null>`, but line 52 returns raw `null` instead of a Promise. This breaks the `.then()` chain in `handleSubmit` (line 62) when YAML validation fails, preventing proper error handling.

Update the catch block to return `Promise.resolve(null)` and add a guard in `handleSubmit` to skip the redirect when creation fails:

Suggested fix

```diff
     } catch (err) {
       actions.setStatus({ submitError: `Invalid YAML - ${err}` });
-      return null;
+      return Promise.resolve(null);
     }
   }
   return k8sCreate(EventingBrokerModel, broker);
 };

 const handleSubmit = (
   values: AddBrokerFormYamlValues,
   actions: FormikHelpers<AddBrokerFormYamlValues>,
 ) => {
   return createResources(values, actions)
-    .then(() => {
+    .then((createdBroker) => {
+      if (!createdBroker) {
+        return;
+      }
       handleRedirect(values.formData.project.name, perspective, perspectiveExtension, navigate);
     })
     .catch((err) => {
       actions.setStatus({ submitError: err.message });
     });
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/knative-plugin/src/components/add/brokers/AddBroker.tsx` around lines 37 - 67, The createResources function currently returns raw null on YAML parse failure which breaks the Promise chain; update the catch in createResources (where safeLoad and actions.setStatus are used) to return Promise.resolve(null) so the function always returns a Promise<K8sResourceKind | null>, and then add a null-check in handleSubmit after the createResources promise resolves (i.e., skip calling handleRedirect when the resolved value is null) to prevent redirecting on failed creation.
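The three outside-diff findings above all repeat the same mapping-shape check after `safeLoad`. A shared guard could centralize the pattern; a minimal sketch (`asPlainObject` is a hypothetical helper name, not part of this codebase):

```typescript
// Hypothetical helper: narrow an unknown YAML parse result to a plain object.
// Scalars, arrays, null, and undefined (all valid YAML documents) map to undefined.
function asPlainObject(parsed: unknown): Record<string, unknown> | undefined {
  if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
    return parsed as Record<string, unknown>;
  }
  return undefined;
}

// Call sites would then read, e.g.:
//   const cfg = asPlainObject(safeLoad(configMap.data?.['default-ch-config'])) ?? {};
```

Each call site then decides its own fallback (empty object, form error, early return) instead of re-implementing the shape check.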
🧹 Nitpick comments (12)
frontend/public/components/monitoring/receiver-forms/alert-manager-receiver-forms.tsx (1)
546-546: Type-safe YAML parse: guard shape before destructuring

The `as Record<string, any>` assertion on line 546 skips shape validation before destructuring `global`. While the try/catch handles parse errors and the Alertmanager API contract guarantees valid YAML at the root level, adding a runtime guard would improve type safety and align with defensive parsing patterns, especially valuable if the `originalAlertmanagerConfigJSON` source ever changes.

The current code is safe (undefined handling downstream is correct), but consider:

```diff
-  const { global } = safeLoad(originalAlertmanagerConfigJSON) as Record<string, any>;
-  setAlertmanagerGlobals(global);
+  const parsed = safeLoad(originalAlertmanagerConfigJSON);
+  const global =
+    typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)
+      ? (parsed as Record<string, unknown>).global
+      : undefined;
+  setAlertmanagerGlobals(global);
```

This ensures only mapping-shaped YAML is destructured and avoids relying solely on the API contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/public/components/monitoring/receiver-forms/alert-manager-receiver-forms.tsx` at line 546: The code currently does an unchecked cast when parsing YAML: safeLoad(originalAlertmanagerConfigJSON) as Record<string, any> and then destructures { global }, which bypasses runtime shape validation; update the parsing to verify the result is an object/map before destructuring — call safeLoad(originalAlertmanagerConfigJSON), check that the returned value is a non-null object (e.g., typeof === "object" && !Array.isArray(result)), then extract const { global } = result as Record<string, any> (or default to an empty object) so you avoid unsafe casting; adjust any downstream logic that expects undefined accordingly.

frontend/packages/helm-plugin/src/components/forms/url-chart/HelmURLChartInstallPage.tsx (1)
137-145: Validate parsed YAML shape before submit payload construction.

`values` should be a mapping object. Valid YAML scalars/sequences can pass parse and fail later downstream.

Proposed hardening

```diff
   } else if (yamlData) {
     try {
-      valuesObj = safeLoad(yamlData) as Record<string, unknown>;
+      const parsed = safeLoad(yamlData);
+      if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
+        actions.setStatus({
+          submitError: t('helm-plugin~Invalid YAML - values must be a key/value map.'),
+        });
+        return;
+      }
+      valuesObj = parsed as Record<string, unknown>;
     } catch (err) {
       actions.setStatus({
         submitError: t('helm-plugin~Invalid YAML - {{errorText}}', { errorText: err.toString() }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/helm-plugin/src/components/forms/url-chart/HelmURLChartInstallPage.tsx` around lines 137 - 145: The parsed YAML (via safeLoad in HelmURLChartInstallPage) may produce non-mapping types (scalars/arrays/null) that pass parsing but break downstream; after safeLoad assigns valuesObj, validate that valuesObj is a non-null plain object (not an array) before proceeding to construct the submit payload, and if it is not, call actions.setStatus with a clear submitError (e.g., "Invalid YAML - values must be a mapping/object") and return; reference the safeLoad call, the valuesObj variable, and the existing actions.setStatus error path to integrate the check.

frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmInstallUpgradePage.tsx (1)
168-176: Validate YAML values are an object map before sending the request.

This path currently accepts scalar/array YAML and only fails later at API time.

Proposed validation

```diff
   } else if (yamlData) {
     try {
-      valuesObj = safeLoad(yamlData) as any;
+      const parsed = safeLoad(yamlData);
+      if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
+        actions.setStatus({
+          submitError: t('helm-plugin~Invalid YAML - values must be a key/value map.'),
+        });
+        return Promise.resolve();
+      }
+      valuesObj = parsed;
     } catch (err) {
       actions.setStatus({
         submitError: t('helm-plugin~Invalid YAML - {{errorText}}', { errorText: err.toString() }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmInstallUpgradePage.tsx` around lines 168 - 176: The YAML parsing path in HelmInstallUpgradePage.tsx currently accepts non-object YAML (scalars/arrays) which later causes API errors; after safeLoad(yamlData) (the valuesObj variable) validate that valuesObj is a non-null plain object and not an array (e.g., typeof valuesObj === 'object' && valuesObj !== null && !Array.isArray(valuesObj)); if validation fails call actions.setStatus with a submitError (e.g., t('helm-plugin~Values must be a YAML map/object')) and return Promise.resolve() to stop submission; update the block that handles safeLoad(yamlData) to include this check so only object maps are sent to the API.

frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmChartVersionDropdown.tsx (1)
125-133: Return early after index fetch/parse failure to avoid undefined-state propagation.

After the `catch`, execution continues and may apply updates based on undefined `json`.

Safer failure handling

```diff
   try {
     const response = await coFetch(`/api/helm/charts/index.yaml?namespace=${namespace}`);
     const yaml = await response.text();
     json = safeLoad(yaml) as { entries: HelmChartEntries };
   } catch {
-    if (ignore) return;
+    if (!ignore) {
+      setHelmChartRepos({});
+      setHelmChartEntries([]);
+      setHelmChartVersions({});
+    }
+    return;
   }
   if (ignore) return;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmChartVersionDropdown.tsx` around lines 125 - 133: The fetch/parse catch block can leave `json` undefined and execution continues into `getChartEntriesByName`, so update the `HelmChartVersionDropdown` logic to return immediately on failure: inside the try/catch that loads `/api/helm/charts/index.yaml` ensure the catch either rethrows or sets a failure path that returns when `ignore` is false (or simply return early from the function/component when parse fails), so that code using `json` (used by getChartEntriesByName) never runs with an undefined value; adjust the existing `catch { if (ignore) return; }` to also `return` (or handle the error path) when not ignoring.

frontend/packages/console-shared/src/components/editor/yaml-download-utils.ts (1)
9-12: Add explicit metadata guards for filename derivation.

Avoid relying on exception flow when parsed YAML has `kind` but missing `metadata.name`.

Small robustness tweak

```diff
   try {
     const obj = safeLoad(String(data)) as K8sResourceKind;
-    if (obj.kind) {
+    if (obj?.kind && obj?.metadata?.name) {
       filename = `${obj.kind.toLowerCase()}-${obj.metadata.name}.yaml`;
     }
   } catch (e) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/packages/console-shared/src/components/editor/yaml-download-utils.ts` around lines 9 - 12: The filename derivation currently assumes obj.metadata.name exists after safeLoad (in yaml-download-utils.ts); add explicit guards before using obj.metadata.name — check that obj.metadata is an object and obj.metadata.name is a non-empty string (e.g., typeof === 'string' && obj.metadata.name.trim()), and only then set filename = `${obj.kind.toLowerCase()}-${obj.metadata.name}.yaml`; otherwise fall back to a safe default filename (e.g., `${obj.kind.toLowerCase() || 'resource'}-unnamed.yaml` or include a timestamp) so no exception is thrown when metadata or name is missing.

frontend/e2e/global.setup.ts (2)
101-109: Consider: setup continues after namespace creation failure.

If namespace creation fails (lines 107-109), setup continues but the test namespace won't exist. Tests depending on this namespace will fail with confusing errors.
Consider either:
- Making namespace creation fatal (throw)
- Setting a flag that tests can check to skip gracefully
Current behavior may be intentional for local no-auth mode, but the warning could be clearer about implications.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/global.setup.ts` around lines 101 - 109, The namespace creation catch currently only warns and allows setup to continue, which leaves tests running against a non-existent testNamespace; update the error handling in the block that calls k8sClient.createNamespace and k8sClient.waitForNamespaceReady so it either rethrows the caught error (making namespace creation fatal) or sets a clearly named flag (e.g., namespaceCreationFailed) on the setup context that tests can check to skip/abort, and make the warning message explicit about the implications (include testNamespace and that dependent tests will be skipped/failed); modify the code around k8sClient.createNamespace, waitForNamespaceReady and the catch to implement one of these two behaviors and ensure downstream test logic checks the new flag if you choose the skip approach.
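The fatal-versus-warn choice above could also be factored into a small setup helper so each step declares whether its failure should abort the run. A sketch under assumed names (`requireStep` is not in the PR; the usage line references the PR's `k8sClient`/`testNamespace`):

```typescript
// Run a setup step; throw on failure unless the step is marked optional.
// Optional failures are logged so their implications stay visible in CI output.
async function requireStep<T>(
  label: string,
  step: () => Promise<T>,
  optional = false,
): Promise<T | undefined> {
  try {
    return await step();
  } catch (err) {
    if (optional) {
      console.warn(`[global.setup] ${label} failed; dependent tests may be skipped:`, err);
      return undefined;
    }
    throw new Error(`[global.setup] ${label} failed: ${err}`);
  }
}

// Usage idea:
//   await requireStep('create test namespace', () => k8sClient.createNamespace(testNamespace));
```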
119-125: Security note: the auth token written to `.test-config.json` should be handled carefully.

The `authToken` is written to a JSON file on disk. While the file is gitignored, ensure:
- File permissions are restrictive (consider `fs.writeFileSync` with mode `0o600`)
- The token is short-lived (OAuth tokens typically are)
- CI artifacts don't inadvertently capture this file
🔒 Suggested: Set restrictive file permissions
```diff
   fs.mkdirSync(path.dirname(CONFIG_FILE), { recursive: true });
-  fs.writeFileSync(CONFIG_FILE, JSON.stringify(testConfig, null, 2));
+  fs.writeFileSync(CONFIG_FILE, JSON.stringify(testConfig, null, 2), { mode: 0o600 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/global.setup.ts` around lines 119 - 125: The test auth token is being written to disk insecurely; update the write path around CONFIG_FILE and testConfig so the file is created with restrictive permissions and not exposed in CI artifacts: when creating the directory and writing the file (see fs.mkdirSync, fs.writeFileSync and CONFIG_FILE) ensure the file is written with mode 0o600 (owner read/write only) and/or call fs.chmodSync(CONFIG_FILE, 0o600) immediately after write, ensure the file remains gitignored, and add CI cleanup steps to delete CONFIG_FILE after tests finish to avoid leaving tokens in artifacts.

frontend/e2e/pages/base-page.ts (2)
91-96: Minor redundancy: `navigateToTab` duplicates waits performed inside `robustClick`.

`robustClick` already calls `waitForLoadingComplete` and `locator.waitFor({ state: 'visible' })`. The explicit waits at lines 92-93 add latency without benefit. Consider simplifying if performance becomes a concern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/pages/base-page.ts` around lines 91 - 96, navigateToTab currently duplicates waiting logic that robustClick already performs: remove the pre-click waits (the call to waitForLoadingComplete(Math.min(...)) and the explicit locator.waitFor) inside navigateToTab and simply call robustClick(locator) followed by the existing post-click waitForLoadingComplete(); rely on robustClick's internal calls to waitForLoadingComplete and locator.waitFor to avoid added latency while preserving stability (refer to the navigateToTab and robustClick methods and the waitForLoadingComplete and locator.waitFor calls).
70-79: Minor: type-narrow the thrown value before accessing `.message`.

`clickError` could be a non-Error thrown value (string, undefined, etc.). The optional chaining helps, but a safer pattern would be:

🛡️ Suggested defensive typing

```diff
   } catch (clickError) {
+    const errorMsg = clickError instanceof Error ? clickError.message : String(clickError);
     if (
       attempt < retries &&
-      (clickError.message?.includes('intercept') ||
-        clickError.message?.includes('not visible'))
+      (errorMsg.includes('intercept') || errorMsg.includes('not visible'))
     ) {
       await locator.click({ force: true, timeout: attemptTimeout });
       return;
     }
     throw clickError;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/pages/base-page.ts` around lines 70 - 79: The catch block reads clickError.message but clickError may not be an Error, so first narrow/coerce the thrown value before accessing message; inside the catch for the locator click (referencing clickError, locator.click, attempt, retries, attemptTimeout) compute a safe message string (e.g., use clickError instanceof Error ? clickError.message : String(clickError) or check typeof/canAccess 'message') and use that safe string in the includes checks, then proceed with the existing retry/force-click logic; ensure you don't assume clickError has a message property before reading it.

frontend/e2e/clients/kubernetes-client.ts (2)
28-41: `pollUntil` signature requires `intervalMs` but callers may forget it.

The function requires three parameters with no default for `intervalMs`. This is fine, but consider adding a default (e.g., `intervalMs = 1000`) for convenience, as polling intervals are often similar across use cases.

♻️ Optional: add default interval

```diff
 async function pollUntil(
   condition: () => Promise<boolean>,
   timeoutMs: number,
-  intervalMs: number,
+  intervalMs = 1000,
 ): Promise<boolean> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/clients/kubernetes-client.ts` around lines 28 - 41, The pollUntil function requires an explicit intervalMs which callers may forget; update the async function pollUntil(condition: () => Promise<boolean>, timeoutMs: number, intervalMs: number) to provide a sensible default (e.g. intervalMs = 1000) in its signature and ensure callers can omit the third argument; keep the existing behavior otherwise so the loop still uses intervalMs for the setTimeout and the return semantics remain unchanged.
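For reference, the defaulted signature could be exercised end-to-end as below. This is a sketch: only the signature comes from the review; the loop body is a plausible guess at the implementation, not copied from the PR.

```typescript
// Poll `condition` until it returns true or `timeoutMs` elapses.
// `intervalMs` defaults to 1s so most callers can omit it.
async function pollUntil(
  condition: () => Promise<boolean>,
  timeoutMs: number,
  intervalMs = 1000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) {
      return true;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}

// Usage idea: await pollUntil(() => client.namespaceExists(ns), 30_000);
```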
209-214: Kubeconfig file contains a sensitive token; consider restricting permissions.

The generated kubeconfig is written with default permissions, which may allow other users on shared systems to read the OAuth token. For defense-in-depth, set restrictive file permissions.
🔒 Proposed fix to restrict file permissions
```diff
   if (!fs.existsSync(dir)) {
     fs.mkdirSync(dir, { recursive: true });
   }
-  fs.writeFileSync(outputPath, kubeconfigYaml, 'utf8');
+  fs.writeFileSync(outputPath, kubeconfigYaml, { encoding: 'utf8', mode: 0o600 });
   return outputPath;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/clients/kubernetes-client.ts` around lines 209 - 214: The kubeconfig is written with default permissions; update the logic around outputPath/kubeconfigYaml to create the containing directory with restrictive mode (e.g., 0o700) and write the kubeconfig file with restrictive mode (e.g., 0o600) so the OAuth token is not world-readable; modify the mkdirSync call for dir to include { recursive: true, mode: 0o700 } and change the fs.writeFileSync call to set file mode (e.g., fs.writeFileSync(outputPath, kubeconfigYaml, { encoding: 'utf8', mode: 0o600 })) or, if the writeFileSync signature in your runtime doesn't accept options, call fs.chmodSync(outputPath, 0o600) immediately after writing. Ensure these changes are applied where outputPath, kubeconfigYaml, dir, fs.mkdirSync and fs.writeFileSync are used.

frontend/e2e/fixtures/index.ts (1)
45-56: Consider making `k8sClient` worker-scoped for efficiency.

The `k8sClient` fixture currently uses the default test scope, meaning a new `KubernetesClient` instance is created for every test. Since the client performs kubeconfig loading and API client initialization, promoting this to worker scope (like `testConfig`) would reduce overhead when running multiple tests in parallel workers.

♻️ Proposed refactor to worker scope

```diff
-  k8sClient: async ({ testConfig }, use) => {
+  k8sClient: [
+    async ({ testConfig }, use) => {
       const client = new KubernetesClient(
         {
           clusterUrl: process.env.CLUSTER_URL || '',
           username: process.env.OPENSHIFT_USERNAME || 'kubeadmin',
           password: process.env.BRIDGE_KUBEADMIN_PASSWORD || '',
           token: testConfig.authToken,
         },
         testConfig.kubeConfigPath,
       );
       await use(client);
     },
+    { scope: 'worker' },
+  ],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/fixtures/index.ts` around lines 45 - 56, The k8sClient fixture creates a new KubernetesClient per test and should be promoted to worker scope to reuse the heavy kubeconfig/API init; change the fixture declaration for k8sClient from a single async function to an array form with options e.g. k8sClient: [ async ({ testConfig }, use) => { const client = new KubernetesClient(..., testConfig.kubeConfigPath); await use(client); }, { scope: 'worker' } ], ensuring you keep the same dependency on the worker-scoped testConfig fixture and the KubernetesClient instantiation unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/e2e/clients/kubernetes-client.ts`:
- Around line 242-245: The code unconditionally disables TLS verification by
setting NODE_TLS_REJECT_UNAUTHORIZED = '0' when KubernetesClient.getProxyUrl()
returns a proxy, which affects all HTTPS in the process; change this to require
an explicit opt-in (e.g. check process.env.ALLOW_INSECURE_TLS === 'true') before
mutating NODE_TLS_REJECT_UNAUTHORIZED, or better yet avoid the global mutation
entirely and instead configure the specific HTTP client/agent used by the code
path that talks to the proxy (locate usages around KubernetesClient.getProxyUrl
and the HTTP/agent creation) to accept insecure certs only for that agent when
ALLOW_INSECURE_TLS is true.
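One way to scope the insecure-TLS behavior to a single connection, rather than the whole process, is a dedicated `https.Agent`. A sketch: `ALLOW_INSECURE_TLS` is the opt-in variable suggested above, and how the agent is wired into the proxy request is an assumption.

```typescript
import * as https from 'https';

// Opt-in only: the default remains strict certificate verification.
const allowInsecure = process.env.ALLOW_INSECURE_TLS === 'true';

// This agent affects only requests explicitly passed `{ agent: proxyAgent }`,
// unlike NODE_TLS_REJECT_UNAUTHORIZED, which disables verification process-wide.
const proxyAgent = new https.Agent({ rejectUnauthorized: !allowInsecure });

// Usage idea:
//   https.request(proxyUrl, { agent: proxyAgent }, handleResponse);
```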
In `@frontend/e2e/fixtures/cleanup-fixture.ts`:
- Around line 115-143: The cleanup loop in
frontend/e2e/fixtures/cleanup-fixture.ts currently only deletes ConfigMap and
Secret in the switch inside the for (const resource of others) block and
silently ignores other core API resource types; update the switch in that loop
(the resource.type handling) to either add handlers for commonly used kinds
(e.g., Service, Pod, PersistentVolumeClaim) by calling the appropriate client
methods, or if you prefer a minimal change, add a warning in the default case
that logs the unhandled resource type and its name (e.g., console.warn or
processLogger.warn) so orphaned resources are visible; alternatively, if the
client exposes a generic deleteNamespacedResource or
deleteCoreV1Namespaced(kind, name, namespace) helper, replace the specific cases
with that generic call to delete any namespaced core resource.
- Line 33: The SKIP_TEST_CLEANUP check in cleanup-fixture.ts is inconsistent
with global.teardown.ts; update the skipCleanup condition (the const skipCleanup
that reads process.env.SKIP_TEST_CLEANUP and process.env.DEBUG) to treat DEBUG
as both '1' and 'true' (same behavior as global.teardown.ts) so both per-test
cleanup and global teardown use the same DEBUG semantics.
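For the cleanup switch above, the minimal-change option (warn in the `default` case) could look like the following. This is a sketch: the `TrackedResource` shape and the `deleteConfigMap`/`deleteSecret` method names are illustrative assumptions, not confirmed from the PR.

```typescript
type TrackedResource = { type: string; name: string; namespace: string };

// Minimal client surface assumed by this sketch.
interface CleanupClient {
  deleteConfigMap(name: string, namespace: string): Promise<void>;
  deleteSecret(name: string, namespace: string): Promise<void>;
}

async function cleanupResource(client: CleanupClient, resource: TrackedResource): Promise<void> {
  switch (resource.type) {
    case 'ConfigMap':
      await client.deleteConfigMap(resource.name, resource.namespace);
      break;
    case 'Secret':
      await client.deleteSecret(resource.name, resource.namespace);
      break;
    default:
      // Surface orphaned resources instead of ignoring them silently.
      console.warn(
        `No cleanup handler for ${resource.type} "${resource.name}" in ${resource.namespace}; it may be orphaned`,
      );
  }
}
```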
In `@frontend/e2e/global.setup.ts`:
- Around line 45-49: The current Playwright selector in global.setup.ts uses a
title attribute (providerButton = page.locator(`[title="Log in with
${opts.provider}"]`)) which is brittle for external OAuth UIs; change the
locator to use Playwright's text-based selector (e.g., use page.locator or
page.getByText with opts.provider) so the login button is found by visible text
content instead of a title attribute, and keep the existing existence check and
click flow around providerButton and opts.provider.
In `@frontend/playwright.config.ts`:
- Around line 73-79: The developer project references a hard-coded storageState
file (storageState: path.resolve(__dirname, 'e2e', '.auth', 'developer.json'))
but that file is only created when BRIDGE_HTPASSWD_USERNAME and
BRIDGE_HTPASSWD_PASSWORD are set in global.setup.ts, so running the 'developer'
project can fail if htpasswd is not configured; update
frontend/playwright.config.ts to detect the existence of that file (or the
presence of the BRIDGE_HTPASSWD_* env vars) before assigning storageState for
the 'developer' project and fallback to no storageState (or mark the project as
skipped) when missing, or alternatively add a stub developer.json in no-auth
mode — change handling around the 'developer' project configuration to be
conditional to avoid a missing-file error.
---
Outside diff comments:
In
`@frontend/packages/helm-plugin/src/components/forms/HelmChartRepository/CreateHelmChartRepository.tsx`:
- Around line 92-101: The parsed YAML (safeLoad) may yield non-object values
which later break calls like referenceFor(...) / modelFor(...); update the block
that assigns HelmChartRepositoryRes (from safeLoad) to first verify it's a plain
object (e.g., typeof === 'object' && HelmChartRepositoryRes !== null &&
!Array.isArray(HelmChartRepositoryRes)) and that required resource fields exist
(kind and metadata) before mutating or passing it to referenceFor/modelFor; if
the guard fails, handle it by returning/throwing a validation error or setting
form errors so downstream code (including the namespace assignment and any
referenceFor/modelFor calls) never operates on non-object YAML.
In `@frontend/packages/knative-plugin/src/components/add/brokers/AddBroker.tsx`:
- Around line 37-67: The createResources function currently returns raw null on
YAML parse failure which breaks the Promise chain; update the catch in
createResources (where safeLoad and actions.setStatus are used) to return
Promise.resolve(null) so the function always returns a Promise<K8sResourceKind |
null>, and then add a null-check in handleSubmit after the createResources
promise resolves (i.e., skip calling handleRedirect when the resolved value is
null) to prevent redirecting on failed creation.
In `@frontend/packages/knative-plugin/src/utils/create-channel-utils.ts`:
- Around line 170-174: Handle YAML parse errors from safeLoad and avoid unsafe
property access: wrap the safeLoad(configMap.data?.['default-ch-config']) call
in a try/catch and on error log/return early so ChannelForm won't crash; assert
the result is an object (e.g., check typeof cfg === 'object' && cfg !== null)
before using it; replace direct accesses cfg?.namespaceDefaults[namespace].kind
and cfg?.clusterDefault.kind with fully guarded access (e.g.,
cfg?.namespaceDefaults?.[namespace]?.kind and cfg?.clusterDefault?.kind guarded
by cfg?.clusterDefault) and provide a safe fallback if neither exists before
assigning defaultConfiguredChannel. Ensure these changes are made in
create-channel-utils.ts where safeLoad, cfg, and defaultConfiguredChannel are
used.
---
Nitpick comments:
In `@frontend/e2e/clients/kubernetes-client.ts`:
- Around line 28-41: The pollUntil function requires an explicit intervalMs
which callers may forget; update the async function pollUntil(condition: () =>
Promise<boolean>, timeoutMs: number, intervalMs: number) to provide a sensible
default (e.g. intervalMs = 1000) in its signature and ensure callers can omit
the third argument; keep the existing behavior otherwise so the loop still uses
intervalMs for the setTimeout and the return semantics remain unchanged.
- Around line 209-214: The kubeconfig is written with default permissions;
update the logic around outputPath/kubeconfigYaml to create the containing
directory with restrictive mode (e.g., 0o700) and write the kubeconfig file with
restrictive mode (e.g., 0o600) so the OAuth token is not world-readable; modify
the mkdirSync call for dir to include { recursive: true, mode: 0o700 } and
change the fs.writeFileSync call to set file mode (e.g.,
fs.writeFileSync(outputPath, kubeconfigYaml, { encoding: 'utf8', mode: 0o600 }))
or, if writeFileSync signature in your runtime doesn't accept options, call
fs.chmodSync(outputPath, 0o600) immediately after writing. Ensure these changes
are applied where outputPath, kubeconfigYaml, dir, fs.mkdirSync and
fs.writeFileSync are used.
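The defaulted-interval signature suggested above can be sketched as follows. The real `pollUntil` return semantics are not shown in this PR, so the resolve-true-on-success / resolve-false-on-timeout contract here is an assumption.

```typescript
// Poll an async condition until it returns true or the timeout elapses.
// intervalMs defaults to 1000 so callers can omit the third argument.
async function pollUntil(
  condition: () => Promise<boolean>,
  timeoutMs: number,
  intervalMs = 1000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) {
      return true;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // condition never became true before the deadline
}
```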
In `@frontend/e2e/fixtures/index.ts`:
- Around line 45-56: The k8sClient fixture creates a new KubernetesClient per
test and should be promoted to worker scope to reuse the heavy kubeconfig/API
init; change the fixture declaration for k8sClient from a single async function
to an array form with options e.g. k8sClient: [ async ({ testConfig }, use) => {
const client = new KubernetesClient(..., testConfig.kubeConfigPath); await
use(client); }, { scope: 'worker' } ], ensuring you keep the same dependency on
the worker-scoped testConfig fixture and the KubernetesClient instantiation
unchanged.
In `@frontend/e2e/global.setup.ts`:
- Around line 101-109: The namespace creation catch currently only warns and
allows setup to continue, which leaves tests running against a non-existent
testNamespace; update the error handling in the block that calls
k8sClient.createNamespace and k8sClient.waitForNamespaceReady so it either
rethrows the caught error (making namespace creation fatal) or sets a clearly
named flag (e.g., namespaceCreationFailed) on the setup context that tests can
check to skip/abort, and make the warning message explicit about the
implications (include testNamespace and that dependent tests will be
skipped/failed); modify the code around k8sClient.createNamespace,
waitForNamespaceReady and the catch to implement one of these two behaviors and
ensure downstream test logic checks the new flag if you choose the skip
approach.
- Around line 119-125: The test auth token is being written to disk insecurely;
update the write path around CONFIG_FILE and testConfig so the file is created
with restrictive permissions and not exposed in CI artifacts: when creating the
directory and writing the file (see fs.mkdirSync, fs.writeFileSync and
CONFIG_FILE) ensure the file is written with mode 0o600 (owner read/write only)
and/or call fs.chmodSync(CONFIG_FILE, 0o600) immediately after write, ensure the
file remains gitignored, and add CI cleanup steps to delete CONFIG_FILE after
tests finish to avoid leaving tokens in artifacts.
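The permission hardening suggested for both the kubeconfig and the token config file reduces to one pattern, sketched below assuming a POSIX filesystem. The `writeSecretFile` name is hypothetical, not part of this PR.

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Write a secret-bearing file with owner-only permissions:
// directory mode 0o700, file mode 0o600.
function writeSecretFile(outputPath: string, contents: string): void {
  const dir = path.dirname(outputPath);
  fs.mkdirSync(dir, { recursive: true, mode: 0o700 });
  fs.writeFileSync(outputPath, contents, { encoding: 'utf8', mode: 0o600 });
  // The mode option is ignored when the file already exists, so enforce
  // it explicitly as a belt-and-braces step.
  fs.chmodSync(outputPath, 0o600);
}
```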
In `@frontend/e2e/pages/base-page.ts`:
- Around line 91-96: navigateToTab currently duplicates waiting logic that
robustClick already performs: remove the pre-click waits (the call to
waitForLoadingComplete(Math.min(...)) and the explicit locator.waitFor) inside
navigateToTab and simply call robustClick(locator) followed by the existing
post-click waitForLoadingComplete(); rely on robustClick's internal calls to
waitForLoadingComplete and locator.waitFor to avoid added latency while
preserving stability (refer to the navigateToTab and robustClick methods and the
waitForLoadingComplete and locator.waitFor calls).
- Around line 70-79: The catch block reads clickError.message but clickError may
not be an Error, so first narrow/coerce the thrown value before accessing
message; inside the catch for the locator click (referencing clickError,
locator.click, attempt, retries, attemptTimeout) compute a safe message string
(e.g., use clickError instanceof Error ? clickError.message : String(clickError)
or check typeof/canAccess 'message') and use that safe string in the includes
checks, then proceed with the existing retry/force-click logic; ensure you don't
assume clickError has a message property before reading it.
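The error-narrowing step above boils down to a tiny helper. This is a sketch; `getErrorMessage` and `isRetryableClickError` are hypothetical names chosen for illustration.

```typescript
// Safely extract a message from an unknown thrown value, since catch
// bindings are `unknown` and may not be Error instances.
function getErrorMessage(err: unknown): string {
  return err instanceof Error ? err.message : String(err);
}

// Usage inside a retry loop: never read `.message` before narrowing.
function isRetryableClickError(err: unknown): boolean {
  const message = getErrorMessage(err);
  return message.includes('intercepts pointer events') || message.includes('Timeout');
}
```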
In
`@frontend/packages/console-shared/src/components/editor/yaml-download-utils.ts`:
- Around line 9-12: The filename derivation currently assumes obj.metadata.name
exists after safeLoad (in yaml-download-utils.ts); add explicit guards before
using obj.metadata.name—check that obj.metadata is an object and
obj.metadata.name is a non-empty string (e.g., typeof === 'string' &&
obj.metadata.name.trim()), and only then set filename =
`${obj.kind.toLowerCase()}-${obj.metadata.name}.yaml`; otherwise fall back to a
safe default filename (e.g., `${obj.kind.toLowerCase() ||
'resource'}-unnamed.yaml` or include a timestamp) so no exception is thrown when
metadata or name is missing.
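A sketch of the guarded filename derivation described above. The parsed-object shape and the `-unnamed` fallback are illustrative assumptions, not the utility's actual behavior.

```typescript
// Derive a download filename from a parsed YAML resource, falling back
// to a safe default when kind or metadata.name is missing.
function deriveYamlFilename(obj: {
  kind?: string;
  metadata?: { name?: unknown };
}): string {
  const kind =
    typeof obj.kind === 'string' && obj.kind ? obj.kind.toLowerCase() : 'resource';
  const name =
    obj.metadata && typeof obj.metadata.name === 'string' ? obj.metadata.name.trim() : '';
  return name ? `${kind}-${name}.yaml` : `${kind}-unnamed.yaml`;
}
```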
In
`@frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmChartVersionDropdown.tsx`:
- Around line 125-133: The fetch/parse catch block can leave `json` undefined
and execution continues into `getChartEntriesByName`, so update the
`HelmChartVersionDropdown` logic to return immediately on failure: inside the
try/catch that loads `/api/helm/charts/index.yaml` ensure the catch either
rethrows or sets a failure path that returns when `ignore` is false (or simply
return early from the function/component when parse fails), so that code using
`json` (used by getChartEntriesByName) never runs with an undefined value;
adjust the existing `catch { if (ignore) return; }` to also `return` (or handle
the error path) when not ignoring.
In
`@frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmInstallUpgradePage.tsx`:
- Around line 168-176: The YAML parsing path in HelmInstallUpgradePage.tsx
currently accepts non-object YAML (scalars/arrays) which later causes API
errors; after safeLoad(yamlData) (the valuesObj variable) validate that
valuesObj is a non-null plain object and not an array (e.g., typeof valuesObj
=== 'object' && valuesObj !== null && !Array.isArray(valuesObj)); if validation
fails call actions.setStatus with a submitError (e.g., t('helm-plugin~Values
must be a YAML map/object')) and return Promise.resolve() to stop submission;
update the block that handles safeLoad(yamlData) to include this check so only
object maps are sent to the API.
In
`@frontend/packages/helm-plugin/src/components/forms/url-chart/HelmURLChartInstallPage.tsx`:
- Around line 137-145: The parsed YAML (via safeLoad in HelmURLChartInstallPage)
may produce non-mapping types (scalars/arrays/null) that pass parsing but break
downstream; after safeLoad assigns valuesObj, validate that valuesObj is a
non-null plain object (not an array) before proceeding to construct the submit
payload, and if it is not, call actions.setStatus with a clear submitError
(e.g., "Invalid YAML - values must be a mapping/object") and return; reference
the safeLoad call, the valuesObj variable, and the existing actions.setStatus
error path to integrate the check.
In
`@frontend/public/components/monitoring/receiver-forms/alert-manager-receiver-forms.tsx`:
- Line 546: The code currently does an unchecked cast when parsing YAML:
safeLoad(originalAlertmanagerConfigJSON) as Record<string, any> and then
destructures { global }, which bypasses runtime shape validation; update the
parsing to verify the result is an object/map before destructuring — call
safeLoad(originalAlertmanagerConfigJSON), check that the returned value is a
non-null object (e.g., typeof === "object" && !Array.isArray(result)), then
extract const { global } = result as Record<string, any> (or default to an empty
object) so you avoid unsafe casting; adjust any downstream logic that expects
undefined accordingly.
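Several of the comments above reduce to the same runtime shape check on a parsed YAML value. A minimal sketch, operating on an already-parsed value; the `isPlainObject` and `validateHelmValues` names are assumptions for illustration.

```typescript
// Accept only a non-null, non-array object: the YAML "mapping" shape
// that Helm values and Alertmanager config destructuring both require.
function isPlainObject(value: unknown): value is Record<string, unknown> {
  return typeof value === 'object' && value !== null && !Array.isArray(value);
}

// Example guard mirroring the submit paths described above: returns the
// validated map, or null when the parsed YAML is a scalar/array/null.
function validateHelmValues(parsed: unknown): Record<string, unknown> | null {
  return isPlainObject(parsed) ? parsed : null;
}
```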
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 89d27142-2385-4601-b35f-ec42ef0bfd2b
⛔ Files ignored due to path filters (1)
`frontend/yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (26)
- .gitignore
- frontend/.eslintrc.js
- frontend/e2e/.env.example
- frontend/e2e/clients/kubernetes-client.ts
- frontend/e2e/fixtures/cleanup-fixture.ts
- frontend/e2e/fixtures/index.ts
- frontend/e2e/global.setup.ts
- frontend/e2e/global.teardown.ts
- frontend/e2e/pages/base-page.ts
- frontend/e2e/tests/smoke/smoke-test.spec.ts
- frontend/package.json
- frontend/packages/console-shared/src/components/editor/yaml-download-utils.ts
- frontend/packages/console-shared/src/hooks/useResourceSidebarSamples.ts
- frontend/packages/dev-console/src/components/deployments/EditDeployment.tsx
- frontend/packages/helm-plugin/src/catalog/providers/useHelmCharts.tsx
- frontend/packages/helm-plugin/src/components/forms/HelmChartRepository/CreateHelmChartRepository.tsx
- frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmChartVersionDropdown.tsx
- frontend/packages/helm-plugin/src/components/forms/install-upgrade/HelmInstallUpgradePage.tsx
- frontend/packages/helm-plugin/src/components/forms/url-chart/HelmURLChartInstallPage.tsx
- frontend/packages/knative-plugin/src/components/add/brokers/AddBroker.tsx
- frontend/packages/knative-plugin/src/utils/create-channel-utils.ts
- frontend/packages/vsphere-plugin/src/components/persist.ts
- frontend/playwright.config.ts
- frontend/public/components/edit-yaml.tsx
- frontend/public/components/monitoring/alertmanager/alertmanager-utils.tsx
- frontend/public/components/monitoring/receiver-forms/alert-manager-receiver-forms.tsx
📜 Review details
🔇 Additional comments (26)
frontend/packages/console-shared/src/hooks/useResourceSidebarSamples.ts (1)

2-2: Namespace import update looks correct. Switching to `import * as YAML` is consistent with the `YAML.dump(...)` call sites in this file and keeps the typing/runtime usage aligned.

frontend/packages/vsphere-plugin/src/components/persist.ts (1)

135-135: Type assertion update looks good here. This keeps the existing YAML/INI fallback behavior intact while resolving the typing friction around parsed config usage.

frontend/public/components/monitoring/alertmanager/alertmanager-utils.tsx (2)

47-48: Config parse typing change is fine. No runtime behavior regression in this path; this is a type-level cleanup.

58-59: Nice hardening for YAML encoding path. Normalizing non-object inputs via `String(yaml)` before Base64 encoding avoids accidental non-string encoding issues.

frontend/packages/helm-plugin/src/catalog/providers/useHelmCharts.tsx (1)

55-56: Looks good as a type-safety cleanup. No control-flow regression introduced by this change.

frontend/packages/dev-console/src/components/deployments/EditDeployment.tsx (1)

49-49: No actionable concern in this cast-only hunk.

frontend/public/components/edit-yaml.tsx (3)

77-80: No actionable concern in this cast-only hunk.

336-381: No actionable concern in this cast-only helper update.

597-599: No actionable concern in this cast-only save path update.

frontend/.eslintrc.js (1)

121-129: LGTM — sensible ESLint overrides for E2E tests. Disabling `no-console` and `no-empty-pattern` for `e2e/**/*.ts` is appropriate. Playwright fixtures commonly use empty destructuring patterns, and console output is expected in test setup/teardown flows.

frontend/e2e/pages/base-page.ts (1)

3-16: Good coverage of PatternFly loading indicators across versions. The selector list handles PF v5, v6, legacy spinners, and various skeleton patterns. This will help maintain test stability across PatternFly upgrades.

frontend/e2e/tests/smoke/smoke-test.spec.ts (1)

1-6: LGTM — minimal smoke test validating the E2E infrastructure. This serves its purpose as a sanity check that the global setup, authentication, and console loading work end-to-end. Consider adding more specific assertions (e.g., user dropdown visibility) as the test suite matures.

.gitignore (1)

30-33: LGTM — appropriate gitignore entries for Playwright artifacts. Correctly excludes sensitive auth state, runtime config, and generated test reports.

frontend/package.json (2)

52-55: LGTM — well-structured Playwright script commands. The script variants cover common execution modes: standard, headed, debug, and UI. This provides good developer ergonomics.

238-239: All package versions are valid and current. `@playwright/test: ^1.59.1`, `@kubernetes/client-node: ^1.4.0`, and `dotenv: ^17.4.2` are confirmed as the latest versions as of April 2026.

frontend/e2e/.env.example (1)

1-37: LGTM — comprehensive and well-documented environment template. The `.env.example` clearly documents all configuration options with sensible defaults and inline explanations. The organization into logical groups (console config, cluster access, auth, execution controls) makes it easy for developers to set up their environment.

frontend/playwright.config.ts (2)

12-27: Note: Chrome launch arguments include sandbox-disabling flags. The `--no-sandbox` and `--disable-setuid-sandbox` flags are necessary for containerized CI environments but reduce browser isolation. This is acceptable for E2E testing infrastructure where the test environment is trusted.

29-46: LGTM — well-configured Playwright settings. Good choices:
- `testIdAttribute: 'data-test'` aligns with existing console data-test attributes
- `globalSetup`/`globalTeardown` properly wired
- CI-specific retries and reporters
- Generous timeouts appropriate for cluster-based testing

frontend/e2e/global.setup.ts (1)

67-99: LGTM — well-structured global setup with graceful fallback. The phased approach (cluster auth → namespace setup → browser login) is clean. Falling back to local no-auth mode when cluster access fails provides good developer experience for local testing without a full cluster.

frontend/e2e/global.teardown.ts (1)

1-62: LGTM! Clean teardown implementation with appropriate error handling. The teardown logic is well-structured: it gracefully handles missing config files, provides informative logging, and uses best-effort cleanup semantics. The 120s timeout for namespace deletion is reasonable given potential finalizer delays on OpenShift clusters. One minor observation: the DEBUG check on line 9 could also consider `process.env.DEBUG === 'true'` for consistency, which you've partially covered. The pattern looks good.

frontend/e2e/fixtures/index.ts (1)

58-73: LGTM! Well-structured cleanup fixture with proper error isolation. The try/finally pattern ensures cleanup runs regardless of test outcome. The guard conditions (`!fixture.shouldSkipCleanup() && fixture.count > 0`) provide appropriate early-exit semantics, and errors during cleanup are logged without failing the test — which is the right behavior for teardown.

frontend/e2e/fixtures/cleanup-fixture.ts (1)

59-94: LGTM! Clean fixture API design. The `track`, `trackNamespace`, and `trackCustomResource` methods provide a clear and type-safe interface for registering resources. The `count` getter is useful for the guard in `index.ts`, and the separation of concerns is well thought out.

frontend/e2e/clients/kubernetes-client.ts (4)

100-149: LGTM! Solid OAuth token exchange implementation. The OAuth flow correctly handles the OpenShift challenging-client pattern, extracts the access token from the redirect location, and properly integrates proxy support. Error messages are informative without leaking credentials.

217-250: LGTM! Robust kubeconfig loading with sensible fallback chain. The constructor tries multiple sources in order: explicit path → discovered test config → default kubeconfig → token-based config. This provides flexibility for both CI and local development scenarios. The error message clearly guides users on how to resolve authentication issues.

370-379: Please provide the review comment that needs to be verified and rewritten by including it within `<review_comment>` tags.

287-301: Document or remove the `run-level` label from E2E test namespaces. The label `openshift.io/run-level: '0'` is used intentionally in production code (e.g., NodeTerminal.tsx for debug pods), where it appears alongside other privileged access markers. However, in the E2E client, this label is applied unconditionally to all test namespaces without explanation. Since the label affects operator and controller behavior, clarify whether test namespaces actually require it or if it was copied from the debug pod pattern without proper justification. If needed, add a comment explaining the rationale; if not, remove it to avoid unintended side effects on cluster operators during test execution.
const proxyUrl = KubernetesClient.getProxyUrl();
if (proxyUrl && !process.env.NODE_TLS_REJECT_UNAUTHORIZED) {
  process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
}
Global TLS verification disable is a security concern.
Setting NODE_TLS_REJECT_UNAUTHORIZED = '0' affects all HTTPS connections in the Node.js process, not just those going through the proxy. This could inadvertently disable certificate validation for other HTTP clients used by Playwright or test code.
Consider scoping this to the specific agent or making it opt-in via an explicit environment variable like ALLOW_INSECURE_TLS=true.
🔒 Proposed fix to make this explicit
const proxyUrl = KubernetesClient.getProxyUrl();
- if (proxyUrl && !process.env.NODE_TLS_REJECT_UNAUTHORIZED) {
- process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
+ if (proxyUrl && process.env.ALLOW_INSECURE_TLS === 'true') {
+ if (!process.env.NODE_TLS_REJECT_UNAUTHORIZED) {
+ console.warn(
+ '⚠️ ALLOW_INSECURE_TLS is set; disabling TLS verification globally for proxy support.',
+ );
+ process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

const proxyUrl = KubernetesClient.getProxyUrl();
if (proxyUrl && !process.env.NODE_TLS_REJECT_UNAUTHORIZED) {
  process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
}

Suggested:

const proxyUrl = KubernetesClient.getProxyUrl();
if (proxyUrl && process.env.ALLOW_INSECURE_TLS === 'true') {
  if (!process.env.NODE_TLS_REJECT_UNAUTHORIZED) {
    console.warn(
      '⚠️ ALLOW_INSECURE_TLS is set; disabling TLS verification globally for proxy support.',
    );
    process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/e2e/clients/kubernetes-client.ts` around lines 242 - 245, The code
unconditionally disables TLS verification by setting
NODE_TLS_REJECT_UNAUTHORIZED = '0' when KubernetesClient.getProxyUrl() returns a
proxy, which affects all HTTPS in the process; change this to require an
explicit opt-in (e.g. check process.env.ALLOW_INSECURE_TLS === 'true') before
mutating NODE_TLS_REJECT_UNAUTHORIZED, or better yet avoid the global mutation
entirely and instead configure the specific HTTP client/agent used by the code
path that talks to the proxy (locate usages around KubernetesClient.getProxyUrl
and the HTTP/agent creation) to accept insecure certs only for that agent when
ALLOW_INSECURE_TLS is true.
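An alternative to mutating the process-wide environment variable is to scope insecure TLS to a single agent, as the prompt suggests. A minimal sketch; the `buildAgentOptions` helper is hypothetical and not part of this PR.

```typescript
import type { AgentOptions } from 'https';

// Build per-agent TLS options instead of setting
// NODE_TLS_REJECT_UNAUTHORIZED globally: only an https.Agent created
// from these options skips certificate verification.
function buildAgentOptions(allowInsecureTls: boolean): AgentOptions {
  return { rejectUnauthorized: !allowInsecureTls };
}

// Usage sketch:
//   const agent = new https.Agent(
//     buildAgentOptions(process.env.ALLOW_INSECURE_TLS === 'true'),
//   );
```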
I'm wondering, since Console is already a k8s client, if we could use our existing k8sGet/coFetch infrastructure instead of @kubernetes/client-node so we're touching more "code" (or maybe we should be using @kubernetes/client-node in console instead?)
/label px-approved
a19710c to 09b45a2 (Compare)
@CodeRabbit review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
🧹 Nitpick comments (2)
frontend/e2e/pages/base-page.ts (1)
93-95: Use `robustClick` in `clickButtonByText` for consistency. This helper currently bypasses the resilience logic already centralized in `robustClick`, so overlay/loading interception flakiness can leak back in here.

Suggested refactor

  async clickButtonByText(buttonText: string): Promise<void> {
    const button = this.locator('button', { hasText: buttonText });
-   await button.click();
+   await this.robustClick(button);
+   await this.waitForLoadingComplete();
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/pages/base-page.ts` around lines 93 - 95, The clickButtonByText helper currently calls button.click() directly which bypasses the existing resilience in robustClick; update clickButtonByText to locate the button (using this.locator('button', { hasText: buttonText }) or equivalent) and invoke the shared robustClick helper (e.g., this.robustClick or robustClick(...)) against that locator instead of calling click() directly so overlay/loading interception handling is used consistently.

frontend/e2e/fixtures/cleanup-fixture.ts (1)

38-59: Avoid swallowing client initialization failures in `getClient()`. Returning `null` without logging the caught error makes cleanup failures hard to debug (e.g., malformed `.test-config.json`, invalid kubeconfig path). Log the root cause before returning.

🛠️ Proposed fix

- } catch {
+ } catch (error) {
+   const msg = error instanceof Error ? error.message : String(error);
+   console.warn(`[Cleanup] Failed to initialize Kubernetes client: ${msg}`);
    return null;
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/fixtures/cleanup-fixture.ts` around lines 38 - 59, The getClient() helper currently swallows all errors and returns null; update it to catch the thrown error, log the error (including the exception message and context such as configPath or kubeConfigPath) using the existing logger or console before returning null, so failures parsing .test-config.json or invalid kubeconfig paths are visible; locate the try/catch around new KubernetesClient(...) in getClient() and add an error log that includes the caught error and relevant variables (configPath, kubeConfigPath, authToken) prior to the null return.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/e2e/global.setup.ts`:
- Around line 67-71: In globalSetup, remove any stale CONFIG_FILE
(.test-config.json) before the rest of the setup runs: locate the globalSetup
function and the CONFIG_FILE constant and add logic early in the function
(before the setup/cluster logic and before any code that writes CONFIG_FILE) to
check for the CONFIG_FILE file and unlink/delete it if it exists so
local/no-cluster runs cannot reuse an old config; ensure this deletion happens
only when SKIP_GLOBAL_SETUP is not set to 'true' (or run it unconditionally
before the SKIP check if you prefer skipping the rest but still want to clear
stale state).
- Around line 38-39: Persisted auth files are created with default permissions;
tighten them by ensuring the containing directory is created/set to mode 0o700
(owner only) and by setting each written storage file (e.g., when calling
page.context().storageState with opts.storageStatePath and other writes around
the storageState calls at ~lines 60–62) and the .auth file creation (symbol/name
".auth" at ~line 132) to file mode 0o600 after write; locate the storageState
calls and the .auth creation and add post-write permission fixes (fs.chmod /
fs.mkdir with mode) so directories are 0700 and files are 0600.
In `@frontend/e2e/pages/base-page.ts`:
- Around line 22-27: The current logic counts loadingElements then waits only on
loadingElements.first() and swallows any errors; change this to wait for every
matched loader instead of only the first and allow timeouts/errors to surface
(or rethrow) so real hang conditions aren't masked: iterate over loadingElements
(or map to an array of elements) and await Promise.all of element.waitFor({
state: 'hidden', timeout: timeoutMs }) for each, and remove the blanket catch
that returns/ignores errors so failures propagate; keep a safe no-elements
branch when count === 0 to skip waiting. Reference the loadingElements variable
and its waitFor calls in the method in base-page.ts.
- Around line 57-60: The code computes attemptTimeout = timeout / retries
without validating inputs; validate and normalize the retry inputs (e.g., ensure
retries is an integer >= 1 and timeout is a positive number) before computing
attemptTimeout in the function or method that defines timeout, retries,
retryDelay, force (refer to the options destructuring and attemptTimeout
variable). If invalid values are passed, either throw a clear error or coerce
them to safe defaults (for example set retries = Math.max(1,
Math.floor(retries)) and enforce timeout > 0) so attemptTimeout is always a
valid positive number.
---
Nitpick comments:
In `@frontend/e2e/fixtures/cleanup-fixture.ts`:
- Around line 38-59: The getClient() helper currently swallows all errors and
returns null; update it to catch the thrown error, log the error (including the
exception message and context such as configPath or kubeConfigPath) using the
existing logger or console before returning null, so failures parsing
.test-config.json or invalid kubeconfig paths are visible; locate the try/catch
around new KubernetesClient(...) in getClient() and add an error log that
includes the caught error and relevant variables (configPath, kubeConfigPath,
authToken) prior to the null return.
In `@frontend/e2e/pages/base-page.ts`:
- Around line 93-95: The clickButtonByText helper currently calls button.click()
directly which bypasses the existing resilience in robustClick; update
clickButtonByText to locate the button (using this.locator('button', { hasText:
buttonText }) or equivalent) and invoke the shared robustClick helper (e.g.,
this.robustClick or robustClick(...)) against that locator instead of calling
click() directly so overlay/loading interception handling is used consistently.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: d7e99e91-378f-4362-8a08-4da6bce5491e
📒 Files selected for processing (11)
- .gitignore
- frontend/.eslintrc.js
- frontend/e2e/.env.example
- frontend/e2e/clients/kubernetes-client.ts
- frontend/e2e/fixtures/cleanup-fixture.ts
- frontend/e2e/fixtures/index.ts
- frontend/e2e/global.setup.ts
- frontend/e2e/global.teardown.ts
- frontend/e2e/pages/base-page.ts
- frontend/e2e/tests/smoke/smoke-test.spec.ts
- frontend/playwright.config.ts
✅ Files skipped from review due to trivial changes (4)
- frontend/e2e/tests/smoke/smoke-test.spec.ts
- .gitignore
- frontend/e2e/.env.example
- frontend/e2e/global.teardown.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- frontend/.eslintrc.js
- frontend/e2e/fixtures/index.ts
- frontend/playwright.config.ts
- frontend/e2e/clients/kubernetes-client.ts
📜 Review details
🔇 Additional comments (1)
frontend/e2e/fixtures/cleanup-fixture.ts (1)
117-168: Cleanup ordering looks solid (resources first, namespace last). Deleting namespaced resources before namespace deletion, then waiting for namespace termination, is the right lifecycle order and reduces orphan/leak risk.
  const count = await loadingElements.count().catch(() => 0);
  if (count > 0) {
    await loadingElements.first().waitFor({ state: 'hidden', timeout: timeoutMs });
  }
} catch {
  // Loading indicators may have already disappeared — continue
Don’t mask loading failures or wait on only one loader
Line 24 waits only on the first matched indicator, and Lines 26-27 swallow timeouts/errors. This can proceed while other loaders are still visible and hide true page-hang conditions.
Suggested fix
protected async waitForLoadingComplete(timeoutMs = 5_000): Promise<void> {
const loadingSelector = this.loadingIndicators.join(', ');
const loadingElements = this.page.locator(loadingSelector);
- try {
- const count = await loadingElements.count().catch(() => 0);
- if (count > 0) {
- await loadingElements.first().waitFor({ state: 'hidden', timeout: timeoutMs });
- }
- } catch {
- // Loading indicators may have already disappeared — continue
- }
+ const count = await loadingElements.count().catch(() => 0);
+ if (count === 0) {
+ return;
+ }
+
+ await this.page.waitForFunction(
+ (selector: string) =>
+ [...document.querySelectorAll(selector)].every((el) => {
+ if (!(el instanceof HTMLElement)) return true;
+ const style = window.getComputedStyle(el);
+ return (
+ style.display === 'none' ||
+ style.visibility === 'hidden' ||
+ el.getClientRects().length === 0
+ );
+ }),
+ loadingSelector,
+ { timeout: timeoutMs },
+ );
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

  const count = await loadingElements.count().catch(() => 0);
  if (count > 0) {
    await loadingElements.first().waitFor({ state: 'hidden', timeout: timeoutMs });
  }
} catch {
  // Loading indicators may have already disappeared — continue

Suggested:

protected async waitForLoadingComplete(timeoutMs = 5_000): Promise<void> {
  const loadingSelector = this.loadingIndicators.join(', ');
  const loadingElements = this.page.locator(loadingSelector);
  const count = await loadingElements.count().catch(() => 0);
  if (count === 0) {
    return;
  }

  await this.page.waitForFunction(
    (selector: string) =>
      [...document.querySelectorAll(selector)].every((el) => {
        if (!(el instanceof HTMLElement)) return true;
        const style = window.getComputedStyle(el);
        return (
          style.display === 'none' ||
          style.visibility === 'hidden' ||
          el.getClientRects().length === 0
        );
      }),
    loadingSelector,
    { timeout: timeoutMs },
  );
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/e2e/pages/base-page.ts` around lines 22 - 27, The current logic
counts loadingElements then waits only on loadingElements.first() and swallows
any errors; change this to wait for every matched loader instead of only the
first and allow timeouts/errors to surface (or rethrow) so real hang conditions
aren't masked: iterate over loadingElements (or map to an array of elements) and
await Promise.all of element.waitFor({ state: 'hidden', timeout: timeoutMs })
for each, and remove the blanket catch that returns/ignores errors so failures
propagate; keep a safe no-elements branch when count === 0 to skip waiting.
Reference the loadingElements variable and its waitFor calls in the method in
base-page.ts.
const { timeout = 30_000, retries = 3, retryDelay = 1_000, force = false } = options;
let lastError: Error | null = null;
const attemptTimeout = timeout / retries;
Validate retry inputs before deriving attempt timeout
Line 59 divides by retries directly. If retries <= 0 (or very small timeout), this can produce invalid attempt timings and skip retries entirely with an unhelpful failure.
Suggested fix
): Promise<void> {
const { timeout = 30_000, retries = 3, retryDelay = 1_000, force = false } = options;
+ if (retries < 1) throw new Error('retries must be >= 1');
+ if (timeout < 1) throw new Error('timeout must be >= 1');
let lastError: Error | null = null;
- const attemptTimeout = timeout / retries;
+ const attemptTimeout = Math.max(250, Math.floor(timeout / retries));🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/e2e/pages/base-page.ts` around lines 57 - 60, The code computes
attemptTimeout = timeout / retries without validating inputs; validate and
normalize the retry inputs (e.g., ensure retries is an integer >= 1 and timeout
is a positive number) before computing attemptTimeout in the function or method
that defines timeout, retries, retryDelay, force (refer to the options
destructuring and attemptTimeout variable). If invalid values are passed, either
throw a clear error or coerce them to safe defaults (for example set retries =
Math.max(1, Math.floor(retries)) and enforce timeout > 0) so attemptTimeout is
always a valid positive number.
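The normalization described above can be a small pure helper. This is a sketch: the `normalizeRetryOptions` name and the 250 ms floor are assumptions mirroring the suggested fix, not code from this PR.

```typescript
interface RetryOptions {
  timeout?: number;
  retries?: number;
}

// Coerce retry inputs to safe values before deriving the per-attempt
// timeout, so retries <= 0 or a tiny timeout can't break the loop.
function normalizeRetryOptions({ timeout = 30_000, retries = 3 }: RetryOptions) {
  const safeRetries = Math.max(1, Math.floor(retries));
  const safeTimeout = Math.max(1, timeout);
  const attemptTimeout = Math.max(250, Math.floor(safeTimeout / safeRetries));
  return { retries: safeRetries, timeout: safeTimeout, attemptTimeout };
}
```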
Force-pushed from 09b45a2 to e139b98
@CodeRabbit review
✅ Actions performed: Review triggered.
🧹 Nitpick comments (5)
frontend/e2e/global.teardown.ts (1)
21-30: Config parsing is resilient but silent on corruption. The try-catch gracefully handles corrupted JSON, but a warning log would help debug scenarios where teardown unexpectedly skips cleanup due to a malformed config file.
📝 Proposed fix to log corruption
  } catch {
-   // config file corrupted — skip cleanup
+   console.warn('⚠️ .test-config.json is corrupted — skipping cleanup');
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/global.teardown.ts` around lines 21 - 30, The config-read block silently swallows JSON.parse errors; update the catch to capture the thrown error and emit a warning so corrupted CONFIG_FILE cases are visible. Modify the try-catch around fs.readFileSync/JSON.parse (the block that sets testNamespace, kubeConfigPath, authToken when fs.existsSync(CONFIG_FILE)) to catch (err) and call the project's logger or console.warn with a descriptive message including CONFIG_FILE and the error message/stack.

frontend/e2e/clients/kubernetes-client.ts (3)
365-379: Type assertions on K8s client responses are pragmatic but worth noting. The `as any` casts on lines 371 and 378 work around incomplete type definitions in `@kubernetes/client-node`. This is common practice, but consider adding a brief comment explaining why the cast is needed (e.g., response typing gaps) for future maintainers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/clients/kubernetes-client.ts` around lines 365 - 379, The two "as any" type assertions in patchConfigMap (on the result of k8sApi.readNamespacedConfigMap and the patchNamespacedConfigMap call) are pragmatic workarounds for incomplete types in `@kubernetes/client-node`; add a short inline comment above each cast explaining that the library's response/parameter typings are incomplete (or mismatched) and that the cast is intentional to merge existing data and use k8s.PatchStrategy.MergePatch, so future maintainers know the risk and can remove the cast when the upstream types are fixed.
58-98: Consider adding a connection timeout to the proxy socket. The `net.connect` call has no timeout, which could cause tests to hang indefinitely if the proxy is unresponsive. This is particularly problematic in CI environments where stuck jobs waste resources.

⏱️ Proposed fix to add timeout
  const proxySocket = net.connect(
    { host: proxy.hostname, port: parseInt(proxy.port || '3128', 10) },
    () => {
      proxySocket.write(
        [
          `CONNECT ${options.host}:${options.port} HTTP/1.1`,
          `Host: ${options.host}:${options.port}`,
          'Connection: keep-alive',
          '',
          '',
        ].join('\r\n'),
      );
    },
  );
+ proxySocket.setTimeout(30_000, () => {
+   proxySocket.destroy(new Error('Proxy connection timed out'));
+ });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/clients/kubernetes-client.ts` around lines 58 - 98, The createProxyAgent function's net.connect call lacks a connection timeout causing potential hangs; update the proxySocket setup in createProxyAgent to set a connection timeout (e.g., 5s) using proxySocket.setTimeout(...) and handle the 'timeout' event by removing listeners, destroying the socket, and invoking the original callback with a timeout Error; also clear the timeout and remove the timeout listener when the proxy returns a full response (inside onData when statusCode parsed) or on 'error' to avoid leaks. Ensure the timeout value is configurable or defined as a constant and reference createProxyAgent and the proxySocket event handlers when making the changes.
457-467: Silent error swallowing in `listCustomResources` may mask issues. The catch block returns an empty array for any error, making it impossible to distinguish between "no resources exist" and "API call failed". For e2e tests, this could lead to confusing false positives if the CRD isn't installed or permissions are missing.
🔍 Proposed fix to log errors
  } catch {
+   console.warn(`[KubernetesClient] Failed to list ${plural} in ${namespace}`);
    return [];
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/e2e/clients/kubernetes-client.ts` around lines 457 - 467, The current try/catch in listCustomResources silently swallows all errors from this.coApi.listNamespacedCustomObject and returns [], hiding API failures; update the catch to capture the error (catch (err)), log a clear contextual error message including the error details (e.g., "Failed to list custom resources for group/namespace/plural/version") referencing this.coApi.listNamespacedCustomObject and the method name listCustomResources, and then rethrow the error (or if you must preserve the original behavior, return [] only after logging) so e2e failures aren't masked.

frontend/playwright.config.ts (1)
63-63: Consider validating `WORKERS` parsing. `parseInt(process.env.WORKERS, 10)` will return `NaN` for invalid values like "abc", which could cause unexpected Playwright behavior. A fallback or validation would be safer.

🔢 Proposed fix with validation

- workers: process.env.WORKERS ? parseInt(process.env.WORKERS, 10) : isCI ? 1 : undefined,
+ workers: process.env.WORKERS
+   ? parseInt(process.env.WORKERS, 10) || 1
+   : isCI
+     ? 1
+     : undefined,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/playwright.config.ts` at line 63, The workers assignment in the Playwright config uses parseInt(process.env.WORKERS, 10) without validation, so invalid values produce NaN; update the workers expression to parse and validate process.env.WORKERS (e.g., const w = Number.parseInt(process.env.WORKERS, 10); if Number.isInteger(w) && w > 0 use w, otherwise fall back to isCI ? 1 : undefined) and replace the current workers: parseInt(...) usage with that validated value so only a positive integer is passed to Playwright.
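The validated form suggested in the prompt above can be factored into a pure helper. `parseWorkers` is a hypothetical name, not part of playwright.config.ts; the config would call it as `workers: parseWorkers(process.env.WORKERS, isCI)`.

```typescript
// Hypothetical helper for the WORKERS validation suggested above.
function parseWorkers(env: string | undefined, isCI: boolean): number | undefined {
  if (env !== undefined) {
    const w = Number.parseInt(env, 10);
    // Accept only positive integers; "abc", "", or "0" fall through to the defaults.
    if (Number.isInteger(w) && w > 0) return w;
  }
  // CI gets a single deterministic worker; locally Playwright picks its own count.
  return isCI ? 1 : undefined;
}
```

Returning `undefined` locally preserves Playwright's default worker auto-detection instead of pinning a value.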
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@frontend/e2e/clients/kubernetes-client.ts`:
- Around line 365-379: The two "as any" type assertions in patchConfigMap (on
the result of k8sApi.readNamespacedConfigMap and the patchNamespacedConfigMap
call) are pragmatic workarounds for incomplete types in `@kubernetes/client-node`;
add a short inline comment above each cast explaining that the library's
response/parameter typings are incomplete (or mismatched) and that the cast is
intentional to merge existing data and use k8s.PatchStrategy.MergePatch, so
future maintainers know the risk and can remove the cast when the upstream types
are fixed.
- Around line 58-98: The createProxyAgent function's net.connect call lacks a
connection timeout causing potential hangs; update the proxySocket setup in
createProxyAgent to set a connection timeout (e.g., 5s) using
proxySocket.setTimeout(...) and handle the 'timeout' event by removing
listeners, destroying the socket, and invoking the original callback with a
timeout Error; also clear the timeout and remove the timeout listener when the
proxy returns a full response (inside onData when statusCode parsed) or on
'error' to avoid leaks. Ensure the timeout value is configurable or defined as a
constant and reference createProxyAgent and the proxySocket event handlers when
making the changes.
- Around line 457-467: The current try/catch in listCustomResources silently
swallows all errors from this.coApi.listNamespacedCustomObject and returns [],
hiding API failures; update the catch to capture the error (catch (err)), log a
clear contextual error message including the error details (e.g., "Failed to
list custom resources for group/namespace/plural/version") referencing
this.coApi.listNamespacedCustomObject and the method name listCustomResources,
and then rethrow the error (or if you must preserve the original behavior,
return [] only after logging) so e2e failures aren't masked.
In `@frontend/e2e/global.teardown.ts`:
- Around line 21-30: The config-read block silently swallows JSON.parse errors;
update the catch to capture the thrown error and emit a warning so corrupted
CONFIG_FILE cases are visible. Modify the try-catch around
fs.readFileSync/JSON.parse (the block that sets testNamespace, kubeConfigPath,
authToken when fs.existsSync(CONFIG_FILE)) to catch (err) and call the project's
logger or console.warn with a descriptive message including CONFIG_FILE and the
error message/stack.
In `@frontend/playwright.config.ts`:
- Line 63: The workers assignment in the Playwright config uses
parseInt(process.env.WORKERS, 10) without validation, so invalid values produce
NaN; update the workers expression to parse and validate process.env.WORKERS
(e.g., const w = Number.parseInt(process.env.WORKERS, 10); if
Number.isInteger(w) && w > 0 use w, otherwise fall back to isCI ? 1 : undefined)
and replace the current workers: parseInt(...) usage with that validated value
so only a positive integer is passed to Playwright.
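The log-then-rethrow pattern recommended above for `listCustomResources` could be sketched as follows. This is an illustrative standalone helper: `listOrFail` is a hypothetical name, and `listFn` stands in for `this.coApi.listNamespacedCustomObject`.

```typescript
// Hypothetical wrapper sketching the log-then-rethrow pattern.
async function listOrFail<T>(
  listFn: () => Promise<T[]>,
  ctx: { group: string; namespace: string; plural: string },
): Promise<T[]> {
  try {
    return await listFn();
  } catch (err) {
    // Surface the context so a missing CRD or RBAC gap is obvious in CI logs.
    console.error(
      `Failed to list ${ctx.plural}.${ctx.group} in ${ctx.namespace}:`,
      (err as Error).message,
    );
    throw err; // Propagate instead of returning [] so failures aren't masked.
  }
}
```

The key design point is that an empty result list and a failed API call now produce different outcomes: the former returns `[]`, the latter fails the test with context.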
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 1fb0f476-7f2b-47a6-9f81-c4ac593ede18
📒 Files selected for processing (11)
- .gitignore
- frontend/.eslintrc.js
- frontend/e2e/.env.example
- frontend/e2e/clients/kubernetes-client.ts
- frontend/e2e/fixtures/cleanup-fixture.ts
- frontend/e2e/fixtures/index.ts
- frontend/e2e/global.setup.ts
- frontend/e2e/global.teardown.ts
- frontend/e2e/pages/base-page.ts
- frontend/e2e/tests/smoke/smoke-test.spec.ts
- frontend/playwright.config.ts
✅ Files skipped from review due to trivial changes (4)
- .gitignore
- frontend/.eslintrc.js
- frontend/e2e/.env.example
- frontend/e2e/pages/base-page.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- frontend/e2e/tests/smoke/smoke-test.spec.ts
- frontend/e2e/fixtures/index.ts
- frontend/e2e/global.setup.ts
📜 Review details
🔇 Additional comments (15)
frontend/e2e/clients/kubernetes-client.ts (5)
242-245: Global TLS verification disable remains a security concern. As previously noted, setting `NODE_TLS_REJECT_UNAUTHORIZED = '0'` affects all HTTPS connections in the Node.js process, not just proxy traffic. Consider making this opt-in via an explicit `ALLOW_INSECURE_TLS=true` check or scoping to the specific agent.
16-26: Defensive `isNotFound` helper looks solid. The function handles multiple error shapes (statusCode, response.statusCode, message parsing), which is good for working with the K8s client library's inconsistent error structures. The fallback to string matching for "404" and "not found" is pragmatic.
100-149: OAuth token flow is well-structured. The implementation correctly uses the OpenShift challenging client flow with Basic auth header and X-CSRF-Token. The token extraction from the redirect location is standard practice. Good error messaging with response body truncation.
209-214: Good security practice on file permissions. Creating the directory with `mode: 0o700` and the kubeconfig file with `mode: 0o600` properly restricts access to credentials. This aligns with K8s best practices for kubeconfig handling.
217-240: Kubeconfig loading fallback chain is well-designed. The priority order (explicit path → discovered local config → `loadFromDefault()` → token fallback) handles both CI and local development scenarios gracefully. The error message clearly guides users to run `oc login` or set `KUBECONFIG`.

frontend/e2e/global.teardown.ts (2)
8-12: Debug mode skip is appropriately defensive. Checking both `'1'` and `'true'` for `DEBUG` is consistent with `cleanup-fixture.ts` and provides flexibility for different user preferences. Good practice for e2e debugging.
37-57: Namespace cleanup flow is CI-friendly. Using `console.warn` instead of throwing on cleanup failures prevents CI from failing on teardown issues unrelated to test correctness. The 120-second wait is appropriate given namespace finalizers can take time.

frontend/playwright.config.ts (3)
12-27: Chrome args are comprehensive for headless e2e. Good set of flags for stable CI execution: `--disable-dev-shm-usage` prevents crashes in Docker, `--no-sandbox` works with containerized runs, and disabling various Chrome features reduces flakiness. The `--ignore-certificate-errors` flag aligns with self-signed cert handling elsewhere.
36-56: Timeouts are calibrated for OpenShift Console's API-heavy nature. The 90-second navigation timeout and 60-second action timeout account for Console's many K8s API calls during page loads. These are appropriate starting points; consider adding a comment noting they can be tuned down once the test suite matures.
65-84: Project configuration cleanly handles multi-user scenarios. The conditional `developer` project based on `BRIDGE_HTPASSWD_USERNAME` presence addresses the previous review feedback. Using `path.resolve` for storage state paths ensures VS Code Playwright extension compatibility.

frontend/e2e/fixtures/cleanup-fixture.ts (5)
6-13: `TrackedResource` interface is well-designed. Captures all necessary metadata for resource cleanup: name, optional namespace, API group/version, plural form, and kind. This structure supports both core and custom resources cleanly.
33-36: `skipCleanup` logic is now consistent. The `DEBUG` check now matches `global.teardown.ts` behavior (`'1' || 'true'`), addressing the previous review feedback about inconsistent environment variable handling.
38-63: Client initialization is appropriately defensive. Returning `null` on failure instead of throwing allows the cleanup fixture to gracefully degrade when K8s isn't available (e.g., local-only UI tests). The error logging captures the actual exception for debugging.
117-152: Cleanup ordering and error handling are well-implemented. The two-phase approach (delete resources → delete namespaces) prevents cascading errors. The warning for unhandled core types (lines 132-134) addresses previous feedback. Suppressing 404 errors during cleanup is correct since the resource may have been cleaned up by owner references.
166-171: Namespace wait timeout is appropriately shorter for per-test cleanup. The 60-second timeout is intentionally lower than global teardown's 120 seconds since per-test namespaces typically have fewer resources. The `.catch()` ensures cleanup issues don't fail the test itself.
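The non-fatal cleanup pattern described above could be factored out as a tiny helper. `safeCleanup` is a hypothetical name for what the fixture's inline `.catch()` achieves; the label argument is illustrative.

```typescript
// Hypothetical sketch of the fixture's swallow-and-warn cleanup pattern.
async function safeCleanup(task: () => Promise<void>, label: string): Promise<void> {
  await task().catch((err) => {
    // Log instead of throwing so a slow namespace deletion never fails the test.
    console.warn(`Cleanup warning (${label}):`, (err as Error).message);
  });
}
```

The trade-off is deliberate: test verdicts reflect only the test body, while cleanup problems remain visible in logs for later triage.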
Force-pushed from 1094877 to 22bb15d
Add the foundation for migrating Cypress e2e tests to Playwright: config with admin and developer projects using separate storageState files, global setup and teardown with cluster authentication and namespace lifecycle, KubernetesClient for typed K8s API calls, BasePage with robustClick and loading detection, CleanupFixture for automatic resource cleanup, and a smoke test that proves the full lifecycle works against both localhost (auth-disabled) and remote clusters. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 22bb15d to 9f3bc0a
@stefanonardo: all tests passed! Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Summary
- Playwright config: `admin` and conditional `developer` projects using separate `storageState` files, `testIdAttribute: 'data-test'`
- Global setup/teardown: cluster authentication with `storageState` persistence, auth-disabled mode for local dev, `.auth/` for `SKIP_GLOBAL_SETUP`
- `KubernetesClient`: typed K8s API client adapted from openshift-ui-tests-template, with OAuth token exchange and proxy support
- `BasePage`: abstract page object with `robustClick()` (PatternFly overlay handling) and loading detection
- `CleanupFixture`: automatic per-test resource cleanup with namespaces-last ordering
- ESLint overrides for `e2e/` files (`no-console`, `no-empty-pattern`)

How to verify
Local no-auth (localhost:9000)
1. source contrib/oc-environment.sh && ./bin/bridge
2. cd frontend && npx playwright test --project=admin

Against a remote cluster
1. oc login <cluster>
2. Create frontend/e2e/.env from frontend/e2e/.env.example, set WEB_CONSOLE_URL and BRIDGE_KUBEADMIN_PASSWORD
3. cd frontend && npx playwright test --project=admin
4. oc get ns | grep console-e2e should return nothing after test completes

Test plan
- SKIP_GLOBAL_SETUP=true reuses existing .auth/ files
- npx playwright test --list shows tests without TypeScript errors

🤖 Generated with Claude Code
Summary by CodeRabbit