GitHub Actions Interview Questions

Dependency Review and Security Scanning


A developer updates a dependency in a PR: `lodash: 4.17.20 → 4.17.21`. GitHub shows a yellow warning: "Lodash has a known vulnerability (CVE-2021-23337)." The PR should be blocked, but GitHub's warning is just informational. The developer merges anyway. The vulnerability reaches production. How do you enforce dependency checks?

GitHub's Dependency Review is informational by default. Make it enforcing:

1. Enable the Dependency Review workflow: GitHub provides a starter workflow (`.github/workflows/dependency-review.yml`) built on `actions/dependency-review-action` that scans PRs for vulnerable dependencies. Set it as a required status check in branch protection.
2. Once required, the check fails any PR that introduces new vulnerabilities, blocking merge until the developer updates to a safe version.
3. Use a stricter policy: Snyk or similar tools can be configured to reject any PR that carries vulnerabilities, not just newly introduced ones.
4. For the `lodash` case: verify whether `4.17.21` fixes the CVE. (It does — CVE-2021-23337 was patched in 4.17.21, so a warning on the upgrade itself points to stale advisory data.) If a new version does not include the fix, the check should fail until the developer moves to one that does.
5. For inherited vulnerabilities: if the new version of a dependency has its own known CVE, tools should flag it.
6. Codify a policy: "No dependencies with critical or high CVEs. Medium CVEs require approval. Low CVEs are informational only."
7. Use multiple tools: GitHub's built-in scanning plus Snyk for depth. Multiple layers catch more vulnerabilities.
8. For transitive dependencies: if your dependency depends on a vulnerable library, the tool should flag it. You might not be able to fix it directly, but you can apply an override, press the maintainer for a fix, or find alternatives.
9. Manual review: when a vulnerability is unavoidable (no fix available yet), a senior engineer reviews and formally signs off on accepting the risk.
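
The enforcement described in the first two points can be sketched as a small workflow; `fail-on-severity` is an input of `actions/dependency-review-action` that sets the blocking threshold:

```yaml
# .github/workflows/dependency-review.yml -- minimal sketch of an enforcing check.
# Mark this job as a required status check in branch protection so PRs cannot
# merge while it is failing.
name: Dependency Review
on: pull_request
permissions:
  contents: read
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high  # block high/critical; lower severities stay informational
```

`fail-on-severity: high` implements the auto-block tier of a tiered policy; routing medium CVEs to a human approver still needs a separate gate.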

Follow-up: How would you implement a tiered vulnerability policy that auto-blocks critical CVEs but allows low-risk ones with approval?

Your Dependency Review workflow flags a PR: "New dependency introduced: `faker-js@6.0.0`. This is a new external dependency not previously used." The developer responds: "I checked—it's from a trusted author on npm, popular library (1M weekly downloads), MIT license." Should you allow this dependency?

Introducing new dependencies carries risk beyond CVEs. Evaluate holistically:

1. Is the library maintained? Check: when was the last release? Are PRs reviewed promptly? Is the maintainer responsive? Inactive projects may not ship security patches quickly.
2. License compliance: is it compatible with your project's license? (MIT is permissive, but some licenses carry restrictions.)
3. Size impact: does the library add 100 KB to your bundle? Run `npm ls faker-js --all` to see where it lands in the dependency tree. Bloated libraries slow down your app.
4. Necessity: is this the best library for the task? If you're using `faker-js` only to generate test data, consider simpler alternatives or generating the data manually.
5. Security history: check the package's advisory history on the npm registry. Even popular libraries have had breaches.
6. Code review: for critical dependencies, review the library's source code. Look for suspicious patterns: `eval()` calls, `require()` of dynamically computed module names, install scripts.
7. Deprecation: is the library facing sunset? Check the GitHub repo's README for any "deprecated" warnings.
8. For widely used libraries like lodash or faker, trust is earned, but still do a sanity check: download counts (1M weekly is healthy), GitHub activity, and maintainer identity.
9. Policy: require tech-lead approval for any new external dependency. This gates the decision.
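
The license part of the approval gate can be automated in the same Dependency Review check; `allow-licenses` and `comment-summary-in-pr` are inputs of `actions/dependency-review-action`, and the license list below is an example policy, not a recommendation:

```yaml
# Sketch: fail the PR check when a new dependency's license is outside the
# approved set, and post a summary comment so reviewers see the findings.
      - uses: actions/dependency-review-action@v4
        with:
          allow-licenses: MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause
          comment-summary-in-pr: always
```

Security and maintenance criteria (activity, advisory history, maintainer identity) still need the human review step; only the license and CVE checks automate cleanly.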

Follow-up: Design a dependency approval workflow that evaluates security, license, and maintenance.

You use GitHub's Dependency Review and Dependabot. Dependabot creates a PR to update a vulnerable dependency. The Dependency Review check passes (the update fixes the CVE). But then, your organization's security team manually scans the updated library and finds it has a new, undisclosed vulnerability. Automated scanning missed it. What do you do?

Automated scanning is not foolproof:

1. Acknowledge the limitation: public CVE databases are updated continually, but zero-day vulnerabilities exist before they're disclosed. Automated tools scan against known CVEs, not unknown ones.
2. Implement defense in depth; don't rely solely on automated scanning. Combine: (a) automated CVE scanning (GitHub, Snyk), (b) manual code review of critical dependencies, (c) runtime monitoring to detect unusual behavior post-deployment.
3. For this scenario, your security team found a real vulnerability: (a) report it to the maintainer (responsible disclosure); (b) if the maintainer is unresponsive, patch it yourself or switch libraries; (c) report it to vulnerability databases (GitHub Advisories, Snyk) if they aren't yet aware.
4. Go beyond CVE matching with SCA and SAST tooling: Snyk Code, Semgrep, and SonarQube can detect suspicious code patterns (`eval()`, hard-coded secrets, SQL injection risks) even when no CVE has been filed yet.
5. For zero-days: add runtime protection (WAF, rate limiting, output encoding) to mitigate exploitation even when the code is vulnerable.
6. Build trust: maintain a list of "blessed" dependencies that you've reviewed deeply. Non-blessed dependencies get higher scrutiny.
7. Run an incident post-mortem whenever a vulnerability is discovered, automated or not: why did scanning miss it? Update your tools and policies to catch similar cases.
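
The pattern-based scanning in point 4 can be wired into CI as a separate job; a minimal sketch using Semgrep's public container image and registry rules (`--error` makes findings fail the job):

```yaml
# Sketch: SAST pass that flags dangerous patterns (eval, injection sinks,
# hard-coded secrets) independently of any filed CVE.
jobs:
  sast:
    runs-on: ubuntu-latest
    container: semgrep/semgrep
    steps:
      - uses: actions/checkout@v4
      - run: semgrep scan --config auto --error  # exit non-zero on findings
```

Running this alongside (not instead of) CVE scanning is the point: each layer catches what the other misses.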

Follow-up: How would you implement static analysis tools to detect vulnerability patterns beyond known CVEs?

Your monorepo has 50 services. Each service depends on `express@4.17.1` with a known CVE. You update the root `package.json` to `express@4.18.2` (fixes the CVE). But some services still use the old version (they have overrides in their local `package.json`). Dependency Review misses these because it only checks root dependencies. How do you catch all vulnerable dependencies in a monorepo?

Dependency Review tools often miss transitive or overridden dependencies. Implement comprehensive scanning:

1. Scan all dependency files: don't just check the root `package.json`. Scan each service's own `package.json`, lock files, and any overrides. GitHub's Dependency Review can be configured to scan subdirectories.
2. Generate a complete dependency tree: `npm ls --all` shows every dependency, direct and transitive. Scan this tree for vulnerabilities.
3. Lock files are definitive: `package-lock.json` or `yarn.lock` records the exact versions installed. Scan lock files (not just `package.json`) because they represent reality.
4. Use workspace-aware scanning: npm workspaces, Yarn workspaces, and pnpm all support it. In an npm workspace, the root lock file covers every workspace package, so `npm audit` at the workspace root scans them all at once.
5. Centralize versions: all services should use the same versions of shared libraries, via a shared root `package.json` and a single lock file.
6. Enforce consistency: add a CI check that fails if a service resolves a different version of `express` than the root specifies; `npm ls express` should report the same version everywhere.
7. For services with overrides: require explicit approval and documentation: "Service X uses express 4.17.1 (instead of root's 4.18.2) because [reason]. Reviewed by [person]."
8. Generate an SBOM: tools like CycloneDX and SPDX generators can produce a complete bill of materials for the entire monorepo, which you then scan for vulnerabilities.
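
The consistency check in point 6 can be sketched as a CI step; this assumes an npm v2/v3 lock file at the repo root and `jq` available on the runner:

```yaml
# Sketch: fail the build when more than one express version is resolved anywhere
# in the monorepo's lock file (direct, transitive, or service-level override).
      - name: Enforce a single express version
        run: |
          versions=$(jq -r '.packages | to_entries[]
                            | select(.key | endswith("node_modules/express"))
                            | .value.version' package-lock.json | sort -u)
          echo "express versions resolved: $versions"
          if [ "$(printf '%s\n' "$versions" | wc -l)" -ne 1 ]; then
            echo "Version drift detected: document and approve the override, or align versions"
            exit 1
          fi
```

The lock file's `packages` map keys are paths like `services/a/node_modules/express`, so matching on the suffix catches every installation site, including nested and overridden ones.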

Follow-up: Design a monorepo dependency governance system that enforces consistency.

Dependabot creates 50+ PRs to update dependencies (patch and minor versions). Most are low-risk (e.g., lodash 4.17.20 → 4.17.21). A few are higher-risk (major version upgrades). Your team can't review all 50 PRs manually. How do you handle dependency update volume?

Automate dependency management:

1. Group updates: instead of 50 individual PRs, use Dependabot's `groups` feature in `.github/dependabot.yml` to batch updates into, say, (a) patches (4.17.20 → 4.17.21), (b) minor versions (4.17 → 4.18), (c) major versions (4 → 5). This cuts the PR count to three.
2. Auto-merge low-risk updates: Dependabot doesn't merge on its own, but you can enable GitHub's auto-merge on its patch PRs (for example, via a small workflow that runs `gh pr merge --auto`) so they merge as soon as required checks pass.
3. For minor and major updates: require manual review and approval.
4. Consider Renovate as an alternative: it offers more sophisticated grouping and rules, e.g., "merge all patch and minor updates automatically; require approval for major updates."
5. Batch on a schedule: instead of merging each update immediately, batch them monthly (e.g., the first Monday of the month). This reduces churn.
6. Fast-track security updates: if Dependabot detects a CVE fix, prioritize it for immediate merge regardless of version bump.
7. Invest in testing: the key enabler is a comprehensive, fast test suite. If tests pass (as they should for most updates), auto-merge is safe.
8. Trust scorecard: rank dependencies by trust. Well-maintained libraries (lodash, express, react) auto-merge on patches; unknown libraries always get manual review.
9. Use an auto-merge workflow: a GitHub Actions job can approve and enable auto-merge on Dependabot PRs based on rules you define.
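
The grouping in point 1 can be sketched in `dependabot.yml`; `groups`, `applies-to`, and `update-types` are real Dependabot configuration keys, while the group names are examples:

```yaml
# .github/dependabot.yml -- sketch: collapse 50 PRs into two grouped ones.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      low-risk:
        applies-to: version-updates
        update-types:
          - "patch"
          - "minor"
      major:
        applies-to: version-updates
        update-types:
          - "major"
```

For point 2's auto-merge decision, the `dependabot/fetch-metadata` action exposes an `update-type` output (e.g. `version-update:semver-patch`) that a workflow can inspect before running `gh pr merge --auto`.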

Follow-up: How would you implement intelligent auto-merge logic for dependency updates based on trust and risk?

Your team uses npm, Python, and Docker. You have security scanning for npm (`npm audit`). But Python packages and Docker images aren't scanned. A Python dependency introduces a vulnerability, and it makes it to production. You realize your security coverage is incomplete. How do you extend coverage to every ecosystem you use?

Extend scanning across all dependency types:

1. Use multi-language SCA tools: Snyk, Dependabot, and Mend (formerly WhiteSource) support npm, pip, Maven, Go, Ruby, and more. Configure one tool across all languages.
2. For Python: `pip-audit` checks installed packages (or a requirements file via `-r requirements.txt`) against known CVEs. Add it as a workflow step.
3. For Docker: scan base images and installed OS packages. Use Trivy, Grype, or Snyk to scan images for vulnerabilities. Example: `trivy image node:18-alpine`.
4. Container registry scanning: most registries (Docker Hub, ECR, GCR) have built-in vulnerability scanning. Enable it, and verify an image has no critical CVEs before pulling it in CI.
5. Put the scans in your workflow:

```yaml
- name: Scan npm dependencies
  run: npm audit
- name: Scan Python dependencies
  run: pip-audit -r requirements.txt
- name: Build Docker image
  run: docker build -t myapp .
- name: Scan Docker image
  run: trivy image myapp
```

6. Apply a unified policy: every language must pass the same CVE threshold (e.g., no critical CVEs; medium and low require approval).
7. For compliance: generate an SBOM (Software Bill of Materials) for each deployment, covering all languages. This is required for government contracts and regulated industries.
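
The SBOM generation in point 7 can be sketched with a community action; `anchore/sbom-action` is one real option, and the format and file name here are example choices:

```yaml
# Sketch: generate a CycloneDX SBOM for the image built earlier in the job,
# covering npm, Python, and OS-level packages, and keep it for compliance audits.
      - uses: anchore/sbom-action@v0
        with:
          image: myapp
          format: cyclonedx-json
          output-file: sbom.cdx.json
```

The resulting SBOM can then be scanned as a unit (e.g. with Trivy or Grype) instead of re-scanning each ecosystem separately.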

Follow-up: Design a multi-language vulnerability scanning pipeline that covers npm, Python, Docker, and Java.

Your organization has a policy: "No external dependencies with unknown authors." A PR introduces a new library from a maintainer you've never heard of. Dependency Review doesn't flag unknown authors—it only flags known CVEs. The PR merges, and later, the maintainer's account is compromised and malicious code is inserted. Your system is now compromised. How do you defend against this class of supply chain attack?

Dependency Review needs supplementation against supply chain attacks:

1. Verify authors and maintainers: before allowing a new dependency, check (a) reputation (downloads, GitHub activity, publication history), (b) account age and history, (c) the maintainer's other projects—are they legitimate?
2. Use reputation tooling: OpenSSF Scorecard rates repositories on security practices (code review, branch protection, dependency pinning), and registry metadata (`npm view <pkg> maintainers`) shows who publishes and when.
3. Prefer packages with provenance and MFA: npm supports publish provenance and registry signatures, and requires 2FA for maintainers of high-impact packages. These shrink the account-takeover window.
4. Monitor for account takeovers: if a previously trusted maintainer publishes a version with suspicious code, SAST tools (SonarQube, Semgrep) run against new releases can catch suspicious patterns.
5. Maintain a trusted-source allowlist: only vetted, approved libraries can be used without special approval.
6. Pin precisely: instead of accepting any semver-compatible version, pin exact versions (with lock-file integrity hashes) or a specific commit SHA. When you want to upgrade, review the changes explicitly.
7. Verify signatures: if libraries sign their releases (via GPG or Sigstore), verify signatures before use.
8. For extremely critical systems: audit library source code before use. This is expensive but warranted for national security and critical infrastructure.
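
Two of these hardening steps translate directly into workflow config; the action name and SHA below are placeholders, not real pins, and `npm audit signatures` is a real subcommand in recent npm versions:

```yaml
# Sketch: supply-chain hardening in a workflow. Pin third-party actions to a
# full commit SHA you have reviewed, so a compromised maintainer account
# cannot silently change the code behind a mutable tag like v4.
steps:
  - uses: some-org/some-action@0000000000000000000000000000000000000000  # hypothetical pin; use the real release SHA
  - name: Verify registry signatures for installed npm packages
    run: npm audit signatures  # checks registry signatures/attestations; needs a recent npm
```

The same pin-and-review discipline applies to npm dependencies via exact versions plus the lock file's `integrity` hashes.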

Follow-up: Design a system that detects and prevents maintainer account compromises.
