Your platform has 200+ internal packages; install time is 45 seconds. Transitive dependency conflict: A needs B==1.0, C needs B==2.0. How do you resolve it without a massive refactor?
Dependency resolution is NP-complete in general. Solutions: (1) use poetry/pipenv for locked dependencies—a lock file pins all versions and ensures reproducibility; commit it to VCS. (2) identify the conflict: `pip install A==X C==Y --verbose` shows the candidates and why each is rejected. (3) negotiate a version: check whether some B (e.g. B==1.5) satisfies both, and relax overly strict pins (`B==1.0` -> `B>=1.0,<2.0`). (4) monorepo strategy: enforce one compatible version of each internal package. (5) profile the install: network-bound or CPU-bound? Usually network; use a fast mirror or local cache. Test: CI installs from the lock file across Python versions (tox). For the 45s install: if network is 80% of it, a local cache helps most.
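The "negotiate a version" step can be sketched in plain Python: given constraint sets for B (hypothetical here—real resolution should be left to pip/poetry), check whether one candidate version satisfies all of them. This assumes A's strict pin has been relaxed to a range.

```python
def parse(v):
    """Turn '1.5' into a comparable tuple (1, 5)."""
    return tuple(int(part) for part in v.split("."))

def satisfies(version, spec):
    """spec is a list of (op, bound) pairs, e.g. [('>=', '1.0'), ('<', '2.0')]."""
    ops = {
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
        ">":  lambda a, b: a > b,
        "<":  lambda a, b: a < b,
        "==": lambda a, b: a == b,
    }
    return all(ops[op](parse(version), parse(bound)) for op, bound in spec)

# Hypothetical constraints: A needs B>=1.0,<2.0 after relaxing its pin,
# C needs B>=1.5.
a_needs_b = [(">=", "1.0"), ("<", "2.0")]
c_needs_b = [(">=", "1.5")]

candidate = "1.5"
print(satisfies(candidate, a_needs_b) and satisfies(candidate, c_needs_b))
# True: B==1.5 could satisfy both
```

This ignores pre-releases and local version segments, which real resolvers handle; it only illustrates the intersection check you do mentally when proposing a compromise version.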
Follow-up: How do you set up a private PyPI server for internal packages?
After updating to Python 3.12, 20 transitive dependencies fail: "no module named X". Version pins are in requirements.txt. How do you debug?
Python version changes break compatibility. Solutions: (1) `pip install --verbose` surfaces build failures during install; runtime "no module named X" errors need a separate import smoke test. (2) check each package's PyPI page/classifiers for 3.12 support. (3) audit requirements.txt—packages without version constraints pull whatever resolves, which may be incompatible. Add bounds: `package>=1.0,<2.0`. (4) run CI on multiple Python versions (3.11, 3.12) to catch this early. (5) update packages: newer releases usually support new Python versions. (6) `python -W error::DeprecationWarning` turns deprecation warnings into errors so upcoming breakage surfaces. Identify the top 5 problematic packages; update or replace them.
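A quick import smoke test for the new interpreter might look like the sketch below. Caveat: distribution names in requirements.txt often differ from import names (e.g. `Pillow` vs `PIL`), so a mapping may be needed; the names passed here are just examples.

```python
import importlib.util

def check_imports(module_names):
    """Return the subset of import names that cannot be found on this
    interpreter -- quick triage for 'no module named X' after a
    Python upgrade."""
    missing = []
    for name in module_names:
        if importlib.util.find_spec(name) is None:
            missing.append(name)
    return missing

# 'json' ships with Python; the other name is deliberately made up.
print(check_imports(["json", "definitely_missing_pkg"]))
# ['definitely_missing_pkg']
```

Run this under 3.12 in CI to get the failing-module list in one shot instead of hitting ImportErrors one at a time.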
Follow-up: How do you automate Python version compatibility testing in CI?
A dependency's security update changed an internal API your code relied on. After updating: "AttributeError: module has no attribute X". How do you fix it?
Using internal APIs is risky—they are not stable. Solutions: (1) migrate to a public API (ask the maintainer for alternatives). (2) pin the old version (`package==1.0`) as a stopgap—but this forgoes the security fix, so only temporarily. (3) wrapper layer: abstract over both old and new APIs behind one name. (4) polyfill: reimplement the small piece you need in your own code. (5) submit a PR upstream to expose a public equivalent. Best: use only public APIs. If forced to use internals, isolate them in one module. Search the codebase for underscore-prefixed imports, e.g. `grep -rn "from .*\._" --include="*.py" .`, and flag every hit as tech debt.
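The wrapper-layer idea can be sketched as a small resolver: try (module, attribute) candidates in order so the rest of the codebase depends on one stable name while the dependency migrates. The "future public location" below is made up; `json.loads` stands in as the working fallback so the sketch runs.

```python
import importlib

def resolve_api(candidates):
    """Try (module_name, attribute) pairs in order; return the first
    that imports. Isolates unstable/internal API locations in one place."""
    for module_name, attr in candidates:
        try:
            module = importlib.import_module(module_name)
            return getattr(module, attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"none of {candidates} is available")

# Hypothetical migration: prefer the new public location, fall back to
# the old one. 'json_new_public' does not exist, so the fallback wins.
loads = resolve_api([
    ("json_new_public", "loads"),
    ("json", "loads"),
])
print(loads('{"a": 1}'))  # {'a': 1}
```

Everything else imports `loads` from this module, so when the dependency finishes its rename, only the candidate list changes.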
Follow-up: How do you set up static analysis to catch internal API usage automatically?
Using pip-compile to lock dependencies. Critical security patch released for transitive dependency. How do you update just that package without recompiling everything?
pip-compile locks everything, but it can upgrade a single package: `pip-compile --upgrade-package package requirements.in` re-resolves only that pin (plus anything it forces), leaving other pins intact. Other options: (1) re-run plain `pip-compile requirements.in`—it respects existing pins where possible but may move more than you want. (2) manually edit requirements.txt to bump the one pin (risky—skips the resolver, may introduce conflicts, and hash lines must also change). (3) with poetry: `poetry update package` updates the single package and re-resolves. (4) for critical patches, deploy immediately: `pip install --upgrade package==new_version` (breaks reproducibility, but security wins), then fix the lock file. (5) add CI that watches security advisories and opens auto-update PRs. Measure: time from advisory to deployed patch—should be <1 hour for critical issues.
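The manual-edit emergency path can be automated as a one-pin rewrite. This is a sketch: it ignores the hash and comment lines real pip-compile output carries, and `pip-compile --upgrade-package` remains the safer route because it re-checks surrounding constraints.

```python
import re

def bump_pin(requirements_text, package, new_version):
    """Rewrite 'package==old' to 'package==new' in a requirements.txt,
    leaving every other pin untouched."""
    pattern = re.compile(rf"^{re.escape(package)}==\S+", re.MULTILINE)
    return pattern.sub(f"{package}=={new_version}", requirements_text)

locked = "certifi==2023.7.22\nrequests==2.31.0\nurllib3==1.26.15\n"
print(bump_pin(locked, "urllib3", "1.26.18"))
# certifi and requests pins are unchanged; only urllib3 moves
```

Pair this with a CI job that re-runs the full resolver afterward, so the emergency bump is reconciled with the lock file instead of drifting.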
Follow-up: How do you automate security patch detection and safe updating?
A package set has a circular dependency: A depends on B, B depends on C, C depends on A. pip installs it silently. Should it fail? How do you avoid building circular deps?
Circular dependencies are usually design mistakes. pip can install cyclic distribution graphs (installing doesn't import the code), so it won't fail—import time is where cycles bite. Solutions: (1) break the cycle: restructure so dependencies flow one way (A->B->C, no edge back). (2) extract a shared module D that A, B, and C all depend on. (3) lazy imports: defer the import to function scope so it runs after module init. (4) registry pattern for plugins. (5) test: `python -c "import A"` in CI—circular imports often (not always) fail at import time, depending on what runs at module level. Use `pipdeptree --reverse --packages package` to visualize reverse dependencies and spot cycles. Best: avoid cycles entirely; if unavoidable, lazy imports plus a registry pattern mitigate them.
Follow-up: How do you implement a plugin system that avoids circular dependency imports?
Installing package with many C extensions (numpy, scipy) is slow: 30 seconds. Pre-built wheels exist but not used. How do you ensure wheels preferred over source builds?
Source builds require compilation; wheels are pre-compiled. Solutions: (1) check the pip version: old pip doesn't recognize newer manylinux wheel tags (e.g. manylinux2014 needs pip 19.3+) and falls back to sdists—upgrade pip first. (2) `pip install --only-binary :all:` forces wheels and fails if none is available. (3) verify a wheel exists: `pip download --only-binary :all: package`. (4) numpy/scipy publish official wheels on PyPI for common platforms. (5) if you must build, keep pip's cache enabled (avoid `--no-cache-dir`) so the built wheel is reused on the next install. Measure: time with/without `--only-binary`—wheels are often ~10x faster. For internal packages: publish wheels via CI/CD. Test: `pip install --verbose package 2>&1 | grep -i wheel` shows whether a cached or downloaded wheel was used.
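A small audit helper can classify what `pip download` fetched: `.whl` files install fast, while sdists (`.tar.gz`/`.zip`) will trigger a local build. The filenames below are examples; the second package is hypothetical.

```python
def classify_artifacts(filenames):
    """Split downloaded artifact filenames into pre-built wheels and
    source distributions that would be compiled locally."""
    wheels = [f for f in filenames if f.endswith(".whl")]
    sdists = [f for f in filenames if f.endswith((".tar.gz", ".zip"))]
    return wheels, sdists

downloaded = [
    "numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.whl",
    "somepkg-0.3.0.tar.gz",  # hypothetical sdist-only internal package
]
wheels, sdists = classify_artifacts(downloaded)
print(wheels)
print(sdists)  # anything here means slow, compiler-dependent installs
```

Pointing this at your download cache in CI gives a quick list of which internal packages still need wheels published.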
Follow-up: How do you build and distribute wheels for internal C extension packages?
The test environment requires packages production doesn't (pytest, mock). After installing them for tests, the deploy ships these extra packages to production. How do you separate dev and prod dependencies?
Mixing dev/prod dependencies causes bloated deployments. Solutions: (1) split requirements files: requirements.txt (prod) and requirements-dev.txt starting with `-r requirements.txt` plus test tools; install only requirements.txt in production. (2) poetry: `poetry install --only main` (prod), `poetry install` (dev+prod). (3) extras: `extras_require={'dev': [...]}` in setup.py; install `package[dev]` for development. (4) CI verification: `pip list | grep pytest` must come back empty in the production image. (5) container layers: dev tools only in the dev image, prod-only deps in the production image. Measure: compare image sizes with and without dev deps. Test: verify pytest is absent in production.
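The CI verification step can run inside the production image as a deploy-time guard. Sketch below; the placeholder names always come back clean here, but in a real prod image you would pass the actual dev-tool import names (`pytest`, `mock`) and fail the deploy if any are importable.

```python
import importlib.util

def dev_packages_present(forbidden):
    """Return which dev-only import names are importable in this
    environment. Run inside the production image; a non-empty result
    means dev dependencies leaked into the deploy."""
    return [name for name in forbidden
            if importlib.util.find_spec(name) is not None]

# Placeholder names so the sketch is clean anywhere it runs;
# in production you'd check ["pytest", "mock"] instead.
leaked = dev_packages_present(["pytest_fake_name", "mock_fake_name"])
print(leaked)  # [] means the prod image is clean
```

Wire it into the image build: `python guard.py || exit 1` (script name hypothetical) as the last step, so a leak fails the pipeline instead of shipping.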
Follow-up: How do you implement container layers to minimize production image size?
You maintain package used by 100+ internal teams. Releasing breaking change (API rename) requires all teams to update. Coordinating is difficult. How do you avoid breaking changes?
Breaking changes disrupt downstream teams. Solutions: (1) deprecation warnings: keep the old name as an alias that warns: `warnings.warn("Use new_api", DeprecationWarning)`. Teams get 2-3 releases to migrate. (2) semantic versioning: breaking changes only in majors (1.0->2.0), new features in minors (1.0->1.1); teams opt into majors. (3) feature flags: support both APIs, toggle via config. (4) migration guide: document the change with before/after examples. (5) staged rollout: 10% of teams first, gather feedback, then everyone. Measure: adoption rate of the new API. With 100+ consumers, breaking changes should be rare; a 6+ month deprecation window is standard. Test: CI exercises the deprecated APIs to ensure they still work.
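The deprecation-alias approach is a few lines of stdlib code. The function names are illustrative; the `catch_warnings` block at the end is the kind of CI test the answer mentions, asserting the alias still works and still warns.

```python
import warnings

def new_api(x):
    """The renamed API."""
    return x * 2

def old_api(x):
    """Deprecated alias kept for 2-3 releases so downstream teams
    can migrate on their own schedule."""
    warnings.warn("old_api is deprecated; use new_api",
                  DeprecationWarning, stacklevel=2)
    return new_api(x)

# CI-style check: the deprecated alias still behaves and still warns.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)

print(result)                       # 42
print(caught[0].category.__name__)  # DeprecationWarning
```

`stacklevel=2` makes the warning point at the caller's line, so teams see where in *their* code to migrate rather than a frame inside your package.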
Follow-up: How do you implement a deprecation system that tracks warnings and guides teams to migrate?