Fix sf_describe_report_type hang via two-layer timeout enforcement (session-layer 300s read backstop in RefreshingSalesforceSession.request + tool-layer 120s asyncio.wait_for on sf_describe_report_type)

Decision

Fix the sf_describe_report_type hang via two-layer timeout enforcement (session-layer 300s read backstop in RefreshingSalesforceSession.request + tool-layer 120s asyncio.wait_for on sf_describe_report_type) in the same v7.1.1 in-place patch; cascade Gotcha #20 into the Salesforce SKILL.md, a CHANGELOG entry, and a contract.yaml enforcement_layers annotation.

Rationale

EA reported a Claude session hang on sf_describe_report_type(report_type_key=Booking_With_or_W_o_Loan__c, org_alias=production). Source inspection at /opt/mcp-servers/salesforce-mcp/src/salesforce_mcp/server.py:3151-3164 confirmed the asyncio.to_thread(conn.restful, …) call had no wait_for wrapper. Tracing through core/api_client.py:79-95 revealed that RefreshingSalesforceSession.request() never passed timeout= to the underlying OAuth2Session.request, so requests defaulted to timeout=None (infinite wait). contract.yaml G4 documented a 600s default, but that was contract-vs-code drift: the same defect affected every conn.restful/conn.query call across all 684 tools, not just describe_report_type. The two-layer fix covers the entire hang class with defense-in-depth, remains overridable per-call for legitimate bulk/deploy operations, and ships as a same-tag v7.1.1 patch (no API surface change). Confidence: 0.85

Alternatives Rejected

- Narrow fix (wrap only describe_report_type in wait_for): leaves the 683 sibling tools vulnerable to the same hang class, violating the zero-compromise root-cause principle.
- Session-layer backstop only: the 300s read backstop is too generous for interactive describe/list reads, which need the tighter 120s tool-layer cap.

Outcome

Verified live: container rebuilt; /health green with 684 tools and 2 orgs connected; docker exec grep confirmed both code changes present in the running image; drift-check clean; mirror parity OK; all MEMORY.md links resolve.

Pending