Problem
Bare requests.get() and requests.post() calls are fine for scripts, but they are wasteful in production services that call the same hosts repeatedly. Each new connection pays for the DNS lookup, TCP handshake, and TLS negotiation all over again.
In the source note, the pattern appeared across roughly 75 outbound HTTP calls in 28 files. The fix was not a new service or a cache. It was reusing connections properly.
Mechanism
A requests.Session owns connection pools. When the same warm process calls the same host again, the session can reuse an existing TCP/TLS connection instead of starting from zero.
```python
import requests
from requests.adapters import HTTPAdapter

# One shared session, created once at module level.
http_session = requests.Session()

# Pool sizing: keep pools for up to 10 distinct hosts,
# with up to 10 connections per host.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=10)
http_session.mount("http://", adapter)
http_session.mount("https://", adapter)

# Repeated calls to the same host reuse a pooled connection.
response = http_session.post("https://api.example.com/sign", json={"path": path})
```

Fix
Create one shared session at module level, then route repeated outbound calls through it. In serverless environments, warm instances can keep that session alive across invocations.
Benchmark a small call where handshake overhead is a meaningful part of total time. End-to-end flows often hide this improvement behind downloads, database work, or model calls.
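One way to isolate the effect is to benchmark against a throwaway local server, as in the sketch below. Note that loopback skips DNS and TLS, so only the TCP handshake saving shows up; against a remote HTTPS host the gap is larger. The server and helper names here are assumptions for illustration.

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

import requests

class PingHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so pooled connections can be reused

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep benchmark output clean

server = ThreadingHTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def avg_ms(call, n=25):
    """Average wall-clock time per call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n):
        call()
    return (time.perf_counter() - start) / n * 1000

session = requests.Session()
bare_ms = avg_ms(lambda: requests.get(url, timeout=5))      # new connection each call
pooled_ms = avg_ms(lambda: session.get(url, timeout=5))     # reuses pooled connection
print(f"bare: {bare_ms:.3f} ms/call  pooled: {pooled_ms:.3f} ms/call")
server.shutdown()
```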
What changed in practice
| Metric | Before | After | Delta |
|---|---|---|---|
| Average | 107 ms | 85 ms | -22 ms (-21%) |
| p50 | 106 ms | 85 ms | -21 ms (-20%) |
| Minimum | 90 ms | 40 ms | -50 ms (-56%) |
Production lesson
Connection pooling is table stakes for services. The gain may look small per call, but repeated 20 ms savings across many outbound calls compound quickly.
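To make the compounding concrete, a back-of-the-envelope sketch using the ~20 ms per-call saving from the table above; the fan-out and traffic numbers are assumptions, not from the source.

```python
saving_ms = 20              # per-call saving observed in the table above
calls_per_request = 5       # assumed outbound fan-out per inbound request
requests_per_day = 100_000  # assumed daily traffic

daily_saving_s = saving_ms * calls_per_request * requests_per_day / 1000
print(f"{daily_saving_s:.0f} seconds of handshake overhead avoided per day")
```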