Asked By – Kirill Zaitsev
The Python requests module is simple and elegant but one thing bugs me.
It is possible to get a requests.exceptions.ConnectionError with a message like:
Max retries exceeded with url: ...
This implies that requests can attempt to access the data several times. But there is not a single mention of this possibility anywhere in the docs. Looking at the source code I didn’t find any place where I could alter the default (presumably 0) value.
So is it possible to somehow set the maximum number of retries for requests?
Answer: Can I set max_retries for requests.request?
It is the underlying urllib3 library that does the retrying. To set a different maximum retry count, use alternative transport adapters:
import requests
from requests.adapters import HTTPAdapter

s = requests.Session()
s.mount('http://stackoverflow.com', HTTPAdapter(max_retries=5))
The max_retries argument takes an integer or a Retry() object; the latter gives you fine-grained control over what kinds of failures are retried (an integer value is turned into a Retry() instance which only handles connection failures; errors after a connection is made are by default not handled, as these could lead to side effects).
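For example, a Retry() object can be mounted the same way to control backoff and which HTTP status codes are retried. This is a minimal sketch; the parameter values and the https:// prefix used here are illustrative, not defaults:

```python
import requests
from requests.adapters import HTTPAdapter
# Retry is provided by urllib3, which requests depends on
from urllib3.util.retry import Retry

retries = Retry(
    total=5,                            # at most 5 retries overall
    backoff_factor=0.5,                 # exponential backoff between attempts
    status_forcelist=[502, 503, 504],   # also retry on these HTTP statuses
)

session = requests.Session()
# mount the adapter for every https:// URL requested via this session
session.mount('https://', HTTPAdapter(max_retries=retries))
```

Any request made with this session to an https:// URL will then use the configured retry policy.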
Old answer, predating the release of requests 1.2.1:
The requests library doesn’t really make this configurable, nor does it intend to (see this pull request). Currently (requests 1.1), the retries count is set to 0. If you really want to set it to a higher value, you’ll have to set this globally:
import requests

requests.adapters.DEFAULT_RETRIES = 5
This constant is not documented; use it at your own peril as future releases could change how this is handled.
Update: and this did change; in version 1.2.1 the option to set the max_retries parameter on the HTTPAdapter() class was added, so you now have to use alternative transport adapters, see above. The monkey-patch approach no longer works, unless you also patch the HTTPAdapter.__init__() defaults (very much not recommended).
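As a quick check of the current behaviour, you can confirm that an integer passed to HTTPAdapter(max_retries=...) is wrapped in a Retry instance internally (a sketch, observed with recent requests/urllib3 versions):

```python
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

adapter = HTTPAdapter(max_retries=3)
# the plain integer is converted to a Retry object on the adapter
print(isinstance(adapter.max_retries, Retry))  # → True
print(adapter.max_retries.total)               # → 3
```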