Rate Limiting
Rate limits, timeouts, and retry mechanisms for the Stream API.
Different endpoints of the Stream API are subject to different rate limits. Rate limits are calculated based on the client's authentication token and URL path of the accessed endpoint.
As the following table shows, the products and batches endpoints are rate limited per authentication token and per the unique stream ID in the URL. This means each stream is rate limited independently.
| Endpoint | Description | Limit |
|---|---|---|
| /streams/{streamId}/products | Products endpoint for pushing data | 30 requests per second per {streamId} |
| /streams/{streamId}/batches | Batches endpoint for reading the processing status of a batch | 100 requests per second per {streamId} |
| /* | All other endpoints (the limit applies to each endpoint individually) | 5 requests per second |
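To stay within a per-stream limit such as 30 requests per second, you can throttle requests on the client side before they are sent. Here is a minimal sketch in Python; the `Throttle` class and its names are illustrative, not part of the Stream API:

```python
import time


class Throttle:
    """Minimal client-side throttle: spaces out requests so a
    per-second limit (e.g., 30 requests per second per stream)
    is never exceeded."""

    def __init__(self, max_per_second: int):
        self.min_interval = 1.0 / max_per_second
        self.last_request = 0.0

    def wait(self) -> None:
        """Sleep just long enough to respect the configured rate."""
        now = time.monotonic()
        elapsed = now - self.last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request = time.monotonic()


# One throttle per stream, since each stream is limited independently.
products_throttle = Throttle(max_per_second=30)
```

Call `products_throttle.wait()` before each request to the products endpoint of that stream.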
We recommend implementing a retry mechanism with exponential backoff. This ensures your systems keep operating correctly when the Stream API rate-limits them.
Rate limit HTTP headers
Every response from the Stream API includes headers showing your current rate limits:
$ curl -I https://stream-api.productsup.com
HTTP/2 200
ratelimit-limit: 5
ratelimit-observed: 1
ratelimit-remaining: 4
ratelimit-reset: 1651174223
| Name | Description |
|---|---|
| ratelimit-limit | The maximum number of requests allowed per second. |
| ratelimit-observed | The number of requests made in the current rate limit window. |
| ratelimit-remaining | The number of requests remaining in the current rate limit window. |
| ratelimit-reset | The time at which the current rate limit window resets, in Unix time. |
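You can use these headers to decide how long to wait before the next request. A sketch in Python, assuming the headers have been collected into a dictionary (the function name is illustrative):

```python
import time


def seconds_until_reset(headers: dict) -> float:
    """Given the rate limit response headers, return how long to wait
    before sending the next request. Returns 0 if requests remain."""
    if int(headers.get("ratelimit-remaining", 1)) > 0:
        return 0.0
    # ratelimit-reset is a Unix timestamp; never wait a negative amount.
    return max(0.0, int(headers["ratelimit-reset"]) - time.time())


# With the example headers above, 4 requests remain, so there is no wait.
headers = {
    "ratelimit-limit": "5",
    "ratelimit-observed": "1",
    "ratelimit-remaining": "4",
    "ratelimit-reset": "1651174223",
}
```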
When you exceed the rate limit, an error response returns:
HTTP/2 429
ratelimit-limit: 5
ratelimit-observed: 5
ratelimit-remaining: 0
ratelimit-reset: 1651174223
{"errors":[{"status":"429","title":"Too Many Requests"}]}Timeouts
If the Stream API takes more than 10 seconds to process an API request, it terminates the request, and you receive a timeout response with a Server Error message.
Productsup reserves the right to change the timeout window at any time to protect the API's speed and reliability.
The only exception to the default timeout is the data upload endpoint, which isn't subject to this timeout.
Retry and backoff mechanisms
We recommend implementing retries for the following response status codes:
- 5xx Server Errors.
- 249 Custom Error.
- 408 Request Timeout.
- 429 Too Many Requests. Use the rate limit response headers to determine when to retry for this status code.
Key implementation points
- Implement at least five retry attempts per originally failed request.
Reasons to use exponential backoff
Use exponential backoff to:
- Avoid overloading the platform during error periods.
- Increase the likelihood of converting failed requests into successful ones.
- Handle error surges more effectively than a simple fixed-delay mechanism, such as retrying every second.
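The points above can be combined into a sketch like the following Python example. Names such as `call_with_retries` are illustrative, and `send` stands in for whatever performs the actual HTTP request:

```python
import random
import time

# Status codes listed above, plus any 5xx Server Error.
RETRY_STATUSES = {249, 408, 429}


def should_retry(status: int) -> bool:
    """Retry on 249, 408, 429, and all 5xx Server Errors."""
    return status in RETRY_STATUSES or 500 <= status < 600


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: the 1st retry waits up to
    1 s, the 2nd up to 2 s, the 3rd up to 4 s, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * 2 ** (attempt - 1)))


def call_with_retries(send, max_attempts: int = 5):
    """`send` is any callable returning (status_code, body)."""
    for attempt in range(1, max_attempts + 1):
        status, body = send()
        if not should_retry(status) or attempt == max_attempts:
            return status, body
        time.sleep(backoff_delay(attempt))
```

The jitter spreads retries from many clients over time, so they don't all hit the API again at the same instant after an outage.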
Implementation reference
For a PHP implementation, you can reference the exponentialDelay() method in the Guzzle HTTP client's RetryMiddleware class.
We use this method in our internal integrations with great success.
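For readers not using PHP, a rough equivalent of that doubling delay schedule can be written in a few lines. This is an approximation of the behavior, not a verbatim port of the Guzzle method:

```python
def exponential_delay(retries: int) -> int:
    """Delay in milliseconds: 1000, 2000, 4000, ... doubling with each
    retry, approximating Guzzle's RetryMiddleware::exponentialDelay()."""
    return 1000 * 2 ** (retries - 1)
```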