The SquadAssist API enforces a rate limit on all authenticated endpoints to ensure fair usage and service stability. Understanding how the limit works helps you design integrations that stay within quota and recover gracefully when they do not.

Limit

Dimension         Value
Requests allowed  60
Window            60 seconds (rolling)
Scope             Per API key
Exempt endpoint   GET /health
The window is rolling, not fixed. Each time you make a request, the API counts how many requests have been made by your API key in the 60 seconds immediately preceding that request. If the count is already at 60, the request is rejected.
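
The rolling-window check described above can be sketched as a small client-side simulation. Note that `RollingWindowCounter` and `allow` are illustrative names for this sketch, not part of the SquadAssist API or any SDK:

```python
import time
from collections import deque

class RollingWindowCounter:
    """Simulates the server-side check: at most `limit` requests in the
    trailing `window` seconds preceding each request."""

    def __init__(self, limit=60, window=60.0):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # send times of in-window requests

    def allow(self, now=None):
        """Return True if a request made at `now` would be accepted."""
        if now is None:
            now = time.monotonic()
        # Evict requests older than the 60 seconds preceding `now`.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.limit:
            return False  # the count is already at the limit: rejected
        self.timestamps.append(now)
        return True
```

Because the window rolls, capacity frees up one request at a time as old requests age out, rather than all at once at a fixed boundary.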
GET /health is exempt from rate limiting and does not count toward your quota. You can use it freely for monitoring and liveness checks.

429 response

When you exceed the rate limit, the API returns HTTP 429 Too Many Requests with a JSON body that tells you exactly how long to wait.
{
  "error": "Rate limit exceeded",
  "limit": 60,
  "window_seconds": 60,
  "retry_after_seconds": 14
}
Field                Type     Description
error                string   Always "Rate limit exceeded"
limit                integer  Maximum requests allowed per window (60)
window_seconds       integer  Length of the rolling window in seconds (60)
retry_after_seconds  integer  Seconds until your oldest in-window request expires and capacity is freed
Wait at least retry_after_seconds before retrying. The value is calculated exactly from your request history, so retrying any sooner will only return another 429.
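
The retry_after_seconds value follows directly from the rolling window: it is the time until your oldest in-window request ages out and frees a slot. A minimal sketch of that arithmetic (`seconds_until_capacity` is an illustrative name, not an API function):

```python
def seconds_until_capacity(in_window_timestamps, window=60.0, now=0.0):
    """Seconds until the oldest in-window request expires and frees a slot."""
    oldest = min(in_window_timestamps)
    return max(0.0, oldest + window - now)

# If the oldest of your 60 in-window requests was sent 46 s ago,
# capacity frees in 60 - 46 = 14 s, matching the example response above.
```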

Handling 429s in your code

import time
import requests

def call_with_retry(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 429:
            # Honor the server-calculated wait; fall back to 10 s if the
            # body is missing the field for any reason.
            wait = response.json().get("retry_after_seconds", 10)
            time.sleep(wait)
            continue
        return response
    raise RuntimeError("Rate limit retries exhausted")

Tips for staying within the limit

Batch lookups with POST /query_player. If you need to resolve a list of player IDs, process them sequentially with a short sleep between requests rather than firing them all at once.
Cache player reference data. Responses from GET /player_positions, GET /role_description, and GET /player_info reflect data that changes infrequently. Cache these responses for several hours or until your data changes to avoid redundant requests.
Distribute requests over time. If you run batch analysis jobs (ROI, future transfer value, sportive impact), spread them across the minute window rather than sending all requests in a burst.
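
The spreading advice above can be sketched as a small client-side throttle that spaces requests evenly instead of bursting. `Pacer` is a hypothetical helper for illustration, not part of any SquadAssist SDK:

```python
import time

class Pacer:
    """Client-side throttle: spaces requests at least `window / limit`
    seconds apart so a batch never exceeds the rate limit."""

    def __init__(self, limit=60, window=60.0):
        self.interval = window / limit   # minimum gap between requests
        self.last = None                 # monotonic time of the last request

    def wait(self, now=None, sleep=time.sleep):
        """Block until it is safe to send the next request; return send time."""
        if now is None:
            now = time.monotonic()
        if self.last is not None:
            remaining = self.last + self.interval - now
            if remaining > 0:
                sleep(remaining)
                now += remaining
        self.last = now
        return now
```

Calling pacer.wait() immediately before each request keeps a long batch job at one request per second, comfortably inside the 60-per-minute limit.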