Best practices

Guidelines for building reliable, efficient, and secure integrations with the EVSignals API.

Error Handling

Gracefully handle errors and implement retry logic

Implement exponential backoff for retrying failed requests. Different error types require different handling strategies.

| Error Type | Retry? | Strategy |
|---|---|---|
| 400 Bad Request | No | Fix the request parameters |
| 401 Unauthorized | No | Check/refresh API key |
| 403 Forbidden | No | Upgrade plan or check permissions |
| 429 Rate Limited | Yes | Wait for `retry_after` seconds |
| 500+ Server Error | Yes | Exponential backoff (1s, 2s, 4s...) |
| Network Timeout | Yes | Exponential backoff with jitter |

import evsignals
from evsignals.exceptions import (
    RateLimitError,
    AuthenticationError,
    APIError
)
import time

def fetch_signals_with_retry(client, max_retries=3):
    """Fetch signals with exponential backoff retry."""
    for attempt in range(max_retries):
        try:
            return client.signals.list(
                min_ev=0.02,
                status="active",
                limit=25
            )

        except RateLimitError as e:
            # Wait for the retry_after period
            wait_time = e.retry_after or (2 ** attempt)
            print(f"Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)

        except AuthenticationError:
            # Don't retry auth errors
            raise

        except APIError as e:
            if e.status_code >= 500:
                # Retry server errors with backoff
                time.sleep(2 ** attempt)
            else:
                raise

    raise RuntimeError("Max retries exceeded")
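The error table recommends jitter for network timeouts, but the retry helper above uses plain exponential delays. A minimal "full jitter" delay function could look like this (the function name and defaults are illustrative, not part of the SDK):

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter.

    Returns a random delay in [0, min(cap, base * 2**attempt)],
    which spreads retries out and avoids thundering-herd spikes.
    """
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Call `time.sleep(backoff_delay(attempt))` in place of the fixed `2 ** attempt` sleeps above.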

Rate Limiting

Stay within limits and monitor your usage

Do

  • Monitor X-RateLimit-Remaining header
  • Implement client-side rate limiting
  • Use batch endpoints when available
  • Cache responses to reduce requests
  • Spread requests evenly over time

Don't

  • Ignore rate limit headers
  • Retry immediately after 429
  • Make unnecessary duplicate requests
  • Poll when webhooks are available
  • Burst all requests at once
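Client-side rate limiting can be implemented with a token bucket, which also naturally spreads requests over time instead of bursting. A minimal sketch (this class is illustrative, not part of the evsignals SDK):

```python
import time

class TokenBucket:
    """Client-side rate limiter: allow `rate` requests/second, burst up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self):
        """Consume one token if available; return False if the caller should wait."""
        now = time.monotonic()
        # Refill tokens based on elapsed time, never exceeding capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Before each API call, check `bucket.acquire()` and sleep briefly when it returns False.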

Rate Limit Response Headers

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
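If you inspect HTTP responses directly, a small helper can pull these headers out of any header mapping (the helper name is ours, not an SDK function):

```python
def parse_rate_limit(headers):
    """Parse X-RateLimit-* headers from a response's header mapping.

    Missing headers map to None so callers can handle partial data.
    """
    def _int(name):
        value = headers.get(name)
        return int(value) if value is not None else None

    return {
        "limit": _int("X-RateLimit-Limit"),
        "remaining": _int("X-RateLimit-Remaining"),
        "reset": _int("X-RateLimit-Reset"),
    }
```

When `remaining` approaches zero, pause until the `reset` timestamp rather than waiting for a 429.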

Webhook Best Practices

Build reliable webhook handlers

Webhook Processing Checklist

1. Verify signature before processing
2. Return 200 within 30 seconds
3. Process async if work takes >5s
4. Handle duplicate events (idempotency)
5. Log event IDs for debugging
6. Use HTTPS endpoint only
from flask import Flask, request, jsonify
import hmac
import hashlib
import json

app = Flask(__name__)
WEBHOOK_SECRET = "whsec_your_secret_here"

def verify_signature(payload, signature):
    """Verify webhook signature."""
    expected = hmac.new(
        WEBHOOK_SECRET.encode(),
        payload,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(
        f"sha256={expected}",
        signature
    )

@app.route('/webhooks/evsignals', methods=['POST'])
def handle_webhook():
    # 1. Verify signature FIRST
    signature = request.headers.get('X-EVSignals-Signature')
    if not signature or not verify_signature(request.data, signature):
        return jsonify({'error': 'Invalid signature'}), 401

    # 2. Parse the event
    event = request.get_json()

    # 3. Handle event types
    if event['type'] == 'signal.created':
        signal = event['data']['signal']
        process_new_signal(signal)

    elif event['type'] == 'signal.result':
        update_signal_result(event['data'])

    # 4. Always return 200 quickly
    return jsonify({'received': True}), 200
Idempotency: Webhooks may be delivered more than once. Use the event id field to deduplicate and ensure you don't process the same event twice.
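A minimal in-process deduplication sketch for the pattern described above (production systems would typically use Redis or a database with a TTL; the in-memory set here is only illustrative and is lost on restart):

```python
_seen_event_ids = set()  # illustrative only; use Redis/DB with a TTL in production

def is_duplicate(event_id):
    """Return True if this event was already processed; otherwise record it."""
    if event_id in _seen_event_ids:
        return True
    _seen_event_ids.add(event_id)
    return False
```

In the handler, check `is_duplicate(event['id'])` right after parsing and return 200 immediately for repeats, so redelivered events are acknowledged without being reprocessed.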

Caching Strategies

Reduce API calls and improve performance

| Data Type | Recommended TTL | Notes |
|---|---|---|
| Live Signals | Do not cache | Use webhooks instead |
| Quotes | 15-60 seconds | Depends on freshness needs |
| Historical Data | 24 hours | Data doesn't change |
| Model List | 1 hour | Rarely updated |
| Account Info | 5 minutes | Usage stats update frequently |

import json
from datetime import datetime, timedelta

import redis

# Option 1: Simple in-memory cache with TTL
class CachedClient:
    def __init__(self, client):
        self.client = client
        self.cache = {}
        self.cache_ttl = timedelta(seconds=60)

    def get_quote(self, symbol):
        cache_key = f"quote:{symbol}"

        # Check cache
        if cache_key in self.cache:
            data, expires = self.cache[cache_key]
            if datetime.now() < expires:
                return data

        # Fetch fresh data
        data = self.client.quotes.get(symbol)
        self.cache[cache_key] = (
            data,
            datetime.now() + self.cache_ttl
        )
        return data

# Option 2: Redis for distributed caching
redis_client = redis.Redis()

def get_quote_cached(client, symbol, ttl=60):
    cache_key = f"evsignals:quote:{symbol}"

    # Try cache first
    cached = redis_client.get(cache_key)
    if cached:
        return json.loads(cached)

    # Fetch and cache
    data = client.quotes.get(symbol)
    redis_client.setex(
        cache_key,
        ttl,
        json.dumps(data)
    )
    return data

Batch Requests

Minimize API calls by batching requests

# BAD: Making individual requests
markets = ["kalshi-fed-rate", "poly-election", "dk-nfl-mvp", "fd-superbowl", "pin-nba-spread"]
market_details = []
for market in markets:
    market_details.append(client.markets.retrieve(market))  # Sequential requests

# BETTER: Parallelize independent requests
import asyncio
import evsignals

async def get_multiple_markets(market_ids):
    async with evsignals.AsyncClient(api_key="...") as client:
        tasks = [
            client.markets.retrieve(market_id)
            for market_id in market_ids
        ]
        return await asyncio.gather(*tasks)

Practical rule

Prefer one filtered list request when it gives you the data shape you need. When you truly need many independent resources, parallelize requests with sensible concurrency limits instead of firing them sequentially.
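One way to cap concurrency in the asyncio pattern above is a semaphore wrapper; this is a sketch of the general technique, not an SDK feature:

```python
import asyncio

async def gather_limited(coros, limit=5):
    """Run coroutines concurrently with at most `limit` in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def _run(coro):
        async with sem:  # blocks when `limit` tasks are already running
            return await coro

    # gather preserves input order in its results
    return await asyncio.gather(*(_run(c) for c in coros))
```

Replacing the bare `asyncio.gather(*tasks)` with `gather_limited(tasks, limit=5)` keeps parallelism without bursting all requests at once.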

Security

Keep your integration secure

API Key Management

  • Store API keys in environment variables, never in code
  • Use separate keys for development and production
  • Rotate keys periodically (every 90 days recommended)
  • Use scoped keys with minimal permissions when possible
  • Revoke keys immediately if compromised
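A typical pattern for loading the key from the environment, failing fast if it is missing (the `EVSIGNALS_API_KEY` variable name is an assumption, not documented by the API):

```python
import os

def load_api_key():
    """Read the API key from the environment; raise early if it is not set."""
    api_key = os.environ.get("EVSIGNALS_API_KEY")
    if not api_key:
        raise RuntimeError("EVSIGNALS_API_KEY is not set")
    return api_key
```

Pass the result to the client constructor instead of hard-coding the key, and keep separate values of the variable in development and production environments.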

Transport Security

  • Always use HTTPS (TLS 1.2+)
  • Verify SSL certificates (don't disable verification)
  • Pin certificates for mobile apps (optional but recommended)

Webhook Security

  • Always verify webhook signatures before processing
  • Use constant-time comparison for signatures
  • Don't expose your webhook secret in logs or errors
  • Reject requests with missing or invalid signatures
Never expose API keys: Don't include API keys in client-side code, git repositories, or logs. If a key is exposed, rotate it immediately in your dashboard.

Integration Checklist

Before Going Live

  • Implement exponential backoff retry
  • Handle all error types appropriately
  • Set up webhook signature verification
  • Implement response caching
  • Use batch endpoints where available

Monitoring

  • Log request IDs for debugging
  • Monitor rate limit usage
  • Set up alerts for error spikes
  • Track webhook delivery failures
  • Review API usage weekly