The Sidecar service can be deployed in two primary ways:
  1. Sidecar pattern – Each application instance has a Sidecar container in the same network namespace (for example, the same Kubernetes Pod), and sends gRPC requests to it locally.
  2. Standalone service – The Sidecar runs as a shared service that multiple applications can access over an exposed port within your private network.
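To make the two topologies concrete, here is a minimal TypeScript sketch that resolves the Sidecar address from the environment and opens a gRPC channel to it. The SIDECAR_HOST and SIDECAR_PORT variable names and the default port are illustrative assumptions, not documented configuration:

```typescript
// Minimal sketch: reach the Sidecar over gRPC from a Node.js app.
// SIDECAR_HOST / SIDECAR_PORT are hypothetical names used for illustration.
import * as grpc from '@grpc/grpc-js';

// Sidecar pattern: the container shares the Pod's network namespace,
// so "localhost" is enough. Standalone service: point SIDECAR_HOST at
// the shared service's DNS name inside your private network.
const host = process.env.SIDECAR_HOST ?? 'localhost';
const port = process.env.SIDECAR_PORT ?? '8443';

const client = new grpc.Client(
  `${host}:${port}`,
  grpc.credentials.createInsecure(), // private network only; never expose publicly
);

// Fail fast if the Sidecar is unreachable at startup.
const deadline = new Date(Date.now() + 5_000);
client.waitForReady(deadline, (err) => {
  if (err) {
    console.error('Sidecar is not reachable:', err.message);
    process.exit(1);
  }
  console.log(`Connected to Sidecar at ${host}:${port}`);
});
```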
The service can be scaled horizontally to support a higher volume of requests, but keep in mind that if the in-memory cache is used, each instance maintains its own cache, so the cache-miss ratio will be higher. On a cache miss, the service fetches the data over the network directly from the Stigg API and updates the cache to serve future requests. The Sidecar is not intended to be exposed to the internet or accessed from a browser.
Sidecar running with in-memory cache

The Sidecar service can be deployed in the same network namespace (for example, the same Kubernetes Pod) as the main application. When entitlement data is fetched from the Stigg API, it is stored in the configured cache:
  • In-memory cache (default): managed directly by the Sidecar
  • Redis cache: managed together with a Persistent Cache Service
Caching behavior:
  • The Sidecar uses an in-memory LRU cache capped by size (a minimal sketch of this behavior follows this list).
  • You can control the cache size with the CACHE_MAX_SIZE_BYTES environment variable.
  • If not set, the Sidecar allocates up to 50% of the total available memory for the cache.
  • When scaled horizontally with in-memory cache only, you may see a higher cache-miss ratio because each instance maintains its own cache.
  • On a cache miss, the Sidecar fetches data from the Stigg API and updates the cache before serving future requests.
  • Only entitlements and current usage are cached; subscriptions are not.
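As a rough illustration of the behavior above, here is a self-contained TypeScript sketch of a byte-capped, read-through LRU cache. The CACHE_MAX_SIZE_BYTES variable and the 50%-of-memory default come from the description above; the key format, the fetch function, and the size accounting are simplifying assumptions, not the Sidecar's actual implementation:

```typescript
// Sketch of a byte-capped LRU cache with read-through on miss.
// This approximates the behavior described above; it is not the
// Sidecar's actual implementation.
import * as os from 'os';

type Entry = { value: string; bytes: number };

class ByteCappedLru {
  // Map preserves insertion order, which we use as LRU order.
  private entries = new Map<string, Entry>();
  private usedBytes = 0;

  constructor(private maxBytes: number) {}

  get(key: string): string | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    // Refresh recency: move the key to the end of the map.
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: string): void {
    const bytes = Buffer.byteLength(value, 'utf8');
    if (bytes > this.maxBytes) return; // value larger than the whole cache: skip
    const existing = this.entries.get(key);
    if (existing) {
      this.usedBytes -= existing.bytes;
      this.entries.delete(key);
    }
    // Evict least-recently-used entries until the new value fits.
    while (this.usedBytes + bytes > this.maxBytes && this.entries.size > 0) {
      const [oldestKey, oldest] = this.entries.entries().next().value!;
      this.entries.delete(oldestKey);
      this.usedBytes -= oldest.bytes;
    }
    this.entries.set(key, { value, bytes });
    this.usedBytes += bytes;
  }
}

// Cap from CACHE_MAX_SIZE_BYTES, defaulting to 50% of total memory,
// mirroring the rule described above.
const maxBytes = process.env.CACHE_MAX_SIZE_BYTES
  ? Number(process.env.CACHE_MAX_SIZE_BYTES)
  : Math.floor(os.totalmem() / 2);

const cache = new ByteCappedLru(maxBytes);

// fetchEntitlementsFromApi is a hypothetical placeholder for the
// network call made to the Stigg API on a miss.
declare function fetchEntitlementsFromApi(customerId: string): Promise<string>;

// Read-through on miss: fetch from the Stigg API, then populate the cache.
async function getEntitlements(customerId: string): Promise<string> {
  const key = `entitlements:${customerId}`; // key format is an assumption
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  const fresh = await fetchEntitlementsFromApi(customerId); // cache miss
  cache.set(key, fresh);
  return fresh;
}
```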
If you enable Redis, an instance of the Persistent Cache Service must be deployed to keep Redis up to date with entitlements and usage data, as illustrated below:
Sidecar running with an external (Redis) cache

Redis-backed caching is particularly useful when:
  • You run in serverless environments (for example, AWS Lambda) where processes are frequently terminated
  • You have a large fleet of containers and want fresh entitlements and usage available immediately when new instances start
  • You want cached entitlements and usage to survive restarts and be shared across instances
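To show how a shared Redis cache keeps entitlements available across instances and restarts, here is a simplified TypeScript sketch using the ioredis client. It stands in for the Sidecar (reader) and the Persistent Cache Service (writer) together; the REDIS_URL variable, the key format, and the TTL are illustrative assumptions:

```typescript
// Simplified stand-in for the Sidecar (reader) plus the Persistent
// Cache Service (writer): a read-through cache backed by Redis, so
// entitlements survive restarts and are shared across instances.
import Redis from 'ioredis';

// REDIS_URL is an illustrative name; defaults to a local Redis.
const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

const TTL_SECONDS = 60; // illustrative freshness window

// fetchEntitlementsFromApi is a hypothetical placeholder for the
// network call made to the Stigg API on a miss.
declare function fetchEntitlementsFromApi(customerId: string): Promise<string>;

async function getEntitlements(customerId: string): Promise<string> {
  const key = `entitlements:${customerId}`; // key format is an assumption
  const cached = await redis.get(key);
  if (cached !== null) return cached; // shared hit, even right after a restart
  const fresh = await fetchEntitlementsFromApi(customerId);
  await redis.set(key, fresh, 'EX', TTL_SECONDS); // expire to bound staleness
  return fresh;
}
```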
If you do not configure Redis, the Sidecar works as-is with its default in-memory cache; no extra services are required.