Sidecar
The Stigg Sidecar is a small service that runs alongside your main application, acting as a proxy between the host and the Stigg API. The service provides low-latency entitlement checks, handles caching, and subscribes to real-time entitlement and usage data updates.
The benefits of deploying the Sidecar service:
- Lower CPU and memory footprint for the host application compared to embedding the SDK directly.
- Language-neutral API, defined in Protocol Buffers and accessible over gRPC.
- Support for in-memory cache or external cache (like Redis) for entitlements and usage data.
- Scales together with the main application, or independently when deployed as a standalone service.
Overview
The Sidecar service can be deployed next to the host application/service using the sidecar pattern: each application has a sidecar container beside it to which it sends requests.
Alternatively, it can be deployed as a standalone service and accessed remotely over an exposed port.
The service can be scaled horizontally to support a higher volume of requests, but keep in mind that when the in-memory cache is used, each instance maintains its own cache, so scaling out increases the ratio of cache misses. On a cache miss, the service fetches the data directly from the Stigg API over the network and updates the cache to serve future requests.
The sidecar is not intended to be exposed to the internet or to be accessed from a browser.
Sidecar running with in-memory cache
The Sidecar service can be deployed in the same network namespace (or the same Kubernetes Pod) as the main application.
Once entitlement data is fetched from the Stigg API, it is persisted in the local or external cache.
Local cache invalidation is handled by the Sidecar service. If using an external cache, an instance of the Persistent Cache Service must be deployed as well to handle cache updates, as illustrated below:
Sidecar running with an external (Redis) cache
Running the service
Prerequisites
- Docker
- Redis instance, if a persistent cache is in use
Usage
Log in to AWS ECR:
Run the service:
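The exact registry and image coordinates are not reproduced here, so the following is only a sketch: the account ID, region, image name, port, and the `SERVER_API_KEY` variable name are all placeholders/assumptions to substitute with the values from your Stigg account.

```shell
# Sketch only: <aws-account-id>, the region, and the image path are
# placeholders, not the actual Stigg registry coordinates.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com

# Run the Sidecar, passing the environment's server API key and exposing
# the service port (variable name and port 8443 are assumptions).
docker run -d \
  -e SERVER_API_KEY="<server-api-key>" \
  -p 8443:8443 \
  <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/<sidecar-image>:latest
```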
Available options
Execution of the Sidecar service can be customized using the following environment variables:
- The server API key of the environment.
- The URL of the Stigg API.
- The Edge URL from which entitlements will be accessed.
- Whether to listen for updates using WebSockets.
- The WebSocket API URL.
- Identifier of the environment, used to prefix the keys in Redis. If provided, Redis will be used as the cache layer.
- Redis host.
- Redis port.
- Redis DB identifier.
- Redis username.
- Redis password.
- Time period, in seconds, for which Redis keeps the data before eviction.
- Global entitlement fallback strategy in JSON format.
- Service port.
- Size of the in-memory cache.
- Health endpoint URL.
- Ready endpoint URL.
- Log level; one of: error, warn, info, debug.
- Enables offline mode for local development.
Error handling
When the Sidecar encounters startup errors, such as an invalid API key or network errors when reaching the Stigg API, it continues to run and serves entitlements from:
- Persistent cache (if available)
- Global fallback strategy
Service monitoring
Health
The service exposes two endpoints accessible via HTTP:
GET /livez
Returns 200 if the service is alive.
Healthy response:
GET /readyz
Returns 200 if the service is ready.
Healthy response:
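As a quick manual check, both endpoints can be probed with curl; the port 8443 below is a placeholder for whatever service port you configured.

```shell
# Probe liveness and readiness; each should return HTTP 200 when healthy.
curl -i http://localhost:8443/livez
curl -i http://localhost:8443/readyz
```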
Metrics
The Sidecar exposes a GET /metrics endpoint that returns service metrics in Prometheus format.
This endpoint includes both system-level and Sidecar-specific metrics. These metrics are helpful for monitoring the health and performance of your Sidecar service.
- Total number of SDK initialization errors (e.g., due to misconfiguration or runtime issues).
- Total number of invalid API key errors encountered during SDK operation.
- Total number of network request failures between the SDK and the Stigg backend.
- Total number of Redis client errors.
- Total number of times data was successfully retrieved from the Sidecar cache.
- Total number of times data was not found in the Sidecar cache.
These metrics can be scraped and visualized in any Prometheus-compatible observability stack, such as Grafana.
Persistent caching
For cached data to survive service restarts, or to be shared across multiple instances of the Sidecar service, you can use Redis as the cache layer by providing the REDIS_*
environment variables:
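A sketch of wiring the Sidecar to Redis; the exact `REDIS_*` variable names below are assumptions inferred from the option list, and the host, port, and image are placeholders, so verify them against the documentation for your Sidecar version.

```shell
# Assumed variable names (REDIS_HOST, REDIS_PORT, and an environment-prefix
# variable); host, port, and image are placeholders.
docker run -d \
  -e SERVER_API_KEY="<server-api-key>" \
  -e REDIS_HOST="<redis-host>" \
  -e REDIS_PORT="6379" \
  <sidecar-image>
```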
To keep the cache up-to-date, you will also need to run a persistent cache service in a separate process.
Global fallback strategy
A global fallback strategy can be set by providing the Sidecar service with the ENTITLEMENTS_FALLBACK
environment variable. It expects a JSON object serialized as a string.
For example, the following global fallback configuration JSON structure:
can be formatted using JSON.stringify
and then set as the value of ENTITLEMENTS_FALLBACK
when running the container:
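As a sketch, the feature IDs and the `hasAccess`/`usageLimit` fields below are illustrative assumptions rather than a confirmed schema; the idea is simply that the serialized JSON string becomes the value of ENTITLEMENTS_FALLBACK:

```shell
# Hypothetical fallback: feature IDs and fields are illustrative assumptions.
ENTITLEMENTS_FALLBACK='{"feature-01-demo":{"hasAccess":true},"feature-02-seats":{"hasAccess":true,"usageLimit":5}}'

# Sanity-check that the value is valid JSON before starting the container.
echo "$ENTITLEMENTS_FALLBACK" | python3 -m json.tool

# Then pass it when running the Sidecar, e.g.:
#   docker run -e ENTITLEMENTS_FALLBACK="$ENTITLEMENTS_FALLBACK" <sidecar-image>
```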
Offline mode
During local development or testing, you might want to avoid making network requests to the Stigg API.
To do this, you can run the Sidecar service in offline mode by enabling the offline option. When enabled, API key validation will always succeed, regardless of the key provided.
In offline mode, the Sidecar respects the global fallback strategy, and entitlement evaluations are limited to the values defined as fallback entitlements. All other Sidecar service methods will effectively become no-ops. For example:
Interaction with the Sidecar service
You can interact with a running Sidecar service using the Sidecar SDK:
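Independently of the SDK, since the Sidecar's API is defined in Protocol Buffers and served over gRPC, one way to inspect a running instance is with grpcurl. This assumes gRPC server reflection is enabled on the Sidecar (an assumption), and 8443 is a placeholder for your configured service port.

```shell
# List the gRPC services exposed by the Sidecar (requires server reflection).
grpcurl -plaintext localhost:8443 list
```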