The Stigg Sidecar is a tiny service that runs alongside your main application, acting as a proxy between the host application and the Stigg API. It provides low-latency entitlement checks, handles caching, and subscribes to real-time entitlement and usage data updates.

The benefits of deploying the Sidecar service:

  • Lower CPU and memory footprint for the host application compared to embedding the SDK directly.
  • Language-neutral API defined in Protocol Buffers and accessible over gRPC.
  • Support for an in-memory cache or an external cache (such as Redis) for entitlements and usage data.
  • Scales together with the main application, or independently when deployed as a standalone service.

Overview

The Sidecar service can be deployed together with the host application/service using the sidecar pattern, meaning each application has a sidecar container next to it to which it sends requests.
Alternatively, it can be deployed as a standalone service that is accessed remotely over an exposed port, as sketched below.
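A minimal sketch of the standalone deployment using a shared Docker network (the network and container names below are placeholders; the image and SERVER_API_KEY variable are the same ones used in the Usage section):

# Create a shared network so other containers can reach the Sidecar by name
docker network create app-network

# Run the Sidecar as a standalone, long-running service on that network
docker run -d --name stigg-sidecar --network app-network -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  public.ecr.aws/stigg/sidecar:latest

# Other containers on app-network can now reach it at stigg-sidecar:8443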

The service can be scaled horizontally to support a higher volume of requests, but keep in mind that if the in-memory cache is used, each instance maintains its own cache and the ratio of cache misses will be higher. On a cache miss, the service fetches the data over the network directly from the Stigg API and updates the cache to serve future requests.

The sidecar is not intended to be exposed to the internet or to be accessed from a browser.

Sidecar running with in-memory cache

The Sidecar service can be deployed in the same network namespace (or the same Kubernetes Pod) as the main application.
Once entitlement data is fetched from the Stigg API, it is persisted in the local or external cache.

Local cache invalidation is handled by the Sidecar service. If using an external cache, an instance of the Persistent Cache Service must be deployed as well to handle cache updates, as illustrated below:

Sidecar running with an external (Redis) cache

Running the service

Prerequisites

  • Docker
  • Redis instance, if a persistent cache is in use

Usage

Log in to AWS ECR:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/stigg

Run the service:

docker run -it -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  public.ecr.aws/stigg/sidecar:latest
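
The health and metrics endpoints listen on a separate port (8080 by default), so publish that port as well if you want to verify the container from the host. A quick check, assuming the additional -p 8080:8080 mapping:

docker run -it -p 8443:8443 -p 8080:8080 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  public.ecr.aws/stigg/sidecar:latest

# In a separate terminal, confirm the service reports itself as alive
curl http://localhost:8080/livez
# Expected response: { "status": "UP" }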

Available options

Execution of the Sidecar service can be customized using the following environment variables; a combined example follows the list:

SERVER_API_KEY
string
required

The server API key of the environment.

API_URL
string
default:"https://api.stigg.io"

The URL of the Stigg API.

EDGE_ENABLED
boolean
default:"true"

Whether entitlements will be accessed from Edge.

EDGE_API_URL
string
default:"https://edge.api.stigg.io"

The Edge URL from which entitlements will be accessed.

WS_ENABLED
boolean
default:"true"

Whether to listen for updates using WebSockets.

WS_URL
string
default:"wss://api.stigg.io"

The WebSocket API URL.

REDIS_ENVIRONMENT_PREFIX
string

Identifier of the environment, used to prefix the keys in Redis. If provided, Redis will be used as the cache layer.

REDIS_HOST
string
default:"localhost"

Redis host.

REDIS_PORT
number
default:"6379"

Redis port.

REDIS_DB
number
default:"0"

Redis DB identifier.

REDIS_USERNAME
string

Redis username.

REDIS_PASSWORD
string

Redis password.

REDIS_KEYS_TTL_IN_SECS
number
default:"604800 (7 days)"

Time period for Redis to keep the data before eviction in seconds.

ENTITLEMENTS_FALLBACK
string

Global fallback configuration, provided as a JSON object serialized to a string (see Global fallback strategy below).

PORT
number
default:"8443"

Service port.

CACHE_MAX_SIZE_BYTES
number
default:"50% of total available memory size"

Maximum size of the in-memory cache, in bytes.

HEALTH_ENDPOINT_URL
string
default:"livez"

Health endpoint URL.

READY_ENDPOINT_URL
string
default:"readyz"

Ready endpoint URL.

METRICS_PORT
number
default:"8080"

The port of the health and metrics endpoints.

LOG_LEVEL
string
default:"warn"

Log level; one of: error, warn, info, debug.

OFFLINE
boolean
default:"false"

Enables offline mode for local development.
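
As an illustration, the sketch below combines a few of these options: raising the log level, disabling WebSocket updates, and moving the metrics port (the values are arbitrary examples, not recommendations):

docker run -it -p 8443:8443 -p 9090:9090 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  -e LOG_LEVEL="info" \
  -e WS_ENABLED="false" \
  -e METRICS_PORT="9090" \
  public.ecr.aws/stigg/sidecar:latest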

Error handling

When the Sidecar encounters startup errors, such as an invalid API key or network errors when reaching the Stigg API, it continues to run and serves entitlements from:

  1. The persistent cache (if available)
  2. The global fallback strategy

Service monitoring

Health

The service exposes two health endpoints accessible over HTTP on the metrics port (8080 by default):

GET /livez

Returns 200 if the service is alive.

Healthy response:

{ "status": "UP" }

GET /readyz

Returns 200 if the service is ready.

Healthy response:

{ "status": "UP" }

Metrics

The Sidecar exposes a GET /metrics endpoint that returns service metrics in Prometheus format.

This endpoint includes both system-level and Sidecar-specific metrics. These metrics are helpful for monitoring the health and performance of your Sidecar service.

sidecar_initialization_errors_total
number

Total number of SDK initialization errors (e.g., due to misconfiguration or runtime issues).

sidecar_invalid_api_key_errors_total
number

Total number of invalid API key errors encountered during SDK operation.

sidecar_network_request_errors_total
number

Total number of network request failures between the SDK and the Stigg backend.

sidecar_redis_client_errors_total
number

Total number of Redis client errors.

sidecar_cache_hits_total
number

Total number of times data was successfully retrieved from the Sidecar cache.

sidecar_cache_misses_total
number

Total number of times data was not found in the Sidecar cache.

These metrics can be scraped and visualized in any Prometheus-compatible observability stack like Grafana.
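
For a quick look at the Sidecar-specific counters without a full Prometheus setup, you can scrape the endpoint directly (assuming the metrics port is published as 8080):

curl -s http://localhost:8080/metrics | grep "^sidecar_"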

Persistent caching

For the cached data to survive service restarts, or to be shared across multiple instances of the Sidecar service, you can use Redis as the cache layer by providing the REDIS_* environment variables:

docker run -it -p 8443:8443 \
    -e SERVER_API_KEY="<SERVER_API_KEY>" \
    -e REDIS_ENVIRONMENT_PREFIX="production" \
    -e REDIS_HOST="localhost" \
    public.ecr.aws/stigg/sidecar:latest

To keep the cache up to date, you will also need to run the Persistent Cache Service in a separate process.
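
A rough sketch of what this could look like; the image name and environment variables below are assumptions modeled on the Sidecar image, so check the Persistent Cache Service documentation for the exact values:

# Hypothetical sketch -- verify the image name and options in the
# Persistent Cache Service documentation before using
docker run -d \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  -e REDIS_ENVIRONMENT_PREFIX="production" \
  -e REDIS_HOST="localhost" \
  public.ecr.aws/stigg/persistent-cache-service:latest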

Global fallback strategy

A global fallback strategy can be set by providing the Sidecar service with the ENTITLEMENTS_FALLBACK environment variable. It expects a JSON object serialized as a string.

For example, take the following global fallback configuration object:

{
  'feature-01-templates': {
    hasAccess: true,
    usageLimit: 1000,
  },
  'feature-02-campaigns': {
    hasAccess: true,
    isUnlimited: true
  }
}

It can be serialized with JSON.stringify and set as the value of ENTITLEMENTS_FALLBACK when running the container:

docker run -it -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  -e ENTITLEMENTS_FALLBACK='{"feature-01-templates":{"hasAccess":true,"usageLimit":1000},"feature-02-campaigns":{"hasAccess":true,"isUnlimited":true}}' \
  public.ecr.aws/stigg/sidecar:latest
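
Since a malformed fallback string is easy to miss, it can help to validate the value before passing it to the container, for example with jq:

ENTITLEMENTS_FALLBACK='{"feature-01-templates":{"hasAccess":true,"usageLimit":1000},"feature-02-campaigns":{"hasAccess":true,"isUnlimited":true}}'

# jq exits with a non-zero status if the value is not valid JSON
echo "$ENTITLEMENTS_FALLBACK" | jq . > /dev/null && echo "fallback JSON is valid"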

Offline mode

During local development or testing, you might want to avoid making network requests to the Stigg API.
To do this, you can run the Sidecar service in offline mode by setting the OFFLINE environment variable to true. When enabled, API key validation always succeeds, regardless of the key provided.

docker run -it -p 8443:8443 \
    -e SERVER_API_KEY="localhost" \
    -e OFFLINE=TRUE \
    public.ecr.aws/stigg/sidecar:latest

In offline mode, the Sidecar respects the global fallback strategy, and entitlement evaluations are limited to the values defined as fallback entitlements. All other Sidecar service methods will effectively become no-ops. For example:

docker run -it -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  -e OFFLINE=TRUE \
  -e ENTITLEMENTS_FALLBACK='{"feature-01-templates":{"hasAccess":true,"usageLimit":1000},"feature-02-campaigns":{"hasAccess":true,"isUnlimited":true}}' \
  public.ecr.aws/stigg/sidecar:latest

Interaction with the Sidecar service

Interacting with a running Sidecar service is possible using the Sidecar SDK:

Sidecar SDK