Sidecar is a tiny service that runs alongside your main application, acting as a proxy between the host and the Stigg API. The service provides low-latency entitlement checks, handles caching, and subscribes to real-time entitlement and usage data updates.

The benefits of deploying the Sidecar service:

  • Lower CPU and memory footprint for the host application compared to embedding the SDK directly
  • Language neutral API defined in Protocol Buffers accessible over gRPC
  • Support for in-memory cache or external cache (like Redis) for entitlements and usage data
  • Scaled together with the main application, or independently if deployed as a standalone service
  • Works in tandem with the persistent-cache-service to keep an external cache up to date

πŸ“˜

Sidecar SDK

Read about sending requests to the Sidecar service in the Sidecar SDK page.

Overview

The Sidecar service can be deployed together with the host application/service by utilizing the sidecar pattern - meaning each application instance has a sidecar container next to it, which it sends requests to.
Alternatively, it can be deployed as a standalone service that is accessed remotely over an exposed port.

The service can be scaled horizontally to support a higher volume of requests, but keep in mind that if the in-memory cache is used, there will be a higher ratio of cache misses. In case of a cache miss, the service will fetch the data over the network directly from the Stigg API, and update the cache to serve future requests.
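The cache-aside flow described above can be sketched as follows. This is a simplified illustration of the behavior, not the service's actual internals; the function and key names are hypothetical:

```python
# Simplified sketch of the cache-aside flow described above.
# `cache` is a plain dict standing in for the in-memory cache;
# `fetch_from_api` is a stand-in for a network call to the Stigg API.

def get_entitlement(cache, customer_id, feature_id, fetch_from_api):
    key = (customer_id, feature_id)
    if key in cache:
        # Cache hit: served locally with low latency
        return cache[key]
    # Cache miss: fetch over the network from the API
    value = fetch_from_api(customer_id, feature_id)
    # Populate the cache so future requests are served locally
    cache[key] = value
    return value
```

With several horizontally scaled instances each holding its own in-memory cache, the same key can miss on every instance before it is cached everywhere, which is the higher cache-miss ratio noted above.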

The sidecar is not intended to be exposed to the internet or to be accessed from a browser.

The Sidecar service can be deployed in the same network namespace (or same Kubernetes Pod) as the main application.
Once entitlement data is fetched from the Stigg API, it is persisted in the local or external cache.

Local cache invalidation is handled by the Sidecar service. If using an external cache, an instance of the persistent-cache-service must be deployed as well to handle cache updates, as illustrated below:

Running the service

Prerequisites

  • Docker
  • Redis instance, if a persistent cache is in use

Usage

Login to AWS ECR:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/stigg

Run the service:

docker run -it -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  public.ecr.aws/stigg/sidecar:latest

Available options:

| Environment Variable | Type | Default | Description |
| --- | --- | --- | --- |
| SERVER_API_KEY | String | * | Environment Server API key |
| API_URL | String | https://api.stigg.io | Stigg API address URL |
| EDGE_ENABLED | Boolean | TRUE | Enables the Edge API |
| EDGE_API_URL | String | https://edge.api.stigg.io | Edge API URL |
| WS_ENABLED | Boolean | TRUE | Enables the WebSockets API |
| WS_URL | String | wss://api.stigg.io | WebSockets API URL |
| REDIS_ENVIRONMENT_PREFIX | String | | Identifier of the environment, used to prefix the keys in Redis. If provided, Redis will be used as the cache layer. |
| REDIS_HOST | String | localhost | Redis host |
| REDIS_PORT | Number | 6379 | Redis port |
| REDIS_DB | Number | 0 | Redis DB identifier |
| REDIS_USERNAME | String | | Redis username |
| REDIS_PASSWORD | String | | Redis password |
| REDIS_KEYS_TTL_IN_SECS | Number | 604,800 (7 days) | Time period for Redis to keep the data before eviction, in seconds |
| ENTITLEMENTS_FALLBACK | String | | Fallback entitlements in a JSON string format |
| PORT | Number | 8443 | Service port |
| CACHE_MAX_SIZE_BYTES | Number | 50% of total available memory | Maximum size of the in-memory cache, in bytes |

*Required fields
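The defaults in the table can be mirrored in a small helper that assembles the container's environment. This is an illustrative sketch, not part of the service; the variable names match the table, but the helper itself is hypothetical:

```python
def sidecar_env(server_api_key, **overrides):
    """Build the environment-variable mapping for the sidecar container.

    SERVER_API_KEY is the only required value; everything else falls back
    to the defaults listed in the options table.
    """
    if not server_api_key:
        raise ValueError("SERVER_API_KEY is required")
    env = {
        "SERVER_API_KEY": server_api_key,
        "API_URL": "https://api.stigg.io",
        "EDGE_ENABLED": "TRUE",
        "EDGE_API_URL": "https://edge.api.stigg.io",
        "WS_ENABLED": "TRUE",
        "WS_URL": "wss://api.stigg.io",
        "PORT": "8443",
    }
    # Overrides (e.g. REDIS_HOST) take precedence over the defaults
    env.update({k: str(v) for k, v in overrides.items()})
    return env
```

The resulting mapping can be passed to your container tooling of choice (for example, as repeated `-e` flags to `docker run`).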

Health

The service exposes two endpoints accessible via HTTP:

GET /livez

Returns 200 if the service is alive.

Healthy response: {"status":"UP"}

GET /readyz

Returns 200 if the service is ready.

Healthy response: {"status":"UP"}
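A minimal readiness probe can be built on top of these endpoints. The sketch below assumes the service is reachable on localhost:8443; the response-parsing helper simply checks for the healthy body shown above:

```python
import json
import urllib.request

def is_up(body: bytes) -> bool:
    """Return True if a health endpoint body reports {"status": "UP"}."""
    try:
        return json.loads(body).get("status") == "UP"
    except (ValueError, AttributeError):
        return False

def check_ready(base_url: str = "http://localhost:8443") -> bool:
    """Probe GET /readyz; True only on a 200 response with a healthy body."""
    try:
        with urllib.request.urlopen(f"{base_url}/readyz", timeout=2) as resp:
            return resp.status == 200 and is_up(resp.read())
    except OSError:
        return False
```

The same pattern works for GET /livez when wiring up a liveness probe.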

Usage with Sidecar SDK

To interact with a running Sidecar service, you can pass the remote sidecar host and port parameters:


import os
from stigg_sdk import Stigg, ApiConfig

stigg = Stigg(
    ApiConfig(
        api_key='<SERVER_API_KEY>',
    ),
    # set remote sidecar host and port:
    remote_sidecar_host='localhost',
    remote_sidecar_port=8443
)

Once configured, the Sidecar SDK will not launch a sidecar service as a sub-process; instead, it will send requests to the remote address.

Persistent cache

In order for the cached data to survive service restarts, or to be shared across multiple instances of the Sidecar service, you can use Redis as the cache layer by providing the REDIS_* environment variables:

docker run -it -p 8443:8443 \
    -e SERVER_API_KEY="<SERVER_API_KEY>" \
    -e REDIS_ENVIRONMENT_PREFIX="production" \
    -e REDIS_HOST="localhost" \
    public.ecr.aws/stigg/sidecar:latest

πŸ“˜

To keep the cache up-to-date, you will also need to run a persistent cache service in a separate process.

Global fallback strategy

A global fallback strategy can be set by providing the Sidecar service with the ENTITLEMENTS_FALLBACK environment variable. It expects a JSON object serialized as a string.

For example, the following global fallback configuration JSON structure:

{
  "feature-01-templates": {
    "hasAccess": true,
    "usageLimit": 1000
  },
  "feature-02-campaigns": {
    "hasAccess": true,
    "isUnlimited": true
  }
}

It can be serialized using JSON.stringify and then set as the value of ENTITLEMENTS_FALLBACK when running the container:

docker run -it -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  -e ENTITLEMENTS_FALLBACK='{"feature-01-templates":{"hasAccess":true,"usageLimit":1000},"feature-02-campaigns":{"hasAccess":true,"isUnlimited":true}}' \
  public.ecr.aws/stigg/sidecar:latest
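The same value can be assembled programmatically; in Python, json.dumps plays the role of JSON.stringify. The feature keys below are the examples from above:

```python
import json

# Global fallback entitlements, mirroring the example configuration above
fallback = {
    "feature-01-templates": {"hasAccess": True, "usageLimit": 1000},
    "feature-02-campaigns": {"hasAccess": True, "isUnlimited": True},
}

# Serialize compactly so the value fits on a single line as an
# environment-variable value
entitlements_fallback = json.dumps(fallback, separators=(",", ":"))
```

The resulting string can then be passed as the ENTITLEMENTS_FALLBACK value, for example via `-e` in the `docker run` command above.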

Sidecar service API
