The Stigg Sidecar service is available as a Docker container and can be deployed on Google Cloud Platform (GCP) using various compute services.

Deployment options on GCP

You can deploy the Sidecar on GCP in one of the following ways:
  1. Google Kubernetes Engine (GKE) – Ideal for production deployments using the sidecar pattern.
  2. Google Cloud Run – For event-driven or serverless-style deployments where scale-to-zero is needed.
  3. Google Compute Engine (GCE) – For running the container directly on a VM with Docker installed.
  4. Standalone service – For running the Sidecar as a central service accessible over an internal or external IP and port.
All deployment methods use the same Sidecar Docker image, hosted in the Amazon ECR Public registry.

Pulling the Sidecar Docker image

To deploy the Sidecar, start by pulling the image:
docker pull public.ecr.aws/stigg/sidecar:latest

Running the Sidecar service

To run the Sidecar container locally or as part of a custom deployment, replace <SERVER_API_KEY> with the server API key of your Stigg environment and run:
docker run -it -p 8443:8443 \
  -e SERVER_API_KEY="<SERVER_API_KEY>" \
  public.ecr.aws/stigg/sidecar:latest

Using the Sidecar pattern in GKE

In Kubernetes environments (such as GKE), the Sidecar container can be deployed alongside your application container in the same pod. This allows for low-latency gRPC communication via localhost. You can configure it with the same environment variables used in standalone deployments, and optionally attach a Redis container for persistent caching.
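As a sketch, a minimal pod manifest for this pattern could look like the following. The application image, pod name, and secret name are placeholders for illustration, not part of the Stigg documentation:

```shell
# Write a minimal pod manifest that runs the Stigg Sidecar alongside a
# (hypothetical) application container in the same pod, so the app can
# reach it over localhost:8443 via gRPC.
# Placeholders: the app image, pod name, and secret name.
cat > sidecar-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest              # placeholder application image
    - name: stigg-sidecar
      image: public.ecr.aws/stigg/sidecar:latest
      ports:
        - containerPort: 8443           # gRPC, reachable from the app via localhost:8443
      env:
        - name: SERVER_API_KEY
          valueFrom:
            secretKeyRef:
              name: stigg-credentials   # placeholder Secret name
              key: server-api-key
EOF

# Apply it to the cluster:
# kubectl apply -f sidecar-pod.yaml
```

In practice you would embed the same container spec in a Deployment's pod template rather than a bare Pod, and source SERVER_API_KEY from a Kubernetes Secret as shown.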

Key features for GCP deployment

  • Cross-platform: Runs anywhere Docker is supported.
  • Language neutral: gRPC APIs defined using Protocol Buffers.
  • Caching support: Supports both in-memory and Redis-based cache layers.
  • Horizontal scaling: Stateless design allows scaling with your GCP workloads.
  • Health monitoring: Supports /livez, /readyz, and /metrics endpoints.
  • Configurable: Uses environment variables for flexible runtime setup.
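For local experimentation with the Redis-based cache layer mentioned above, the Sidecar can be paired with a Redis container. A minimal Docker Compose sketch follows; note that the REDIS_* environment variable names are assumptions for illustration — verify the exact names against the Sidecar configuration reference:

```shell
# Write a minimal Docker Compose file pairing the Sidecar with Redis.
# NOTE: the REDIS_* variable names below are assumptions; check the
# Sidecar configuration reference for the names it actually expects.
cat > docker-compose.yaml <<'EOF'
services:
  sidecar:
    image: public.ecr.aws/stigg/sidecar:latest
    ports:
      - "8443:8443"
    environment:
      SERVER_API_KEY: "<SERVER_API_KEY>"
      REDIS_HOST: redis        # assumed variable name
      REDIS_PORT: "6379"       # assumed variable name
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
EOF

# Start both services:
# docker compose up -d
```

Because the Sidecar itself is stateless, the Redis service can be swapped for a managed instance (for example, GCP Memorystore) in production without changing the Sidecar's deployment shape.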