Tggl Proxy

Proxy schema

The Tggl proxy sits between the Tggl API and your application on your own infrastructure. It is a simple HTTP server that periodically syncs with the Tggl API and caches the feature flags configuration in memory. This allows your application to query the feature flags without having to make a request to the Tggl API.

The proxy has 3 main benefits:

  • Performance: The proxy can run close to your end users and handle thousands of requests per second or more depending on your infrastructure.
  • Privacy: Since the proxy is running on your infrastructure, your data never leaves your network and never reaches the Tggl API.
  • Reliability: Since the proxy does not depend on the Tggl API, it can continue to serve feature flags even if the Tggl API is down. It can even persist its configuration across restarts.

Running the proxy with Docker

Create a docker-compose.yml file with the following content:

docker-compose.yml
version: "3.9"
services:
  tggl:
    image: tggl/tggl-proxy
    environment:
      TGGL_API_KEY: YOUR_SERVER_API_KEY
    ports:
      - '3000:3000'

Now simply run:

docker-compose up

The proxy will be available on port 3000 of your machine. You can change the port by changing the ports section, for instance '4001:3000' for port 4001.

Check out the list of environment variables below to see all the options you can pass to the proxy.

Storage

By default, the proxy stores the configuration locally in a file. This allows you to serve the latest known configuration even if the Tggl API is down.

Info

While the proxy starts, it will wait for either the storage to be loaded or the configuration to be fetched from the Tggl API. This means that the proxy will not start serving requests until it has a configuration to serve.

You can also save the configuration to a Postgres database by setting the POSTGRES_URL environment variable. The proxy will create a table named tggl_config and store the configuration every time it is updated.
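For example, the docker-compose.yml from above can be extended with the POSTGRES_URL variable. The connection string below is a placeholder; point it at your own database:

```yaml
version: "3.9"
services:
  tggl:
    image: tggl/tggl-proxy
    environment:
      TGGL_API_KEY: YOUR_SERVER_API_KEY
      # Placeholder connection string; replace with your own Postgres instance
      POSTGRES_URL: postgres://user:password@db:5432/tggl
    ports:
      - '3000:3000'
```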

Running the proxy with Node

Install the proxy as a dependency of your project:

npm i tggl-proxy

Then simply start the proxy:

import { createApp } from "tggl-proxy";
 
const app = createApp({
  apiKey: "API_KEY",
});
 
app.listen(3000, () => {
  console.log("Listening on port 3000");
});

The apiKey option is the only required option; it can also be passed via the TGGL_API_KEY environment variable.

Storage

By default, the proxy does not store the configuration. This means that if the proxy is restarted, it will have to fetch the configuration from the Tggl API again. This is fine for most use cases, but if you want to have a hot restart, you can configure the proxy to store the configuration.

Info

While the proxy starts, it will wait for either the storage to be loaded or the configuration to be fetched from the Tggl API. This means that the proxy will not start serving requests until it has a configuration to serve.

You can plug in any storage that is capable of storing a string. It must implement the Storage interface exported from the package. For Redis it might look like this:

import { createClient } from "redis";
import { Storage, createApp } from "tggl-proxy";
 
// Assuming the `redis` package; any client that can get/set a string works
const redisClient = createClient();
await redisClient.connect();
 
const storage: Storage = {
  async getConfig() {
    return redisClient.get("tggl_config_key");
  },
  async setConfig(config: string) {
    await redisClient.set("tggl_config_key", config);
  },
};
 
const app = createApp({
  apiKey: "API_KEY",
  storage,
});
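Since the interface only needs to read and write a single string, a file-backed implementation works just as well. Here is a hedged sketch of one: the local interface declaration mirrors the two-method shape shown above, and the file path is an arbitrary example:

```typescript
import { promises as fs } from "fs";

// Mirrors the two-method shape of the Storage interface used above
interface Storage {
  getConfig(): Promise<string | null>;
  setConfig(config: string): Promise<void>;
}

// Creates a Storage backed by a single file on disk
const fileStorage = (path: string): Storage => ({
  async getConfig() {
    try {
      return await fs.readFile(path, "utf8");
    } catch {
      return null; // no configuration persisted yet
    }
  },
  async setConfig(config: string) {
    await fs.writeFile(path, config, "utf8");
  },
});
```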
 

Security

All calls to the POST /flags endpoint must be authenticated via the X-Tggl-Api-Key header. It is up to you to decide which keys are accepted via the clientApiKeys option.

You can also completely disable the X-Tggl-Api-Key header check by setting the rejectUnauthorized option to false.
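Conceptually, the check described above can be sketched as follows. This is an illustrative model of the documented behavior, not the proxy's actual source:

```typescript
// Illustrative sketch of the documented behavior, not the proxy's real code.
function isAuthorized(
  headerValue: string | undefined, // value of the X-Tggl-Api-Key header
  clientApiKeys: string[],         // keys you chose to accept
  rejectUnauthorized = true        // false disables the check entirely
): boolean {
  if (!rejectUnauthorized) return true;
  return headerValue !== undefined && clientApiKeys.includes(headerValue);
}
```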

API

/flags

The proxy exposes a POST /flags endpoint that exactly mirrors the Tggl API. This means that all SDKs can be configured to use the proxy instead of the Tggl API simply by providing the right URL.

The path of the endpoint can be configured using the path option. You can also set up custom CORS rules using the cors option.

/health

The GET /health endpoint returns a 200 status code if the proxy is ready to serve requests. It returns a 503 status code if the proxy is unable to load the configuration from either the API or the storage.

Info

The /health endpoint will still return a 200 status code if the proxy is unable to fetch the configuration from the API but is able to load it from the storage.

The path of the endpoint can be configured using the healthCheckPath option. It can be disabled by passing the string false.

/metrics

The GET /metrics endpoint returns a Prometheus-compatible metrics payload. The path of the endpoint can be configured using the metricsPath option. It can be disabled by passing the string false.
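If you use Prometheus, a scrape job for the proxy might look like the following. The job name and target are placeholders for your own setup:

```yaml
scrape_configs:
  - job_name: 'tggl-proxy'       # illustrative job name
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:3000']  # adjust to where the proxy runs
```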

Configuration reference

The proxy can be configured by passing an options object; any missing option falls back to the corresponding environment variable before falling back to its default value.

  • apiKey (env: TGGL_API_KEY, required): Server API key from your Tggl dashboard
  • path (env: TGGL_PROXY_PATH, default: '/flags'): URL to evaluate flags
  • healthCheckPath (env: TGGL_HEALTH_CHECK_PATH, default: '/health'): URL to check if the server is healthy (pass the string false to disable)
  • metricsPath (env: TGGL_METRICS_PATH, default: '/metrics'): URL to fetch the server metrics (pass the string false to disable)
  • cors (no env var, default: null): CORS configuration
  • url (env: TGGL_URL, default: 'https://api.tggl.io/config'): URL of the Tggl API to fetch the configuration from
  • pollingInterval (env: TGGL_POLLING_INTERVAL, default: 5000): Interval in milliseconds between two configuration updates. Pass 0 to disable polling.
  • rejectUnauthorized (env: TGGL_REJECT_UNAUTHORIZED, default: true): When true, any call with an invalid X-Tggl-Api-Key header is rejected, see clientApiKeys
  • clientApiKeys (env: TGGL_CLIENT_API_KEYS, default: []): Keys accepted via the X-Tggl-Api-Key header. Use a comma-separated string to pass an array of keys via the environment variable.
  • storage (no env var, default: null): A Storage object that is able to store and retrieve a string to persist the configuration between restarts
  • POSTGRES_URL (env var only): Only works with the Docker image. If set, the configuration will be stored using Postgres in a table named tggl_config.
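The precedence described above (explicit option, then environment variable, then built-in default) can be sketched as follows; the function name is illustrative, not part of the package:

```typescript
// Illustrative sketch of the option resolution order:
// explicit option -> environment variable -> built-in default.
function resolveOption(
  option: string | undefined,
  envVar: string,
  defaultValue: string
): string {
  return option ?? process.env[envVar] ?? defaultValue;
}
```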