Tggl Proxy
The Tggl proxy sits between the Tggl API and your application on your own infrastructure. It is a simple HTTP server that periodically syncs with the Tggl API and caches the feature flags configuration in memory. This allows your application to query the feature flags without having to make a request to the Tggl API.
The proxy has three main benefits:
- Performance: The proxy can run close to your end users and handle thousands of requests per second or more, depending on your infrastructure.
- Privacy: Since the proxy runs on your infrastructure, your data never leaves your network and never reaches the Tggl API.
- Reliability: Since the proxy does not depend on the Tggl API, it can continue to serve feature flags even if the Tggl API is down. It can even persist its configuration across restarts.
Running the proxy with Docker
Create a docker-compose.yml file with the following content:
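A minimal compose file might look like this. The image name `tggl/proxy` is an assumption; the environment variables come from the configuration reference below:

```yaml
services:
  tggl-proxy:
    # Image name is an assumption; use the one from your Tggl dashboard
    image: tggl/proxy
    ports:
      - '3000:3000'
    environment:
      # Server API key from your Tggl dashboard
      TGGL_API_KEY: <your-api-key>
      # Keys your own clients must send in the X-Tggl-Api-Key header
      TGGL_CLIENT_API_KEYS: <client-key-1>,<client-key-2>
```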
Now simply run:
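From the directory containing the compose file:

```
docker compose up -d
```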
The proxy will be available on port 3000 of your machine. You can change the port by editing the ports section, for instance '4001:3000' for port 4001.
Check out the list of environment variables below to see all the options you can pass to the proxy.
Storage
By default, the proxy stores the configuration locally in a file. This allows you to serve the latest known configuration even if the Tggl API is down.
While the proxy starts, it will wait for either the storage to be loaded or the configuration to be fetched from the Tggl API. This means that the proxy will not start serving requests until it has a configuration to serve.
You can also save the configuration to a Postgres database by setting the POSTGRES_URL environment variable. The proxy will create a table named tggl_config and store the configuration every time it is updated.
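In the Docker setup this is just an extra environment variable. The connection string below is illustrative:

```yaml
environment:
  TGGL_API_KEY: <your-api-key>
  # Any standard Postgres connection string
  POSTGRES_URL: postgres://user:password@db:5432/tggl
```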
Running the proxy with Node
Install the proxy as a dependency of your project:
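Assuming the package is published as `tggl-proxy` on npm:

```
npm install tggl-proxy
```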
Then simply start the proxy:
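A sketch of what starting the proxy might look like. The `createApp` export and its exact signature are assumptions; the option names match the configuration reference below:

```js
// Sketch only: createApp is an assumed export returning an
// Express-style app; check the package documentation for the real API.
const { createApp } = require('tggl-proxy')

const app = createApp({
  apiKey: '<your-api-key>',        // required, or use TGGL_API_KEY
  clientApiKeys: ['<client-key>'], // keys accepted via X-Tggl-Api-Key
})

app.listen(3000, () => {
  console.log('Tggl proxy listening on port 3000')
})
```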
The apiKey option is the only required option; it can also be passed via the TGGL_API_KEY environment variable.
Storage
By default, the proxy does not store the configuration. This means that if the proxy is restarted, it will have to fetch the configuration from the Tggl API again. This is fine for most use cases, but if you want to have a hot restart, you can configure the proxy to store the configuration.
While the proxy starts, it will wait for either the storage to be loaded or the configuration to be fetched from the Tggl API. This means that the proxy will not start serving requests until it has a configuration to serve.
You can plug in any storage that is capable of storing a string. It must implement the Storage interface exported from the package; for Redis it might look like this:
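The exact shape of the Storage interface is not shown here; the sketch below assumes it exposes async get/set of a single string, using the node-redis client:

```ts
import { createClient } from 'redis'
import { Storage } from 'tggl-proxy'

// Sketch only: the Storage method names and signatures are assumptions,
// check the interface exported from the package for the real shape.
const client = createClient({ url: 'redis://localhost:6379' })
await client.connect()

const redisStorage: Storage = {
  async get() {
    // Return the last saved configuration, or null if none was saved yet
    return client.get('tggl_config')
  },
  async set(value: string) {
    await client.set('tggl_config', value)
  },
}
```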
Security
All calls to the POST /flags endpoint must be authenticated via the X-Tggl-Api-Key header. It is up to you to decide which keys are accepted via the clientApiKeys option.
You can also completely disable the X-Tggl-Api-Key header check by setting the rejectUnauthorized option to false.
Tggl is secure by default: either set rejectUnauthorized to false or provide a list of valid API keys via the clientApiKeys option. If you do neither, the proxy will reject all calls.
API
/flags
The proxy exposes a POST /flags endpoint that exactly mirrors the Tggl API. This means that all SDKs can be configured to use the proxy instead of the Tggl API simply by pointing them at the right URL.
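For example, a request to a proxy running locally might look like this. The body follows the Tggl API's context format; the field shown is illustrative:

```
curl -X POST http://localhost:3000/flags \
  -H 'Content-Type: application/json' \
  -H 'X-Tggl-Api-Key: <client-key>' \
  -d '{"userId": "foo"}'
```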
The path of the endpoint can be configured using the path option. You can also set up custom CORS rules using the cors option.
/health
The GET /health endpoint returns a 200 status code if the proxy is ready to serve requests. It returns a 503 status code if the proxy is unable to load the configuration from either the API or the storage.
The /health endpoint will still return a 200 status code if the proxy is unable to fetch the configuration from the API but is able to load it from the storage.
The path of the endpoint can be configured using the healthCheckPath option. It can be disabled by passing the string false.
/metrics
The GET /metrics endpoint returns a Prometheus-compatible metrics payload. The path of the endpoint can be configured using the metricsPath option. It can be disabled by passing the string false.
Configuration reference
The proxy can be configured by passing an options object; any missing option falls back to its environment variable before falling back to its default value.
Name | Env var | Required | Default | Description |
---|---|---|---|---|
apiKey | TGGL_API_KEY | ✓ | | Server API key from your Tggl dashboard |
path | TGGL_PROXY_PATH | | '/flags' | URL to evaluate flags |
healthCheckPath | TGGL_HEALTH_CHECK_PATH | | '/health' | URL to check if the server is healthy (pass the string false to disable) |
metricsPath | TGGL_METRICS_PATH | | '/metrics' | URL to fetch the server metrics (pass the string false to disable) |
cors | | | null | CORS configuration |
url | TGGL_URL | | 'https://api.tggl.io/config' | URL of the Tggl API to fetch configuration from |
pollingInterval | TGGL_POLLING_INTERVAL | | 5000 | Interval in milliseconds between two configuration updates. Pass 0 to disable polling. |
rejectUnauthorized | TGGL_REJECT_UNAUTHORIZED | | true | When true, any call with an invalid X-Tggl-Api-Key header is rejected, see clientApiKeys |
clientApiKeys | TGGL_CLIENT_API_KEYS | | [] | Use a comma-separated string to pass an array of keys via the environment variable. The proxy will accept any of the given keys via the X-Tggl-Api-Key header |
storage | | | null | A Storage object that is able to store and retrieve a string, used to persist config between restarts |
| POSTGRES_URL | | | Only works with the Docker image. If set, the configuration will be stored using Postgres in a table named tggl_config. |