This document outlines the most important configuration options available in the chart.
Dependencies
By default, the chart installs the following dependencies:
- altinity/clickhouse-operator
- bitnami/kafka
- bitnami/minio
- bitnami/postgresql
- bitnami/redis
- bitnami/zookeeper
There is optional support for the following additional dependencies:
- grafana/grafana
- grafana/loki
- grafana/promtail
- jetstack/cert-manager
- kubernetes/ingress-nginx
- prometheus-community/prometheus
- prometheus-community/prometheus-kafka-exporter
- prometheus-community/prometheus-postgres-exporter
- prometheus-community/prometheus-redis-exporter
- prometheus-community/prometheus-statsd-exporter
Chart configuration
All PostHog Helm chart configuration options can be found in the ALL_VALUES.md generated from the values.yaml file.
Dependent charts can also have values overwritten. See Chart.yaml for more info regarding the source chart and the namespace that can be used for the override.
Scaling up
The default configuration is geared towards minimizing costs. Here are example extra values overrides to use for scaling up:
Custom overrides for < 1M events/month
```yaml
# Note: those overrides are experimental as each installation and workload is unique

# Use larger storage for stateful services
clickhouse:
  persistence:
    size: 60Gi

postgresql:
  persistence:
    size: 30Gi

kafka:
  persistence:
    size: 60Gi
  logRetentionBytes: _45_000_000_000

# Add additional replicas for the stateless services
events:
  replicacount: 2

pgbouncer:
  replicacount: 2

plugins:
  replicacount: 2

web:
  replicacount: 2

worker:
  replicacount: 2
```
Custom overrides for > 1M events/month
```yaml
# Note: those overrides are experimental as each installation and workload is unique

# Use larger storage for stateful services
clickhouse:
  persistence:
    size: 200Gi

postgresql:
  persistence:
    size: 100Gi

kafka:
  persistence:
    size: 200Gi
  logRetentionBytes: _150_000_000_000

# Enable horizontal pod autoscaling for stateless services
events:
  hpa:
    enabled: true

pgbouncer:
  hpa:
    enabled: true

plugins:
  hpa:
    enabled: true

web:
  hpa:
    enabled: true

worker:
  hpa:
    enabled: true
```
Using dedicated nodes for services
For the stateful services (ClickHouse, Kafka, Redis, PostgreSQL, Zookeeper), we suggest running them on nodes with dedicated CPU resources and fast drives (SSD/NVMe).
To do so, label your Kubernetes nodes and then assign pods to them using the following overrides:
- ClickHouse: clickhouse.nodeSelector
- Kafka: kafka.nodeSelector
- Redis: redis.master.nodeSelector
- PostgreSQL: postgresql.master.nodeSelector
- Zookeeper: zookeeper.nodeSelector
Example:
```yaml
clickhouse:
  nodeSelector:
    diskType: ssd
    nodeType: fast
```
For more fine-grained options, affinity and tolerations overrides are also available for the majority of the stateful components. See the official Kubernetes documentation for more info.
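As an illustration, a tolerations override for ClickHouse might look like the sketch below. The taint key and values are assumptions about your own cluster; adapt them to whatever taints you have applied to the dedicated nodes.

```yaml
clickhouse:
  tolerations:
    # hypothetical taint, e.g. applied with:
    # kubectl taint nodes <node> dedicated=clickhouse:NoSchedule
    - key: 'dedicated'
      operator: 'Equal'
      value: 'clickhouse'
      effect: 'NoSchedule'
```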
Email (SMTP service)
For PostHog to be able to send emails we need a working SMTP service available. You can configure PostHog to use the service by editing the email section of your values.yaml file. Example:
```yaml
email:
  host: <SMTP service host>
  port: <SMTP service port>
```
If your SMTP service requires authentication (recommended) you can either:
- directly provide the SMTP login in the values.yaml by simply setting email.user and email.password. Example:

```yaml
email:
  host: <SMTP service host>
  port: <SMTP service port>
  user: <SMTP service user>
  password: <SMTP service password>
```
- provide the password via a Kubernetes secret, by configuring email.existingSecret and email.existingSecretKey accordingly. Example:

create the secret by running:

```shell
kubectl -n posthog create secret generic "smtp-password" --from-literal="password=<YOUR_PASSWORD>"
```

configure your values.yaml to reference the secret:

```yaml
email:
  host: <SMTP service host>
  port: <SMTP service port>
  user: <SMTP service user>
  existingSecret: 'smtp-password'
  existingSecretKey: 'password'
```
ClickHouse
ClickHouse is the datastore system that does the bulk of heavy lifting with regards to storing and analyzing the analytics data.
By default, ClickHouse is installed as a part of the chart, powered by clickhouse-operator. You can also use a ClickHouse managed service like Altinity (see here for more info).
Securing ClickHouse
By default, the PostHog Helm Chart will provision a ClickHouse cluster using a default username and password. Please provide a unique login by overriding the clickhouse.user and clickhouse.password values.
By default, the PostHog Helm Chart uses a ClusterIP to expose the service internally to the rest of the PostHog application. This should prevent any external access.
If however you decide you want to access the ClickHouse cluster external to the Kubernetes cluster and need to expose it e.g. to the internet, keep in mind the following:
- the Helm Chart does not configure TLS for ClickHouse, so we recommend configuring TLS yourself, e.g. within a load balancer in front of the cluster.
- if exposing via a LoadBalancer or NodePort service type via clickhouse.serviceType, both will expose a port on your Kubernetes nodes. We recommend you ensure that your Kubernetes worker nodes are within a private network, or in a public network with firewall rules in place.
- if exposing via a LoadBalancer service type, restrict the ingress network access to the load balancer.
- to restrict access to the ClickHouse cluster, ClickHouse offers settings for restricting the IPs/hosts that can access the cluster. See the user_name/networks setting for details. We expose this setting via the Helm Chart as clickhouse.allowedNetworkIps (see the example below).
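For example, to restrict access to a private network range, an override along these lines could be used (the CIDR below is purely illustrative; substitute your own networks):

```yaml
clickhouse:
  allowedNetworkIps:
    # illustrative CIDR; replace with the networks that should reach ClickHouse
    - '10.0.0.0/8'
```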
Use an external service
To use an external ClickHouse service, please set clickhouse.enabled to false and then configure the externalClickhouse values.
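A minimal sketch of such an override is shown below. It assumes the externalClickhouse block accepts host, user, and password fields; check ALL_VALUES.md for the exact keys supported by your chart version.

```yaml
clickhouse:
  enabled: false

# field names assumed; verify against ALL_VALUES.md for your chart version
externalClickhouse:
  host: '<YOUR_CLICKHOUSE_HOST>'
  user: '<YOUR_CLICKHOUSE_USER>'
  password: '<YOUR_CLICKHOUSE_PASSWORD>'
```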
Find out how to deploy PostHog using Altinity Cloud in our deployment configuration docs.
Custom settings
It's possible to pass custom settings to ClickHouse. This might be needed to e.g. set query time limits or increase the maximum memory usable by ClickHouse.
To do so, you can override the clickhouse.profiles values as below. The default profile is used by PostHog for all queries.
```yaml
clickhouse:
  profiles:
    default/max_execution_time: '180'
    default/max_memory_usage: '40000000000'
```
Read more about ClickHouse settings here.
See ALL_VALUES.md for full configuration options.
MinIO
By default, MinIO is not installed as part of the chart. If you want to enable it, please set minio.enabled to true.
MinIO provides a scalable, S3-compatible object storage system. You can customize all its settings by overriding values.yaml variables in the minio namespace.
Note: please override the default user authentication by either passing auth.rootUser and auth.rootPassword or auth.existingSecret.
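For example, overriding the root credentials directly could look like the sketch below (the values are placeholders):

```yaml
minio:
  enabled: true
  auth:
    # placeholders: use your own credentials, or auth.existingSecret instead
    rootUser: '<YOUR_MINIO_ROOT_USER>'
    rootPassword: '<YOUR_MINIO_ROOT_PASSWORD>'
```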
Use an external service
To use an external S3-compatible object storage service, please set minio.enabled to false and then configure the externalObjectStorage values.
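A rough sketch of such an override is shown below. The field names under externalObjectStorage are assumptions; check ALL_VALUES.md for the exact keys.

```yaml
minio:
  enabled: false

# field names below are assumptions; see ALL_VALUES.md for the exact externalObjectStorage keys
externalObjectStorage:
  endpoint: 'https://s3.us-east-1.amazonaws.com'
  bucket: 'posthog-object-storage'
```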
See ALL_VALUES.md and the MinIO chart for full configuration options.
PostgreSQL
While ClickHouse powers the bulk of the analytics when you deploy PostHog using this chart, PostgreSQL is still needed as a data store for PostHog to work.
PostgreSQL is installed by default as part of the chart. You can customize all its settings by overriding values.yaml variables in the postgresql namespace.
Note: to avoid issues when upgrading this chart, provide postgresql.postgresqlPassword for subsequent upgrades. This is due to an issue in the upstream PostgreSQL chart where the password would otherwise be overwritten with a randomly generated one. See PostgreSQL#upgrade for more details.
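In practice this means carrying the current password in your values (or passing it via --set) on every upgrade, for example:

```yaml
postgresql:
  # keep this set to the existing password on every `helm upgrade`
  postgresqlPassword: '<YOUR_CURRENT_POSTGRESQL_PASSWORD>'
```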
Use an external service
To use an external PostgreSQL service, please set postgresql.enabled to false and then configure the externalPostgresql values.
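A minimal sketch is shown below; the externalPostgresql field names are assumptions, so check ALL_VALUES.md for the exact keys.

```yaml
postgresql:
  enabled: false

# field names below are assumptions; see ALL_VALUES.md for the exact externalPostgresql keys
externalPostgresql:
  postgresqlHost: '<YOUR_POSTGRESQL_HOST>'
  postgresqlPort: 5432
```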
See ALL_VALUES.md and the PostgreSQL chart for full configuration options.
PgBouncer
PgBouncer is a lightweight connection pooler for PostgreSQL and it is installed by default as part of the chart. It is currently required in order for the installation to work (see here for more info).
If you've configured your PostgreSQL instance to require the use of TLS, you'll need to pass an additional environment variable to the PgBouncer deployment (see the official documentation for more info). Example:
```yaml
pgbouncer:
  env:
    - name: SERVER_TLS_SSLMODE
      value: 'your_value'
```
See ALL_VALUES.md for full configuration options.
Redis
Redis is installed by default as part of the chart. You can customize all its settings by overriding values.yaml variables in the redis namespace.
Use an external service
To use an external Redis service, please set redis.enabled to false and then configure the externalRedis values.
Example
```yaml
redis:
  enabled: false

externalRedis:
  host: 'posthog.cache.us-east-1.amazonaws.com'
  port: 6379
```
Credentials
By default, Redis doesn't use any password for authentication. If you want to configure it to use a password (recommended) see the options below.
Internal Redis
- set redis.auth.enabled to true
- to directly provide the password value in the values.yaml, simply set it in redis.auth.password
- if you want to provide the password via a Kubernetes secret, please configure redis.auth.existingSecret and redis.auth.existingSecretPasswordKey accordingly:

Example
create the secret by running:
```shell
kubectl -n posthog create secret generic "redis-existing-secret" --from-literal="redis-password=<YOUR_PASSWORD>"
```
configure your values.yaml to reference the secret:

```yaml
redis:
  enabled: true
  auth:
    enabled: true
    existingSecret: 'redis-existing-secret'
    existingSecretPasswordKey: 'redis-password'
```
External Redis
- to directly provide the password value in the values.yaml, simply set it in externalRedis.password
- if you want to provide a password via an existing secret, please configure externalRedis.existingSecret and externalRedis.existingSecretPasswordKey accordingly:

Example

create the secret by running:

```shell
kubectl -n posthog create secret generic "redis-existing-secret" --from-literal="redis-password=<YOUR_PASSWORD>"
```

configure your values.yaml to reference the secret:

```yaml
redis:
  enabled: false

externalRedis:
  host: '<YOUR_REDIS_HOST>'
  port: <YOUR_REDIS_PORT>
  existingSecret: 'redis-existing-secret'
  existingSecretPasswordKey: 'redis-password'
```
See ALL_VALUES.md and the Redis chart for full configuration options.
Kafka
Kafka is installed by default as part of the chart. You can customize all its settings by overriding values.yaml variables in the kafka namespace.
Use an external service
To use an external Kafka service, please set kafka.enabled to false and then configure the externalKafka values.
Example
```yaml
kafka:
  enabled: false

externalKafka:
  brokers:
    - 'broker-1.posthog.kafka.us-east-1.amazonaws.com:9094'
    - 'broker-2.posthog.kafka.us-east-1.amazonaws.com:9094'
    - 'broker-3.posthog.kafka.us-east-1.amazonaws.com:9094'
```
See ALL_VALUES.md and the Kafka chart for full configuration options.
Ingress
This chart provides support for the Ingress resource. If you have an available Ingress Controller such as Nginx or Traefik, you may want to set ingress.nginx.enabled to true or configure ingress.type, and choose an ingress.hostname for the URL. Then, you should be able to access the installation using that address.
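For example, based on the options mentioned above, an nginx-backed setup might look like this sketch (the hostname is a placeholder):

```yaml
ingress:
  # placeholder hostname; point your DNS record at the ingress controller
  hostname: posthog.example.com
  nginx:
    enabled: true
```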
Grafana
By default, grafana is not installed as part of the chart. If you want to enable it, please set grafana.enabled to true.
The default settings provide a vanilla installation with an auto-generated login. The username is admin and the auto-generated password can be fetched by running:

```shell
kubectl -n posthog get secret posthog-grafana -o jsonpath="{.data.admin-password}" | base64 --decode
```
To configure the stack (like expose the service via an ingress resource, manage users, ...) please look at the inputs provided by the upstream chart.
See ALL_VALUES.md and the grafana chart for full configuration options.
Loki
By default, loki is not installed as part of the chart. If you want to enable it, please set loki.enabled to true.
To configure the stack (like expose the service via an ingress resource, ...) please look at the inputs provided by the upstream chart.
See ALL_VALUES.md and the loki chart for full configuration options.
Promtail
By default, promtail is not installed as part of the chart. If you want to enable it, please set promtail.enabled to true.
To configure the stack (like expose the service via an ingress resource, ...) please look at the inputs provided by the upstream chart.
See ALL_VALUES.md and the promtail chart for full configuration options.
Prometheus
This chart supports alerting. Set prometheus.enabled to true and set prometheus.alertmanagerFiles to the right configuration.
Read more in the Prometheus chart and Prometheus configuration documentation.
Example configuration (PagerDuty)
```yaml
prometheus:
  enabled: true

  alertmanagerFiles:
    alertmanager.yml:
      receivers:
        - name: default-receiver
          pagerduty_configs:
            - routing_key: YOUR_ROUTING_KEY
              description: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"

      route:
        group_by: [alertname]
        receiver: default-receiver
```
Getting access to the Prometheus UI
This might be useful when checking out metrics. Figure out your prometheus-server pod name via kubectl get pods --namespace NS and run:

```shell
kubectl --namespace NS port-forward posthog-prometheus-server-XXX 9090
```

After this, you should be able to access the Prometheus server on localhost.
Statsd / prometheus-statsd-exporter
By default, StatsD is not installed as part of the chart. If you want to enable it, please set prometheus-statsd-exporter.enabled to true.
Use an external service
To use an external StatsD service, please set prometheus-statsd-exporter.enabled to false and then configure the externalStatsd values.
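A minimal sketch is shown below; the field names under externalStatsd are assumptions, so check ALL_VALUES.md for the exact keys.

```yaml
prometheus-statsd-exporter:
  enabled: false

# field names below are assumptions; see ALL_VALUES.md for the exact externalStatsd keys
externalStatsd:
  host: '<YOUR_STATSD_HOST>'
  port: '<YOUR_STATSD_PORT>'
```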
See ALL_VALUES.md and prometheus-statsd-exporter chart for full configuration options.
prometheus-kafka-exporter
By default, prometheus-kafka-exporter is not installed as part of the chart. If you want to enable it, please set prometheus-kafka-exporter.enabled to true. If you are using an external Kafka, please configure prometheus-kafka-exporter.kafkaServer accordingly.
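For example, assuming kafkaServer accepts a list of broker addresses as in the upstream chart, an override could look like:

```yaml
prometheus-kafka-exporter:
  enabled: true
  # assumption: the upstream chart takes a list of broker addresses here
  kafkaServer:
    - 'broker-1.posthog.kafka.us-east-1.amazonaws.com:9094'
```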
See ALL_VALUES.md and prometheus-kafka-exporter chart for full configuration options.
prometheus-postgres-exporter
By default, prometheus-postgres-exporter is not installed as part of the chart. If you want to enable it, please set prometheus-postgres-exporter.enabled to true. If you are using an external PostgreSQL, please configure prometheus-postgres-exporter.config.datasource accordingly.
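A rough sketch, assuming the datasource block of the upstream chart takes host, user, password, port, and database fields; verify the exact keys in the prometheus-postgres-exporter chart before use:

```yaml
prometheus-postgres-exporter:
  enabled: true
  config:
    datasource:
      # field names assumed from the upstream chart; verify before use
      host: '<YOUR_POSTGRESQL_HOST>'
      user: '<YOUR_POSTGRESQL_USER>'
      password: '<YOUR_POSTGRESQL_PASSWORD>'
      port: '5432'
      database: 'posthog'
```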
See ALL_VALUES.md and prometheus-postgres-exporter chart for full configuration options.
prometheus-redis-exporter
By default, prometheus-redis-exporter is not installed as part of the chart. If you want to enable it, please set prometheus-redis-exporter.enabled to true. If you are using an external Redis, please configure prometheus-redis-exporter.redisAddress accordingly.
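For example, assuming the upstream exporter chart's redis:// address format:

```yaml
prometheus-redis-exporter:
  enabled: true
  # assumption: address format follows the upstream chart (redis://<host>:<port>)
  redisAddress: 'redis://<YOUR_REDIS_HOST>:6379'
```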
See ALL_VALUES.md and prometheus-redis-exporter chart for full configuration options.