ONLYOFFICE Docs for Kubernetes

Introduction

Important The following guide is valid only for the paid license solution.
  • You must have a Kubernetes or OpenShift cluster installed.
  • You should also have a locally configured copy of kubectl. See this guide on how to install and configure kubectl.
  • You should install Helm v3.7+. Please follow the instructions here to install it.
  • If you use OpenShift, you can use both oc and kubectl to manage your deployments.
  • If components external to Docs are installed from a Helm chart in an OpenShift cluster, it is recommended to install them as a user with the cluster-admin role in order to avoid possible problems with access rights. See this guide to add the necessary roles to the user.

Deploy prerequisites

When installing into an OpenShift cluster, it may be required to apply a SecurityContextConstraints (SCC) policy that grants permission to run containers as a user with ID 1001.

To do this, run the following commands:

$ oc apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/scc/helm-components.yaml
$ oc adm policy add-scc-to-group scc-helm-components system:authenticated

Alternatively, you can specify the allowed range of users and groups from the target namespace; see the runAsUser and fsGroup parameters when installing dependencies such as RabbitMQ, Redis, PostgreSQL, etc.
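For example, when installing the Bitnami PostgreSQL dependency into a namespace with a restricted UID/GID range, the IDs can be overridden through a values file. The parameter names below follow Bitnami chart conventions and the IDs are placeholders; verify both against the chart version and the namespace's allowed range:

```yaml
# values-postgresql.yaml -- illustrative sketch, not a definitive configuration
primary:
  podSecurityContext:
    enabled: true
    fsGroup: 1000680000        # example GID from the namespace's allowed range
  containerSecurityContext:
    enabled: true
    runAsUser: 1000680000      # example UID from the namespace's allowed range
```

Apply it with helm install postgresql bitnami/postgresql -f values-postgresql.yaml.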

1. Add Helm repositories
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo add nfs-server-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner
$ helm repo add onlyoffice https://download.onlyoffice.com/charts/stable
$ helm repo update
2. Install Persistent Storage
Important If you want to use Amazon S3 as a cache, please skip this step.

Install NFS Server Provisioner

When NFS Server Provisioner is installed, an NFS Storage Class is created. When installing into an OpenShift cluster, the user must have a role that allows creating Storage Classes in the cluster. Read more here.
$ helm install nfs-server nfs-server-provisioner/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set persistence.size=PERSISTENT_SIZE
  • PERSISTENT_STORAGE_CLASS is a Persistent Storage Class available in your Kubernetes cluster.

    Persistent Storage Classes for different providers:

    • Amazon EKS: gp2
    • Digital Ocean: do-block-storage
    • IBM Cloud: Default ibmc-file-bronze. More storage classes
    • Yandex Cloud: yc-network-hdd or yc-network-ssd. More details
    • minikube: standard
  • PERSISTENT_SIZE is the total size of all Persistent Storages for the nfs Persistent Storage Class. You can express the size as a plain integer with one of these suffixes: T, G, M, Ti, Gi, Mi. For example: 9Gi.
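For example, on minikube with the standard Storage Class from the list above and 10Gi of total storage, the command becomes:

```
$ helm install nfs-server nfs-server-provisioner/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=standard \
  --set persistence.size=10Gi
```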

See more details about installing NFS Server Provisioner via Helm here.

Configure a Persistent Volume Claim

The default nfs Persistent Volume Claim is 8Gi. You can change it in the values.yaml file in the persistence.storageClass and persistence.size sections. It should be at least about 5% smaller than PERSISTENT_SIZE. It is recommended to use 8Gi or more of persistent storage for every 100 active users of ONLYOFFICE Docs.

Important The PersistentVolume type used for PVC placement must support the ReadWriteMany access mode. The PersistentVolume must also be owned by the user that ONLYOFFICE Docs runs as; by default this is ds (101:101).

If you want to enable WOPI, set the parameter wopi.enabled=true. In this case, the Persistent Storage must be mounted on the cluster nodes with caching disabled for the mounted directory on the clients. For NFS Server Provisioner this can be achieved by adding the noac option to the storageClass.mountOptions parameter. Please find more information here.
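For example, the noac option can be supplied in a values file for NFS Server Provisioner (the storageClass.mountOptions parameter name follows the nfs-server-provisioner chart; verify it against your chart version):

```yaml
# nfs-values.yaml -- sketch of an NFS Server Provisioner values file for WOPI;
# storageClass.mountOptions is assumed from the nfs-server-provisioner chart
persistence:
  enabled: true
  storageClass: PERSISTENT_STORAGE_CLASS
  size: PERSISTENT_SIZE
storageClass:
  mountOptions:
    - noac    # disable attribute caching for clients, required for WOPI locks
```

Install with helm install nfs-server nfs-server-provisioner/nfs-server-provisioner -f nfs-values.yaml.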

3. Deploy RabbitMQ

To install RabbitMQ to your cluster, run the following command:

$ helm install rabbitmq --version 16.0.14 bitnami/rabbitmq \
  --set persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set resourcesPreset=none \
  --set image.repository=bitnamilegacy/rabbitmq \
  --set image.tag=4.1.3-debian-12-r1 \
  --set global.security.allowInsecureImages=true \
  --set metrics.enabled=false
Set metrics.enabled=true to expose RabbitMQ metrics so they can be gathered by Prometheus.

See more details about installing RabbitMQ via Helm here.
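The release name and credentials above map onto the connections.amqp* parameters described in the Parameters section. With the command above they match the chart defaults (amqpHost=rabbitmq, amqpUser=user, amqpExistingSecret=rabbitmq), so they only need to be set explicitly if you used a different release name, for example:

```
$ helm install documentserver onlyoffice/docs \
  --set connections.amqpHost=my-rabbitmq \
  --set connections.amqpExistingSecret=my-rabbitmq
```

Here my-rabbitmq is a hypothetical release name used for illustration.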

4. Deploy Redis

To install Redis to your cluster, run the following command:

$ helm install redis --version 22.0.7 bitnami/redis \
  --set architecture=standalone \
  --set master.persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set master.resourcesPreset=none \
  --set global.security.allowInsecureImages=true \
  --set image.repository=bitnamilegacy/redis \
  --set image.tag=8.2.1-debian-12-r0 \
  --set metrics.enabled=false
Set metrics.enabled=true to expose Redis metrics so they can be gathered by Prometheus. Also add the following parameters: metrics.image.repository=bitnamilegacy/redis-exporter and metrics.image.tag=1.76.0-debian-12-r0.

See more details about installing Redis via Helm here.
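Similarly, the Redis release maps onto the connections.redis* parameters from the Parameters section; the command above matches the defaults (redisHost=redis-master, redisExistingSecret=redis). If you installed Redis under a different release name, point ONLYOFFICE Docs at it explicitly, for example:

```
$ helm install documentserver onlyoffice/docs \
  --set connections.redisHost=my-redis-master \
  --set connections.redisExistingSecret=my-redis
```

my-redis-master and my-redis are hypothetical names used for illustration.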

5. Deploy Database

As a database server, you can use PostgreSQL, MySQL or MariaDB.

To install PostgreSQL to your cluster, run the following command:

$ helm install postgresql --version 16.7.27 bitnami/postgresql \
  --set auth.database=postgres \
  --set primary.persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set primary.persistence.size=PERSISTENT_SIZE \
  --set primary.resourcesPreset=none \
  --set image.repository=bitnamilegacy/postgresql \
  --set global.security.allowInsecureImages=true \
  --set image.tag=17.6.0-debian-12-r2 \
  --set metrics.enabled=false
Set metrics.enabled=true to expose Database metrics so they can be gathered by Prometheus. Also add the following parameters: metrics.image.repository=bitnamilegacy/postgres-exporter and metrics.image.tag=0.17.1-debian-12-r16.

See more details about installing PostgreSQL via Helm here.

To install MySQL to your cluster, run the following command:

$ helm install mysql --version 14.0.3 bitnami/mysql \
  --set auth.database=onlyoffice \
  --set auth.username=onlyoffice \
  --set primary.persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set primary.persistence.size=PERSISTENT_SIZE \
  --set primary.resourcesPreset=none \
  --set image.repository=bitnamilegacy/mysql \
  --set global.security.allowInsecureImages=true \
  --set image.tag=9.4.0-debian-12-r1 \
  --set metrics.enabled=false

See more details about installing MySQL via Helm here.

Here PERSISTENT_SIZE is a size for the Database persistent volume. For example: 8Gi.

It's recommended to use at least 2Gi of persistent storage for every 100 active users of ONLYOFFICE Docs.

Set metrics.enabled=true to expose Database metrics so they can be gathered by Prometheus. Also add the following parameters: metrics.image.repository=bitnamilegacy/mysqld-exporter and metrics.image.tag=0.17.2-debian-12-r16.
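The ONLYOFFICE Docs defaults assume PostgreSQL (see the connections.* parameters in the Parameters section). If you installed MySQL with the command above, override the database connection parameters when installing ONLYOFFICE Docs. The secret name and key below follow Bitnami MySQL chart conventions and should be verified against your release:

```
$ helm install documentserver onlyoffice/docs \
  --set connections.dbType=mysql \
  --set connections.dbHost=mysql \
  --set connections.dbPort=3306 \
  --set connections.dbUser=onlyoffice \
  --set connections.dbName=onlyoffice \
  --set connections.dbExistingSecret=mysql \
  --set connections.dbSecretKeyName=mysql-password
```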
6. Deploy StatsD exporter
Important This step is optional. You can skip it entirely if you don't want to run StatsD exporter.
  1. Add Helm repositories
    $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    $ helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
    $ helm repo update
  2. Installing Prometheus

    To install Prometheus to your cluster, run the following command:

    $ helm install prometheus -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/extraScrapeConfigs.yaml prometheus-community/prometheus \
      --set server.global.scrape_interval=1m

    To change the scrape interval, specify the server.global.scrape_interval parameter.

    See more details about installing Prometheus via Helm here.

  3. Installing StatsD exporter

    To install StatsD exporter to your cluster, run the following command:

    $ helm install statsd-exporter prometheus-community/prometheus-statsd-exporter \
      --set statsd.udpPort=8125 \
      --set statsd.tcpPort=8126 \
      --set statsd.eventFlushInterval=30000ms

    See more details about installing Prometheus StatsD exporter via Helm here.

    To allow the StatsD metrics in ONLYOFFICE Docs, follow this step.
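With the release name and ports used above, StatsD can then be enabled in ONLYOFFICE Docs through the metrics.* parameters from the Parameters section; the host and port below match the chart defaults:

```
$ helm install documentserver onlyoffice/docs \
  --set metrics.enabled=true \
  --set metrics.host=statsd-exporter-prometheus-statsd-exporter \
  --set metrics.port=8125
```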

7. Make changes to Node-config configuration files
Important This step is optional. You can skip it entirely if you don't need to make changes to the configuration files.
  1. Create a ConfigMap containing a json file

    In order to create a ConfigMap from a file that contains the local.json structure, you need to run the following command:

    $ kubectl create configmap local-config \
      --from-file=./local.json
    Any name can be used instead of local-config.
  2. Specify parameters when installing ONLYOFFICE Docs

    When installing ONLYOFFICE Docs, specify the extraConf.configMap=local-config and extraConf.filename=local.json parameters.

    If you need to add a configuration file after ONLYOFFICE Docs is already installed, execute step 7.1 and then run:
    $ helm upgrade documentserver onlyoffice/docs --set extraConf.configMap=local-config --set extraConf.filename=local.json --no-hooks
    or, if the parameters are specified in the values.yaml file:
    $ helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks
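As an illustration, a minimal local.json enabling token validation in the browser might look as follows. The key layout follows the ONLYOFFICE Docs server configuration format; treat the specific keys as an assumption and check them against the default.json shipped with your version:

```json
{
  "services": {
    "CoAuthoring": {
      "token": {
        "enable": {
          "browser": true
        }
      }
    }
  }
}
```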
8. Add custom Fonts
Important This step is optional. You can skip it entirely if you don't need to add your fonts.

In order to add fonts to images, you need to rebuild the images. Refer to the relevant steps in this manual. Then specify your images when installing the ONLYOFFICE Docs.

9. Add Plugins
Important This step is optional. You can skip it entirely if you don't need to add plugins.

In order to add plugins to images, you need to rebuild the images. Refer to the relevant steps in this manual. Then specify your images when installing the ONLYOFFICE Docs.

10. Add custom dictionaries
Important This step is optional. You can skip it entirely if you don't need to add your dictionaries.

In order to add your custom dictionaries to images, you need to rebuild the images. Refer to the relevant steps in this manual. Then specify your images when installing the ONLYOFFICE Docs.

11. Change interface themes
Important This step is optional. You can skip it entirely if you don't need to change the interface themes.
  1. Create a ConfigMap containing a json file

    To create a ConfigMap with a json file that contains the interface themes, you need to run the following command:

    $ kubectl create configmap custom-themes \
      --from-file=./custom-themes.json
    Instead of custom-themes and custom-themes.json you can use any other names.
  2. Specify parameters when installing ONLYOFFICE Docs

    When installing ONLYOFFICE Docs, specify the extraThemes.configMap=custom-themes and extraThemes.filename=custom-themes.json parameters.

    If you need to add interface themes after ONLYOFFICE Docs is already installed, execute step 11.1 and then run:
    $ helm upgrade documentserver onlyoffice/docs --set extraThemes.configMap=custom-themes --set extraThemes.filename=custom-themes.json --no-hooks
    or, if the parameters are specified in the values.yaml file:
    $ helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks
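For illustration, a custom-themes.json is expected to contain a themes array of theme objects (id, name, type and a colors map). The structure and the single color key below are assumptions based on the ONLYOFFICE interface themes format; verify them against the official themes documentation:

```json
{
  "themes": [
    {
      "id": "theme-custom",
      "name": "Custom theme",
      "type": "light",
      "colors": {
        "toolbar-header-document": "#446995"
      }
    }
  ]
}
```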
12. Connecting Amazon S3 bucket as a cache to ONLYOFFICE Helm Docs

To connect an Amazon S3 bucket as a cache, create a configuration file or edit the existing one in accordance with this guide and set the parameter persistence.storageS3 to true.

Deploy ONLYOFFICE Docs

When installing into an OpenShift cluster, it may be required to apply a SecurityContextConstraints (SCC) policy that grants permission to run containers as a user with ID 101.

To do this, run the following commands:

$ oc apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/scc/docs-components.yaml
$ oc adm policy add-scc-to-group scc-docs-components system:authenticated

Alternatively, you can apply the nonroot-v2 SecurityContextConstraints (SCC) policy via the commonAnnotations parameter or the annotations of all resources that describe a podTemplate. Ensure that both the user and the service account have the necessary permissions to use this SCC. To verify who has permission to use nonroot-v2, execute the following command:

$ oc adm policy who-can use scc nonroot-v2

Then install with the required-scc annotation, for example:

$ helm install documentserver onlyoffice/docs --set commonAnnotations."openshift\.io/required-scc"="nonroot-v2"

If necessary, set podSecurityContext.enabled and containerSecurityContext.enabled to true.

1. Deploy the ONLYOFFICE Docs license
  1. Create secret

    If you have a valid ONLYOFFICE Docs license, create a secret license from the file:

    $ kubectl create secret generic [SECRET_LICENSE_NAME] --from-file=path/to/license.lic

    Where SECRET_LICENSE_NAME is the name of the future secret containing the license.

    The source license file must be named license.lic, because this name is used as a key in the created secret.
    If the installation is performed without creating a secret from an existing license file, an empty license secret will be created automatically. For information on how to update an existing secret with a license, see here.
  2. Specify parameters when installing ONLYOFFICE Docs

    When installing ONLYOFFICE Docs, specify the license.existingSecret=[SECRET_LICENSE_NAME] parameter.

    $ helm install documentserver onlyoffice/docs --set license.existingSecret=[SECRET_LICENSE_NAME]
    If you need to add a license after ONLYOFFICE Docs is already installed, execute step 1.1 and then run:
    $ helm upgrade documentserver onlyoffice/docs --set license.existingSecret=[SECRET_LICENSE_NAME] --no-hooks
    or, if the parameters are specified in the values.yaml file:
    $ helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks
2. Deploy ONLYOFFICE Docs

To deploy ONLYOFFICE Docs with the release name documentserver:

$ helm install documentserver onlyoffice/docs

The command deploys ONLYOFFICE Docs on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

When installing ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.
3. Uninstall ONLYOFFICE Docs

To uninstall/delete the documentserver deployment:

$ helm delete documentserver

Executing the helm delete command launches hooks that perform preparatory actions before ONLYOFFICE Docs is completely deleted: stopping the server and cleaning up the used PVC and database tables. The default hook execution time is 300s. It can be changed using --timeout [time], for example:

$ helm delete documentserver --timeout 25m
When deleting ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.

If you want to delete the ONLYOFFICE Docs without any preparatory actions, run the following command:

$ helm delete documentserver --no-hooks

The helm delete command removes all the Kubernetes components associated with the chart and deletes the release.

4. Parameters
connections.dbType - The database type. Possible values are postgres, mariadb, mysql, oracle, mssql or dameng. Default: postgres
connections.dbHost - The IP address or the name of the Database host. Default: postgresql
connections.dbUser - Database user. Default: postgres
connections.dbPort - Database server port number. Default: 5432
connections.dbName - Name of the Database the application will be connected with. Default: postgres
connections.dbPassword - Database user password. If set, it takes priority over connections.dbExistingSecret. Default: ""
connections.dbSecretKeyName - The name of the key that contains the Database user password. Default: postgres-password
connections.dbExistingSecret - Name of an existing secret to use for Database passwords. Must contain the key specified in connections.dbSecretKeyName. Default: postgresql
connections.redisConnectorName - Defines which connector to use to connect to Redis. If you need to connect to Redis Sentinel, set the value to ioredis. Default: redis
connections.redisHost - The IP address or the name of the Redis host. Not used if the values are set in connections.redisClusterNodes and connections.redisSentinelNodes. Default: redis-master
connections.redisPort - The Redis server port number. Not used if the values are set in connections.redisClusterNodes and connections.redisSentinelNodes. Default: 6379
connections.redisUser - The Redis user name. The value in this parameter overrides the value set in the options object in local.json if you add a custom configuration file. Default: default
connections.redisDBNum - Number of the Redis logical database to be selected. The value in this parameter overrides the value set in the options object in local.json if you add a custom configuration file. Default: 0
connections.redisClusterNodes - List of nodes in the Redis cluster. There is no need to specify every node in the cluster, 3 should be enough. You can specify multiple values. They must be specified in the host:port format. Default: []
connections.redisPassword - The password set for the Redis account. If set, it takes priority over connections.redisExistingSecret. The value in this parameter overrides the value set in the options object in local.json if you add a custom configuration file. Default: ""
connections.redisSecretKeyName - The name of the key that contains the Redis user password. Default: redis-password
connections.redisExistingSecret - Name of an existing secret to use for Redis passwords. Must contain the key specified in connections.redisSecretKeyName. The password from this secret overrides the password set in the options object in local.json. Default: redis
connections.redisNoPass - Defines whether to use Redis auth without a password. If the connection to the Redis server does not require a password, set the value to true. Default: false
connections.redisSentinelNodes - List of Redis Sentinel nodes. There is no need to specify every node, 3 should be enough. You can specify multiple values. They must be specified in the host:port format. Used if connections.redisConnectorName is set to ioredis. Default: []
connections.redisSentinelGroupName - Name of a group of Redis instances composed of a master and one or more slaves. Used if connections.redisConnectorName is set to ioredis. Default: mymaster
connections.redisSentinelExistingSecret - Name of an existing secret to use for the Redis Sentinel password. Must contain the key specified in connections.redisSentinelSecretKeyName. The password from this secret overrides the value for the password set in the iooptions object in local.json. Default: ""
connections.redisSentinelSecretKeyName - The name of the key that contains the Redis Sentinel user password. If you set a password in redisSentinelPassword, a secret will be automatically created, the key name of which will be the value set here. Default: sentinel-password
connections.redisSentinelPassword - The password set for the Redis Sentinel account. If set, it takes priority over connections.redisSentinelExistingSecret. The value in this parameter overrides the value set in the iooptions object in local.json. Default: ""
connections.redisSentinelNoPass - Defines whether to use Redis Sentinel auth without a password. If the connection to Redis Sentinel does not require a password, set the value to true. Default: true
connections.amqpType - Defines the AMQP server type. Possible values are rabbitmq or activemq. Default: rabbitmq
connections.amqpHost - The IP address or the name of the AMQP server. Default: rabbitmq
connections.amqpPort - The port for the connection to the AMQP server. Default: 5672
connections.amqpVhost - The virtual host for the connection to the AMQP server. Default: /
connections.amqpUser - The username for the AMQP server account. Default: user
connections.amqpProto - The protocol for the connection to the AMQP server. Default: amqp
connections.amqpPassword - AMQP server user password. If set, it takes priority over connections.amqpExistingSecret. Default: ""
connections.amqpSecretKeyName - The name of the key that contains the AMQP server user password. Default: rabbitmq-password
connections.amqpExistingSecret - The name of an existing secret to use for AMQP server passwords. Must contain the key specified in connections.amqpSecretKeyName. Default: rabbitmq
persistence.existingClaim - Name of an existing PVC to use. If not specified, a PVC named "ds-files" will be created. Default: ""
persistence.annotations - Defines annotations that will be additionally added to the "ds-files" PVC. If set, it takes priority over commonAnnotations. Default: {}
persistence.storageClass - PVC Storage Class for ONLYOFFICE Docs data and runtime config volumes. Default: nfs
persistence.size - PVC Storage Request for the ONLYOFFICE Docs volume. Default: 8Gi
persistence.storageS3 - Defines whether S3 will be used as cache storage. Set to true if you will use S3 as cache storage. Default: false
persistence.runtimeConfig.enabled - Defines whether to use a PVC and whether to mount it in containers. Default: true
persistence.runtimeConfig.existingClaim - The name of the existing PVC used to store the runtime config. If not specified, a PVC named "ds-runtime-config" will be created. Default: ""
persistence.runtimeConfig.annotations - Defines annotations that will be additionally added to the "ds-runtime-config" PVC. If set, it takes priority over commonAnnotations. Default: {}
persistence.runtimeConfig.size - PVC Storage Request for the runtime config volume. Default: 1Gi
commonNameSuffix - The suffix that will be added to the name of all created resources. Default: ""
namespaceOverride - The name of the namespace in which ONLYOFFICE Docs will be deployed. If not set, the name will be taken from .Release.Namespace. Default: ""
commonLabels - Defines labels that will be additionally added to all the deployed resources. You can also use tpl as the value for the key. Default: {}
commonAnnotations - Defines annotations that will be additionally added to all the deployed resources. You can also use tpl as the value for the key. Some resources may override the values specified here with their own. Default: {}
serviceAccount.create - Enable ServiceAccount creation. Default: false
serviceAccount.name - Name of the ServiceAccount to be used. If not set and serviceAccount.create is true, the name will be taken from .Release.Name; if serviceAccount.create is false, the name will be "default". Default: ""
serviceAccount.annotations - Map of annotations to add to the ServiceAccount. If set, it takes priority over commonAnnotations. Default: {}
serviceAccount.automountServiceAccountToken - Enable auto mount of the ServiceAccountToken on the created ServiceAccount. Used only if serviceAccount.create is true. Default: true
license.existingSecret - Name of the existing secret that contains the license. Must contain the key license.lic. Default: ""
license.existingClaim - Name of the existing PVC in which the license is stored. Must contain the file license.lic. Default: ""
log.level - Defines the type and severity of a logged event. Possible values are ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, MARK, OFF. Default: WARN
log.type - Defines the format of a logged event. Possible values are pattern, json, basic, coloured, messagePassThrough, dummy. Default: pattern
log.pattern - Defines the log pattern if log.type=pattern. Default: [%d] [%p] %c - %.10000m
wopi.enabled - Defines if WOPI is enabled. If the parameter is enabled, caching attributes for the mounted directory (PVC) should be disabled for the client. Default: false
wopi.keys.generation - Defines whether to generate API keys. Used if you set wopi.enabled to true. Default: true
wopi.keys.newKeysExistingSecret - Name of an existing secret containing the WOPI keys. Must contain the keys WOPI_PRIVATE_KEY, WOPI_PUBLIC_KEY, WOPI_MODULUS_KEY and WOPI_EXPONENT_KEY. If not set, new keys will be generated and a secret will be created from them. Default: ""
wopi.keys.oldKeysExistingSecret - Name of an existing secret containing the old WOPI keys. Must contain the keys WOPI_PRIVATE_KEY_OLD, WOPI_PUBLIC_KEY_OLD, WOPI_MODULUS_KEY_OLD and WOPI_EXPONENT_KEY_OLD. If not set, new keys will be generated and a secret will be created from them. Default: ""
metrics.enabled - Specifies whether to enable StatsD for ONLYOFFICE Docs. Default: false
metrics.host - Defines the StatsD listening host. Default: statsd-exporter-prometheus-statsd-exporter
metrics.port - Defines the StatsD listening port. Default: 8125
metrics.prefix - Defines the StatsD metrics prefix for backend services. Default: ds.
extraConf.configMap - The name of the ConfigMap containing the json file that overrides the default values. Default: ""
extraConf.filename - The name of the json file that contains custom values. Must be the same as the key name in extraConf.configMap. Default: local.json
extraThemes.configMap - The name of the ConfigMap containing the json file that contains the interface themes. Default: ""
extraThemes.filename - The name of the json file that contains custom interface themes. Must be the same as the key name in extraThemes.configMap. Default: custom-themes.json
podAntiAffinity.type - Type of Pod anti-affinity. Allowed values: soft or hard. Default: soft
podAntiAffinity.topologyKey - Node label key to match. Default: kubernetes.io/hostname
podAntiAffinity.weight - Priority when selecting a node. The value is in the range from 1 to 100. Default: 100
nodeSelector - Node labels for Pods assignment. Each ONLYOFFICE Docs service can override the values specified here with its own. Default: {}
tolerations - Tolerations for Pods assignment. Each ONLYOFFICE Docs service can override the values specified here with its own. Default: []
imagePullSecrets - Container image registry secret name. Default: ""
requestFilteringAgent.allowPrivateIPAddress - Defines whether connecting to a private IP address is allowed. The requestFilteringAgent parameters are used if JWT is disabled: jwt.enabled=false. Default: false
requestFilteringAgent.allowMetaIPAddress - Defines whether connecting to a meta address is allowed. Default: false
requestFilteringAgent.allowIPAddressList - Defines the list of IP addresses allowed to connect. These values take precedence over requestFilteringAgent.denyIPAddressList. Default: []
requestFilteringAgent.denyIPAddressList - Defines the list of IP addresses that are not allowed to connect. Default: []
docservice.annotations - Defines annotations that will be additionally added to the Docservice Deployment. If set, it takes priority over commonAnnotations. Default: {}
docservice.podAnnotations - Map of annotations to add to the Docservice deployment Pods. Default: rollme: "{{ randAlphaNum 5 | quote }}"
docservice.replicas - Docservice replicas quantity. Ignored if the docservice.autoscaling.enabled parameter is enabled. Default: 2
docservice.updateStrategy.type - Docservice deployment update strategy type. Default: Recreate
docservice.customPodAntiAffinity - Prohibits scheduling Docservice Pods on the same node as other Pods containing the specified labels. Default: {}
docservice.podAffinity - Defines Pod affinity rules for Docservice Pods scheduling by nodes relative to other Pods. Default: {}
docservice.nodeAffinity - Defines Node affinity rules for Docservice Pods scheduling by nodes. Default: {}
docservice.nodeSelector - Node labels for Docservice Pods assignment. If set, it takes priority over nodeSelector. Default: {}
docservice.tolerations - Tolerations for Docservice Pods assignment. If set, it takes priority over tolerations. Default: []
docservice.terminationGracePeriodSeconds - The graceful termination period during which the Docservice Pod has the Terminating status. Default: 30
docservice.hostAliases - Adds additional entries to the hosts file in the Docservice and Proxy containers. Default: []
docservice.initContainers - Defines containers that run before the docservice and proxy containers in the Docservice deployment Pod, for example, a container that changes the owner of the PersistentVolume. Default: []
docservice.image.repository - Docservice container image repository. Default: *onlyoffice/docs-docservice-de
docservice.image.tag - Docservice container image tag. Default: 9.0.4-1
docservice.image.pullPolicy - Docservice container image pull policy. Default: IfNotPresent
docservice.containerSecurityContext.enabled - Enable security context for the Docservice container. Default: false
docservice.lifecycleHooks - Defines the Docservice container lifecycle hooks. Used to trigger events to run at certain points in a container's lifecycle. Default: {}
docservice.resources.requests - The requested resources for the Docservice container. Default: {}
docservice.resources.limits - The resources limits for the Docservice container. Default: {}
docservice.extraEnvVars - An array with extra env variables for the Docservice container. Default: []
docservice.extraVolumes - An array with extra volumes for the Docservice Pod. Default: []
docservice.extraVolumeMounts - An array with extra volume mounts for the Docservice container. Default: []
docservice.readinessProbe.enabled - Enable readinessProbe for the Docservice container. Default: true
docservice.livenessProbe.enabled - Enable livenessProbe for the Docservice container. Default: true
docservice.startupProbe.enabled - Enable startupProbe for the Docservice container. Default: true
docservice.autoscaling.enabled - Enable Docservice deployment autoscaling. Default: false
docservice.autoscaling.annotations - Defines annotations that will be additionally added to the Docservice deployment HPA. If set, it takes priority over commonAnnotations. Default: {}
docservice.autoscaling.minReplicas - Docservice deployment autoscaling minimum number of replicas. Default: 2
docservice.autoscaling.maxReplicas - Docservice deployment autoscaling maximum number of replicas. Default: 4
docservice.autoscaling.targetCPU.enabled - Enable autoscaling of the Docservice deployment by CPU usage percentage. Default: true
docservice.autoscaling.targetCPU.utilizationPercentage - Docservice deployment autoscaling target CPU percentage. Default: 70
docservice.autoscaling.targetMemory.enabled - Enable autoscaling of the Docservice deployment by memory usage percentage. Default: false
docservice.autoscaling.targetMemory.utilizationPercentage - Docservice deployment autoscaling target memory percentage. Default: 70
docservice.autoscaling.customMetricsType - Custom, additional or external autoscaling metrics for the Docservice deployment. Default: []
docservice.autoscaling.behavior - Configures Docservice deployment scaling behavior policies for the scaleDown and scaleUp fields. Default: {}
proxy.accessLog - Defines the nginx config access_log format directive. Default: off
proxy.logFormat - Defines the format of log entries using text and various variables. Default: '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'
proxy.gzipProxied - Defines the nginx config gzip_proxied directive. Default: off
proxy.clientMaxBodySize - Defines the nginx config client_max_body_size directive. Default: 100m
proxy.workerConnections - Defines the nginx config worker_connections directive. Default: 4096
proxy.secureLinkSecret - Defines the secret for the nginx config directive secure_link_md5. If the value is empty, a random one will be generated and reused later on upgrade. If a value is set, it will be used. Default: ""
proxy.secureLinkExistingSecret - Name of an existing secret to use for secure_link. If set, it takes priority over proxy.secureLinkSecret. Default: ""
proxy.infoAllowedIP - Defines the IP addresses allowed to access the info page. Default: []
proxy.infoAllowedUser - Defines the user name for accessing the info page. If not set, Nginx Basic Authentication will not be applied to access the info page. For more details, see here. Default: ""
proxy.infoAllowedPassword - Defines the user password for accessing the info page. Used if proxy.infoAllowedUser is set. If the value is empty, a random one will be generated and reused later on upgrade. If a value is set, it will be used. Default: ""
proxy.infoAllowedSecretKeyName - The name of the key that contains the info auth user password. Used if proxy.infoAllowedUser is set. Default: info-auth-password
proxy.infoAllowedExistingSecret - Name of an existing secret to use for the info auth password. Used if proxy.infoAllowedUser is set. Must contain the key specified in proxy.infoAllowedSecretKeyName. If set, it takes priority over proxy.infoAllowedPassword. Default: ""
proxy.welcomePage.enabled - Defines whether the welcome page will be displayed. Default: true
proxy.image.repository - Docservice Proxy container image repository. Default: *onlyoffice/docs-proxy-de
proxy.image.tag - Docservice Proxy container image tag. Default: 9.0.4-1
proxy.image.pullPolicy - Docservice Proxy container image pull policy. Default: IfNotPresent
proxy.containerSecurityContext.enabled - Enable security context for the Proxy container. Default: false
proxy.lifecycleHooks - Defines the Proxy container lifecycle hooks. Used to trigger events to run at certain points in a container's lifecycle. Default: {}
proxy.resources.requests - The requested resources for the Proxy container. Default: {}
proxy.resources.limits - The resources limits for the Proxy container. Default: {}
proxy.extraEnvVars - An array with extra env variables for the Proxy container. Default: []
proxy.extraVolumeMounts - An array with extra volume mounts for the Proxy container. Default: []
proxy.readinessProbe.enabled - Enable readinessProbe for the Proxy container. Default: true
proxy.livenessProbe.enabled - Enable livenessProbe for the Proxy container. Default: true
proxy.startupProbe.enabled - Enable startupProbe for the Proxy container. Default: true
converter.annotationsDefines annotations that will be additionally added to Converter Deployment. If set, it takes priority over the commonAnnotations{}
converter.podAnnotationsMap of annotations to add to the Converter deployment podsrollme: "{{ randAlphaNum 5 | quote }}"
converter.replicasConverter replicas quantity. If the converter.autoscaling.enabled parameter is enabled, it is ignored2
converter.updateStrategy.typeConverter deployment update strategy typeRecreate
converter.customPodAntiAffinityProhibiting the scheduling of Converter Pods relative to other Pods containing the specified labels on the same node{}
converter.podAffinityDefines Pod affinity rules for Converter Pods scheduling by nodes relative to other Pods{}
converter.nodeAffinityDefines Node affinity rules for Converter Pods scheduling by nodes{}
converter.nodeSelectorNode labels for Converter Pods assignment. If set, it takes priority over the nodeSelector{}
converter.tolerationsTolerations for Converter Pods assignment. If set, it takes priority over the tolerations[]
converter.terminationGracePeriodSecondsThe time to terminate gracefully during which the Converter Pod will have the Terminating status30
converter.hostAliasesAdds additional entries to the hosts file in the Converter container[]
converter.initContainersDefines containers that run before Converter container in the Converter deployment pod. For example, a container that changes the owner of the PersistentVolume[]
converter.image.repositoryConverter container image repository*onlyoffice/docs-converter-de
converter.image.tagConverter container image tag9.0.4-1
converter.image.pullPolicyConverter container image pull policyIfNotPresent
converter.containerSecurityContext.enabledEnable security context for the Converter containerfalse
converter.lifecycleHooksDefines the Converter container lifecycle hooks. It is used to trigger events to run at certain points in a container's lifecycle{}
converter.resources.requestsThe requested resources for the Converter container{}
converter.resources.limitsThe resources limits for the Converter container{}
converter.extraEnvVarsAn array with extra env variables for the Converter container[]
converter.extraVolumesAn array with extra volumes for the Converter Pod[]
converter.extraVolumeMountsAn array with extra volume mounts for the Converter container[]
converter.autoscaling.enabledEnable Converter deployment autoscalingfalse
converter.autoscaling.annotationsDefines annotations that will be additionally added to Converter deployment HPA. If set, it takes priority over the commonAnnotations{}
converter.autoscaling.minReplicasConverter deployment autoscaling minimum number of replicas2
converter.autoscaling.maxReplicasConverter deployment autoscaling maximum number of replicas16
converter.autoscaling.targetCPU.enabledEnable autoscaling of converter deployment by CPU usage percentagetrue
converter.autoscaling.targetCPU.utilizationPercentageConverter deployment autoscaling target CPU percentage70
converter.autoscaling.targetMemory.enabledEnable autoscaling of Converter deployment by memory usage percentagefalse
converter.autoscaling.targetMemory.utilizationPercentageConverter deployment autoscaling target memory percentage70
converter.autoscaling.customMetricsTypeCustom, additional or external autoscaling metrics for the Converter deployment[]
converter.autoscaling.behaviorConfiguring Converter deployment scaling behavior policies for the scaleDown and scaleUp fields{}
example.enabledEnables the installation of Examplefalse
example.annotationsDefines annotations that will be additionally added to Example StatefulSet. If set, it takes priority over the commonAnnotations{}
example.podAnnotationsMap of annotations to add to the example podrollme: "{{ randAlphaNum 5 | quote }}"
example.updateStrategy.typeExample StatefulSet update strategy typeRollingUpdate
example.customPodAntiAffinityProhibiting the scheduling of Example Pod relative to other Pods containing the specified labels on the same node{}
example.podAffinityDefines Pod affinity rules for Example Pod scheduling by nodes relative to other Pods{}
example.nodeAffinityDefines Node affinity rules for Example Pod scheduling by nodes{}
example.nodeSelectorNode labels for Example Pods assignment. If set, it takes priority over the nodeSelector{}
example.tolerationsTolerations for Example Pods assignment. If set, it takes priority over the tolerations[]
example.terminationGracePeriodSecondsThe time to terminate gracefully during which the Example Pod will have the Terminating status30
example.hostAliasesAdds additional entries to the hosts file in the Example container[]
example.initContainersDefines containers that run before Example container in the Pod[]
example.image.repositoryExample container image nameonlyoffice/docs-example
example.image.tagExample container image tag9.0.4-1
example.image.pullPolicyExample container image pull policyIfNotPresent
example.containerSecurityContext.enabledEnable security context for the Example containerfalse
example.dsUrlONLYOFFICE Docs external address. Change it only if you need to check the operation of the conversion in the Example (e.g. http://<documentserver-address>/)/
example.resources.requestsThe requested resources for the Example container{}
example.resources.limitsThe resources limits for the Example container{}
example.extraEnvVarsAn array with extra env variables for the Example container[]
example.extraConf.configMapThe name of the ConfigMap containing the json file that overrides the default values. See an example of creation here""
example.extraConf.filenameThe name of the json file that contains custom values. Must be the same as the key name in example.extraConf.ConfigMaplocal.json
example.extraVolumesAn array with extra volumes for the Example Pod[]
example.extraVolumeMountsAn array with extra volume mounts for the Example container[]
jwt.enabledSpecifies whether JSON Web Token validation by the ONLYOFFICE Docs is enabled. Common for inbox and outbox requeststrue
jwt.secretDefines the secret key to validate the JSON Web Token in the request to the ONLYOFFICE Docs. Common for inbox and outbox requests. If the value is empty, a random one will be generated, which will be used later in the upgrade. If a value is set, it will be used""
jwt.headerDefines the http header that will be used to send the JSON Web Token. Common for inbox and outbox requestsAuthorization
jwt.inBodySpecifies whether token validation in the request body to the ONLYOFFICE Docs is enabledfalse
jwt.inboxJSON Web Token validation parameters for inbox requests only. If not specified, the values of the parameters of the common jwt are used{}
jwt.outboxJSON Web Token validation parameters for outbox requests only. If not specified, the values of the parameters of the common jwt are used{}
jwt.existingSecretThe name of an existing secret containing variables for jwt. If not specified, a secret named jwt will be created""
service.existingThe name of an existing service for ONLYOFFICE Docs. If not specified, a service named documentserver will be created""
service.annotationsMap of annotations to add to the ONLYOFFICE Docs service. If set, it takes priority over the commonAnnotations{}
service.typeONLYOFFICE Docs service typeClusterIP
service.portONLYOFFICE Docs service port8888
service.sessionAffinitySession Affinity for ONLYOFFICE Docs service. If not set, None will be set as the default value""
service.sessionAffinityConfigConfiguration for ONLYOFFICE Docs service Session Affinity. Used if the service.sessionAffinity is set{}
ingress.enabledEnable the creation of an ingress for the ONLYOFFICE Docsfalse
ingress.annotationsMap of annotations to add to the Ingress. If set, it takes priority over the commonAnnotationsnginx.ingress.kubernetes.io/proxy-body-size: 100m
ingress.ingressClassNameUsed to reference the IngressClass that should be used to implement this Ingressnginx
ingress.controllerNameUsed to distinguish between controllers with the same IngressClassName but from different vendorsingress-nginx
ingress.hostIngress hostname for the ONLYOFFICE Docs ingress""
ingress.tenantsIngress hostnames if you need to use more than one name. For example, for multitenancy. If set, it takes priority over the ingress.host. If ingress.ssl.enabled is set to true, it is assumed that the certificate for all specified domains is stored in the secret specified by ingress.ssl.secret[]
ingress.ssl.enabledEnable ssl for the ONLYOFFICE Docs ingressfalse
ingress.ssl.secretSecret name for ssl to mount into the Ingresstls
ingress.pathSpecifies the path where ONLYOFFICE Docs will be available/
ingress.pathTypeSpecifies the path type for the ONLYOFFICE Docs ingress resource. Allowed values are Exact, Prefix or ImplementationSpecificImplementationSpecific
ingress.letsencrypt.enabledEnabling certificate request creation in Let's Encrypt. Used if ingress.enabled is set to truefalse
ingress.letsencrypt.clusterIssuerNameClusterIssuer Nameletsencrypt-prod
ingress.letsencrypt.emailYour email address used for ACME registration""
ingress.letsencrypt.serverThe address of the Let's Encrypt server to which requests for certificates will be senthttps://acme-v02.api.letsencrypt.org/directory
ingress.letsencrypt.secretNameName of a secret used to store the ACME account private keyletsencrypt-prod-private-key
openshift.route.enabledEnable the creation of an OpenShift Route for the ONLYOFFICE Docsfalse
openshift.route.annotationsMap of annotations to add to the OpenShift Route. If set, it takes priority over the commonAnnotations{}
openshift.route.hostOpenShift Route hostname for the ONLYOFFICE Docs route""
openshift.route.pathSpecifies the path where ONLYOFFICE Docs will be available/
openshift.route.wildcardPolicyThe policy for handling wildcard subdomains in the OpenShift Route. Allowed values are None, SubdomainNone
grafana.enabledEnable the installation of resources required for the visualization of metrics in Grafanafalse
grafana.namespaceThe name of the namespace in which RBAC components and Grafana resources will be deployed. If not set, the name will be taken from namespaceOverride if set, or .Release.Namespace""
grafana.ingress.enabledEnable the creation of an ingress for the Grafana. Used if you set grafana.enabled to true and want to use Nginx Ingress to access Grafanafalse
grafana.ingress.annotationsMap of annotations to add to Grafana Ingress. If set, it takes priority over the commonAnnotationsnginx.ingress.kubernetes.io/proxy-body-size: 100m
grafana.dashboard.enabledEnable the installation of ready-made Grafana dashboards. Used if you set grafana.enabled to truefalse
podSecurityContext.enabledEnable security context for the podsfalse
podSecurityContext.converter.fsGroupDefines the Group ID to which the owner and permissions for all files in volumes are changed when mounted in the Converter Pod101
podSecurityContext.docservice.fsGroupDefines the Group ID to which the owner and permissions for all files in volumes are changed when mounted in the Docservice Pod101
podSecurityContext.jobs.fsGroupDefines the Group ID to which the owner and permissions for all files in volumes are changed when mounted in Pods created by Jobs101
podSecurityContext.example.fsGroupDefines the Group ID to which the owner and permissions for all files in volumes are changed when mounted in the Example Pod1001
podSecurityContext.tests.fsGroupDefines the Group ID to which the owner and permissions for all files in volumes are changed when mounted in the Test Pod101
webProxy.enabledSpecifies whether a web proxy is used in your network for the k8s cluster Pods to access the Internetfalse
webProxy.httpWeb Proxy address for HTTP traffichttp://proxy.example.com
webProxy.httpsWeb Proxy address for HTTPS traffichttps://proxy.example.com
webProxy.noProxyPatterns for IP addresses or k8s services name or domain names that shouldn’t use the Web Proxylocalhost,127.0.0.1,docservice
privateClusterSpecifies whether the k8s cluster is used in a private network without internet accessfalse
upgrade.job.enabledEnable the execution of the pre-upgrade Job before upgrading ONLYOFFICE Docstrue
upgrade.job.annotationsDefines annotations that will be additionally added to pre-upgrade Job. If set, it takes priority over the commonAnnotations{}
upgrade.job.podAnnotationsMap of annotations to add to the pre-upgrade Pod{}
upgrade.job.customPodAntiAffinityProhibiting the scheduling of pre-upgrade Job Pod relative to other Pods containing the specified labels on the same node{}
upgrade.job.podAffinityDefines Pod affinity rules for pre-upgrade Job Pod scheduling by nodes relative to other Pods{}
upgrade.job.nodeAffinityDefines Node affinity rules for pre-upgrade Job Pod scheduling by nodes{}
upgrade.job.nodeSelectorNode labels for pre-upgrade Job Pod assignment. If set, it takes priority over the nodeSelector{}
upgrade.job.tolerationsTolerations for pre-upgrade Job Pod assignment. If set, it takes priority over the tolerations[]
upgrade.job.initContainersDefines containers that run before pre-upgrade container in the Pod[]
upgrade.job.image.repositoryPre-upgrade Job image repositoryonlyoffice/docs-utils
upgrade.job.image.tagPre-upgrade Job image tag9.0.4-1
upgrade.job.image.pullPolicyPre-upgrade Job image pull policyIfNotPresent
upgrade.job.containerSecurityContext.enabledEnable security context for the pre-upgrade containerfalse
upgrade.job.resources.requestsThe requested resources for the job pre-upgrade container{}
upgrade.job.resources.limitsThe resources limits for the job pre-upgrade container{}
upgrade.existingConfigmap.tblRemove.nameThe name of the existing ConfigMap that contains the sql file for deleting tables from the databaseremove-db-scripts
upgrade.existingConfigmap.tblRemove.keyNameThe name of the sql file containing instructions for deleting tables from the database. Must be the same as the key name in upgrade.existingConfigmap.tblRemove.nameremovetbl.sql
upgrade.existingConfigmap.tblCreate.nameThe name of the existing ConfigMap that contains the sql file for creating tables in the databaseinit-db-scripts
upgrade.existingConfigmap.tblCreate.keyNameThe name of the sql file containing instructions for creating tables in the database. Must be the same as the key name in upgrade.existingConfigmap.tblCreate.namecreatedb.sql
upgrade.existingConfigmap.dsStopThe name of the existing ConfigMap that contains the ONLYOFFICE Docs upgrade script. If set, the four previous parameters are ignored. Must contain a key stop.sh""
rollback.job.enabledEnable the execution of the pre-rollback Job before rolling back ONLYOFFICE Docstrue
rollback.job.annotationsDefines annotations that will be additionally added to pre-rollback Job. If set, it takes priority over the commonAnnotations{}
rollback.job.podAnnotationsMap of annotations to add to the pre-rollback Pod{}
rollback.job.customPodAntiAffinityProhibiting the scheduling of pre-rollback Job Pod relative to other Pods containing the specified labels on the same node{}
rollback.job.podAffinityDefines Pod affinity rules for pre-rollback Job Pod scheduling by nodes relative to other Pods{}
rollback.job.nodeAffinityDefines Node affinity rules for pre-rollback Job Pod scheduling by nodes{}
rollback.job.nodeSelectorNode labels for pre-rollback Job Pod assignment. If set, it takes priority over the nodeSelector{}
rollback.job.tolerationsTolerations for pre-rollback Job Pod assignment. If set, it takes priority over the tolerations[]
rollback.job.initContainersDefines containers that run before pre-rollback container in the Pod[]
rollback.job.image.repositoryPre-rollback Job image repositoryonlyoffice/docs-utils
rollback.job.image.tagPre-rollback Job image tag9.0.4-1
rollback.job.image.pullPolicyPre-rollback Job image pull policyIfNotPresent
rollback.job.containerSecurityContext.enabledEnable security context for the pre-rollback containerfalse
rollback.job.resources.requestsThe requested resources for the job rollback container{}
rollback.job.resources.limitsThe resources limits for the job rollback container{}
rollback.existingConfigmap.tblRemove.nameThe name of the existing ConfigMap that contains the sql file for deleting tables from the databaseremove-db-scripts
rollback.existingConfigmap.tblRemove.keyNameThe name of the sql file containing instructions for deleting tables from the database. Must be the same as the key name in rollback.existingConfigmap.tblRemove.nameremovetbl.sql
rollback.existingConfigmap.tblCreate.nameThe name of the existing ConfigMap that contains the sql file for creating tables in the databaseinit-db-scripts
rollback.existingConfigmap.tblCreate.keyNameThe name of the sql file containing instructions for creating tables in the database. Must be the same as the key name in rollback.existingConfigmap.tblCreate.namecreatedb.sql
rollback.existingConfigmap.dsStopThe name of the existing ConfigMap that contains the ONLYOFFICE Docs rollback script. If set, the four previous parameters are ignored. Must contain a key stop.sh""
delete.job.enabledEnable the execution of the pre-delete Job before deleting ONLYOFFICE Docstrue
delete.job.annotationsDefines annotations that will be additionally added to pre-delete Job. If set, it takes priority over the commonAnnotations{}
delete.job.podAnnotationsMap of annotations to add to the pre-delete Pod{}
delete.job.customPodAntiAffinityProhibiting the scheduling of pre-delete Job Pod relative to other Pods containing the specified labels on the same node{}
delete.job.podAffinityDefines Pod affinity rules for pre-delete Job Pod scheduling by nodes relative to other Pods{}
delete.job.nodeAffinityDefines Node affinity rules for pre-delete Job Pod scheduling by nodes{}
delete.job.nodeSelectorNode labels for pre-delete Job Pod assignment. If set, it takes priority over the nodeSelector{}
delete.job.tolerationsTolerations for pre-delete Job Pod assignment. If set, it takes priority over the tolerations[]
delete.job.initContainersDefines containers that run before pre-delete container in the Pod[]
delete.job.image.repositoryPre-delete Job image repositoryonlyoffice/docs-utils
delete.job.image.tagPre-delete Job image tag9.0.4-1
delete.job.image.pullPolicyPre-delete Job image pull policyIfNotPresent
delete.job.containerSecurityContext.enabledEnable security context for the pre-delete containerfalse
delete.job.resources.requestsThe requested resources for the job delete container{}
delete.job.resources.limitsThe resources limits for the job delete container{}
delete.existingConfigmap.tblRemove.nameThe name of the existing ConfigMap that contains the sql file for deleting tables from the databaseremove-db-scripts
delete.existingConfigmap.tblRemove.keyNameThe name of the sql file containing instructions for deleting tables from the database. Must be the same as the key name in delete.existingConfigmap.tblRemove.nameremovetbl.sql
delete.existingConfigmap.dsStopThe name of the existing ConfigMap that contains the ONLYOFFICE Docs delete script. If set, the two previous parameters are ignored. Must contain a key stop.sh""
install.job.enabledEnable the execution of the pre-install Job before installing ONLYOFFICE Docstrue
install.job.annotationsDefines annotations that will be additionally added to pre-install Job. If set, it takes priority over the commonAnnotations{}
install.job.podAnnotationsMap of annotations to add to the pre-install Pod{}
install.job.customPodAntiAffinityProhibiting the scheduling of pre-install Job Pod relative to other Pods containing the specified labels on the same node{}
install.job.podAffinityDefines Pod affinity rules for pre-install Job Pod scheduling by nodes relative to other Pods{}
install.job.nodeAffinityDefines Node affinity rules for pre-install Job Pod scheduling by nodes{}
install.job.nodeSelectorNode labels for pre-install Job Pod assignment. If set, it takes priority over the nodeSelector{}
install.job.tolerationsTolerations for pre-install Job Pod assignment. If set, it takes priority over the tolerations[]
install.job.initContainersDefines containers that run before pre-install container in the Pod[]
install.job.image.repositoryPre-install Job image repositoryonlyoffice/docs-utils
install.job.image.tagPre-install Job image tag9.0.4-1
install.job.image.pullPolicyPre-install Job image pull policyIfNotPresent
install.job.containerSecurityContext.enabledEnable security context for the pre-install containerfalse
install.job.resources.requestsThe requested resources for the job pre-install container{}
install.job.resources.limitsThe resources limits for the job pre-install container{}
install.existingConfigmap.tblCreate.nameThe name of the existing ConfigMap that contains the sql file for creating tables in the databaseinit-db-scripts
install.existingConfigmap.tblCreate.keyNameThe name of the sql file containing instructions for creating tables in the database. Must be the same as the key name in install.existingConfigmap.tblCreate.namecreatedb.sql
install.existingConfigmap.initdbThe name of the existing ConfigMap that contains the initdb script. If set, the two previous parameters are ignored. Must contain a key initdb.sh""
clearCache.job.enabledEnable the execution of the Clear Cache Job after upgrading ONLYOFFICE Docs. The Clear Cache Job has a post-upgrade hook that executes after all resources have been upgraded in Kubernetes. It clears the cache directorytrue
clearCache.job.annotationsDefines annotations that will be additionally added to Clear Cache Job. If set, it takes priority over the commonAnnotations{}
clearCache.job.podAnnotationsMap of annotations to add to the Clear Cache Pod{}
clearCache.job.customPodAntiAffinityProhibiting the scheduling of Clear Cache Job Pod relative to other Pods containing the specified labels on the same node{}
clearCache.job.podAffinityDefines Pod affinity rules for Clear Cache Job Pod scheduling by nodes relative to other Pods{}
clearCache.job.nodeAffinityDefines Node affinity rules for Clear Cache Job Pod scheduling by nodes{}
clearCache.job.nodeSelectorNode labels for Clear Cache Job Pod assignment. If set, it takes priority over the nodeSelector{}
clearCache.job.tolerationsTolerations for Clear Cache Job Pod assignment. If set, it takes priority over the tolerations[]
clearCache.job.initContainersDefines containers that run before Clear Cache container in the Pod[]
clearCache.job.image.repositoryClear Cache Job image repositoryonlyoffice/docs-utils
clearCache.job.image.tagClear Cache Job image tag9.0.4-1
clearCache.job.image.pullPolicyClear Cache Job image pull policyIfNotPresent
clearCache.job.containerSecurityContext.enabledEnable security context for the Clear Cache containerfalse
clearCache.job.resources.requestsThe requested resources for the job Clear Cache container{}
clearCache.job.resources.limitsThe resources limits for the job Clear Cache container{}
clearCache.existingConfigmap.nameThe name of the existing ConfigMap that contains a custom script that clears the cache directory. If set, the default ConfigMap will not be created""
clearCache.existingConfigmap.keyNameThe name of the script containing instructions for clearing the cache directory. Must be the same as the key name in clearCache.existingConfigmap.name if a custom script is usedclearCache.sh
grafanaDashboard.job.annotationsDefines annotations that will be additionally added to Grafana Dashboard Job. If set, it takes priority over the commonAnnotations{}
grafanaDashboard.job.podAnnotationsMap of annotations to add to the Grafana Dashboard Pod{}
grafanaDashboard.job.customPodAntiAffinityProhibiting the scheduling of Grafana Dashboard Job Pod relative to other Pods containing the specified labels on the same node{}
grafanaDashboard.job.podAffinityDefines Pod affinity rules for Grafana Dashboard Job Pod scheduling by nodes relative to other Pods{}
grafanaDashboard.job.nodeAffinityDefines Node affinity rules for Grafana Dashboard Job Pod scheduling by nodes{}
grafanaDashboard.job.nodeSelectorNode labels for Grafana Dashboard Job Pod assignment. If set, it takes priority over the nodeSelector{}
grafanaDashboard.job.tolerationsTolerations for Grafana Dashboard Job Pod assignment. If set, it takes priority over the tolerations[]
grafanaDashboard.job.initContainersDefines containers that run before Grafana Dashboard container in the Pod[]
grafanaDashboard.job.image.repositoryGrafana Dashboard Job image repositoryonlyoffice/docs-utils
grafanaDashboard.job.image.tagGrafana Dashboard Job image tag9.0.4-1
grafanaDashboard.job.image.pullPolicyGrafana Dashboard Job image pull policyIfNotPresent
grafanaDashboard.job.containerSecurityContext.enabledEnable security context for the Grafana Dashboard containerfalse
grafanaDashboard.job.resources.requestsThe requested resources for the job Grafana Dashboard container{}
grafanaDashboard.job.resources.limitsThe resources limits for the job Grafana Dashboard container{}
wopiKeysGeneration.job.annotationsDefines annotations that will be additionally added to Wopi Keys Generation Job. If set, it takes priority over the commonAnnotations{}
wopiKeysGeneration.job.podAnnotationsMap of annotations to add to the Wopi Keys Generation Pod{}
wopiKeysGeneration.job.customPodAntiAffinityProhibiting the scheduling of Wopi Keys Generation Job Pod relative to other Pods containing the specified labels on the same node{}
wopiKeysGeneration.job.podAffinityDefines Pod affinity rules for Wopi Keys Generation Job Pod scheduling by nodes relative to other Pods{}
wopiKeysGeneration.job.nodeAffinityDefines Node affinity rules for Wopi Keys Generation Job Pod scheduling by nodes{}
wopiKeysGeneration.job.nodeSelectorNode labels for Wopi Keys Generation Job Pod assignment. If set, it takes priority over the nodeSelector{}
wopiKeysGeneration.job.tolerationsTolerations for Wopi Keys Generation Job Pod assignment. If set, it takes priority over the tolerations[]
wopiKeysGeneration.job.initContainersDefines containers that run before Wopi Keys Generation container in the Pod[]
wopiKeysGeneration.job.image.repositoryWopi Keys Generation Job image repositoryonlyoffice/docs-utils
wopiKeysGeneration.job.image.tagWopi Keys Generation Job image tag9.0.4-1
wopiKeysGeneration.job.image.pullPolicyWopi Keys Generation Job image pull policyIfNotPresent
wopiKeysGeneration.job.containerSecurityContext.enabledEnable security context for the Wopi Keys Generation containerfalse
wopiKeysGeneration.job.resources.requestsThe requested resources for the job Wopi Keys Generation container{}
wopiKeysGeneration.job.resources.limitsThe resources limits for the job Wopi Keys Generation container{}
wopiKeysDeletion.job.enabledEnable the execution of the Wopi Keys Deletion Job before deleting ONLYOFFICE Docs. It removes the automatically generated WOPI secrets. It is executed if wopi.enabled, wopi.keys.generation and wopiKeysDeletion.job.enabled are set to truetrue
wopiKeysDeletion.job.annotationsDefines annotations that will be additionally added to Wopi Keys Deletion Job. If set, it takes priority over the commonAnnotations{}
wopiKeysDeletion.job.podAnnotationsMap of annotations to add to the Wopi Keys Deletion Pod{}
wopiKeysDeletion.job.customPodAntiAffinityProhibiting the scheduling of Wopi Keys Deletion Job Pod relative to other Pods containing the specified labels on the same node{}
wopiKeysDeletion.job.podAffinityDefines Pod affinity rules for Wopi Keys Deletion Job Pod scheduling by nodes relative to other Pods{}
wopiKeysDeletion.job.nodeAffinityDefines Node affinity rules for Wopi Keys Deletion Job Pod scheduling by nodes{}
wopiKeysDeletion.job.nodeSelectorNode labels for Wopi Keys Deletion Job Pod assignment. If set, it takes priority over the nodeSelector{}
wopiKeysDeletion.job.tolerationsTolerations for Wopi Keys Deletion Job Pod assignment. If set, it takes priority over the tolerations[]
wopiKeysDeletion.job.initContainersDefines containers that run before Wopi Keys Deletion container in the Pod[]
wopiKeysDeletion.job.image.repositoryWopi Keys Deletion Job image repositoryonlyoffice/docs-utils
wopiKeysDeletion.job.image.tagWopi Keys Deletion Job image tag9.0.4-1
wopiKeysDeletion.job.image.pullPolicyWopi Keys Deletion Job image pull policyIfNotPresent
wopiKeysDeletion.job.containerSecurityContext.enabledEnable security context for the Wopi Keys Deletion containerfalse
wopiKeysDeletion.job.resources.requestsThe requested resources for the job Wopi Keys Deletion container{}
wopiKeysDeletion.job.resources.limitsThe resources limits for the job Wopi Keys Deletion container{}
tests.enabledEnable the creation of resources required for testing the launch of ONLYOFFICE Docs and the availability of connected dependencies. These resources will be used when running the helm test commandtrue
tests.annotationsDefines annotations that will be additionally added to Test Pod. If set, it takes priority over the commonAnnotations{}
tests.customPodAntiAffinityProhibiting the scheduling of Test Pod relative to other Pods containing the specified labels on the same node{}
tests.podAffinityDefines Pod affinity rules for Test Pod scheduling by nodes relative to other Pods{}
tests.nodeAffinityDefines Node affinity rules for Test Pod scheduling by nodes{}
tests.nodeSelectorNode labels for Test Pod assignment. If set, it takes priority over the nodeSelector{}
tests.tolerationsTolerations for Test Pod assignment. If set, it takes priority over the tolerations[]
tests.initContainersDefines containers that run before Test container in the Pod[]
tests.image.repositoryTest container image nameonlyoffice/docs-utils
tests.image.tagTest container image tag9.0.4-1
tests.image.pullPolicyTest container image pull policyIfNotPresent
tests.containerSecurityContext.enabledEnable security context for the Test containerfalse
tests.resources.requestsThe requested resources for the test container{}
tests.resources.limitsThe resources limits for the test container{}
  • *Note: The prefix -de is specified in the value of the image repository, which indicates the solution type. Possible options:
    • -de. For commercial Developer Edition
    • -ee. For commercial Enterprise Edition

    The default value of this parameter refers to the ONLYOFFICE Document Server Developer Edition. To learn more about this edition and compare it with other editions, please see the comparison table on this page.
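
    For instance, switching to the Enterprise Edition means replacing the -de suffix with -ee in the image repository values. A hypothetical values override (sketch only; the repository names below are the table defaults with the suffix swapped):

    ```yaml
    # Sketch: use Enterprise Edition images instead of the default
    # Developer Edition ones (the -de suffix becomes -ee).
    proxy:
      image:
        repository: onlyoffice/docs-proxy-ee
    converter:
      image:
        repository: onlyoffice/docs-converter-ee
    ```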

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install documentserver onlyoffice/docs --set ingress.enabled=true,ingress.ssl.enabled=true,ingress.host=example.com

This command exposes ONLYOFFICE Docs via HTTPS.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install documentserver -f values.yaml onlyoffice/docs
You can use the default values.yaml file.
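
As a sketch, a values.yaml equivalent to the --set example above might contain the following (example.com is a placeholder hostname; the parameter names are taken from the table above):

```yaml
# Sketch of a values.yaml enabling an HTTPS ingress for ONLYOFFICE Docs
ingress:
  enabled: true
  host: example.com
  ssl:
    enabled: true
```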
5. Configuration and installation details
  1. Example deployment (optional)

    To deploy the example, set the example.enabled parameter to true:

    $ helm install documentserver onlyoffice/docs --set example.enabled=true
  2. Metrics deployment (optional)

    To deploy metrics, set metrics.enabled to true:

    $ helm install documentserver onlyoffice/docs --set metrics.enabled=true

    If you want to use Grafana to visualize metrics, set grafana.enabled to true. If you want to use Nginx Ingress to access Grafana, set grafana.ingress.enabled to true:

    $ helm install documentserver onlyoffice/docs --set grafana.enabled=true --set grafana.ingress.enabled=true
  3. Expose ONLYOFFICE Docs
    1. Expose ONLYOFFICE Docs via Service (HTTP Only)
      You should skip this step if you are going to expose ONLYOFFICE Docs via HTTPS.

      This type of exposure has the lowest performance overhead; it creates a load balancer to provide access to ONLYOFFICE Docs. Use this type of exposure if you use external TLS termination and don't have other web applications in the k8s cluster.

      To expose ONLYOFFICE Docs via service, set the service.type parameter to LoadBalancer:

      $ helm install documentserver onlyoffice/docs --set service.type=LoadBalancer,service.port=80

      Run the following command to get the documentserver service IP:

      $ kubectl get service documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}"

      After that, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-SERVICE-IP/.

      If the service IP is empty, try getting the documentserver service hostname:

      $ kubectl get service documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}"

      In this case, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-SERVICE-HOSTNAME/.
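The IP-or-hostname fallback described above can be wrapped in a small helper, sketched below. The kubectl invocation is shown only as hypothetical usage in a comment; the helper itself just implements the fallback logic.

```shell
# Resolve the external address of a LoadBalancer service: prefer the IP,
# fall back to the hostname when the IP field is empty (e.g. AWS ELB).
resolve_address() {
  ip="$1"
  host="$2"
  if [ -n "$ip" ]; then
    echo "$ip"
  else
    echo "$host"
  fi
}

# In a real cluster the inputs would come from kubectl (hypothetical usage):
#   ip=$(kubectl get service documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
#   host=$(kubectl get service documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}")
addr=$(resolve_address "" "a1b2.elb.amazonaws.com")
echo "$addr"   # → a1b2.elb.amazonaws.com
```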

    2. Expose ONLYOFFICE Docs via Ingress
      1. Installing the Kubernetes Nginx Ingress Controller

        To install the Nginx Ingress Controller to your cluster, run the following command:

        $ helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true,controller.replicaCount=2
        To install Nginx Ingress with the same parameters and to enable exposing ingress-nginx metrics to be gathered by Prometheus, run the following command:
        $ helm install nginx-ingress -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/ingress_values.yaml ingress-nginx/ingress-nginx

        See more detail about installing Nginx Ingress via Helm here.

      2. Expose ONLYOFFICE Docs via HTTP
        You should skip this step if you are going to expose ONLYOFFICE Docs via HTTPS.

        This type of exposure has a higher performance overhead than exposure via service, and it also creates a load balancer to provide access to ONLYOFFICE Docs. Use this type if you use external TLS termination and have several web applications in the k8s cluster: a single set of ingress instances and a single load balancer can serve all of them. This optimizes the entry point performance and reduces your cluster costs, since providers may charge a fee for each load balancer.

        To expose ONLYOFFICE Docs via ingress HTTP, set the ingress.enabled parameter to true:

        $ helm install documentserver onlyoffice/docs --set ingress.enabled=true

        Run the following command to get the documentserver ingress IP:

        $ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}"

        After that, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-INGRESS-IP/.

        If the ingress IP is empty, try getting the documentserver ingress hostname:

        $ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}"

        In this case, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-INGRESS-HOSTNAME/.

      3. Expose ONLYOFFICE Docs via HTTPS

        This type of exposure allows you to enable internal TLS termination for ONLYOFFICE Docs.

        Create the tls secret with an ssl certificate inside.

        Put the ssl certificate and the private key into the tls.crt and tls.key files and then run:

        $ kubectl create secret generic tls \
          --from-file=./tls.crt \
          --from-file=./tls.key
        $ helm install documentserver onlyoffice/docs --set ingress.enabled=true,ingress.ssl.enabled=true,ingress.host=example.com

        Run the following command to get the documentserver ingress IP:

        $ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}"

        If the ingress IP is empty, try getting the documentserver ingress hostname:

        $ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}"

        Associate the documentserver ingress IP or hostname with your domain name through your DNS provider.

        After that, ONLYOFFICE Docs will be available at https://your-domain-name/.
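If you only need a certificate for a test environment, the tls.crt and tls.key pair used in the step above can be generated as a self-signed certificate. This is a sketch for testing only (browsers will warn about a self-signed certificate); use a CA-issued certificate in production.

```shell
# Generate a self-signed certificate and key for example.com (testing only).
# Assumes the openssl CLI is available.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=example.com"
```

The resulting files can then be fed to the `kubectl create secret generic tls` command shown above.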

      4. Expose ONLYOFFICE Docs via HTTPS using the Let's Encrypt certificate
        • Add Helm repositories:
          $ helm repo add jetstack https://charts.jetstack.io
          $ helm repo update
        • Installing cert-manager
          $ helm install cert-manager --version v1.17.4 jetstack/cert-manager \
            --namespace cert-manager \
            --create-namespace \
            --set crds.enabled=true \
            --set crds.keep=false

          Next, perform the installation or upgrade by setting the ingress.enabled, ingress.ssl.enabled and ingress.letsencrypt.enabled parameters to true. Also set your own values in the ingress.letsencrypt.email and ingress.host parameters, or in ingress.tenants (for example, --set "ingress.tenants={tenant1.example.com,tenant2.example.com}") if you want to use multiple domain names.

    3. Expose ONLYOFFICE Docs on a virtual path

      This type of exposure allows you to expose ONLYOFFICE Docs on a virtual path, for example, http://your-domain-name/docs. To expose ONLYOFFICE Docs via ingress on a virtual path, set the ingress.enabled, ingress.host and ingress.path parameters.

      $ helm install documentserver onlyoffice/docs --set ingress.enabled=true,ingress.host=your-domain-name,ingress.path=/docs

      The list of supported ingress controllers for virtual path configuration:

      • Ingress NGINX by Kubernetes: append the pattern (/|$)(.*) to the ingress.path value, for example, /docs becomes /docs(/|$)(.*).

  4. Expose ONLYOFFICE Docs via route in OpenShift

    This type of exposure allows you to expose ONLYOFFICE Docs via route in OpenShift. To expose ONLYOFFICE Docs via route, use these parameters: openshift.route.enabled, openshift.route.host, openshift.route.path.

    $ helm install documentserver onlyoffice/docs --set openshift.route.enabled=true,openshift.route.host=your-domain-name,openshift.route.path=/docs

    For TLS termination, manually add certificates to the route via the OpenShift web console.

6. Scale ONLYOFFICE Docs (optional)
This step is optional. You can skip this step entirely if you want to use default deployment settings.
  1. Horizontal Pod Autoscaling

    You can enable Autoscaling so that the number of replicas of docservice and converter deployments is calculated automatically based on the values and type of metrics.

    For resource metrics, the metrics.k8s.io API must be registered; it is generally provided by metrics-server, which can be launched as a cluster add-on.

    To use the target utilization value (target.type==Utilization), it is necessary that the values for resources.requests are specified in the deployment.

    For more information about Horizontal Pod Autoscaling, see here.

    To enable HPA for the docservice deployment, specify the docservice.autoscaling.enabled=true parameter. In this case, the docservice.replicas parameter is ignored and the number of replicas is controlled by HPA.

    Similarly, to enable HPA for the converter deployment, specify the converter.autoscaling.enabled=true parameter. In this case, the converter.replicas parameter is ignored and the number of replicas is controlled by HPA.

    With the autoscaling.enabled parameter enabled, by default Autoscaling will adjust the number of replicas based on the average percentage of CPU Utilization. For other configurable Autoscaling parameters, see the Parameters table.
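As a sketch, enabling HPA for both deployments through a values file might look like this (parameter names as in the Parameters table; remember that resources.requests must be set for Utilization-type targets):

```shell
# Enable Horizontal Pod Autoscaling for the docservice and converter
# deployments; their *.replicas parameters will then be ignored.
cat > hpa-values.yaml <<'EOF'
docservice:
  autoscaling:
    enabled: true
converter:
  autoscaling:
    enabled: true
EOF
# Apply with: helm upgrade documentserver -f hpa-values.yaml onlyoffice/docs
cat hpa-values.yaml
```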

  2. Manual scaling

    The docservice and converter deployments consist of 2 pods each by default.

    To scale the docservice deployment, use the following command:

    $ kubectl scale -n default deployment docservice --replicas=POD_COUNT

    where POD_COUNT is the desired number of docservice pods.

    Do the same to scale the converter deployment:

    $ kubectl scale -n default deployment converter --replicas=POD_COUNT
7. Update ONLYOFFICE Docs

It's necessary to set the parameters for updating. For example,

$ helm upgrade documentserver onlyoffice/docs \
  --set docservice.image.tag=[version]
You also need to specify the parameters that were specified during installation.

Or modify the values.yaml file and run the command:

$ helm upgrade documentserver -f values.yaml onlyoffice/docs

Running the helm upgrade command runs a hook that shuts down the ONLYOFFICE Docs and cleans up the database. This is needed when updating the version of ONLYOFFICE Docs. The default hook execution time is 300s. The execution time can be changed using --timeout [time], for example:

$ helm upgrade documentserver -f values.yaml onlyoffice/docs --timeout 15m
When upgrading ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.

If you want to update any parameter other than the version of the ONLYOFFICE Docs, then run the helm upgrade command without hooks, for example:

$ helm upgrade documentserver onlyoffice/docs --set jwt.enabled=false --no-hooks

To rollback updates, run the following command:

$ helm rollback documentserver
When rolling back ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.
8. Shutdown ONLYOFFICE Docs (optional)

To perform the shutdown, run the following command:

$ kubectl apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/shutdown-ds.yaml -n <NAMESPACE>

Where <NAMESPACE> is the namespace where ONLYOFFICE Docs is installed. If not specified, the default value will be used: default.

For example:

$ kubectl apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/shutdown-ds.yaml -n onlyoffice

After the shutdown-ds Pod created by the Job has completed successfully, delete this Job with the following command:

$ kubectl delete job shutdown-ds -n <NAMESPACE>

If you need to start ONLYOFFICE Docs again after stopping it, restart the docservice and converter pods. For example, using the following command:

$ kubectl delete pod converter-*** docservice-*** -n <NAMESPACE>
9. Update ONLYOFFICE Docs license (optional)

Starting from release v5.1.1, you can update the license by simply recreating the secret with the new license, without deleting or restarting pods: the documentserver is now able to dynamically reread the license file after it is replaced.

In order to update the license, you need to perform the following steps:

  • Place the license.lic file containing the new key in some directory
  • Run the following commands:
    $ kubectl delete secret [SECRET_LICENSE_NAME] -n <NAMESPACE>
    $ kubectl create secret generic [SECRET_LICENSE_NAME] --from-file=path/to/license.lic -n <NAMESPACE>

    Where SECRET_LICENSE_NAME is the name of an existing secret with a license

That's all: the documentserver will reread the new license by itself.

[DEPRECATED METHOD] Restart the docservice and converter pods. For example, using the following command:
$ kubectl delete pod converter-*** docservice-*** -n <NAMESPACE>
10. ONLYOFFICE Docs installation test (optional)

You can test ONLYOFFICE Docs availability and access to connected dependencies by running the following command:

$ helm test documentserver -n <NAMESPACE>

The output should have the following line:

Phase: Succeeded

To view the log of the Pod running as a result of the helm test command, run the following command:

$ kubectl logs -f test-ds -n <NAMESPACE>

The ONLYOFFICE Docs availability check is considered a priority, so if it fails with an error, the test is considered to be failed.

After this, you can delete the test-ds Pod by running the following command:

$ kubectl delete pod test-ds -n <NAMESPACE>
This testing is for informational purposes only and cannot guarantee 100% availability results. Even if all checks complete successfully, an error may still occur in the application; in this case, more detailed information can be found in the application logs.
11. Run Jobs in a private k8s cluster (optional)

When running a Job for installation, update, rollback or deletion, the container being launched needs Internet access to download the latest sql scripts. If containers' access to the external network is prohibited in your k8s cluster, you can run these Jobs by setting the privateCluster=true parameter and manually creating ConfigMaps with the necessary sql scripts.

To do this, run the following commands:

If your cluster already has the remove-db-scripts and init-db-scripts configmaps, delete them:

$ kubectl delete cm remove-db-scripts init-db-scripts

Download the ONLYOFFICE Docs database scripts for cleaning the database and creating the database tables:

If PostgreSQL is selected as the database server:

$ wget -O removetbl.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/postgresql/removetbl.sql
$ wget -O createdb.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/postgresql/createdb.sql

If MySQL is selected as the database server:

$ wget -O removetbl.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/mysql/removetbl.sql
$ wget -O createdb.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/mysql/createdb.sql

Create configmaps from them:

$ kubectl create configmap remove-db-scripts --from-file=./removetbl.sql
$ kubectl create configmap init-db-scripts --from-file=./createdb.sql

If you specified a different name for ConfigMap and for the file from which it is created, set the appropriate parameters for the corresponding Jobs:

  • existingConfigmap.tblRemove.name and existingConfigmap.tblRemove.keyName for the database cleaning scripts
  • existingConfigmap.tblCreate.name and existingConfigmap.tblCreate.keyName for the database tables creation scripts

Next, when executing the helm install|upgrade|rollback|delete commands, set the privateCluster=true parameter.

If a Web Proxy can be used in your network to give the Pod containers access to the Internet, you can leave the privateCluster=false parameter, skip manually creating the configmaps with sql scripts, and set the webProxy.enabled=true parameter along with the appropriate Web Proxy parameters.
12. Access to the info page (optional)

The access to the /info page is limited by default. In order to allow access to it, specify the IP addresses or subnets (which will be the Proxy container clients in this case) using the proxy.infoAllowedIP parameter. Considering the specifics of Kubernetes network interaction, it is possible to get the original IP of the user (the Proxy client), though this is not a standard scenario. Generally, the Pods / Nodes / Load Balancer addresses will actually be the clients, so these addresses are to be used; in this case, the access to the info page will be available to everyone. You can further limit access to the info page using Nginx Basic Authentication: turn it on by setting the proxy.infoAllowedUser parameter and set the password using the proxy.infoAllowedPassword parameter; alternatively, you can use an existing secret with a password by setting its name in the proxy.infoAllowedExistingSecret parameter.
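As a sketch, a values fragment allowing a subnet and enabling Basic Authentication might look like this (the subnet and credentials below are placeholder values, not defaults from the chart):

```shell
# Restrict /info to a subnet and protect it with Basic Authentication.
# 10.0.0.0/8 and the admin/changeme credentials are placeholders.
cat > info-values.yaml <<'EOF'
proxy:
  infoAllowedIP:
    - 10.0.0.0/8
  infoAllowedUser: admin
  infoAllowedPassword: changeme
EOF
# Apply with: helm upgrade documentserver -f info-values.yaml onlyoffice/docs
cat info-values.yaml
```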

Using Grafana to visualize metrics (optional)

This step is optional. You can skip this section if you don't want to install Grafana.
1. Deploy Grafana
It is assumed that step #5.2 has already been completed.
  1. Deploy Grafana without installing ready-made dashboards
    You should skip this step if you want to deploy Grafana with the installation of ready-made dashboards.

    To install Grafana to your cluster, run the following command:

    $ helm install grafana --version 12.1.8 bitnami/grafana \
      --set service.ports.grafana=80 \
      --set config.useGrafanaIniFile=true \
      --set config.grafanaIniConfigMap=grafana-ini \
      --set datasources.secretName=grafana-datasource \
      --set resourcesPreset=none \
      --set image.repository=bitnamilegacy/grafana \
      --set image.tag=12.1.1-debian-12-r1 \
      --set global.security.allowInsecureImages=true
  2. Deploy Grafana with the installation of ready-made dashboards
    1. Installing ready-made Grafana dashboards

      To install ready-made Grafana dashboards, set the grafana.enabled and grafana.dashboard.enabled parameters to true. If ONLYOFFICE Docs is already installed, run the helm upgrade documentserver onlyoffice/docs --set grafana.enabled=true --set grafana.dashboard.enabled=true command, or helm upgrade documentserver -f ./values.yaml onlyoffice/docs if the parameters are specified in the values.yaml file. As a result, ready-made dashboards in the JSON format will be downloaded from the Grafana website, the necessary edits will be made to them, and configmaps will be created from them. A dashboard will also be added to visualize the metrics coming from ONLYOFFICE Docs (it is assumed that step #5.2 has already been completed).

    2. Installing Grafana

      To install Grafana to your cluster, run the following command:

      $ helm install grafana --version 12.1.8 bitnami/grafana \
        --set service.ports.grafana=80 \
        --set config.useGrafanaIniFile=true \
        --set config.grafanaIniConfigMap=grafana-ini \
        --set datasources.secretName=grafana-datasource \
        --set resourcesPreset=none \
        --set image.repository=bitnamilegacy/grafana \
        --set image.tag=12.1.1-debian-12-r1 \
        --set global.security.allowInsecureImages=true \
        --set dashboardsProvider.enabled=true \
        --set dashboardsConfigMaps[0].configMapName=dashboard-node-exporter \
        --set dashboardsConfigMaps[0].fileName=dashboard-node-exporter.json \
        --set dashboardsConfigMaps[1].configMapName=dashboard-deployment \
        --set dashboardsConfigMaps[1].fileName=dashboard-deployment.json \
        --set dashboardsConfigMaps[2].configMapName=dashboard-redis \
        --set dashboardsConfigMaps[2].fileName=dashboard-redis.json \
        --set dashboardsConfigMaps[3].configMapName=dashboard-rabbitmq \
        --set dashboardsConfigMaps[3].fileName=dashboard-rabbitmq.json \
        --set dashboardsConfigMaps[4].configMapName=dashboard-postgresql \
        --set dashboardsConfigMaps[4].fileName=dashboard-postgresql.json \
        --set dashboardsConfigMaps[5].configMapName=dashboard-nginx-ingress \
        --set dashboardsConfigMaps[5].fileName=dashboard-nginx-ingress.json \
        --set dashboardsConfigMaps[6].configMapName=dashboard-documentserver \
        --set dashboardsConfigMaps[6].fileName=dashboard-documentserver.json \
        --set dashboardsConfigMaps[7].configMapName=dashboard-cluster-resourses \
        --set dashboardsConfigMaps[7].fileName=dashboard-cluster-resourses.json

      After executing this command, the following dashboards will be imported into Grafana:

      • Node Exporter
      • Deployment Statefulset Daemonset
      • Redis Dashboard for Prometheus Redis Exporter
      • RabbitMQ-Overview
      • PostgreSQL Database
      • NGINX Ingress controller
      • ONLYOFFICE Docs
      • Resource usage by Pods and Containers
      You can see the description of the ONLYOFFICE Docs metrics that are visualized in Grafana here.

      See more details about installing Grafana via Helm here.
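The repetitive dashboardsConfigMaps flags in the helm install command above can be generated with a small shell loop, sketched below. The dashboard names must match the ConfigMaps created in the previous step.

```shell
# Build the --set flags for each dashboard ConfigMap instead of typing
# all sixteen flags by hand. Names mirror the ConfigMaps listed above.
dashboards="node-exporter deployment redis rabbitmq postgresql nginx-ingress documentserver cluster-resourses"
flags=""
i=0
for d in $dashboards; do
  flags="$flags --set dashboardsConfigMaps[$i].configMapName=dashboard-$d"
  flags="$flags --set dashboardsConfigMaps[$i].fileName=dashboard-$d.json"
  i=$((i + 1))
done
echo "$flags"
# The result can then be appended to the install command, e.g.:
#   helm install grafana --version 12.1.8 bitnami/grafana ... $flags
```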

2. Access to Grafana via Ingress
It is assumed that step #5.3.2.1 has already been completed.

If ONLYOFFICE Docs was installed with the parameter grafana.ingress.enabled (step #5.2) then access to Grafana will be at: http://INGRESS-ADDRESS/grafana/

If Ingress was installed using a secure connection (step #5.3.2.3), then access to Grafana will be at: https://your-domain-name/grafana/

3. View gathered metrics in Grafana

Go to the address http(s)://your-domain-name/grafana/

Login - admin

To get the password, run the following command:

$ kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode

In the dashboard section, you will see the added dashboards that will display the metrics received from Prometheus.
