bitnamicharts/zookeeper Docker Image - 轩辕镜像

zookeeper
bitnamicharts/zookeeper
The Apache ZooKeeper Helm chart from Bitnami, which simplifies deploying and managing the distributed coordination service in Kubernetes environments.
0 downloads
🚀 Professional image service, designed for production environments

Bitnami Secure Images Helm chart for Apache ZooKeeper

Apache ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.

Overview of Apache ZooKeeper

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/zookeeper
```

Why use Bitnami Secure Images?

These are hardened, minimal-CVE images built and maintained by Bitnami. Bitnami Secure Images are based on Photon Linux, a cloud-optimized, security-hardened enterprise OS. Why choose BSI images?

  • Hardened secure images of popular open source software with Near-Zero Vulnerabilities
  • Vulnerability Triage & Prioritization with VEX Statements, KEV and EPSS Scores
  • Compliance focus with FIPS, STIG, and air-gap options, including secure bill of materials (SBOM)
  • Software supply chain provenance attestation through in-toto
  • First class support for the internet’s favorite Helm charts

Each image comes with valuable security metadata. You can view the metadata in our public catalog here. Note: Some data is only available with commercial subscriptions to BSI.


If you are looking for our previous generation of images based on Debian Linux, please see the Bitnami Legacy registry.

Introduction

This chart bootstraps a ZooKeeper deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/zookeeper
```

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

This command deploys ZooKeeper on the Kubernetes cluster with the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are set under the resources value (see the parameters table). Setting requests is essential for production workloads, and they should be adapted to your specific use case.

To make this process easier, the chart provides the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset in production workloads is discouraged, as it may not fully fit your specific needs. Find more information on container resource management in the official Kubernetes documentation.
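For illustration, a values override might disable the preset and set explicit figures instead (the numbers below are placeholders, not sizing recommendations, and `"none"` is assumed to be the preset name that turns the preset off; verify against the bitnami/common chart):

```yaml
# values-production.yaml -- sketch; tune the figures to your workload
resourcesPreset: "none"   # assumed "no preset" value so the explicit resources apply
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi
```

Pass it at install time with `-f values-production.yaml`.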

Update credentials

Bitnami charts configure credentials at first boot. Any further change to the secrets or credentials requires manual intervention. Follow these instructions:

  • Update the user password following the upstream documentation
  • Update the password secret with the new values (replace the SECRET_NAME, CLIENT_PASSWORD and SERVER_PASSWORD placeholders)
```shell
kubectl create secret generic SECRET_NAME --from-literal=client-password=CLIENT_PASSWORD --from-literal=server-password=SERVER_PASSWORD --dry-run -o yaml | kubectl apply -f -
```

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This exposes ZooKeeper's native Prometheus endpoint and a metrics service, configurable under the metrics.service section. The service carries the annotations needed for Prometheus to scrape it automatically.
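As a sketch, enabling the endpoint and annotating the metrics service might look like this in a values file (the annotation shown is illustrative; the chart adds its own scrape annotations, so check the metrics.service defaults before overriding them):

```yaml
metrics:
  enabled: true            # exposes ZooKeeper's native Prometheus endpoint
  service:
    annotations:
      prometheus.io/scrape: "true"   # illustrative only; see metrics.service defaults
```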

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster, or the installation will fail with the following error:

```text
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
```

Install the Bitnami Kube Prometheus helm chart to get the necessary CRDs and the Prometheus Operator.
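A minimal values sketch for the Operator integration, assuming the CRDs are already installed (the namespace and interval fields are hypothetical extras; exact field support may vary by chart version):

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: monitoring   # hypothetical: the namespace your Prometheus Operator watches
    interval: 30s           # hypothetical scrape interval
```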

Rolling vs Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
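One way to pin a truly immutable reference is the image.digest parameter from the parameters table; unlike a tag, a digest can never be re-pointed at different content. A sketch (the digest value is a placeholder, and the registry/repository values follow the Bitnami example given earlier):

```yaml
image:
  registry: registry-1.docker.io
  repository: bitnami/zookeeper
  # Placeholder digest -- substitute the real digest of the image you vetted.
  # If set, the digest overrides the tag entirely.
  digest: "sha256:0000000000000000000000000000000000000000000000000000000000000000"
```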

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.

Configure log level

You can configure the ZooKeeper log level using the ZOO_LOG_LEVEL environment variable or the parameter logLevel. By default, it is set to ERROR because each use of the liveness probe and the readiness probe produces an INFO message on connection and a WARN message on disconnection, generating a high volume of noise in your logs.

To remove that log noise so the level can be set to INFO, two changes must be made.

First, ensure that you are not getting metrics via the deprecated pattern of polling 'mntr' on the ZooKeeper client port. The preferred method of polling for Apache ZooKeeper metrics is the ZooKeeper metrics server. This is supported in this chart when setting metrics.enabled to true.

Second, to avoid the connection/disconnection messages from the probes, you can set custom values for these checks which direct them to the ZooKeeper Admin Server instead of the client port. By default, an Admin Server will be started that listens on localhost at port 8080. The following is an example of this use of the Admin Server for probes:

```yaml
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
customLivenessProbe:
  exec:
    command: ['/bin/bash', '-c', 'curl -s -m 2 http://localhost:8080/commands/ruok | grep ruok']
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
customReadinessProbe:
  exec:
    command: ['/bin/bash', '-c', 'curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null']
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
```

You can also set the log4j logging level and which log appenders are enabled using ZOO_LOG4J_PROP, which is applied inside conf/log4j.properties as zookeeper.root.logger. The default is:

```console
zookeeper.root.logger=INFO, CONSOLE
```

The available appenders are:

  • CONSOLE
  • ROLLINGFILE
  • RFAAUDIT
  • TRACEFILE
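Putting the pieces together, a values sketch that raises the level to INFO and enables an extra appender via ZOO_LOG4J_PROP might look like this (assuming the probe changes above are also in place):

```yaml
logLevel: INFO
extraEnvVars:
  - name: ZOO_LOG4J_PROP
    value: "INFO, CONSOLE, ROLLINGFILE"   # level plus the appenders to turn on
```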
Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Persistence

The Bitnami ZooKeeper image stores the ZooKeeper data and configurations at the /bitnami/zookeeper path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Parameters section to configure the PVC or to disable persistence.

If you encounter errors when working with persistent volumes, refer to our troubleshooting guide for persistent volumes.

Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
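A minimal sketch, for clusters where the security-context approach does not work:

```yaml
volumePermissions:
  enabled: true   # runs an initContainer (typically as root) to chown the data volume
```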

Configure the data log directory

You can use a dedicated device for logs (instead of using the data directory) to help avoid contention between logging and snapshots. To do so, set the dataLogDir parameter to the path to be used for writing transaction logs. Alternatively, set this parameter to an empty string and logs will be written to the data directory (ZooKeeper's default behavior).

When using a dedicated device for logs, you can use a PVC to persist the logs. To do so, set persistence.enabled to true. See the Persistence Parameters section for more information.
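For example (the path is hypothetical; pick one that suits your volume layout):

```yaml
dataLogDir: /bitnami/zookeeper/dataLog   # hypothetical dedicated transaction-log path
persistence:
  enabled: true                          # persist the logs alongside the data
```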

Set pod affinity

This chart allows you to set custom pod affinity using the affinity parameter. Find more information about pod affinity in the Kubernetes documentation.

As an alternative, you can use any of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
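A values sketch using one of these presets (the soft/hard values follow the usual bitnami/common convention; verify against the chart's parameters):

```yaml
podAntiAffinityPreset: soft   # prefer spreading replicas across nodes; "hard" makes it mandatory
```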

Parameters

Global parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.defaultStorageClass` | Global default StorageClass for Persistent Volume(s) | `""` |
| `global.storageClass` | DEPRECATED: use `global.defaultStorageClass` instead | `""` |
| `global.security.allowInsecureImages` | Allows skipping image verification | `false` |
| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with the Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | `auto` |
Common parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `kubeVersion` | Override Kubernetes version | `""` |
| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `""` |
| `fullnameOverride` | String to fully override common.names.fullname template | `""` |
| `clusterDomain` | Kubernetes Cluster Domain | `cluster.local` |
| `extraDeploy` | Extra objects to deploy (evaluated as a template) | `[]` |
| `commonLabels` | Add labels to all the deployed resources | `{}` |
| `commonAnnotations` | Add annotations to all the deployed resources | `{}` |
| `namespaceOverride` | Override namespace for ZooKeeper resources | `""` |
| `usePasswordFiles` | Mount credentials as files instead of using environment variables | `true` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the statefulset | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the statefulset | `["infinity"]` |
ZooKeeper chart parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `image.registry` | ZooKeeper image registry | `REGISTRY_NAME` |
| `image.repository` | ZooKeeper image repository | `REPOSITORY_NAME/zookeeper` |
| `image.digest` | ZooKeeper image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `image.pullPolicy` | ZooKeeper image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `image.debug` | Specify if debug values should be set | `false` |
| `auth.client.enabled` | Enable ZooKeeper client-server authentication. It uses SASL/Digest-MD5 | `false` |
| `auth.client.clientUser` | User that will use ZooKeeper clients to auth | `""` |
| `auth.client.clientPassword` | Password that will use ZooKeeper clients to auth | `""` |
| `auth.client.serverUsers` | Comma, semicolon or whitespace separated list of users to be created | `""` |
| `auth.client.serverPasswords` | Comma, semicolon or whitespace separated list of passwords to assign to users when created | `""` |
| `auth.client.existingSecret` | Use existing secret (ignores previous passwords) | `""` |
| `auth.quorum.enabled` | Enable ZooKeeper server-server authentication. It uses SASL/Digest-MD5 | `false` |
| `auth.quorum.learnerUser` | User that the ZooKeeper quorumLearner will use to authenticate to quorumServers | `""` |
| `auth.quorum.learnerPassword` | Password that the ZooKeeper quorumLearner will use to authenticate to quorumServers | `""` |
| `auth.quorum.serverUsers` | Comma, semicolon or whitespace separated list of users for the quorumServers | `""` |
| `auth.quorum.serverPasswords` | Comma, semicolon or whitespace separated list of passwords to assign to users when created | `""` |
| `auth.quorum.existingSecret` | Use existing secret (ignores previous passwords) | `""` |
| `tickTime` | Basic time unit (in milliseconds) used by ZooKeeper for heartbeats | `2000` |
| `initLimit` | Limits the length of time the ZooKeeper servers in quorum have to connect to a leader | `10` |
| `syncLimit` | How far out of date a server can be from a leader | `5` |
| `preAllocSize` | Block size for transaction log file | `65536` |
| `snapCount` | The number of transactions recorded in the transaction log before a snapshot can be taken (and the transaction log rolled) | `***` |
| `maxClientCnxns` | Limits the number of concurrent connections that a single client may make to a single member of the ZooKeeper ensemble | `60` |
| `maxSessionTimeout` | Maximum session timeout (in milliseconds) that the server will allow the client to negotiate | `40000` |
| `heapSize` | Size (in MB) for the Java Heap options (Xmx and Xms) | `1024` |
| `fourlwCommandsWhitelist` | A list of comma separated Four Letter Words commands that can be executed | `srvr, mntr, ruok` |
| `minServerId` | Minimal SERVER_ID value, nodes increment their IDs respectively | `1` |
| `listenOnAllIPs` | Allow ZooKeeper to listen for connections from its peers on all available IP addresses | `false` |
| `zooServers` | ZooKeeper space separated servers list. Leave empty to use the default ZooKeeper server names | `""` |
| `autopurge.snapRetainCount` | The most recent snapshots amount (and corresponding transaction logs) to retain | `10` |
| `autopurge.purgeInterval` | The time interval (in hours) for which the purge task has to be triggered | `1` |
| `logLevel` | Log level for the ZooKeeper server. ERROR by default | `ERROR` |
| `jvmFlags` | Default JVM flags for the ZooKeeper process | `""` |
| `dataLogDir` | Dedicated data log directory | `""` |
| `configuration` | Configure ZooKeeper with a custom zoo.cfg file | `""` |
| `existingConfigmap` | The name of an existing ConfigMap with your custom configuration for ZooKeeper | `""` |
| `extraEnvVars` | Array with extra environment variables to add to ZooKeeper nodes | `[]` |
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for ZooKeeper nodes | `""` |
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars for ZooKeeper nodes | `""` |

Note: the README for this chart is longer than the DockerHub length limit of 25000 characters, so it has been trimmed. The full README can be found at [***]

View more zookeeper-related images →

zookeeper
Official
Apache ZooKeeper is an open-source server designed for distributed systems, providing highly reliable coordination services. Through a simple and efficient interface it supports key functions such as configuration management, naming services, distributed locks, and cluster node synchronization, helping distributed applications maintain data consistency, monitor node state, and recover automatically from failures. It is indispensable infrastructure for building stable, reliable distributed architectures, widely used in distributed computing and big-data processing.
1490100M+ pulls
Last updated: 8 days ago

bitnami/zookeeper
Verified
Bitnami's secure image for ZooKeeper is a preconfigured, security-hardened package designed to simplify deploying and managing the ZooKeeper distributed coordination service. It integrates automated security updates, compliance checks, and industry best-practice configuration to ensure stable operation in production. Suited to enterprise applications that need reliable distributed coordination, it helps users quickly build a secure and efficient ZooKeeper cluster, reducing deployment complexity and strengthening overall security.
133100M+ pulls
Last updated: 4 months ago

ubuntu/zookeeper
Verified
ZooKeeper provides centralized maintenance of configuration information, in a release tracked and supported long-term by Canonical.
141M+ pulls
Last updated: 1 year ago

rancher/zookeeper
Verified
No description yet.
1100K+ pulls
Last updated: 9 years ago

adobe/zookeeper
Verified
No description yet.
10K+ pulls
Last updated: 4 months ago

debezium/zookeeper
Zookeeper image required when running the Debezium platform.
215M+ pulls
Last updated: 1 year ago


Image Pull FAQ

What is the difference between the free and professional editions of 轩辕镜像?

The free edition supports Docker Hub access only, with no guarantees of availability or speed; the professional edition supports more registries, guarantees availability and stable speed, and provides priority customer support.

Which registries does 轩辕镜像 support?

The professional edition supports docker.io, gcr.io, ghcr.io, registry.k8s.io, nvcr.io, quay.io, mcr.microsoft.com, docker.elastic.co, and more; the free edition supports docker.io only.

Traffic-exhausted error

A 402 Payment Required error means your traffic quota is exhausted; purchase a traffic pack to restore service.

410 errors

Usually caused by an outdated Docker version; upgrade to 20.x or later to get V2 protocol support.

manifest unknown errors

First check your Docker version and upgrade it if it is too old; if the version is current, verify that the image reference is correct.

After a successful pull, how do I remove the 轩辕镜像 domain prefix?

Use the docker tag command to give the image a new tag without the domain prefix, for a cleaner image name.

View all questions →


For image pull issues, please submit a ticket. Official technical exchange group (QQ): 1072982923
轩辕镜像 serves developers and research users, providing search and access support for open-source images. All images come from their original repositories; this site does not store, modify, or distribute any image content.
Copyright © 2024-2026 杭州源码跳动科技有限公司. All rights reserved.