oliver006/redis_exporter.
Supports Valkey and Redis 2.x, 3.x, 4.x, 5.x, 6.x, and 7.x

Add a relabel config like the one sketched below to the corresponding scrape config. As per the regex value, only pods with "redis" in their name will be relabelled as such.
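A minimal sketch of what such a scrape config and relabel rule could look like, assuming the Kubernetes pod role is used for discovery (the job name, role, and exact regex below are illustrative assumptions, not taken from this document):

```yaml
scrape_configs:
  - job_name: 'redis_exporter_k8s'       # hypothetical job name
    kubernetes_sd_configs:
      - role: pod                        # discover pods; other roles work similarly
    relabel_configs:
      # keep only pods whose name contains "redis" (regex value is an assumption)
      - source_labels: [__meta_kubernetes_pod_name]
        regex: .*redis.*
        action: keep
```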
Similar approaches can be taken with other role types depending on how scrape targets are retrieved.
The Prometheus docs have a very informative article on how multi-target exporters are intended to work.
Run the exporter with the command line flag --redis.addr= so it won't try to access the local instance every time the /metrics endpoint is scraped. With the config below, Prometheus will scrape the /scrape endpoint instead of the /metrics endpoint. As an example, the first target will be queried with this web request:
http://<<REDIS-EXPORTER-HOSTNAME>>:9121/scrape?target=redis://first-redis-host:6379
```yaml
scrape_configs:
  ## config for the multiple Redis targets that the exporter will scrape
  - job_name: 'redis_exporter_targets'
    static_configs:
      - targets:
        - redis://first-redis-host:6379
        - redis://second-redis-host:6379
        - redis://second-redis-host:6380
        - redis://second-redis-host:6381
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121
```
The Redis instances are listed under targets; the Redis exporter hostname is configured via the last relabel_configs rule.
If authentication is needed for the Redis instances, you can set the password via the --redis.password command line option of
the exporter. (This means you can currently only use one password across the instances you scrape this way; use several
exporters if this is a problem.)
You can also use a json file to supply multiple targets by using file_sd_configs like so:
```yaml
scrape_configs:
  - job_name: 'redis_exporter_targets'
    file_sd_configs:
      - files:
        - targets-redis-instances.json
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121
```
The targets-redis-instances.json file should look something like this:
```json
[
  {
    "targets": ["redis://redis-host-01:6379", "redis://redis-host-02:6379"],
    "labels": {}
  }
]
```
Prometheus uses file watches and all changes to the json file are applied immediately.
When using a Redis Cluster, the exporter provides a discovery endpoint that can be used to discover all nodes in the cluster.
To use this feature, the exporter must be started with the --is-cluster flag.
The discovery endpoint is available at /discover-cluster-nodes and can be used in the Prometheus configuration like this:
```yaml
scrape_configs:
  - job_name: 'redis_exporter_cluster_nodes'
    http_sd_configs:
      - url: http://<<REDIS-EXPORTER-HOSTNAME>>:9121/discover-cluster-nodes
        refresh_interval: 10m
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121
```
| Name | Environment Variable Name | Description |
|---|---|---|
| redis.addr | REDIS_ADDR | Address of the Redis instance, defaults to redis://localhost:6379. If TLS is enabled, the address must use the rediss:// scheme, e.g. rediss://localhost:6379 |
| redis.user | REDIS_USER | User name to use for authentication (Redis ACL for Redis 6.0 and newer). |
| redis.password | REDIS_PASSWORD | Password of the Redis instance, defaults to "" (no password). |
| redis.password-file | REDIS_PASSWORD_FILE | Password file of the Redis instance to scrape, defaults to "" (no password file). |
| check-keys | REDIS_EXPORTER_CHECK_KEYS | Comma separated list of key patterns to export value and length/size, eg: db3=user_count will export key user_count from db 3. db defaults to 0 if omitted. The key patterns specified with this flag will be found using SCAN. Use this option if you need glob pattern matching; check-single-keys is faster for non-pattern keys. Warning: using --check-keys to match a very large number of keys can slow down the exporter to the point where it doesn't finish scraping the redis instance. --check-keys doesn't work in cluster mode as "SCAN" does not work across multiple instances. |
| check-single-keys | REDIS_EXPORTER_CHECK_SINGLE_KEYS | Comma separated list of keys to export value and length/size, eg: db3=user_count will export key user_count from db 3. db defaults to 0 if omitted. The keys specified with this flag will be looked up directly without any glob pattern matching. Use this option if you don't need glob pattern matching; it is faster than check-keys. |
| check-streams | REDIS_EXPORTER_CHECK_STREAMS | Comma separated list of stream-patterns to export info about streams, groups and consumers. Syntax is the same as check-keys. |
| check-single-streams | REDIS_EXPORTER_CHECK_SINGLE_STREAMS | Comma separated list of streams to export info about streams, groups and consumers. The streams specified with this flag will be looked up directly without any glob pattern matching. Use this option if you don't need glob pattern matching; it is faster than check-streams. |
| streams-exclude-consumer-metrics | REDIS_EXPORTER_STREAMS_EXCLUDE_CONSUMER_METRICS | Don't collect per consumer metrics for streams (decreases amount of metrics and cardinality). |
| check-keys-batch-size | REDIS_EXPORTER_CHECK_KEYS_BATCH_SIZE | Approximate number of keys to process in each execution. This is basically the COUNT option that will be passed into the SCAN command as part of the execution of the key or key group metrics, see COUNT option. A larger value speeds up scanning, but keep in mind that Redis is single-threaded, so a huge COUNT can affect a production environment. |
| count-keys | REDIS_EXPORTER_COUNT_KEYS | Comma separated list of patterns to count, eg: db3=sessions:* will count all keys with prefix sessions: from db 3. db defaults to 0 if omitted. Warning: The exporter runs SCAN to count the keys. This might not perform well on large databases. |
| script | REDIS_EXPORTER_SCRIPT | Comma separated list of path(s) to Redis Lua script(s) for gathering extra metrics. |
| debug | REDIS_EXPORTER_DEBUG | Verbose debug output |
| log-level | REDIS_EXPORTER_LOG_LEVEL | Set log level |
| log-format | REDIS_EXPORTER_LOG_FORMAT | Log format, valid options are txt (default) and json. |
| namespace | REDIS_EXPORTER_NAMESPACE | Namespace for the metrics, defaults to redis. |
| connection-timeout | REDIS_EXPORTER_CONNECTION_TIMEOUT | Timeout for connection to Redis instance, defaults to "15s" (in Golang duration format) |
| web.listen-address | REDIS_EXPORTER_WEB_LISTEN_ADDRESS | Address to listen on for web interface and telemetry, defaults to 0.0.0.0:9121. |
| web.telemetry-path | REDIS_EXPORTER_WEB_TELEMETRY_PATH | Path under which to expose metrics, defaults to /metrics. |
| redis-only-metrics | REDIS_EXPORTER_REDIS_ONLY_METRICS | Whether to export only Redis metrics (omit Go process+runtime metrics), defaults to false. |
| include-go-runtime-metrics | REDIS_EXPORTER_INCLUDE_GO_RUNTIME_METRICS | Whether to include Go runtime metrics, defaults to false. |
| include-config-metrics | REDIS_EXPORTER_INCL_CONFIG_METRICS | Whether to include all config settings as metrics, defaults to false. |
| include-system-metrics | REDIS_EXPORTER_INCL_SYSTEM_METRICS | Whether to include system metrics like total_system_memory_bytes, defaults to false. |
| include-modules-metrics | REDIS_EXPORTER_INCL_MODULES_METRICS | Whether to collect Redis Modules metrics, defaults to false. |
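For reference, here is a minimal sketch of configuring the exporter through the environment variable names above, for example in a Docker Compose file (the service layout, hostnames, and chosen values are illustrative assumptions, not taken from this document):

```yaml
services:
  redis-exporter:
    image: oliver006/redis_exporter
    ports:
      - "9121:9121"                                       # default web.listen-address port
    environment:
      REDIS_ADDR: "redis://my-redis-host:6379"            # assumed Redis host
      REDIS_PASSWORD: "<<REDIS-PASSWORD>>"                # only if authentication is needed
      REDIS_EXPORTER_CHECK_SINGLE_KEYS: "db0=user_count"  # example key to export
      REDIS_EXPORTER_NAMESPACE: "redis"                   # default value, shown for clarity
```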