bitnami/spark

The Bitnami Secure Image for Apache Spark is a preconfigured, security-hardened software image for the Apache Spark big data processing framework. It bundles the necessary dependencies, undergoes rigorous security review and vulnerability patching, and supports rapid deployment and continuous updates, so users can conveniently and safely set up and run Spark clusters. It is suited to enterprise-grade big data analytics and data processing scenarios, helps meet compliance requirements, and reduces operational complexity.

Bitnami Secure Image for Apache Spark

What is Apache Spark?

Apache Spark is a high-performance engine for large-scale computing tasks, such as data processing, machine learning and real-time data streaming. It includes APIs for Java, Python, Scala and R.

Overview of Apache Spark Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

console
docker run --name spark bitnami/spark:latest

You can find the available configuration options in the Environment Variables section.

Why use Bitnami Secure Images?

Bitnami Secure Images (BSI) are hardened, minimal-CVE images built and maintained by Bitnami, based on Photon Linux, a cloud-optimized, security-hardened enterprise OS. Why choose BSI images?

  • Hardened secure images of popular open source software with Near-Zero Vulnerabilities
  • Vulnerability Triage & Prioritization with VEX Statements, KEV and EPSS Scores
  • Compliance focus with FIPS, STIG, and air-gap options, including secure bill of materials (SBOM)
  • Software supply chain provenance attestation through in-toto
  • First class support for the internet’s favorite Helm charts

Each image comes with valuable security metadata. You can view the metadata in our public catalog here. Note: Some data is only available with commercial subscriptions to BSI.


If you are looking for our previous generation of images based on Debian Linux, please see the Bitnami Legacy registry.

How to deploy Apache Spark in Kubernetes?

Deploying Bitnami applications as Helm Charts is the easiest way to get started with our applications on Kubernetes. Read more about the installation in the Bitnami Apache Spark Chart GitHub repository.
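
As a sketch, a typical installation with the Bitnami chart looks like the following (the release name my-spark is an arbitrary example; check the chart repository for the current instructions):

console
helm install my-spark oci://registry-1.docker.io/bitnamicharts/spark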

Why use a non-root container?

Non-root container images add an extra layer of security and are generally recommended for production environments. However, because they run as a non-root user, privileged tasks are typically off-limits. Learn more about non-root containers in our docs.
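
For example, you can check which user the container runs as; Bitnami images are expected to run as a non-root UID such as 1001 (assuming the id utility is present in the image):

console
docker run --rm bitnami/spark:latest id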

Supported tags and respective Dockerfile links

Learn more about the Bitnami tagging policy and the difference between rolling tags and immutable tags in our documentation page.

You can see the equivalence between the different tags by taking a look at the tags-info.yaml file present in the branch folder, i.e. bitnami/ASSET/BRANCH/DISTRO/tags-info.yaml.

Subscribe to project updates by watching the bitnami/containers GitHub repo.

Get this image

The recommended way to get the Bitnami Apache Spark Docker Image is to pull the prebuilt image from the Docker Hub Registry.

console
docker pull bitnami/spark:latest

To use a specific version, you can pull a versioned tag. You can view the list of available versions in the Docker Hub Registry.

console
docker pull bitnami/spark:[TAG]

If you wish, you can also build the image yourself by cloning the repository, changing to the directory containing the Dockerfile and executing the docker build command. Remember to replace the APP, VERSION and OPERATING-SYSTEM path placeholders in the example command below with the correct values.

console
git clone [***]
cd bitnami/APP/VERSION/OPERATING-SYSTEM
docker build -t bitnami/APP:latest .

Configuration

Environment variables
Customizable environment variables
Name | Description | Default Value
SPARK_MODE | Spark cluster mode to run (can be master or worker). | master
SPARK_MASTER_URL | URL where the worker can find the master. Only needed when Spark mode is worker. | spark://spark-master:7077
SPARK_NO_DAEMONIZE | Spark does not run as a daemon. | true
SPARK_RPC_AUTHENTICATION_ENABLED | Enable RPC authentication. | no
SPARK_RPC_AUTHENTICATION_SECRET | The secret key used for RPC authentication. | nil
SPARK_RPC_ENCRYPTION_ENABLED | Enable RPC encryption. | no
SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED | Enable local storage encryption. | no
SPARK_SSL_ENABLED | Enable SSL configuration. | no
SPARK_SSL_KEY_PASSWORD | The password to the private key in the key store. | nil
SPARK_SSL_KEYSTORE_PASSWORD | The password for the key store. | nil
SPARK_SSL_KEYSTORE_FILE | Location of the key store. | ${SPARK_CONF_DIR}/certs/spark-keystore.jks
SPARK_SSL_TRUSTSTORE_PASSWORD | The password for the trust store. | nil
SPARK_SSL_TRUSTSTORE_FILE | Location of the trust store. | ${SPARK_CONF_DIR}/certs/spark-truststore.jks
SPARK_SSL_NEED_CLIENT_AUTH | Whether to require client authentication. | yes
SPARK_SSL_PROTOCOL | TLS protocol to use. | TLSv1.2
SPARK_WEBUI_SSL_PORT | Spark management server port number for SSL/TLS connections. | nil
SPARK_METRICS_ENABLED | Whether to enable metrics for Spark. | false
Read-only environment variables
Name | Description | Value
SPARK_BASE_DIR | Spark installation directory. | ${BITNAMI_ROOT_DIR}/spark
SPARK_CONF_DIR | Spark configuration directory. | ${SPARK_BASE_DIR}/conf
SPARK_DEFAULT_CONF_DIR | Spark default configuration directory. | ${SPARK_BASE_DIR}/conf.default
SPARK_WORK_DIR | Spark workspace directory. | ${SPARK_BASE_DIR}/work
SPARK_CONF_FILE | Spark configuration file path. | ${SPARK_CONF_DIR}/spark-defaults.conf
SPARK_LOG_DIR | Spark logs directory. | ${SPARK_BASE_DIR}/logs
SPARK_TMP_DIR | Spark tmp directory. | ${SPARK_BASE_DIR}/tmp
SPARK_JARS_DIR | Spark jar directory. | ${SPARK_BASE_DIR}/jars
SPARK_INITSCRIPTS_DIR | Spark init scripts directory. | /docker-entrypoint-initdb.d
SPARK_USER | Spark user. | spark
SPARK_DAEMON_USER | Spark system user. | spark
SPARK_DAEMON_GROUP | Spark system group. | spark

Additionally, more environment variables natively supported by Apache Spark can be found at the official documentation.

For example, you could still use SPARK_WORKER_CORES or SPARK_WORKER_MEMORY to configure the number of cores and the amount of memory to be used by a worker machine.
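
For instance, a minimal sketch of starting a worker capped at 2 cores and 2 GB of memory (the network name and master URL mirror the examples below and are assumptions about your setup):

console
docker run -d --name spark-worker \
  --network=spark_network \
  -e SPARK_MODE=worker \
  -e SPARK_MASTER_URL=spark://spark-master:7077 \
  -e SPARK_WORKER_CORES=2 \
  -e SPARK_WORKER_MEMORY=2g \
  bitnami/spark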

When you start the Spark image, you can adjust the configuration of the instance by passing one or more environment variables, either in the docker-compose file or on the docker run command line. If you want to add a new environment variable:

  • For docker-compose add the variable name and value under the application section in the docker-compose.yml file present in this repository:
yaml
spark:
  ...
  environment:
    - SPARK_MODE=master
  ...
  • For manual execution add a -e option with each variable and value:
console
docker run -d --name spark \
  --network=spark_network \
  -e SPARK_MODE=master \
  bitnami/spark
Security

The Bitnami Apache Spark Docker image makes it easy to enable RPC authentication, RPC encryption and local storage encryption by setting the following env vars on all the nodes of the cluster.

diff
+ SPARK_RPC_AUTHENTICATION_ENABLED=yes
+ SPARK_RPC_AUTHENTICATION_SECRET=RPC_AUTHENTICATION_SECRET
+ SPARK_RPC_ENCRYPTION_ENABLED=yes
+ SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=yes

Please note that RPC_AUTHENTICATION_SECRET is a placeholder that needs to be replaced with a real value. Also be aware that it is currently not possible to submit an application to a standalone cluster if RPC authentication is configured. More info about the issue here.
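
As a sketch, the same secret must be supplied to every node; one way to generate and share it (the spark_network network and container names are assumptions):

console
# generate a random shared secret for RPC authentication
SECRET=$(openssl rand -hex 32)
# master and worker must receive the same secret
docker run -d --name spark-master --network=spark_network \
  -e SPARK_MODE=master \
  -e SPARK_RPC_AUTHENTICATION_ENABLED=yes \
  -e SPARK_RPC_AUTHENTICATION_SECRET="$SECRET" \
  bitnami/spark
docker run -d --name spark-worker --network=spark_network \
  -e SPARK_MODE=worker \
  -e SPARK_MASTER_URL=spark://spark-master:7077 \
  -e SPARK_RPC_AUTHENTICATION_ENABLED=yes \
  -e SPARK_RPC_AUTHENTICATION_SECRET="$SECRET" \
  bitnami/spark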

Additionally, SSL configuration can be activated by following these steps:

  1. Enable SSL configuration by setting the following env vars:

    diff
    + SPARK_SSL_ENABLED=yes
    + SPARK_SSL_KEY_PASSWORD=KEY_PASSWORD
    + SPARK_SSL_KEYSTORE_PASSWORD=KEYSTORE_PASSWORD
    + SPARK_SSL_TRUSTSTORE_PASSWORD=TRUSTSTORE_PASSWORD
    + SPARK_SSL_NEED_CLIENT_AUTH=yes
    + SPARK_SSL_PROTOCOL=TLSv1.2
    

    Please note that KEY_PASSWORD, KEYSTORE_PASSWORD, and TRUSTSTORE_PASSWORD are placeholders that need to be replaced with real values.

  2. You need to mount your Spark keystore and truststore files to /opt/bitnami/spark/conf/certs. Please note they must be named spark-keystore.jks and spark-truststore.jks and must be in JKS format; a sketch follows below.
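
As a development-only sketch, you could generate a self-signed keystore and matching truststore with keytool and mount them where the container expects them (combine this with the env vars from step 1; KEYSTORE_PASSWORD and TRUSTSTORE_PASSWORD are placeholders):

console
# create a keystore with a self-signed key pair
keytool -genkeypair -alias spark -keyalg RSA -keysize 2048 \
  -dname "CN=spark-master" \
  -keystore spark-keystore.jks -storepass KEYSTORE_PASSWORD
# export the certificate and import it into a truststore
keytool -exportcert -alias spark -file spark.crt \
  -keystore spark-keystore.jks -storepass KEYSTORE_PASSWORD
keytool -importcert -alias spark -file spark.crt -noprompt \
  -keystore spark-truststore.jks -storepass TRUSTSTORE_PASSWORD
# mount both files into the expected certs directory
docker run -d --name spark \
  -v "$PWD/spark-keystore.jks:/opt/bitnami/spark/conf/certs/spark-keystore.jks" \
  -v "$PWD/spark-truststore.jks:/opt/bitnami/spark/conf/certs/spark-truststore.jks" \
  bitnami/spark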

Setting up an Apache Spark Cluster

An Apache Spark cluster can easily be set up with the default docker-compose.yml file from the root of this repo. The docker-compose file includes two different services, spark-master and spark-worker.

By default, when you deploy the docker-compose file you will get an Apache Spark cluster with 1 master and 1 worker.
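
If you are not using the bundled file, a minimal sketch of such a docker-compose.yml might look like this (the ports and image tag are illustrative; 8080 is the standard Spark master web UI port and 7077 the master RPC port):

yaml
services:
  spark-master:
    image: bitnami/spark:latest
    environment:
      - SPARK_MODE=master
    ports:
      - "8080:8080"
      - "7077:7077"
  spark-worker:
    image: bitnami/spark:latest
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
    depends_on:
      - spark-master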

If you want N workers, all you need to do is start the docker-compose deployment with the following command:

console
docker-compose up --scale spark-worker=3
Mount a custom configuration file

The image looks for configuration in the conf/ directory of /opt/bitnami/spark.

Using docker-compose
yaml
...
volumes:
  - /path/to/spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
...
Using the command line
console
docker run --name spark -v /path/to/spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf bitnami/spark:latest

After that, your changes will be taken into account in the server's behaviour.
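
As an illustration, such a spark-defaults.conf could carry standard Spark properties; the event log directory below is an assumption and must exist inside the container:

console
cat > /path/to/spark-defaults.conf <<'EOF'
spark.serializer        org.apache.spark.serializer.KryoSerializer
spark.eventLog.enabled  true
spark.eventLog.dir      /opt/bitnami/spark/tmp/events
EOF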

Installing additional jars

By default, this container bundles a generic set of jar files, but the default image can be extended to add as many jars as needed for your specific use case. For instance, the following Dockerfile adds aws-java-sdk-bundle-1.11.704.jar:

Dockerfile
FROM bitnami/spark
USER root
RUN install_packages curl
USER 1001
RUN curl [***] --output /opt/bitnami/spark/jars/aws-java-sdk-bundle-1.11.704.jar
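
You could then build and verify the extended image like this (my-spark-with-aws-sdk is a hypothetical tag):

console
docker build -t my-spark-with-aws-sdk .
docker run --rm my-spark-with-aws-sdk ls /opt/bitnami/spark/jars | grep aws
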
Using a different version of Hadoop jars

In a similar way to the previous section, you may want to use a different version of the Hadoop jars.

Go to <[***]> and copy the download URL that bundles the Hadoop version you want and matches the Apache Spark version of the container. Then extend the Bitnami container image as below:

Dockerfile
FROM bitnami/spark:latest
USER root
RUN install_packages curl
USER 1001
RUN rm -r /opt/bitnami/spark/jars && \
    curl --location [***] | \
    tar --extract --gzip --strip=1 --directory /opt/bitnami/spark/ spark-3.5.0-bin-hadoop3/jars/

You can check the Hadoop version by running the following commands in the new container image:

console
$ pyspark
>>> sc._gateway.jvm.org.apache.hadoop.util.VersionInfo.getVersion()
'2.7.4'
FIPS configuration in Bitnami Secure Images

The Bitnami Apache Spark Docker image from the Bitnami Secure Images catalog includes extra features and settings to configure the container with FIPS capabilities. You can configure the following environment variables:

  • OPENSSL_FIPS: whether OpenSSL runs in FIPS mode. Allowed values: yes (default), no.
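
For example, to run the container with FIPS mode explicitly disabled:

console
docker run -d --name spark -e OPENSSL_FIPS=no bitnami/spark:latest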

Logging

The Bitnami Apache Spark Docker image sends the container logs to stdout. To view the logs:

console
docker logs spark

or using Docker Compose:

console
docker-compose logs spark

You can configure the container's logging driver using the --log-driver option if you wish to consume the container logs differently. In the default configuration, Docker uses the json-file driver.
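
For instance, a sketch that switches the container to the syslog driver (the syslog address is an example value):

console
docker run -d --name spark \
  --log-driver=syslog \
  --log-opt syslog-address=udp://localhost:514 \
  bitnami/spark:latest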

Maintenance

Backing up your container

To back up your data, configuration and logs, follow these simple steps:

Step 1: Stop the currently running container
console
docker stop spark

or using Docker Compose:

console
docker-compose stop spark
Step 2: Run the backup command

We need to mount two volumes in a container we will use to create the backup: a directory on your host to store the backup in, and the volumes from the container we just stopped so we can access the data.

console
docker run --rm -v /path/to/spark-backups:/backups --volumes-from spark busybox \
  cp -a /bitnami/spark /backups/latest

or using Docker Compose:

console
docker run --rm -v /path/to/spark-backups:/backups --volumes-from `docker-compose ps -q spark` busybox \
  cp -a /bitnami/spark /backups/latest
Restoring a backup

Restoring a backup is as simple as mounting the backup as volumes in the container.

console
docker run -v /path/to/spark-backups/latest:/bitnami/spark bitnami/spark:latest

or by modifying the docker-compose.yml file present in this repository:

yaml
services:
  spark:
  ...
    volumes:
      - /path/to/spark-backups/latest:/bitnami/spark
  ...
Upgrade this image

Bitnami provides up-to-date versions of Apache Spark, including security patches, soon after they are made available upstream. We recommend that you follow these steps to upgrade your container.

Step 1: Get the updated image
console
docker pull bitnami/spark:latest

or if you're using Docker Compose, update the value of the image property to bitnami/spark:latest.

Step 2: Stop and backup the currently running container

Before continuing, you should back up your container's data, configuration and logs.

Follow the steps on creating a backup.

Step 3: Remove the currently running container
console
docker rm -v spark

or using Docker Compose:

console
docker-compose rm -v spark
Step 4: Run the new image

Re-create your container from the new image, restoring your backup if necessary.

console
docker run --name spark bitnami/spark:latest

or using Docker Compose:

console
docker-compose up spark

Notable Changes

4.0.0-debian-10-r2
  • The container image was updated to use hadoop-aws 3.4.x and aws-java-sdk was removed. If you want to use a different version, please read Using a different version of Hadoop jars.
3.0.0-debian-10-r44
  • The container image was updated to use hadoop-aws 3.2.x. If you want to use a different version, please read Using a different version of Hadoop jars.
2.4.5-debian-10-r49
  • This image now includes the AWS CLI and two jars, hadoop-aws and aws-java-sdk, to provide an easier way to use AWS.

Using docker-compose.yaml

Please be aware this file has not undergone internal testing. Consequently, we advise its use exclusively for development or testing purposes. For production-ready deployments, we highly recommend utilizing its associated Bitnami Helm chart.

If you detect any issue in the docker-compose.yaml file, feel free to report it or contribute with a fix by following our Contributing Guidelines.

Contributing

We'd love for you to contribute to this container. You can request new features by creating an issue or submitting a pull request with your contribution.

Issues

If you encounter a problem running this container, you can file an issue. For us to provide better support, be sure to fill in the issue template.

License

Copyright © 2026 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

<[***]>

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
