# mumoshu/aws-secret-operator

A Kubernetes operator that automatically creates and updates Kubernetes secrets according to what is stored in AWS Secrets Manager.
An aws-secret-operator custom resource maps an AWS secret to a Kubernetes secret. Think of the Kubernetes secrets as just caches of the latest AWS secrets.
Security:

- The original secrets never need to be decrypted outside the cluster. There is no need to create Kubernetes secrets by hand, with helm, kustomize, or anything else that requires you to decrypt the original secret on CI or on your laptop.

Scalability:

- The operator relies on Secrets Manager instead of SSM Parameter Store, so there is less chance of being throttled by SSM's API rate limit.
- Kubernetes secrets act as caches of Secrets Manager secrets, so the number of API calls to Secrets Manager is kept to a minimum.
Let's say you've stored a Secrets Manager secret named `prod/mysecret` whose value is:
json{ "foo": "bar" }
For example, the secret could have been created like this:

```console
$ aws secretsmanager create-secret \
    --name prod/mysecret
{
    "ARN": "arn:aws:secretsmanager:REGION:ACCOUNT:secret:prod/mysecret-Ld0PUs",
    "Name": "prod/mysecret"
}

$ aws secretsmanager put-secret-value \
    --secret-id prod/mysecret \
    --secret-string '{"foo":"bar"}'

$ aws secretsmanager get-secret-value \
    --secret-id prod/mysecret
```
Let's see the SecretId and VersionId which uniquely identify the secret:
```console
$ aws secretsmanager describe-secret --secret-id prod/mysecret
{
    "ARN": "arn:aws:secretsmanager:REGION:ACCOUNT:secret:prod/mysecret-Ld0PUs",
    "Name": "prod/mysecret",
    "LastChangedDate": ***.306,
    "LastAccessedDate": ***.0,
    "VersionIdsToStages": {
        "c43e66cb-d0fe-44c5-9b7e-d450441a04be": [
            "AWSCURRENT"
        ]
    }
}
```
Note that aws-secret-operator intentionally disallows omitting `VersionId` or specifying `VersionStage`, as either would make it difficult to trigger updates to Pods in response to changes in AWS secrets. Run a script like update-aws-secret-ids to automate bumping `VersionId` in your configuration files.
Create a custom resource of kind `AWSSecret` named `example` that points to the Secrets Manager secret:
your_example_awssecret.yaml:
```yaml
apiVersion: mumoshu.github.io/v1alpha1
kind: AWSSecret
metadata:
  name: example
spec:
  stringDataFrom:
    secretsManagerSecretRef:
      secretId: prod/mysecret
      versionId: c43e66cb-d0fe-44c5-9b7e-d450441a04be
```
The operator then creates a Kubernetes secret named example that looks like:
json{ "kind": "Secret", "apiVersion": "v1", "metadata": { "name": "example", "namespace": "default", "selfLink": "/api/v1/namespaces/default/secrets/test", "uid": "82ef45ee-4fdd-11e8-87bf-00e092001ba4", "resourceVersion": "25758", "creationTimestamp": "2018-05-04T20:55:43Z" }, "data": { "foo": "YmFyCg==" }, "type": "Opaque" }
Now, your pod should either mount the generated secret as a volume, or set an environment variable from the secret.
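For illustration, here is a minimal sketch of a Pod consuming the generated `example` secret both as an environment variable and as a volume. The Pod name, image, container command, and mount path are placeholders, not something this project prescribes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-consumer           # hypothetical Pod name
spec:
  containers:
  - name: app
    image: busybox                 # placeholder image
    command: ["sh", "-c", "echo $FOO && sleep 3600"]
    env:
    - name: FOO
      valueFrom:
        secretKeyRef:
          name: example            # the Kubernetes secret created by the operator
          key: foo                 # a key from the secret's data
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets      # each key becomes a file under this path
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: example
```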
Secrets Manager supports two secret formats: key/value (where the content is a plain JSON key/value map) and plaintext (a raw string). When the operator encounters a key/value secret, it expands each key/value pair into a corresponding key of the resulting Kubernetes secret:
json{ "apiVersion": "v1", "data": { "password": "cGFzc3dvcmQ=", "user": "dXNlcg==" }, "kind": "Secret", "metadata": { "creationTimestamp": "2020-06-29T10:35:32Z", "name": "example", "namespace": "default" }, "type": "Opaque" }
In case of the plaintext format, the whole content of the secret is exposed under a single `data` key of the Kubernetes secret:
json{ "apiVersion": "v1", "data": { "data": "Zm9vXG5iYXIK" }, "kind": "Secret", "metadata": { "creationTimestamp": "2020-06-29T10:35:32Z", "name": "example", "namespace": "default" }, "type": "Opaque" }
```bash
# Setup RBAC (namespaced, more secure)
$ kubectl create -f deploy/namespaced/rbac.yaml
# or setup RBAC (cluster-scoped, easier to use)
$ kubectl create -f deploy/cluster_scoped/rbac.yaml

# Setup the CRD
$ kubectl create -f deploy/crds/mumoshu_v1alpha1_awssecret_crd.yaml

# Deploy the operator
# CAUTION: replace `ap-northeast-1` with your region, e.g. us-west-2, and adjust the image tag
$ cat deploy/namespaced/deployment.yaml | sed -e 's/REPLACE_THIS_WITH_YOUR_REGION/ap-northeast-1/' | kubectl create -f -
# or cluster-scoped
$ cat deploy/cluster_scoped/deployment.yaml | sed -e 's/REPLACE_THIS_WITH_YOUR_REGION/ap-northeast-1/' | kubectl create -f -

# Verify that a pod is created
$ kubectl get pod -l app=aws-secret-operator

# Create an AWSSecret resource
$ kubectl create -f your_example_awssecret.yaml

# Verify that a secret is created
$ kubectl get secret

# Cleanup
$ kubectl delete -f your_example_awssecret.yaml
$ kubectl delete -f deploy/namespaced/deployment.yaml
$ kubectl delete -f deploy/namespaced/rbac.yaml
$ kubectl delete -f deploy/crds/mumoshu_v1alpha1_awssecret_crd.yaml
```
Why not use helm-secrets or sops in combination with e.g. kubectl?
Because I don't want to give my CI the permission to decrypt secrets.
For example, helm-secrets works by calling sops under the hood to decrypt encrypted values.yaml files, so that helm sees the secrets as unencrypted values.yaml files. This implies that your CI system must have an AWS credential allowed to call kms:Decrypt on your KMS key. A compromised credential can be used by a malicious user to decrypt those secrets. This is especially problematic when the CI system is a publicly hosted SaaS.
aws-secret-operator prevents this kind of threat by having your deployment pipeline, typically running on a CI system, submit only references to secrets stored in AWS Secrets Manager. The operator then decrypts them in-cluster to produce Kubernetes Secret objects.
Why not use AWS SSM Parameter Store as a primary source of secrets?
Pros:
- Parameter Store has an efficient API to batch-get multiple secrets sharing the same prefix.
Cons:
- Its API rate limit is far too low. This has been discussed in several places on the Internet.
Why not use S3 as a primary source of secrets?
Pros:
- Scalability. This project could have used S3 instead, because S3 supports efficient batch gets filtered by prefix. An example of such a project is chamber, a CLI that wraps SSM Parameter Store and S3 and that is moving from Parameter Store to S3 due to the rate-limit issue explained above.
Cons:
- Tooling. One benefit of Secrets Manager over S3 is that Secrets Manager is more likely to attract developers who, for a better UX, wrap it into a dedicated service/application for managing secrets.
- Since using S3 as a primary storage for secrets is not common practice, S3 is less likely to attract such tooling.
Use sops in an independent CI/CD pipeline so that you can version-control the "latest master data" of secrets in Git repos. Each pull request that changes the master data results in CI workflows that deploy the master data to Secrets Manager (see the sketch below).
Do provide only KMS encryption permission to the CI system, so that a compromised AWS credential won't allow an attacker to decrypt your secrets.
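As a rough sketch of such a setup, the Git repo holding the master data could contain a `.sops.yaml` so that contributors encrypt the secret files with your KMS key before committing. The key ARN and path pattern below are placeholders, not values this project prescribes:

```yaml
# .sops.yaml - placeholder KMS key ARN and path pattern
creation_rules:
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-west-2:111122223333:key/00000000-0000-0000-0000-000000000000
```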
You don't need to share secrets across apps/namespaces/environments.
Assuming you're going to manage encrypted secrets within a Git repo, sharing them requires you to copy and possibly re-encrypt the secret across multiple Git projects.
This project is powered by operator-framework. Thanks for building the awesome framework :)