
Operator-Engine

Orchestrates a compute job


Table of Contents

  • Operator-Engine
  • Table of Contents
    • About
    • Getting Started
      • Running the Engine
        • Applying the Operator Engine deployment
        • Running in Development mode
        • Running in non-development mode
        • Preparation of your local environment
      • Continuous Integration & Delivery
      • Testing
      • Testing in the K8s cluster
      • New Version
    • License

About

The Operator Engine is a backend agent implementing part of the Ocean Protocol Compute-to-Data specification (OEP-12), in charge of orchestrating the compute infrastructure using Kubernetes as a backend. Typically, the Operator Engine retrieves the Workflows created by the Operator Service in Kubernetes and manages the infrastructure necessary to complete the execution of the compute workflows.

The Operator Engine is in charge of retrieving all the Workflows registered in a K8s cluster, allowing it to:

  • Orchestrate the flow of the execution
  • Start the configuration pod in charge of downloading the workflow dependencies (datasets and algorithms)
  • Start the pod including the algorithm to execute
  • Start the publishing pod that uploads the results to a remote storage (IPFS or S3)

Getting Started

Running the Engine

The operator engine is in charge of gathering all the Workflow requests directly from the K8s infrastructure. To do that, the operator engine needs to run inside the K8s cluster where it will read the registered Workflows.

There are multiple possible configurations and deployments of K8s; describing how to configure your K8s cluster is out of the scope of this documentation page.

Applying the Operator Engine deployment

First, it is necessary to apply the operator-engine YAML files defining the K8s deployment:

$ kubectl create ns ocean-compute
$ kubectl config set-context --current --namespace ocean-compute
$ kubectl apply -f kubernetes/sa.yml
$ kubectl apply -f kubernetes/binding.yml
$ kubectl apply -f kubernetes/operator.yml

This will generate the ocean-compute-operator deployment in K8s. You can check that the Deployment was created successfully using the following command:

$ kubectl  get deployment ocean-compute-operator -o yaml

By default we use the ocean-compute namespace in the K8s deployments.

After applying the Deployment, you should be able to see the operator-engine pod with the ocean-compute-operator prefix:

$ kubectl  get pod ocean-compute-operator-7b5779c47b-2r4j8

NAME                                      READY   STATUS    RESTARTS   AGE
ocean-compute-operator-7b5779c47b-2r4j8   1/1     Running   0          12m

Customize your Operator Engine deployment

The following settings need to be configured:

| Variable | Description |
| --- | --- |
| nCPU | How many CPUs can be used |
| cpuType | Type of CPU |
| nGPU | How many GPUs can be used |
| gpuRam | How much RAM per GPU |
| gpuType | Type of GPU |
| ramGB | How much RAM can be used |
| diskGB | How much disk space can be used (expressed in GB) |
| priceMinute | Price per minute |
| description | Description of this environment |
| maxJobs | Maximum simultaneous jobs for this environment |
| maxJobDuration | Maximum job duration in seconds |
| STORAGE_EXPIRY | How long the output is kept in storage, expressed in hours. 0 means no expiry |
| OPERATOR_PRIVATE_KEY | Private key of the address used to sign notifications and consume algo/inputs |
| IPFS_TYPE | IPFS library to use: 'CLUSTER' to use ipfs-cluster, 'CLIENT' to use ipfs-client (default) |
| IPFS_OUTPUT, IPFS_ADMINLOGS | IPFS gateways to upload the output data (algorithm logs & algorithm output) and the admin logs (logs from pod-configure & pod-publish) |
| IPFS_OUTPUT_PREFIX, IPFS_ADMINLOGS_PREFIX | Prefix used for the results files (see below) |
| IPFS_API_KEY, IPFS_API_CLIENT | IPFS API Key and Client ID for authentication purposes (optional) |
| AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION | S3 credentials for the logs and output buckets |
| AWS_BUCKET_OUTPUT | Bucket that will hold the output data (algorithm logs & algorithm output) |
| AWS_BUCKET_ADMINLOGS | Bucket that will hold the admin logs (logs from pod-configure & pod-publish) |
| STORAGE_CLASS | Storage class to use (see next section) |
| NOTIFY_START_URL | URL to call when a new job starts |
| NOTIFY_STOP_URL | URL to call when a job ends |
| SERVICE_ACCOUNT | K8s service account used to run pods. Defaults to 'default' |
| PULL_SECRET | imagePullSecrets value to use (if defined) (see [***]) |
| PULL_POLICY | imagePullPolicy (if defined) (see [***]) |
| NODE_SELECTOR | K8s node selector (if defined) |
| FILTERING_CONTAINER | Filtering pod image to use for filtering (if defined) |
| LOG_CFG, LOG_LEVEL | Location of the log file and logging level, respectively |

Only one upload method will be used. The priority is:

  • First, the IPFS variables are checked; if they exist, IPFS will be used
  • Second, the AWS variables are checked; if they exist, AWS S3 will be used
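
For illustration only, here is what part of the env section of the ocean-compute-operator container in kubernetes/operator.yml could look like. The variable names come from the table above, but the values (and the Secret name) are placeholders, not defaults:

env:
- name: OPERATOR_PRIVATE_KEY
  valueFrom:
    secretKeyRef:
      name: operator-private-key          # hypothetical Secret holding the key
      key: privateKey
- name: IPFS_TYPE
  value: "CLIENT"
- name: IPFS_OUTPUT
  value: "http://myipfs.example:5001"     # placeholder IPFS API endpoint
- name: IPFS_OUTPUT_PREFIX
  value: "ipfs://"
- name: STORAGE_CLASS
  value: "standard"
- name: STORAGE_EXPIRY
  value: "0"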

Usage of IPFS_OUTPUT and IPFS_OUTPUT_PREFIX (IPFS_ADMINLOGS/IPFS_ADMINLOGS_PREFIX)

This will allow you to have the following scenarios:

    • IPFS_OUTPUT=[***]
    • IPFS_OUTPUT_PREFIX=[***]

    Port 5001 will be used to call addFile, but the result will look like ipfs.oceanprotocol.com:8080/ipfs/HASH

    • IPFS_OUTPUT=[***]
    • IPFS_OUTPUT_PREFIX=ipfs://

    Port 5001 will be used to call addFile, but the result will look like "ipfs://HASH" (hiding your IPFS deployment)

  • IPFS_EXPIRY_TIME = the default expiry time. "0" = unlimited

Usage of NOTIFY_START_URL and NOTIFY_STOP_URL

For each action, the engine will POST a JSON body with the following fields:

  • algoDID: Algorithm DID (if any)
  • jobId: Job ID
  • secret: Secret value (exported to the algo pod as a secret env)
  • DID: Array of input DIDs
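
A sketch of what such a call looks like, for example if you want to exercise your own NOTIFY_START_URL endpoint by hand; the field names come from the list above, the values are made up:

$ curl -X POST "$NOTIFY_START_URL" \
    -H "Content-Type: application/json" \
    -d '{"algoDID": "did:op:...", "jobId": "...", "secret": "...", "DID": ["did:op:..."]}'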

Storage Expiry

Op-engine will pass an ENV variable called STORAGE_EXPIRY to pod-publishing (the env is defined in op-service and passed through from there).

Usage of NODE_SELECTOR

If defined, all pods will contain the following selectors in their specs:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: scope
                operator: In
                values:
                - $NODE_SELECTOR

This allows you to run C2D pods on specific nodes.
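
Since the affinity rule above matches a scope label on the nodes, the nodes you want to dedicate to C2D have to carry that label, and NODE_SELECTOR has to be set to the same value. A sketch, assuming a node called my-c2d-node and the value compute (both placeholders):

$ kubectl label nodes my-c2d-node scope=compute
$ kubectl set env deployment/ocean-compute-operator NODE_SELECTOR=compute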

Storage class

For minikube, you can use the 'standard' class.

For AWS, please make sure that your class allocates volumes in the same region and zone in which you are running your pods.

We created our own 'standard' class in AWS:

$ kubectl get storageclass standard -o yaml

allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a
apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
    fsType: ext4
    type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate

Or we can use this for minikube:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain

For more information, please visit [***]

Usage of FILTERING_CONTAINER

After an algorithm job is done, you can run your own custom image that analyzes the output folder. That image could detect data leaks and overwrite the output folder if needed. The format is the usual Docker image notation.
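
For example, assuming you have published a hypothetical filtering image called myorg/output-filter:latest, one way to enable it is to set the variable on the running deployment:

$ kubectl set env deployment/ocean-compute-operator FILTERING_CONTAINER=myorg/output-filter:latest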

Customizing job templates

All pods (jobs) are started using the templates from the operator_engine/templates/ folder. If you want to customize them (adding app customizations, labels, etc.), you can mount that folder using an external volume, as sketched below. Please make sure that you include all templates, not only the custom ones.
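
A minimal sketch of such a mount, assuming the templates live under /operator_engine/templates inside the image and that you have copied all of them (stock and customized) into a ConfigMap called operator-templates; the path, container name, and ConfigMap name are assumptions, not taken from the repository:

spec:
  template:
    spec:
      containers:
      - name: ocean-compute-operator           # container name as in your Deployment (may differ)
        volumeMounts:
        - name: templates
          mountPath: /operator_engine/templates   # assumed in-image location of the templates
      volumes:
      - name: templates
        configMap:
          name: operator-templates              # hypothetical ConfigMap containing ALL templates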

Running in Development mode

If you run the operator-engine in development mode, it allows you to:

  • Get access to the operator-engine pod
  • Start and stop the operator-engine process multiple times, changing the code directly in the pod
  • Test with different configurations without re-generating Docker images

Typically, the main process of the operator-engine pod is the kopf process. You can get access to any operator-engine pod by running the typical kubectl exec command, but if you want to stop kopf, modify the config and the code, and try again, it's recommended to modify the starting command of the pod. You can do that by un-commenting the startup command in the Dockerfile where tail is used instead of the kopf command. This will start the pod but not the kopf process inside it, allowing you to get access there and start/stop kopf as many times as you want.
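
A sketch of what that swap looks like at the end of the Dockerfile; the exact instruction and paths in the repository may differ:

# Normal startup command, commented out while developing:
# ENTRYPOINT ["kopf", "run", "--standalone", "/operator_engine/operator_main.py"]

# Development startup: keep the container alive without starting kopf,
# so you can exec into the pod and start/stop kopf manually.
ENTRYPOINT ["tail", "-f", "/dev/null"]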

After changing the Dockerfile, you can publish a new version of the operator-engine Docker image. At this point, you can stop the ocean-compute-operator pod. Take into account that the pod id in your deployment will be different:

$ kubectl delete pod ocean-compute-operator-7b5779c47b-2jrlp

This will force the latest version of the operator-engine to be pulled and run in the K8s cluster. After that, you should be able to get access to the pod:

$ kubectl exec -it ocean-compute-operator-7b5779c47b-2jrlp bash

root@ocean-compute-operator-7b5779c47b-2jrlp:/operator_engine# ps aux

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   4080   736 ?        Ss   09:35   0:00 tail -f /dev/null
root         9  0.3  0.0   5752  3544 pts/0    Ss   09:45   0:00 bash
root        16  0.0  0.0   9392  3064 pts/0    R+   09:45   0:00 ps aux

Now, inside the pod, you can start kopf by running the following command:

$ kopf run --standalone /operator_engine/operator_main.py

This should start the operator-engine subscribed to the Workflows registered in K8s.

Running in non-development mode

The first time you create the operator setup, you need to initialize the operator deployment, as we saw above, using the command:

$ kubectl apply -f k8s_install/operator.yml

This should automatically start the ocean-compute-operator pod using, by default, the latest Docker image of the operator-engine. You can check that everything is running:

$ kubectl get pod ocean-compute-operator-7b5779c47b-2r4j8  
NAME                                      READY   STATUS    RESTARTS   AGE
ocean-compute-operator-7b5779c47b-2r4j8   1/1     Running   0          114m

Preparation of your local environment

Once you have kubectl able to connect to your K8s cluster, running the service is as simple as running the following commands:

virtualenv -p python3.7 venv
source venv/bin/activate
pip install -r requirements_dev.txt

Continuous Integration & Delivery

You can find the Travis compilation here: [***]

And the Docker images here: [***]

Testing

Automatic tests are set up via Travis, executing tox. Our tests use the pytest framework.
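
If you want to run the same checks locally, a sketch, assuming the local environment from the "Preparation of your local environment" section above is already activated:

$ pip install tox
$ tox        # runs the full test suite, as in CI
$ pytest     # or run the pytest tests directly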

Testing in the K8s cluster

You can register a Workflow in K8s to check how the operator-engine orchestrates the compute execution, using one of the test examples included in the project. You can register it by running the following command:

$ kubectl apply -f k8s_install/workflow-1.yaml
workflow.oceanprotocol.com/workflow-1 created

In the operator-engine pod logs, you should see the engine doing its work:

[2019-09-17 12:27:03,730] ocean-operator       [INFO    ] Stage 0 with stageType Filtering
[2019-09-17 12:27:03,731] ocean-operator       [INFO    ] Running container openjdk:14-jdk
[2019-09-17 12:27:03,757] ocean-operator       [INFO    ] ConfigMap workflow-1 created
[2019-09-17 12:27:03,771] ocean-operator       [INFO    ] PersistentVolumeClaim workflow-1 created
[2019-09-17 12:27:03,790] ocean-operator       [INFO    ] Job workflow-1-configure-job created
[2019-09-17 12:27:03,803] ocean-operator       [INFO    ] Waiting configure pod to finish
[2019-09-17 12:27:13,826] ocean-operator       [INFO    ] Waiting configure pod to finish
[2019-09-17 12:27:23,853] ocean-operator       [INFO    ] Waiting configure pod to finish
[2019-09-17 12:27:33,892] ocean-operator       [INFO    ] Job workflow-1-algorithm-job created
[2019-09-17 12:27:33,901] ocean-operator       [INFO    ] Waiting algorithm pod to finish
[2019-09-17 12:27:43,942] ocean-operator       [INFO    ] Job workflow-1-publish-job created
[2019-09-17 12:27:43,951] ocean-operator       [INFO    ] Waiting publish pod to finish
[2019-09-17 12:27:53,978] ocean-operator       [INFO    ] Waiting publish pod to finish
[2019-09-17 12:28:04,003] ocean-operator       [INFO    ] Waiting publish pod to finish

You can check the individual logs of the compute pods using the standard K8s log command:

$ kubectl logs
ocean-compute-operator-7b5779c47b-2r4j8  workflow-1-configure-job-qk4pv           
workflow-1-algorithm-job-c9m4t           workflow-1-publish-job-dcfjc             
$ kubectl logs ocean-compute-operator-7b5779c47b-2r4j8

New Version

The bumpversion.sh script helps bump the project version. You can execute the script using {major|minor|patch} as the first argument to bump the version accordingly.
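
For example, to bump the patch version (run from the repository root):

$ ./bumpversion.sh patch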

License

Copyright 2023 Ocean Protocol Foundation Ltd.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

[***]

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
