manga-image-translator Docker image download - 轩辕镜像

manga-image-translator
zyddnys/manga-image-translator

manga-image-translator from [***]
12 favorites · 0 downloads · active · maintained by zyddnys

Manga/Image Translator (English Readme)

Last Updated: 2025/05/10


One-click translation of text in various images
中文说明 | Changelog
Welcome to join our *** <[***]>

This project aims to translate images that are unlikely to be professionally translated, such as comics/images on various group chats and image boards, making it possible for Japanese novices like me to understand the content. It mainly supports Japanese, but also supports Simplified and Traditional Chinese, English and 20 other minor languages. Supports image repair (text removal) and typesetting. This project is v2 of Qiú wén zhuǎn yì zhì.

Note: This project is still in the early stages of development and has many shortcomings. We need your help to improve it!

📂 Directory

  • Showcase
  • Online Version
  • Installation
    • Local Setup
      • Using Pip/venv (Recommended)
      • Notes for Windows Users
    • Docker
      • Run Web Server
        • Using Nvidia GPU
      • Use as CLI
      • Build Locally
  • Usage
    • Local (Batch) Mode
    • Web Mode
      • Old UI
      • New UI
    • API Mode
      • API Documentation
    • Config-help Mode
  • Options and Configuration
    • Recommended Options
      • Tips to Improve Translation Quality
    • Command Line Options
      • Basic Options
      • Additional Options
        • Local Mode Options
        • WebSocket Mode Options
        • API Mode Options
        • Web Mode Options
    • Configuration File
      • Render Options
      • Upscale Options
      • Translator Options
      • Detector Options
      • Inpainter Options
      • Colorizer Options
      • OCR Options
      • Other Options
    • Language Code Reference
    • Translator Reference
    • Glossary
    • Replacement Dictionary
    • Environment Variables Summary
    • GPT Configuration Reference
    • Rendering with Gimp
  • Future Plans
  • Support Us
    • Thanks to all contributors
  • Star Growth Curve

Showcase

The following examples may not be frequently updated and may not represent the effect of the current main branch version.

Original image / translated image pairs (a mask is shown for each):

  • 佐藤さんは知っていた - 猫麦 (Source @09ra_19ra): translated output
  • Gris finds out she's of royal blood - VERTI (Source @VERTIGRIS_ART): translated output with --detector ctd
  • 陰キャお嬢様の新学期🏫📔🌸 (#3) - ひづき夜宵🎀💜 (Source @hiduki_yayoi): translated output with --translator none
  • 幼なじみの高校デビューの癖がすごい (#1) - 神吉李花☪️🐧 (Source @rikak): translated output

Online Version

Official demo site (maintained by zyddnys): <[]>
Browser script (maintained by QiroNT): <[]>

  • Note: If the online version is inaccessible, it might be due to Google GCP restarting the server. Please wait a moment for the service to restart.
  • The online version uses the latest version from the main branch.

Installation

Local Setup

Using Pip/venv (Recommended)

bash
# First, ensure you have Python 3.10 or later installed on your machine
# The very latest version of Python might not be compatible with some PyTorch libraries yet
$ python --version
Python 3.10.6

# Clone this repository
$ git clone [***]

# Create a venv (optional, but recommended)
$ python -m venv venv

# Activate the venv
$ source venv/bin/activate

# If you want to use the --use-gpu option, please visit [***] to install PyTorch, which needs to correspond to your CUDA version.
# If you did not use venv to create a virtual environment, you need to add --upgrade --force-reinstall to the pip command to overwrite the currently installed PyTorch version.

# Install dependencies
$ pip install -r requirements.txt

Models will be automatically downloaded to the ./models directory at runtime.
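
If you plan to use the --use-gpu option, a CUDA-enabled PyTorch build is required. A minimal sketch, assuming CUDA 12.1; pick the wheel index that matches your installed CUDA version (see the PyTorch site referenced above):

bash
# Illustrative only: install a CUDA 12.1 build of PyTorch inside the venv
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# Outside a venv, add --upgrade --force-reinstall to overwrite the existing PyTorch install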

Notes for Windows Users:

Please install Microsoft C++ Build Tools (Download, Instructions) before performing the pip install, as some pip dependencies need it to compile. (See #114).

To use CUDA on Windows, install the correct PyTorch version as described on <[***]>.

Docker

Requirements:

  • Docker (19.03+ for CUDA / GPU acceleration)
  • Docker Compose (Optional, if you want to use the files in demo/doc folder)
  • Nvidia Container Runtime (Optional, if you want to use CUDA)

This project supports Docker, with the image being zyddnys/manga-image-translator:main. This Docker image contains all the dependencies and models required for the project. Please note that this image is quite large (~15GB).
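
For example, the image can be pulled ahead of time (the download may take a while given its size):

bash
docker pull zyddnys/manga-image-translator:main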

Run Web Server

You can start the Web Server (CPU) using the following command:

Note that you need to add the required environment variables using -e or --env

bash
docker run \
  --name manga_image_translator_cpu \
  -p 5003:5003 \
  --ipc=host \
  --entrypoint python \
  --rm \
  -v /demo/doc/../../result:/app/result \
  -v /demo/doc/../../server/main.py:/app/server/main.py \
  -v /demo/doc/../../server/instance.py:/app/server/instance.py \
  -e OPENAI_API_KEY='' \
  -e OPENAI_API_BASE='' \
  -e OPENAI_MODEL='' \
  zyddnys/manga-image-translator:main \
  server/main.py --verbose --start-instance --host=0.0.0.0 --port=5003

Or use the compose file

Note that you need to add the required environment variables in the file first

bash
docker-compose -f demo/doc/docker-compose-web-with-cpu.yml up

The Web Server starts on port 8000 by default, and the translation results will be saved in the /result folder.

Using Nvidia GPU

To use a supported GPU, please read the Docker section above first. You will need some special dependencies.

You can start the Web Server (GPU) using the following command:

Note that you need to add the required environment variables using -e or --env

bash
docker run \
  --name manga_image_translator_gpu \
  -p 5003:5003 \
  --ipc=host \
  --gpus all \
  --entrypoint python \
  --rm \
  -v /demo/doc/../../result:/app/result \
  -v /demo/doc/../../server/main.py:/app/server/main.py \
  -v /demo/doc/../../server/instance.py:/app/server/instance.py \
  -e OPENAI_API_KEY='' \
  -e OPENAI_API_BASE='' \
  -e OPENAI_MODEL='' \
  -e OPENAI_HTTP_PROXY='' \
  zyddnys/manga-image-translator:main \
  server/main.py --verbose --start-instance --host=0.0.0.0 --port=5003 --use-gpu

Or use the compose file (for Web Server + GPU):

Note that you need to add the required environment variables in the file first

bash
docker-compose -f demo/doc/docker-compose-web-with-gpu.yml up

Use as CLI

To use Docker via CLI (i.e., Batch Mode):

Some translation services require API keys to run; pass them to your Docker container as environment variables.

bash
docker run --env="DEEPL_AUTH_KEY=xxx" -v <targetFolder>:/app/<targetFolder> -v <targetFolder>-translated:/app/<targetFolder>-translated  --ipc=host --rm zyddnys/manga-image-translator:main local -i=/app/<targetFolder> <cli flags>

Note: If you need to reference files on your host, you will need to mount the relevant files as volumes into the /app folder inside the container. The CLI paths will need to be the internal Docker path /app/... and not the path on your host.
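
As a concrete illustration, translating a folder named manga in the current directory might look like this (the folder name is illustrative, and the DEEPL_AUTH_KEY line is only needed when using the DeepL translator):

bash
docker run --env="DEEPL_AUTH_KEY=xxx" \
  -v "$(pwd)/manga:/app/manga" \
  -v "$(pwd)/manga-translated:/app/manga-translated" \
  --ipc=host --rm zyddnys/manga-image-translator:main \
  local -v -i=/app/manga -o=/app/manga-translated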

Build Locally

To build the Docker image locally, you can run the following command (you need to have the make tool installed on your machine):

bash
make build-image

Then, to test the built image, run:

Some translation services require API keys to run; pass them to your Docker container as environment variables by adding them in the Dockerfile.

bash
make run-web-server

Usage

Local (Batch) Mode

bash
# Replace <path> with the path to your image folder or file.
$ python -m manga_translator local -v -i <path>
# The results can be found in `<path_to_image_folder>-translated`.

Web Mode

Old UI

bash
# Start a web server.
$ cd server
$ python main.py --use-gpu
# The web demo service address is [***]

New UI

Documentation

API Mode

bash
# Start a web server.
$ cd server
$ python main.py --use-gpu
# The API service address is [***]

API Documentation

Read the OpenAPI documentation at: 127.0.0.1:8000/docs


Config-help Mode

bash
python -m manga_translator config-help

Options and Configuration

Recommended Options

Detector:

  • English: ??
  • Japanese: ??
  • Chinese (Simplified): ??
  • Korean: ??
  • Using {"detector":{"detector": "ctd"}} can increase the number of text lines detected. Update: actual testing shows that the default detector, with appropriate parameter adjustments, works better for black-and-white comics.

OCR:

  • English: ??
  • Japanese: 48px
  • Chinese (Simplified): ??
  • Korean: 48px

Translator:

  • Japanese -> English: Sugoi
  • Chinese (Simplified) -> English: ??
  • Chinese (Simplified) -> Japanese: ??
  • Japanese -> Chinese (Simplified): sakura or openai
  • English -> Japanese: ??
  • English -> Chinese (Simplified): ??

Inpainter: lama_large

Colorizer: mc2
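
As a starting point, the recommendations above can be collected into a config file (see Configuration File below). A minimal sketch for Japanese -> English: the detector, ocr, inpainter, and colorizer keys follow the JSON schema shown later, while the translator block's key names (translator, target_lang) are assumptions you may need to adapt:

bash
# Write a config.json based on the recommendations above
# (detector/ocr/inpainter/colorizer keys per the schema below;
#  the translator block is an assumption, adjust to your translator and target language)
cat > config.json <<'EOF'
{
  "detector": { "detector": "default" },
  "ocr": { "ocr": "48px" },
  "inpainter": { "inpainter": "lama_large" },
  "translator": { "translator": "sugoi", "target_lang": "ENG" },
  "colorizer": { "colorizer": "mc2" }
}
EOF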

Tips to Improve Translation Quality

  • Small resolutions can sometimes trip up the detector, which is not so good at picking up irregular text sizes. To circumvent this, you can use an upscaler by specifying upscale_ratio 2 or another value.
  • If the rendered text is too small to read, specify font_size_offset or use the --manga2eng renderer, which tries to fit the detected text bubble rather than the detected text line area.
  • Specify a font with, for example, --font-path fonts/anime_ace_3.ttf (see the example after this list).
  • Set mask_dilation_offset to 10~30 to increase the mask coverage and better wrap the source text.
  • Try a different inpainter.
  • Increasing box_threshold can help filter out gibberish caused by OCR false detections to some extent.
  • When the image resolution is low, lower detection_size, otherwise some sentences may be missed. The opposite applies when the image resolution is high.
  • Use OpenaiTranslator to load the glossary file (custom_openai cannot load it).
  • When the image resolution is high, increase inpainting_size, otherwise the mask may not be fully covered and the source text may leak through. In other cases you can increase kernel_size to reduce the precision of text removal so that the model gets a larger field of view. (Note: judge whether text leakage is caused by inpainting by checking whether the residual text is consistent with the source text. If it is, the leak comes from inpainting; otherwise it comes from text detection or OCR.)
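
For example, combining several of these tips on the command line might look like this (the folder, font, and config file are illustrative; values such as upscale_ratio and mask_dilation_offset would go in the config file passed via --config-file):

bash
python -m manga_translator local -v -i ./manga \
  --font-path fonts/anime_ace_3.ttf \
  --config-file config.json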

Command Line Options

Basic Options

text
-h, --help                     show this help message and exit
-v, --verbose                  print debug messages and save intermediate images in results folder
--attempts ATTEMPTS            Number of attempts when an error occurs. -1 for infinite attempts.
--ignore-errors                Skip images when an error occurs.
--model-dir MODEL_DIR          Model directory (defaults to ./models in the project root)
--use-gpu                      Turns on/off GPU (automatically switches between mps and cuda)
--use-gpu-limited              Turns on/off GPU (excluding offline translators)
--font-path FONT_PATH          Path to the font file
--pre-dict PRE_DICT            Path to the pre-translation replacement dictionary file
--post-dict POST_DICT          Path to the post-translation replacement dictionary file
--kernel-size KERNEL_SIZE      Set the kernel size for the convolution of text erasure area to completely clear residual text
--context-size                 Number of context pages used when translating the current page. Currently, this only applies to openaitranslator.

Additional Options

Local Mode Options
text
local                         run in batch translation mode
-i, --input INPUT [INPUT ...] Image folder path (required)
-o, --dest DEST               Destination folder path for translated images (default: '')
-f, --format FORMAT           Output format for the translation. Options: png, webp, jpg, jpeg, xcf, psd, pdf
--overwrite                   Overwrite already translated images
--skip-no-text                Skip images with no text (won't be saved).
--use-mtpe                    Turn on/off Machine Translation Post-Editing (MTPE) on the command line (currently Linux only)
--save-text                   Save extracted text and translations to a text file.
--load-text                   Load extracted text and translations from a text file.
--save-text-file SAVE_TEXT_FILE  Similar to --save-text, but with a specified file path. (default: '')
--prep-manual                 Prepare for manual typesetting by outputting blanked, inpainted images, and copies of the original image for reference
--save-quality SAVE_QUALITY   Quality of saved JPEG images, from 0 to 100 where 100 is best (default: 100)
--config-file CONFIG_FILE     Path to a configuration file (default: None)
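
For example, a two-pass workflow that first saves the extracted text and translations for manual correction, then re-renders from the edited file, might look like this (the folder path is illustrative):

bash
# Pass 1: translate and save the extracted text/translations to a text file
python -m manga_translator local -v -i ./manga --save-text
# Edit the saved text file, then re-run using it instead of re-translating
python -m manga_translator local -v -i ./manga --load-text
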
WebSocket Mode Options
text
ws                  run in WebSocket mode
--host HOST         Host of the WebSocket service (default: 127.0.0.1)
--port PORT         Port of the WebSocket service (default: 5003)
--nonce NONCE       Nonce used to secure internal WebSocket communication
--ws-url WS_URL     Server URL for WebSocket mode (default: ws://localhost:5000)
--models-ttl MODELS_TTL  Time in seconds to keep models in memory after last use (0 means forever)
API Mode Options
text
shared              run in API mode
--host HOST         Host of the API service (default: 127.0.0.1)
--port PORT         Port of the API service (default: 5003)
--nonce NONCE       Nonce used to secure internal API server communication
--report REPORT     Report to server to register instance (default: None)
--models-ttl MODELS_TTL  TTL of models in memory in seconds (0 means forever)
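
For example, starting a translator instance in API (shared) mode and keeping models loaded indefinitely might look like this (host, port, and nonce are illustrative):

bash
python -m manga_translator shared --host 0.0.0.0 --port 5003 --nonce my-secret --models-ttl 0
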
Web Mode Options (missing some basic options, still needs to be added)
text
--host HOST           Host address (default: 127.0.0.1)
--port PORT           Port number (default: 8000)
--start-instance      Whether an instance of the translator should be started automatically
--nonce NONCE         Nonce used to secure internal Web Server communication
--models-ttl MODELS_TTL  Time in seconds to keep models in memory after last use (0 means forever)

Configuration File

Run python -m manga_translator config-help >> config-info.json to see the documentation for the JSON schema. An example config file can be found in example/config-example.json.
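
The config file is then passed on the command line, for example (the path and input folder are illustrative):

bash
python -m manga_translator local -v -i ./manga --config-file example/config-example.json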

Full config JSON schema:
json
{
  "$defs": {
    "Alignment": {
      "enum": [
        "auto",
        "left",
        "center",
        "right"
      ],
      "title": "Alignment",
      "type": "string"
    },
    "Colorizer": {
      "enum": [
        "none",
        "mc2"
      ],
      "title": "Colorizer",
      "type": "string"
    },
    "ColorizerConfig": {
      "properties": {
        "colorization_size": {
          "default": 576,
          "title": "Colorization Size",
          "type": "integer"
        },
        "denoise_sigma": {
          "default": 30,
          "title": "Denoise Sigma",
          "type": "integer"
        },
        "colorizer": {
          "$ref": "#/$defs/Colorizer",
          "default": "none"
        }
      },
      "title": "ColorizerConfig",
      "type": "object"
    },
    "Detector": {
      "enum": [
        "default",
        "dbconvnext",
        "ctd",
        "craft",
        "paddle",
        "none"
      ],
      "title": "Detector",
      "type": "string"
    },
    "DetectorConfig": {
      "properties": {
        "detector": {
          "$ref": "#/$defs/Detector",
          "default": "default"
        },
        "detection_size": {
          "default": 2048,
          "title": "Detection Size",
          "type": "integer"
        },
        "text_threshold": {
          "default": 0.5,
          "title": "Text Threshold",
          "type": "number"
        },
        "det_rotate": {
          "default": false,
          "title": "Det Rotate",
          "type": "boolean"
        },
        "det_auto_rotate": {
          "default": false,
          "title": "Det Auto Rotate",
          "type": "boolean"
        },
        "det_invert": {
          "default": false,
          "title": "Det Invert",
          "type": "boolean"
        },
        "det_gamma_correct": {
          "default": false,
          "title": "Det Gamma Correct",
          "type": "boolean"
        },
        "box_threshold": {
          "default": 0.75,
          "title": "Box Threshold",
          "type": "number"
        },
        "unclip_ratio": {
          "default": 2.3,
          "title": "Unclip Ratio",
          "type": "number"
        }
      },
      "title": "DetectorConfig",
      "type": "object"
    },
    "Direction": {
      "enum": [
        "auto",
        "horizontal",
        "vertical"
      ],
      "title": "Direction",
      "type": "string"
    },
    "InpaintPrecision": {
      "enum": [
        "fp32",
        "fp16",
        "bf16"
      ],
      "title": "InpaintPrecision",
      "type": "string"
    },
    "Inpainter": {
      "enum": [
        "default",
        "lama_large",
        "lama_mpe",
        "sd",
        "none",
        "original"
      ],
      "title": "Inpainter",
      "type": "string"
    },
    "InpainterConfig": {
      "properties": {
        "inpainter": {
          "$ref": "#/$defs/Inpainter",
          "default": "lama_large"
        },
        "inpainting_size": {
          "default": 2048,
          "title": "Inpainting Size",
          "type": "integer"
        },
        "inpainting_precision": {
          "$ref": "#/$defs/InpaintPrecision",
          "default": "bf16"
        }
      },
      "title": "InpainterConfig",
      "type": "object"
    },
    "Ocr": {
      "enum": [
        "32px",
        "48px",
        "48px_ctc",
        "mocr"
      ],
      "title": "Ocr",
      "type": "string"
    },
    "OcrConfig": {
      "properties": {
        "use_mocr_merge": {
          "default": false,
          "title": "Use Mocr Merge",
          "type": "boolean"
        },
        "ocr": {
          "$ref": "#/$defs/Ocr",
          "default": "48px"
        },
        "min_text_length": {
          "default": 0,
          "title": "Min Text Length",
          "type": "integer"
        },
        "ignore_bubble": {
          "default": 0,
          "title": "Ignore Bubble",
          "type": "integer"
        }
      },
      "title": "OcrConfig",
      "type": "object"
    },
    "RenderConfig": {
      "properties": {
        "renderer": {
          "$ref": "#/$defs/Renderer",
          "default": "default"
        },
        "alignment": {
          "$ref": "#/$defs/Alignment",
          "def

FAQ

Q1: What is the difference between the 轩辕镜像 free tier and the professional tier?

The free tier only accelerates Docker Hub and makes no guarantees about availability or speed; the professional tier supports more registries, guarantees availability and stable speed, and offers priority customer support.

Q2: Which registries do the free and professional tiers support?

The free tier only supports docker.io; the professional tier supports docker.io, gcr.io, ghcr.io, registry.k8s.io, nvcr.io, quay.io, mcr.microsoft.com, docker.elastic.co, and more.

Q3: Traffic-exhausted error

When a 402 Payment Required error is returned, your traffic quota is exhausted; purchase a traffic package to restore service.

Q4: 410 errors

Usually caused by an outdated Docker version; upgrade to 20.x or later to support the V2 protocol.

Q5: manifest unknown errors

First check the Docker version and upgrade it if it is too old; if the version is fine, verify that the image name and tag are correct.

Q6: After a successful pull, how do I remove the 轩辕镜像 domain prefix from the image name?

Use the docker tag command to give the image a new tag without the domain prefix, so the image name is cleaner.
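
A minimal sketch of the re-tagging flow described above; <mirror-domain> is a placeholder for your actual accelerated registry prefix:

bash
# Pull through the mirror, then re-tag to drop the domain prefix
docker pull <mirror-domain>/zyddnys/manga-image-translator:main
docker tag <mirror-domain>/zyddnys/manga-image-translator:main zyddnys/manga-image-translator:main
docker rmi <mirror-domain>/zyddnys/manga-image-translator:main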
