This post is a memo on converting models to TensorRT engines, along with a comparison of generation speeds.
A while back I wrote a blog post about converting image generation models to GGUF. I was hoping for faster generation at the time, but it turned out to be a technique for optimizing VRAM usage.
So this time around I will convert image generation models to the TensorRT engine format and, this time for real, aim for faster generation.
Prerequisites
Here are my PC specs.
- CPU: Core i5 14600K
- Memory: 64GB
- GPU: RTX 5070 (VRAM 12GB)
- OS: Windows 11
The following tools are also available in this environment.
- ComfyUI Portable: v0.11.1 (the latest at the time of writing)
  - Install location used in this article:
    I:\ai-generator\ComfyUI_tensorrt (adjust to your own setup as needed)
- Git: v2.52.0
- Visual C++ Redistributable: v14
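Before going any further, it may be worth confirming that the embedded Python bundled with the portable package actually sees the GPU. A minimal sketch (the file name check_env.py is just an example; run it with the python_embeded interpreter):

```python
# check_env.py -- run with the portable package's embedded interpreter, e.g.:
#   I:\ai-generator\ComfyUI_tensorrt\python_embeded\python.exe check_env.py
import torch

print(torch.__version__)              # expect a CUDA build such as "2.10.0+cu130"
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # should name the RTX 5070
```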
A note of caution about TensorRT. Please read this before continuing with the article.
The "ComfyUI_TensorRT" custom node used in this article has not been updated for roughly two years as of this writing. Given that, further updates seem unlikely, and I had been expecting it to stop working at some point; sure enough, it broke while I was writing this article (laughs).
Well, that only applies when it is combined with the latest version of ComfyUI...
I have not verified this rigorously, but it appears to work up through ComfyUI release v0.11.1 and to be difficult on v0.12 and later (seemingly due to changes in model_base.py?).
This article was written against v0.11.1 from the start, but since the node looks unusable with the latest ComfyUI, please keep that in mind as you read. (2026/02/05)
Preparing the models
Prepare the models to convert to TensorRT engine format. I used the major image generation models listed in the table below; SD3.5, Flux, and Pony v7 were excluded because they are simply too large. A shame.
| Model | Download |
|---|---|
| Stable Diffusion XL | sd_xl_base_1.0.safetensors |
| Pony Diffusion v6 | v6.safetensors |
| Illustrious-XL | Illustrious-XL-v2.0.safetensors |
| NoobAI-XL | NoobAI-XL-v1.1.safetensors |
Setting up ComfyUI_TensorRT
Install the "ComfyUI_TensorRT" custom node. It is needed both for converting models to engines and for loading those engines in ComfyUI.
Run the following commands in a command prompt (terminal), in order, to install the custom node and its dependencies.
:: Clone the custom node
cd I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes
git clone https://github.com/comfyanonymous/ComfyUI_TensorRT
:: Change the current directory
cd ComfyUI_TensorRT
:: Install the dependencies
..\..\..\python_embeded\python.exe -m pip install coloredlogs flatbuffers onnxscript
..\..\..\python_embeded\python.exe -m pip install -r requirements.txt
For reference, here is the log from installing on my machine.
I:\>cd I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes
I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes>
I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes>git clone https://github.com/comfyanonymous/ComfyUI_TensorRT
Cloning into 'ComfyUI_TensorRT'...
remote: Enumerating objects: 119, done.
remote: Counting objects: 100% (74/74), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 119 (delta 66), reused 54 (delta 54), pack-reused 45 (from 1)
Receiving objects: 100% (119/119), 2.03 MiB | 9.04 MiB/s, done.
Resolving deltas: 100% (66/66), done.
I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes>dir
Volume in drive I is AI
Volume Serial Number is A8B1-D895
Directory of I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes
2026/01/29 22:21 <DIR> .
2026/01/29 21:25 <DIR> ..
2026/01/29 22:21 <DIR> ComfyUI_TensorRT
2026/01/29 14:35 5,281 example_node.py.example
2026/01/29 14:35 1,264 websocket_image_save.py
2026/01/29 21:25 <DIR> __pycache__
2 File(s) 6,545 bytes
4 Dir(s) 1,126,805,901,312 bytes free
I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT>..\..\..\python_embeded\python.exe -m pip install coloredlogs flatbuffers
Collecting coloredlogs
Using cached coloredlogs-15.0.1-py2.py3-none-any.whl.metadata (12 kB)
Collecting flatbuffers
Using cached flatbuffers-25.12.19-py2.py3-none-any.whl.metadata (1.0 kB)
Collecting humanfriendly>=9.1 (from coloredlogs)
Using cached humanfriendly-10.0-py2.py3-none-any.whl.metadata (9.2 kB)
Collecting pyreadline3 (from humanfriendly>=9.1->coloredlogs)
Using cached pyreadline3-3.5.4-py3-none-any.whl.metadata (4.7 kB)
Using cached coloredlogs-15.0.1-py2.py3-none-any.whl (46 kB)
Using cached flatbuffers-25.12.19-py2.py3-none-any.whl (26 kB)
Using cached humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Using cached pyreadline3-3.5.4-py3-none-any.whl (83 kB)
Installing collected packages: flatbuffers, pyreadline3, humanfriendly, coloredlogs
━━━━━━━━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━ 2/4 [humanfriendly] WARNING: The script humanfriendly.exe is installed in 'I:\ai-generator\ComfyUI_tensorrt\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
━━━━━━━━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━ 2/4 [humanfriendly] WARNING: The script coloredlogs.exe is installed in 'I:\ai-generator\ComfyUI_tensorrt\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed coloredlogs-15.0.1 flatbuffers-25.12.19 humanfriendly-10.0 pyreadline3-3.5.4
I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT>..\..\..\python_embeded\python.exe -m pip install -r requirements.txt
Collecting tensorrt>=10.0.1 (from -r requirements.txt (line 1))
Using cached tensorrt-10.14.1.48.post1-py2.py3-none-any.whl
Collecting onnx!=1.16.2 (from -r requirements.txt (line 2))
Using cached onnx-1.20.1-cp312-abi3-win_amd64.whl.metadata (8.6 kB)
Requirement already satisfied: tensorrt_cu13==10.14.1.48.post1 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from tensorrt>=10.0.1->-r requirements.txt (line 1)) (10.14.1.48.post1)
Requirement already satisfied: tensorrt_cu13_libs==10.14.1.48.post1 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from tensorrt_cu13==10.14.1.48.post1->tensorrt>=10.0.1->-r requirements.txt (line 1)) (10.14.1.48.post1)
Requirement already satisfied: tensorrt_cu13_bindings==10.14.1.48.post1 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from tensorrt_cu13==10.14.1.48.post1->tensorrt>=10.0.1->-r requirements.txt (line 1)) (10.14.1.48.post1)
Requirement already satisfied: cuda-toolkit<14,>=13 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from cuda-toolkit[cudart]<14,>=13->tensorrt_cu13_libs==10.14.1.48.post1->tensorrt_cu13==10.14.1.48.post1->tensorrt>=10.0.1->-r requirements.txt (line 1)) (13.1.1)
Requirement already satisfied: nvidia-cuda-runtime==13.1.80.* in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from cuda-toolkit[cudart]<14,>=13->tensorrt_cu13_libs==10.14.1.48.post1->tensorrt_cu13==10.14.1.48.post1->tensorrt>=10.0.1->-r requirements.txt (line 1)) (13.1.80)
Requirement already satisfied: numpy>=1.23.2 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from onnx!=1.16.2->-r requirements.txt (line 2)) (2.4.1)
Requirement already satisfied: protobuf>=4.25.1 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from onnx!=1.16.2->-r requirements.txt (line 2)) (6.33.4)
Requirement already satisfied: typing_extensions>=4.7.1 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from onnx!=1.16.2->-r requirements.txt (line 2)) (4.15.0)
Requirement already satisfied: ml_dtypes>=0.5.0 in i:\ai-generator\comfyui_tensorrt\python_embeded\lib\site-packages (from onnx!=1.16.2->-r requirements.txt (line 2)) (0.5.4)
Using cached onnx-1.20.1-cp312-abi3-win_amd64.whl (16.4 MB)
Installing collected packages: onnx, tensorrt
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 [onnx] WARNING: The scripts backend-test-tools.exe, check-model.exe and check-node.exe are installed in 'I:\ai-generator\ComfyUI_tensorrt\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed onnx-1.20.1 tensorrt-10.14.1.48.post1
I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT>
That completes the installation, but if you run the node as-is, it fails with an error (see the report below).

# ComfyUI Error Report
## Error Details
- **Node ID:** 1
- **Node Type:** STATIC_TRT_MODEL_CONVERSION
- **Exception Type:** torch.onnx._internal.exporter._errors.TorchExportError
- **Exception Message:** Failed to export the model with torch.export. This is step 1/3 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and submit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.UserError'>: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[3]` is a <class 'tuple'>, but `dynamic_shapes[3]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
(Refer to the full stack trace above for more information.)
## Stack Trace
```
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 628, in convert
return super()._convert(
~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<14 lines>...
is_static=True,
^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 282, in _convert
torch.onnx.export(
~~~~~~~~~~~~~~~~~^
unet,
^^^^^
...<7 lines>...
#dynamo=False, # ★ add this line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\__init__.py", line 296, in export
return _compat.export_compat(
~~~~~~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<20 lines>...
legacy_export_kwargs=legacy_export_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_compat.py", line 154, in export_compat
onnx_program = _core.export(
model,
...<11 lines>...
verbose=verbose,
)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_flags.py", line 27, in wrapper
return func(*args, **kwargs)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_core.py", line 1409, in export
raise _errors.TorchExportError(
...<7 lines>...
) from first_error
```
## System Information
- **ComfyUI Version:** 0.11.1
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** win32
- **Python Version:** 3.13.11 (tags/v3.13.11:6278944, Dec 5 2025, 16:26:58) [MSC v.1944 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.10.0+cu130
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 5070 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 12820480000
- **VRAM Free:** 11552161792
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
```
2026-01-31T11:20:10.494107 - Adding extra search path checkpoints I:\ai-generator\ComfyUI\ComfyUI\models\checkpoints
2026-01-31T11:20:10.494188 - Adding extra search path text_encoders I:\ai-generator\ComfyUI\ComfyUI\models\text_encoders
2026-01-31T11:20:10.494233 - Adding extra search path text_encoders I:\ai-generator\ComfyUI\ComfyUI\models\clip\ # legacy location still supported
2026-01-31T11:20:10.494264 - Adding extra search path clip_vision I:\ai-generator\ComfyUI\ComfyUI\models\clip_vision
2026-01-31T11:20:10.494292 - Adding extra search path configs I:\ai-generator\ComfyUI\ComfyUI\models\configs
2026-01-31T11:20:10.494320 - Adding extra search path controlnet I:\ai-generator\ComfyUI\ComfyUI\models\controlnet
2026-01-31T11:20:10.494346 - Adding extra search path diffusion_models I:\ai-generator\ComfyUI\ComfyUI\models\diffusion_models
2026-01-31T11:20:10.494371 - Adding extra search path diffusion_models I:\ai-generator\ComfyUI\ComfyUI\models\unet
2026-01-31T11:20:10.494400 - Adding extra search path embeddings I:\ai-generator\ComfyUI\ComfyUI\models\embeddings
2026-01-31T11:20:10.494425 - Adding extra search path loras I:\ai-generator\ComfyUI\ComfyUI\models\loras
2026-01-31T11:20:10.494448 - Adding extra search path upscale_models I:\ai-generator\ComfyUI\ComfyUI\models\upscale_models
2026-01-31T11:20:10.494474 - Adding extra search path vae I:\ai-generator\ComfyUI\ComfyUI\models\vae
2026-01-31T11:20:10.494499 - Adding extra search path audio_encoders I:\ai-generator\ComfyUI\ComfyUI\models\audio_encoders
2026-01-31T11:20:10.494528 - Adding extra search path model_patches I:\ai-generator\ComfyUI\ComfyUI\models\model_patches
2026-01-31T11:20:11.563219 - Checkpoint files will always be loaded safely.
2026-01-31T11:20:11.676930 - Total VRAM 12227 MB, total RAM 65300 MB
2026-01-31T11:20:11.677099 - pytorch version: 2.10.0+cu130
2026-01-31T11:20:11.677461 - Set vram state to: NORMAL_VRAM
2026-01-31T11:20:11.677777 - Device: cuda:0 NVIDIA GeForce RTX 5070 : cudaMallocAsync
2026-01-31T11:20:11.687270 - Using async weight offloading with 2 streams
2026-01-31T11:20:11.687520 - Enabled pinned memory 29385.0
2026-01-31T11:20:11.689741 - working around nvidia conv3d memory bug.
2026-01-31T11:20:12.280914 - Found comfy_kitchen backend cuda: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
2026-01-31T11:20:12.280999 - Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
2026-01-31T11:20:12.281031 - Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': "ImportError: No module named 'triton'", 'capabilities': []}
2026-01-31T11:20:12.505840 - Using pytorch attention
2026-01-31T11:20:13.993817 - Python version: 3.13.11 (tags/v3.13.11:6278944, Dec 5 2025, 16:26:58) [MSC v.1944 64 bit (AMD64)]
2026-01-31T11:20:13.993913 - ComfyUI version: 0.11.1
2026-01-31T11:20:14.015304 - ComfyUI frontend version: 1.37.11
2026-01-31T11:20:14.016378 - [Prompt Server] web root: I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\comfyui_frontend_package\static
2026-01-31T11:20:14.859622 -
Import times for custom nodes:
2026-01-31T11:20:14.859703 - 0.0 seconds: I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\websocket_image_save.py
2026-01-31T11:20:14.859772 - 0.0 seconds: I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT
2026-01-31T11:20:14.859820 -
2026-01-31T11:20:14.864199 - Context impl SQLiteImpl.
2026-01-31T11:20:14.864255 - Will assume non-transactional DDL.
2026-01-31T11:20:14.884311 - Assets scan(roots=['models']) completed in 0.018s (created=0, skipped_existing=83, total_seen=83)
2026-01-31T11:20:14.923430 - Starting server
2026-01-31T11:20:14.923730 - To see the GUI go to: http://127.0.0.1:8188
2026-01-31T11:20:23.132614 - got prompt
2026-01-31T11:20:23.233777 - model weight dtype torch.float16, manual cast: None
2026-01-31T11:20:23.235026 - model_type EPS
2026-01-31T11:20:23.897314 - Using pytorch attention in VAE
2026-01-31T11:20:23.899006 - Using pytorch attention in VAE
2026-01-31T11:20:23.990303 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2026-01-31T11:20:24.345243 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2026-01-31T11:20:24.903006 - Requested to load SDXL
2026-01-31T11:20:25.836854 - loaded completely; 4897.05 MB loaded, full load: True
2026-01-31T11:20:26.149903 - I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py:282: UserWarning: Exporting a model while it is in training mode. Please ensure that this is intended, as it may lead to different behavior during inference. Calling model.eval() before export is recommended.
torch.onnx.export(
2026-01-31T11:20:26.149985 - I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py:282: UserWarning: # 'dynamic_axes' is not recommended when dynamo=True, and may lead to 'torch._dynamo.exc.UserError: Constraints violated.' Supply the 'dynamic_shapes' argument instead if export is unsuccessful.
torch.onnx.export(
2026-01-31T11:20:26.150465 - W0131 11:20:26.150000 2936 Lib\site-packages\torch\onnx\_internal\exporter\_compat.py:125] Setting ONNX exporter to use operator set version 18 because the requested opset_version 17 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features
2026-01-31T11:20:26.850604 - W0131 11:20:26.850000 2936 Lib\site-packages\torch\onnx\_internal\exporter\_schemas.py:455] Missing annotation for parameter 'input' from (input, boxes, output_size: 'Sequence[int]', spatial_scale: 'float' = 1.0, sampling_ratio: 'int' = -1, aligned: 'bool' = False). Treating as an Input.
2026-01-31T11:20:26.851035 - W0131 11:20:26.850000 2936 Lib\site-packages\torch\onnx\_internal\exporter\_schemas.py:455] Missing annotation for parameter 'boxes' from (input, boxes, output_size: 'Sequence[int]', spatial_scale: 'float' = 1.0, sampling_ratio: 'int' = -1, aligned: 'bool' = False). Treating as an Input.
2026-01-31T11:20:26.851467 - W0131 11:20:26.851000 2936 Lib\site-packages\torch\onnx\_internal\exporter\_schemas.py:455] Missing annotation for parameter 'input' from (input, boxes, output_size: 'Sequence[int]', spatial_scale: 'float' = 1.0). Treating as an Input.
2026-01-31T11:20:26.851783 - W0131 11:20:26.851000 2936 Lib\site-packages\torch\onnx\_internal\exporter\_schemas.py:455] Missing annotation for parameter 'boxes' from (input, boxes, output_size: 'Sequence[int]', spatial_scale: 'float' = 1.0). Treating as an Input.
2026-01-31T11:20:27.150223 - !!! Exception during processing !!! Failed to export the model with torch.export. This is step 1/3 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and submit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.UserError'>: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[3]` is a <class 'tuple'>, but `dynamic_shapes[3]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
(Refer to the full stack trace above for more information.)
2026-01-31T11:20:27.292323 - Traceback (most recent call last):
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_capture_strategies.py", line 121, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_capture_strategies.py", line 235, in _capture
raise exc from None
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_capture_strategies.py", line 219, in _capture
return torch.export.export(
~~~~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<4 lines>...
prefer_deferred_runtime_asserts_over_guards=_flags.PREFER_DEFERRED_RUNTIME_ASSERTS_OVER_GUARDS,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\__init__.py", line 311, in export
raise e
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\__init__.py", line 277, in export
return _export(
mod,
...<6 lines>...
prefer_deferred_runtime_asserts_over_guards=prefer_deferred_runtime_asserts_over_guards,
)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 1271, in wrapper
raise e
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 1237, in wrapper
ep = fn(*args, **kwargs)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\exported_program.py", line 124, in wrapper
return fn(*args, **kwargs)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 2377, in _export
ep = _export_for_training(
mod,
...<5 lines>...
prefer_deferred_runtime_asserts_over_guards=prefer_deferred_runtime_asserts_over_guards,
)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 1271, in wrapper
raise e
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 1237, in wrapper
ep = fn(*args, **kwargs)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\exported_program.py", line 124, in wrapper
return fn(*args, **kwargs)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 2185, in _export_for_training
export_artifact = export_func(
mod=mod,
...<6 lines>...
_to_aten_func=_export_to_aten_ir_make_fx,
)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\_trace.py", line 2069, in _non_strict_export
) = make_fake_inputs(
~~~~~~~~~~~~~~~~^
mod,
^^^^
...<3 lines>...
prefer_deferred_runtime_asserts_over_guards=prefer_deferred_runtime_asserts_over_guards, # for shape env initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\_export\non_strict_utils.py", line 413, in make_fake_inputs
_check_dynamic_shapes(combined_args, dynamic_shapes)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\dynamic_shapes.py", line 1049, in _check_dynamic_shapes
_tree_map_with_path(check_shape, combined_args, dynamic_shapes, tree_name="inputs")
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\dynamic_shapes.py", line 704, in _tree_map_with_path
_compare(treespec, other_treespec, ())
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\dynamic_shapes.py", line 695, in _compare
_compare(
~~~~~~~~^
child,
^^^^^^
other_child,
^^^^^^^^^^^^
path + (_key(treespec.type, treespec.context, i),),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\dynamic_shapes.py", line 670, in _compare
raise_mismatch_error(
~~~~~~~~~~~~~~~~~~~~^
f"`{tree_name}{rendered_path}` is a {treespec.type}, "
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
f"but `dynamic_shapes{rendered_path}` is a {other_treespec.type}"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\export\dynamic_shapes.py", line 650, in raise_mismatch_error
raise UserError(
...<3 lines>...
)
torch._dynamo.exc.UserError: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[3]` is a <class 'tuple'>, but `dynamic_shapes[3]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 628, in convert
return super()._convert(
~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<14 lines>...
is_static=True,
^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\ComfyUI\custom_nodes\ComfyUI_TensorRT\tensorrt_convert.py", line 282, in _convert
torch.onnx.export(
~~~~~~~~~~~~~~~~~^
unet,
^^^^^
...<7 lines>...
#dynamo=False, # ★ add this line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\__init__.py", line 296, in export
return _compat.export_compat(
~~~~~~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<20 lines>...
legacy_export_kwargs=legacy_export_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_compat.py", line 154, in export_compat
onnx_program = _core.export(
model,
...<11 lines>...
verbose=verbose,
)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_flags.py", line 27, in wrapper
return func(*args, **kwargs)
File "I:\ai-generator\ComfyUI_tensorrt\python_embeded\Lib\site-packages\torch\onnx\_internal\exporter\_core.py", line 1409, in export
raise _errors.TorchExportError(
...<7 lines>...
) from first_error
torch.onnx._internal.exporter._errors.TorchExportError: Failed to export the model with torch.export. This is step 1/3 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and submit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.UserError'>: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[3]` is a <class 'tuple'>, but `dynamic_shapes[3]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
(Refer to the full stack trace above for more information.)
2026-01-31T11:20:27.294578 - Prompt executed in 4.16 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"id":"7a1d5ae9-d5a1-4680-aa07-1fed5ad416ae","revision":0,"last_node_id":3,"last_link_id":1,"nodes":[{"id":2,"type":"CheckpointLoaderSimple","pos":[135.23900130890752,-531.0732052902944],"size":[423.5369089746273,98],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"ckpt_name","name":"ckpt_name","type":"COMBO","widget":{"name":"ckpt_name"},"link":null}],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[1]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":null},{"localized_name":"VAE","name":"VAE","type":"VAE","links":null}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["SDXL\\3D\\sd_xl_base_1.0_0.9vae.safetensors"]},{"id":1,"type":"STATIC_TRT_MODEL_CONVERSION","pos":[591.1081472203377,-530.0571522161978],"size":[334.5162109375,178],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":1},{"localized_name":"filename_prefix","name":"filename_prefix","type":"STRING","widget":{"name":"filename_prefix"},"link":null},{"localized_name":"batch_size_opt","name":"batch_size_opt","type":"INT","widget":{"name":"batch_size_opt"},"link":null},{"localized_name":"height_opt","name":"height_opt","type":"INT","widget":{"name":"height_opt"},"link":null},{"localized_name":"width_opt","name":"width_opt","type":"INT","widget":{"name":"width_opt"},"link":null},{"localized_name":"context_opt","name":"context_opt","type":"INT","widget":{"name":"context_opt"},"link":null},{"localized_name":"num_video_frames","name":"num_video_frames","type":"INT","widget":{"name":"num_video_frames"},"link":null}],"outputs":[],"properties":{"Node name for S&R":"STATIC_TRT_MODEL_CONVERSION"},"widgets_values":["tensorrt/ComfyUI_STAT",1,1024,1024,1,14]}],"links":[[1,2,0,1,0,"MODEL"]],"groups":[],"config":{},"extra":{"workflowRendererVersion":"LG","ds":{"scale":1.7769324036150382,"offset":[9.82522483258083,760.4596875715949]}},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
The cause appears to be a parameter-structure mismatch in the node's internal processing, and working around it requires editing the source file shown below.

:: Source file
custom_nodes\comfyui_tensorrt\tensorrt_convert.py
The fix is to add a parameter to the torch.onnx.export call around line 282.
torch.onnx.export(
    unet,
    inputs,
    output_onnx,
    verbose=False,
    input_names=input_names,
    output_names=output_names,
    opset_version=17,
    dynamic_axes=dynamic_axes,
    dynamo=False,  # ★ add this line
)
I referred to the post linked here for this workaround. (Setting dynamo=False appears to fall back to the legacy TorchScript-based exporter, which still accepts dynamic_axes in the form the node passes it.)
Converting to engine format
The conversion is done in ComfyUI. Normally you would convert to whichever engine suits your purpose, but for comparison I will convert to both types: static and dynamic.
Static engine (Static)
A format rigidly locked to one specific size.
- How it works: the engine is optimized to handle only the image size and batch size specified at conversion time.
- Pros: with all overhead stripped away, it is the fastest option under those exact conditions.
- Cons: when generating with the converted model, deviating from the parameters specified at conversion time (image resolution, prompt token count, and so on) can cause errors or slowdowns.
Dynamic engine (Dynamic)
A flexible format that allows a certain range of sizes.
- How it works: at conversion time you specify a min/opt/max range (a profile). Within that range, sizes can be changed at run time (a sketch of such a profile follows below).
- Pros: a single engine can be reused. Very convenient if you frequently switch between portrait, landscape, or multi-image generation.
- Cons: it can be slightly slower than a static engine. VRAM is also allocated according to the maximum values, so if you mostly generate at the optimal or minimum sizes there is some waste.
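To make the static/dynamic distinction concrete, here is a minimal sketch of how a min/opt/max profile is expressed with the TensorRT Python API. This only illustrates the concept the custom node builds on, not the node's actual code; the tensor name "x" and the latent shapes are assumptions for an SDXL-sized input.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Dynamic engine: one profile covering a range of latent sizes.
# Shapes are (batch, channels, height/8, width/8); 512-1536 px -> 64-192 latent.
profile = builder.create_optimization_profile()
profile.set_shape(
    "x",                # assumed name of the latent input tensor
    (1, 4, 64, 64),     # min: 512x512
    (1, 4, 128, 128),   # opt: 1024x1024 (the size TensorRT tunes for)
    (1, 4, 192, 192),   # max: 1536x1536
)
config.add_optimization_profile(profile)

# A static engine is the degenerate case where min == opt == max,
# which is why it is both the fastest and completely inflexible.
```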
- Prepare an empty workflow
- Place the nodes
  - Load Checkpoint (built-in: loads a checkpoint)
  - STATIC_TRT_MODEL_CONVERSION (ComfyUI_TensorRT: converts to a static engine)
  - DYNAMIC_TRT_MODEL_CONVERSION (ComfyUI_TensorRT: converts to a dynamic engine)
- Connect the model ports of the placed nodes
- Set the parameters (described below)
- Run the workflow

The parameter settings are as follows.
Load Checkpoint:
- ckpt_name: the model you want to convert to an engine
STATIC_TRT_MODEL_CONVERSION:
- batch_size_opt: 1
- height_opt: 1024
- width_opt: 1024
- context_opt: 2 (see the notes below)
  - 1 = 1-75 tokens, 2 = 76-150 tokens, and so on. One unit is roughly 75 tokens, though this can vary by model (see the sketch after the parameter lists).
  - With a static engine, if the prompt has fewer tokens than specified, no image seems to be generated, and anything beyond the limit appears to be truncated.
- num_video_frames: 1 (unused here)
DYNAMIC_TRT_MODEL_CONVERSION:
- batch_size_min: 1
- batch_size_opt: 1
- batch_size_max: 1
- height_min: 512
- height_opt: 1024
- height_max: 1536
- width_min: 512
- width_opt: 1024
- width_max: 1536
- context_min: 1
- context_opt: 2
- context_max: 5
- num_video_frames: 1 (unused here)
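As a rough illustration of the context arithmetic noted above (my own reading of the node's behavior, so treat the exact rule as an assumption), the context value is essentially the prompt length counted in 75-token chunks:

```python
import math

def context_for(token_count: int, tokens_per_chunk: int = 75) -> int:
    """Approximate context value: 1 covers 1-75 tokens, 2 covers 76-150, and so on."""
    return max(1, math.ceil(token_count / tokens_per_chunk))

print(context_for(60))   # 1
print(context_for(120))  # 2 -> matches the context_opt=2 used in this article
```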
Once the conversion finishes, the engine file is written to the output folder, just like generated images (see the screenshot below). Conversion took roughly two minutes per model, which was quicker than I expected.
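Separately from the ComfyUI workflow, if you want to double-check which shapes an engine file was actually built for, you can deserialize it with the TensorRT runtime and list its I/O tensors. This is a minimal sketch assuming the TensorRT 10 Python API; the engine filename is a placeholder, so substitute one from your own output folder:

```python
import tensorrt as trt

ENGINE_PATH = r"output\tensorrt\ComfyUI_STAT_00001_.engine"  # placeholder path

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

# List every input/output tensor and the shape the engine was built with
# (dynamic dimensions show up as -1).
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    mode = engine.get_tensor_mode(name)   # INPUT or OUTPUT
    shape = engine.get_tensor_shape(name)
    print(f"{mode.name:6} {name}: {shape}")
```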

Comparing generation speeds
Now that the engine-format models are ready, let's actually generate images and compare generation speed before and after conversion for the four models.
SDXL 1.0
I build a simple t2i workflow.

{
"id": "ead51eb7-625b-4e25-bcf7-295b1a565d66",
"revision": 0,
"last_node_id": 10,
"last_link_id": 16,
"nodes": [
{
"id": 9,
"type": "VAEDecode",
"pos": [
1151.916591121401,
-777.0911065933709
],
"size": [
140,
46
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 14
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
9
]
}
],
"properties": {
"Node name for S&R": "VAEDecode"
},
"widgets_values": []
},
{
"id": 7,
"type": "EmptyLatentImage",
"pos": [
772.2742169666307,
-462.4317527240989
],
"size": [
270,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
4
]
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
1024,
1024,
1
]
},
{
"id": 8,
"type": "SaveImage",
"pos": [
1151.0651260575403,
-678.7681163604623
],
"size": [
418.15454545454577,
448.9272727272729
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"outputs": [],
"properties": {},
"widgets_values": [
"ComfyUI"
]
},
{
"id": 1,
"type": "KSampler",
"pos": [
773.2396701460586,
-775.6564163399718
],
"size": [
270,
262
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 16
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 5
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 4
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
7
]
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
860742695443414,
"randomize",
30,
7,
"dpmpp_3m_sde_gpu",
"simple",
1
]
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
305.0378533302665,
-529.5499345422805
],
"size": [
400,
200
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 13
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
6
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"(worst quality, low quality, normal quality:1.4), lowres, extra fingers, missing fingers, poorly drawn hands, deformed hands, mutation, deformed, disfigured, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, skin spots, acnes, skin blemishes, aged, old, plastic skin, airbrushed, cartoon, anime, 3d render, cgi, illustration, sketch, drawing, painting, digital painting, watermark, text, signature, logo, out of frame, cropped, low contrast, oversaturated, harsh shadows.\n"
],
"color": "#332922",
"bgcolor": "#593930"
},
{
"id": 2,
"type": "CheckpointLoaderSimple",
"pos": [
-57.86214666973294,
-779.9499345422802
],
"size": [
270,
98
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
16
]
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
12,
13
]
},
{
"name": "VAE",
"type": "VAE",
"links": [
14
]
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"SDXL\\sd_xl_base_1.0.safetensors"
]
},
{
"id": 5,
"type": "CLIPTextEncode",
"pos": [
301.8378533302667,
-776.4499345422805
],
"size": [
400,
200
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 12
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
5
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"Cinematic portrait of a stunning young woman, 22 years old, ethereal beauty, long wavy chestnut hair catching golden hour light, deep emerald eyes with intricate iris details and realistic catchlights, soft natural skin texture with visible pores, elegant silk cream-colored blouse with fine lace, standing in a lush European flower garden during sunset. Warm amber lighting, soft shadows, bokeh background with blurred lavender and roses, lens flare. Shot on Sony A7R IV, 85mm lens, f/1.8, 8k resolution, photorealistic, masterpiece, breathtaking atmosphere, highly polished, sharp focus on eyes, rich colors, volumetric lighting, professional color grading, exquisite composition, raw photo style, hyper-realistic, vivid details.\n"
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 10,
"type": "TensorRTLoader",
"pos": [
-56.95336518893458,
-614.8126931297093
],
"size": [
270,
82
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": []
}
],
"properties": {
"Node name for S&R": "TensorRTLoader"
},
"widgets_values": [
"sdxl_dynamic_b1_512-1024-1536x512-1024-1536_c1-2-5.engine",
"sdxl_base"
]
}
],
"links": [
[
4,
7,
0,
1,
3,
"LATENT"
],
[
5,
5,
0,
1,
1,
"CONDITIONING"
],
[
6,
6,
0,
1,
2,
"CONDITIONING"
],
[
7,
1,
0,
9,
0,
"LATENT"
],
[
9,
9,
0,
8,
0,
"IMAGE"
],
[
12,
2,
1,
5,
0,
"CLIP"
],
[
13,
2,
1,
6,
0,
"CLIP"
],
[
14,
2,
2,
9,
1,
"VAE"
],
[
16,
2,
0,
1,
0,
"MODEL"
]
],
"groups": [
{
"id": 1,
"title": "Loader",
"bounding": [
-83.66214666973298,
-872.3499345422797,
345.2272727272727,
759.2090909090908
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 2,
"title": "Prompt",
"bounding": [
276.0378533302666,
-871.24993454228,
452.63636363636346,
759.7363636363638
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 3,
"title": "Sampler",
"bounding": [
744.7742169666311,
-868.3317527240984,
362.34545454545446,
758.8272727272728
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 4,
"title": "Output",
"bounding": [
1125.5651260575403,
-867.5681163604622,
482.82727272727266,
754.4363636363636
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
}
],
"config": {},
"extra": {
"workflowRendererVersion": "LG",
"ds": {
"scale": 0.8410946974926454,
"offset": [
207.79911636670266,
1029.5110330271277
]
},
"frontendVersion": "1.37.11"
},
"version": 0.4
}
Paste this JSON into ComfyUI to load the workflow.
Average generation time per image over four runs of the workflow:
- Safetensors: average 8.58 s
- TensorRT Engine (Static): average 5.38 s
- TensorRT Engine (Dynamic): average 6.15 s
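For reference, a quick bit of arithmetic over the averages above (nothing is measured beyond what is listed; the script only reworks those numbers into speedup factors):

```python
# Per-image averages measured above (seconds).
results = {"safetensors": 8.58, "trt_static": 5.38, "trt_dynamic": 6.15}

baseline = results["safetensors"]
for name, seconds in results.items():
    speedup = baseline / seconds
    print(f"{name:12} {seconds:5.2f}s  x{speedup:.2f}  ({(speedup - 1) * 100:+.0f}%)")
```

On these numbers, the static engine comes out at roughly 1.6x the safetensors baseline and the dynamic engine at roughly 1.4x.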












For reference, here are the generation logs.
got prompt
0%| | 0/30 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:08<00:00, 3.66it/s]
Prompt executed in 8.60 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:08<00:00, 3.68it/s]
Prompt executed in 8.57 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:08<00:00, 3.68it/s]
Prompt executed in 8.59 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:08<00:00, 3.69it/s]
Prompt executed in 8.56 seconds
got prompt
0%| | 0/30 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.94it/s]
Prompt executed in 5.46 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 6.11it/s]
Prompt executed in 5.31 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 6.04it/s]
Prompt executed in 5.41 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 6.08it/s]
Prompt executed in 5.35 seconds
got prompt
0%| | 0/30 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.20it/s]
Prompt executed in 6.18 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.26it/s]
Prompt executed in 6.11 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.25it/s]
Prompt executed in 6.16 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.23it/s]
Prompt executed in 6.16 seconds
Pony v6
I build a simple t2i workflow.

{
"id": "ead51eb7-625b-4e25-bcf7-295b1a565d66",
"revision": 0,
"last_node_id": 12,
"last_link_id": 20,
"nodes": [
{
"id": 9,
"type": "VAEDecode",
"pos": [
1151.916591121401,
-777.0911065933709
],
"size": [
140,
46
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 18
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
9
]
}
],
"properties": {
"Node name for S&R": "VAEDecode"
},
"widgets_values": []
},
{
"id": 7,
"type": "EmptyLatentImage",
"pos": [
772.2742169666307,
-462.4317527240989
],
"size": [
270,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
4
]
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
1024,
1024,
1
]
},
{
"id": 8,
"type": "SaveImage",
"pos": [
1151.0651260575403,
-678.7681163604623
],
"size": [
418.15454545454577,
448.9272727272729
],
"flags": {},
"order": 8,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"outputs": [],
"properties": {},
"widgets_values": [
"ComfyUI"
]
},
{
"id": 1,
"type": "KSampler",
"pos": [
773.2396701460586,
-775.6564163399718
],
"size": [
270,
262
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 20
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 5
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 4
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
7
]
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
682008478135254,
"randomize",
25,
5,
"euler_ancestral",
"karras",
1
]
},
{
"id": 11,
"type": "CLIPSetLastLayer",
"pos": [
299.31223566631354,
-780.7094248667589
],
"size": [
270,
58
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 17
}
],
"outputs": [
{
"name": "CLIP",
"type": "CLIP",
"links": [
12,
13
]
}
],
"properties": {
"Node name for S&R": "CLIPSetLastLayer"
},
"widgets_values": [
-2
]
},
{
"id": 2,
"type": "CheckpointLoaderSimple",
"pos": [
-57.86214666973294,
-779.9499345422802
],
"size": [
270,
98
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
20
]
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
17
]
},
{
"name": "VAE",
"type": "VAE",
"links": [
18
]
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"Pony\\V6\\v6.safetensors"
]
},
{
"id": 10,
"type": "TensorRTLoader",
"pos": [
-56.95336518893458,
-614.8126931297093
],
"size": [
270,
82
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": []
}
],
"properties": {
"Node name for S&R": "TensorRTLoader"
},
"widgets_values": [
"pony-v6_static_b1_1024x1024_c2.engine",
"sdxl_base"
]
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
305.8642996112582,
-417.9796866083964
],
"size": [
400,
200
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 13
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
6
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"score_4, score_5, score_6, low quality, bad anatomy, deformed, ugly, blurry, text, watermark, signature\n"
],
"color": "#332922",
"bgcolor": "#593930"
},
{
"id": 5,
"type": "CLIPTextEncode",
"pos": [
302.6642996112584,
-664.8796866083965
],
"size": [
400,
200
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 12
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
5
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_anime, rating_safe, 1girl, young woman, japanese school girl, short black bob hair, neat bangs, dark brown eyes, soft skin, wearing school sailor uniform, pleated skirt, red ribbon, standing at a train station platform, sunlight filtering through station roof, cinematic lighting, anime style, flat color shading, clean lineart, soft shadows, blue sky background, high resolution, detailed illustration, masterpiece, aesthetic, trending on pixiv\n"
],
"color": "#232",
"bgcolor": "#353"
}
],
"links": [
[
4,
7,
0,
1,
3,
"LATENT"
],
[
5,
5,
0,
1,
1,
"CONDITIONING"
],
[
6,
6,
0,
1,
2,
"CONDITIONING"
],
[
7,
1,
0,
9,
0,
"LATENT"
],
[
9,
9,
0,
8,
0,
"IMAGE"
],
[
12,
11,
0,
5,
0,
"CLIP"
],
[
13,
11,
0,
6,
0,
"CLIP"
],
[
17,
2,
1,
11,
0,
"CLIP"
],
[
18,
2,
2,
9,
1,
"VAE"
],
[
20,
2,
0,
1,
0,
"MODEL"
]
],
"groups": [
{
"id": 1,
"title": "Loader",
"bounding": [
-83.66214666973298,
-872.3499345422797,
345.2272727272727,
759.2090909090908
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 2,
"title": "Prompt",
"bounding": [
276.0378533302666,
-871.24993454228,
452.63636363636346,
759.7363636363638
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 3,
"title": "Sampler",
"bounding": [
744.7742169666311,
-868.3317527240984,
362.34545454545446,
758.8272727272728
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 4,
"title": "Output",
"bounding": [
1125.5651260575403,
-867.5681163604622,
482.82727272727266,
754.4363636363636
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
}
],
"config": {},
"extra": {
"workflowRendererVersion": "LG",
"ds": {
"scale": 0.8410946974926454,
"offset": [
229.19979818488443,
1035.4556668655116
]
},
"frontendVersion": "1.37.11"
},
"version": 0.4
}
Paste this JSON into ComfyUI to load the workflow.
Average generation time per image over four runs of the workflow:
- Safetensors: average 7.15 s
- TensorRT Engine (Static): average 4.42 s
- TensorRT Engine (Dynamic): average 5.01 s












For reference, here are the generation logs.
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.68it/s]
Prompt executed in 7.20 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.72it/s]
Prompt executed in 7.14 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.72it/s]
Prompt executed in 7.16 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.73it/s]
Prompt executed in 7.10 seconds
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 6.19it/s]
Prompt executed in 4.44 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.25it/s]
Prompt executed in 4.40 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.25it/s]
Prompt executed in 4.44 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.26it/s]
Prompt executed in 4.43 seconds
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.38it/s]
Prompt executed in 5.06 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.48it/s]
Prompt executed in 4.97 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.48it/s]
Prompt executed in 5.01 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.47it/s]
Prompt executed in 4.98 seconds
Illustrious v2.0
I build a simple t2i workflow.

{
"id": "b5cd0ade-791d-433d-9596-b4ffdbee7509",
"revision": 0,
"last_node_id": 12,
"last_link_id": 24,
"nodes": [
{
"id": 9,
"type": "VAEDecode",
"pos": [
1151.916591121401,
-777.0911065933709
],
"size": [
140,
46
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 18
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
9
]
}
],
"properties": {
"Node name for S&R": "VAEDecode"
},
"widgets_values": []
},
{
"id": 7,
"type": "EmptyLatentImage",
"pos": [
772.2742169666307,
-462.4317527240989
],
"size": [
270,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
4
]
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
1024,
1024,
1
]
},
{
"id": 8,
"type": "SaveImage",
"pos": [
1151.0651260575403,
-678.7681163604623
],
"size": [
418.15454545454577,
448.9272727272729
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"outputs": [],
"properties": {},
"widgets_values": [
"ComfyUI"
]
},
{
"id": 1,
"type": "KSampler",
"pos": [
773.2396701460586,
-775.6564163399718
],
"size": [
270,
262
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 24
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 5
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 4
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
7
]
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
306787797402042,
"randomize",
25,
5,
"euler_ancestral",
"karras",
1
]
},
{
"id": 2,
"type": "CheckpointLoaderSimple",
"pos": [
-57.86214666973294,
-779.9499345422802
],
"size": [
270,
98
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
24
]
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
21,
22
]
},
{
"name": "VAE",
"type": "VAE",
"links": [
18
]
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"Illustrious\\Illustrious-XL-v2.0.safetensors"
]
},
{
"id": 5,
"type": "CLIPTextEncode",
"pos": [
302.6642996112584,
-779.6408928399219
],
"size": [
400,
200
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 21
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
5
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"(masterpiece:1.2), (best quality:1.2), (very aesthetic:1.2), newest, (foreshortening:1.4), (reaching towards viewer:1.3), hand reaching out to camera, (open palm:1.2), 1girl, young woman, white straight long hair, (blue colored inner hair:1.1), blunt bangs, sidelocks, (red floral hair ornament:1.2), (red eyelashes:1.1), makeup, smile, looking at viewer, late edo period, (dark purple hakama:1.2), (white kimono with small pink floral print:1.2), red ribbon bow on waist, (volumetric lighting:1.3), (depth of field:1.3), (sharp focus on hand and face:1.2), (blurry background:1.1), luminescent background, light particles, glowing, pastel colors, dynamic angle, portrait, intricate textile texture, ultra-detailed\n"
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
305.8642996112582,
-532.740892839922
],
"size": [
400,
200
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 22
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
6
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, artist name, bad feet, distorted hands, fused fingers, too many fingers, long neck, split view, multiple girls, (back view:1.3), (from behind:1.3), (side view:1.1), closed eyes, monochrome, grayscale, simple background\n"
],
"color": "#332922",
"bgcolor": "#593930"
},
{
"id": 10,
"type": "TensorRTLoader",
"pos": [
-56.95336518893458,
-614.8126931297093
],
"size": [
270,
82
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": []
}
],
"properties": {
"Node name for S&R": "TensorRTLoader"
},
"widgets_values": [
"illustrious-v2_dynamic_b1_512-1024-1536x512-1024-1536_c1-2-5.engine",
"sdxl_base"
]
}
],
"links": [
[
4,
7,
0,
1,
3,
"LATENT"
],
[
5,
5,
0,
1,
1,
"CONDITIONING"
],
[
6,
6,
0,
1,
2,
"CONDITIONING"
],
[
7,
1,
0,
9,
0,
"LATENT"
],
[
9,
9,
0,
8,
0,
"IMAGE"
],
[
18,
2,
2,
9,
1,
"VAE"
],
[
21,
2,
1,
5,
0,
"CLIP"
],
[
22,
2,
1,
6,
0,
"CLIP"
],
[
24,
2,
0,
1,
0,
"MODEL"
]
],
"groups": [
{
"id": 1,
"title": "Loader",
"bounding": [
-83.66214666973298,
-872.3499345422797,
345.2272727272727,
759.2090909090908
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 2,
"title": "Prompt",
"bounding": [
276.0378533302666,
-871.24993454228,
452.63636363636346,
759.7363636363638
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 3,
"title": "Sampler",
"bounding": [
744.7742169666311,
-868.3317527240984,
362.34545454545446,
758.8272727272728
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 4,
"title": "Output",
"bounding": [
1125.5651260575403,
-867.5681163604622,
482.82727272727266,
754.4363636363636
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
}
],
"config": {},
"extra": {
"workflowRendererVersion": "LG",
"ds": {
"scale": 1.354591421258882,
"offset": [
-394.38586897184723,
871.2919459477863
]
},
"frontendVersion": "1.37.11"
},
"version": 0.4
}
Paste this JSON into ComfyUI to load the workflow.
Average generation time per image over four runs of the workflow:
- Safetensors: average 7.20 s
- TensorRT Engine (Static): average 4.44 s
- TensorRT Engine (Dynamic): average 4.97 s












For reference, here are the generation logs.
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.59it/s]
Prompt executed in 7.36 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.70it/s]
Prompt executed in 7.18 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.73it/s]
Prompt executed in 7.15 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.73it/s]
Prompt executed in 7.10 seconds
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.27it/s]
Prompt executed in 4.41 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.27it/s]
Prompt executed in 4.41 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.28it/s]
Prompt executed in 4.42 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 6.05it/s]
Prompt executed in 4.53 seconds
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.41it/s]
Prompt executed in 5.03 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.52it/s]
Prompt executed in 4.94 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.51it/s]
Prompt executed in 4.96 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.51it/s]
Prompt executed in 4.93 seconds

NoobAI v1.1
Create a simple t2i workflow.

{
"id": "ead51eb7-625b-4e25-bcf7-295b1a565d66",
"revision": 0,
"last_node_id": 12,
"last_link_id": 24,
"nodes": [
{
"id": 9,
"type": "VAEDecode",
"pos": [
1151.916591121401,
-777.0911065933709
],
"size": [
140,
46
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 18
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
9
]
}
],
"properties": {
"Node name for S&R": "VAEDecode"
},
"widgets_values": []
},
{
"id": 7,
"type": "EmptyLatentImage",
"pos": [
772.2742169666307,
-462.4317527240989
],
"size": [
270,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
4
]
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
1024,
1024,
1
]
},
{
"id": 8,
"type": "SaveImage",
"pos": [
1151.0651260575403,
-678.7681163604623
],
"size": [
418.15454545454577,
448.9272727272729
],
"flags": {},
"order": 8,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"outputs": [],
"properties": {},
"widgets_values": [
"ComfyUI"
]
},
{
"id": 1,
"type": "KSampler",
"pos": [
773.2396701460586,
-775.6564163399718
],
"size": [
270,
262
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 24
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 5
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 4
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
7
]
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
941341786533022,
"randomize",
25,
5,
"euler_ancestral",
"karras",
1
]
},
{
"id": 11,
"type": "CLIPSetLastLayer",
"pos": [
299.31223566631354,
-780.7094248667589
],
"size": [
270,
58
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 17
}
],
"outputs": [
{
"name": "CLIP",
"type": "CLIP",
"links": [
12,
13
]
}
],
"properties": {
"Node name for S&R": "CLIPSetLastLayer"
},
"widgets_values": [
-2
]
},
{
"id": 5,
"type": "CLIPTextEncode",
"pos": [
302.6642996112584,
-664.8796866083965
],
"size": [
400,
200
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 12
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
5
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"1girl, young woman, (jumping:1.3), (from below:1.4), (wide angle:1.3), (one leg extended towards:1.4), (thighhighs:1.2), futuristic sneakers, detailed sole, (glowing eyes:0.8), iridescent hair, floating hair, wind-blown, cyber city background, motion blur, (chromatic aberration:1.1), (dynamic composition:1.2), sharp focus on face, hyper detailed, vibrant colors, glint, flare, detailed eyes, very awa, masterpiece, best quality, newest, highres, absurdres, "
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
305.8642996112582,
-417.9796866083964
],
"size": [
400,
200
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 13
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
6
]
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"low quality, worst quality, normal quality, text, signature, jpeg artifacts, bad anatomy, old, early, copyright name, watermark, artist name, signature, "
],
"color": "#332922",
"bgcolor": "#593930"
},
{
"id": 2,
"type": "CheckpointLoaderSimple",
"pos": [
-57.86214666973294,
-779.9499345422802
],
"size": [
270,
98
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
24
]
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
17
]
},
{
"name": "VAE",
"type": "VAE",
"links": [
18
]
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"NoobAI\\NoobAI-XL-v1.1.safetensors"
]
},
{
"id": 10,
"type": "TensorRTLoader",
"pos": [
-56.95336518893458,
-614.8126931297093
],
"size": [
270,
82
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": []
}
],
"properties": {
"Node name for S&R": "TensorRTLoader"
},
"widgets_values": [
"noobai-v11_static_b1_1024x1024_c2.engine",
"sdxl_base"
]
}
],
"links": [
[
4,
7,
0,
1,
3,
"LATENT"
],
[
5,
5,
0,
1,
1,
"CONDITIONING"
],
[
6,
6,
0,
1,
2,
"CONDITIONING"
],
[
7,
1,
0,
9,
0,
"LATENT"
],
[
9,
9,
0,
8,
0,
"IMAGE"
],
[
12,
11,
0,
5,
0,
"CLIP"
],
[
13,
11,
0,
6,
0,
"CLIP"
],
[
17,
2,
1,
11,
0,
"CLIP"
],
[
18,
2,
2,
9,
1,
"VAE"
],
[
24,
2,
0,
1,
0,
"MODEL"
]
],
"groups": [
{
"id": 1,
"title": "Loader",
"bounding": [
-83.66214666973298,
-872.3499345422797,
345.2272727272727,
759.2090909090908
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 2,
"title": "Prompt",
"bounding": [
276.0378533302666,
-871.24993454228,
452.63636363636346,
759.7363636363638
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 3,
"title": "Sampler",
"bounding": [
744.7742169666311,
-868.3317527240984,
362.34545454545446,
758.8272727272728
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
},
{
"id": 4,
"title": "Output",
"bounding": [
1125.5651260575403,
-867.5681163604622,
482.82727272727266,
754.4363636363636
],
"color": "#3f789e",
"font_size": 24,
"flags": {}
}
],
"config": {},
"extra": {
"workflowRendererVersion": "LG",
"ds": {
"scale": 1.017724583966101,
"offset": [
401.18295824848093,
1051.7864856479273
]
},
"frontendVersion": "1.37.11"
},
"version": 0.4
}
Pasting this JSON into ComfyUI loads the workflow.
Average generation time per image when the workflow was run four times (see the log-parsing sketch after the list):
- Safetensors: 7.12 s average
- TensorRT Engine Static: 4.38 s average
- TensorRT Engine Dynamic: 4.99 s average
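
Instead of averaging by hand each time, the per-run times can also be scraped straight from the console output. A minimal sketch, assuming you have saved the ComfyUI console output to a file (comfyui_console.log is only a placeholder path):

```python
import re
from statistics import mean

# Pull every "Prompt executed in X.XX seconds" line out of a saved console
# log and report the mean. The file name is a placeholder for wherever you
# redirected the ComfyUI console output.
pattern = re.compile(r"Prompt executed in ([0-9.]+) seconds")

with open("comfyui_console.log", encoding="utf-8") as f:
    times = [float(m.group(1)) for m in pattern.finditer(f.read())]

print(f"{len(times)} runs, {mean(times):.2f} s average per image")
```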

For reference, here are the generation logs (same order: Safetensors, static engine, dynamic engine).
got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.73it/s]
Prompt executed in 7.10 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.74it/s]
Prompt executed in 7.09 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.74it/s]
Prompt executed in 7.13 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.71it/s]
Prompt executed in 7.15 seconds

got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.30it/s]
Prompt executed in 4.39 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.33it/s]
Prompt executed in 4.37 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.33it/s]
Prompt executed in 4.37 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:03<00:00, 6.33it/s]
Prompt executed in 4.39 seconds

got prompt
0%| | 0/25 [00:00<?, ?it/s]got prompt
got prompt
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.48it/s]
Prompt executed in 5.00 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.51it/s]
Prompt executed in 4.94 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.49it/s]
Prompt executed in 5.00 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.42it/s]
Prompt executed in 5.02 seconds

Closing thoughts
Here are the results of this comparison.
| Model | Safetensors (s) | Static Engine (s) | Dynamic Engine (s) |
|---|---|---|---|
| SDXL 1.0 | 8.58 | 5.38 | 6.15 |
| Pony v6 | 7.15 | 4.42 | 5.01 |
| Illustrious v2.0 | 7.20 | 4.44 | 4.97 |
| NoobAI v1.1 | 7.12 | 4.38 | 4.99 |
All of these models are SDXL-based, so the differences between models are little more than noise. Converting to an engine, however, cut generation time by roughly 38% with the static engine and roughly 30% with the dynamic engine.
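Those percentages are simply the reduction relative to the Safetensors baseline in the table above; a quick check:

```python
# Average times (s) copied from the summary table above.
results = {
    "SDXL 1.0":         (8.58, 5.38, 6.15),
    "Pony v6":          (7.15, 4.42, 5.01),
    "Illustrious v2.0": (7.20, 4.44, 4.97),
    "NoobAI v1.1":      (7.12, 4.38, 4.99),
}

for model, (base, static, dynamic) in results.items():
    # Reduction in generation time relative to the Safetensors baseline.
    print(f"{model}: static -{1 - static / base:.0%}, dynamic -{1 - dynamic / base:.0%}")
```

Across the four models this works out to roughly 37-38% for the static engine and 28-31% for the dynamic engine.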
Looking purely at generation speed, the static engine comes out ahead, but that is because it is optimized for one specific image size (resolution) and batch size (number of images). It is therefore especially useful for specialized workloads where you generate large volumes of images under the same conditions every time.
The dynamic engine, by contrast, is the better fit for everyday generation where the prompt is rewritten frequently (so the token count, context_opt, varies) or where you want to use several image sizes. It is somewhat slower than the static engine, but since you don't have to worry about those restrictions, I personally found its flexibility more convenient.
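For reference, the constraints baked into each engine can be read off the file names used in the workflows above. This is just my reading of those names (batch size, resolution range as min-opt-max, and context range), not an official naming scheme:

```python
# Split the engine file names used in this article into their parts.
# This reflects how I interpret the names, not a documented API.
names = [
    "noobai-v11_static_b1_1024x1024_c2.engine",
    "illustrious-v2_dynamic_b1_512-1024-1536x512-1024-1536_c1-2-5.engine",
]

for name in names:
    prefix, kind, batch, resolution, context = name.removesuffix(".engine").split("_")
    print(f"{prefix}: {kind}, batch={batch[1:]}, resolution={resolution}, context={context[1:]}")
```

Read this way, the static engine is pinned to a single 1024x1024 resolution with a fixed context, while the dynamic engine accepts anything in the 512-1536 range with a variable context, which is exactly the trade-off described above.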
All that said, the speedup turned out to be larger than I expected, so personally I'm quite satisfied.


