[NVBug 6108145] Fix PTQ calibration and export for fused-experts MoE (Qwen3.5-MoE VLM) #1340
Conversation
Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com>
📝 Walkthrough

This pull request adds support for HuggingFace fused-expert modules in PTQ quantization by introducing dedicated export paths, normalizing quantizer names for wildcard matching, forcing the eager expert implementation, updating configuration, and extending test coverage.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 5 passed | ❌ 1 failed (warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modelopt/torch/export/unified_export_hf.py`:
- Around line 649-657: The elif branch that checks for the same attribute
gate_up_proj_weight_quantizers is dead code because the preceding if block
handles that case and continues; remove the unreachable elif block (the second
check for gate_up_proj_weight_quantizers and its body) so only the initial
handling using fsdp2_aware_weight_update and _export_fused_experts(sub_module,
dtype) remains, leaving no duplicate checks for gate_up_proj_weight_quantizers
in the loop.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 6c028304-1560-45eb-bcba-de04e7c03a20
📒 Files selected for processing (4)
- examples/llm_ptq/hf_ptq.py
- modelopt/torch/export/unified_export_hf.py
- modelopt/torch/quantization/conversion.py
- modelopt_recipes/general/ptq/nvfp4_experts_only-fp8_kv.yaml
```python
# _QuantFusedExperts uses plural `gate_up_proj_weight_quantizers` (ModuleList),
# which get_quantization_format's singular-weight_quantizer check misses. Handle
# it explicitly before the format gate so fused-experts get split + quantized.
if hasattr(sub_module, "gate_up_proj_weight_quantizers"):
    from modelopt.torch.export.moe_utils import _export_fused_experts

    with fsdp2_aware_weight_update(model, sub_module, reshard=False):
        _export_fused_experts(sub_module, dtype)
    continue
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify that lines 689-695 are now dead code by checking if they can ever be reached
# for modules with gate_up_proj_weight_quantizers

# Show both code blocks for comparison
echo "=== Early check (lines 649-657) ==="
sed -n '649,657p' modelopt/torch/export/unified_export_hf.py
echo ""
echo "=== Later duplicate check (lines 689-695) ==="
sed -n '689,695p' modelopt/torch/export/unified_export_hf.py
```

Repository: NVIDIA/Model-Optimizer
Length of output: 1176

🏁 Script executed:

```bash
#!/bin/bash
# Show the full if-elif-else structure around lines 649-695 to understand control flow
sed -n '645,700p' modelopt/torch/export/unified_export_hf.py | cat -n
```

Repository: NVIDIA/Model-Optimizer
Length of output: 3631
Remove unreachable dead code at lines 689–695.

The early check at lines 649–657 catches all modules with `gate_up_proj_weight_quantizers` and executes `continue`, which skips to the next loop iteration. The `elif` at line 689 checking the same condition can never be reached because:

- Any module matching the condition is handled at line 652 and exits via `continue`.
- The `elif` is only evaluated when the first `if` was false, meaning the module lacks `gate_up_proj_weight_quantizers`.

Lines 689–695 should be removed.
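The unreachable-branch pattern the review flags can be shown with a schematic loop (illustrative Python, not the actual ModelOpt code): once the early check `continue`s, a later check of the same condition in the same iteration can never fire.

```python
def process(items):
    """Schematic of the control flow: an early exit makes the later check dead."""
    early, late = [], []
    for item in items:
        if "fused" in item:        # early check, mirrors the hoisted export path
            early.append(item)
            continue               # skips the rest of this iteration
        # ... unrelated handling for non-fused modules would go here ...
        if "fused" in item:        # dead code: fused items never reach this line
            late.append(item)
    return early, late

result = process(["fused_experts", "linear"])
print(result)  # → (['fused_experts'], [])
```

Whichever real branch is kept, the other is guaranteed never to execute, which is why the review asks for one of them to be deleted outright.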
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modelopt/torch/export/unified_export_hf.py` around lines 649 - 657, The elif
branch that checks for the same attribute gate_up_proj_weight_quantizers is dead
code because the preceding if block handles that case and continues; remove the
unreachable elif block (the second check for gate_up_proj_weight_quantizers and
its body) so only the initial handling using fsdp2_aware_weight_update and
_export_fused_experts(sub_module, dtype) remains, leaving no duplicate checks
for gate_up_proj_weight_quantizers in the loop.
Codecov Report❌ Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #1340      +/-   ##
==========================================
- Coverage   74.46%   72.61%   -1.85%
==========================================
  Files         464      481      +17
  Lines       50089    52610    +2521
==========================================
+ Hits        37300    38205     +905
- Misses      12789    14405    +1616
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com>
**cjluo-nv** left a comment:
This is a well-structured bug fix with good test coverage. The 4-part fix (eager impl forcing, quantizer name normalization, export path hoisting, YAML layerwise change) is logically coherent and well tested. However, there's a duplicate code path in `unified_export_hf.py` that should be cleaned up.

Design: Despite the complexity gate firing, this is a bug fix within existing systems, not an architectural change. No new abstractions are introduced.

Tests: Comprehensive — covers `force_eager_experts_impl_on_the_fly` edge cases, and an end-to-end calibration test that guards the full pipeline (name normalization → wildcard matching → amax collection). Good.

Issue: The new early-exit block in `_process_quantized_modules` makes the existing `elif hasattr(sub_module, "gate_up_proj_weight_quantizers")` block (deeper in the same function, inside the `get_quantization_format() != QUANTIZATION_NONE` guard) dead code. One of these should be removed.
```python
from modelopt.torch.export.moe_utils import _export_fused_experts

with fsdp2_aware_weight_update(model, sub_module, reshard=False):
    _export_fused_experts(sub_module, dtype)
```
This new early-exit block makes the existing `elif hasattr(sub_module, "gate_up_proj_weight_quantizers")` block at the end of this function (inside the `get_quantization_format(sub_module) != QUANTIZATION_NONE` branch, around line ~700 in the full file) dead code — the `continue` here means we never reach that path.

Please remove the old dead-code block to avoid confusion for future maintainers who might not realize both paths exist.
🧹 Nitpick comments (2)

**modelopt/torch/quantization/plugins/huggingface.py (1)**

1441-1449: Harden recursive config traversal with a cycle guard.

The recursive `_force()` walk can loop forever on cyclic config graphs. Add a visited-set guard to make this robust.

Proposed patch:

```diff
 nested_cfg_attrs = ("text_config", "vision_config", "audio_config", "speech_config")
+visited_cfg_ids = set()

 def _force(cfg):
     if cfg is None:
         return
+    cfg_id = id(cfg)
+    if cfg_id in visited_cfg_ids:
+        return
+    visited_cfg_ids.add(cfg_id)
     if hasattr(cfg, "_experts_implementation"):
         cfg._experts_implementation = "eager"
     for sub in nested_cfg_attrs:
         if hasattr(cfg, sub):
             _force(getattr(cfg, sub))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modelopt/torch/quantization/plugins/huggingface.py` around lines 1441-1449: the recursive helper `_force` can loop on cyclic config graphs; modify `_force` to accept and maintain a visited set (e.g., of object ids) and skip recursing into cfg instances already seen, so each cfg is processed only once. Update the `_force` signature to take an optional visited set, add the current cfg to visited (use `id(cfg)`), return early if already visited, and keep the existing behavior of setting `cfg._experts_implementation = "eager"` and iterating `nested_cfg_attrs`; ensure recursive calls pass the same visited set.

**tests/unit/torch/quantization/plugins/test_fused_experts.py (1)**
385-441: Make registry cleanup exception-safe in the calibration test.

If an assertion fails before the last line, the temporary registry entry may leak into subsequent tests.

Proposed patch:

```diff
 def test_calibration_populates_all_expert_quantizers(self):
     """After PTQ, every input/weight quantizer on the fused-experts module has amax set."""
     import modelopt.torch.quantization as mtq

     model = _TinyMoEModel()
     expert_type = type(model.moe.experts)
     self._cleanup_registry(expert_type)
-
-    quant_cfg = {
+    try:
+        quant_cfg = {
         "quant_cfg": [
             {"quantizer_name": "*", "enable": False},
             {
                 "quantizer_name": "*gate_up_proj_input_quantizer",
                 "cfg": {"num_bits": 8, "axis": None},
@@
     for idx in range(NUM_EXPERTS):
         assert experts.gate_up_proj_weight_quantizers[idx].amax is not None, (
             f"gate_up_proj_weight_quantizers[{idx}].amax is None — "
             "plural ModuleList name normalization in _match_quantizer likely broken."
         )
         assert experts.down_proj_weight_quantizers[idx].amax is not None, (
             f"down_proj_weight_quantizers[{idx}].amax is None."
         )
-
-    self._cleanup_registry(expert_type)
+    finally:
+        self._cleanup_registry(expert_type)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/torch/quantization/plugins/test_fused_experts.py` around lines 385-441: the test creates a temporary registry entry via `expert_type` and calls `self._cleanup_registry(expert_type)` at the end but can leak if an assertion fails; wrap the main test actions (quant_cfg setup, forward_loop, `mtq.quantize`, and all asserts) in a try/finally and move the final `self._cleanup_registry(expert_type)` into the finally block so cleanup always runs; keep `expert_type` assigned before the try and leave the initial cleanup call (before quantization) as-is.
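The exception-safety concern generalizes: any test that mutates a shared registry should restore it in a `finally`. A minimal standalone sketch (toy registry and hypothetical names, not the actual test code):

```python
registry = {}

def register(key):
    registry[key] = object()

def cleanup(key):
    registry.pop(key, None)

def run_test(should_fail=False):
    key = "TempExperts"          # hypothetical temporary registry entry
    register(key)
    try:
        # the real test would run quantize + amax assertions here
        assert not should_fail, "simulated mid-test assertion failure"
    finally:
        cleanup(key)             # always runs, so nothing leaks into later tests

run_test()
assert "TempExperts" not in registry
try:
    run_test(should_fail=True)
except AssertionError:
    pass
assert "TempExperts" not in registry  # cleaned up even when the test body fails
```

The `finally` block guarantees cleanup on both the passing and the failing path, which is exactly the property the nitpick asks for.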
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@modelopt/torch/quantization/plugins/huggingface.py`:
- Around line 1441-1449: The recursive helper _force can loop on cyclic config
graphs; modify _force to accept and maintain a visited set (e.g., of object ids)
and skip recursing into cfg instances already seen, so each cfg is processed
only once. Specifically, update the _force signature to take an optional visited
set, add the current cfg to visited (use id(cfg) or cfg itself), return early if
already visited, and keep the existing behavior of setting
cfg._experts_implementation = "eager" and iterating nested_cfg_attrs; ensure
recursive calls pass the same visited set. This change hardens _force and
prevents infinite recursion on cycles while still touching the same symbols
(_force, nested_cfg_attrs, cfg._experts_implementation).
In `@tests/unit/torch/quantization/plugins/test_fused_experts.py`:
- Around line 385-441: The test creates a temporary registry entry via
expert_type and calls self._cleanup_registry(expert_type) at the end but can
leak if an assertion fails; wrap the main test actions (quant_cfg setup,
forward_loop, mtq.quantize, and all asserts) in a try/finally and move the final
self._cleanup_registry(expert_type) into the finally block so cleanup always
runs; keep expert_type assigned before the try and leave the initial cleanup
call (before quantization) as-is.
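The visited-set guard suggested for `_force` above can be exercised standalone. This sketch uses a toy `Config` class in place of an HF `PretrainedConfig`; without the guard, the deliberate `text_config`/`vision_config` cycle below would recurse forever.

```python
class Config:
    def __init__(self):
        self._experts_implementation = "grouped"
        self.text_config = None
        self.vision_config = None

def force_eager(cfg, visited=None):
    """Set _experts_implementation='eager' on cfg and nested configs, cycle-safely."""
    if cfg is None:
        return
    visited = set() if visited is None else visited
    if id(cfg) in visited:       # already processed: break the cycle
        return
    visited.add(id(cfg))
    if hasattr(cfg, "_experts_implementation"):
        cfg._experts_implementation = "eager"
    for sub in ("text_config", "vision_config"):
        force_eager(getattr(cfg, sub, None), visited)

root, child = Config(), Config()
root.text_config = child
child.vision_config = root       # deliberate cycle: child points back to root
force_eager(root)
print(root._experts_implementation, child._experts_implementation)  # → eager eager
```

Using `id(cfg)` rather than the object itself keeps the guard working even for config classes that are unhashable or define custom equality.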
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 310a1d2b-745c-4df5-8317-038bf82c199f
📒 Files selected for processing (2)
- modelopt/torch/quantization/plugins/huggingface.py
- tests/unit/torch/quantization/plugins/test_fused_experts.py
What does this PR do?

Type of change: Bug fix

Fixes a 4-bug cascade that caused silent PTQ failure on Qwen3.5-MoE VLMs (Qwen3.6-35B-A3B): calibration appeared to succeed but produced token salad at inference. Root cause: HF's `@use_experts_implementation` dispatches the expert forward to `torch._grouped_mm` / `torch.bmm`, bypassing the `F.linear` hook that captures activations, so `gate_up_proj_input_quantizer` / `down_proj_input_quantizer` never calibrated and no `input_scale` tensors were emitted.
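The bypass mechanism can be reproduced standalone (a toy example, not ModelOpt code): a forward hook on an `nn.Linear` fires only when the module itself is called; a fused batched matmul that consumes the same weight tensor directly never triggers it, which is why the input quantizers saw no activations.

```python
import torch
import torch.nn as nn

calls = []
lin = nn.Linear(4, 4, bias=False)
lin.register_forward_hook(lambda mod, inp, out: calls.append(inp[0].shape))

x = torch.randn(2, 4)

# Eager path: the module is called, so the hook (where a calibrator would
# observe activations) fires.
y_eager = lin(x)

# Fused-style path: the same weight is consumed by a batched matmul directly,
# bypassing the module's __call__; the hook never sees these activations.
y_fused = torch.bmm(x.unsqueeze(0), lin.weight.t().unsqueeze(0)).squeeze(0)

assert torch.allclose(y_eager, y_fused, atol=1e-5)
print(len(calls))  # → 1: only the eager call was observed
```

Forcing the eager implementation makes every expert go through the hooked module path, which is the first part of the fix described below.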
Changes:

- Force the eager experts implementation across all nested HF configs (`text_config` / `vision_config` / …) so per-expert `F.linear` calls are visible to the calibration hook.
- Normalize plural per-expert quantizer names (ModuleList entries such as `gate_up_proj_weight_quantizers.0` → `weight_quantizer`) before fnmatch, so wildcards like `*mlp.experts*weight_quantizer` match fused-expert quantizers.
- Hoist the fused-experts export path above the `get_quantization_format()` gate so `_export_fused_experts()` runs even when the top-level format query returns `QUANTIZATION_NONE` (happens for experts-only recipes).
- Update `nvfp4_experts_only-fp8_kv.yaml` so the experts-only recipe no longer breaks the layerwise walker.
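The normalization step can be illustrated with a small sketch (the helper name and regex here are illustrative assumptions, not ModelOpt's actual `_match_quantizer` internals): plural per-expert ModuleList entries are rewritten to the singular form before `fnmatch`, so existing user wildcards keep matching.

```python
import fnmatch
import re

def normalize_quantizer_name(name: str) -> str:
    # "...gate_up_proj_weight_quantizers.3" -> "...gate_up_proj.weight_quantizer"
    # (illustrative regex mapping a plural ModuleList entry to the singular form)
    return re.sub(r"(\w+)_(weight|input)_quantizers\.\d+$", r"\1.\2_quantizer", name)

pattern = "*mlp.experts*weight_quantizer"
raw = "model.layers.0.mlp.experts.gate_up_proj_weight_quantizers.3"
norm = normalize_quantizer_name(raw)

print(norm)  # → model.layers.0.mlp.experts.gate_up_proj.weight_quantizer
assert not fnmatch.fnmatch(raw, pattern)   # plural name misses the wildcard
assert fnmatch.fnmatch(norm, pattern)      # normalized name matches
```

Without this rewrite, a recipe wildcard ending in `weight_quantizer` silently skips every fused-expert quantizer, which is how the experts-only config missed calibration.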
Usage
Testing
End-to-end PTQ → vLLM deploy → NEL eval on Qwen3.6-35B-A3B (256 experts × 40 layers, 35B params):
Hook-call diagnostic: 0 → 6720 per-expert F.linear calls during calibration after the fix; 0 → 30720
input_scale tensors emitted in the exported checkpoint.
FP8 fused-MoE path still produces gibberish — separate follow-up (vLLM per-expert weight_scale handling).
Before your PR is "Ready for review"

- Make sure you read and follow Contributor guidelines and your commits are signed (`git commit -s -S`).
- Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded `trust_remote_code=True`, `torch.load(..., weights_only=False)`, `pickle`, etc.).
- CONTRIBUTING.md: ✅ / ❌ / N/A

Additional Information
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
Tests