
<fix>[header]: add Tpm cascade extensions #3737

Open
ZStack-Robot wants to merge 1 commit into feature-zsv-5.0.0-vm-support-vtpm-and-secuceboot from sync/wenhao.zhang/zsv-ldap

Conversation

@ZStack-Robot
Collaborator

Resolves: ZSV-11799
Related: ZSV-11310

Change-Id: I69656261626e6c73636874707076726378706d68

sync from gitlab !9605

@coderabbitai

coderabbitai bot commented Apr 13, 2026

Warning

Rate limit exceeded

@MatheMatrix has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 2 minutes and 53 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 2 minutes and 53 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 87dc4e2c-5685-4605-a91a-1b6d32a51764

📥 Commits

Reviewing files that changed from the base of the PR and between fffbed5 and f7821b4.

⛔ Files ignored due to path filters (1)
  • conf/springConfigXml/VmInstanceManager.xml is excluded by !**/*.xml
📒 Files selected for processing (12)
  • compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java

Walkthrough

Adds three asynchronous cascade extensions to handle cascading deletion of TPM, host files, and host backup files; adds/renames the corresponding deletion message types and propagates them; the KVM plugin adds message handlers that delete the matching database records, with cleanup performed after the cascade completes.

Changes

Cohort / File(s) Summary
Cascade extension handling
compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java, compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java, compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
Adds three classes extending AbstractAsyncCascadeExtension that route on the CascadeAction (check / delete / force-delete / cleanup), resolve the target VO lists, asynchronously send the corresponding deletion message for each VO and collect the replies, aggregate errors on failure, call dbf.eoCleanup(...) during cleanup, and implement the edge name, resource names, and propagation of the action to child resources.
TPM message type updates
header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java, header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
Renames RemoveTpmMsg/RemoveTpmReply to TpmDeletionMsg/TpmDeletionReply, changes the message base class from NeedReplyMessage to DeletionMessage, and updates the factory methods and references accordingly.
New VM host file / backup file messages
header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java, header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java, header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java, header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
Adds four message/reply classes: VmHostFileDeletionMsg/Reply and VmHostBackupFileDeletionMsg/Reply. The messages extend DeletionMessage and carry a uuid field identifying the host file or backup file to delete.
KVM plugin message handling updates
plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java, plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java, plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java
KvmSecureBootManager adds handling for VmHostFileDeletionMsg and VmHostBackupFileDeletionMsg, deleting the matching VmHostFileVO / VmHostBackupFileVO from the DB by uuid and replying; KvmTpmExtensions and KvmTpmManager replace the old RemoveTpm message types with TpmDeletionMsg/TpmDeletionReply and adjust the related constructors and branches.
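The pattern described above for the cascade extensions (send a deletion message per VO, collect the replies, aggregate failures, and only then remove DB rows) can be sketched in plain Java. This is a simplified stand-in, not ZStack's actual code: the DeletionReply class and the synchronous loop below replace the real CloudBusCallBack / While / WhileDoneCompletion machinery, and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Minimal sketch of the cascade-deletion pattern described above.
// DeletionReply and the synchronous loop are simplified stand-ins for
// ZStack's CloudBusCallBack / While / WhileDoneCompletion machinery.
public class CascadeSketch {
    public static class DeletionReply {
        public final boolean success;
        public final String error;

        public DeletionReply(boolean success, String error) {
            this.success = success;
            this.error = error;
        }
    }

    // Sends one deletion "message" per uuid, aggregates errors into errorsOut,
    // and returns the uuids that are safe to remove from the DB: all of them
    // when every reply succeeded, none when any failed, so rows are never
    // dropped while the remote deletion is incomplete.
    public static List<String> runCascade(List<String> uuids,
                                          Function<String, DeletionReply> send,
                                          List<String> errorsOut) {
        for (String uuid : uuids) {
            DeletionReply reply = send.apply(uuid);
            if (!reply.success) {
                errorsOut.add(reply.error); // collect instead of only logging
            }
        }
        return errorsOut.isEmpty() ? new ArrayList<>(uuids) : new ArrayList<>();
    }
}
```

The key design point, which the review comments below insist on, is that the DB removal is gated on the aggregated error list being empty rather than running unconditionally in the completion handler.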

Sequence Diagram(s)

sequenceDiagram
    participant VM as VM instance
    participant Cascade as Cascade extension
    participant DB as Database
    participant CloudBus as CloudBus
    participant Service as Target service (e.g. TPM/SecureBoot)

    VM->>Cascade: Initiate cascade deletion request
    activate Cascade

    Cascade->>DB: Query child resource VO list
    DB-->>Cascade: Return VO list

    loop For each child resource VO
        Cascade->>CloudBus: Send deletion message (TpmDeletionMsg / VmHostFileDeletionMsg)
        CloudBus->>Service: Forward deletion message
        Service->>DB: Delete record by uuid
        Service-->>CloudBus: Return deletion reply
        CloudBus-->>Cascade: Asynchronous reply result
    end

    Cascade->>DB: Call eoCleanup or finish cleanup
    DB-->>Cascade: Cleanup done

    Cascade-->>VM: Mark cascade complete
    deactivate Cascade
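The KVM-side step in the diagram ("Delete record by uuid", then reply) reduces to a small handler pattern. Below is a self-contained stand-in with a plain map playing the role of the database facade; every name here is hypothetical, not ZStack's real API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the KVM-side deletion handler in the diagram:
// remove the row keyed by the message uuid, then reply with the outcome.
// The map substitutes for databaseFacade.removeByPrimaryKey(...).
public class HandlerSketch {
    private final Map<String, String> vmHostFileTable = new HashMap<>();

    public void put(String uuid, String content) {
        vmHostFileTable.put(uuid, content);
    }

    // Mirrors handle(VmHostFileDeletionMsg): delete by primary key, then
    // report the result; the real handler would call bus.reply(msg, reply).
    public boolean handleDeletion(String uuid) {
        return vmHostFileTable.remove(uuid) != null;
    }
}
```

Deletion by primary key is idempotent here: a second call for the same uuid simply finds nothing to remove.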

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 The cascade taps lightly at the door, messages line by line,
UUIDs queue up to go, services busy forwarding,
Replies gather like a tide, errors merged and tucked away,
The database swept clean, the extensions add new bloom.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

Check name Status Explanation Resolution
Title check ⚠️ Warning: The title '[header]: add Tpm cascade extensions' is only partially accurate; it claims header-level TPM cascade extensions, but the changes span cascade deletion implementations across the compute, header, and plugin/kvm modules. Resolution: retitle to reflect all the changes, e.g. 'Add cascade extensions for TPM, VmHostFile, and VmHostBackupFile' or 'Implement cascade deletion handlers for TPM and host file resources'.
Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
Check name Status Explanation
Description check ✅ Passed: The description, though brief, is relevant to the changeset; it references the resolved issue (ZSV-11799) and a related issue (ZSV-11310), indicating a purposeful change.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java (1)

319-323: ⚠️ Potential issue | 🟠 Major

Propagate the forceDelete flag.

This code does not carry isForceDelete(), which TpmDeletionMsg inherits from DeletionMessage, into context.force, so removeTpmFromVm() still runs the VM state check during a forced cascade deletion. As a result DELETION_FORCE_DELETE_CODE degrades into a normal delete, and a VM in the middle of deletion may fail the state check and leave TPM and host file records behind.

🔧 Suggested fix
         static RemoveTpmFromVmContext valueOf(TpmDeletionMsg msg) {
             RemoveTpmFromVmContext context = new RemoveTpmFromVmContext();
             context.vmInstanceUuid = msg.getVmInstanceUuid();
             context.tpmUuid = msg.getTpmUuid();
+            context.force = msg.isForceDelete();
             return context;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java` around lines
319 - 323, RemoveTpmFromVmContext.valueOf does not pass isForceDelete() (which
TpmDeletionMsg inherits from DeletionMessage) into context.force, so
removeTpmFromVm still performs the VM state check during a forced cascade
deletion. Fix: in RemoveTpmFromVmContext.valueOf(TpmDeletionMsg msg), set
context.force = msg.isForceDelete() (using DeletionMessage's isForceDelete()) so
that removeTpmFromVm() and DELETION_FORCE_DELETE_CODE correctly recognize the
force-delete path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java`:
- Around line 83-107: The code currently treats failed TpmDeletionMsg replies as
mere logs but always calls dbf.removeByPrimaryKeys(...) and completion.success()
in WhileDoneCompletion.done(...), which can delete DB records even when remote
deletions failed; change the logic to track failures from each CloudBusCallBack
(check reply.isSuccess()) by adding the reply error to a thread-safe
ErrorCodeList/shared collection (or set a failure flag), and in the
WhileDoneCompletion.done(ErrorCodeList errorCodeList) only call
dbf.removeByPrimaryKeys(...) and completion.success() if there are no errors;
otherwise call completion.fail(...) (or propagate the collected ErrorCodeList)
so DB rows (TpmVO) are not removed when TpmDeletionMsg deletion failed. Ensure
you reference the existing TpmDeletionMsg callbacks and
WhileDoneCompletion(done) path when implementing this change.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`:
- Around line 83-106: The current loop sends VmHostFileDeletionMsg but on
individual failure only logs and still removes the corresponding VmHostFileVOs
and returns success; change the logic so failures from the CloudBusCallBack are
collected and cause the cascade completion to fail instead of unconditionally
deleting DB rows: in the CloudBusCallBack for VmHostFileDeletionMsg record
failures (e.g., accumulate reply.getError() or build ErrorCode entries) and pass
them into the WhileDoneCompletion; in the WhileDoneCompletion.done
implementation, only call dbf.removeByPrimaryKeys(...) and completion.success()
if there are no collected errors, otherwise do not remove the VOs and call
completion.fail(...) with a composed ErrorCodeList (or the first error) to
propagate the downstream deletion failures; adjust references:
VmHostFileDeletionMsg, CloudBusCallBack, WhileDoneCompletion, VmHostFileVO,
dbf.removeByPrimaryKeys, and completion to implement this
error-collection-and-propagation behavior.

In `@plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java`:
- Around line 1076-1085: The two handlers handle(VmHostFileDeletionMsg msg) and
handle(VmHostBackupFileDeletionMsg msg) currently only remove VmHostFileVO /
VmHostBackupFileVO and leave orphaned VmHostFileContentVO rows; update both
handlers to also call databaseFacade.removeByPrimaryKey(msg.getUuid(),
VmHostFileContentVO.class) (before or after removing the VO) so the content
records with the same UUID are deleted as well, keeping the cleanup consistent
with the deletion path used around lines 1026-1033.

In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java`:
- Around line 522-525: The fallback cleanup path vmJustBeforeDeleteFromDb()
currently constructs a TpmDeletionMsg without marking it as a forced deletion,
so the synchronous call into KvmTpmManager.removeTpmFromVm() may skip removal
due to VM state checks; update the message construction to call
TpmDeletionMsg.setForceDelete(true) before
bus.makeTargetServiceIdByResourceUuid(...) so the removal is executed
unconditionally and avoids leaving TPM, VmHostFileVO and VmHostBackupFileVO
residues.

---

Outside diff comments:
In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java`:
- Around line 319-323: RemoveTpmFromVmContext.valueOf does not pass
isForceDelete() (which TpmDeletionMsg inherits from DeletionMessage) into
context.force, so removeTpmFromVm still performs the VM state check during
forced cascade deletion. Fix: in RemoveTpmFromVmContext.valueOf(TpmDeletionMsg
msg), set context.force = msg.isForceDelete() (using DeletionMessage's
isForceDelete()) so that removeTpmFromVm() and DELETION_FORCE_DELETE_CODE
correctly recognize the force-delete path.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 583666a9-c1e9-4645-897a-1fa45576b206

📥 Commits

Reviewing files that changed from the base of the PR and between ff216d7 and 44e8612.

⛔ Files ignored due to path filters (1)
  • conf/springConfigXml/VmInstanceManager.xml is excluded by !**/*.xml
📒 Files selected for processing (12)
  • compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java

Comment on lines +1076 to +1085
private void handle(VmHostFileDeletionMsg msg) {
    VmHostFileDeletionReply reply = new VmHostFileDeletionReply();
    databaseFacade.removeByPrimaryKey(msg.getUuid(), VmHostFileVO.class);
    bus.reply(msg, reply);
}

private void handle(VmHostBackupFileDeletionMsg msg) {
    VmHostBackupFileDeletionReply reply = new VmHostBackupFileDeletionReply();
    databaseFacade.removeByPrimaryKey(msg.getUuid(), VmHostBackupFileVO.class);
    bus.reply(msg, reply);
}


⚠️ Potential issue | 🟠 Major

VmHostFileContentVO also needs to be deleted here.

These two handlers only delete VmHostFileVO / VmHostBackupFileVO and do not clean up the content table rows with the same UUID. Judging from the existing deletion path at lines 1026-1033 of the same file, VmHostFileContentVO must be deleted explicitly here, otherwise every cascade deletion leaves orphaned content records behind.

🔧 Suggested fix
     private void handle(VmHostFileDeletionMsg msg) {
         VmHostFileDeletionReply reply = new VmHostFileDeletionReply();
+        SQL.New(VmHostFileContentVO.class)
+                .eq(VmHostFileContentVO_.uuid, msg.getUuid())
+                .delete();
         databaseFacade.removeByPrimaryKey(msg.getUuid(), VmHostFileVO.class);
         bus.reply(msg, reply);
     }

     private void handle(VmHostBackupFileDeletionMsg msg) {
         VmHostBackupFileDeletionReply reply = new VmHostBackupFileDeletionReply();
+        SQL.New(VmHostFileContentVO.class)
+                .eq(VmHostFileContentVO_.uuid, msg.getUuid())
+                .delete();
         databaseFacade.removeByPrimaryKey(msg.getUuid(), VmHostBackupFileVO.class);
         bus.reply(msg, reply);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java` around
lines 1076 - 1085, The two handlers handle(VmHostFileDeletionMsg msg) and
handle(VmHostBackupFileDeletionMsg msg) currently only remove VmHostFileVO /
VmHostBackupFileVO and leave orphaned VmHostFileContentVO rows; update both
handlers to also call databaseFacade.removeByPrimaryKey(msg.getUuid(),
VmHostFileContentVO.class) (before or after removing the VO) so the content
records with the same UUID are deleted as well, keeping the cleanup consistent
with the deletion path used around lines 1026-1033.

Comment thread plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
@MatheMatrix
Owner

Comment on plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java:

Comment from wenhao.zhang:

VmHostFileContentVO is deleted via cascade

@zstack-robot-1
Collaborator

Comment from wenhao.zhang:

Will take a look tomorrow at why the case is stuck

@MatheMatrix MatheMatrix force-pushed the sync/wenhao.zhang/zsv-ldap branch 2 times, most recently from 7e1b863 to 5fd0450 on April 14, 2026 02:52

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java (1)

319-323: ⚠️ Potential issue | 🔴 Critical

The forceDelete flag must be passed through to context.force here.

Line 330 uses context.force to decide whether to skip the VM state check, but valueOf(TpmDeletionMsg) currently only copies the UUIDs. As a result, a DELETION_FORCE_DELETE issued by TpmCascadeExtension degrades into a normal delete, and the VM deletion chain fails at the state check.

🔧 Suggested fix
         static RemoveTpmFromVmContext valueOf(TpmDeletionMsg msg) {
             RemoveTpmFromVmContext context = new RemoveTpmFromVmContext();
             context.vmInstanceUuid = msg.getVmInstanceUuid();
             context.tpmUuid = msg.getTpmUuid();
+            context.force = msg.isForceDelete();
             return context;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java` around lines
319 - 323, The valueOf(TpmDeletionMsg) factory currently only copies
vmInstanceUuid and tpmUuid into RemoveTpmFromVmContext, so the force flag from
the incoming TpmDeletionMsg is lost and DELETION_FORCE_DELETE requests are
treated as normal deletes; modify valueOf(TpmDeletionMsg) to also set
context.force from the message (e.g. context.force = msg.isForceDelete() or
msg.getForceDelete()) so that the later check against context.force (used at the
VM state validation around line 330) correctly skips state validation for force
deletes.
♻️ Duplicate comments (2)
compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java (1)

83-106: ⚠️ Potential issue | 🔴 Critical

Do not go on to delete TpmVO when the deletion failed.

When reply.isSuccess() == false this branch only logs, while the finishing stage still unconditionally calls removeByPrimaryKeys(..., TpmVO.class) and completion.success(). Once a TpmDeletionMsg fails, the TPM's host file cleanup and secret cleanup are not yet done, but the database record is already gone.

🔧 Suggested fix
         new While<>(tpmList).each((tpm, whileCompletion) -> {
             TpmDeletionMsg msg = new TpmDeletionMsg();
             msg.setTpmUuid(tpm.getUuid());
             msg.setVmInstanceUuid(tpm.getVmInstanceUuid());
             msg.setForceDelete(action.isActionCode(CascadeConstant.DELETION_FORCE_DELETE_CODE));
             bus.makeTargetServiceIdByResourceUuid(msg, SERVICE_ID, msg.getTpmUuid());
             bus.send(msg, new CloudBusCallBack(whileCompletion) {
                 @Override
                 public void run(MessageReply reply) {
                     if (reply.isSuccess()) {
                         logger.debug(String.format("deleted Tpm[uuid:%s] from VM[uuid:%s]",
                                 tpm.getUuid(), tpm.getVmInstanceUuid()));
                     } else {
                         logger.warn(String.format("failed to deleted Tpm[uuid:%s] from VM[uuid:%s]",
                                 tpm.getUuid(), tpm.getVmInstanceUuid()));
+                        whileCompletion.addError(reply.getError());
                     }
                     whileCompletion.done();
                 }
             });
         }).run(new WhileDoneCompletion(completion) {
             @Override
             public void done(ErrorCodeList errorCodeList) {
+                if (!errorCodeList.isEmpty()) {
+                    completion.fail(errorCodeList);
+                    return;
+                }
                 dbf.removeByPrimaryKeys(transformAndRemoveNull(tpmList, TpmVO::getUuid), TpmVO.class);
                 completion.success();
             }
         });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java`
around lines 83 - 106, The current loop always calls
dbf.removeByPrimaryKeys(...) and completion.success() even when some
TpmDeletionMsg replies failed (reply.isSuccess() == false); change the logic to
track per-message failures inside the CloudBusCallBack (e.g. collect ErrorCode
or set a failure flag when reply.isSuccess() is false) and propagate those
errors into the WhileDoneCompletion.done(ErrorCodeList) result, then in the
WhileDoneCompletion.done implementation only call
dbf.removeByPrimaryKeys(transformAndRemoveNull(tpmList, TpmVO::getUuid),
TpmVO.class) and completion.success() if no failures were recorded; otherwise
aggregate the failures and call completion.fail(...) with an appropriate
ErrorCodeList so failed TPMs are not removed from the DB.
compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java (1)

83-106: ⚠️ Potential issue | 🟠 Major

Do not go on to delete VmHostFileVO when the deletion failed.

The failure branch for VmHostFileDeletionMsg only logs, and done() still unconditionally executes removeByPrimaryKeys(..., VmHostFileVO.class) and returns success. If the deletion fails on the secure-boot service side, the MN-side metadata is wiped first, leaving no way to compensate or retry afterwards.

🔧 Suggested fix
         new While<>(voList).each((vo, whileCompletion) -> {
             VmHostFileDeletionMsg msg = new VmHostFileDeletionMsg();
             msg.setUuid(vo.getUuid());
             msg.setForceDelete(action.isActionCode(CascadeConstant.DELETION_FORCE_DELETE_CODE));
             bus.makeLocalServiceId(msg, VmInstanceConstant.SECURE_BOOT_SERVICE_ID);
             bus.send(msg, new CloudBusCallBack(whileCompletion) {
                 @Override
                 public void run(MessageReply reply) {
                     if (reply.isSuccess()) {
                         logger.debug(String.format("deleted VmHostFile[uuid:%s] from VM[uuid:%s]",
                                 vo.getUuid(), vo.getVmInstanceUuid()));
                     } else {
                         logger.warn(String.format("failed to deleted VmHostFile[uuid:%s] from VM[uuid:%s]",
                                 vo.getUuid(), vo.getVmInstanceUuid()));
+                        whileCompletion.addError(reply.getError());
                     }
                     whileCompletion.done();
                 }
             });
         }).run(new WhileDoneCompletion(completion) {
             @Override
             public void done(ErrorCodeList errorCodeList) {
+                if (!errorCodeList.isEmpty()) {
+                    completion.fail(errorCodeList);
+                    return;
+                }
                 dbf.removeByPrimaryKeys(transformAndRemoveNull(voList, VmHostFileVO::getUuid), VmHostFileVO.class);
                 completion.success();
             }
         });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`
around lines 83 - 106, The current done() always removes VmHostFileVO rows even
if some VmHostFileDeletionMsg calls failed; change the WhileDoneCompletion.done
implementation to check the incoming ErrorCodeList parameter and only call
dbf.removeByPrimaryKeys(..., VmHostFileVO.class) and completion.success() when
errorCodeList is empty; if errorCodeList is non-empty, skip any DB removals and
call completion.fail(...) with an aggregated error (or the ErrorCodeList
converted to an ErrorCode), and log the failure. Update the done() in the
WhileDoneCompletion that wraps the While<>(voList).each(...) and reference
VmHostFileDeletionMsg, WhileDoneCompletion.done, removeByPrimaryKeys,
VmHostFileVO, and completion to implement this conditional behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java`:
- Around line 914-915: APIRemoveTpmMsg's forceDelete flag is not propagated
because TpmDeletionMsg.valueOf(msg) only copies tpmUuid/vmInstanceUuid; fix by
ensuring APIRemoveTpmMsg.forceDelete is forwarded: either update
TpmDeletionMsg.valueOf(Message msg) to detect and copy the
APIDeleteMessage/forceDelete field into the returned TpmDeletionMsg, or add an
explicit passthrough before bus.makeTargetServiceIdByResourceUuid(...) to set
inner.setForceDelete(((APIRemoveTpmMsg)msg).isForceDelete()); keep references to
APIRemoveTpmMsg, TpmDeletionMsg.valueOf, forceDelete, inner,
makeTargetServiceIdByResourceUuid, SERVICE_ID and msg.getTpmUuid() when making
the change.

---

Outside diff comments:
In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java`:
- Around line 319-323: The valueOf(TpmDeletionMsg) factory currently only copies
vmInstanceUuid and tpmUuid into RemoveTpmFromVmContext, so the force flag from
the incoming TpmDeletionMsg is lost and DELETION_FORCE_DELETE requests are
treated as normal deletes; modify valueOf(TpmDeletionMsg) to also set
context.force from the message (e.g. context.force = msg.isForceDelete() or
msg.getForceDelete()) so that the later check against context.force (used at the
VM state validation around line 330) correctly skips state validation for force
deletes.

---

Duplicate comments:
In
`@compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java`:
- Around line 83-106: The current loop always calls dbf.removeByPrimaryKeys(...)
and completion.success() even when some TpmDeletionMsg replies failed
(reply.isSuccess() == false); change the logic to track per-message failures
inside the CloudBusCallBack (e.g. collect ErrorCode or set a failure flag when
reply.isSuccess() is false) and propagate those errors into the
WhileDoneCompletion.done(ErrorCodeList) result, then in the
WhileDoneCompletion.done implementation only call
dbf.removeByPrimaryKeys(transformAndRemoveNull(tpmList, TpmVO::getUuid),
TpmVO.class) and completion.success() if no failures were recorded; otherwise
aggregate the failures and call completion.fail(...) with an appropriate
ErrorCodeList so failed TPMs are not removed from the DB.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`:
- Around line 83-106: The current done() always removes VmHostFileVO rows even
if some VmHostFileDeletionMsg calls failed; change the WhileDoneCompletion.done
implementation to check the incoming ErrorCodeList parameter and only call
dbf.removeByPrimaryKeys(..., VmHostFileVO.class) and completion.success() when
errorCodeList is empty; if errorCodeList is non-empty, skip any DB removals and
call completion.fail(...) with an aggregated error (or the ErrorCodeList
converted to an ErrorCode), and log the failure. Update the done() in the
WhileDoneCompletion that wraps the While<>(voList).each(...) and reference
VmHostFileDeletionMsg, WhileDoneCompletion.done, removeByPrimaryKeys,
VmHostFileVO, and completion to implement this conditional behavior.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2374bb5e-0f0a-43f3-921d-a80b9ade3f0e

📥 Commits

Reviewing files that changed from the base of the PR and between 44e8612 and 7e1b863.

⛔ Files ignored due to path filters (1)
  • conf/springConfigXml/VmInstanceManager.xml is excluded by !**/*.xml
📒 Files selected for processing (12)
  • compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java
✅ Files skipped from review due to trivial changes (6)
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
🚧 Files skipped from review as they are similar to previous changes (1)
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java

Comment thread plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java
@MatheMatrix MatheMatrix force-pushed the sync/wenhao.zhang/zsv-ldap branch from 5fd0450 to d086b59 on April 14, 2026 03:02

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java (1)

319-324: ⚠️ Potential issue | 🟠 Major

context.force is not propagated from TpmDeletionMsg.forceDelete.

RemoveTpmFromVmContext.valueOf(TpmDeletionMsg) only copies tpmUuid and vmInstanceUuid, but does not set context.force. So even when TpmDeletionMsg.forceDelete=true, context.force stays at its default of false, and the "check-vm-status" flow at line 330 is not skipped.

🔧 Suggested fix
 static RemoveTpmFromVmContext valueOf(TpmDeletionMsg msg) {
     RemoveTpmFromVmContext context = new RemoveTpmFromVmContext();
     context.vmInstanceUuid = msg.getVmInstanceUuid();
     context.tpmUuid = msg.getTpmUuid();
+    context.force = msg.isForceDelete();
     return context;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java` around lines
319 - 324, The valueOf factory method
RemoveTpmFromVmContext.valueOf(TpmDeletionMsg) fails to copy the force flag, so
context.force remains false even when TpmDeletionMsg.forceDelete is true; update
RemoveTpmFromVmContext.valueOf to set context.force = msg.isForceDelete() (or
msg.getForceDelete()) when constructing the context so downstream logic (e.g.,
the "check-vm-status" flow) sees the intended force value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java`:
- Around line 107-108: The error message in TpmCascadeExtension's
completion.fail call contains a typo ("failed to deleted Tpm"); update the
message to "failed to delete Tpm" in the call where
completion.fail(operr("failed to deleted Tpm[uuid:%s]",
tpmList.get(0).getVmInstanceUuid()).withCause(errorCodeList)); — keep the
surrounding logic (the operr invocation and withCause(errorCodeList)) unchanged.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java`:
- Around line 109-110: In VmHostBackupFileCascadeExtension update the error
message passed to completion.fail: change the string "failed to deleted
VmHostBackupFile from resource[uuid:%s]" to "failed to delete VmHostBackupFile
from resource[uuid:%s]" where operr(...) is called (the call using
voList.get(0).getResourceUuid() and .withCause(errorCodeList)); keep the same
formatting and arguments, only correct the typo "deleted" -> "delete".

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`:
- Around line 106-107: In VmHostFileCascadeExtension, update the error message
passed to completion.fail—change the misspelled string "failed to deleted
VmHostFile from VM[uuid:%s]" used in the
completion.fail(operr(...).withCause(errorCodeList)) call (which references
voList.get(0).getVmInstanceUuid()) to the correct phrasing "failed to delete
VmHostFile from VM[uuid:%s]".

---

Outside diff comments:
In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java`:
- Around line 319-324: The valueOf factory method
RemoveTpmFromVmContext.valueOf(TpmDeletionMsg) fails to copy the force flag, so
context.force remains false even when TpmDeletionMsg.forceDelete is true; update
RemoveTpmFromVmContext.valueOf to set context.force = msg.isForceDelete() (or
msg.getForceDelete()) when constructing the context so downstream logic (e.g.,
the "check-vm-status" flow) sees the intended force value.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a18eec50-ef1c-42c4-941b-83a58f38fd86

📥 Commits

Reviewing files that changed from the base of the PR and between 7e1b863 and 5fd0450.

⛔ Files ignored due to path filters (1)
  • conf/springConfigXml/VmInstanceManager.xml is excluded by !**/*.xml
📒 Files selected for processing (12)
  • compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java
✅ Files skipped from review due to trivial changes (6)
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
🚧 Files skipped from review as they are similar to previous changes (1)
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java

Comment thread compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java Outdated
@MatheMatrix MatheMatrix force-pushed the sync/wenhao.zhang/zsv-ldap branch from d086b59 to fffbed5 Compare April 14, 2026 05:26

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java (1)

320-324: ⚠️ Potential issue | 🟠 Major

forceDelete is dropped here.

Lines 320-324 never copy msg.isForceDelete() into context.force, so the later skipIf(data -> context.force) never sees the force-delete semantics passed down from the API / cascade. As a result, an Enforcing delete still goes through the normal status check, and a VM in the middle of deletion may be wrongly blocked.

🔧 Suggested fix
         static RemoveTpmFromVmContext valueOf(TpmDeletionMsg msg) {
             RemoveTpmFromVmContext context = new RemoveTpmFromVmContext();
             context.vmInstanceUuid = msg.getVmInstanceUuid();
             context.tpmUuid = msg.getTpmUuid();
+            context.force = msg.isForceDelete();
             return context;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java` around lines
320 - 324, The valueOf factory for RemoveTpmFromVmContext is dropping the
forceDelete flag, so populate context.force from the incoming TpmDeletionMsg;
inside static RemoveTpmFromVmContext valueOf(TpmDeletionMsg msg) set
context.force = msg.isForceDelete() (or equivalent getter) so that downstream
skipIf(data -> context.force) and cascade/API-driven force-delete semantics are
respected; update any related setters/getters on RemoveTpmFromVmContext if
necessary to expose the force boolean.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java`:
- Around line 84-89: The cascade VM path must always use force delete: inside
TpmCascadeExtension where the code iterates over tpmList (the
While<>(tpmList).each block) and constructs TpmDeletionMsg, always set
msg.setForceDelete(true) instead of conditionalizing on
action.isActionCode(CascadeConstant.DELETION_FORCE_DELETE_CODE); ensure the
TpmDeletionMsg for each tpm (tpm.getUuid(), tpm.getVmInstanceUuid()) is sent
with forceDelete enabled so VM-level cascade deletion does not get blocked by
KvmTpmManager state checks.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java`:
- Around line 55-73: The voFromAction in VmHostBackupFileCascadeExtension
currently only handles VmInstanceVO and explicitly skips VolumeSnapshotGroupVO,
leaving VmHostBackupFileVO rows whose resourceUuid holds snapshot-group UUIDs
orphaned; update voFromAction to also handle VolumeSnapshotGroupVO (or when
action.getParentIssuer() equals VolumeSnapshotGroupVO.class.getSimpleName() or
NAME where appropriate) by extracting the parent context UUIDs (like you do for
VmInstanceInventory) and querying
Q.New(VmHostBackupFileVO.class).in(VmHostBackupFileVO_.resourceUuid,
<snapshotGroupUuidList>).list(); also remove the explicit skip of
VolumeSnapshotGroupVO and ensure getEdgeNames covers the snapshot-group edge so
cascade deletes include those backup files.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`:
- Around line 77-113: The current handleDeletion sends deletion messages for
every VmHostFileVO, which removes the latest host file for deleted VMs; change
handleDeletion in VmHostFileCascadeExtension to preserve the latest VmHostFileVO
per VM: group voList by getVmInstanceUuid, for each group keep the record with
the newest creation timestamp (or the designated “latest” flag) and only send
VmHostFileDeletionMsg for the non-latest entries (use VmHostFileVO.getUuid and
VmHostFileDeletionMsg as before); alternatively, if you prefer centralized
reclaim logic, skip deleting any records here and let VmHostFileTracker perform
reclamation for deleted VMs—ensure the code references handleDeletion,
VmHostFileVO, VmHostFileDeletionMsg and VmHostFileTracker so reviewers can find
the change.

---

Outside diff comments:
In `@plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java`:
- Around line 320-324: The valueOf factory for RemoveTpmFromVmContext is
dropping the forceDelete flag, so populate context.force from the incoming
TpmDeletionMsg; inside static RemoveTpmFromVmContext valueOf(TpmDeletionMsg msg)
set context.force = msg.isForceDelete() (or equivalent getter) so that
downstream skipIf(data -> context.force) and cascade/API-driven force-delete
semantics are respected; update any related setters/getters on
RemoveTpmFromVmContext if necessary to expose the force boolean.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 81fa4dc6-80d9-4945-975d-c37fc3229977

📥 Commits

Reviewing files that changed from the base of the PR and between 5fd0450 and fffbed5.

⛔ Files ignored due to path filters (1)
  • conf/springConfigXml/VmInstanceManager.xml is excluded by !**/*.xml
📒 Files selected for processing (12)
  • compute/src/main/java/org/zstack/compute/vm/devices/TpmCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java
  • compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootManager.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmManager.java
✅ Files skipped from review due to trivial changes (4)
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionReply.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostFileDeletionMsg.java
  • header/src/main/java/org/zstack/header/vm/additions/VmHostBackupFileDeletionMsg.java
🚧 Files skipped from review as they are similar to previous changes (3)
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionReply.java
  • plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java
  • header/src/main/java/org/zstack/header/tpm/message/TpmDeletionMsg.java

Comment on lines +55 to +73
    private List<VmHostBackupFileVO> voFromAction(CascadeAction action) {
        if (VmInstanceVO.class.getSimpleName().equals(action.getParentIssuer())) {
            List<VmInstanceInventory> secretResourcePoolInventories = action.getParentIssuerContext();
            List<String> vmUuidList = transformAndRemoveNull(secretResourcePoolInventories, VmInstanceInventory::getUuid);

            if (vmUuidList.isEmpty()) {
                return null;
            }
            return Q.New(VmHostBackupFileVO.class)
                    .in(VmHostBackupFileVO_.resourceUuid, vmUuidList)
                    .list();
        }
        // Note: VolumeSnapshotGroupVO has no cascade extension!
        // skip: else if (VolumeSnapshotGroupVO.class.getSimpleName().equals(action.getParentIssuer()))
        else if (NAME.equals(action.getParentIssuer())) {
            return action.getParentIssuerContext();
        }

        return null;


⚠️ Potential issue | 🟠 Major

Backup files keyed to snapshot groups are never cleaned up by this cascade.

resourceUuid here carries both VM UUIDs and snapshot-group UUIDs, but voFromAction() / getEdgeNames() only handle VmInstanceVO, and lines 67-69 even explicitly skip VolumeSnapshotGroupVO. So when a snapshot group is deleted, the corresponding VmHostBackupFileVO rows are left behind in the database.

Also applies to: 123-130

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java`
around lines 55 - 73, The voFromAction in VmHostBackupFileCascadeExtension
currently only handles VmInstanceVO and explicitly skips VolumeSnapshotGroupVO,
leaving VmHostBackupFileVO rows whose resourceUuid holds snapshot-group UUIDs
orphaned; update voFromAction to also handle VolumeSnapshotGroupVO (or when
action.getParentIssuer() equals VolumeSnapshotGroupVO.class.getSimpleName() or
NAME where appropriate) by extracting the parent context UUIDs (like you do for
VmInstanceInventory) and querying
Q.New(VmHostBackupFileVO.class).in(VmHostBackupFileVO_.resourceUuid,
<snapshotGroupUuidList>).list(); also remove the explicit skip of
VolumeSnapshotGroupVO and ensure getEdgeNames covers the snapshot-group edge so
cascade deletes include those backup files.

Comment on lines +77 to +113
    private void handleDeletion(CascadeAction action, Completion completion) {
        final List<VmHostFileVO> voList = voFromAction(action);
        if (CollectionUtils.isEmpty(voList)) {
            completion.success();
            return;
        }

        new While<>(voList).each((vo, whileCompletion) -> {
            VmHostFileDeletionMsg msg = new VmHostFileDeletionMsg();
            msg.setUuid(vo.getUuid());
            msg.setForceDelete(action.isActionCode(CascadeConstant.DELETION_FORCE_DELETE_CODE));
            bus.makeLocalServiceId(msg, VmInstanceConstant.SECURE_BOOT_SERVICE_ID);
            bus.send(msg, new CloudBusCallBack(whileCompletion) {
                @Override
                public void run(MessageReply reply) {
                    if (reply.isSuccess()) {
                        logger.debug(String.format("deleted VmHostFile[uuid:%s] from VM[uuid:%s]",
                                vo.getUuid(), vo.getVmInstanceUuid()));
                    } else {
                        whileCompletion.addError(reply.getError());
                        whileCompletion.allDone();
                    }
                    whileCompletion.done();
                }
            });
        }).run(new WhileDoneCompletion(completion) {
            @Override
            public void done(ErrorCodeList errorCodeList) {
                if (!errorCodeList.isEmpty()) {
                    completion.fail(operr("failed to delete VmHostFile from VM[uuid:%s]", voList.get(0).getVmInstanceUuid())
                            .withCause(errorCodeList));
                    return;
                }
                completion.success();
            }
        });
    }

@coderabbitai coderabbitai bot Apr 14, 2026


⚠️ Potential issue | 🟠 Major

This also deletes the latest host file of the deleted VM.

Lines 77-113 currently send deletion messages for every VmHostFileVO under the VM, leaving no retention window for the latest NvRam/TpmState copy. That directly breaks the recovery/troubleshooting policy of keeping the latest copy of a deleted VM for 90 days. Consider cleaning up only the non-latest records here, or letting VmHostFileTracker keep handling this reclamation.

Based on learnings, "TPM and NvRam host files only require the single latest copy to be retained. For deleted VMs, non-latest VmHostFileVO records have no practical value and can be deleted immediately upon a successful KVM delete command, regardless of their age (no 7-day retention grace period is needed for non-latest files). Only the latest file needs to be retained for up to 90 days."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`
around lines 77 - 113, The current handleDeletion sends deletion messages for
every VmHostFileVO, which removes the latest host file for deleted VMs; change
handleDeletion in VmHostFileCascadeExtension to preserve the latest VmHostFileVO
per VM: group voList by getVmInstanceUuid, for each group keep the record with
the newest creation timestamp (or the designated “latest” flag) and only send
VmHostFileDeletionMsg for the non-latest entries (use VmHostFileVO.getUuid and
VmHostFileDeletionMsg as before); alternatively, if you prefer centralized
reclaim logic, skip deleting any records here and let VmHostFileTracker perform
reclamation for deleted VMs—ensure the code references handleDeletion,
VmHostFileVO, VmHostFileDeletionMsg and VmHostFileTracker so reviewers can find
the change.

Collaborator Author


Comment from wenhao.zhang:

Let me ask you then: VmHostFileVO's vmInstanceUuid has a foreign key, so where would you store the record?



🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check VmHostFileVO for FK definition or schema annotations
rg -n "vmInstanceUuid|ForeignKey|ManyToOne|JoinColumn|foreignKey" --type java -C2 -g "*VmHostFileVO*"

# Check SQL schema for VmHostFileVO table definition
rg -n -i "VmHostFile" --type sql -C5

# Check VmHostFileTracker for 90-day retention logic context
rg -n "90|latestFile|latest|retention|deleteAfter|NvRam|TpmState" --type java -g "*VmHostFileTracker*" -C3

Length of output: 14873


@ZStack-Robot, thanks to wenhao.zhang for the correction; my original review comment was wrong.

Confirmed from the code:

@ForeignKey(parentEntityClass = VmInstanceEO.class, onDeleteAction = ForeignKey.ReferenceOption.CASCADE)
private String vmInstanceUuid;

VmHostFileVO.vmInstanceUuidVmInstanceEOON DELETE CASCADE 的外键约束。VmHostFileVO 记录的生命周期与 VmInstanceEO 强绑定,不可能在 VM 的 EO 记录不存在的情况下单独保留。

My earlier suggestion to "keep the latest record" was based on a misunderstanding of VmHostFileTracker's 90-day retention policy. In fact:

  • VmHostFileTracker's comment explicitly states the 90-day retention applies while VmInstanceEO is in the (deleted) state, i.e. the VM is soft-deleted but its EO record still exists
  • This does not conflict with the cascade-deletion stage running within the same VM soft-delete flow
  • There is simply no "where to store it" option: the FK CASCADE dictates that VmHostFileVO must disappear together with the EO

The current handleDeletion logic of deleting all VmHostFileVO records is correct; my original comment was wrong, please disregard it.


✏️ Learnings added
Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 3737
File: compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java:77-113
Timestamp: 2026-04-14T05:45:02.800Z
Learning: In ZStack's VmHostFileCascadeExtension (compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java), deleting ALL VmHostFileVO records for a VM during cascade deletion is correct behavior. VmHostFileVO.vmInstanceUuid has an ON DELETE CASCADE foreign key to VmInstanceEO, so VmHostFileVO records cannot exist independently of the VM's EO record. The 90-day "latest file" retention policy in VmHostFileTracker applies only while VmInstanceEO is in the soft-deleted state — there is no way to preserve a VmHostFileVO record beyond the VM's EO lifecycle. Suggesting to skip the latest record in the cascade extension is incorrect.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: MatheMatrix
Repo: MatheMatrix/zstack PR: 2496
File: storage/src/main/java/org/zstack/storage/snapshot/VolumeSnapshotCascadeExtension.java:71-73
Timestamp: 2025-08-24T07:56:17.047Z
Learning: In VolumeSnapshotCascadeExtension, the createActionForChildResource method always wraps the context in VolumeSnapshotDeletionStructs when creating child actions, so no backward compatibility with List<VolumeSnapshotInventory> is needed in handleDeletionCleanup.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 3417
File: plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootExtensions.java:217-221
Timestamp: 2026-03-06T10:24:57.110Z
Learning: In ZStack's KvmSecureBootExtensions.syncVmHostFilesFromHost method (plugin/kvm/src/main/java/org/zstack/kvm/efi/KvmSecureBootExtensions.java), when reading individual host files (NvRam/TpmState) fails (to.getError() != null), it is intentional to log a warning and continue rather than failing the whole operation. This is a resilience design choice — individual host file sync failures should be tolerated and must not cause the entire completion to fail.

Learnt from: zstack-robot-1
Repo: MatheMatrix/zstack PR: 3665
File: plugin/kvm/src/main/java/org/zstack/kvm/vmfiles/VmHostFileTracker.java:341-347
Timestamp: 2026-04-02T07:45:31.705Z
Learning: In ZStack's VmHostFileTracker (plugin/kvm/src/main/java/org/zstack/kvm/vmfiles/VmHostFileTracker.java), TPM and NvRam host files only require the single latest copy to be retained. For deleted VMs, non-latest VmHostFileVO records have no practical value and can be deleted immediately upon a successful KVM delete command, regardless of their age (no 7-day retention grace period is needed for non-latest files). Only the latest file needs to be retained for up to 90 days.

Learnt from: zstack-robot-1
Repo: MatheMatrix/zstack PR: 3307
File: conf/db/zsv/V5.0.0__schema.sql:38-76
Timestamp: 2026-02-10T08:53:52.891Z
Learning: In the ZStack project, for joined-table inheritance patterns (e.g. KVMHostVO→HostEO, ApplianceVmVO→VmInstanceEO), the child table's uuid column is not given a foreign-key constraint to the parent table. ZStack relies on Hibernate/JPA to manage the inheritance relationship and cascade operations at the application layer; a foreign-key constraint would only take effect on direct database-level deletes, so it is not used.

Learnt from: MatheMatrix
Repo: MatheMatrix/zstack PR: 2418
File: compute/src/main/java/org/zstack/compute/vm/VmInstanceBase.java:3873-3885
Timestamp: 2025-08-12T05:52:18.323Z
Learning: In the setNoIpAllocationStaticIp() flow of compute/src/main/java/org/zstack/compute/vm/VmInstanceBase.java, static-IP tag deletion is handled uniformly in the change-ip-in-database step by looping over voRemoveSet: for each collected old IPv4/IPv6 UsedIpVO, ipOperator.deleteStaticIpByVmUuidAndL3Uuid(vmUuid, l3Uuid, IPv6NetworkUtils.ipv6AddessToTagValue(ip)) is called to clean up the corresponding static-ip tag. There is therefore no need to delete the IPv4/IPv6 tags separately in the branches.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 2566
File: conf/db/upgrade/V5.4.0__schema.sql:33-34
Timestamp: 2025-09-05T10:14:54.816Z
Learning: In ZStack's HostNetworkInterfaceVO table, dirty data where the pciDeviceAddress field ends with a trailing newline (\n) only appears in nested environments; deleting those records is safe and has no impact. This typically happens when a host reconnects.

Learnt from: MatheMatrix
Repo: MatheMatrix/zstack PR: 2823
File: plugin/kvm/src/main/java/org/zstack/kvm/KVMHost.java:786-831
Timestamp: 2025-10-28T02:29:38.803Z
Learning: The hostname validity of APIUpdateHostnameMsg is already validated at the API layer (HostApiInterceptor/VmHostnameUtils); KVMHost.handle(UpdateHostnameMsg) should not re-validate it.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 2418
File: compute/src/main/java/org/zstack/compute/vm/VmInstanceBase.java:3856-3865
Timestamp: 2025-08-12T05:39:14.846Z
Learning: In the setNoIpAllocationStaticIp flow of compute/src/main/java/org/zstack/compute/vm/VmInstanceBase.java, a VmNicVO loaded from the database never returns null from getUsedIps(), even when empty (it returns an empty collection). No Optional/empty-list normalization or null-pointer guard is therefore needed on getUsedIps(); if the matching NIC is not found, the orElse(new VmNicVO()) convention lets later logic surface the problem via an NPE, consistent with the project's existing conventions.

Learnt from: MatheMatrix
Repo: MatheMatrix/zstack PR: 2709
File: compute/src/main/java/org/zstack/compute/vm/VmInstanceBase.java:5142-5142
Timestamp: 2025-10-09T02:23:30.305Z
Learning: In the getAttachableL3Network() logic of compute/src/main/java/org/zstack/compute/vm/VmInstanceBase.java, the project constraint is: a single VmNic may have two UsedIpVO rows at the same time, but they only represent the IPv4/IPv6 dual stack of the same L3, and UsedIpVO.l3NetworkUuid stays consistent with VmNicVO.l3NetworkUuid. Using the VmNicVO_.l3NetworkUuid list to decide whether an IPAM-disabled network is already attached is therefore correct; no extra aggregation via UsedIpVO or VmNicHelper.getL3Uuids() is needed.

Learnt from: zstack-robot-1
Repo: MatheMatrix/zstack PR: 2798
File: plugin/kvm/src/main/java/org/zstack/kvm/KVMHost.java:6888-6942
Timestamp: 2025-10-24T06:45:56.228Z
Learning: The KVM data plane (agent) validates the scheme of the upload URL on the /host/file/download path; the management plane's KVMHost.uploadFileToHost currently only checks for a non-empty scheme.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 2504
File: storage/src/main/java/org/zstack/storage/snapshot/VolumeSnapshotTreeBase.java:775-819
Timestamp: 2025-08-25T03:52:37.301Z
Learning: In ZStack's VolumeSnapshotTreeBase class, a null check is required when fetching a VolumeVO via dbf.findByUuid(), because the method may return null when no matching record exists, and using the result directly could cause a NullPointerException.

Learnt from: zstack-robot-2
Repo: MatheMatrix/zstack PR: 2360
File: compute/src/main/java/org/zstack/compute/vm/VmNicParamBuilder.java:27-28
Timestamp: 2025-08-04T03:35:51.225Z
Learning: In VmNicParamBuilder.buildByVmUuid, the caller guarantees that the instance for the given VM UUID exists, so no null check is needed after the Q.New(VmInstanceVO.class).eq(VmInstanceVO_.uuid, vmInstanceUuid).find() query.

Learnt from: zstack-robot-2
Repo: MatheMatrix/zstack PR: 2496
File: storage/src/main/java/org/zstack/storage/snapshot/VolumeSnapshotCascadeExtension.java:124-129
Timestamp: 2025-08-24T06:36:54.519Z
Learning: In cascade deletion scenarios for VolumeSnapshotCascadeExtension, the default values should be DeleteVolumeSnapshotDirection.Pull and DeleteVolumeSnapshotScope.Chain to maintain backward compatibility with existing cascade deletion logic, rather than using Auto values.

Learnt from: zstack-robot-1
Repo: MatheMatrix/zstack PR: 2360
File: network/src/main/java/org/zstack/network/l3/L3BasicNetwork.java:449-490
Timestamp: 2025-08-04T04:48:19.103Z
Learning: In cherry-pick operations, the ZStack project strictly follows a no-extra-changes policy even when performance-optimization opportunities are found (such as memory optimization for batch-saving IP addresses), prioritizing the completeness and consistency of the cherry-pick.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 2489
File: storage/src/main/java/org/zstack/storage/snapshot/VolumeTree.java:471-545
Timestamp: 2025-08-22T05:36:40.467Z
Learning: In the ZStack project, for code tied to core logic, the team prefers not to refactor for now even when the code's complexity is high, prioritizing the stability of core functionality.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 2325
File: compute/src/main/java/org/zstack/compute/vm/VmMigrateCallExtensionFlow.java:29-29
Timestamp: 2025-07-24T05:53:10.246Z
Learning: During refactoring in the ZStack project, the team takes a pragmatic priority approach: code refactoring that affects runtime behavior comes first, while non-functional issues such as stale references in comments can be deferred, since they do not affect how the system actually runs.

Learnt from: zstack-robot-1
Repo: MatheMatrix/zstack PR: 2360
File: compute/src/main/java/org/zstack/compute/vm/StaticIpOperator.java:364-501
Timestamp: 2025-08-04T03:19:07.454Z
Learning: In ZStack project, developers may prefer to keep related validation logic together in a single method (like validateStaticIpTagsInApiMessage with 130+ lines) when the architecture is clear and there's no code reuse, rather than splitting into smaller methods based purely on single responsibility principle.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 2419
File: plugin/sdnController/src/main/java/org/zstack/sdnController/SdnControllerBase.java:986-1067
Timestamp: 2025-08-13T02:31:40.023Z
Learning: In the ZStack project, the team prefers to keep a unified business-flow method unsplit; even long methods are kept as a single method to preserve the integrity of the business logic.

Learnt from: zstack-robot-2
Repo: MatheMatrix/zstack PR: 2496
File: plugin/sharedMountPointPrimaryStorage/src/main/java/org/zstack/storage/primary/smp/KvmBackend.java:2545-2566
Timestamp: 2025-08-24T07:35:42.793Z
Learning: In the ZStack codebase, when a response class contains numeric fields (such as size or actualSize), primitive types (long) are preferred over wrapper types (Long) to avoid NPE risk and unnecessary boxing/unboxing. If the agent side might leave the field unset, the agent should ensure a default value is set rather than having the consumer do null checks.

Learnt from: ZStack-Robot
Repo: MatheMatrix/zstack PR: 3716
File: header/src/main/java/org/zstack/header/keyprovider/EncryptedResourceKeyManager.java:32-35
Timestamp: 2026-04-12T05:42:20.685Z
Learning: In ZStack's KvmTpmExtensions.preReleaseVmResource (plugin/kvm/src/main/java/org/zstack/kvm/tpm/KvmTpmExtensions.java), as of PR `#3716`, the `ResourceKeyResult.refExistedBeforeCreate` field has been removed from `EncryptedResourceKeyManager.ResourceKeyResult`. Rollback behavior is now determined solely by `ResourceKeyResult.isCreatedNewKey()`: when true, the implementation deletes the materialized key-tool secret and removes the `EncryptedResourceKeyRef` row; when false, rollback is a no-op. The previously hardcoded `result.setRefExistedBeforeCreate(true)` in `preReleaseVmResource` has also been removed as part of this redesign.

@MatheMatrix
Owner

Comment from ye.zou:

Code Review

| # | Severity | Category | File | Issue | Suggested fix |
| --- | --- | --- | --- | --- | --- |
| 1 | 🔴 Critical | Correctness | KvmTpmManager.java:320-325 | RemoveTpmFromVmContext.valueOf(TpmDeletionMsg) does not read msg.isForceDelete() to set context.force. TpmCascadeExtension correctly calls msg.setForceDelete(true) at line 89 (the force-delete path), and the APIRemoveTpmMsg handler sets inner.setForceDelete() at line 916, but valueOf() ignores the field entirely. Result: cascade force-delete still checks VM status (skipIf(data -> context.force) at line 331), so cascade TPM deletion fails when the VM is not Stopped. | Add context.force = msg.isForceDelete(); in valueOf() |
| 2 | 🟡 Major | Regression risk | KvmTpmExtensions.java:490-531 | vmJustBeforeDeleteFromDb() still sends TpmDeletionMsg, and the new TpmCascadeExtension also sends TpmDeletionMsg during VM cascade delete. Both paths fire, causing a double delete: after the first delete succeeds, the second cannot find the record. The current removeTpmFromVm may not hard-fail, but this design is redundant and fragile. | Confirm and document: with the cascade extension in place, should the TPM deletion logic in vmJustBeforeDeleteFromDb() be removed? If kept, add a comment explaining why two paths are needed |
| 3 | 🟡 Major | Correctness | VmHostFileCascadeExtension.java:113, VmHostBackupFileCascadeExtension.java:120, TpmCascadeExtension.java:109 | dbf.eoCleanup(VmHostFileVO.class) / dbf.eoCleanup(VmHostBackupFileVO.class) / dbf.eoCleanup(TpmVO.class): none of these VO classes has a corresponding EO class (all extend ResourceVO directly, with no soft-delete mechanism). eoCleanup logs a warning and does nothing; it will not crash, but it suggests the author misunderstood what EO cleanup means. | If these VOs have no EO (no soft delete), call completion.success() directly in handleDeletionCleanup and drop the eoCleanup calls |
| 4 | 🟠 Minor | Code quality | TpmCascadeExtension.java:57, VmHostFileCascadeExtension.java:57, VmHostBackupFileCascadeExtension.java:57 | The variable name secretResourcePoolInventories is an obvious copy-paste mistake; the actual type is List<VmInstanceInventory> | Rename to vmInventories or vmInstanceInventories |
| 5 | 🟠 Minor | Code quality | VmHostFileCascadeExtension.java:144, VmHostBackupFileCascadeExtension.java:149 | Missing newline at end of file | Add a newline |

Conclusion: CONDITIONAL ⚠️

Regression risk: Medium

  • Issue #1 is a functional bug: on the cascade force-delete path, the TPM cannot be deleted (when the VM is not Stopped)
  • Issue #2's double-delete path does not crash today, but the architecture should be made explicit

Suggested fixes:

  1. Must fix: add context.force = msg.isForceDelete() to RemoveTpmFromVmContext.valueOf()
  2. Recommended: remove the three no-op eoCleanup calls, or add comments stating the intent
  3. Recommended: clarify the responsibility boundary between vmJustBeforeDeleteFromDb and the cascade extension, with a comment explaining the intent of the dual path

@zstack-robot-1
Collaborator

Comment from wenhao.zhang:

It is now confirmed that we need to:

  • Remove the TpmVO deletion portion from KvmTpmExtensions.vmJustBeforeDeleteFromDb
  • Drop the eoCleanup calls in the three classes and call completion.success() directly
  • Rename secretResourcePoolInventories to vmInventories

Resolves: ZSV-11799
Related: ZSV-11310

Change-Id: I69656261626e6c73636874707076726378706d68
@MatheMatrix MatheMatrix force-pushed the sync/wenhao.zhang/zsv-ldap branch from fffbed5 to f7821b4 Compare April 14, 2026 08:33