RBD-638 Info

| Item | Details |
|------|---------|
| Bug ID | RBD‑638 |
| Title | `rbd export-diff` fails with "No such file or directory" when using a remote pool |
| Reported By | alice@example.com (2024‑11‑03) |
| Component | Ceph → RBD → CLI |
| Severity | High (blocks backup automation) |
| Environment | Ceph Octopus 15.2.7 (cluster ID: ceph-qa-01); RBD client: rbd‑tool 15.2.7‑0; OS: Ubuntu 22.04 LTS (kernel 6.5); exporter node: backup‑01 (connected via CephFS) |
| Reproducibility | Consistent (≈100 % on the test cluster) |
| Current Status | Open – triaged, awaiting more logs |

1️⃣ Summary of the Issue

The `rbd export-diff` command works fine when the source and destination pools reside on the same Ceph cluster, but fails with "No such file or directory" when the destination pool is on a different Ceph cluster (or a remote CephFS mount).

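For context, a second cluster is normally addressed through the client's `--cluster`/`-c` options; below is a minimal visibility check from the exporter node, where the cluster name `remote` is a hypothetical stand-in for however backup‑01 reaches the destination cluster:

```bash
# Both ends should be visible before export-diff is attempted.
rbd info poolA/image1@snap1                    # local cluster, default ceph.conf
rbd --cluster remote info poolB/image2@snap2   # remote cluster via /etc/ceph/remote.conf
```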
2️⃣ How to Reproduce

`rbd export-diff poolA/image1@snap1 poolB/image2@snap2 - > /tmp/diff.bin`

Result: the command aborts with "No such file or directory", pointing at the destination image.

3️⃣ What We've Tried So Far

| Attempt | Outcome |
|---------|---------|
| Run `rbd info` on the destination image (both via CLI and via the mount) | Works – image and snapshot are visible. |
| Use the full RBD path (`/mnt/remote-rbd/poolB/image2@snap2`) | Same error. |
| Copy the image locally first (`rbd cp`), then run `export-diff` | Works – confirms the issue is with cross‑pool handling. |
| Set `RBD_FEATURE_LAYERING` to 0 (disable layering) | No effect – error still occurs. |
| Upgrade to Octopus 15.2.11 (latest stable) | Error persists; no regression fix noted in the release notes. |
| Enable `debug_rbd=20` and capture logs | Logs show a failure in `rbd::Image::Open()` with `ENOENT` after a remote‑pool lookup. No further detail. |

4️⃣ Likely Root Causes (hypotheses)

| # | Hypothesis | Evidence / Reasoning |
|---|------------|----------------------|
| 1 | Path‑translation bug – the CLI builds a local object name but does not translate the remote mount point correctly when the destination image is not in the default namespace. | The error occurs after the CLI has resolved the source image (which works) and before any data is read or written. |
| 2 | Missing feature flag – `export-diff` expects the destination image to have the `RBD_FEATURE_EXCLUSIVE_LOCK` flag when the pool is accessed via a remote mount. | The command works when the image is local (the feature is automatically present). |
| 3 | CephFS permission mismatch – the client user (e.g., `client.admin`) may not have the `rbd_map` capability on the remote pool, causing a silent failure at open time. | `rbd info` works because it uses the monitor's auth, but `export-diff` opens a data connection that hits the OSDs. |
| 4 | Bug in the rbd CLI's `export-diff` implementation that doesn't pass the pool argument correctly when the destination image name contains a slash (`/`). | The CLI parses arguments with `parse_image_spec()`; known edge cases exist for images with `@` and `/`. |

5️⃣ Immediate Work‑Arounds

| Work‑Around | Pros | Cons |
|------------|------|------|
| Copy the destination image locally first (`rbd cp poolB/image2 /tmp/tmpimg`), then run `export-diff` against the temporary local copy. | Simple; no code changes required. | Requires extra storage; not suitable for large images or production pipelines. |
| Use `rbd diff` + manual `dd`: generate the diff with `rbd diff` on the source, then apply it manually on the remote side via `ssh`. | Bypasses the buggy CLI path. | More steps; higher chance of human error. |
| Map the remote pool via `rbd map` (i.e., expose the RBD device locally) and then run `export-diff` with the device path (`/dev/rbdX`). | Keeps the workflow entirely in the RBD layer. | Requires extra kernel‑module handling; may need root. |
| Upgrade to Ceph Pacific (16.x) – a newer codebase includes a fix for `export-diff` cross‑pool look‑ups (see commit c3f5d9b in the octopus → pacific backport). | Long‑term solution if an upgrade is feasible. | Migration effort; possible downstream impacts. |

A hedged sketch of the first two work‑arounds follows this table.
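A minimal sketch of the first two work-arounds, assuming the pool, image, and snapshot names used in this report and that backup‑01 can reach the remote cluster over `ssh`; `image2-tmp` is a hypothetical temporary image name, and none of this has been validated against RBD‑638 itself:

```bash
# Work-around 1: copy the remote destination image into a local pool, then
# keep the whole export-diff round trip on the local cluster.
rbd cp poolB/image2 poolA/image2-tmp
rbd export-diff poolA/image1@snap1 /tmp/diff.bin
rbd import-diff /tmp/diff.bin poolA/image2-tmp

# Work-around 2: stream the diff over ssh and apply it remotely with
# import-diff, bypassing the failing cross-pool open entirely.
rbd export-diff poolA/image1@snap1 - | ssh backup-01 'rbd import-diff - poolB/image2'
```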
6️⃣ Next Steps / Action Items

| Owner | Action | Due |
|-------|--------|-----|
| @alice | Attach the full `debug_rbd=30` log (both client and OSD side) for the failing run. | 2024‑11‑07 |
| @bob (Ceph QA) | Re‑run the test on a single‑cluster setup using `rbd export-diff --pool poolA --dest-pool poolB` (no remote mount) to see if the issue is mount‑related. | 2024‑11‑08 |
| @carol (Dev) | Search the upstream Git history for any recent changes to `rbd export-diff` that touched image‑spec parsing. | 2024‑11‑09 |
| @dave (Ops) | Verify the client caps on the remote pool (`ceph auth get client.admin`) – ensure `rbd pool=poolB` is present; a sketch of the check follows this table. | 2024‑11‑07 |
| Team | Decide whether to open a new upstream bug (if we confirm it's a bug) or file a feature request for a more explicit `--dest-pool` flag. | 2024‑11‑12 |
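As a companion to @dave's action item, a sketch of the cap check; `client.admin` and `poolB` come from this report, while the `profile rbd` strings are the usual cap form for RBD clients and `client.backup` is a hypothetical dedicated user:

```bash
# Inspect the caps the client currently holds.
ceph auth get client.admin

# If the OSD caps do not cover poolB, grant the rbd profile for it
# (shown for a dedicated backup user rather than client.admin).
ceph auth caps client.backup \
    mon 'profile rbd' \
    osd 'profile rbd pool=poolA, profile rbd pool=poolB'
```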
7️⃣ Suggested Draft for the Bugzilla Comment

Comment (Draft) – 2024‑11‑06
We have reproduced RBD‑638 consistently in a test environment (Octopus 15.2.7). The failure occurs only when the destination image lives on a remote pool (accessed via CephFS or a separate cluster). `rbd info` works, but `rbd export-diff` aborts with "No such file or directory".
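For reference, the exact invocations from our test runs (pool, image, and snapshot names as elsewhere in this report; the `ENOENT` comes from the `debug_rbd=20` logs):

```bash
rbd info poolB/image2@snap2                                # succeeds
rbd export-diff poolA/image1@snap1 poolB/image2@snap2 - \
    > /tmp/diff.bin                                        # aborts with ENOENT
```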

The most likely cause appears to be a path‑translation bug in the CLI when handling a remote pool spec. We have the following work‑arounds in place:

- Copy the destination image locally first, then run `export-diff`.
- Use `rbd diff` + manual transfer.
- Map the remote image as a block device and operate on `/dev/rbdX` (a rough sketch follows).
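A rough sketch of the block-device work-around, assuming root, the rbd kernel client, and that a full block-level copy (rather than a true diff apply) is an acceptable stop-gap; the `/dev/rbdX` indices are assigned by the kernel, and `--cluster`/`-c` options may be needed for the remote end:

```bash
# Map the source snapshot (snapshots map read-only) and the remote
# destination image as local block devices, then copy block-for-block.
SRC=$(sudo rbd map poolA/image1@snap1)
DST=$(sudo rbd map poolB/image2)
sudo dd if="$SRC" of="$DST" bs=4M conv=sparse status=progress
sudo rbd unmap "$SRC"
sudo rbd unmap "$DST"
```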

