Implement multi-store persistence (OpenDAL) with sled + dashmap.
Write resource blobs to both stores; read from fastest (benchmarked at init) with sled fallback. Add delete across stores. Keeps existing Sled indexes untouched. Preps for full Persistable pattern across profiles.
Lessons learned from OpenDAL integration and terraphim_persistence pattern
- Consistency first: Switching only reads to OpenDAL while writes remain on Sled breaks read-after-write; dual-write or single-path is required for correctness.
- One abstraction boundary: Use OpenDAL as the single storage interface; let Sled/DashMap/RocksDB be OpenDAL services instead of directly coupling to them.
- Fastest-read via benchmarking: Measuring operator latency at startup and selecting the fastest improves read performance; still need write-all for durability.
- Tokio runtime scope: Avoid constructing runtimes deep inside libraries. Expose async APIs or use appropriate blocking adaptors.
- Migration strategy: Plan backfill and deletion symmetry. When introducing a new backend, provide tools/tests to migrate and keep stores in sync.
- Feature gating services: Keep backend choices behind features to reduce dependency surface and compile times.
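The first three lessons can be sketched together. This is a minimal, std-only model of the write-all / read-fastest pattern: the `BlobStore` trait and `MemStore` type are illustrative stand-ins for `opendal::Operator` backends, not any real API.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Std-only stand-in for an OpenDAL operator; trait and names are illustrative.
trait BlobStore {
    fn write(&mut self, key: &str, value: Vec<u8>);
    fn read(&self, key: &str) -> Option<Vec<u8>>;
}

struct MemStore(HashMap<String, Vec<u8>>);

impl BlobStore for MemStore {
    fn write(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
    fn read(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

/// Write to every store for durability; read from whichever store
/// benchmarked fastest at init, falling back to the rest on a miss.
struct MultiStore {
    stores: Vec<Box<dyn BlobStore>>,
    fastest: usize,
}

impl MultiStore {
    fn new(mut stores: Vec<Box<dyn BlobStore>>) -> Self {
        let mut fastest = 0;
        let mut best = Duration::MAX;
        for (i, s) in stores.iter_mut().enumerate() {
            // Probe each backend once at startup and time a read.
            s.write("__probe__", b"x".to_vec());
            let t = Instant::now();
            let _ = s.read("__probe__");
            let elapsed = t.elapsed();
            if elapsed < best {
                best = elapsed;
                fastest = i;
            }
        }
        MultiStore { stores, fastest }
    }

    fn set(&mut self, key: &str, value: &[u8]) {
        // Write-all: every backend sees every write.
        for s in self.stores.iter_mut() {
            s.write(key, value.to_vec());
        }
    }

    fn get(&self, key: &str) -> Option<Vec<u8>> {
        // Fast path first, then fall back to the other stores.
        self.stores[self.fastest]
            .read(key)
            .or_else(|| self.stores.iter().find_map(|s| s.read(key)))
    }
}
```

The fallback in `get` is what makes a partially-populated secondary store tolerable; without write-all in `set`, the fallback only hides the inconsistency rather than fixing it.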
```rust
/// Stores all resources in OpenDAL. The key is the Subject as a `string.as_bytes()`,
/// the value a [PropVals]. PropVals must be serialized using [bincode].
dal_resources: opendal::Operator,
/// OpenDAL operators by profile name (e.g., "sled", "dashmap") for resource blobs
```

```rust
let db = sled::open(path).map_err(|e| format!("Failed opening DB at this location: {:?} . Is another instance of Atomic Server running? {}", path, e))?;
let resources = db.open_tree("resources_v1").map_err(|e| format!("Failed building resources. Your DB might be corrupt. Go back to a previous version and export your data. {}", e))?;
let reference_index = db.open_tree("reference_index_v1")?;
let query_index = db.open_tree("members_index")?;
let prop_val_sub_index = db.open_tree("prop_val_sub_index")?;
let watched_queries = db.open_tree("watched_queries")?;
```
- Branch: origin/opendal; commits include "WIP OpenDAL #433" and "WIP dal".
- Changes: adds OpenDAL (services-sled) to `lib/Cargo.toml`; in `lib/src/db.rs` adds `dal_resources: opendal::Operator`, initializes it with OpenDAL Sled at `<db_path>/opendal` tree `resources_v1`, and switches `get_propvals` to read via `dal_resources.read(subject)` using a new Tokio runtime.
- Gaps / issues:
  - Writes still go to Sled via `set_propvals` (OpenDAL is not used for writes). Reads now use OpenDAL, so read-after-write breaks (the OpenDAL store is empty).
  - Deletion still removes from the Sled tree; OpenDAL items (if any) won't be deleted.
  - A new Tokio runtime inside the DB struct risks nested runtimes and increases complexity; prefer an async surface or OpenDAL's blocking API.
  - Parallel index/other code still relies on Sled trees; storage is now split across two backends.
- Recommendation: unify I/O through OpenDAL (including Sled via `services-sled`) OR dual-write until migration completes. Remove the in-DB runtime; make the read path async or use a blocking layer. Add migration/backfill from Sled to OpenDAL and tests for CRUD consistency.
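The deletion gap deserves its own illustration: once reads can be served from more than one backend, every mutation (not just writes) must touch all of them. A std-only sketch, with plain maps standing in for the Sled tree and the OpenDAL operator:

```rust
use std::collections::HashMap;

/// Std-only stand-in: each backend is a plain map instead of an
/// opendal::Operator. The point is the symmetry of set and delete.
struct Stores {
    backends: Vec<HashMap<String, Vec<u8>>>,
}

impl Stores {
    /// Writes must reach every backend...
    fn set(&mut self, key: &str, value: &[u8]) {
        for b in self.backends.iter_mut() {
            b.insert(key.to_string(), value.to_vec());
        }
    }

    /// ...and so must deletes; otherwise a read that falls back to a
    /// secondary backend can resurrect deleted data.
    fn delete(&mut self, key: &str) {
        for b in self.backends.iter_mut() {
            b.remove(key);
        }
    }

    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.backends.iter().find_map(|b| b.get(key).cloned())
    }
}
```

This is exactly the symmetry the branch currently lacks: `set_propvals` and the delete path touch only Sled while `get_propvals` reads OpenDAL.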
- `Persistable` trait: save to all profiles, load from fastest. Key derived by implementor (e.g., `document_<id>.json`).
- `settings::parse_profiles`: builds OpenDAL operators for profiles (memory, dashmap, rocksdb, redb, sqlite, s3, atomicserver, etc.), benchmarks read latency and picks the fastest.
- Implementations provided for `Thesaurus` and `Document`; includes memory-only helpers and tests for memory/rocksdb/redb/sqlite.
- This matches the intended architecture: write-multiplex + read-from-fastest.

Action items alignment

- Port the terraphim pattern to atomic-server: single abstraction via OpenDAL; write to all configured profiles (or at least primary + replica), read from fastest; ensure index/storage consistency and a clear migration path.
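The implementor-derived-key half of the pattern can be sketched as a trait. This is a std-only approximation of the terraphim-style `Persistable`: the method names follow the text, but the signatures and the byte format (a newline-joined string instead of JSON/bincode) are illustrative stand-ins.

```rust
/// Std-only sketch of a terraphim-style persistence trait.
/// Signatures and serialization are illustrative, not the real crate's API.
trait Persistable: Sized {
    /// Implementors derive their own key, e.g. "document_<id>.json".
    fn get_key(&self) -> String;
    fn to_bytes(&self) -> Vec<u8>;
    fn from_bytes(bytes: &[u8]) -> Option<Self>;
}

struct Document {
    id: String,
    body: String,
}

impl Persistable for Document {
    fn get_key(&self) -> String {
        format!("document_{}.json", self.id)
    }
    // Stand-in encoding: id and body joined by a newline.
    fn to_bytes(&self) -> Vec<u8> {
        format!("{}\n{}", self.id, self.body).into_bytes()
    }
    fn from_bytes(bytes: &[u8]) -> Option<Self> {
        let s = String::from_utf8(bytes.to_vec()).ok()?;
        let (id, body) = s.split_once('\n')?;
        Some(Document {
            id: id.to_string(),
            body: body.to_string(),
        })
    }
}
```

A real `save_to_all` would call `get_key` and `to_bytes`, then write the bytes through every configured operator; `load` would read from the fastest profile and call `from_bytes`.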
- [x] Init OpenDAL Sled under `<db>/opendal` tree `resources_v1`
- [x] Read path switched to `dal_resources.read(subject)`
- [ ] Write path still uses Sled `resources.insert`
- [ ] Delete path still uses Sled remove
- [ ] Indexing and queries use Sled trees only
- [ ] Concurrency/runtime: embedded Tokio runtime introduced

Risks

- Read-after-write inconsistency (OpenDAL store empty)
- Double storage cost; unclear source of truth
- Migration undefined; no backfill from the existing Sled tree
- Runtime nesting problems in servers already using Tokio

Proposed plan

1) Decide strategy: (a) OpenDAL-only with `services-sled` as one profile; or (b) dual-write with single-read (fastest) via profiles, like terraphim.
2) If (a): wrap all CRUD in OpenDAL; drop direct Sled access for propvals; use Sled only for indexes until they are ported.
3) If (b): implement a `save_to_all` concept in the atomic Db for propvals (write to N stores); pick the fastest for reads; keep indexes coherent on one canonical store.
4) Remove the embedded runtime; move read calls to an async surface or a blocking shim.
5) Add migration: backfill OpenDAL from existing Sled items, via a one-time tool or lazy-on-read write-through.
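The lazy-on-read write-through option in step 5 can be sketched in a few lines. Plain maps stand in for the legacy Sled tree and the new OpenDAL store; all names here are hypothetical.

```rust
use std::collections::HashMap;

/// Sketch of lazy-on-read write-through migration. Plain maps stand
/// in for the old Sled tree and the new OpenDAL-backed store.
struct MigratingStore {
    new_store: HashMap<String, Vec<u8>>,
    legacy_sled: HashMap<String, Vec<u8>>,
}

impl MigratingStore {
    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(v) = self.new_store.get(key) {
            return Some(v.clone());
        }
        // Miss in the new store: serve from legacy and write through,
        // so each item is migrated the first time it is read.
        let v = self.legacy_sled.get(key)?.clone();
        self.new_store.insert(key.to_string(), v.clone());
        Some(v)
    }
}
```

Compared to a one-time backfill tool, this migrates only what is actually read, so cold items stay in the legacy store until a final sweep removes it.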
- The terraphim-persistence crate already models profiles + speed test + save_to_all + read_fastest. Its patterns and helper code (or the crate itself) could be reused.
- Consider features for memory/dashmap/rocksdb/redb/sqlite; align with the atomic-server environment.