Go implementation of offset-based native UnixFS proofs.
**Note:** this is a side-project and should not be considered production-ready. It isn't optimized nor audited in any way.
## Table of contents
- [About the project](#about)
- [Does this library assume any particular setup of the UnixFS DAG for the file?](#does-this-library-assume-any-particular-setup-of-the-unixfs-dag-for-the-file)
- [Proof format](#proof-format)
- [Use-case analysis and security](#use-case-analysis-and-security)
- [Proof sizes and benchmark](#proof-sizes-and-benchmark)
- [CLI](#cli)
- [Roadmap](#roadmap)
- [Contributing](#contributing)
- [License](#license)
## About
This library allows generating and verifying proofs for UnixFS file DAGs.

The verifier knows the _Cid_ of a UnixFS DAG and the size of the underlying represented file. With this information, the verifier asks the prover to generate a proof that it stores the block at a specified offset between _[0, max-file-size]_.
The proof is a sub-DAG which contains all the necessary blocks to assert that:
- The provided block is part of the DAG with the expected Cid root.
- The provided block of data is at the specified offset in the file.
The primary motivation for this kind of library is to provide a way to make challenges at random-sampled offsets of the original file, giving a probabilistic guarantee that the prover is storing the data.
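As an illustration of that flow, here is a minimal sketch of one challenge round. `prove` and `verify` are hypothetical stand-ins, not this library's actual API (treat their names and signatures as assumptions); only the shape of the interaction is the point:

```go
package challenge

import (
	"crypto/rand"
	"math/big"
)

// prove is a hypothetical prover-side call: given the root Cid and an offset,
// it returns the proof as a byte array (a CAR file, see "Proof format").
func prove(rootCid string, offset uint64) ([]byte, error) { panic("hypothetical") }

// verify is a hypothetical verifier-side call: it checks the proof against
// the known root Cid and the challenged offset.
func verify(rootCid string, offset uint64, proof []byte) (bool, error) { panic("hypothetical") }

// challengeOnce runs a single challenge round between verifier and prover.
func challengeOnce(rootCid string, maxFileSize uint64) (bool, error) {
	// The verifier samples a uniformly random offset in [0, maxFileSize).
	n, err := rand.Int(rand.Reader, new(big.Int).SetUint64(maxFileSize))
	if err != nil {
		return false, err
	}
	offset := n.Uint64()

	// The prover builds the proof for the block covering that offset; a prover
	// that doesn't store the corresponding blocks can't produce it.
	proof, err := prove(rootCid, offset)
	if err != nil {
		return false, err
	}

	// The verifier only needs the root Cid and the file size to check it.
	return verify(rootCid, offset, proof)
}
```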
Consider the following UnixFS DAG file with a fanout factor of 3:
_(Diagram: a UnixFS DAG with fanout factor 3; the proof sub-DAG is highlighted in green and the target leaf in red.)_

Considering a verifier asking a prover for a proof that it contains the corresponding block at the _file level offset_ X, the prover generates the sub-DAG inside the green zone:
- Round nodes are internal DAG nodes that are somewhat small-ish and don't contain file data.
- Square nodes contain chunks of the original file data.
- The indigo-colored nodes are the nodes necessary to verify that the target block (red) is at the specified offset.

For more details about this proof, read the _Proof sizes and benchmark_ section.
## Does this library assume any particular setup of the UnixFS DAG for the file?

No, this library works with any DAG layout, so it doesn't have any particular assumptions. The DAG can have different layouts (e.g., balanced, trickle), chunking (e.g., fixed size), or other particular DAG builder configurations.
This minimum level of assumptions means the challenger only needs to know the _Cid_ and the file size to ask for and verify the proof.

There's an inherent tradeoff between assumptions and possible optimizations of the proof. See the _Proof sizes and benchmark_ section.
## Proof format
To avoid inventing a new proof standard or format, the proof is a byte array corresponding to a CAR file containing all the blocks that are part of the proof. This decision was mainly made to avoid the friction of defining a new format or standard.

The order of blocks in the CAR file should be considered undefined, despite the current implementation emitting them in BFS order. Defining a particular order could speed up proof verification, so that's a possible future change.
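To see what a proof physically contains, you can walk it with a generic CAR reader. A minimal sketch, assuming the `github.com/ipld/go-car` package (the proof bytes come from wherever the prover sent them):

```go
package inspect

import (
	"bytes"
	"errors"
	"fmt"
	"io"

	car "github.com/ipld/go-car"
)

// dumpProof lists the root Cid(s) and every block carried in a proof, which
// is just a CAR file serialized as a byte array.
func dumpProof(proof []byte) error {
	cr, err := car.NewCarReader(bytes.NewReader(proof))
	if err != nil {
		return err
	}
	// The CAR header carries the root Cid the proof is anchored to.
	fmt.Println("roots:", cr.Header.Roots)

	// Iterate over the blocks; remember their order is undefined by the proof format.
	for {
		blk, err := cr.Next()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			return err
		}
		fmt.Printf("block %s: %d bytes\n", blk.Cid(), len(blk.RawData()))
	}
	return nil
}
```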
## Use-case analysis and security
The primary motivation is to support a random-sampling-based challenge system between a prover and a verifier.
Given a file with size _MaxSize_, the verifier can ask the prover to generate proofs for randomly chosen offsets.

The security of this schema is similar to other random-sampling schemas:
- If the underlying prover doesn't have the block, it won't be able to generate the proof.
- If the offset is randomly sampled in the _[0, MaxSize]_ range, it can't be guessed by the prover without storing the whole file.
If a bad prover is storing only a fraction _p_ of the leaves (e.g., 50%):
- A single challenge gives the prover a probability `p` (e.g., 50%) of success.
- If the challenger asks for N (e.g., 5) proofs, the probability of generating all correct proofs is `p^N` (e.g., ~3%), at the cost of a total proof size of ~`SingleProofSize*N`.
Despite the above, if the prover deletes only 1 byte of the data, it would still generate valid proofs with high probability. Still, the file could be considered corrupted, since a single missing byte is usually enough to make the file unusable.
One possible mitigation is inspired by work from Mustafa et al. on data-availability schemas (see [here](https://ethresear.ch/t/simulating-a-fraud-proof-blockchain/5024)). If an erasure-code schema is applied to the data, the prover is forced to drop a significant amount of data to make the file unrecoverable. For example, if the erasure code has a 2x leverage, the prover would have to drop at least 50% of the file to make it unrecoverable. As shown before, dropping 50% of the data means a ~3% chance of success when asked for 5 proofs. This means that if the file is in an unrecoverable state, with 5 proofs we should detect it at least ~97% of the time.
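A quick sanity check of those numbers (plain arithmetic; `p` and `n` match the example above):

```go
package main

import "fmt"

func main() {
	p := 0.5 // fraction of leaves the bad prover still stores
	n := 5   // number of independent random-offset challenges

	// Probability the prover answers all n challenges correctly: p^n.
	allPass := 1.0
	for i := 0; i < n; i++ {
		allPass *= p
	}
	fmt.Printf("P(all %d proofs succeed) = %.4f (~%.0f%%)\n", n, allPass, allPass*100)
	// Detection probability: at least one challenge fails.
	fmt.Printf("P(detection) = %.4f (~%.0f%%)\n", 1-allPass, (1-allPass)*100)
}
```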
Notice that if the prover is missing internal nodes of the UnixFS DAG, the impact is much higher than missing leaves (underlying data), since for a random offset the probability of hitting an internal node is much bigger than hitting any particular leaf (e.g., if the root Cid block is missing, all challenges will fail). This means the leaf-only analysis above is conservative: a prover missing internal nodes will fail to provide proofs even more often.
## Proof sizes and benchmark
The proof size is directly related to how many assumptions we make about the underlying DAG structure. The current implementation of this library doesn't assume anything about the DAG structure, so it isn't optimized for proof size.

The biggest weight in the proofs comes from leaf blocks, which are usually heavy (~100s of KB); depending on where an offset lands in the DAG structure, a proof could contain multiple data blocks.

If we could bake in an assumption about fixed-size chunking, we could generate mostly minimal and constant-sized proofs, since we could probably avoid all leaves other than the targeted one. The library could be extended in the future to allow baking in assumptions like this to generate smaller proofs.

The cost of generating a proof should be _O(1)_. I'll probably add some benchmarks soon, but realistically speaking the cost is mainly tied to how fast lookups can be done in the `DAGService`, which depends on the source of the data, not on the algorithm.
## CLI
A simple CLI, `ufsproof`, is provided to easily generate and verify proofs. It can be installed by running `make install`.
To generate a proof, run `ufsproof prove [cid] [offset]`, which prints to stdout the proof for the block of the Cid at the provided offset.

For example:
- `ufsproof prove QmUavJLgtkQy6wW2j1J1A5cAP6UQt3XLQjsArsU2ZYmgSo 1300`: assumes that the Cid is stored in an IPFS API at `/ip4/127.0.0.1/tcp/5001`.
- `ufsproof prove QmUavJLgtkQy6wW2j1J1A5cAP6UQt3XLQjsArsU2ZYmgSo 1300 > proof.car`: stores the proof in a file.
- `ufsproof prove --car-file mydag.car QmUavJLgtkQy6wW2j1J1A5cAP6UQt3XLQjsArsU2ZYmgSo 1300`: uses a CAR file instead of an IPFS API.
To verify a proof, run `ufsproof verify [cid] [offset] [proof-path]`, where `proof-path` is optional and defaults to stdin.
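Since `prove` writes the proof to stdout and `verify` reads it from stdin by default, the two commands should compose directly, e.g. `ufsproof prove QmUavJLgtkQy6wW2j1J1A5cAP6UQt3XLQjsArsU2ZYmgSo 1300 | ufsproof verify QmUavJLgtkQy6wW2j1J1A5cAP6UQt3XLQjsArsU2ZYmgSo 1300`.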
Generating and verifying proofs are mostly symmetrical operations. The current implementation is very naive and not optimized in any way; being stricter about the spec'd CAR serialization block order could make it faster. Probably not a big deal unless you're generating proofs for thousands of _Cids_.

Remember that, because of (**) mentioned in _Proof sizes and benchmark_, it is possible to have a valid proof for some offsets greater than the proved one.
## Roadmap
Possible ideas in the near future:
- [ ] Allow direct leaf Cid proofs (non-offset based); a bit offtopic for this lib and not sure it's entirely useful.
- [ ] Benchmarks; may be fun, but nothing entirely useful for now.
- [ ] Allow strict-mode proof validation; maybe it makes sense to fail faster in some cases, nbd.
- [ ] CLI for validation from a DealID in the Filecoin network; maybe fun, but `Labels` are unverified.
- [ ] Baking in assumptions for shorter proofs.
- [ ] godocs

This is a side-project made for fun, so a priori this is a hand-wavy roadmap.