Conversation

Replying to
2.2 talks about this! We assume channel balances are uniformly random, so bigger channels are more likely to succeed. But costs along a path *add* (A-B costs 5, B-C costs 7 => A-B-C costs 12), whereas probabilities *multiply* (P(A-B) = 1/2, P(B-C) = 1/3 => P(A-B-C) = 1/6).
If a channel has capacity u, the chance of getting x sats through (aka p(x)) is (u+1 - x)/(u+1). There's a whole other paper on this, and the +1 seems weird (it's just because there are u+1 equally likely balances, 0 through u), but it basically seems right: bigger channel easier, smaller payment easier. OK.
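A tiny sketch of that model (my own toy Python, not the paper's code; the function names are made up): with the balance assumed uniform on 0..u, a hop forwards x with probability (u+1-x)/(u+1), and a path's chance is the product over its hops.

```python
# Toy sketch of the uniform-balance success model (my code, not the paper's).
# With the balance uniform on {0, ..., u}, P(can forward x) = (u + 1 - x) / (u + 1).

def hop_success_probability(x: int, capacity: int) -> float:
    """Chance that a channel of the given capacity can forward x sats."""
    if x > capacity:
        return 0.0
    return (capacity + 1 - x) / (capacity + 1)

def path_success_probability(x: int, capacities: list[int]) -> float:
    """Probabilities multiply along a path (channels assumed independent)."""
    p = 1.0
    for u in capacities:
        p *= hop_success_probability(x, u)
    return p

# Pushing 2 sats through channels of capacity 3 and 2: 1/2 * 1/3 = 1/6,
# matching the numbers in the tweet above.
print(path_success_probability(2, [3, 2]))
```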
So, the paper goes on to show that maximizing the product of these p(x) is the same as minimizing the sum of -log(p(x)), and I believe it because it sounds right, plus they use big words like "group homomorphism", so I'm just nodding along. So that's our cost function; great!
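Continuing the toy sketch (again my own code, not theirs): log turns products into sums, so ranking candidate paths by the sum of -log(p) gives the same winner as ranking them by the product of probabilities, and additive costs are exactly what min-cost-flow machinery wants.

```python
import math

def hop_success_probability(x: int, capacity: int) -> float:
    # Same model as above: P(can forward x) = (u + 1 - x) / (u + 1).
    return 0.0 if x > capacity else (capacity + 1 - x) / (capacity + 1)

def path_cost(x: int, capacities: list[int]) -> float:
    """Sum of -log(p) per hop: additive, which is what an MCF solver wants."""
    return sum(-math.log(hop_success_probability(x, u)) for u in capacities)

# The path with the higher success probability always has the lower cost:
print(path_cost(2, [3, 2]))     # P = 1/6   -> cost ~1.79
print(path_cost(2, [10, 10]))   # P ~ 0.67  -> cost ~0.40
```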
(They also show this function is convex, and I believe all that because there are big words like "second derivative" and actual formulae!) They then say that makes Min Cost Flow polynomial-time solvable, and they have references, so I believe it.
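For the curious, the "second derivative" bit works out like this (my own arithmetic on the p(x) above, not a quote from the paper):

```latex
c(x) = -\log\frac{u+1-x}{u+1} = \log(u+1) - \log(u+1-x), \qquad 0 \le x \le u,
\qquad
c'(x) = \frac{1}{u+1-x}, \qquad c''(x) = \frac{1}{(u+1-x)^2} > 0
```

So the per-channel cost is convex in the amount, which is what the polynomial-time claim hangs on.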
2.3 asks "but what about fees", and shows that we can just add the ppm feerate into our cost function, with some μ factor depending on whether fees or reliability is more important. Without a base fee, this fee + reliability cost stays convex (thus solvable). Hence #zerobasefee
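A toy version of that combined cost (my own sketch; exactly where μ sits is my guess at the thread's description, and zero base fee is assumed so the fee is just proportional to the amount):

```python
import math

def edge_cost(x: int, capacity: int, fee_ppm: int, mu: float) -> float:
    """Toy per-channel cost: -log(success probability) plus a mu-weighted fee.

    Zero base fee assumed, so the fee for x sats is just x * fee_ppm / 1e6,
    which is linear in x; added to the convex -log term, the total stays convex.
    """
    p = (capacity + 1 - x) / (capacity + 1)
    return -math.log(p) + mu * (x * fee_ppm / 1_000_000)

# Small mu: reliability dominates.  Large mu: fees dominate.
print(edge_cost(50_000, 1_000_000, fee_ppm=100, mu=0.1))
print(edge_cost(50_000, 1_000_000, fee_ppm=100, mu=10.0))
```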
More big math words in italics (Lagrangian!) show that minimizing this gives the best tradeoff. And if we don't like the answer (too pricey), we know increasing μ will drop fees & reliability. If too unlikely, decreasing μ will increase fees & reliability (well, make it no worse!)
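In symbols, that knob is the μ in the combined objective (my paraphrase of the thread, not the paper's exact notation):

```latex
\min_{x} \sum_{e} \Big[ -\log p_e(x_e) \;+\; \mu \cdot \mathrm{fee}_e(x_e) \Big]
```

Sweeping μ traces out the fee-vs-reliability tradeoff: bigger μ gives cheaper but less likely solutions, smaller μ gives pricier but more likely ones.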
2.4 says "this isn't right, the fees are actually little drains, so this needs *generalized* min cost flow!". I swore a bit at that point, but then it says basically "that's hard, fees are low, let's not go there". Phew! Just calc min cost flow for the worst-case total fees.
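One way to read "worst-case total fees" (my guess at the bookkeeping, not the paper's): budget for the priciest candidate route up front and then solve the ordinary MCF.

```python
def worst_case_fee(amount_sat: int, candidate_path_ppms: list[list[int]]) -> int:
    """Upper bound on total fees if the whole amount rode the priciest
    candidate path (zero base fee assumed); rounded up to be pessimistic."""
    worst_ppm = max(sum(ppms) for ppms in candidate_path_ppms)
    return -(-(amount_sat * worst_ppm) // 1_000_000)   # ceiling division

# Paying 1_000_000 sats over paths with per-hop ppm fees of
# [100, 50], [250] and [10, 10, 10]: budget 250 sats of fees up front.
print(worst_case_fee(1_000_000, [[100, 50], [250], [10, 10, 10]]))
```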
3 is the algo, using this. You break the payment into pieces by MCF, send them off, then learn about capacity from both failures and successes, changing your "uncertainty" cost component. If a payment fails at B->C, you know it succeeded at A->B. Success means capacity >= x, failure means < x.
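A toy version of that bookkeeping (my own illustration, not the paper's data structures): keep a lower and an upper bound on what each directed channel can forward, and tighten them after every attempt.

```python
from dataclasses import dataclass

@dataclass
class LiquidityBounds:
    lower: int  # we know at least this much can be forwarded
    upper: int  # we know no more than this can be forwarded

def record_attempt(path: list[str], bounds: dict[tuple[str, str], LiquidityBounds],
                   amount: int, failed_at: int | None) -> None:
    """failed_at = index of the hop that failed, or None if everything succeeded.

    Every hop before the failure forwarded the amount (lower bound rises);
    the failing hop could not (upper bound drops below the amount).
    """
    n = len(path) - 1 if failed_at is None else failed_at
    for i in range(n):
        b = bounds[(path[i], path[i + 1])]
        b.lower = max(b.lower, amount)
    if failed_at is not None:
        b = bounds[(path[failed_at], path[failed_at + 1])]
        b.upper = min(b.upper, amount - 1)

# A 50_000-sat piece fails at the B->C hop:
bounds = {("A", "B"): LiquidityBounds(0, 10**6), ("B", "C"): LiquidityBounds(0, 10**6)}
record_attempt(["A", "B", "C"], bounds, amount=50_000, failed_at=1)
print(bounds)  # A->B lower bound is now 50_000; B->C upper bound is 49_999
```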
Surprisingly, it also means you learn about the reverse capacity (if B->C has low capacity, the rest of the balance must sit on the C->B side, and vice-versa). Most (all?) implementations use a 1% reserve on either side, and we know the channel size from on-chain, so we can use this.
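The reverse-direction inference in numbers (my arithmetic, assuming the 1% reserves mentioned above and ignoring in-flight HTLCs):

```python
def reverse_lower_bound(capacity: int, forward_upper: int,
                        reserve_frac: float = 0.01) -> int:
    """If B->C can forward at most forward_upper sats, B holds at most
    forward_upper + reserve, so C holds at least capacity - forward_upper - reserve
    and can forward at least that minus C's own reserve (1% each side assumed)."""
    reserve = int(capacity * reserve_frac)
    return max(0, capacity - 2 * reserve - forward_upper)

# Channel of 1_000_000 sats; we learned B->C couldn't forward 200_000 sats,
# so its spendable is at most 199_999, and C->B can move at least 780_001.
print(reverse_lower_bound(1_000_000, 199_999))
```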
So you iterate until you either succeed, can't find any capacity at all, or the probability is so low that you give up. (Or try decreasing μ, you damn cheapskate!)
Replying to
I highly suggest looking into the "linear" aspect of this. The stuff you've read so far (I guess) only deals with rational numbers, which are a PITA when it comes to actual implementations. Aside from that, the linearized stuff seems to be much faster. , your cue :)
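If "linear" here means what I think it means (a piecewise-linear approximation of the convex -log cost so an integer, linear min-cost-flow solver can be used; that's my reading, not necessarily the author's), the idea looks roughly like this:

```python
import math

def linearize_channel(capacity: int, pieces: int = 5,
                      scale: int = 1_000_000_000) -> list[tuple[int, int]]:
    """Chop the convex cost c(x) = -log((u+1-x)/(u+1)) into `pieces` parallel
    arcs, each with the segment's average slope (scaled to an integer) as its
    unit cost, so a plain linear/integer MCF solver can handle it.
    (My illustration of the linearization idea, not the paper's exact scheme.)
    """
    u = capacity

    def cost(x: int) -> float:
        return -math.log((u + 1 - x) / (u + 1))

    seg = u // pieces
    arcs = []
    for i in range(pieces):
        lo, hi = i * seg, (i + 1) * seg
        slope = (cost(hi) - cost(lo)) / (hi - lo)   # average slope on [lo, hi)
        arcs.append((seg, round(slope * scale)))    # (arc capacity, integer unit cost)
    return arcs

# Slopes increase because the cost is convex, so the solver naturally fills
# the cheap arcs before the pricier ones:
print(linearize_channel(1_000_000))
```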