Bithack #386
Just to make sure I get this correctly: this optimization is based on the assumption that the providers found for the first requested key will also be able to serve the rest. Without looking at the distributions of GetBlocks requests, and the distribution of providers, I'm not sure this is a good assumption to make. It may work, but we may be falling back to the ticker constantly. If this works better for now and both of you are confident this is the right way to go, let's do it. But it makes me a bit uneasy. While it's nice to reduce DHT queries, it may introduce a lot of wait. We won't really know until we can analyze these protocols in a testbed and in isolation.
Yeah, I can definitely see some situations where we would be frequently falling back. It's really gonna be hard to tell until we have a testbed and can start analyzing these things.
```go
// It uses an internal `datastore.Datastore` instance to store values.
type BlockService struct {
	Datastore ds.Datastore
	Remote    exchange.Interface
	// TODO don't expose underlying impl details
}
```
👍 agreed
Lots of comments up there. I don't feel comfortable with my understanding of what this bitswap impl does. Before merging this in, I want a README that explains this design, outlining the major async processes, the knobs to tweak, and the tradeoffs being made. We've been talking about having these for the major subsystems, and perhaps this is a good place to start. (I'll write a similar one for net)
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
@whyrusleeping License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
@whyrusleeping @jbenet Putting the block generator in a util dir until blocks. Can't put it in util/testutil because the util/testutil/dag-generator imports blockservice and blockservice uses the generator. Tough problem. This'll do for now. License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
+ test(notifications) cc @whyrusleeping @jbenet License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
@whyrusleeping @jbenet License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
@whyrusleeping ada5714#commitcomment-8753622 License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
(force-pushed from 329fb6a to 260ac96)
@maybebtc good to merge?
Need to remove the timeout from the implementation of HasBlock, but this can be handled in a future PR. https://github.com/jbenet/go-ipfs/blob/master/exchange/bitswap/bitswap.go#L256
Ah yeah, can do that at the same time as renaming.

sgtm
BitHack makes a few changes to the way our BitSwap implementation works. From an API standpoint, the biggest change is the addition of `GetBlocks`. `GetBlock` has been changed to be a special-case call of `GetBlocks`. When `GetBlocks` is called, the requested keys are sent to another goroutine through a channel, where the actual requests are handled. When a set of keys is requested, BitSwap will add them all to the wantlist, then find providers for the first key in the list. It will then send the wantlist to those found providers.
The main bitswap loop now contains a 'timeout ticker'. When it ticks, bitswap will find providers for every key left on the wantlist and send the wantlist out to them; this is the 'plan B', as we hope our initial request succeeds, saving us the bandwidth. This operation could be optimized to ensure we only send the wantlist to a given peer once per tick.
`ReceiveMessage` was also changed. Previously it would always return a message, which looked to me like it would cause an infinite loop of messaging between a set of peers. I'm surprised we never experienced anything bad because of it.

To take advantage of this new behaviour, I modified the DAGReader to use `GetBlocks` to retrieve blocks for reading.