Feat: Enable DHT-based peer discovery and routing for cluster peers #489
Conversation
There should also be an accompanying ipfs-cluster-website
PR that updates the bootstrapping documentation.
@@ -86,6 +89,8 @@ func NewCluster(
return nil, errors.New("cluster host is nil")
}

ctx, cancel := context.WithCancel(context.Background())
Unrelated to this particular PR, but could we have the context.Context be passed into the NewCluster function? That way tracing and metrics can follow the ctx from the initialisation of the cluster in daemon.go all the way through.
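For illustration, a minimal sketch of that pattern, with a simplified, hypothetical signature rather than the real NewCluster one: the caller owns the root context, and the constructor derives its internal cancellable context from it instead of from context.Background().

```go
// Hypothetical sketch only: field and parameter names are illustrative,
// not the actual ipfs-cluster NewCluster signature.
package cluster

import "context"

type Cluster struct {
	ctx    context.Context
	cancel context.CancelFunc
	// ...other fields elided
}

// NewCluster derives its internal context from the caller's context, so
// tracing/metrics attached in daemon.go propagate, and the caller can
// also cancel the whole cluster lifecycle.
func NewCluster(parent context.Context /*, host, consensus, cfg, ... */) (*Cluster, error) {
	ctx, cancel := context.WithCancel(parent)
	return &Cluster{ctx: ctx, cancel: cancel}, nil
}

// Shutdown cancels everything derived from the cluster context.
func (c *Cluster) Shutdown() {
	c.cancel()
}
```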
@@ -335,24 +334,6 @@ func (rpcapi *RPCAPI) ConsensusPeers(ctx context.Context, in struct{}, out *[]pe
return err
}

/*
Should we really be 'just' removing these RPC calls? Yes, we are pre-v1, but I feel like we should provide at least some form of deprecation period, especially considering we don't know to what extent our users have integrated with the API.
RPC API is internal, so yes, I'm happy 'just' removing.
👍
peerManager := pstoremgr.New(host, cfg.GetPeerstorePath())

ctx, cancel := context.WithCancel(context.Background())
idht, err := dht.New(ctx, host)
Is this a new cluster DHT or a new connection to the IPFS public DHT?
cluster DHT
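To make "cluster DHT" concrete, here is a rough, stand-alone sketch (assuming current go-libp2p and go-libp2p-kad-dht APIs, and routedhost as the wrapping mechanism, which this PR may or may not use) of a DHT built on the cluster's own host, so its routing table only ever contains cluster peers rather than the public IPFS DHT:

```go
package main

import (
	"context"
	"fmt"

	libp2p "github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	routedhost "github.com/libp2p/go-libp2p/p2p/host/routed"
)

func main() {
	ctx := context.Background()

	// The cluster's own libp2p host (in ipfs-cluster this is the host created
	// for cluster peers, not the IPFS daemon's host).
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// A DHT instance on that host: it only ever learns about peers reachable
	// through this host, i.e. other cluster peers.
	idht, err := dht.New(ctx, h)
	if err != nil {
		panic(err)
	}

	// Wrapping the host with the DHT lets Connect() fall back to a DHT lookup
	// when the peerstore has no addresses for a given peer ID.
	rhost := routedhost.Wrap(h, idht)
	fmt.Println("routed host:", rhost.ID())
}
```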
86325de to 1575cbb
@lanzafame is it good from a code point of view? (I fixed the tests now.) Is it ok to just save the peerstore on shutdown now? And is it ok to include all known addresses in it? Other possibilities are: not saving the peerstore at all (not touching what the user provides), or saving only addresses for current peers.
Yep.
Yes and yes.
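As a rough illustration of what "save all known addresses on shutdown" amounts to (using the plain libp2p peerstore API; the actual persistence goes through pstoremgr and its own file format, which is not shown here):

```go
package peerstoreutil

import (
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	multiaddr "github.com/multiformats/go-multiaddr"
)

// dumpPeerstore collects every multiaddress the host has learned, for every
// peer it has seen, which is roughly what saving the full peerstore on clean
// shutdown would persist. Restricting the loop to cluster peer IDs would give
// the "only current peers" alternative mentioned above.
func dumpPeerstore(h host.Host) map[peer.ID][]multiaddr.Multiaddr {
	out := make(map[peer.ID][]multiaddr.Multiaddr)
	for _, p := range h.Peerstore().Peers() {
		if addrs := h.Peerstore().Addrs(p); len(addrs) > 0 {
			out[p] = addrs
		}
	}
	return out
}
```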
This uses go-libp2p-kad-dht as routing provider for the Cluster Peers.

This means that:

* A cluster peer can discover other Cluster peers even if they are not in their peerstore file.
* We remove a bunch of code sending and receiving peers' multiaddresses when a new peer was added to the Cluster.
* PeerAdd now takes an ID and not a multiaddress. We do not need to ask the new peer which is our external multiaddress nor broadcast the new multiaddress to everyone. This will fix problems when bootstrapping a new peer to the Cluster while not all the other peers are online.
* Adding a new peer no longer means opening connections to all peers. The number of connections will be made according to the DHT parameters (this is good to have for future work).

The fact that we detect a peer addition in the watchPeers() function no longer means that we have connected to it or that we know its multiaddresses, so there is no point in saving the peerstore on these events anymore. Here a question opens: should we save the peerstore at all, and should we save multiaddresses only for cluster peers, or for everyone known? Currently, the peerstore is only updated on clean shutdown, and it is updated with all the multiaddresses known, not limited to peer IDs in the cluster (because, why not).

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
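To illustrate the "PeerAdd now takes an ID" point above: once the DHT provides routing, a peer that only knows another peer's ID can ask the DHT for its addresses instead of having them broadcast. A hedged sketch follows; the helper name is made up and not part of the ipfs-cluster API.

```go
package peerdiscovery

import (
	"context"

	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/peer"
)

// findClusterPeer resolves a cluster peer's multiaddresses through the cluster
// DHT, given only its peer ID. Illustrative only; ipfs-cluster's actual PeerAdd
// flow may differ.
func findClusterPeer(ctx context.Context, idht *dht.IpfsDHT, pid peer.ID) (peer.AddrInfo, error) {
	// FindPeer walks the DHT (which only contains cluster peers) until it
	// learns pid's addresses, or returns an error if the peer cannot be found.
	return idht.FindPeer(ctx, pid)
}
```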
@lanzafame what's missing here? I'm going to look at that failing test and see if I can make it not fail randomly.
@hsanjuan sorry, nothing, it just fell off my radar.