
ipfs-cluster libp2p host does not enable NAT support #346

Closed

hsanjuan opened this issue Mar 13, 2018 · 5 comments

Assignees: hsanjuan

Labels:
exp/novice (Someone with a little familiarity can pick up)
kind/bug (A bug in existing code, including security flaws)
P1 (High: Likely tackled by core team if no one steps up)
status/blocked (Unable to be worked further until needs are met)

Comments

@hsanjuan
Collaborator

We're going to leave this here until we update libp2p again so that this can be used: libp2p/go-libp2p#293
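
For reference, enabling NAT support on a go-libp2p host typically comes down to passing the NATPortMap option to the host constructor, which asks the local router for a port mapping via UPnP/NAT-PMP. The sketch below is illustrative only, not the actual ipfs-cluster patch: the listen address just mirrors cluster's default 9096 port, and the constructor shown is the one in recent go-libp2p releases (older releases also took a context argument).

```go
package main

import (
	"fmt"
	"log"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// Illustrative only: build a libp2p host with NAT port mapping enabled.
	// libp2p.NATPortMap() makes the host request a mapping from the local
	// router via UPnP/NAT-PMP so that peers behind home NATs become dialable.
	h, err := libp2p.New(
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/tcp/9096"), // cluster's default listen port
		libp2p.NATPortMap(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	fmt.Println("host:", h.ID())
	fmt.Println("listening on:", h.Addrs())
}
```

In ipfs-cluster itself the option would presumably be threaded through wherever the cluster host is constructed, which is what the fix commit referenced later in this thread appears to do.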

@hsanjuan added the kind/bug, exp/novice, status/blocked, and P1 labels on Mar 13, 2018
@nothingismagick commented Mar 21, 2018

Can this be fast-tracked? I am confronted with an architecture where my nodes (except for the bootstrapping node) change networks and public IPs so regularly that rm -r ~/.ipfs-cluster/ipfs-cluster-data* is already becoming old hotness. New hotness: manually purging the current peer’s peer object in service.json! If any node changes its host network, then that node can’t be dialed - and the raft is outdated because cluster peers do not match raft peers...

Thankfully the error message helpfully says to ‘Clean the raft state’ - but I have not found any concise guide to doing that...

Returning to the original network does not repair the situation, as one would expect it to. Right now, with only a few pins, I don’t mind tearing it down and building it back up again, but when the bootstrapping node has a million+ pinset... yeah.

Am I being dense? Or am I missing something here? Or should I drop ipfs-cluster in favor of OrbitDB, where I just micromanage the heck out of everybody’s node? I don’t mind working with the IPFS community here and accepting the tech debt and its risks, but I kind of need the security that at some point in the next three to six months I can ship code where the dependencies are fully (and clearly) documented (and NAT works as expected).

@hsanjuan
Collaborator Author

Hi @nothingismagick!

Can this be fast-tracked?

Sorry, no, it cannot be fast-tracked, at least until the sharding branch is rebased on master and I fix something else in libp2p. But it's our intention to do it as early as possible. (cc @ZenGround0)

Thankfully the error message helpfully says to ‘Clean the raft state’ - but I have not found any concise guide to doing that...

ipfs-cluster-service state cleanup will clean the state (equivalent to rm -r <path-to-ipfs-cluster-data>). Please feel free to improve the docs on this front. Even though the specific command is newer, the cluster guide does mention the procedure in the dynamic cluster section.

Or should I drop ipfs-cluster in favor of orbitDB where I just micromanage the heck out of everybody’s node?

It is very hard to support a use-case that you haven't described in detail (did I miss it?). I don't know what you are trying to do or how. Perhaps cluster cannot help you right now. Perhaps your use case just needs some small thing to be viable or perhaps it needs some big thing. I suggest that you open an issue specifically to describe what you are trying to do and how you want to use ipfs-cluster for it. We can work from there and then we can plan and prioritize it the best we can :)

Perhaps your use case is already among the ones compiled at https://github.com/ipfs/ipfs-cluster/pull/215/files by @ZenGround0, but it's still good to have a new issue.

I kind of need the security that at some point in the next three to six months

We aim to do our best to support you, but I have to be very clear in telling you that we can't give you any guarantees about anything. As said above, the best course here is to describe your use case in detail so that we can come up with actionable items and features that we can plan for and work on.

@nothingismagick

Ok - thanks for the detailed feedback! I am a little concerned about the suggestion to “improve the docs” - because I am too new to this project to undertake a task like that. Is there a docs task-force for the IPFS ecosystem?

@ZenGround0
Collaborator

Hey @nothingismagick, there is nothing to be concerned about. @hsanjuan is not suggesting that you overhaul and rework the documentation of the project as a whole. He is only letting you know that if you see specific things in the existing docs that could use a sentence or two for more clarity, then we will accept your patches. In this case the 5th bullet under "Dynamic cluster membership considerations" would be more effective if it referenced ipfs-cluster-service state cleanup, which is something we only figured out based on your feedback.

The general idea is that we value not only feedback but suggestions for improvement in the form of pull requests to the docs and code. Not all projects are as welcoming so we find it helpful to remind people that we accept PRs. There is no pressure on you or other users to contribute, but if you feel inclined we appreciate your help.

On another note there is a documentation effort across the ipfs-universe of projects. If you are interested in this a good place to start is this issue in the ipfs/docs repo: ipfs-inactive/docs#58

hsanjuan added a commit that referenced this issue May 27, 2018
@ghost ghost assigned hsanjuan May 27, 2018
@ghost ghost added the status/in-progress In progress label May 27, 2018
hsanjuan added a commit that referenced this issue May 28, 2018
Fix #346: Enable NAT hole punching for libp2p host
@ghost ghost removed the status/in-progress In progress label May 28, 2018
@nothingismagick

awesome

sublimino pushed commits to sublimino/ipfs-cluster that referenced this issue Jun 2, 2018