
[Conceptual] Looking up an account #166

Closed
wants to merge 1 commit into from

Conversation

@carver (Collaborator) commented Oct 28, 2021

Just starting to feel out what it might look like to get the encoded account from the state trie.

So far, this totally ignores the question of state root. That will be addressed by v0.1.1 of eth-trie: carver/eth-trie.rs#20


async fn find_content(&self, content_key: &[u8]) -> Result<Vec<u8>, String> {
    // TODO stop cloning content_key, when find_nodes_close_to_content API has changed
    let ENRs = self.overlay.find_nodes_close_to_content(content_key.clone()).await;
Collaborator Author (carver):

er, I think these should be to_vec() instead of clone(), until #165 gets merged.
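A minimal sketch (with hypothetical values) of why `to_vec()` matters here: calling `clone()` on a `&[u8]` only copies the reference, yielding another `&[u8]`, while `to_vec()` copies the bytes into an owned `Vec<u8>`:

```rust
fn main() {
    let content_key: &[u8] = &[0x01, 0x02];

    // clone() on a &[u8] copies the reference, not the bytes:
    let still_borrowed: &[u8] = content_key.clone();

    // to_vec() copies the slice contents into an owned Vec<u8>,
    // which is what an API that takes ownership of the key needs:
    let owned: Vec<u8> = content_key.to_vec();

    assert_eq!(still_borrowed, owned.as_slice());
}
```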

Member:

In my previous DDHT iteration, I ended up needing/wanting two independent APIs here.

  1. find_nodes_close_to_thing: will return the very closest nodes after an exhaustive search.
  2. stream_nodes_close_to_thing: will stream the nodes as they are found in the order of:
    • up until the point where it has found the very closest node, it will only stream nodes closer than the previously returned one.
    • once the closest node has been found, it will continue streaming them in distance order.

An example using linear ordering.

Supposing we are looking for node 0 and there exists nodes 0, 1, 2, 3, 4, 5, 6.

We encounter the nodes in the order of: 4, 3, 5, 2, 0, 1, 6.

Then the APIs above would return:

  • find_nodes_close_to_thing: [0, 1, 2, 3, ... (probably cutting off at some default limit)]
  • stream_nodes_close_to_thing: [4, 3, 2, 0, 1, 5, 6]

The general idea is that we want to prioritize being able to start requesting content and acting on the nodes we find well before we find the actual closest node.
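As an illustration only (not the actual DDHT implementation), the streaming order in the example above can be sketched with a simple model where each node is identified by its linear distance from the target; `stream_order` is a hypothetical helper that replays an encounter order under the rule described:

```rust
// Sketch: emit only strictly-closer nodes until the closest (distance 0)
// is found, then emit everything else in distance order.
fn stream_order(encounters: &[u32], closest: u32) -> Vec<u32> {
    let mut out = Vec::new();
    let mut held_back: Vec<u32> = Vec::new();
    let mut best = u32::MAX;
    let mut it = encounters.iter().copied();
    for node in it.by_ref() {
        if node < best {
            // Closer than anything streamed so far: emit it immediately.
            out.push(node);
            best = node;
            if node == closest {
                break;
            }
        } else {
            // Not an improvement yet: hold it back for the second phase.
            held_back.push(node);
        }
    }
    // Closest found (or encounters exhausted): emit the held-back nodes
    // and any remaining encounters in distance order.
    held_back.extend(it);
    held_back.sort_unstable();
    out.extend(held_back);
    out
}

fn main() {
    // Encounter order from the example: 4, 3, 5, 2, 0, 1, 6.
    let streamed = stream_order(&[4, 3, 5, 2, 0, 1, 6], 0);
    assert_eq!(streamed, vec![4, 3, 2, 0, 1, 5, 6]);
}
```

This is just a linear-distance toy to make the ordering concrete; the real search space uses XOR distance and the stream is fed by an ongoing network lookup rather than a fixed list.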

Member:

Another similar thing at play here is that stream_nodes_close_to_thing should potentially zero in on the very closest node, and then after that, continue searching outwards, eventually enumerating the entire search space in the network if you continue streaming nodes.

Collaborator Author (carver):

Ok, I think I see where you're going. One thing that surprised me: if 5 is the only node we have, this strategy wouldn't ask 5; it would just block waiting for another candidate to come in? I'd think you'd want to have a request in flight as much as possible (maybe even 2 or 3 in parallel).

For short-term, my instinct is to keep the approach as trivial as possible, until it works.

Member:

Yeah, the edge cases for degenerate small networks are awkward. In the algorithm I implemented, the stream concludes when all nodes have been asked (or timed out). Feel free to use this just as a reference for where you might be going; no need to try to do the complex thing first.

    };
    match response {
        Response::FoundContent(val) => Ok(val),
        // The following is unreachable, right?
Collaborator Author (carver):

Hah, not sure why I wrote that. It's definitely reachable.

@ogenev ogenev closed this Dec 13, 2023
@ogenev ogenev deleted the look-up-account branch December 13, 2023 08:02
@njgheorghita (Collaborator) commented:

@ogenev I'm totally on board with closing this PR. Just wanted to say: before you close an issue/PR, I'd recommend leaving a short comment that explains why it's being closed, for future reference. Even in cases like this, where the PR is outdated and no longer appears to be relevant/active, a quick explanation is still useful.

@ogenev (Member) commented Dec 14, 2023

Sorry for the inconvenience.

I deleted the upstream branch which was outdated (stale for more than 2 years) and this closed the PR automatically.
