Handle overlay network events #110

Merged: @ogenev merged 6 commits into ethereum:master from event_handler_multiple_networks on Sep 21, 2021

Conversation

@ogenev (Member) commented Sep 20, 2021

What does this PR do?

This PR follows up on #103.
It adds support for running the state and history network event handlers in parallel.

Any specific implementation changes that would benefit highlighting?

The general idea is to have one main portal network event handler plus a network event handler per network.
The main event handler dispatches all events to the specific network handlers via unbounded channels, by checking the protocol header of the discv5 talk_req.
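The dispatch idea can be sketched roughly as follows. This is a minimal, hypothetical stand-in, not the actual trin code: the real implementation uses tokio unbounded channels and discv5's `TalkRequest`, while this sketch uses `std::sync::mpsc` and a simplified request struct so it stays self-contained.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};

// Hypothetical, simplified stand-in for discv5's TalkRequest; only the
// fields needed for dispatch are modeled here.
#[derive(Debug)]
struct TalkRequest {
    protocol: Vec<u8>,
    body: Vec<u8>,
}

// Route a request to the per-network channel keyed by its protocol
// header, mirroring what the main event handler described above does.
fn dispatch(
    senders: &HashMap<String, Sender<TalkRequest>>,
    req: TalkRequest,
) -> Result<(), String> {
    // The protocol header is raw UTF-8 bytes (e.g. b"history", b"state").
    let protocol = String::from_utf8(req.protocol.clone())
        .map_err(|e| e.to_string())?;
    match senders.get(&protocol) {
        Some(tx) => tx.send(req).map_err(|e| e.to_string()),
        // Unknown protocols are rejected, as in the "barfoo" case below.
        None => Err(format!("Non supported protocol: {}", protocol)),
    }
}
```

Each network handler then owns the receiving end of its channel and processes requests independently, which is what allows the handlers to run in parallel.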

This implementation was tested with the dummy node from the testing framework branch by sending three ping requests: one for each supported network (state, history) and one for a network we don't support. Here are the results:

Sep 20 13:02:16.534  INFO trin_core::cli: Protocol: ipc
IPC path: /tmp/trin-jsonrpc.ipc    
Sep 20 13:02:16.534  INFO trin_core::cli: Pool Size: 2    
Sep 20 13:02:16.534  INFO trin_core::cli: Bootnodes: None    
Sep 20 13:02:26.539 DEBUG trin_core::socket: STUN claims that public network endpoint is: Err("Timed out waiting for STUN server reply")    
Sep 20 13:02:26.547  INFO trin_core::portalnet::discovery: Starting discv5 with local enr encoded=enr:-IS4QEBiJz6RJ6sUlzBMUowcrHJrk9JFzAT32gJktiG6-lTPDzrgEfmhrGSHkprIjUp4bQE_L-bUqxsEANBVGTegCrMBgmlkgnY0gmlwhMCoASGJc2VjcDI1NmsxoQLP5TRJ2p4HSTZL3fa-NxzRANm0OE6VieTw6yfKF2ViyYN1ZHCCIyk decoded=ENR: NodeId: 0x092c..362b, Socket: Some(192.168.1.33:9001)    
Sep 20 13:02:26.551  INFO discv5::service: Discv5 Service started
Sep 20 13:02:26.551 DEBUG discv5::handler: Handler Starting
Sep 20 13:02:26.551 DEBUG discv5::socket::recv: Recv handler starting
Sep 20 13:02:26.551 DEBUG discv5::socket::send: Send handler starting
Sep 20 13:02:26.565 DEBUG trin: Selected networks to spawn: ["history", "state"]    
Sep 20 13:02:26.566  INFO trin: About to spawn State Network with boot nodes: []    
Sep 20 13:02:26.566  INFO trin: About to spawn History Network with boot nodes: []    
Sep 20 13:02:55.816 DEBUG discv5::service: NodeId unknown, requesting ENR. Node: 0x8c16..1720, addr: 192.168.1.33:9876
Sep 20 13:02:55.816 DEBUG discv5::handler: Sending WHOAREYOU to Node: 0x8c16..1720, addr: 192.168.1.33:9876
Sep 20 13:02:55.840 DEBUG discv5::service: Session established with Node: 0x8c16..1720, direction: Incoming
Sep 20 13:02:55.840 DEBUG discv5::service: New connected node added to routing table: 0x8c16..1720
Sep 20 13:02:55.841 DEBUG trin_core::portalnet::events: Got discv5 event NodeInserted { node_id: NodeId { raw: [140, 22, 195, 101, 247, 209, 226, 186, 66, 78, 130, 150, 127, 62, 155, 96, 20, 110, 177, 16, 73, 7, 112, 122, 57, 130, 157, 135, 238, 167, 23, 32] }, replaced: None }    
Sep 20 13:02:55.841 DEBUG trin_core::portalnet::events: Got discv5 event TalkRequest(TalkRequest { id: RequestId([56, 72, 188, 199, 198, 185, 62, 132]), node_address: NodeAddress { socket_addr: 192.168.1.33:9876, node_id: NodeId { raw: [140, 22, 195, 101, 247, 209, 226, 186, 66, 78, 130, 150, 127, 62, 155, 96, 20, 110, 177, 16, 73, 7, 112, 122, 57, 130, 157, 135, 238, 167, 23, 32] } }, protocol: [104, 105, 115, 116, 111, 114, 121], body: [1, 1, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sender: Some(UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x555fba6c3390, tail_position: 1 }, semaphore: 0, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }) })    
Sep 20 13:02:55.841 DEBUG trin_history::events: Got history request TalkRequest { id: RequestId([56, 72, 188, 199, 198, 185, 62, 132]), node_address: NodeAddress { socket_addr: 192.168.1.33:9876, node_id: NodeId { raw: [140, 22, 195, 101, 247, 209, 226, 186, 66, 78, 130, 150, 127, 62, 155, 96, 20, 110, 177, 16, 73, 7, 112, 122, 57, 130, 157, 135, 238, 167, 23, 32] } }, protocol: [104, 105, 115, 116, 111, 114, 121], body: [1, 1, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sender: Some(UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x555fba6c3390, tail_position: 1 }, semaphore: 0, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }) }    
Sep 20 13:02:55.841 DEBUG trin_history::events: Got history overlay ping request Ping(Ping { enr_seq: 1, data_radius: 18446744073709551615 })    
Sep 20 13:02:55.841 DEBUG discv5::service: Sending TALK response to Node: 0x8c16..1720, addr: 192.168.1.33:9876
Sep 20 13:02:55.844 DEBUG trin_core::portalnet::events: Got discv5 event TalkRequest(TalkRequest { id: RequestId([108, 84, 236, 165, 144, 113, 72, 111]), node_address: NodeAddress { socket_addr: 192.168.1.33:9876, node_id: NodeId { raw: [140, 22, 195, 101, 247, 209, 226, 186, 66, 78, 130, 150, 127, 62, 155, 96, 20, 110, 177, 16, 73, 7, 112, 122, 57, 130, 157, 135, 238, 167, 23, 32] } }, protocol: [115, 116, 97, 116, 101], body: [1, 1, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sender: Some(UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x555fba6c3390, tail_position: 2 }, semaphore: 0, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }) })    
Sep 20 13:02:55.844 DEBUG trin_state::events: Got state request TalkRequest { id: RequestId([108, 84, 236, 165, 144, 113, 72, 111]), node_address: NodeAddress { socket_addr: 192.168.1.33:9876, node_id: NodeId { raw: [140, 22, 195, 101, 247, 209, 226, 186, 66, 78, 130, 150, 127, 62, 155, 96, 20, 110, 177, 16, 73, 7, 112, 122, 57, 130, 157, 135, 238, 167, 23, 32] } }, protocol: [115, 116, 97, 116, 101], body: [1, 1, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sender: Some(UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x555fba6c3390, tail_position: 2 }, semaphore: 0, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }) }    
Sep 20 13:02:55.844 DEBUG trin_state::events: Got state overlay ping request Ping(Ping { enr_seq: 1, data_radius: 18446744073709551615 })    
Sep 20 13:02:55.844 DEBUG discv5::service: Sending TALK response to Node: 0x8c16..1720, addr: 192.168.1.33:9876
Sep 20 13:02:55.846 DEBUG trin_core::portalnet::events: Got discv5 event TalkRequest(TalkRequest { id: RequestId([178, 121, 10, 242, 179, 112, 44, 18]), node_address: NodeAddress { socket_addr: 192.168.1.33:9876, node_id: NodeId { raw: [140, 22, 195, 101, 247, 209, 226, 186, 66, 78, 130, 150, 127, 62, 155, 96, 20, 110, 177, 16, 73, 7, 112, 122, 57, 130, 157, 135, 238, 167, 23, 32] } }, protocol: [98, 97, 114, 102, 111, 111], body: [1, 1, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sender: Some(UnboundedSender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x555fba6c3390, tail_position: 3 }, semaphore: 0, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }) })    
Sep 20 13:02:55.847  WARN trin_core::portalnet::events: Non supported protocol : barfoo    
Sep 20 13:02:55.847 DEBUG discv5::service: Sending empty TALK response to Node: 0x8c16..1720, addr: 192.168.1.33:9876
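In the logs above, the protocol field is printed as raw bytes. Decoding them confirms the routing: [104, 105, 115, 116, 111, 114, 121] is "history", [115, 116, 97, 116, 101] is "state", and [98, 97, 114, 102, 111, 111] is the unsupported "barfoo". A one-line helper (illustrative, not part of the PR) makes the check explicit:

```rust
// Decode a protocol byte array from the debug logs into its string name.
fn protocol_name(bytes: &[u8]) -> String {
    String::from_utf8_lossy(bytes).into_owned()
}
```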

@KolbyML (Member) left a comment:
From a quick glance, it looks good and it also fixes issues I saw in #103, so I support this PR.

@njgheorghita (Collaborator) left a comment:
Looks good! A couple of suggestions about refactoring that I feel do need to get handled, but they can be addressed in a subsequent PR if you'd prefer.

@@ -194,114 +172,3 @@ async fn proxy_query_to_state_subnet(
None => Err("No response from state subnetwork".to_string()),
}
}

impl PortalnetEvents {
Collaborator:

I'm not sure protocol is an accurate name for this module anymore after removing this code...

@ogenev (Member, Author):

Yeah, what is left is the JSON-RPC handlers. I created a new jsonrpc folder and refactored the jsonrpc files a bit.

})
// Spawn main event handlers
//ToDo: Probably we can find a better way to refactor those event handler initializations.
// Initializing them together with the json-rpc handlers above may be a good idea.
Collaborator:

I strongly support this. Ideally, when it comes to this refactor, we can package the logic neatly in such a way that it can be shared between this main() and the subnet main() used when the networks are run individually, so we don't need to maintain the same logic in multiple places. I'll create an issue to track this after this PR gets merged.


tokio::spawn(history_events.process_requests());
}
None => {
Collaborator:

Yeah, it seems like refactoring would also help eliminate some of these weird edge cases we need to handle right now.

pub event_rx: UnboundedReceiver<TalkRequest>,
}

impl HistoryEvents {
Collaborator:

I'm a little nervous about how similar this and StateEvents are - besides some log messages, I don't notice any significant differences. That means that as we continue development, any change to one will likely need to be made to the other - and they could fall out of sync very quickly. Could/did you try to implement a common datatype that can be used in each subnetwork? If you want to handle that in a subsequent PR though, that's cool.

@ogenev (Member, Author):

I moved the logic for handling the common requests (currently all requests are common) inside the overlay protocol, which probably makes perfect sense because every network implements the overlay protocol. This was actually the idea from the beginning, but sometimes I get lost in refactoring :)

@ogenev ogenev force-pushed the event_handler_multiple_networks branch from 9f4643d to b58bedc Compare September 21, 2021 09:01
@ogenev ogenev changed the title Handle portal events from multiple networks Handle overlay network events Sep 21, 2021
@ogenev ogenev merged commit 07d70c1 into ethereum:master Sep 21, 2021
@ogenev ogenev deleted the event_handler_multiple_networks branch January 24, 2022 14:17