Hello, author. I noticed that during training you oversample fraud nodes, selecting neighbors whose labels match the fraud node's label. This procedure requires the node's class label. However, node labels are not available at inference time, so fraudulent nodes cannot be oversampled there. This creates an inconsistency between the sampling used at training and at inference, with the training-stage setting being the more favorable one. Doesn't this violate standard machine learning practice and make the trained model unreliable? A minimal sketch of the pattern I am describing follows below.
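To make the concern concrete, here is a minimal sketch (not the repository's actual code) of the pattern in question: label-aware neighbor oversampling at training time versus label-agnostic sampling at inference time. All names here (`sample_neighbors`, `GRAPH`, `LABELS`) are hypothetical and only illustrate the mismatch.

```python
# Hypothetical illustration of the train/inference sampling mismatch.
# Not the author's implementation; names and data are made up.
import random

# Toy graph: node id -> list of neighbor ids
GRAPH = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
# Labels: 1 = fraud, 0 = benign (known only for training nodes)
LABELS = {0: 1, 1: 0, 2: 1, 3: 0, 4: 1}


def sample_neighbors(node, k, labels=None):
    """Sample k neighbors of `node`.

    If `labels` is given (training), oversample neighbors whose label
    matches the center node's label; otherwise (inference) sample uniformly,
    because labels are unknown.
    """
    neighbors = GRAPH[node]
    if labels is not None:
        # Training: same-label neighbors appear twice in the pool,
        # i.e. they are oversampled using label information.
        same_label = [n for n in neighbors if labels[n] == labels[node]]
        pool = neighbors + same_label
    else:
        # Inference: no labels, so only uniform sampling is possible.
        pool = neighbors
    return random.choices(pool, k=k)


# Training-time draw uses labels; inference-time draw cannot,
# which is exactly the inconsistency raised in this issue.
print("train:    ", sample_neighbors(0, 4, labels=LABELS))
print("inference:", sample_neighbors(0, 4))
```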
I have the same question, so I don't think the comparison in the paper is fair when testing in the inference stage.