Cannot Fanout a head after contestation #1260
Here is a link to the contest transaction: https://preview.cexplorer.io/datum/bce3f5c87c7ef9447d8c382cccd01649bb9bf646934651f0777057390d7457c5
so the contestation deadline is 1705567787000, which is Thursday, 18 January 2024 08:49:47 UTC.
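As a quick sanity check of that conversion, here is a minimal standalone sketch (plain `time` package, not hydra code) treating the deadline as POSIX milliseconds:

```haskell
import Data.Time.Clock.POSIX (posixSecondsToUTCTime)

-- The deadline from the datum is milliseconds since the Unix epoch,
-- so divide by 1000 before converting to UTCTime.
main :: IO ()
main = print (posixSecondsToUTCTime (1705567787000 / 1000))
-- prints: 2024-01-18 08:49:47 UTC
```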
The validity range of the failing …
I have tried to recover from this situation by: …
But issuing a …
@ch1bo noticed the … It might be related and worth exploring as a potential solution to this.
Current plan is to reproduce this using the model-based testing.
Node 2 - posted: CloseTx 2024-01-18T08:46:49.767738Z
Tried to make sense of the logs: https://github.com/input-output-hk/hydra/wiki/Logbook-2024-H1#sn-on-reproducing-the-contest-bug
Then, we drew this diagram today, which should reflect the situation given by the logs:
Conclusion: …
Context & versions
I am testing hydra-chess running between 2 different macOS machines on a local network:
Steps to reproduce
I don't know precisely 🤷 What I did is roughly the following:

- `newTable`, which should open a head, which normally leads to each app posting a transaction to split the committed UTxO
- `newGame`
- `stop` the game, which did not work, and finally
- `Close` the head from a WS client, which worked but led to a contestation being issued by one of the nodes
- `Fanout` from any of the 2 nodes
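For reference, the `Close` and `Fanout` steps over a WS client amount to something like the sketch below; the port, path, and exact JSON shape are assumptions to check against the hydra-node API reference, and the `websockets` package is just one way to do it:

```haskell
import qualified Data.Text as T
import qualified Network.WebSockets as WS

-- Hedged sketch of "Close the head from a WS client": connect to the
-- hydra-node API (default 127.0.0.1:4001 assumed) and send the Close
-- client input; Fanout would be sent the same way once the contestation
-- deadline has passed.
main :: IO ()
main = WS.runClient "127.0.0.1" 4001 "/" $ \conn -> do
  WS.sendTextData conn (T.pack "{\"tag\": \"Close\"}")
  -- read one server output (e.g. HeadIsClosed) before disconnecting
  response <- WS.receiveData conn :: IO T.Text
  putStrLn (T.unpack response)
```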
Actual behavior

The most concerning problem is that now:

- `Fanout` from node-1: the transaction fails because of an `H24` error, meaning the deadline is not reached
- `Fanout` from node-2: the transaction fails because of an `H25` error, meaning the snapshot hashes don't match

So the head state is in limbo, blocked in a transaction on L1.
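If it helps to frame the H24 failure: my understanding (an assumption, to be verified against the actual hydra-plutus validator) is that fanout is only accepted once the transaction's validity range starts after the contestation deadline, roughly:

```haskell
-- Rough sketch of the assumed check behind the H24 ("deadline not reached")
-- error; names are illustrative, not the actual hydra-plutus code.
type PosixMillis = Integer

fanoutAllowed :: PosixMillis -> PosixMillis -> Bool
fanoutAllowed txValidityLowerBound contestationDeadline =
  txValidityLowerBound > contestationDeadline

-- e.g. a validity lower bound before 1705567787000 would be rejected,
-- matching the H24-style failure seen on node-1.
```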
Moreover, I observed that the 2 `acks` from both nodes are not in sync, and actually messages from node-1 are missing:

- `[16,5]`
- `[16, 12]`
This seems to imply that some messages were not persisted correctly on node-2, which is already known to be possible because the underlying persistence mechanism we use is not very robust.
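To make that concrete, here is a tiny illustrative sketch (assuming the ack vectors are per-party counters of messages seen, which is not necessarily Hydra's actual representation) that flags where one node's view lags behind the other's:

```haskell
-- Compare two ack vectors index by index and report positions where the
-- counters disagree, i.e. where one node appears to have missed messages.
ackMismatches :: [Int] -> [Int] -> [(Int, Int, Int)]
ackMismatches xs ys = [(i, x, y) | (i, x, y) <- zip3 [0 ..] xs ys, x /= y]

main :: IO ()
main = print (ackMismatches [16, 5] [16, 12])
-- [(1,5,12)]: the second counter differs, consistent with messages from
-- node-1 being missing on one side.
```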
I have attached the state and logs for both nodes:
chess-1.tgz
chess-2.tgz