ModelComparisonSimulator: handle different outputs from individual simulators #452
base: dev
Conversation
Thanks for the PR, I skimmed it and like the idea behind the changes. I'll try to conduct a proper review some time this week.
Looks good from my side. See individual comments.
Can we also add tests for the (few) missed edge-case lines?
add newlines to correctly render lists, make reference to other class a link
See my comment on one edge case; I'm not sure whether it is a relevant one. What do you think?
Apart from that, the PR looks good to me, I only added minor formatting fixes to the docstring.
def _determine_key_conflicts(self, sims):
    # determine only once
    if self._keys is not None:
        return self._keys
Can this return "wrong" results if some simulators had n=0 in line 120 when the function first runs, and n>0 later on? Is this something we want to safeguard against?
I imagine this function is quite cheap to compute; would it make sense to run it completely every time (but only log the info once)?
I agree
OK, will do that.
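A minimal sketch of the agreed approach, in the spirit of the discussion above: recompute the key comparison on every call but emit the info log only once. The names used here (sims as a list of per-simulator output dicts, self._key_conflicts_logged) are illustrative assumptions, not necessarily the identifiers used in the PR.

import logging

def _determine_key_conflicts(self, sims):
    # Compare output keys on every call, so simulators that produced no
    # samples on an earlier call (the n=0 case) are still picked up later.
    keys = [set(sim) for sim in sims if sim]
    common_keys = set.intersection(*keys) if keys else set()
    conflicting_keys = (set.union(*keys) if keys else set()) - common_keys

    # Log the conflict info only once to avoid spamming the log.
    if conflicting_keys and not self._key_conflicts_logged:
        logging.info("Dropping keys not common to all simulators: %s", sorted(conflicting_keys))
        self._key_conflicts_logged = True

    return common_keys, conflicting_keys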
The changes are looking great. Can we address Valentin's comments before we merge? I also left some minor comments.
    assert set(samples) == {"x", "model_indices", "c", "w"}
    assert np.sum(np.isnan(samples["c"])) + np.sum(np.isnan(samples["w"])) == batch_size
elif multimodel_key_conflicts.key_conflicts == "error":
    with pytest.raises(Exception):
This is too broad of a check. Use specific exception types.
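A sketch of the suggested tightening; the specific exception class and message pattern are assumptions about what the simulator raises (ValueError is a plausible choice), and the .sample call is illustrative:

# Assert a specific exception type (and, optionally, a message pattern)
# instead of a bare Exception. ValueError and the match string are assumed.
with pytest.raises(ValueError, match="conflict"):
    multimodel_key_conflicts.sample(batch_size)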
fixes #441
As per #441 (comment), this PR implements configurable handling of key conflicts when the individual simulators produce different output keys.
However, by default the simulator will simply drop (with an info log message) keys that are not common to all simulators, since in most situations those outputs would not be needed in the first place.
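A hedged usage sketch of the new behavior: the key_conflicts argument appears in the test code above, but the exact mode names ("drop", "fill", "error"), the import path, and the .sample signature are assumptions, not confirmed by this PR text.

import numpy as np
from bayesflow.simulators import ModelComparisonSimulator  # import path assumed

def simulator_a():
    # returns keys "x" and "c"
    return dict(x=np.random.normal(size=2), c=np.random.normal())

def simulator_b():
    # returns keys "x" and "w"
    return dict(x=np.random.normal(size=2), w=np.random.normal())

# Assumed modes: "drop" (default) discards keys not shared by all simulators
# and logs an info message; "fill" pads missing keys (e.g., with NaN);
# "error" raises on any key conflict.
simulator = ModelComparisonSimulator(
    simulators=[simulator_a, simulator_b],
    key_conflicts="fill",
)
samples = simulator.sample(500)
# Expected keys: "x", "model_indices", plus NaN-padded "c" and "w".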