Implement new env performance before sending WTCT #4501
Conversation
Force-pushed from 7c779cb to 6626654
Codecov Report
```
@@            Coverage Diff             @@
##           develop    #4501      +/-  ##
===========================================
-  Coverage    90.24%   90.24%   -0.01%
===========================================
   Files          225      225
   Lines        19667    19727      +60
===========================================
+  Hits         17748    17802      +54
-  Misses        1919     1925       +6
```
```python
if isinstance(env, OldEnv):
    return env.get_min_accepted_performance()
# NewEnv
# TODO: Implement minimum performance in new env
```
Maybe `config_desc` instead? I feel like min performance is a setting that does not belong to a computation environment.
I was thinking this setting belongs to the app, not the env. I've already made an issue on the board to fix this. Do you think it is better to use `config_desc` here instead of `0.0`?
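A minimal sketch of the dispatch being discussed, assuming the minimum-accepted-performance value moves into `config_desc` as the reviewer suggests. `OldEnv`, `NewEnv`, `ConfigDesc`, and the field name `min_accepted_performance` are illustrative stand-ins here, not Golem's actual classes:

```python
class OldEnv:
    # Old-style environments carry their own minimum performance.
    def get_min_accepted_performance(self) -> float:
        return 100.0


class NewEnv:
    # New-style environments deliberately have no such setting.
    pass


class ConfigDesc:
    # Hypothetical config field; the real Golem config key may differ.
    min_accepted_performance: float = 50.0


def get_min_accepted_performance(env, config_desc: ConfigDesc) -> float:
    if isinstance(env, OldEnv):
        return env.get_min_accepted_performance()
    # NewEnv: min performance is an app/config setting, not an env one,
    # so fall back to config_desc instead of hard-coding 0.0.
    return config_desc.min_accepted_performance
```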
Force-pushed from 513d3b0 to f9806df
Force-pushed from 2b8b5e7 to 5a89cbb
- Yield new env, not old
- Expect Performance model to be not Null
- Remove unused vars
- Use env from register_env()
- Check regex for exception
- Add comment and errback on fire-and-forget deferred
Force-pushed from 5a89cbb to 945dda0
@Krigpl thanks for the review! Fixed all the comments and rebased to the latest develop, please check again.
tests/golem/task/test_envmanager.py (Outdated)
```python
# Given
env_id = "env1"
env, _ = self.register_env(env_id)
env.get_benchmark = MagicMock(side_effect=Exception)
```
An alternative way is `env.get_benchmark.return_value = Exception`
Is there any added value to using `return_value` over `side_effect`? I like `side_effect` for exceptions since it's not really returning but raising.
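The difference is easy to demonstrate: `side_effect=Exception` makes the mock raise when it is called, whereas assigning an exception to `return_value` merely hands the exception object back to the caller, which a test would silently swallow. A minimal standalone example:

```python
from unittest.mock import MagicMock

# side_effect: calling the mock raises the exception.
raising_mock = MagicMock(side_effect=Exception("boom"))
try:
    raising_mock()
    caught = None
except Exception as exc:
    caught = exc

# return_value: calling the mock merely returns the exception object,
# so no exception propagates to the caller.
returning_mock = MagicMock()
returning_mock.return_value = Exception("boom")
result = returning_mock()
```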
- Check if run_benchmarks exception is passed properly
Force-pushed from d1669ca to 1105d47
Compute the new app benchmark if needed. Send the new app benchmark score in WTCT so that it can be used in the new subtask creation flow.

Steps taken:
- `TaskServer.get_environment_by_id()`
- `EnvironmentManager.get_performance()`
- Store the score in `CompTaskInfo` so it does not have to be calculated after the task is completed

Steps left:
- `deferToThread` or `sync_wait` for running the benchmark
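Twisted's `deferToThread` runs a blocking call in the reactor's thread pool and returns a `Deferred` that fires with the result, so the benchmark does not block the main loop. As a rough stdlib analogue of that remaining step (a sketch only, not Golem's actual code; `run_benchmark` and its score are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def run_benchmark(env_id: str) -> float:
    # Hypothetical stand-in for a slow benchmark computation.
    return 42.0

_executor = ThreadPoolExecutor(max_workers=1)

def get_performance_async(env_id: str):
    # Returns a Future, analogous to the Deferred from deferToThread:
    # the caller is not blocked while the benchmark runs.
    return _executor.submit(run_benchmark, env_id)

future = get_performance_async("env1")
score = future.result()  # like sync_wait: block until the result is ready
```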