
Implement new env performance before sending WTCT #4501

Merged: 10 commits into develop on Jul 29, 2019

Conversation

@maaktweluit (Contributor) commented Jul 18, 2019

Compute the new app benchmark if needed. Send the new app benchmark score in WTCT so that it can be used in the new subtask creation flow.
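As a rough illustration of that flow (a hedged sketch only: the exact return shape of get_environment_by_id(), the new_env_manager attribute, env.get_performance(), send_wtct and perf_index are assumptions based on this description, not the PR's actual code):

from twisted.internet import defer

@defer.inlineCallbacks
def request_task(task_server, task_header):
    # The PR extends get_environment_by_id() to tell old from new envs.
    env, is_new_env = task_server.get_environment_by_id(task_header.environment)
    if is_new_env:
        deferred = task_server.new_env_manager.get_performance(env.id)
        if deferred is None:
            return  # benchmark still running; retry on a later cycle
        score = yield deferred  # cached or freshly computed benchmark score
    else:
        score = env.get_performance()
    # Attach the score to the WantToComputeTask (WTCT) message.
    task_server.send_wtct(task_header, perf_index=score)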

Steps taken:

  • Added new env as return value to TaskServer.get_environment_by_id()
  • Added (new) EnvironmentManager.get_performance()
    • Stores performance scores in the same table as old EnvironmentManager
    • Returns a Deferred -> float if the score is cached or the benchmark is started by this call; returns None when a benchmark is already running (see the sketch after these lists)
  • Added performance to CompTaskInfo so it does not have to be calculated after the task is completed
  • Made a task on the clay board for the minimum env performance

Steps left:

  • TODO: Decide on deferToThread or sync_wait for running the benchmark
  • Run unit and integration tests for old code
  • Add unit tests for new code
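A minimal sketch of the get_performance() behaviour listed above, assuming the deferToThread option that is still open under "Steps left"; the in-memory dict stands in for the shared score table, and _run_benchmark is a hypothetical placeholder:

from typing import Dict, Optional, Set

from twisted.internet import defer, threads
from twisted.python.failure import Failure


class EnvironmentManager:
    def __init__(self) -> None:
        self._scores: Dict[str, float] = {}  # cached benchmark scores
        self._running: Set[str] = set()      # env ids with a benchmark in flight

    def get_performance(self, env_id: str) -> Optional[defer.Deferred]:
        if env_id in self._scores:
            # Cached: resolve immediately with the stored score.
            return defer.succeed(self._scores[env_id])
        if env_id in self._running:
            # Benchmark already running: signal the caller to retry later.
            return None
        # Start the benchmark off the reactor thread.
        self._running.add(env_id)
        d = threads.deferToThread(self._run_benchmark, env_id)
        d.addBoth(self._benchmark_finished, env_id)
        return d

    def _benchmark_finished(self, result, env_id: str):
        self._running.discard(env_id)
        if not isinstance(result, Failure):
            self._scores[env_id] = result  # cache the fresh score
        return result  # pass the score (or the Failure) through

    def _run_benchmark(self, env_id: str) -> float:
        raise NotImplementedError(env_id)  # the real app benchmark goes here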

@codecov (bot) commented Jul 23, 2019

Codecov Report

Merging #4501 into develop will decrease coverage by <.01%.
The diff coverage is 91.3%.

@@             Coverage Diff             @@
##           develop    #4501      +/-   ##
===========================================
- Coverage    90.24%   90.24%   -0.01%     
===========================================
  Files          225      225              
  Lines        19667    19727      +60     
===========================================
+ Hits         17748    17802      +54     
- Misses        1919     1925       +6

if isinstance(env, OldEnv):
    return env.get_min_accepted_performance()
# NewEnv
# TODO: Implement minimum performance in new env
Contributor:

Maybe config_desc instead? I feel like min performance is a setting that does not belong to a computation environment.

Contributor Author:

I was thinking this setting belongs to the app, not the env. I've already made an issue on the board to fix this; do you think it is better to use config_desc here instead of 0.0?
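For reference, a hedged sketch of the config_desc fallback under discussion; it swaps the snippet's isinstance(env, OldEnv) check for duck typing so the example stands alone, and min_performance as a config field is an assumption, not an existing setting:

def get_min_performance(env, config_desc) -> float:
    # Old-style envs carry their own minimum.
    if hasattr(env, 'get_min_accepted_performance'):
        return env.get_min_accepted_performance()
    # NewEnv: no per-env minimum yet; fall back to a node-level setting,
    # then to the hard-coded 0.0 default.
    return getattr(config_desc, 'min_performance', 0.0)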

@maaktweluit force-pushed the mwu/new-bench branch 4 times, most recently from 513d3b0 to f9806df on July 25, 2019 12:22
@maaktweluit marked this pull request as ready for review on July 25, 2019 14:38
@maaktweluit force-pushed the mwu/new-bench branch 2 times, most recently from 2b8b5e7 to 5a89cbb on July 25, 2019 15:43
@maaktweluit (Contributor Author) commented:

@Krigpl thanks for the review!

Fixed all the comments and rebased onto the latest develop; please check again.

# Given
env_id = "env1"
env, _ = self.register_env(env_id)
env.get_benchmark = MagicMock(side_effect=Exception)
Contributor:

An alternative way is env.get_benchmark.return_value = Exception

Contributor Author:

Is there any added value to using return_value over side_effect?
I like side_effect for exceptions since it's not really returning but raising.
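For anyone following along, the difference in plain unittest.mock terms (standard-library behaviour, not golem code):

from unittest.mock import MagicMock

bench = MagicMock(side_effect=Exception("boom"))
try:
    bench()  # side_effect: calling the mock raises the exception
except Exception as exc:
    print(exc)  # -> boom

bench = MagicMock(return_value=Exception("boom"))
result = bench()      # return_value: the exception instance is returned
print(type(result))   # -> <class 'Exception'>; nothing is raised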

Commit: Check if run_benchmarks exception is passed properly
@maaktweluit maaktweluit merged commit b6860c3 into develop Jul 29, 2019
@maaktweluit maaktweluit deleted the mwu/new-bench branch July 29, 2019 09:34