At some point we need to set up testing suites for all of the modules, ideally one suite per module. We should start setting up black-box tests as soon as possible, since they will be very useful while implementing the modules (after which we can add white-box tests).
Since a lot of the functionality we will be testing does not give exact results, the tests should measure and record the deviation of each result rather than a simple pass/fail, along with the time taken. If we are profiling test times, we should run the tests on the hardware that will be used at the competition (i.e. the WARG desktop).
This also means the tests have to be geared towards timing: scenarios that may take more or less time, large amounts of data, etc. (see the sketch below).
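As a rough sketch of what a deviation-measuring test could look like (the module interface, file names, and log format here are all hypothetical, since the modules don't exist yet), each test case would time the module under test and log its deviation from the expected result instead of asserting:

```cpp
// Hypothetical sketch: run one test case, then record the deviation from
// the expected result and the elapsed time instead of a bare pass/fail.
#include <chrono>
#include <cmath>
#include <fstream>
#include <string>

// Stand-in for a module under test; the real module would take an image or
// telemetry input and produce some measurable result.
double runModuleUnderTest(const std::string &inputFile) {
    (void)inputFile;
    return 0.98; // placeholder output
}

void runDeviationTest(const std::string &inputFile, double expected,
                      std::ofstream &log) {
    auto start = std::chrono::steady_clock::now();
    double actual = runModuleUnderTest(inputFile);
    auto end = std::chrono::steady_clock::now();

    double deviation = std::fabs(actual - expected);
    auto elapsedMs =
        std::chrono::duration_cast<std::chrono::milliseconds>(end - start)
            .count();

    // Log deviation and timing for later analysis rather than asserting.
    log << inputFile << "," << deviation << "," << elapsedMs << "\n";
}

int main() {
    std::ofstream log("results.csv");
    log << "input,deviation,elapsed_ms\n";
    runDeviationTest("sample_frame.png", 1.0, log);
    return 0;
}
```

The logged CSV could then be compared across nightly runs to catch regressions in either accuracy or speed.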
I haven't yet figured out exactly what the best way to set up the tests is, but CMake (which I'd like to use for building the project) ships with testing support (CTest) that we should look at.
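For reference, CMake's test driver is CTest: tests are registered with `add_test()` and run with the `ctest` command from the build directory. A minimal sketch, assuming a harness like the one above lives in `deviation_tests.cpp` (the project, target, and file names are placeholders):

```cmake
# Minimal CTest wiring sketch; project/target/file names are placeholders.
cmake_minimum_required(VERSION 3.10)
project(warg_cv_tests CXX)

enable_testing()

# Build the deviation-test harness sketched above.
add_executable(deviation_tests deviation_tests.cpp)

# Register it with CTest; run all registered tests with `ctest`.
add_test(NAME deviation_tests COMMAND deviation_tests)
```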
TODO:

- Create a testing program that can handle testing for all the modules (as well as record and document results)
- Set up an environment for the computer vision software, as well as nightly automated testing on the WARG desktop computer
- Write tests for all of the modules