Ensuring we are up to date with x-common #416
The best solution I could find until now is to add the data to `config.json`:

```json
"exercises": [
  {
    "slug": "leap",
    "difficulty": 1,
    "topics": [],
    "x-common-file": "exercises/leap/canonical-data.json",
    "x-common-hash": "cda8f9800a33d997f8c6146a10b8caf66e25ec4b"
  },
  ...
]
```

With this information there, it would be easier to match against x-common:

```
$ git diff cda8f9800a33d997f8c6146a10b8caf66e25ec4b exercises/leap/canonical-data.json
```

The only thing we would have to keep in the test suite are the deviations from x-common. What do you think about it?
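The proposed fields could also be consumed mechanically. As a minimal Python sketch (the field names `x-common-file` and `x-common-hash` follow the proposal above and are not an existing config.json schema), one could build the `git diff` command for each pinned exercise:

```python
import json


def diff_commands(config_text):
    """Build one `git diff` command string per exercise that pins an
    x-common hash. Exercises without the proposed fields are skipped."""
    config = json.loads(config_text)
    return [
        "git diff --name-only {} {}".format(e["x-common-hash"], e["x-common-file"])
        for e in config["exercises"]
        if "x-common-hash" in e
    ]
```

Running the produced commands from an x-common checkout would then list exactly the canonical-data files that changed since the pinned commit.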
As an example, this shameful bash script gets the filenames and hashes from `config.json`. Beware: ugly script ahead!

`checkChanges`:

```bash
#!/usr/bin/env bash
QUERY='.exercises|map(select(has("x-common-file")))|map({"x-common-hash","x-common-file"})|map("git diff --name-only " + ."x-common-hash" + " " + ."x-common-file" + ";") | .[]'
eval $(jq -r "$QUERY" "$@")
```

It must be run from the root of x-common:

```
.../x-common/ $ checkChanges ../xhaskell/config.json
exercises/leap/canonical-data.json
exercises/space-age/canonical-data.json
...
```
It seems good to use a SHA instead of a date: it allows us to check precisely whether the file has changed. That is good.

It may not be necessary to specify the file directly, since they are all named `canonical-data.json`.

If we move to using the config.json file, I assume we would delete the date lines from the individual Tests.hs files? I wouldn't want to have to maintain our data in two places.
In case of a change in the directory structure, we'll have to think about how to deal with that. Considering that most of the JSON files were not changed after the last restructuring, maybe we need to test whether it will work without the file.

Agreed!
I am assuming that if there is a change, all the files change at once (like what happened with canonical-data.json). In this case, all we need to do is change the file our script looks at, I think? I will say that in my current script I make use of

I admit that I'm not thinking very hard about this, because I expect a change in directory structure to be a rare event, and the benefit is that we avoid having to explicitly specify the JSON file. I could be convinced about explicitly specifying it if you could convince me about how it deals with structure changes.
Ok... I think we can take the risk and forget about the filenames for now. 😄
OK, so I guess if we want to move to this scheme, we just need to start by removing all the date lines and filling in commit SHAs. Should we:
The current SHA seems good enough. I don't care too much about what is currently not following x-common; detecting changes from now on is sufficient.
Going to ask real quick: thoughts on placing all the data in config.json instead of the individual test files?

config.json:

Individual test files:

It's probably not that hard to write something to insert in the
Humm... I think that the deviations from

Yeah... it is probably better to put everything in the test suite.
I should probably mention this here while we are thinking about this subject: another alternative is to write test generators for those exercises that have a canonical-data.json. I have to honestly say that my current perceived benefit of writing the test generator is not high enough to justify the costs that I anticipate. I'm willing to have my mind changed on that, of course.
The frequency of updates to exercises in this track is high, but most of the changes are not caused by minor changes in
Might as well link to exercism/problem-specifications#524 so that I can see it from here later. I have no further updates right now. I run that script from time to time when I feel like it, and if I see something interesting I update this track.
If we update all our test files to use the scheme proposed in #511 (comment), then we can use a version comparison to see whether there are new changes. So it could be useful to open a new issue to get every file to display its version.
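The version comparison itself is simple; a minimal Python sketch, where both data shapes (slug-to-version maps) are assumptions for illustration and not an existing track API:

```python
def out_of_date(track_versions, canonical_versions):
    """Return the slugs whose version recorded in the track differs from
    the current version of the canonical data in x-common.

    track_versions: slug -> version string displayed by the test file.
    canonical_versions: slug -> version string in x-common.
    Slugs absent from canonical_versions are ignored."""
    return sorted(
        slug for slug, seen in track_versions.items()
        if canonical_versions.get(slug, seen) != seen
    )
```

Any tooling that can read the version out of each test file and out of x-common could then report exactly the exercises needing attention.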
Consider the technique described in exercism/discussions#133.
When we run the tests, Hspec displays the package version from the
It would be great if we could display the test-suite version at the top line! What about setting the version in the delivered

To make things easier for us, we could use the following versioning:

Old version:

New version:

Starting the SERIAL at

Exercises that do not follow the canonical-data test could use

The important thing about the serial number is that it would have to increase on any change. Instead of increasing by a unit, it could be a date in the format

Using the proposed versioning in

Considering that

The annoying thing with this way of versioning is that we would have to update two files instead of one (

So... what do you think about it, @exercism/haskell?
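The scheme, as I read it, is a canonical-data version plus a track-local SERIAL as a fourth component. A hypothetical Python sketch of the two operations it implies (function names are made up; the `MAJOR.MINOR.PATCH.SERIAL` format is the one proposed above):

```python
def canonical_part(version):
    """The MAJOR.MINOR.PATCH prefix shared with x-common,
    e.g. "1.1.0.4" refers to canonical data version "1.1.0"."""
    return ".".join(version.split(".")[:3])


def bump_serial(version):
    """Increase the track-local SERIAL, which must grow on any change
    to the test suite even when the canonical data is untouched."""
    *base, serial = version.split(".")
    return ".".join(base + [str(int(serial) + 1)])
```

Comparing `canonical_part` of the track version against the version in x-common would then be enough to detect upstream changes.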
Oh, that seems good. The date is not particularly important now that x-common has versions. We could remove it from the test files too. Should we? It would not have been terrible to use YYMMDD in the serial, since typically there is only one trackler update a day, but since the boundary doesn't always fall at midnight (and there are timezone considerations), starting at 1 seems better anyway.
I think that the only thing we need to annotate in the test suites are the divergences from

👍
Currently doing well! We are only out of date on the following (excluding triangle, crypto-square, the three text generation exercises, and the five deprecated exercises):

We might in fact be up to date on bowling; I just didn't check. Roman numerals only needs description updates to be up to date.

I'm a bit too lazy to make list-ops match right now, so I won't. I did verify that we test the same functions, and all cases seem covered well enough.
All exercises in

Is there anything else missing here, or can we consider it solved, @petertseng?
Let's discuss one last thing: how shall we remember to keep up to date?

An acceptable answer could be "we just run whatever tool we have on a manual but periodic basis", which is a simple solution. I'll add it to #470 for now, and if we decide on this solution, we can close this issue.

The other solution is that of exercism/discussions#133, which puts something in the CI to check that any file that gets changed is up to date. This is nonzero extra effort and might still leave an out-of-date exercise undiscovered for a long time if its files never get touched, so we would have to be careful about that.

Another solution is to always check all versions in CI, but I don't like this one, because then a previously passing build may turn into a failing one through no fault of ours.

So if there is no better solution, the best may simply be "we will just run it on a periodic basis". Tell me if there are any better ideas. I have raised the question on the discussions issue, but we will probably make our own decision, since I don't know how many tracks have their exercises versioned (I think Ruby and OCaml) and a script ready to run (all four listed at https://github.com/petertseng/x-common/tree/up-to-date/up-to-date, plus Python).
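The changed-files variant could, for instance, map the paths touched by a push to exercise slugs and check only those. A hypothetical sketch (the `exercises/<slug>/` layout matches this track; the function name is made up):

```python
import re


def touched_exercises(changed_paths):
    """Map changed file paths to the exercise slugs they belong to,
    so a CI check could re-verify only those exercises' versions."""
    slugs = set()
    for path in changed_paths:
        match = re.match(r"exercises/([^/]+)/", path)
        if match:
            slugs.add(match.group(1))
    return sorted(slugs)
```

Paths outside `exercises/` (e.g. README changes) produce no slugs, which is exactly the blind spot discussed above: an exercise whose files never change is never re-checked.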
My understanding is that Travis CI can only check whether a branch is in a working state. From this follow some simple rules:

The build environment,

The situation may not be ideal, but I don't think we should make it worse by adding

Unless I'm completely missing something in exercism/discussions#133, there is only one thing that can be done without creating problems in Travis CI:

This would increase complexity and doesn't seem to save much manual work IMHO, but it is possible.
I agree with all of this. In addition:

It helps make the results of the script accessible to contributors other than ourselves, but I think there is a low chance anyone except us is really interested in it. If it doesn't affect the build result, it may also be easy to miss; some people may not even be aware it's there. An advantage is that it gives us free logs of how up to date we were at any given point in time (as long as Travis CI is willing to keep our build logs), but I haven't seen any demand for that. I also don't want people to get into the habit of starting a new build for the sole purpose of seeing the tool's output (it would probably be better to just run the script).

These factors leave me not that motivated to add it, so I will not for now, though anyone is free to make the PR and we could discuss again. I will leave this as a task to run periodically, in #470.
There is a beta feature in Travis which might help:
https://docs.travis-ci.com/user/cron-jobs/
How should we ensure we're up to date?

One idea is: every time someone makes a change in x-common, that person is expected to file an issue against all tracks that implement the exercise. If we could rely on this happening, we would need to take no further action. However, that is not always adhered to.

We mark dates in our test files; what if we used that to our advantage?
I wrote a short script in exercism/problem-specifications@master...petertseng:up-to-date . Sorry it is not in Haskell, but the idea should be clear:
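The idea, as stated, is to compare the date annotated in each test file against the date of the last change to that exercise's canonical data. A hypothetical Python sketch of just the comparison step (the data shapes are made up for illustration, not the actual script's interface):

```python
from datetime import date


def stale_exercises(annotated, last_changed):
    """annotated: slug -> date recorded in the track's test file.
    last_changed: slug -> date of the latest canonical-data change.
    Returns the slugs whose annotated date predates the canonical data."""
    return sorted(
        slug for slug, seen in annotated.items()
        if slug in last_changed and seen < last_changed[slug]
    )
```

In practice the `last_changed` dates would come from the x-common git history, which is what the linked script extracts.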
In the short term: I will be making some PRs today based on what I've found.

In the long term: which route shall we take?