Ensuring we are up to date with x-common #416

Closed
petertseng opened this issue Nov 6, 2016 · 25 comments

@petertseng
Member

petertseng commented Nov 6, 2016

How should we ensure we're up to date?

One idea is: every time someone makes a change in x-common, that person is expected to file an issue against all tracks that implement the exercise. If we rely on that happening, then we need to take no further action.

However, that is not always adhered to. We mark dates in our test files; what if we used that to our advantage?

I wrote a short script in exercism/problem-specifications@master...petertseng:up-to-date. Sorry it is not in Haskell, but the idea should be clear (a sketch follows the list):

  • read out the date from our test file
  • ask x-common if that canonical-data.json file has been changed since that date
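
For illustration, a minimal sketch of that check in Haskell, assuming it is run from the root of an x-common clone and that the date has already been read out of the test file (the slug and date in main are just examples; the linked script is the real thing):

import System.Process (readProcess)

-- Ask git whether an exercise's canonical-data.json has commits newer than
-- the date recorded in our test file.
changedSince :: String -> String -> IO Bool
changedSince date slug = do
  let path = "exercises/" ++ slug ++ "/canonical-data.json"
  out <- readProcess "git" ["log", "--oneline", "--since=" ++ date, "--", path] ""
  return (not (null out))

main :: IO ()
main = changedSince "2016-11-06" "leap" >>= print  -- example date and slug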

In the short term: I will be making some PRs today based on what I've found.
In the long term: Which route shall we take?

@rbasso
Contributor

rbasso commented Nov 7, 2016

The best solution I have found so far is to add the data to config.json:

  "exercises": [
    {
      "slug": "leap",
      "difficulty": 1,
      "topics": [
      ],
      "x-common-file": "exercises/leap/canonical-data.json",
      "x-common-hash": "cda8f9800a33d997f8c6146a10b8caf66e25ec4b"
    },
    ...

With this information there, it would be easier to match against x-common:

$ git diff cda8f9800a33d997f8c6146a10b8caf66e25ec4b exercises/leap/canonical-data.json

The only things we would have to keep in the test suite are the deviations from x-common.

What do you think about it?

@rbasso
Contributor

rbasso commented Nov 7, 2016

As an example, this shameful bash script gets the filenames and hashes from config.json and checks which files changed in the repository:

Beware: Ugly script ahead!

checkChanges
#!/usr/bin/env bash

# For every exercise in config.json that has an "x-common-file" entry, build a
# "git diff --name-only <x-common-hash> <x-common-file>;" command, then run them all.
QUERY='.exercises|map(select(has("x-common-file")))|map({"x-common-hash","x-common-file"})|map("git diff --name-only " + ."x-common-hash" + " " + ."x-common-file" + ";") | .[]';

eval $(jq -r "$QUERY" "$@")

It must be run from the root of x-common, passing the config.json file as an argument:

.../x-common/ $
.../x-common/ $ checkChanges ../xhaskell/config.json
exercises/leap/canonical-data.json
exercises/space-age/canonical-data.json
...

@petertseng
Member Author

It seems good to use a SHA instead of a date, since it allows us to check precisely whether the file has changed.

It may not be necessary to specify the file directly, since they are all "exercises/" ++ slug ++ "/canonical-data.json". I would say it only needs to be specified if it breaks the pattern.

If we move to using the config.json file, I assume we would delete the date lines from the individual Tests.hs files? I wouldn't want to have to maintain our data in two places.

@rbasso
Contributor

rbasso commented Nov 7, 2016

It may not be necessary to specify the file directly, since they are all "exercises/" ++ slug ++ "/canonical-data.json". I would say it only needs to be specified if it breaks the pattern.

In case of a change in the directory structure, we'll have to think about how to deal with that. Considering that most of the JSON files were not changed after the last restructuring, maybe we need to test whether it will work without the file.

If we move to using the config.json file, I assume we would delete the date lines from the individual Tests.hs files?

Agreed!

@petertseng
Member Author

In case of a change in the directory structure we'll have to think about how to deal with that.

I am assuming that if there is a change, all the files change at once (like what happened with canonical-data.json). In that case, all we need to do is change the file our script looks at, I think?

I will say that in my current script I make use of git log --follow to follow renames, but since we want to use diff to see whether the file changed, I guess we can't do that.

I admit that I'm not thinking very hard about this, because I expect a change in directory structure to be a rare event, and the benefit is that we avoid having to explicitly specify the JSON file. I could be convinced to specify it explicitly if you could convince me of how that deals with structure changes.

@rbasso
Contributor

rbasso commented Nov 7, 2016

Ok...I think we can take the risk and forget about the filenames for now. 😄

@petertseng
Member Author

OK, so I guess if we want to move to this scheme we just need to start by removing all the date lines and filling in commit SHAs. Should we:

  1. Just use the current HEAD of x-common? The only exercises for which we are out of date currently are word-count and triangle, I think. So we would just use different commits for those two exercises.
  2. Look at the current date of each exercise and try to find a commit near that date? That seems like more work.

@rbasso
Contributor

rbasso commented Nov 7, 2016

The current SHA seems good enough.

I don't care too much about what is currently not following x-common. Detecting changes from now on seems good enough.

@petertseng
Member Author

Going to ask real quick: any thoughts on placing all the data in config.json instead of the individual test files?

config.json:

  • won't get shown to students, so they won't see irrelevant info they don't need to worry about
  • is all in one file, if that helps (does it?)
  • very easy to understand how to parse it (JSON); not clear how to find it if we bury it in the test file (just grep for 40 [0-9a-f] characters in a row?)

individual test files:

  • probably harder to forget to change it
  • if I ask the question "when was accumulate updated?" it's probably easier to understand if I know to look in accumulate's tests, instead of looking for it in JSON

It's probably not that hard to write something to insert the x-common-hash into all exercises right now.
Note that it's not a super high priority for me right now, since looking at the date currently works, and I honestly think that we (mostly you) made sure we were up to date pretty well when doing the hspec rewrites. So I might focus on some other things first.

@rbasso
Contributor

rbasso commented Nov 8, 2016

Hmm... I think that the deviations from x-common should stay in the test suite. It's not a problem if the students look at them; in fact, it could be useful, because a lot of them already know the exercise from other tracks, and they can also become contributors.

Yeah... it is probably better to put everything in the test suite.
I'm not sure about the format and position.

@petertseng
Member Author

I should probably mention this here while we are thinking about this subject: another alternative is to write test generators for those exercises that have a canonical-data.json.

I have to honestly say that my current perceived benefit of writing the test generator is not high enough to justify the costs that I anticipate. I'm willing to have my mind changed on that, of course.

@rbasso
Contributor

rbasso commented Nov 9, 2016

The frequency of updates to exercises in this track is high, but most of the changes are not caused by minor changes in x-common. I think that, if we automate test generation, we would innovate less.

@petertseng
Member Author

Might as well link to exercism/problem-specifications#524 so that I can see it from here later.

I have no further updates right now. I run that script from time to time when I feel like it, and if I see something interesting I update this track.

@petertseng
Member Author

If we update all our test files to use the scheme proposed in #511 (comment):

Adapted from
Source: exercism/x-common/exercises/pangram/canonical-data.json
Version: 1.0.0
Date: 2017-03-28

then we can use a version comparison to see whether there are new changes. So it could be useful to open a new issue to get every file to display its version.
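
For illustration, such a comparison could be a small sketch with Data.Version (the version strings below are placeholders; in practice they would come from the header above and from x-common's canonical-data.json):

import Data.Version (Version, parseVersion)
import Text.ParserCombinators.ReadP (readP_to_S)

-- Parse a dotted version string such as "1.0.0"; Nothing if it is malformed.
parseVer :: String -> Maybe Version
parseVer s = case [v | (v, "") <- readP_to_S parseVersion s] of
  (v : _) -> Just v
  _       -> Nothing

-- An exercise is outdated when x-common's version is strictly newer than ours.
isOutdated :: String -> String -> Maybe Bool
isOutdated trackVersion canonicalVersion =
  (<) <$> parseVer trackVersion <*> parseVer canonicalVersion

main :: IO ()
main = print (isOutdated "1.0.0" "1.1.0")  -- Just True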

@petertseng
Member Author

Consider the technique described in exercism/discussions#133.

@rbasso
Contributor

rbasso commented Apr 14, 2017

When we run the tests, Hspec displays the package version from the package.yaml file, which we don't set at the moment, so the user gets something like this:

leap-0.0.0: test (suite: test)


leap
  isLeapYear
    1996 - leap year
    1997 - standard and odd year
    1998 - standard even year
    1900 - standard nineteenth century
    1800 - standard eighteenth century
    2400 - leap twenty fourth century
    2000 - leap y2k

Finished in 0.0002 seconds
7 examples, 0 failures

It would be great if we could display the test-suite version at the top line!

What about setting the version in the delivered package.yaml as a way to keep track of the canonical data versions?

To make things easier for us, we could use the following versioning: canonicalVersion.SERIAL. For example:

Old version:

  • Old canonical version: 1.2.3
  • Old serial number: 100
  • Old test suite version: 1.2.3.100

New version:

  • New canonical version: 1.2.5
  • New serial number: 100 + 1 = 101
  • New test suite version: 1.2.5.101

Starting the SERIAL at 1 seems reasonable to me.

Exercises that do not follow a canonical-data.json could use 0.1.0.SERIAL, which makes their version greater than the default 0.0.0 but smaller than any canonical-data version, since those start at 1.0.0.

The important thing about the serial number is that it would have to increase on any change. Instead of increasing by one, it could be a date in the format YYYYMMDD, but I'm not sure that would do any good, and two changes on a single day would have the same version.

Using the proposed versioning in package.yaml, the outdated test suites would be the ones where canonicalVersion > testSuiteVersion. One problem with this approach is that the exercises not tracking an existing canonical-data.json would always be flagged as outdated.

Considering that canonical-data.json is JSON and package.yaml is YAML, I guess that writing a small script to compare both repositories wouldn't be too hard, and it seems an interesting exercise with Aeson.
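
A rough sketch of what reading both versions could look like, assuming the aeson and yaml packages, a top-level "version" field in each file, and hypothetical checkout paths (adjust to wherever the two repositories live):

{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (FromJSON (..), eitherDecodeFileStrict, withObject, (.:))
import qualified Data.Yaml as Yaml

-- Both canonical-data.json and the delivered package.yaml carry a "version" field.
newtype Versioned = Versioned String

instance FromJSON Versioned where
  parseJSON = withObject "Versioned" $ \o -> Versioned <$> o .: "version"

main :: IO ()
main = do
  -- Hypothetical paths: an x-common clone and an xhaskell clone side by side.
  canonical <- eitherDecodeFileStrict "x-common/exercises/leap/canonical-data.json"
  suite     <- Yaml.decodeFileEither "xhaskell/exercises/leap/package.yaml"
  case (canonical, suite) of
    (Right (Versioned c), Right (Versioned s)) ->
      putStrLn ("leap: canonical " ++ c ++ ", test suite " ++ s)
    _ -> putStrLn "leap: could not read one of the version fields"

Comparing the two strings (e.g. with Data.Version, as sketched earlier) would then flag the outdated exercises.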

The annoying thing with this way of versioning is that we would have to update two files instead of one (package.yaml and test/Tests.hs), but it seems more similar to how a real application would do it.

So...what do you think about it @exercism/haskell?

@petertseng
Member Author

Oh, that seems good.

The date is not particularly important now that x-common has versions. We could remove it from the test files too. Should we?

It would not have been terrible to use YYMMDD in the serial, since typically there is only one Trackler update a day, but since the boundary doesn't always lie at midnight (and there are timezone considerations), starting at 1 seems better anyway.

@rbasso
Contributor

rbasso commented Apr 20, 2017

The date is not particularly important now that x-common has versions. We could remove it from the test files too. Should we?

I think that the only things we need to annotate in the test suites are the divergences from x-common, and only if the exercise follows a canonical-data.json.

... but since the boundary doesn't always lie on midnight (and timezone considerations) starting at 1 seems better anyway.

👍

@petertseng
Member Author

petertseng commented May 8, 2017

Currently doing well! We are only out of date on the following (excluding triangle, crypto-square, the three text generation exercises, and the five deprecated exercises):

              phone-number: 1.0.0.2 -> 1.2.0
                  list-ops: 0.1.0.1 -> 1.0.0
                      luhn: 0.9.0.1 # 2016-08-04 -> 1.0.0
                       pov: 1.0.0.2 -> 1.1.1
            roman-numerals: 0.9.0.1 # 2016-07-26 -> 1.0.0
                      leap: 0.9.0.1 # 2016-07-27 -> 1.0.0
                   bowling: 0.9.0.1 # 2016-11-20 -> 1.0.0

We might in fact be up to date on bowling; I just didn't check.

roman-numerals only needs its descriptions updated to be up to date.

@petertseng
Member Author

I'm a bit too lazy to make list-ops match right now, so I won't. I did verify that we test the same functions and all cases seem covered well enough.

@rbasso
Contributor

rbasso commented May 16, 2017

All exercises in xhaskell and files in x-common are now versioned, and we also have some tools to check for updates.

Is there anything else missing here, or can we consider this solved, @petertseng?

@petertseng
Member Author

petertseng commented May 16, 2017

Let's discuss one last thing: How shall we remember to keep up to date?

An acceptable answer could be "We just run whatever tool we have on a manual but periodic basis", which is a simple solution. I'll add it to #470 for now, and if we decide on this solution, we can close this issue.

The other solution is that of exercism/discussions#133, which puts something in CI to check that any file that gets changed is up to date. This is nonzero extra effort, and it might still leave an out-of-date exercise undiscovered for a long time if its files never get touched. So we would have to be careful about that.

Another solution is to always check all versions in CI, but I don't like this solution because then a previously passing build may turn into a failing one through no fault of our own.

So if there is no better solution, the best may simply be "we will just run it on a periodic basis". Tell me if there are any better ideas.

I have raised the question on the discussions issue, but we will probably make our own decision, since I don't know how many tracks have their exercises versioned (I think Ruby and OCaml) and a script ready to run (all four listed at https://github.com/petertseng/x-common/tree/up-to-date/up-to-date, plus Python).

@rbasso
Contributor

rbasso commented May 16, 2017

My understanding is that Travis-CI can only check whether a branch is in a working state. From this follow some simple rules:

  1. It shouldn't depend on anything except the repository.
  2. It shouldn't depend on past versions of the repository.

The build environment, configlet, and stack are mostly unavoidable exceptions to the first rule, but hopefully they'll change little. The cache is a huge exception to the second, so I try to clear it from time to time, just to be sure that everything builds from scratch.

The situation may not be ideal, but I don't think we should make it worse by adding x-common as an additional source of surprises.

Unless I'm completely missing something in exercism/discussions#133, there is only one thing that can be done without creating problems in Travis-CI:

  • Run the report to see what exercises are outdated, without affecting the build result.

This would increase complexity and doesn't seem to save much manual work IMHO, but it is possible.

@petertseng
Member Author

  • Run the report to see what exercises are outdated, without affecting the build result.

This would increase complexity and doesn't seem to save much manual work IMHO, but it is possible.

I agree with all of this. In addition:

It helps make the results of the script accessible to contributors other than ourselves, but I think there is a low chance that anyone except us is really interested in it.

If it doesn't affect the build result it may also be easy to miss; some people may not even be aware it's there.

An advantage is that it gives us free logs of how up to date we were at any given point in time (as long as Travis CI is willing to keep our build logs), but I haven't seen any demand for that.

I also don't want people to get into the practice of starting a new build for the sole purpose of seeing the tool's output (it would probably be better to just run the script).

These factors leave me not very motivated to add it, so I will not for now, though anyone is free to make the PR and we can discuss it again.

So I will leave this as a task to run periodically, in #470.

@NobbZ
Member

NobbZ commented May 16, 2017 via email
