Jupyter notebook test #540
Conversation
There are still the following issues:
I think the hardwired NBDIR should not be a problem, since the folder should not move around after it was included in the docs generator (see this comment). I am not quite sure we want to put this into the Bazel testing pipeline, since it is kind of self-contained. We could add it to the Travis pipeline, though. @seanpmorgan Maybe we can add another test after the Bazel test pipeline and before the build. @PhilipMay do you know if this testing environment runs pip?
Thanks @PhilipMay! Could you please add this to our make commands:
https://github.com/tensorflow/addons/blob/master/makefile
We can then add this to the dockerized tests in .travis.yml
tools/ci_testing/examples_test.py
# ==============================================================================
"""Test for example jupyter notebooks."""

import testipynb
We'll need a script to install this dependency during test time. Could you add this to:
https://github.com/tensorflow/addons/blob/master/tools/ci_build/install/ci_requirements.txt
Done, but without specifying a specific version number.
Is that OK?
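For comparison, an unpinned entry keeps CI on the latest release, while a pinned one keeps runs reproducible; the file excerpt below is illustrative, and the version placeholder is not a real release number.

```text
# tools/ci_build/install/ci_requirements.txt (excerpt, illustrative)
testipynb            # unpinned: CI always installs the latest release
# testipynb==X.Y.Z   # pinned alternative for reproducible CI runs
```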
I am done with that.
Yes, it runs pip this way. I uninstalled tensorflow with pip, started
From my point of view this is done. These steps are still missing:
Thanks @PhilipMay! I'm seeing several notebooks fail with DeadKernelError: Kernel died, possibly from a timeout. Could you increase the timeout for the tests and see if that fixes it?
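A cell that merely exceeds the executor's per-cell timeout can surface the same way as a genuine kernel crash, which is why raising the timeout is worth trying first. A stdlib-only sketch of that distinction (the commands and values are illustrative; in the PR itself the fix would be raising the timeout setting that testipynb passes to its notebook executor, assuming it exposes one):

```python
import subprocess


def run_cell(cmd, timeout_s):
    """Run a command like a notebook cell; report timeouts distinctly
    from ordinary failures so the two cases can be told apart."""
    try:
        return subprocess.run(cmd, timeout=timeout_s).returncode
    except subprocess.TimeoutExpired:
        # The process was killed because it ran too long, not because it crashed.
        return "timeout"
```

With a generous timeout a fast cell returns its normal exit code; with a too-small timeout a slow-but-healthy cell is reported as a timeout rather than a crash.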
@seanpmorgan I can see no
@seanpmorgan what do you think the next steps are? Fix the
Hi Philip, could you please verify how you're testing the code? With the current commit I'm getting:
The indentation must be consistent with the other commands. Also, could you please run this in the docker container, so we have consistent environments; that is how it will be run during automated testing. In a non-GPU environment I'm getting a mix of issues related to #532 (which is unrelated to this) and DeadKernel errors. When running in a GPU environment, just to mitigate the #532 issues, I get:
As you can see, a number of imports fail (matplotlib, tensorflow_datasets, numpy). So what we'll need to do is create a separate
We can't merge this until we know that the setup is capable of running these tests.
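Based on the imports reported as failing above, a separate requirements file for the notebook test environment might contain entries like the following; the file name and exact contents are assumptions, not part of this PR.

```text
# tools/ci_build/install/notebook_requirements.txt (hypothetical)
matplotlib
numpy
tensorflow-datasets
```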
This is fixed now.
@seanpmorgan Do you think we could use the Colab notebook container as an environment, since this is the environment people will actually see this in? I am not quite sure if they are public, though. Maybe a Googler can help out? @karmel
I don't quite follow: what Colab nb container are you referring to, exactly?
I always thought that the "standard" Google Colab notebook actually runs in a container on Google's servers. If it does, do you know whether the container is public?
@MarkDaoust @yashk2810 @tomerk, who know more about how the nb tests are run and what the options here might be.
One option: since we are creating a subsite for addons, its notebooks will automatically get tested by the infrastructure we have in place for all the other subsites and TensorFlow. If the notebooks fail, an email will be sent to the addons team to look into what went wrong. For CI testing, I can look into open-sourcing the tooling that we use to test the notebooks. @seanpmorgan @PhilipMay One thing to note for enabling the tests with our infra: do not save the outputs in the notebook, and install whichever version of addons you want to test your notebook with. You can follow the template notebook (https://github.com/tensorflow/docs/blob/master/tools/templates/notebook.ipynb).
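Committing notebooks without saved outputs, as the template requires, can be enforced mechanically; `jupyter nbconvert --clear-output` and the nbstripout tool do this robustly. A minimal stdlib sketch of the same idea, operating on the notebook's JSON:

```python
import json


def strip_outputs(nb_json: str) -> str:
    """Return the notebook JSON with all code-cell outputs removed,
    matching the template's requirement to commit clean notebooks."""
    nb = json.loads(nb_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb)
```

This could run as a pre-commit check so a notebook with saved outputs never reaches the repository in the first place.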
Good news! Thanks, Yash. It sounds like we can share the same notebook testing library (and even workflow) once it is open-sourced. Would you mind telling us the open-sourcing schedule, if possible?
Probably within a few weeks :)
Hi @yashk2810 - what is the status here?
Bumping it again... @yashk2810 @karmel
Hi @PhilipMay, open-sourcing what we use will take some time, since it needs to be decoupled from a lot of things we use and will require some effort. I will triage it and set a priority on it. But you are free to explore any other alternatives you have in mind :) On a positive note: since addons is now a subsite, tensorflow.org/addons, every notebook is already getting tested. If something fails, we will send an email to the appropriate handle.
@seanpmorgan does that mean the issue is redundant and could be closed?
IMO yes, provided that the above system works. @yashk2810 What email are failures sent to?
For me this is 100% OK. Why not? No need to implement something that is already implemented.
Are we sure that "send an email to the appropriate handle" actually works, and that this will always be converted to a ticket here on GitHub?
I don't think it will be converted to a ticket on GitHub. But is there an email alias with all the maintainers of addons that we can forward it to? Then you can monitor that email for failure emails from our side.
I don't know... @seanpmorgan do you?
Also, can it be on the CI level, so we get the alert before merging to master and running nightly builds?
Before we close this PR we should test the mail functionality.
(@yashk2810 -- not sure how these particular tests are set up, but when I previously tried in several ways to get internal testing emails forwarded to external aliases, I failed. We could possibly set up an internal alias with external members or something like that, but note that there may be some hoops to jump through.)
Yup, that's how it works. The email is sent to our internal alias and then we forward it to the appropriate external handle after scrubbing all the internal email IDs. We have been doing this for translations already.
@yashk2810 The email for failing tests is:
Do you have any more information as to when the notebooks are tested? Is this only done once the API docs are regenerated after every release? We'll be releasing 0.7 in the next few days. I can put a broken tutorial on the branch as a test.
Thanks!
Yes, currently. If changes to the notebooks become frequent, I can move it to weekly and eventually nightly.
If you want to test it out, sure. But we pull docs from the master branch, so you won't have to put it on the 0.7 branch :)
Great, thanks for your help @yashk2810! We'll put up a test prior to requesting the next update of the API docs. I did want to confirm that addons docs are pulled from master, though. We only generate API docs on our release branches, so can you confirm that the addons site generates them from the master branch separately from what we do?
So, for the narrative docs (guides, tutorials, etc.) we pull from master. For API docs we use your pip package to generate the API docs. Does that make sense?
Thanks all for the support on this! Closing this PR; I've created a subsequent issue to track this testing:
First version of jupyter notebook test as discussed in #485