Makefile: added pack and test targets #169
Conversation
- Added deps/readies and system-setup.py to provide standard system setup
- Docker: load all backends
- Tests: load backends upfront to avoid rdb problems
- Removed Python virtual environment setup in Makefile to avoid setup nightmares on various platforms
- Docker: build using debian:buster, then copy binaries into redis:latest
Thank you @rafie. I need to look closely, hopefully by tomorrow. Off the bat, I'm not sure about having readies inline. Do we absolutely need it?
The plan is to keep readies inline across all modules, and later turn it into a git submodule (we still need to polish our submodule workflow). We can, of course, clone/download it for the time being, but on our path to getting things to simply work out of the box, that adds some delay.
Thanks for the clarifications @rafie. I think we should not forcibly install python 2.7 unless we are in an isolated environment (like a container or an enterprise VM). And in any case, getting brew to install python2 on my system as a result of a build step is 100% something I would not find acceptable as a user. It could seriously break module paths and wreck my setup. Two proposals for a solution:
What do you think?
I think we'll have no problem running Readies with Python 3, if it's already available on the system. So we'll need to determine which python the system has, and work with that (and yes, I've noticed that "python" in conda is Python 3 :-)), and remember to limit ourselves to Python 3.5 features.
Hi @rafie, if readies works with Python 3 then I'm good to proceed without bash. I just ask you to make sure it does, and to avoid installing Python 2 automatically. My suggestion: if Python 2.7 or 3.5+ is not available, just fail with an informative message, and keep the py2 install script around only for Docker and Enterprise, but avoid calling it automatically in any other case.
Just to make sure: CentOS 7 doesn't have Python 3 in its standard repo (one has to use the IUS or EPEL repo, or build it from source). Do we want to let the user enable his own Py3?
In that case the user can install Python 2.7, right?
Yes. It will, however, make our automation life a little harder, as we cannot just assume python3 is our python. And we still need Python 3 for the tests. So I guess we're better off installing Python 3 the hard way after all (it's not that hard).
Sounds great. The only recommendation is to still avoid installing it as part of the build.
OK, just clarifying: the system-setup.py script only runs automatically in Docker/CI. For interactive/OSS scenarios and such, users can avoid it entirely and install everything on their own, if they so wish, or use it as mere reference.
Hi Rafi, sorry it took so long to do a proper review.
There are a few comments that I'd like to discuss and that might require some changes. I can help if needed, let me know what you think.
The main points are related to:
- testing on a GPU instance to make sure nothing broke (we really need the GPU CI)
- renaming ENGINE to DEVICE throughout
- rethinking `install-cpu` and `install-gpu`, because it's not right; we need to reconcile this before merging
deps/readies/bin/getpy (outdated):

```sh
[ ! -z $(command -v python3) ] && exit 0

if [ ! -z $(command -v apt-get) ]; then
    show_if_error apt-get -qq update
```
As a "good citizenship" suggestion: in case any system-wide installs are needed, readies should stop and ask "Should I install this and this? [Yn]", and possibly have a `-y` flag that overrides that prompt (for Docker builds etc.).
The system-setup script has a dual functionality: it can set up a machine to successfully build, run, and test the module, and it also provides documentation of what the system requirements are, for a bloke that's vigilant about his system diet.
It is possible to add a safety question, easily overridden in Docker environments (like ASK=0 ./getpy). I'll add that.
There's also the --nop option, which is similar to make's -n.
There is another aspect that I'm considering: having a Python virtual environment for all pip installations. This will probably be added across modules when I integrate it with the build, testing, and packaging process. Nothing is easy in life. :-)
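The safety prompt described above could be sketched roughly like this (a hypothetical illustration, not readies code; the `confirm` helper name is an assumption, and the `ASK` variable follows the `ASK=0 ./getpy` example):

```shell
# confirm: ask before a system-wide install, honoring an ASK=0 override
# for non-interactive (Docker/CI) builds. Hypothetical sketch.
confirm() {
    # $1: description of the pending system-wide install
    if [ "${ASK:-1}" = "0" ]; then
        return 0   # non-interactive: proceed without prompting
    fi
    printf 'Should I install %s? [Yn] ' "$1"
    read -r answer
    case "$answer" in
        n*|N*) return 1 ;;   # anything starting with n/N declines
        *)     return 0 ;;   # default (including plain Enter) is yes
    esac
}

# usage:
#   confirm "python3 and pip" && apt-get install -y python3 python3-pip
```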
deps/readies/bin/python2 (outdated):

```sh
if [ ! -z $(command -v python2) ]; then
    python2 "$@"
else
    python "$@"
```
This doesn't ensure that you're running python2. The `else` clause here could well start python3 if `python` symlinks to it (as is customary in several setups).
Actually, any venv will offer its own version of Python as "python". So it might indeed be better to check for python2.7 before falling back to "python", and then check the Python version and bail out if it's not 2... It gets really comical at this stage. :-)
I actually don't use this script for anything practical; its initial purpose was to deal with systems that offer Python 2 as "python" while not offering a "python2" binary at all (I think some CentOS-es do that).
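A stricter shim along the lines discussed above might look like this (a hypothetical sketch, not the script in this PR; the `find_python2` helper name is an assumption): prefer `python2`, then `python2.7`, and fall back to `python` only after verifying it actually reports major version 2.

```shell
# find_python2: print the name of a verified Python 2 interpreter, or fail.
# Hypothetical sketch of the fallback order suggested in the review.
find_python2() {
    for cmd in python2 python2.7 python; do
        if command -v "$cmd" >/dev/null 2>&1; then
            # Ask the candidate for its major version before trusting it.
            ver=$("$cmd" -c 'import sys; print(sys.version_info[0])' 2>/dev/null)
            if [ "$ver" = "2" ]; then
                echo "$cmd"
                return 0
            fi
        fi
    done
    return 1
}

# usage (in the shim script):
#   py2=$(find_python2) || { echo "error: no Python 2 interpreter found" >&2; exit 1; }
#   exec "$py2" "$@"
```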
deps/readies/paella/setup.py (outdated):

```python
if self.os == 'macosx':
    # this is required because osx pip installs are done with --user
    os.environ["PATH"] = os.environ["PATH"] + ':' + '$HOME/Library/Python/2.7/bin'
```
I'm not sure this would work on macOS. `conda`, for instance, prepends its paths to PATH, so this entry would appear after the conda paths.
Well, this is actually not very relevant to our case, as we decided to avoid python2.
The purpose of this shim was to allow you to execute things that "pip install --user" puts into $HOME/Library/Python/2.7/bin (e.g., virtualenv, pipenv).
get_deps.sh (outdated):

```diff
-PREFIX=${DEPS_DIRECTORY}/install
+PREFIX=${DEPS_DIR}/install-$VARIANT
```
I'm not convinced having `deps/install-cpu` and `deps/install-gpu` is the right thing to do.
If anything, we should keep the deps separated by backend, like

```
deps/install/libtensorflow-cpu/
deps/install/libtensorflow-gpu/
deps/install/libtorch-cpu/
deps/install/libtorch-gpu/
...
```

or actually, with an explicit version:

```
deps/install/libtensorflow-1.12.0-cpu/
deps/install/libtensorflow-1.12.0-gpu/
deps/install/libtorch-1.2-cpu/
deps/install/libtorch-1.2-gpu/
...
```

so that it's easier to know where they are and to link to them from the backends.
Since the device spec (CPU=1/GPU=1) is dominant, it looks easier to collect all dependencies under one encompassing directory. But then again, I'd like to first move to the bin/arch-os-variant form, and continue from there.
What I mean is that it is dominant for each individual backend, not for RedisAI as a whole.
It's really important that we get this right; I don't think we are 100% aligned here (see my previous comment on bin/arch-os-variant).
CMakeLists.txt (outdated):

```diff
@@ -68,7 +70,7 @@ IF (APPLE)
         LINK_FLAGS "-undefined dynamic_lookup")
 ENDIF()

-SET(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/install)
+SET(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/install-${ENGINE})
```
I don't agree with having separate `install-cpu` and `install-gpu`. The `redisai.so` module is independent of the device; only the backends depend on devices. We must make it possible to have multiple backends at the same time (I know this isn't the case for Enterprise, but we need to make this possible for OSS), so we need to differentiate devices at the backends level.
Right.
Actually, I was planning to have the "install" dir live inside bin/arch-os-variant, with "variant" reflecting the device as well.
Another issue: it appears CMake insists on installing every last device dependency on each and every build, which is pretty time-consuming (and will be even worse with GPU devices). We probably need to do something about it.
Hey @rafie, thanks for the replies. I'll get to the others asap (working on updating to ONNXRuntime 0.5 atm).
However, I'd like to comment on this:

> Actually, I was planning to have the "install" dir live inside bin/arch-os-variant, with "variant" reflecting the device as well.

What I meant is that redisai.so must not be tied to a device, because it does not depend on devices. Only backends have a concept of devices. So `redisai.so` should not be contained in a directory with `variant` in it.
I now realize I should have given a better description of the variant concept.
A variant is part of the arch-os-variant triplet, and it describes an arbitrary attribute (actually, a set of attributes) of a build environment. The most natural variant is release/debug. In practice, a variant should reflect a configuration of the build environment that affects the actual compilation of source files, so their corresponding object files will be properly grouped and will not mix.
More examples of variants are some sort of code instrumentation, or a set of C macros that are turned on in a specific build.
Another thing to notice is that a certain variant may apply to one source module and not to another (in order not to create superfluous build processes).
In our case, CPU/GPU may or may not be part of a build variant. This is determined solely by their projection onto different modules: the projection onto the main redisai may be empty, while the projection onto the backend adapters may exist.
Anyway, those projections are to be reflected in the build system, so the location of each product (.so) is determined according to its dependencies. All we need to do is to expose those dependencies.
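For illustration, the arch-os-variant triplet described above could be composed along these lines (a hypothetical sketch; the variable names, component ordering, and defaults are assumptions, not readies' actual code):

```shell
# Compose a bin/arch-os-variant directory name so each build configuration's
# products land in their own directory and never mix. Hypothetical sketch.
ARCH=$(uname -m)                             # e.g. x86_64
OS=$(uname -s | tr '[:upper:]' '[:lower:]')  # e.g. linux, darwin
VARIANT=${VARIANT:-release}                  # e.g. release, debug, or a device-bearing variant
BINDIR="bin/${ARCH}-${OS}-${VARIANT}"
echo "$BINDIR"
```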
Thanks for the clarification, I didn't get it at first.
- Restored README.md
- Moved Makefile and system-setup.py to automation/
- Removed setup and show-setup from Makefile
- Removed LD_LIBRARY_PATH from Makefile
- Moved deps/readies into automation/
- Will provide automation/README.md in a subsequent PR
- Added default value to DEVICE in CMakeLists.txt
* Makefile: added pack and test targets
* Added deps/readies and system-setup.py
* Fixed docker functionality, fixed tests, setup changes
  - Added deps/readies and system-setup.py to provide standard system setup
  - Docker: load all backends
  - Tests: load backends upfront to avoid rdb problems
  - Removed Python virtual environment setup in Makefile to avoid setup nightmares in various platforms
  - Docker: build using debian:buster, then copy binaries into redis:latest
* CircleCI: refactoring
* CircleCI: refactoring #2
* CircleCI: github ssh keys for git lfs
* CircleCI: revert to old image
* CircleCI: disabled cache restore
* CircleCI: added sudo
* CircleCI: fixed git lfs handling
* CircleCI: try redis:5.0.5-buster image once again
* CircleCI: so maybe python:3.7.4-buster
* Moved Paella to Python 3, bumped version to 0.3.2
* Added getpy to readies/paella
* Build fix #1
* Build fix #2
* Dockerfile sh syntax, DEBUG flag in Makefile
* system-setup: added popen.communicate()
* Build: better cpu/gpu separation
* Docker fixes
* Build fix #3
* Build fix #4
* Fixes from review, updated README
* Fixes from review #2
  - Restored README.md
  - Moved Makefile and system-setup.py to automation/
  - Removed setup and show-setup from Makefile
  - Removed LD_LIBRARY_PATH from Makefile
  - Moved deps/readies into automation/
  - Will provice automation/README.md in a subsequent PR
  - Added default value to DEVICE in CMakeLists.txt
* Fixes from review #3
* Fixes from review #4
* Fixes from review #5
* Fixes from review #6
* Fixes from review #7
* CircleCI cache hacking
* CircleCI cache: cancelled fallback