This repository was archived by the owner on Sep 13, 2023. It is now read-only.

Streamlit serving #512

Merged
merged 9 commits into main from feature/streamlit on Dec 12, 2022

Conversation

mike0sv
Contributor

mike0sv commented Dec 1, 2022

TODOs:

  • multiple methods
  • list model element distillation
  • binary payloads
  • better interface?
  • tests

@mike0sv mike0sv requested a review from a team December 1, 2022 21:33
@mike0sv mike0sv self-assigned this Dec 1, 2022
@mike0sv mike0sv temporarily deployed to internal December 1, 2022 21:33 Inactive
mike0sv
Contributor Author

mike0sv commented Dec 1, 2022

To test you can run:

from mlem.api import serve
from mlem.contrib.streamlit.server import StreamlitServer
from mlem.core.metadata import save


def main():
    # Save a trivial model (x -> x + 1) with sample data so MLEM can infer its interface
    save(lambda x: x + 1, "mdl2", sample_data=0)
    # Serve it with the new Streamlit server
    serve("mdl2", StreamlitServer())


if __name__ == '__main__':
    main()
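
Once it is up, the FastAPI backend that StreamlitServer starts alongside the UI can be smoke-tested directly. A rough sketch (the 8080 port and /predict path match the logs later in this thread; the JSON payload key is an assumption, so check the generated /docs page for the real request schema):

import requests

# Hit the FastAPI backend directly; 8080 and /predict are taken from the logs below.
resp = requests.post(
    "http://0.0.0.0:8080/predict",
    # The payload key is a guess; the /docs page shows the actual schema.
    json={"data": 41},
)
resp.raise_for_status()
print(resp.json())  # the x + 1 model should return 42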

@mike0sv mike0sv temporarily deployed to internal December 2, 2022 13:41 Inactive
@mike0sv mike0sv temporarily deployed to internal December 7, 2022 15:10 — with GitHub Actions Inactive
mike0sv
Contributor Author

mike0sv commented Dec 7, 2022

It was quite a journey, but I finally ran Streamlit + PyTorch in Docker with images:
[screenshot of the running Streamlit app]

Some things I found out: Mac M1 is aarch64, and for some reason the torchvision binaries on PyPI have not shipped the image.so shared library since 0.10 (pytorch/vision#5919), and 0.10 is too old for my example.

I tried building an amd64 image, but the watchdog lib does not work with this setup (it's optional though, so I worked around it with pip uninstall watchdog && touch watchdog.py && PYTHONPATH=. sh run.sh inside the container).

@mike0sv mike0sv temporarily deployed to internal December 7, 2022 20:44 — with GitHub Actions Inactive
@mike0sv mike0sv temporarily deployed to internal December 7, 2022 21:18 — with GitHub Actions Inactive
codecov
bot

codecov bot commented Dec 7, 2022

Codecov Report

Base: 87.24% // Head: 86.70% // Project coverage decreases by 0.54% ⚠️

Coverage data is based on head (58e290b) compared to base (ff340ad).
Patch coverage: 52.59% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #512      +/-   ##
==========================================
- Coverage   87.24%   86.70%   -0.55%     
==========================================
  Files          97       99       +2     
  Lines        8821     8948     +127     
==========================================
+ Hits         7696     7758      +62     
- Misses       1125     1190      +65     
Impacted Files Coverage Δ
mlem/contrib/streamlit/_template.py 0.00% <0.00%> (ø)
mlem/ext.py 88.28% <ø> (ø)
mlem/contrib/streamlit/server.py 70.83% <70.83%> (ø)
mlem/runtime/client.py 84.68% <77.77%> (-0.76%) ⬇️
mlem/core/data_type.py 92.72% <100.00%> (+0.01%) ⬆️
mlem/runtime/server.py 88.53% <100.00%> (+0.22%) ⬆️
mlem/utils/entrypoints.py 86.36% <100.00%> (-0.85%) ⬇️


aguschin
Contributor

aguschin commented Dec 9, 2022

Trying your example:

In [1]: from mlem.api import serve
   ...: from mlem.contrib.streamlit.server import StreamlitServer
   ...: from mlem.core.metadata import save
   ...:
   ...:
   ...: def main():
   ...:     save(lambda x: x + 1, "mdl2", sample_data=0)
   ...:     serve("mdl2", StreamlitServer())
   ...:
   ...:
   ...: if __name__ == '__main__':
   ...:     main()
   ...:
INFO:     Started server process [42653]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO:     127.0.0.1:58074 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO:     127.0.0.1:58074 - "GET /docs HTTP/1.1" 200 OK
INFO:     127.0.0.1:58074 - "GET /openapi.json HTTP/1.1" 200 OK

  You can now view your Streamlit app in your browser.

  URL: http://0.0.0.0:80

  For better performance, install the Watchdog module:

  $ xcode-select --install
  $ pip install watchdog

INFO:     127.0.0.1:58089 - "GET /interface.json HTTP/1.1" 200 OK
INFO:     127.0.0.1:58090 - "GET /interface.json HTTP/1.1" 200 OK
2022-12-09 20:48:06.914 Uncaught app exception
Traceback (most recent call last):
  File "/Users/aguschin/.local/share/virtualenvs/mlem-Utz6DvOn/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script
    exec(code, module.__dict__)
  File "/private/var/folders/tv/l60j0x050p536g3bh8g2w1n80000gn/T/mlem_streamlit_script_psg1c3au/script.py", line 40, in <module>
    augment, arg_model_aug = augment_model(arg_model)
  File "/Users/aguschin/Git/iterative/mlem/mlem/contrib/streamlit/server.py", line 35, in augment_model
    for name, f in model.__fields__.items()
AttributeError: type object 'int' has no attribute '__fields__'
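
For context: the crash above happens because augment_model assumes every method argument type is a pydantic model with __fields__, which is not true for a plain int. A minimal sketch of the kind of guard that would avoid it (not the actual mlem code; the pydantic check and the pass-through fallback are assumptions):

from typing import Any, Callable, Optional, Tuple, Type

from pydantic import BaseModel


def augment_model(model: Type) -> Tuple[Callable[[Any], Any], Optional[Type]]:
    # Primitive argument types (int, str, ...) are not pydantic models and have
    # no __fields__, so pass the value through unchanged.
    if not (isinstance(model, type) and issubclass(model, BaseModel)):
        return lambda x: x, model
    # Pydantic models expose __fields__ and can be unwrapped field by field;
    # the real unwrapping logic is elided here.
    for name, field in model.__fields__.items():
        ...
    return lambda x: x, model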

aguschin
Contributor

aguschin commented Dec 9, 2022

Got a few issues that look the same with different models; will re-check them once you fix this :)

@mike0sv mike0sv temporarily deployed to internal December 9, 2022 14:59 — with GitHub Actions Inactive
aguschin
Contributor

aguschin commented Dec 9, 2022

When a request via Streamlit fails, it prints something like

HTTPError: 500 Server Error: Internal Server Error for url: http://0.0.0.0:8080/predict
Traceback:
File "/Users/aguschin/.local/share/virtualenvs/mlem-Utz6DvOn/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script
    exec(code, module.__dict__)
File "/private/var/folders/tv/l60j0x050p536g3bh8g2w1n80000gn/T/mlem_streamlit_script_v6wsv2qt/script.py", line 64, in <module>
    response = getattr(client, method_name)(
File "/Users/aguschin/Git/iterative/mlem/mlem/runtime/client.py", line 153, in __call__
    out = self.call_method(self.name, data, return_raw)
File "/Users/aguschin/Git/iterative/mlem/mlem/runtime/client.py", line 201, in _call_method
    ret.raise_for_status()
File "/Users/aguschin/.local/share/virtualenvs/mlem-Utz6DvOn/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)

but underneath, FastAPI fails with a different error. This is strange; maybe we will eventually need to either redirect the failures, or use the MLEM model directly instead of forwarding this to FastAPI, which seems more reasonable to me. It should work for now though :)
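
A hedged sketch of how the generated Streamlit script could surface the backend's error instead of a bare 500 (client and method_name mirror the traceback above; the "detail" field is an assumption about the FastAPI error body):

import requests
import streamlit as st


def call_and_report(client, method_name, **request_args):
    # Call the MLEM client method and show backend errors in the Streamlit UI
    # instead of letting the HTTPError bubble up as an uncaught app exception.
    try:
        return getattr(client, method_name)(**request_args)
    except requests.HTTPError as e:
        detail = None
        if e.response is not None:
            try:
                body = e.response.json()
                detail = body.get("detail") if isinstance(body, dict) else body
            except ValueError:
                detail = e.response.text
        st.error(f"Request failed: {detail or e}")
        return None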

@mike0sv mike0sv temporarily deployed to internal December 9, 2022 16:05 — with GitHub Actions Inactive
@mike0sv mike0sv temporarily deployed to internal December 11, 2022 14:37 — with GitHub Actions Inactive
@aguschin aguschin merged commit a891636 into main Dec 12, 2022
@aguschin aguschin deleted the feature/streamlit branch December 12, 2022 10:15