Currently there is no disclaimer about where your data gets sent when using the AI feature.
I think it's important that users are aware that using the AI feature means their code is sent to an external service, rather than being processed locally so that no code ever leaves their machine.
The fact that this isn't mentioned anywhere (as far as I've been able to find, both in the readme and on the website) is a bit odd.
Thanks for highlighting this @xorinzor. Let me rectify this straight away (updating the docs, app copy, etc.).
It was brought up on Discord a few times as well. The app currently uses OpenAI, but we would love to make it pluggable so that people can choose their own endpoint, configure the prompt, etc. According to @pngwn it should even be possible to do this with the newest local models that have recently come out.
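For illustration, a pluggable endpoint could look something like the minimal sketch below. This is not the app's actual API; the names (`AiConfig`, `complete`) and the prompt-template shape are hypothetical, assuming an OpenAI-compatible chat completions server, which is what most local model runtimes expose:

```ts
// Hypothetical sketch of a pluggable AI endpoint configuration.
// AiConfig and complete are illustrative names, not the app's real API.

interface AiConfig {
  endpoint: string;   // any OpenAI-compatible server, e.g. a local model runtime
  apiKey?: string;    // optional; local servers often need no key
  model: string;
  promptTemplate: (code: string) => string; // user-configurable prompt
}

async function complete(cfg: AiConfig, code: string): Promise<string> {
  const res = await fetch(`${cfg.endpoint}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(cfg.apiKey ? { Authorization: `Bearer ${cfg.apiKey}` } : {}),
    },
    body: JSON.stringify({
      model: cfg.model,
      messages: [{ role: "user", content: cfg.promptTemplate(code) }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Pointing the config at a local server would keep code on-machine:
const local: AiConfig = {
  endpoint: "http://localhost:8080",
  model: "local-model",
  promptTemplate: (code) => `Summarise these changes:\n${code}`,
};
```

The key design point is that the endpoint, key, model, and prompt are all user-supplied, so the same code path serves OpenAI, a self-hosted server, or a local model without the app hard-coding any one destination.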
@xorinzor I just merged #2707 which fixes this (and will be making a release momentarily as well).
Separately, but on a related note, there are tickets tracking support for customising the LLM endpoint (#2660) and for customising the prompt (#2624).
Feel free to open a new issue if I have missed something.