About memory usage #406
What is really the problem? Database access is all about building fast-access indexes over tabular data: the more indexes you create, the more memory is used and the faster lookups get. Index size and location depend heavily on your table definitions and the database engine configuration (where we aimed the defaults at low memory usage).
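For illustration, here is a minimal sketch of that trade-off using the standard `@electric-sql/pglite` API (the table and index names are hypothetical):

```typescript
import { PGlite } from "@electric-sql/pglite";

// Purely in-memory instance: tables, rows, and indexes all live in page memory.
const db = new PGlite();

await db.exec(`
  CREATE TABLE items (
    id    serial PRIMARY KEY,  -- the primary key is itself backed by an index
    name  text,
    price numeric
  );
  -- Each extra index speeds up matching queries but grows the memory footprint.
  CREATE INDEX items_name_idx ON items (name);
`);
```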
I guess one aspect here is whether PGlite loads the whole database (indexes and all) into memory, which I believe it does right now, as opposed to somehow keeping data on disk until it is needed to serve a query.
Yes, that is exactly what I want to know: does PGlite load the entire database into memory? That could slow down the whole application when the data volume is large.
I apologize if I didn't articulate my question clearly. In short, I want to know whether PGlite will occupy a large amount of memory when I operate on a database that holds a lot of data (say 200 MB+).
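As a rough way to gauge how much data is in play, you can ask Postgres itself for the database size. A sketch, assuming PGlite supports these standard Postgres size functions:

```typescript
import { PGlite } from "@electric-sql/pglite";

const db = new PGlite(); // in-memory instance for this example

// pg_database_size reports bytes; pg_size_pretty formats them readably.
const res = await db.query(
  "SELECT pg_size_pretty(pg_database_size(current_database())) AS size"
);
console.log("database size:", res.rows[0]);
```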
Sorry, I was not clear either. In the end it all depends on your web host configuration and your tables' indexes, not on the data size. Here's why:

Case 1)

Case 2)

In the "ideal" configuration, "SharedArrayBuffer + Atomics + a block device" (WIP), it depends. The Postgres docs say at least 8-96 MB to be realistic: "Although the minimum required memory for running Postgres is as little as 8MB, there are noticeable improvements in runtimes for the regression tests when expanding memory up to 96MB [...]. The rule is you can never have too much memory."

Apart from the fact that the block device is (very much) WIP, you will still have to configure SharedArrayBuffer/Atomics, so to be practical I'd say a 256-512 MiB usage range is to be expected at first. To be honest, getting into the 128-256 MiB zone will take a lot of work, testing, and time, because the Postgres docs talk about memory used by the database itself, but in the browser the Postgres code, error messages, and the C library/JavaScript runtime needed to run all of that also take up memory.
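A sketch of how you could inspect the engine's memory-related settings from SQL. These are standard Postgres GUCs; whether PGlite's compiled-in defaults can be changed at runtime, and by how much, is an assumption to verify:

```typescript
import { PGlite } from "@electric-sql/pglite";

const db = new PGlite();

// shared_buffers and work_mem are the main Postgres memory knobs;
// PGlite ships with defaults aimed at low memory usage.
console.log(await db.query("SHOW shared_buffers"));
console.log(await db.query("SHOW work_mem"));

// work_mem is a session-settable GUC in stock Postgres, capping
// per-operation sort/hash memory; assumed to be honored here too.
await db.exec("SET work_mem = '4MB'");
```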
I am currently facing a scenario where I perform extensive database operations in the browser and store a large amount of data. Previously, I used @jlongster/sql.js for this purpose. However, I've noticed that memory usage increases with the database size: despite using IndexedDB for storage, a copy of the data seems to be kept in memory. I am urgently seeking an alternative library to address this memory issue. Could you tell me whether PGlite might have the same problem? I would appreciate your insights.
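For reference, PGlite can be pointed at an IndexedDB-backed data directory instead of running purely in memory. A sketch, assuming the documented `idb://` URI scheme (the database name is hypothetical):

```typescript
import { PGlite } from "@electric-sql/pglite";

// Persist the data directory to IndexedDB so data survives page reloads.
// Note: this is persistence, not paging; the working set is still in memory.
const db = new PGlite("idb://my-app-db");

await db.exec("CREATE TABLE IF NOT EXISTS logs (id serial, msg text)");
await db.query("INSERT INTO logs (msg) VALUES ('hello')");
```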