We use Passenger to run our Rails application, and worker processes are recycled after every 100 requests. Recently we tried client_ruby with DirectFileStore. Each worker process gets its own PID, so depending on server load there can be tens of thousands of files in the Prometheus working directory after a couple of hours or days without an app restart.
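For reference, this is roughly how we wire up the store (the directory path is ours; the `data_store` config API is the one documented in the client_ruby README):

```ruby
# config/initializers/prometheus.rb
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

# Each Passenger worker writes its metrics to per-process files in this
# directory; because workers are recycled every 100 requests, files
# accumulate here over time.
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/prometheus')
```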
With around 50k files in the Prometheus working directory, the /metrics route becomes very slow, with processing times of about 10 seconds, which can lead to Prometheus scraper timeouts.
Is there any workaround for long-running processes with DirectFileStore? One option I can think of is restarting the app every few hours. Instead of restarting the whole application, we could just wipe the Prometheus working directory every few hours, but that looks a bit hacky to me (see the sketch below).
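For what it's worth, here is a sketch of a slightly less blunt version of the wipe: remove only the files belonging to workers that no longer exist. It assumes the per-process files end in `___<pid>.bin`, which matches how DirectFileStore names them in the versions I've looked at (please verify against yours), and `prune_stale_metric_files` is just a name I made up:

```ruby
# Hypothetical cleanup task; run it from cron or a scheduler every few hours.
PROMETHEUS_DIR = '/tmp/prometheus' # assumed path; match your store's :dir

def prune_stale_metric_files(dir = PROMETHEUS_DIR)
  Dir[File.join(dir, '*.bin')].each do |path|
    # Assumption: DirectFileStore file names end in ___<pid>.bin
    pid = path[/___(\d+)\.bin\z/, 1]&.to_i
    next unless pid

    begin
      Process.kill(0, pid) # signal 0 checks existence without sending anything
    rescue Errno::ESRCH
      File.unlink(path)    # owning worker is gone; drop its file
    rescue Errno::EPERM
      # process exists but belongs to another user; leave the file alone
    end
  end
end
```

Note that dropping a dead worker's files also drops its contribution to aggregated counters in the /metrics output, so this has the same data-loss caveat as wiping the whole directory, just applied more selectively.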