[Feature Request] Add pm.max_memory to PHP-FPM for Graceful Memory-Based Recycling #17661

Open
ethaniel opened this issue Jan 31, 2025 · 1 comment

Description

Summary

I’d like to propose a new configuration directive in PHP-FPM, tentatively named pm.max_memory, which would allow administrators to specify a per-child memory usage limit. Once a PHP-FPM worker process finishes handling a request, if its memory usage exceeds the configured threshold, it would be gracefully recycled before handling another request. This feature would complement existing solutions (pm.max_requests, memory_limit, cgroups) and address use cases where neither request-count-based recycling nor OS-level OOM kills provides the desired behavior.

Motivation and Rationale

  1. Slow Memory Leaks:

    • While memory leaks in core PHP have become less common, it is not unusual for users to run older PHP versions (e.g., 7.4) or to rely on external C extensions that occasionally exhibit leaks. Over multiple requests, a slow leak can cause a worker’s memory footprint to grow steadily until the system becomes strained.
    • A process-level memory cap checked after each request (when no code is running) would help automatically recycle leaky workers before they become too large.
  2. Existing Workarounds:

    • pm.max_requests: Recycles processes based on request count. This helps, but it is a blunt tool: memory issues sometimes manifest after fewer (or many more) requests than expected.
    • memory_limit: Kills an individual script mid-request if it exceeds a certain amount of PHP-allocated memory. However, a memory leak that accumulates across requests might never exceed the per-request memory_limit.
    • cgroups / Docker memory limits: Typically trigger an OOM kill that can occur at any moment, including mid-request. This can disrupt active requests rather than recycling gracefully.
  3. Why a New Setting?:

    • pm.max_memory would allow graceful recycling once the worker has finished a request, preventing mid-execution kills. This behavior is friendlier to users and operationally safer than OOM kills or forcibly lowering pm.max_requests.

Proposed Behavior

  • Directive:

    pm.max_memory = <value>
    • 0 would disable the setting (no memory-based recycling).
    • A positive integer (e.g., in bytes) indicates the per-child memory threshold.
  • Measurement Timing:

    • Check worker memory usage at the end of each request (during request shutdown, before picking up a new request).
    • If usage exceeds pm.max_memory, the process gracefully exits (similar to how it does with pm.max_requests).
  • Memory Metric:

    • Likely the resident set size (RSS) of the process, as commonly displayed by top, ps, or read from /proc/self/statm on Linux. This aligns with what admins typically observe in real-world monitoring.
    • There will be platform-specific differences (e.g., using getrusage() or equivalent APIs on non-Linux OSes).
  • Graceful Behavior:

    • The process only exits after finishing the current request, preventing partial execution or abrupt kills.
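For illustration, a pool configuration using the proposed directive might look like the sketch below. Note that pm.max_memory does not exist in PHP-FPM today; the pool name and all values are examples only:

```ini
; Hypothetical pool configuration illustrating the proposed directive.
[www]
pm = dynamic
pm.max_children = 20
pm.max_requests = 500   ; existing count-based recycling (kept as a backstop)
pm.max_memory = 256M    ; proposed: recycle a worker whose RSS exceeds 256 MB
```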

Benefits

  1. Operational Simplicity: Admins can simply look at top or ps, see typical usage and outliers, and decide on an appropriate memory limit for each pool.
  2. Graceful Recycling: This avoids the downsides of an OOM kill, which can happen mid-request and risk data corruption or incomplete responses.
  3. Better Than pm.max_requests for Certain Leaks: Provides more precise control over memory-related issues, rather than guessing how many requests a leaking script can handle.

Potential Implementation Details

  1. Cross-Platform:
    • On Linux, reading /proc/self/statm is straightforward. Other systems may require different APIs, so this feature might initially be limited to platforms where memory usage can be reliably checked.
  2. Configuration:
    • Default value is 0 (disabled), so existing users are unaffected unless they opt in.
  3. Edge Cases:
    • Processes with spiky memory usage that nonetheless stays within memory_limit per request: by the time such a process finishes a request, its memory usage may have dropped back below the threshold. That is acceptable: if the worker genuinely releases memory by request end, it will not be terminated. We only care about persistent usage that is never freed.

Alternatives Considered

  1. System OOM / cgroups:
    • Not ideal for graceful recycling. OOM kills can occur mid-request and take down the entire process or container.
  2. memory_limit:
    • Only applies to per-request usage inside the PHP memory allocator, not total process memory (including possible leaks in extensions).
  3. External Scripts:
    • While you can have a watchdog script to kill large PHP-FPM workers, that effectively duplicates the same logic in a less integrated and possibly more abrupt way.
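As a sketch of this workaround (the process name, limit, and signal choice are assumptions, not an official tool), an external watchdog might look like the following:

```shell
#!/bin/sh
# Hypothetical watchdog: find workers of a given process name whose RSS
# exceeds a limit (in kB) and ask them to exit via SIGQUIT.

recycle_large_workers() {
    proc_name="$1"   # e.g. php-fpm
    limit_kb="$2"    # e.g. 262144 for 256 MB

    # ps reports RSS in kilobytes; "pid=" and "rss=" suppress headers.
    ps -C "$proc_name" -o pid=,rss= | while read -r pid rss; do
        if [ "$rss" -gt "$limit_kb" ]; then
            # SIGQUIT lets a php-fpm worker finish its current request
            # and exit, but an external polling loop is far coarser than
            # a built-in end-of-request check would be.
            kill -QUIT "$pid"
            echo "recycled $pid (rss ${rss} kB)"
        fi
    done
}

# Example invocation: recycle any php-fpm worker above 256 MB RSS.
recycle_large_workers php-fpm 262144
```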

Conclusion

pm.max_memory could offer a safer, more precise way to handle slow or partial memory leaks without relying on request counts or mid-request kills. Feedback on feasibility, naming, implementation strategies, and any potential pitfalls is greatly appreciated.

Thank you for considering this feature request!

@cmb69 (Member) commented Jan 31, 2025

@bukka added the SAPI: fpm label Feb 2, 2025