
LFI and RFI Attacks

Primary defenses

  • Do not allow any URLs (http, https, and ftp) in input. This is a strong approach that can completely address RFI, but there's ample opportunity for false positives, and it does nothing for LFI. Disabled by default, enabled in paranoid mode. For those who wish to use this rule, we should be able to whitelist (allow URLs in) certain parameter names (and application URLs). A URL-detection sketch appears after this list.

    • False positives: customers providing URLs as input (e.g., What is your blog's address?), return-to URLs, redirectors, RSS code, etc.
  • If URLs are allowed in input, then it is very difficult to distinguish a valid URL from an RFI attack. Some sloppy attacks are easy to catch, but a determined attacker can craft a URL that appears entirely benign. With content redirection (see the Experimental section at the end), it may be possible to inspect the contents of the remote URLs, but that is tricky and icky.

  • Do not allow PHP streams. There is no known legitimate use for these, and thus FPs should be minimal. See the stream-wrapper sketch after this list.

  • Have a reliable way to detect parameters that contain only paths (e.g., not forum posts that include paths).

    • Perform de-obfuscation (will probably need a special transformation function, or tfn) and raise flags for certain relevant aspects (see the path-inspection sketch after this list):

      • Type: Unix/Windows/SMB
      • NUL bytes
      • Path self-references
      • Path back-references
      • Path truncation attacks
  • Do not allow paths in parameters. This is a strong defense, but prone to FPs. Disabled by default, enabled in paranoid mode. For those who wish to use this rule, we should be able to whitelist only certain parameter names.

    • False positives: return-to paths. Some applications legitimately accept filesystem paths (e.g., Tomcat's administrative interface does).
  • Block paths when we're reasonably certain that evasion is taking place (e.g., when a NUL byte is observed).

  • Assemble a list of well-known files requested in LFI/RFI attacks. Match them against input (mass pattern matching against de-obfuscated input will probably work; the path-inspection sketch below includes a short excerpt of such a list).

  • Detect PHP code anywhere in the request, either by looking for <? references, or by looking for code fragments and commonly used functions; a PHP-code sketch follows this list. (PHP code is commonly injected into logs and similar places. If the site is vulnerable to XSS and RFI, the combination of the two can be escalated to "local" code execution.)

  • Detect PHP code in file uploads (without using external helpers), especially PHP code in "images". The same sketch applies here.

  • Create rules for well-known worms and exploits.

  • Assemble lists of vulnerable URLs and parameter names, and increase alerting sensitivity when detected.
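
As a rough illustration of the no-URLs rule above, here is a minimal Python sketch. The parameter names in URL_ALLOWED_PARAMS are hypothetical placeholders, and a real deployment would express this check in the IronBee rule language rather than in Python:

```python
import re

# Hypothetical whitelist of parameter names allowed to carry URLs
# (return-to fields, "what is your blog's address?" fields, and so on).
URL_ALLOWED_PARAMS = {"blog_url", "return_to", "feed_url"}

# Matches the schemes the rule is concerned with: http, https, and ftp.
URL_RE = re.compile(r"\b(?:https?|ftp)://", re.IGNORECASE)

def flag_url_in_input(param_name: str, value: str) -> bool:
    """Return True if a non-whitelisted parameter contains a URL."""
    if param_name.lower() in URL_ALLOWED_PARAMS:
        return False
    return bool(URL_RE.search(value))

# flag_url_in_input("page", "http://evil.example/shell.txt")  -> True
# flag_url_in_input("blog_url", "http://blog.example.com/")   -> False
```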
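
For the PHP stream rule, the check can be a simple pattern match. The wrapper list below is a reasonable starting point rather than an exhaustive one:

```python
import re

# PHP stream wrappers commonly abused in LFI/RFI payloads. php:// covers
# php://input, php://filter, and friends; data:// and expect:// allow
# direct code injection where enabled.
PHP_STREAM_RE = re.compile(r"\b(?:php|data|expect|zip|phar|glob)://",
                           re.IGNORECASE)

def contains_php_stream(value: str) -> bool:
    return bool(PHP_STREAM_RE.search(value))
```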
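
The de-obfuscation and flagging step, combined with the well-known-file matching, might look roughly like this. The decode depth, the truncation-length threshold, and the excerpt of known targets are all assumptions for illustration:

```python
from urllib.parse import unquote

# Short excerpt of files commonly requested in LFI attacks.
KNOWN_TARGETS = ("/etc/passwd", "/proc/self/environ", "boot.ini", "win.ini")

def inspect_path(value: str) -> set:
    """De-obfuscate a parameter value and return the set of raised flags."""
    flags = set()

    # Repeated percent-decoding to defeat double/triple encoding.
    decoded = value
    for _ in range(3):
        previous = decoded
        decoded = unquote(decoded)
        if decoded == previous:
            break

    lowered = decoded.lower()
    if "\x00" in decoded:
        flags.add("nul_byte")             # classic truncation evasion
    if "\\" in decoded:
        flags.add("windows_or_smb_path")  # type: Windows/SMB vs. Unix
    if "/./" in decoded:
        flags.add("path_self_reference")
    if "../" in decoded or "..\\" in decoded:
        flags.add("path_back_reference")
    if len(decoded) > 4096:               # assumed MAXPATHLEN-style limit
        flags.add("possible_path_truncation")
    for target in KNOWN_TARGETS:
        if target in lowered:
            flags.add("known_lfi_target")
    return flags
```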
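
Detecting PHP code in requests and uploads can start from the open tags plus a handful of commonly abused functions. The function list here is a small excerpt, not a tuned rule:

```python
import re

# PHP open tags plus a few functions that show up in injected code.
PHP_CODE_RE = re.compile(
    r"<\?(?:php\b|=)?"
    r"|\b(?:eval|assert|system|passthru|shell_exec|base64_decode)\s*\(",
    re.IGNORECASE,
)

def looks_like_php_code(data: bytes) -> bool:
    """Check a request body or an uploaded file (e.g., an "image") for PHP."""
    text = data.decode("utf-8", errors="replace")
    return bool(PHP_CODE_RE.search(text))
```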

Learning

Note: IronBee does not support learning. This section is here only for completeness.

  • Is parameter a path?

  • Is parameter a URL? Some flexibility is needed here; some people may enter www.example.com and others http://www.example.com. Both variants need to be taken into account when evaluating whether a page normally accepts URLs. See the normalization sketch after this list.
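
Were learning supported, a normalizer along these lines would let www.example.com and http://www.example.com count as the same kind of value. This is only a sketch of the idea:

```python
def normalize_url_for_learning(value: str) -> str:
    """Reduce URL-ish input to a canonical form for per-parameter learning."""
    v = value.strip().lower()
    for scheme in ("http://", "https://", "ftp://"):
        if v.startswith(scheme):
            v = v[len(scheme):]
            break
    return v.rstrip("/")

# normalize_url_for_learning("http://www.example.com/")  -> "www.example.com"
# normalize_url_for_learning("www.example.com")          -> "www.example.com"
```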

Secondary defenses

  • Detect OS commands in input (separate topic).

  • Detect tool usage (separate topic).

  • Detect web server logs in output.

  • Detect PHP session data in output.

  • Detect web shell upload.

  • Detect web shell output.

  • Detect unusually long transactions. This happens when a long-running process is started on the server to do something, or to participate in a botnet. This may require having some sense of what "unusually long" means in the context of the running web site.

  • Keep a list of IP addresses commonly used for attacks.

  • Prevent information leakage (which aids attackers and their tools); see the detection sketch after this list:

    • Detect error messages.

    • Detect phpinfo() content in output.

    • Detect dumps of /proc/self/environ.
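
A response-side check for the leakage items above could scan bodies for recognizable signatures. These two patterns (phpinfo() output and an environment dump) are illustrative guesses, not tuned rules:

```python
import re

LEAK_SIGNATURES = (
    # phpinfo() pages carry a distinctive title.
    re.compile(r"<title>phpinfo\(\)</title>", re.IGNORECASE),
    # A dump of /proc/self/environ shows up as runs of NAME=value pairs.
    re.compile(r"\b(?:DOCUMENT_ROOT|HTTP_USER_AGENT|SCRIPT_FILENAME)=\S*"),
)

def response_leaks_information(body: str) -> bool:
    return any(sig.search(body) for sig in LEAK_SIGNATURES)
```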

Experimental

  • Deactivate URLs by adding something to the beginning (e.g., "URL"). This defense will be effective for applications that only present URLs in the UI, but don't actually use them for anything. At worst, automatic linking may not work correctly. It's a trade-off that may be suitable for some applications. (A rewriting sketch follows this list.)

  • Prefix URLs with a redirection script, and (1) detect when the application is using the URLs and (2) perform content inspection every time. This approach is useful for detecting stepping-stone attacks, because the server(s) on which the application is running are no longer retrieving the content directly. That means that the attacker cannot exploit their potential position of power (access to internal resources). This assumes, of course, that the service performing content inspection is itself properly secured. The disadvantage of this approach is that the attacker can now use the service to submit arbitrary GET requests. That's probably the case with a vulnerable application anyhow, but now it's the WAF that's doing it. PR.
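
Both experimental ideas amount to rewriting URLs in content. Here is a toy version of each, with a hypothetical /redirect endpoint standing in for the inspection service:

```python
import re
from urllib.parse import quote

URL_PATTERN = re.compile(r"\bhttps?://[^\s\"'<>]+")

def deactivate_urls(body: str, prefix: str = "URL ") -> str:
    """Prepend text so URLs no longer auto-link or resolve as-is."""
    return URL_PATTERN.sub(lambda m: prefix + m.group(0), body)

def reroute_urls(body: str, redirector: str = "/redirect?to=") -> str:
    """Send every URL through a (hypothetical) inspecting redirector."""
    return URL_PATTERN.sub(
        lambda m: redirector + quote(m.group(0), safe=""), body
    )
```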
