PHP-FPM remote code execution bug exploited in the wild (github.com/neex)
341 points by orangepanda on Oct 27, 2019 | hide | past | favorite | 132 comments


FYI: if you have a NextCloud or Owncloud installation, the recommended nginx configuration is vulnerable [1]

[1] https://nextcloud.com/blog/urgent-security-issue-in-nginx-ph...
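The pattern in question is, roughly, the usual PHP location block. A hedged sketch (upstream socket and paths are placeholders; the split regex is the one from the exploit description):

```nginx
# Sketch of the vulnerable pattern; details vary per setup.
# The split regex can be tricked (via an encoded newline) into leaving
# php-fpm with an empty PATH_INFO, which triggers the bug.
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # placeholder socket path
}
```

Note the absence of any try_files or if (-f ...) existence check; that check is what the fixed configurations add.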


I wish there was a webdav server that wasn't a huge PHP thing and had decent authentication/authorization.

Almost everything has SFTP built in now anyway; it's only a matter of time before OSes other than Linux-based ones integrate it into their shells, and then WebDAV won't matter so much.


Seafile has been working for me as a personal Dropbox replacement, with s3ql for mass storage. It's very light in relation to Nextcloud/Owncloud (a primary criterion for me trying to cheap out on servers), supports WebDAV, role-based access and a bunch of SSO options. The biggest possible drawback I can think of is that it doesn't store files in the plain, so you can't trivially tie in SFTP or serve files from the storage directly.


Re: files in the plain, the server application is bundled with a daemon that allows you to access files locally as a read-only FUSE mount, e.g.:

   seaf-fuse.sh start /mnt/seafile
I find it pretty handy when running a web server on the same host: drop files into a folder on my local machine, and they show up in the web root in seconds!


Do you just want WebDAV and nothing else? There’s plenty of Docker images for that and most of them are just Apache with the relevant plugin and config.


Or something that includes NextCloud or Owncloud even if you do not use them, such as Mailinabox.


Thanks for the link! The example in the link does not contain the

   set $path_info $fastcgi_path_info;
line after the `fastcgi_split_path_info` directive.

My old configuration used the `$fastcgi_path_info`, and the new one uses the `$path_info` variable, so I got the following error while starting nginx:

    nginx: [emerg] unknown "path_info" variable
Might be worth checking out the sample from the Nextcloud Admin Manual[1]

[1]: https://docs.nextcloud.com/server/17/admin_manual/installati...
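For comparison, the Admin Manual sample pairs the split with the set line, roughly like this (an approximate sketch, not a verbatim copy; check the manual for the real block):

```nginx
# Approximate shape of the Nextcloud Admin Manual PHP block:
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;        # save it before try_files resets it
try_files $fastcgi_script_name =404;      # existence check
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;       # pass the saved copy
```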


The production-fpm docker image has not yet received any updates - correct?


This is a case study in why you shouldn't expose your self-hosted services to the internet.


It’s more evidence that you should assume everything is vulnerable and layer protection.

For a home network simple multi-port knocking should be enough (combined with --ctstate NEW even better). If port knocking or SPA is too cumbersome then at least consider limiting access based on GeoIP, block tor exit nodes, etc (ipset is pretty amazing).

This can be applied to any service on your network btw, including Wireguard. I like knowing that a portscan of my network shows nothing open. I don’t end up on a list that gets used in the next ‘spray and pray’ attack.

Disclaimer: I’m not advocating this for serious use due to replay attacks and IP spoofing via a VPS. This is for home network protection (a boring Class C non-target).


How do you port-scan for WireGuard? The handshake is designed not to respond to the opening packet unless the peer already knows your public key.


Right, portscanning won’t work on Wireguard. Port knocking can provide an additional layer of protection against an unknown vulnerability, even for Wireguard. Sorry for the confusion.


Google has gone the opposite direction.

I feel like throwing everything behind a VPN and pretending it is secure is a crutch.

Several famous break-ins over the last ten years have reportedly been on the inside of that wall.

Better to isolate services from each other limiting cross service jumping, than to build security around a single point of failure.


> Better to isolate services from each other limiting cross service jumping, than to build security around a single point of failure

I agree that it is better, but let’s not forget that building security around a single point of failure is still an improvement, that is simultaneously both high and low friction.

Bad: everything exposed to the internet

Good: everything behind a VPN

Best: Every application on its own micro-segment with access control up to the application layer to restrict all forms of access beyond the bare minimum of what is required.

Perfect is the enemy of good.

> Google has gone the opposite direction

Google scale solutions are great for google scale organisations. They don’t always scale down very well.


> Every application on its own micro-segment with access control up to the application layer to restrict all forms of access beyond the bare minimum of what is required.

I hope for the operator team that they have good tool support to help administer all the access controls. Over time and across large organisations there are going to be a lot.

The even larger challenge must be auditing all these access controls. Services change, and if a connection is not required anymore, it should be painless for its operators to get rid of the corresponding access control.


Security is in layers, there's no reason you can't rely on both service and network isolation.

Service isolation alone doesn't help when my private data is potentially exposed by this exploit.


No it's not.


"Argument is an intellectual process. Contradiction is just the automatic gainsaying of anything the other person says."

http://www.montypython.net/scripts/argument.php

https://www.youtube.com/watch?v=ohDB5gbtaEQ&t=74


I have been thinking about this a lot lately. What is the best alternative, only accessing your services through a VPN?


The problem with a VPN is that it makes it much harder to get friends and family to use it. Not to mention if you use the link sharing feature of NextCloud, you can't just give strangers VPN access. I do use WireGuard for accessing services like SSH or NFS from the public internet, but the usability hit is a deal-breaker for my family. Client-side certificates would help solve this problem somewhat (you could whitelist only sharing-links for instance), but now you've hit usability problems again.

I mitigate code execution worries by running all of my services in individual LXD containers. They're all using isolated user namespaces (unique mappings), and are firewalled away from being able to access my internal network. The data is bind-mounted from a ZFS filesystem which is backed up by the host and uploaded to BackBlaze. The containers themselves are also snapshotted by ZFS. Thus, I think the risks of exploits being able to do much damage are greatly reduced.

However, there is still a worry about information disclosure. Yeah, NextCloud can only access the documents it manages -- but some of those documents are somewhat sensitive. I don't know what the ideal solution for this would be (a wholly separate NextCloud instance just for accessing the private stuff? But what if your family needs to access them?). My main worry when hosting NextCloud was that I am entirely trusting the safety of my NextCloud-stored data to an authentication flow that they wrote themselves in PHP (and has had pretty ugly flaws such as silently disabling 2FA or letting you bypass it by clicking "cancel".)


> The problem with a VPN is that it makes it much harder to get friends and family to use it. Not to mention if you use the link sharing feature of NextCloud, you can't just give strangers VPN access.

This is a feature. Besides, you can send friends and family a QR code to connect to your WireGuard VPN. It isn't perfect, but it beats having your personal data stolen.


I don't see how "you cannot use the link sharing feature of NextCloud" is a feature? Seems to be the precise opposite. As for setting everyone else up on the VPN, you could probably get that to work (you'd need to mess with DNS, AllowedIPs, and iptables rules to only allow port 443 access for your family's clients). I might look into that.


It's a security trade-off: if an arbitrary person can't access your Nextcloud instance, neither can an attacker.


Sure (and I agree), but that means it's not a feature. But after reading your earlier comment, I have set nginx to only permit NextCloud traffic if I'm on the local network (I can't block everything because my personal website and Matrix homeserver need to be publicly accessible in order to function, and there's no way in hell I'm hosting my homeserver anywhere other than at home).


I'm currently serving some of my "internal" services (a wiki, a coffee tracker; things like that - nothing fancy) only from a zerotier network my devices can connect to.

Thanks to Let's Encrypt now allowing wildcard TLS certs (they have for some time, I know, but I wanted to do this way before they allowed it), I host the above on a domain which doesn't have a single public-IP DNS entry, yet has a full, proper, browser-approved TLS cert.

IOW, I fire up my zerotier client on my phone, open brave, put the URL in, and off I go. https, and for my eyes only.

It's great!


Note that in any configuration where you end up asking remote DNS servers about some particular name, the operator might well be selling the list of names queried and their answers. This is called "passive DNS"; it is aggregated and then sold on, so it isn't PII by the time it's sold (purchasers can't tell who asked, only what was asked and what the answer was).

Where people set up wildcard DNS, this means passive DNS reveals typos as well as such "hidden" services: wwww.example.com and ddd.example.com are common typos for www, for example, whereas int-test.example.com is perhaps interesting to black hats.


I've transitioned all my self-hosted services to behind a Wireguard VPN. This even includes SSH, so there are no ports exposed except the UDP port for Wireguard.


VPN or SSH tunnels. I just use WireGuard these days.


I hide mine behind letsencrypt: just don't put nextcloud.yourdomain.com, but put it under a path like yourdomain.com/shortPhrase/nextcloud, where shortPhrase is something like noway, pizde, and so on.

Then don't share your links publicly.


How does Let's Encrypt "hide" anything? Quite the contrary—the list of certs granted is publicly available (as it is for all CAs, I believe).


Let's Encrypt does not per se, but TLS does; that's what I mean with letsencrypt. That's why I said don't put a domain name on your nextcloud instance: even if you get a wildcard cert, the domain names are public, and every lookup you do of your subdomain is visible to all ISPs, so even if you call it zyrkon.yourdomain.com someone can still attempt to make requests to it like /index.php?a

Put your services on a shared domain name, only yourdomain.com, and under a sub-path like yourdomain.com/thisISAlmostLikeaPassword/nextcloud. The sub-path is hidden by TLS, unless you make it public by posting it on the internet. Also, if you aren't careful, e.g. using Google "auto-suggest" or just using any Google products, then they will at least know about your path.


> thisISAlmostLikeaPassword

Why not just add real HTTP authentication to the site instead?

One should always be wary of password-like mechanisms like secret paths, secret ports, etc. since none of these things are made to be secret, and could be disclosed by something unforeseen. (Paths, for instance, are saved in your browser history/cache, your HTTP caching proxy, if any, and also in the server’s access logs.)


Of course the site has its normal login/password, for example nextcloud has authentication.

But you see, for what we are discussing here, it could have been exploited even without authenticating, and it would have been easier for scanners to find and exploit if it were on its own domain.

Defense in depth.

For some services, yes I do basic http auth, besides their own shitty auth.


If you're worried about your ISP or people snooping on your traffic, then this scheme can be trivially defeated with a downgrade attack or looking at your address bar.


My ISP cannot "look at my address bar"; you are thinking of Google.

A downgrade attack would not work since I use HTTPS Everywhere, and once my browser has visited the site it refuses to downgrade: that header is set.


TLS hides the path from a potential attacker that could observe traffic. Putting your nextcloud instance on a nonstandard path might help in some cases, but, if I read the issue correctly, not in this one.


I haven't studied the issue, but it requires being able to reach/execute PHP, no?

If the configuration requires the path to get further than a canned reply from nginx (403, 404, static page...), then it should reduce the attack surface a lot. You should not be able to get anywhere near PHP without the path.


Exactly.


Or use a wildcard cert.


Get a wildcard


No, it was definitely not true in the past and is not true now. First, technically there is not much difference between a given app self-hosted by you and one hosted by a company charging you for it, except that in theory they should worry about these things instead of you. In practice, your experience will vary: companies happen to be as vulnerable as you, and for various reasons their reaction time might be longer.

Second, bugs are found every day, and your best bet is to use automatic security updates provided by your distro. Yes, if you host anything, you need to be a bit of a security guy and a small amount of paranoia won't hurt. But to say you must not self-host for security reasons is a gross oversimplification.


> But to say you must not self-host for security reasons is a gross oversimplification.

This is a gross oversimplification and straw man argument. You can self-host securely.


From the CVE:

> Solution

> On October 24, PHP 7.3.11 (current stable) and PHP 7.2.24 (old stable) were released to address this vulnerability along with other scheduled bug fixes. Those using nginx with PHP-FPM are encouraged to upgrade to a patched version as soon as possible.

> If patching is not feasible, the suggested workaround is to include checks to verify whether or not a file exists. This is achieved either by including the try_files directive or using an if statement, such as if (-f $uri).
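Sketched in nginx terms (socket path is a placeholder), the two suggested workarounds look something like this:

```nginx
location ~ \.php$ {
    # Workaround A: only pass requests whose script actually exists
    try_files $uri =404;

    # Workaround B (alternative to A): explicit existence check
    # if (!-f $document_root$fastcgi_script_name) { return 404; }

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # placeholder
}
```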



Hmm, so looking at the exploit and the patch... do I read it right: there is a buffer underflow in php-fpm if the environment variables SCRIPT_FILENAME and PATH_INFO have a state that violates an assumption. And currently a widespread configuration of nginx + php-fpm is set up such that the URL can be sufficiently mangled that nginx sets these parameters in a violating manner.

However, that means anything utilizing php-fpm in this version remains vulnerable, and it's just unknown if or how apache + php-fpm, or other reverse proxies for php-fpm are vulnerable - right?

So while I don't need to panic right now, I'll certainly have to take a look at our setups running php-fpm on monday.


If I read the commit fixing the vulnerability correctly, you need to have the env var PATH_INFO set; SCRIPT_FILENAME is unaffected. I would expect other reverse proxies to strip newline characters and such from the URL, but YMMV. In the bug tracker someone suggested adding a URL rewrite that strips away \n and anything that follows. That might be a viable mitigation for versions no longer patched.
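One way that bug-tracker suggestion might be expressed in nginx (an untested sketch, not an official mitigation; a blunt reject is even simpler than a rewrite):

```nginx
# Reject any request whose raw URI contains an encoded newline
# before it ever reaches fastcgi_split_path_info.
if ($request_uri ~ "%0[aA]") {
    return 403;
}
```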


On Monday?

Assume your systems are compromised and act accordingly.


In this case we're talking about 5 VMs running php-fpm and no nginx in sight, so these VMs aren't immediately affected by said exploit. Also, these VMs only consume some public, unauthenticated APIs of the company and render the content into some pretty HTML. These boxes have no persistence, no access to PII and no access to anything you can't get with curl right now.

Worst case, they can try sending some spam mails or DDoS attacks, in which case the hoster would zero-route/force-stop them as soon as that's detected. And then I'd have to rebuild them, with ~30 minutes waiting for Terraform.

So yes, I'm going to act accordingly. By not bothering on a Sunday, because the systems are properly isolated and there are procedures in place.


Some people choose not to work on weekends. Work/life balance etc.


Sure, but the price of not having 24/7 support may be that instead of applying a patch you get to nuke everything from orbit and rebuild from backups.


Sure, but just because the company didn't want to shell out the dough for 24/7 support doesn't mean that the employees should necessarily take it upon themselves to work during their off time.


It probably comes down to the environment for other work. If the company will "pay the price" then that's okay, but if you will "pay the price" i.e. if a need to nuke everything from orbit and rebuild from backup will simply result in a lot of unpaid overtime for you, that sucks, but in that case you might prefer to do less unpaid work in your off time today instead of more unpaid work in your off time throughout the next week.


Well sure, if restoring from backups means you will be working unpaid overtime then it'd be worth working less overtime to stave off more, but in practice restoring from backups is a time-consuming process for computers, not for people. Realistically though, enjoying your Sunday and digging into work issues on Monday is probably not going to be a big deal.


There's also the small issue that every minute that passes is another potential minute an attacker is stealing sensitive data, PII, and email/IM logs from your company's internal network, and backdooring other servers, installing ransomware, etc. That requires far more than a wipe and restore to deal with, and could potentially result in a massive financial and reputational loss.


The stream of new security threats is never-ending, thus security must be a process that incorporates the reality that employees sometimes have days when they aren't working. Clearly, the OP is not being paid for 24/7 on-call security consulting, so why should they sacrifice their day off to investigate and patch security vulnerabilities?


That seems far-fetched in most sane-ish setups of PHP. The only risk, really, is the app's own data. Which is also where microservices shine: chaining attacks like this is exponentially more difficult then. Unless, of course, your app is a monolith, isn't sandboxed and segregated from the rest of the internal network (i.e. is on the same server), and the rest of the network is very, very vulnerable, so the attacker can chain these exploits just right. The possibility is not that high if you're not a high-profile target, and if you are a high-profile target, well, you should know better than to keep all of your eggs in the same basket. And if it's a shared VPS where such things can actually happen, the hosting provider should take care of it.


I think you very greatly overestimate the typical level of isolation and "sanity" of most setups in general, let alone most PHP setups (which are likely generally much worse than most other setups).


Have more faith in fellow man, brother!


And who has seen proper microservice architecture? I haven't. I only know how it should look.

So you use microservices? Yes! So why can I not find more than one database?


Eh, as long as the database is intact, just nuke the container, rebuild base image and you're free.


Ubuntu has a try_files directive in /etc/nginx/snippets/fastcgi-php.conf that is included by default. It was put there years ago to guard against another problem (also mentioned by OP), but it seems that the try_files directive will block this one, too.

Unfortunately, too many people still copy & paste three-liners from random blogs and call it a day, often overwriting the safe defaults provided by their distro, er, I mean, Debian/Ubuntu. (edit: The RPM world is a whole different beast. When you install typical LEMP components on CentOS 7, both MySQL and Memcached listen on all interfaces by default. Seriously?!)
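From memory, the shipped snippet looks roughly like this (check your own /etc/nginx/snippets/fastcgi-php.conf; the comments and exact lines may differ per release):

```nginx
# Approximate contents of Ubuntu/Debian's snippets/fastcgi-php.conf:
fastcgi_split_path_info ^(.+\.php)(/.+)$;
try_files $fastcgi_script_name =404;   # the check that blocks this exploit
set $path_info $fastcgi_path_info;     # try_files resets $fastcgi_path_info
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;
```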


I confirmed that Ubuntu's default config in /etc/nginx/snippets/fastcgi-php.conf has a try_files directive that prevents this exploit.


CentOS isn't marketed as a desktop distro. The listening default is helpful, and when I switched over to Ubuntu the default of not listening confused me. Not sure I see the benefit... it's like installing Windows but the internet is disabled by default and must be configured manually, or installing another browser and having to configure it manually.


Listening on localhost, or a socket, is a reasonable default. Listening to nothing is annoying, and listening to everything is a terrible idea.

If you're spreading one service across multiple servers, you can spare the few seconds to open up IPs/ports. The default should keep things moderately secure on a single host.


Should probably have specified in the title that it's a PHP-FPM bug, had me worried there.


Ok, we've added that to the title.


For those of you who don't speak Russian, the Russian words for ‘dick’ and ‘cunt’ (the latter also meaning ‘something very bad happening’) are in the title.


I'd say it's in Croatian. In Russian, it's "пизда"


No, that's certainly just transliterated Russian: https://github.com/neex/phuip-fpizdam/blob/d43b788a65f83ba6f... (those literals mean "Fucking: your mom").


Yes! My Grandmother grew up in Belarus, but she knew some Russian words--mostly curses--and taught them to me!


In Romanian as well


How? Romanian is not a Slavic language, and I thought both of these were Slavic-rooted words.


The Romanian language has loads of Slavic words: nevesta, ultia, curva, and so on, thanks to its proximity to Slavic countries.


> If a webserver runs nginx + php-fpm and nginx have a configuration like

And it's the config every blog I've ever seen about nginx + php-fpm says to use. So I think a lot of sites are vulnerable right now.


Well, for what it's worth, I think the best practice was always to test the existence of the PHP script, either with `try_files`, or with `if`, so, if you do that, then you aren't vulnerable, according to the exploit.

E.g., if you follow the "PHP FastCGI Example" from nginx.com, then nginx would protect you from this vulnerability in PHP-FPM:

* http://web.archive.org/web/20150928021324/https://www.nginx....

Here's the current version of the page, which seems to have the same info as the archived one above:

* https://www.nginx.com/resources/wiki/start/topics/examples/p...

(I think it used to be at another URL prior to the involvement of the marketing department in 2015; not sure if it's worth finding at this point, because the bug is not even in nginx in the first place.)


Mailinabox as well.


According to [1], mailinabox seems not to be affected.

[1] https://github.com/mail-in-a-box/mailinabox/issues/1663#issu...


Good news, and good to see them respond so fast as well. I looked through the config files (could not get the exploit to work for some reason) and found the exact offending lines and jumped to the wrong conclusion. Weird how the config appears to have the exact setup that NextCloud has and yet it does not seem to be exploitable. Wonder why that is.


The exploit required a specific combination of software and config lines. MIAB didn't have those lines.

That's not to say another similar exploit might not have worked a different way. Luckily that bug is patched now.


This affects the default documented Nextcloud config. They blogged about it the other day: https://nextcloud.com/blog/urgent-security-issue-in-nginx-ph...


Can someone confirm that it will still take 6 days until Fedora servers get patched, unless I get it from testing? [1] Is this the norm for CVEs? How long does it take other distros to patch?

Also: am I secure if I run PHP 7.2.24, or do I need to change the configs?

[1] https://bodhi.fedoraproject.org/updates/FEDORA-2019-187ae312...


Does anyone know if this exploit has any lasting effects?

    After this, you can start appending ?a=<your command> to all PHP scripts (you may need multiple retries).
I'd love some way to confirm that my mitigations have worked and that I am no longer vulnerable but, y'know, running random slavic exploits against the server seems a bit sketchy.


Didn’t look at the code that closely, but there’s an option to only test the server for the exploit. As long as that is enabled, it doesn’t appear to write any files, etc.

I would say it might be better to just test against a localhost server in a VM to be on the safe side though.


I'm sure a lot of PHP 7.0 installations are still in production and will not receive a patch...


They will receive a patch if they're using it on a Linux distro that is still supported (e.g. Ubuntu 16.04 LTS). How many people actually bother to run apt-get update && apt-get upgrade on their cloud servers or docker images is a different question, though.


Probably a good idea to auto install security updates. At least that’s what I do on my servers.


I think it's been the default for several years now, but I imagine it's not normal for a security update in PHP to restart an nginx process? Maybe it is.


Yup, we run like 50 sites on PHP 5.3 atm and just 5 with 7+. The PHP industry is weird and update-shy in my experience... at least in Europe.


OMG! Don't you ever have problems with those sites? Not just security but speed is also something I'd consider.

It's time to upgrade if you want to stay secure: https://www.php.net/supported-versions.php


If you're using out-of-support version, you should either use a distro that backports patches or contract somebody to do the backports for you. Otherwise you're basically hanging a sign saying "please pwn me" on your site. This is true for any software, not just PHP (for PHP, most security fixes are actually not hard to backport, just somebody has to do it).


The issue is PHP-FPM (FastCGI) only and it's vulnerable from outside only with nginx.

The vast majority of PHP 7.0 installations don't use FastCGI and don't use nginx, but Apache, simply because people used 'apt install php' (or 'yum install php') to install it.

So imho, the impact is very limited.


> The vast majority of PHP 7.0 installations don't use FastCGI and don't use nginx

Do you have a source for this?


A common approach is to serve static files with nginx and use Apache / mod_php to process PHP.

Why are you running php-fpm? Do you need to separate requests' processes? The speed benefits of php-fpm are part of PHP 7, so using mod_php is faster now.


> A common approach is to serve static files with nginx and use Apache / mod_php to process PHP.

Not sure how common that really is; I've personally never set things up like that, and just use nginx + php-fpm, and don't know anyone who still uses Apache with mod_php.


Plenty of stuff still uses it, unfortunately. Performance is pretty janky; I just moved a MediaWiki install from Apache+mod_php to nginx+php-fpm as part of getting the site(s) onto Kubernetes, and it’s tremendously better to work with and uses less memory due to not needing mpm_prefork.


We went from mod_php to php-fpm, but we started moving back to mod_php after PHP 7 came out, given the benchmarks.


That's true for us as well with our legacy applications.

Our newer applications are using litespeed instead, and we've found it to be significantly better. You basically get the features of a nginx + apache + varnish stack in a single easily managed service and with better performance too.


I think it's the default on Plesk installs.


> Why are you running php-fpm?

Because running just nginx is more convenient than nginx + Apache, where Apache is only used for mod_php. For me anyway. (I only use nginx + php-fpm for a Wordpress instance; I have tons of stuff in other languages running on top of nginx too.)


Why are you not running php-fpm with Apache is a more pressing question IMO.


Speed mostly.


Can you elaborate? I've yet to see Apache + mod_php to be capable of coming even close to <anything> + PHP-FPM so I'm really interested in what you guys are doing.


Mod_php was always faster at executing scripts. There is less overhead as you don't have to communicate like you have to with fpm.

For light scripts this is far superior to fpm. On the other hand, always loading PHP does have its downsides too, as memory consumption can get quite high depending on the number of threads.

This was also the reason for the fpm hype a long time ago: don't waste memory on PHP when PHP isn't needed. It had nothing to do with it running PHP faster.

What you should choose depends on your need.


> Mod_php was always faster at executing scripts

I've never, ever witnessed mod_php come close to being fast, let alone faster than PHP-FPM. There's more work to be done in order to prepare everything needed for Apache to pass the data to the PHP executable once it embeds it within its own process. Once opcache is up and running, PHP-FPM blows mod_php away (and there are tools to warm up the cache prior to letting the php-fpm node go live).

> This is was also the reason for the fpm hype a long time ago: don't waste memory on php when php isn't needed

I've been present when the "hype" as you called it hit. It had nothing to do with memory as much as it did with scaling. Added benefit was the ability to have PHP-FPM act as a multiplexer towards certain services (database to name one).

Today, there's no reason to use Apache and mod_php. It's slower and worse by definition. It can't be faster. If you receive results that show it is faster, you're either testing it wrong or your PHP-FPM runs on a raspberry pi.


> Mod_php was always faster at executing scripts. There is less overhead as you don't have to communicate like you have to with fpm.

The "overhead" of communicating via CGI to a PHP process has nothing to do with the speed of execution of the script itself.

> For light scripts this is far superior to fpm. On the other hand, always loading php does have it's downsides too as memory consumption can get quite high depending on the number of threads.

It's not far superior as the "overhead" of CGI is negligible in the real world. Plus you can pool processes for better scaling. Also, if you are using prefork with mod_php (which is the most probable scenario) it means you are forking an entirely new Apache process and not just "loading PHP" with each request.

> This is was also the reason for the fpm hype a long time ago: don't waste memory on php when php isn't needed. It had nothing to do with it running php faster.

It's not hype, because for a long time, mod_php required prefork because it was not thread-safe (even now it's still a pain to manage re-compiling PHP to be thread-safe for mod_php + Apache)...which means you could not take advantage of mpm_event or mpm_worker.


Yes, CGI just allows you to scale in a way mod_php doesn't. Apache & mod_fcgid are a great combo imo.


Have you tried mod_proxy_fcgi? Being able to have similar handling for fpm and other proxied appservers (eg ruby or python stuff) is quite handy.


I was under the impression that a properly tuned mpm_event and fpm has very little difference to mpm_prefork and mod_php. What sort of machines are you running this on and what sort of child proc numbers are you running?


PHP-FPM + Nginx is the standard approach for OwnCloud & NextCloud. I'm sure they're not the only stacks that use that approach.


I have that location block in my nginx config but I also have the following above:

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
Does this mean I'm safe? I'm asking because it's in a separate location block so not sure how this works (thinking that the try_files thing should be in the same location block).


I would suggest having try_files in the .php location too, because it just takes someone sending a request to index.php for nginx to process the request in a potentially vulnerable location block that doesn't have try_files.

You perhaps might not be vulnerable if you use an internal directive inside your .php location block.
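Put together, that might look like the following sketch (socket path is a placeholder):

```nginx
location / {
    try_files $uri $uri/ /index.php?$query_string;
}

location ~ \.php$ {
    # A request sent straight to /foo.php matches this block and skips
    # the check in "location /", so repeat the existence check here.
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # placeholder
}
```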


Luckily, PHP 7.1 still has a month of security fixes left.

This is patched in 7.1.33.


The exploit requires

> fastcgi_split_path_info ^(.+?\.php)(/.*)$;

If I'm not using this feature of PHP, what can I put in this config value to prevent the exploit from working?


>There must be a PATH_INFO variable assignment via statement fastcgi_param PATH_INFO $fastcgi_path_info;. At first, we thought it is always present in the fastcgi_params file, but it's not true.

This is also a precondition, and fortunately it's not included in the standard fastcgi_params file.
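To make the preconditions concrete, the vulnerable pattern needs both directives in the same location block (a sketch; the socket path is an assumption):

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        # this line is NOT in the stock fastcgi_params file; both it
        # and the split regex above must be present for the bug to bite
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }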


I don't think you can fix it there, because you probably need stuff like /foo.php?key=value to work. (Edit: not quite...see comment below)

The try_files config mentioned on the page mitigates the issue.


I don't think this line is required to make foo.php?key=value work, because the query parameters aren't treated as part of the path. It's for URLs like foo.php/bar where you have some path after the PHP file that's passed to the script - this was supported out of the box by Apache mod_php, and there's a fair amount of code out there that relies on it. (Generally because cleaner methods of feeding a set of paths to one PHP script require server configuration that may not be available on cheap shared web hosting.)
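Roughly how the split works on such a URL (annotations are mine, not from the nginx docs):

    # request: /foo.php/bar?key=value
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # $fastcgi_script_name -> /foo.php   (capture group 1)
    # $fastcgi_path_info   -> /bar       (capture group 2)
    # the query string (?key=value) never enters this regex at all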


As good a time as any to be reminded.. if you have to run any kind of PHP app, always keep it in its own VM and preferably with no access to anything except its own databases, and ideally with minimal outbound Internet access.

PHP security has improved markedly over time (especially app security, not just the runtime), but it's still.. well.. stuff like this. This time around I'm lucky that the sole app I run was using Apache


You make it sound as if PHP is particularly bad in this regard. While this issue is nothing to be proud of, the same kinds of issues (and other RCE-causing bugs) are regularly found in many major products and libraries. There's no reason to single out PHP for something that happens everywhere. Good defense-in-depth practices are always a good idea, but there's no need to motivate them by casting PHP as particularly villainous.


App security is still not on a par with other language ecosystems. I don't think, for example, I've heard of a Python-based SQL injection in many years. Stuff like that still crops up regularly in PHP land.


Now you are confusing the security of PHP as a platform with the security of applications written in PHP. Python had 2 RCEs in 2018: https://www.cvedetails.com/vulnerability-list/vendor_id-1021... None in 2019 so far. PHP had none in 2018 and has one in 2019 so far (there's another one in the http module, but it's not part of the core).

> I don't think for example I've heard of a Python based SQL injection in many years. Stuff like that seems to still crop up regularly in PHP land

This is an extremely subjective statement based on what you personally have and haven't heard. As such, it's not verifiable and not useful. What is useful to know is that PHP, like Python, has had database APIs that eliminate injections for decades. And, as in Python, some people ignore them and stuff query params directly into strings. That has nothing to do with anything except those people being ignorant. There are of course far more web apps written in PHP than in Python, so inevitably some of them are crappy. If you run one of them, do take precautionary measures.


> Now you are confusing security of PHP as a platform with security of applications

The parent comment explicitly made this distinction


My job used to include writing PHP webpages that were exposed to the internet, and looking after the webservers they were running on.

I'm not responsible for any public-facing webservers any more. My life is much better.

[Edit: I reckon using a well-designed functional language might reduce the risk - like, PHP was never designed at all, it grew by accretion]


If anyone follows standard "recommended" practice, as in the Nextcloud case, they get burned. This is true for all languages and frameworks.

If you don't practice defense-in-depth, you get rekt eventually.

I just checked my installation; it's safe, since I didn't follow their "recommended" settings at all.

Since it's a PHP app, what I do is put an extra nginx proxy in front of it. nginx1 runs in its own LXC container with seccomp and has not even a /bin/sh in it, only the executables and libc required for nginx; it's purely a network reverse proxy. nginx2 is a dedicated instance per PHP app, in the same kind of stripped-down container, plus a bind-mounted ipc_socket that is also mounted into the php_app container.

So the php_app container and nginx2 share only one ipc.socket bind-mounted from the host. In the PHP app container there is likewise no /bin/sh or anything else, not even a package manager: only what php-fpm requires and the Nextcloud .php scripts. _Nothing_ else.

So even if this exploit had worked, there would still be nothing standard to run in the php_app container (like /bin/ls), except the PHP scripts themselves, though that's bad enough to steal my Nextcloud data.
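For anyone wanting to replicate the shared-socket part, the nginx2 side can be as simple as the following (paths are assumptions; the socket is the bind-mounted ipc.socket described above):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # the unix socket is the only channel shared between the
        # nginx2 container and the php_app container
        fastcgi_pass unix:/shared/ipc.socket;
    }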


The recommended way of installing it uses Apache instead of nginx, which happens not to be vulnerable.

Only if you looked up their instructions for nginx did you get burned.


I didn't realise that the recommendation had gone back the other way again; any good resources discussing current recommendations and the whys?

We have a fair bit of legacy stuff hanging around, all nginx+php, none of which appear to be vulnerable due to consistent use of try_files.



Ah, I misunderstood your comment sorry - I thought you meant recommendations for installing _PHP_ + some web server in general, not nextcloud specifically!


Wow! I'm glad I saw this today. The server we use to host our corporate blog was vulnerable. I updated php-fpm to the latest version and I think I'm OK now.


Funny that PHP 5 is safe from it


Actually no: it's safe from the published exploit, but not from the underlying bug.


Interesting to see the exploit written in Go. Proof, maybe, that Go has finally landed.


I noticed that, too. Including a `go get` instruction to get it, no less.



