Installing Nginx on Ubuntu 10.04 with PHP-FPM
To get myself more fully acquainted with Nginx, I decided to finally take the plunge and move away from Apache. To be clear, I don't have any real problems with Apache: it has treated me very well for years, and it is flexible and very easy to configure. I've also been using Varnish as an HTTP accelerator on this blog for a few months, and since my traffic is mostly anonymous folk viewing content, that setup was more than adequate. But I have also heard great things about Nginx for the past couple of years (I've mentioned it in performance talks in an 'I hear it uses less memory and is more performant than Apache' manner). Recently, my workplace started switching our sites over to an Nginx environment (I didn't install the software and only participated in part of the site configuration), and the sites do seem zippier. My server only has 512 MB of memory, so it would be nice to see just how well Nginx performs under such limitations. As a result, I wanted to know:
- How easy is it to switch from Apache to Nginx?
- What kind of difference is there in performance between Apache and Nginx for regular site needs (with and without an HTTP accelerator)?
I made the switch from MySQL to MariaDB a little while ago and the process was very painless (I haven't run any benchmarks, but MariaDB seems at least a little bit faster). So how difficult could this be?
First, I benchmarked the current site: how it performs strictly on Apache, and then with Varnish as a frontend to Apache. This consisted of two tests.
Test 1: Apache on 1000 requests with 100 concurrent users
shell:~# ab -n 1000 -c 100 http://btmash.com/
Benchmarking btmash.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        Apache/2.2.14
Server Hostname:        btmash.com
Server Port:            80

Document Path:          /
Document Length:        49337 bytes

Concurrency Level:      100
Time taken for tests:   2.564 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      49771000 bytes
HTML transferred:       49337000 bytes
Requests per second:    390.07 [#/sec] (mean)
Time per request:       256.363 [ms] (mean)
Time per request:       2.564 [ms] (mean, across all concurrent requests)
Transfer rate:          18959.27 [Kbytes/sec] received
Not too bad, right? Being able to process 390 requests per second means the site should be pretty darn zippy. So let's go crazy.
Test 2: Apache on 10000 requests with 1000 concurrent users
This was probably excessive but the last set of results looked very promising.
shell:~# ab -n 10000 -c 1000 http://btmash.com/
Benchmarking btmash.com (be patient)
Completed 1000 requests
Completed 2000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 2344 requests completed
Hmm, that didn't work out too well. It just...died after around 2,300 requests.
Test 3: Apache on 10000 requests with 100 concurrent users
This should have worked out well again...except it didn't. Once I started the run, my server slowed to a crawl, and I was barely able to stop the test and restart Apache. I did take a look at the load average afterward.
shell:~# uptime
 18:06:41 up 97 days, 20:24, 1 user, load average: 129.42, 58.16, 22.56
Yikes! As Kevin said on Twitter yesterday:
Wow, a load average of 129...I didn't even know load averages went that high.
Suffice to say...Drupal and Apache didn't play all that well with a high traffic load.
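For context on those numbers: the load average roughly counts processes that are running or waiting to run, so it has to be read against the number of CPU cores. A quick way to put the two side by side (a sketch, assuming a Linux box with coreutils):

```shell
# Compare the 1-minute load average against the number of CPU cores;
# a sustained load well above the core count means work is queueing up
cores=$(nproc)
load=$(cut -d ' ' -f 1 /proc/loadavg)
echo "1-min load ${load} on ${cores} core(s)"
```

On a small VPS with one or two cores, a load of 129 means over a hundred processes were fighting for CPU time at once.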
Test 4: Varnish in front of Apache on 10000 requests with 1000 concurrent users.
Ok, after the colossal failure of the last two tests, there was no reason I should have attempted this kind of test. However, I have been using Varnish for a few months already, and I already knew it could handle this:
shell:~# ab -n 10000 -c 1000 http://btmash.com/
Benchmarking btmash.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        Apache/2.2.14
Server Hostname:        btmash.com
Server Port:            80

Document Path:          /
Document Length:        49337 bytes

Concurrency Level:      1000
Time taken for tests:   2.930 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      498540000 bytes
HTML transferred:       493370000 bytes
Requests per second:    3412.78 [#/sec] (mean)
Time per request:       293.016 [ms] (mean)
Time per request:       0.293 [ms] (mean, across all concurrent requests)
Transfer rate:          166153.26 [Kbytes/sec] received
As you can see, Varnish handled over 3,400 requests per second and finished the whole test in 2.9 seconds, so the move to Varnish was excellent. But keep in mind that Varnish only helps with anonymous / static content requests. Imagine a site where users can log in: those page requests fall back to Apache, and on a big site you run right back into the same issues.
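You can see why in a typical Varnish config: the usual pattern (a sketch of a vcl_recv routine; the "SESS" prefix is Drupal's session cookie naming convention, so adjust for your application) is to bypass the cache whenever a session cookie is present, which sends every logged-in request straight to the backend:

```vcl
# Hypothetical vcl_recv fragment: pass logged-in traffic straight to Apache.
# Drupal session cookies start with "SESS", so any request carrying one
# skips the cache entirely -- Varnish only accelerates anonymous traffic.
sub vcl_recv {
    if (req.http.Cookie ~ "SESS") {
        return (pass);
    }
}
```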
Time for Nginx
So to get Nginx up and running, we need to install two components:
- Nginx itself
- A FastCGI implementation that can spawn and manage PHP processes
In the past, this has been a somewhat painful process (mind you, my experience on this front is with Apache), and it was the primary reason I stayed with Apache and mod_php. This is where PHP-FPM enters. PHP-FPM is another implementation of PHP FastCGI with supposedly better performance (I cannot vouch for that), but it is definitely easy enough to spawn. The best part is that as of November 29, 2011, it became a part of PHP 5.4, and it is also available in 5.3.3. The bad news is that PHP-FPM is not in the Ubuntu repositories (at least, not for 10.04 and lower, which ship 5.3.2). Luckily, the Nginx team has created a Debian/Ubuntu repository which does it all for you (see link). First we need the ability to add additional repositories, via the add-apt-repository command, which comes in the python-software-properties package.
shell:~# sudo apt-get install python-software-properties
shell:~# sudo add-apt-repository ppa:nginx/php5
shell:~# sudo apt-get update
shell:~# sudo apt-get upgrade
This should take care of upgrading your PHP libraries to be PHP-FPM ready. Next, shut down Apache (/etc/init.d/apache2 stop) and install nginx and php-fpm:
shell:~# sudo apt-get install nginx php5-fpm
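Before touching any nginx configuration, it's worth confirming that php-fpm actually came up. A quick check (a sketch; the exact process names and the :9000 default are assumptions based on the stock package settings):

```shell
# Look for php-fpm worker processes; the [p] trick stops grep matching itself
ps aux | grep '[p]hp-fpm' || echo "no php-fpm processes found"

# The php-fpm package defaults to listening on TCP port 9000
netstat -tln 2>/dev/null | grep ':9000' || echo "nothing listening on :9000"
```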
Now it is all installed for you! At this stage you should see several instances of php-fpm running among your background processes, and you should also see port 9000 in use (by the php-fpm package). It's now time to set up your nginx config. Rather than write it out here and have it go stale, I'll point you to the nginx team's own set of configuration suggestions (see link). There were a couple of things I added in my setup, however. First, inside the http {} block of my nginx.conf file, I added the following:
upstream php {
    # This is a comment; the server ignores it.
    # server unix:/tmp/php-cgi.socket;
    server 127.0.0.1:9000;
}
And in the server configuration for each site, I would change:
fastcgi_pass unix:/tmp/php-cgi.socket;
into
fastcgi_pass php;
The reason for this is that if you ever decide to switch PHP over to a Unix socket, you only have to change it in one place: uncomment the unix: line, comment out the IP/port combo, and every site will pick up the new setting once you restart nginx.
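Put together, a minimal server block using that upstream might look like the following (a sketch only: the server_name, document root, and index file are placeholder assumptions, not my actual site config):

```nginx
# Hypothetical /etc/nginx/sites-available/example -- names and paths are
# placeholders; adapt them to your own site.
server {
    listen 80;
    server_name example.com;

    root /var/www/example;
    index index.php;

    location / {
        # Serve the file directly if it exists, else fall back to the
        # front controller (the usual pattern for Drupal clean URLs)
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Points at the "upstream php" block defined in nginx.conf
        fastcgi_pass php;
    }
}
```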
Now we are ready to start testing the new setup.
Test 5: Nginx on 1000 requests with 100 concurrent users
shell:~# ab -n 1000 -c 100 http://btmash.com/
Benchmarking btmash.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        nginx/0.7.65
Server Hostname:        btmash.com
Server Port:            80

Document Path:          /
Document Length:        48881 bytes

Concurrency Level:      100
Time taken for tests:   2.728 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      49334000 bytes
HTML transferred:       48881000 bytes
Requests per second:    366.56 [#/sec] (mean)
Time per request:       272.805 [ms] (mean)
Time per request:       2.728 [ms] (mean, across all concurrent requests)
Transfer rate:          17660.10 [Kbytes/sec] received
A little strange: this first set of numbers seems to imply that Nginx processes slightly fewer requests per second than Apache does. Let's see how it performs under more stressful conditions.
Test 6: Nginx on 10000 requests with 1000 concurrent users
root@li166-63:/etc/nginx/sites-enabled# ab -n 10000 -c 1000 http://btmash.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking btmash.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        nginx/0.7.65
Server Hostname:        btmash.com
Server Port:            80

Document Path:          /
Document Length:        193 bytes

Concurrency Level:      1000
Time taken for tests:   51.494 seconds
Complete requests:      10000
Failed requests:        9539
   (Connect: 0, Receive: 0, Length: 9539, Exceptions: 0)
Write errors:           0
Non-2xx responses:      515
Total transferred:      468114195 bytes
HTML transferred:       463734600 bytes
Requests per second:    194.20 [#/sec] (mean)
Time per request:       5149.447 [ms] (mean)
Time per request:       5.149 [ms] (mean, across all concurrent requests)
Transfer rate:          8877.51 [Kbytes/sec] received
I'm honestly quite impressed. It wasn't the fastest site in the world, and ab flagged most responses as "failed" (those are length mismatches, since ab compares every response body against the first one it received, along with 515 non-2xx responses), but the server actually worked through all 10,000 requests without falling over. I checked the load average after the run and it peaked at 5.6! A massive difference from Apache.
I decided not to include the benchmarks with Varnish in front of Nginx, since those results were very similar to the benchmarks with Varnish in front of Apache (which makes sense, since Varnish is serving the cached content rather than the web server). But in an environment where users log in, using Nginx could make a huge difference in your server setup.
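For anyone who wants to reproduce that last (omitted) setup: the only real change is pointing Varnish's backend at nginx instead of Apache. A sketch, assuming you move nginx to port 8080 so Varnish can own port 80 (both port choices are assumptions, not my exact setup):

```vcl
# Hypothetical backend definition in /etc/varnish/default.vcl:
# Varnish listens on :80 and forwards cache misses to nginx on :8080
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```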