ServerStack Scalability Blog

Upgrading Your Managed Server to SSD for Maximum Performance and Cost Savings (January 29, 2013)

The biggest advantage of Solid State Drives is the lack of moving parts compared to traditional hard drives.  This allows the drives to last longer and deliver faster read and write times, which makes them ideal for an enterprise environment where performance and reliability are expected.  SSDs use less than a third of the power of comparable SAS or SATA drives, and promise twice the life expectancy.  Power consumption alone should save you money in the long run.

[Figure: SSD drives]

Let us review one particular server that was upgraded from SAS to SSD drives.  Although SSD drives are more expensive, the cost can be offset by higher productivity, better stability, and faster load times. The best uses for SSD drives are applications that require many reads and writes to disk and low latency, such as a MySQL database.  You will also benefit from using an SSD for disk-based caching: an SSD used for NFS caching with Nginx can serve static content and significantly improve load times on your server.

A slow hard drive increases CPU load (processes pile up waiting on I/O) and can lock up the server during high traffic:

[Figure: SAS drives]

After the upgrade, even the network throughput picked up, because the LAMP stack was able to serve more requests:

[Figure: LAMP stack serving more requests]

What makes a server a great candidate for an SSD upgrade is its current rate of Input/Output Operations Per Second (IOPS). For most SATA drives the upper limit is around 150 IOPS, and for SAS it is around 200 IOPS.  If your server is constantly going above 200 IOPS, you should consider upgrading to SSD drives.
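One quick way to gauge your current IOPS is iostat from the sysstat package (a rough check; the sampling interval and count below are just examples):

# report extended per-device statistics every 5 seconds, three times
iostat -dx 5 3
# r/s + w/s per device approximates current IOPS; a %util near 100
# and growing await values also point to a saturated drive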

Let's review another server with 2x 73GB 2.5” SAS drives in RAID 1:

[Figure: server analytics]

Reviewing the Disk I/O graphs:

[Figure: Disk I/O graph]

With current rates of 167 reads/second and 1,035 writes/second, the combined IOPS (roughly 1,200) far exceeds what the SAS drives can handle.

We can further troubleshoot the cause of the high IOPS by using iotop:

[Figure: iotop showing high I/O]

In this particular case, the right course of action was to find out why qmgr (the Postfix queue manager) was writing so much to disk.  The cause turned out to be deferred messages being logged by Postfix.
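If deferred mail turns out to be the culprit, the queue can be inspected and purged with Postfix's own tools (a general sketch; only purge mail you are sure you can discard):

# show the mail queue; the last line summarizes total size and number of requests
postqueue -p | tail -n 1

# delete all messages in the deferred queue
postsuper -d ALL deferred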

After purging the Postfix queue, the deferred messages stopped, which solved the high I/O issue:

[Figure: high I/O issue resolved]

The IOPS rate was down significantly:

[Figure: IOPS rate dropped]

iotop confirms that the issue has been resolved:

[Figure: iotop after the issue was resolved]

The biggest deterrent to getting SSD drives is price.  Currently, a new Seagate Savvio 10K.3 300GB SAS drive costs approximately $190, and a Seagate Cheetah 15K.7 ST3450857SS 450GB drive costs approximately $220.  Consumer-grade SSD drives like the Crucial m4 CT256M4SSD2 256GB cost $200, and the Crucial m4 CT512M4SSD1 512GB costs $400.  We will stick to comparing enterprise-grade drives, since there is a hidden bonus for going enterprise: power consumption.

Enterprise-grade SSD drives like the Intel 520 480GB (SSDSC2CW480A310) cost $500, and the Samsung 840 Pro 512GB (MZ-7PD512BW) costs $600.

This places the initial costs at $0.48/GB – $0.63/GB for SAS drives, $0.78/GB for consumer grade SSD drives, and $1.04/GB – $1.17/GB for enterprise SSD drives.

Before we dive into the math of how SSD drives save you money, here is a map of average electricity prices per kWh (in cents):

[Figure: map of average electricity prices]

The higher the price of electricity, the quicker SSD drives break even compared to SAS drives.

Comparing a SAS drive to an SSD in terms of power consumption, we have 4.8 kWh/day for the SAS drive and 0.06 kWh/day for the SSD.  That comes out to 1,752 kWh/year for SAS and 21.9 kWh/year for SSD, so at $0.12/kWh you would save about $207/year per drive on power consumption alone.

(For reference: http://www.storageperformance.org/spc-1ce_results/Seagate/e00002_Seagate_Savvio-10K3/e00002_Seagate_Savvio-10K3_SPC1CE-executive-summary.pdf and http://www.storagereview.com/ocz_vertex_4_ssd_review.  Annual energy use in kWh = nominal power in watts × 24 hours × 365 days ÷ 1,000 (i.e., watts × 8.76).  The annual cost is calculated at an average of $0.12/kWh.)
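As a quick sanity check of those figures, here is a small sketch that reproduces them; the wattages are back-calculated from the kWh numbers above rather than taken from a spec sheet:

awk 'BEGIN {
  # annual kWh = watts * 24 * 365 / 1000; cost at $0.12/kWh
  sas_kwh = 200 * 24 * 365 / 1000;   # ~200 W nominal -> 1752 kWh/year
  ssd_kwh = 2.5 * 24 * 365 / 1000;   # ~2.5 W nominal -> ~21.9 kWh/year
  printf "SAS: %.0f kWh/yr ($%.2f)  SSD: %.1f kWh/yr ($%.2f)  savings: $%.2f/yr\n",
         sas_kwh, sas_kwh * 0.12, ssd_kwh, ssd_kwh * 0.12, (sas_kwh - ssd_kwh) * 0.12
}'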

So when would an SSD drive reach the break-even point?  If you purchased 2x 300GB 10,000 RPM SAS drives for your server, you would pay $380 for the drives and about $420/year for electricity.  If you purchased 2x 450GB 15,000 RPM SAS drives, you would pay $440 for the drives and over $420/year for electricity.

Meanwhile, with 2x 480GB Intel 520 drives you would pay $1000 for the drives, and with 2x 512GB Samsung 840 Pros you would pay $1200.  As for power consumption, there is a bonus: the Intel 520 has a 0.85W nominal power draw, would use 7.446 kWh/year, and cost about $0.89 per year; the Samsung 840 Pro has a 0.068W nominal power draw, would use 0.5956 kWh/year, and cost about $0.07 per year.  Even for dual drives, this is still below $2/year in electrical charges.

Given all of these data points, even with dual Samsung 840 Pro SSD drives at $1200 versus dual SAS drives at $440, you are faced with a difference of $760 in initial cost for 2x ~500GB drives.  That $760 up-front premium is quickly eroded by the roughly $420/year electricity bill of the SAS drives ($760 ÷ $420/year ≈ 1.8 years), so after less than two years the SAS option starts costing you more.  In the long run, SSD drives save you money, improve efficiency, and last longer.

Load Distribution with Nginx and Cloudflare (January 21, 2013)

Nginx is a popular reverse proxy application that is very efficient at serving static content and forwarding requests to other webservers.  It can provide a much-needed performance boost for websites with many visitors and lots of static content such as images, videos, and PDF files.  Dynamic content like PHP, Python, or Ruby scripts is passed off to a backend interpreter, usually an Apache webserver, which receives the request for dynamic content such as PHP code and renders it for the user.  When scaling these services, it is important to note that Apache uses a lot of memory to serve such requests, so optimizing content delivery matters.  This is where Nginx is very handy: it serves static content like images very quickly with a minimal memory footprint.  By combining the two you can serve a lot more traffic.

If you use Nginx as a reverse proxy, you can also customize where content is delivered from.  For example, you can serve images from one cluster of servers and videos from another:

[Figure: Nginx and Cloudflare diagram]

This helps to optimally scale your servers and minimize idling.
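Here is a minimal sketch of such a split. The upstream names and backend IPs are hypothetical, and in practice you would add the proxy headers from the configuration shown later in this post:

upstream image_servers { server 10.0.0.11:80; server 10.0.0.12:80; }
upstream video_servers { server 10.0.0.21:80; }
upstream web_servers   { server 10.0.0.31:80; }

server {
    listen 80;
    server_name domain.com;

    # route by URL path: images and videos go to dedicated clusters
    location /images/ { proxy_pass http://image_servers; }
    location /videos/ { proxy_pass http://video_servers; }

    # everything else goes to the general webserver pool
    location / { proxy_pass http://web_servers; }
}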

For our example, suppose we use Nginx on 192.34.56.28.  The DNS record would look like this:

domain.com.            300     IN      A       192.34.56.28

Keeping the TTL to something small like 300 seconds (5 minutes) allows you to scale your infrastructure horizontally fairly quickly, but those public IPs are best reserved for front-facing Nginx proxies.  These proxies, in turn, can have as many webservers upstream as you'd like handling the actual traffic.  This keeps the webservers from being directly exposed to DDoS attacks, and also lets you optimize traffic delivery by routing different content to different destinations.
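You can verify the published record and its TTL with dig (a quick check against the example domain):

dig +noall +answer domain.com A
# expected output: domain.com.  300  IN  A  192.34.56.28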

Here is a snippet of Nginx config for multiple upstream IPs that you can place on the main entry point:

location / {
    proxy_pass         http://LOAD-BALANCED-IPS;
    proxy_redirect     off;
    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
}

upstream LOAD-BALANCED-IPS {
    #LBENTRYPOINT
    server 192.34.56.29:80 max_fails=1 fail_timeout=1;
    server 192.34.56.30:80 max_fails=1 fail_timeout=1;
}

This essentially forwards all requests for domain.com from the Nginx proxy (192.34.56.28) to 192.34.56.29 and 192.34.56.30, evenly distributing requests between these two upstream servers.  The best part about this setup is that if an upstream server is down, Nginx will not send visitors to that server, so they will not see an error page.  Nginx keeps checking whether the server is alive, and once that upstream server is back online, traffic to it resumes.

Placing a tag like “#LBENTRYPOINT” in the config gives a script a known anchor point for inserting or deleting a server line based on the IP address of your webserver.  You can use command-line tools like sed to accomplish this on Linux.

Once we have added our SSH key to the Nginx proxy, we can write a script that inserts a new upstream entry for a webserver with IP 192.34.56.31 on our Nginx proxy (192.34.56.28):

sed -i '/#LBENTRYPOINT/a\server 192.34.56.31:80 max_fails=1 fail_timeout=1;' /etc/nginx/nginx.conf && service nginx reload

This assumes your configuration file is in /etc/nginx/nginx.conf but on Nginx compiled from source this could be in /usr/local/nginx or /usr/share/nginx.  Make sure to tailor it to your own system.
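If you trigger the update from another machine (for example, from the newly cloned webserver) rather than on the proxy itself, the same command can be pushed over SSH; this is a sketch assuming key-based root access to the proxy:

# run the insert remotely on the proxy and reload Nginx
ssh root@192.34.56.28 "sed -i '/#LBENTRYPOINT/a\\server 192.34.56.31:80 max_fails=1 fail_timeout=1;' /etc/nginx/nginx.conf && service nginx reload"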

After the line is inserted, it is also prudent to check for duplicate entries and remove them: load balancing distributes traffic evenly among all ‘server’ entries in the list, so a duplicated entry would send that server a double share of requests.  A quick way to check is shown below.
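# count occurrences of a backend in the config; anything above 1 is a duplicate (hypothetical IP)
grep -c 'server 192\.34\.56\.31:80' /etc/nginx/nginx.conf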

The following command would remove this upstream server (192.34.56.31) from Nginx:

sed -i "/$192.34.56.31/d" /etc/nginx/nginx.conf && service nginx reload

 

With these simple tools you can now automate the process of cloning a VM and placing it into the proxy server's upstream rotation.  This is essentially vertical scaling of the capacity behind a single proxy server.

To add additional proxy servers and scale horizontally, we need a DNS manager with an API toolset.  Cloudflare offers just such a solution.  Click Account and copy your API key.

Cloudflare allows you to modify DNS records with three API commands: rec_new, rec_edit, and rec_delete.  Their documentation covers each in greater detail.

For a quick example, we will create a new subdomain for our images using Cloudflare’s API.  We’ll call this subdomain images.domain.com and give it a 300 second TTL (5 minutes):

[root@web ~]# curl "https://www.cloudflare.com/api_json.html?a=rec_new&tkn=62a946da58115cc89cff61f84b4a6c8f401b3&email=root@domain.com&z=domain.com&type=A&name=images&ttl=300&content=192.34.56.28"

{"request":{"act":"rec_new","a":"rec_new","tkn":"62a946da58115cc89cff61f84b4a6c8f401b3",
"email":"root@domain.com","z":"domain.com","type":"A","name":"images","ttl":"300",
"content":"192.34.56.28"},"response":{"rec":{"obj":{"rec_id":"32696770","rec_tag":
"b469d45498fc38d7792a46bafcff0136","zone_name":"domain.com","name":"images.domain.com",
"display_name":"images","type":"A","prio":null,"content":"192.34.56.28",
"display_content":"192.34.56.28","ttl":"300","ttl_ceil":86400,"ssl_id":null,
"ssl_status":null,"ssl_expires_on":null,"auto_ttl":0,"service_mode":"1",
"props":{"proxiable":1,"cloud_on":1,"cf_open":0,"ssl":0,"expired_ssl":0,
"expiring_ssl":0,"pending_ssl":0,"vanity_lock":0}}}},"result":"success","msg":null}

Adding more proxies is as simple as running the same command with the IP address of the new proxy.  This is an example of round-robin DNS load balancing.  You can also control how requests are handled by using the service_mode parameter: setting service_mode to 1 routes requests through Cloudflare's ‘orange cloud’ of CDN proxies, while setting service_mode to 0 points the A record directly at the IP address you specified.

[Figure: A record proxy setting]

Now we can modify this record and add new Nginx proxies to scale it horizontally.  The entire process can be automated using Bourne shell, PHP, Python, Ruby, and so on.
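For instance, re-running rec_new with a hypothetical second proxy IP adds another A record to the round-robin pool (the API key below is a placeholder):

curl "https://www.cloudflare.com/api_json.html?a=rec_new&tkn=YOUR_API_KEY&email=root@domain.com&z=domain.com&type=A&name=images&ttl=300&content=192.34.56.31"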

Encrypting Sensitive Partitions with dm-crypt and LUKS (November 26, 2012)

Introduction

There are many reasons an individual or organization may need or want to encrypt their data. Unfortunately, encryption can add overhead and slightly degrade performance, depending on the method used. We’ve chosen to highlight block-layer encryption, as it gives the best overall performance among the most commonly used methods.

Note: This example involves encrypting the entire /home partition, which will need to be manually unlocked by the root user on each boot. This is not practical in all multiuser environments, but we assume the administrator has root privileges the entire time the server is operational and that no services depend on /home immediately after boot. You can remedy this by using a key file to encrypt/decrypt the volume and storing it on a USB drive that stays plugged into the machine, or simply by placing it on the root filesystem; a sketch of the key-file approach is shown below. You would then also remove the noauto option from /etc/crypttab to automate mounting of this partition.
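A minimal sketch of that key-file approach, assuming the example path /root/homevol.key and the /dev/sda4 partition used later in this post (run this only after the partition has been LUKS-formatted):

# generate a random 4 KiB key file and lock down its permissions
dd if=/dev/urandom of=/root/homevol.key bs=1024 count=4
chmod 0400 /root/homevol.key

# add the key file to a spare LUKS key slot (you will be asked for an existing passphrase)
cryptsetup luksAddKey /dev/sda4 /root/homevol.key

# then reference it in /etc/crypttab instead of "none" and drop noauto, e.g.:
# homevol /dev/sda4 /root/homevol.key cipher=aes-xts-plain64,hash=sha512,size=512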

 

Installing cryptsetup

On CentOS:

# yum install cryptsetup-luks

On Debian:

# apt-get install cryptsetup

 

Filling the partitions with random data

For this step, we assume you will encrypt the following physical partitions on /dev/sda (your actual partition scheme may vary):

1) /dev/sda2 (swap partition)
2) /dev/sda4 (/home partition – encrypted with passphrase)
3) /dev/sda5 (/tmp partition)

Now, most might assume that encrypting only /home (where the sensitive data resides) would be sufficient. This is not necessarily the case. When you access encrypted data, it is transparently decrypted by the kernel and held in memory or in temporary files (usually under /tmp) on disk. Sometimes the data held in memory can be swapped out to disk whenever the kernel decides to do so. For these reasons, it is very important to ensure that at least /tmp and swap are encrypted as well. In some cases you will need to encrypt /var too (for example, databases storing medical records). In our case, however, we can simply create a symbolic link from /var/tmp to /tmp, as sketched below, to ensure 99% or more of our temporary files always remain encrypted on disk.
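A quick sketch of that symlink, assuming anything currently in /var/tmp can safely be moved aside:

# move any existing contents out of the way, then point /var/tmp at the encrypted /tmp
mv /var/tmp /var/tmp.orig
ln -s /tmp /var/tmp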

WARNING! YOU ARE NOW GOING TO IRREVOCABLY ERASE ALL CONTENTS OF THESE PARTITIONS!

This part has the potential to take a very long time, so feel free to go occupy your time elsewhere for at least the next few hours, unless these partitions are fairly small (less than 50 GB or so).

/dev/sda2 (swap):

# shred -vf -n 1 /dev/sda2

/dev/sda4 (/home):

# shred -vf -n 1 /dev/sda4

/dev/sda5 (/tmp):

# shred -vf -n 1 /dev/sda5

 

Configuring LUKS parameters and formatting partitions

The next step is actually configuring the partitions for use with LUKS and formatting them with your desired filesystem. For our purposes, we will use the aes-xts-plain64 cipher with a 512-bit key size (2 * 256-bit keys) and the sha512 hash algorithm.

/dev/sda2 (swap):

Skip this step

/dev/sda4 (/home):

# cryptsetup -c aes-xts-plain64 -s 512 -h sha512 --iter-time 5000 --use-random --verify-passphrase luksFormat /dev/sda4

 

/dev/sda5 (/tmp):

Skip this step

We’ve enabled the --use-random option here rather than --use-urandom, since /dev/random blocks (to wait for new random data) and will not reuse any previously returned data. This step does not require large amounts of entropy, so it is generally safe to use this option. You will skip this step for the swap and /tmp partitions, as these will be created with randomly generated keys on each boot.

 

Writing /etc/crypttab and /etc/fstab for automatic boot configuration

/etc/crypttab:

# <target name> <source device> <key file> <options>
homevol /dev/sda4 none noauto,cipher=aes-xts-plain64,hash=sha512,size=512
swapvol /dev/sda2 /dev/urandom swap,cipher=aes-xts-plain64,hash=sha512,size=512
tmpvol /dev/sda5 /dev/urandom tmp,cipher=aes-xts-plain64,hash=sha512,size=512

You’ll notice the field names are defined on the first line. The first field is an arbitrary name you choose to represent your device mapping; feel free to pick what you like, but keep it relevant to avoid confusion. The second field names the actual partition on which the real data is kept. The next field is optional and specifies a key file location, in case you use a key file in addition to (or in lieu of) a passphrase. Finally, the last field contains all the options to use when setting up your volumes.

Note: On the homevol line, you’ll notice the noauto option was added. This prevents the boot process from stalling while the machine waits for you to type the passphrase. It does mean you will need to manually mount this encrypted partition after each boot.

/etc/fstab:

/dev/mapper/swapvol     none  swap  defaults,sw 0 0
/dev/mapper/homevol     /home ext4  defaults,noatime,user,nosuid,nodev,noauto 0 0
/dev/mapper/tmpvol      /tmp  ext4  defaults,noatime,nosuid,nodev 0 0

This is a pretty standard fstab, with the only real difference being the device names, which are now /dev/mapper/<your previously chosen target name>.

 

Formatting the partitions

We will now use the luksOpen command from the cryptsetup utility to open the new encrypted volume you created for /home. We will then format the partition in the ext4 filesystem, using the standard Linux utility mkfs.ext4.

/dev/sda4 (/home):

# cryptsetup luksOpen /dev/sda4 homevol

 

The previous command opens the volume (you’ll be prompted for your password) and readies it for our use.

/dev/sda4 (/home):

# mkfs.ext4 /dev/mapper/homevol

 

You’ve just formatted this volume’s filesystem with ext4. To configure your /tmp and swap partitions automatically, you should now perform a reboot. Just about any system out there should automatically load all required modules upon boot when mounting the filesystems. Since you are not encrypting /boot or the root filesystem (/), there is no need to recreate the initial ramdisk (initrd) to include those modules.

After your system has booted again, you should notice the swap and /tmp volumes were automatically configured and are mounted with the desired encryption settings.
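One way to confirm this (a quick check; the exact output will vary by system):

# list active dm-crypt mappings
dmsetup ls --target crypt

# confirm encrypted swap and the /tmp and /home mounts
swapon -s
mount | grep -E '/tmp|/home'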

 

Manually mounting your encrypted /home partition

# cryptsetup luksOpen /dev/sda4 homevol
# mount /dev/mapper/homevol /home

 

You will be prompted for your passphrase after the first command. Once you’ve unlocked the volume, you can proceed to mount the volume as normal with the second command.

 

Backing up the LUKS header from your /home partition

# cryptsetup luksHeaderBackup --header-backup-file /homevol_luksheader /dev/sda4

Change /homevol_luksheader to whatever filename you wish to use. Encrypt this file if desired and copy it to a safe remote location for safekeeping. You never know when you’ll need it.
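Should the LUKS header on disk ever be damaged, it can be restored from that backup (a sketch; this overwrites the existing header, so use it with care):

# cryptsetup luksHeaderRestore --header-backup-file /homevol_luksheader /dev/sda4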

 

Conclusion

In just a small amount of configuration time (excluding the wait for random data), you’ve configured the most important partitions on your system to transparently encrypt and decrypt all data written to and read from them. You’ve made tremendous strides in securing your data, and are just a simple shutdown -h now or poweroff away from turning your precious customer data, secret documents, or maybe even illicit material (shame on you!) into random bits that are useless to anyone without your passphrase or several hundred theoretical years of brute-force key searching.
