Web Site Performance Monitoring Tools: Load Testing

Determining the performance of your web sites and web applications is important. To create sites users want to visit, you need compelling content, but you also need to provide the performance users expect.

One of the wonderful aspects of creating web applications is the availability of software-as-a-service solutions to meet your needs. These include monitoring tools and tools for testing the performance of your sites.

[Image: LoadView pricing - $9.99/month plus the cost of each load test]

In this post I look at load testing using the LoadView Stress Testing service. Tests are not cheap: the cost is $9.99 a month for an account plus the cost of each stress test you run. The next image shows the breakdown of a small stress test on this blog, which cost $8.25 to run.

[Image: cost breakdown of a small load test - $8.25 total]


Locking and Unlocking a User in Ubuntu

To lock a user in Ubuntu, set the account's expire date to a date in the past with usermod. Substitute the user's username where username is shown below.

[code]sudo usermod --expiredate 1 username[/code]

This can also be shortened to

[code]sudo usermod -e 1 username[/code]

To unlock a user

[code]sudo usermod -e -1 username[/code]

This will disable the user both from accessing via password and from accessing via a private key.
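
You can verify the lock took effect with chage, which shows the account's aging details (the "Account expires" line should show a date in the past once the user is locked):

[code]sudo chage -l username[/code]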

You can also expire a user at a future date using

[code]sudo usermod -e YYYY-MM-DD username[/code]

To expire a password immediately and force the user to choose a new one at their next login

[code]sudo passwd -e username[/code]

Related: Managing Users in Ubuntu, Reset Root Password on MySQL Database, Don't Copy-Paste Directly from Website to Terminal

How to Manually Run Cron Tasks

To manually run cron tasks you can use the run-parts command in Linux.

So to run your cron.weekly tasks, for example, to test that a fix you just made runs without error (this is what I just did, in fact):

[code]run-parts /etc/cron.weekly[/code]

run-parts will run all the executables in a directory (you must point it at the directory, not a file). So if you have several files in cron.weekly, you can't use run-parts to run just one of them.

You may run into environment differences when running a script as a different user than cron runs it as, so run as that user if needed. Be aware this is a quick and simple way of testing part of the process, but it doesn't perfectly reproduce how the task runs under cron. Still, it lets you catch some failures quickly and fix them in time for the actual cron task to run. So do check that everything works after the real cron job runs. A couple of variations I find useful are shown below.
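
To see which scripts run-parts would execute, without actually running them, the --test flag prints their names. And since system cron runs these scripts as root, running them via sudo gets you closer to the real conditions:

[code]run-parts --test /etc/cron.weekly
sudo run-parts /etc/cron.weekly[/code]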

This is just the kind of thing I said I would put in this blog. Simple stuff but things I forget – so I put it here to remember and maybe help out others, like me, that need really basic tips.

If a cron task item is just a script (or you have set up the whole task this way) and you want to test only that one item, you can run the script directly. For example, for a Linux shell script:

[code]sh /etc/cron.weekly/your_crontask_script.sh[/code]

Related: Updates Needed When Upgrading from Apache 2.2 to 2.4, Rsync to copy Files Between Servers and Computers, Bash Profile Adjustments for Scrolling History, Checklist: Setting Up a New Domain on VPS

Compare WordPress Files on Server to Proper WordPress Version

Sadly, one of the hassles in managing your own WordPress blog is dealing with people who use your blog to serve spam content. These hacks can insert spam links into your pages and posts, or create whole directories of their own spam content on your domain.

There are many issues to deal with in re-establishing control of your server, but that isn't the scope of this post.

These are just tips for troubleshooting, to try to determine what is going on. Often your server has been hacked to allow uploaded php pages to be added, or to allow the WordPress php files to be edited.

One way to track down whether files have been changed or added is to compare the WordPress files on your server to the files of a fresh WordPress install. This assumes your blog is using the current version, which hopefully it is, because one of the big improvements WordPress made was making those updates automatic. That greatly reduces the chance of WordPress being the vector for infecting your server. If you are using an older version, just compare to the files for that version from the WordPress server.

If you don't have a current backup, make one before trying this. Obviously, don't make any deletions or changes on your server unless you understand what you are doing. You can create big problems for yourself.

You can use the diff command to view the differences between the WordPress files on your server and a fresh install from WordPress. I install the new WordPress in a new directory outside public_html. At the cli on an Ubuntu/Linux server:

[code]sudo wget http://wordpress.org/wordpress-4.0.zip
unzip wordpress-4.0.zip
diff -rq wordpress ../public_html/blog/[/code]

wordpress-4.0.zip - replace with whatever version you are using.
../public_html/blog/ - replace with the path to your blog.
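
Files that exist on your server but not in the fresh install are often the most suspicious. You can filter the diff output to show only those (a small addition of mine; the grep pattern assumes the paths used above):

[code]diff -rq wordpress ../public_html/blog/ | grep 'Only in ../public_html'[/code]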


VPS, Cloud and Server Colocation Options

Finding the right place to host your content is important. Thankfully there are several excellent providers. For virtual private servers (one physical server shared among multiple virtual servers), Linode and DigitalOcean stand out. There are lots of good choices, but those two are widely appreciated for excellent service at a good price.

AWS EC2 (Amazon's Elastic Compute Cloud) is not great for minimal hosting in my opinion: it adds extra complexity and is likely more expensive. But it is a great solution when you have the resources to manage it and you have significantly variable demand. Because you can add capacity on the fly as you need it, you can maintain a low baseline, add capacity only as needed, and drop that extra capacity as soon as it isn't needed.

Rackspace is another good option for hosting. Rackspace and AWS are often used for very large applications and sites, but Linode and DigitalOcean can also serve those needs and provide similar options to add capacity on the fly.

All of these options require you to manage your server (which may well be a virtual server, that is, a portion of an actual physical server that you control).

Rackspace also offers colocation, where your physical server is placed in their network operations center, with electricity, cooling, network and internet connections, and physical security managed by them, and the server managed by you.

As colocation has evolved, what is included, and to what level things like physical security and redundancy are handled, has evolved too. It has become quite complex to understand all the options for organizations that need more than a simple virtual private server. As often happens when there is a business need, people offer solutions: there are companies that specialize in helping you find the best colocation options for your needs.

Today the cloud options have led many organizations to eliminate (or greatly reduce) their own network operations centers and colocation needs. But cloud options are not always the right choice. And for some needs cloud options are not appropriate yet (mainly due to security or legal issues stemming from security concerns).

Managing your own servers with a colocation arrangement can be significantly cheaper than cloud hosting options (especially if you don’t need to massively increase capacity to deal with short term bursts of demand). Of course, technology continues to change so quickly it is hard to predict what the future will bring.

Service quality is absolutely critical for colocation. While saving money is important, the reason colocation was selected (over virtual private servers or the cloud) is normally how critical the function is. Using experts to help sort through the options and assure the quality of service of providers is wise.

Related: Keeping Your Hosted Ubuntu Web Server Software Up to Date, Checklist for Setting Up a New Domain on VPS, System Monitoring Tools for VPS

Updates Needed When Upgrading from Apache 2.2 to 2.4

I updated from Ubuntu 12.04 to Ubuntu 14.04 on a virtual private server.

When you do that update, Apache is updated from 2.2 to 2.4. Certain changes mean that no web sites will work until you update the sites-available configuration files.

The quick checklist of what you have to do for each configuration:

  1. Update the file to add or edit the Directory details (where /var/www/curiouscatnetwork.com/public_html/ is the path to the website files on your server)
    [bash]<Directory /var/www/curiouscatnetwork.com/public_html/>
    Require all granted
    </Directory>[/bash]
    Remember, if you have directives needed for WordPress, those instructions need to remain. I didn't have a Directory section listed for my non-WordPress sites.
  2. The files now need a .conf extension, so rename the files:
    [bash]sudo mv curiouscatnetwork.com curiouscatnetwork.com.conf[/bash]
  3. Enable the new configuration
    [bash]sudo a2ensite curiouscatnetwork.com.conf[/bash]
  4. Reload Apache to pick up the new configuration
    [bash]sudo service apache2 reload[/bash]
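
One extra check worth doing (my addition, using Apache's standard syntax checker) is to verify the configuration before reloading, so a typo doesn't take all your sites down:

[bash]sudo apache2ctl configtest[/bash]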

More details from Linode.

The upgrade to Ubuntu 14.04 LTS was interrupted (I quit a window using the wrong command – oops). Luckily it wasn’t a big deal. I was able to delete the lock file.

[bash]sudo rm /var/lib/dpkg/lock[/bash]

Then trying to update again

[bash]sudo apt-get dist-upgrade[/bash]

gave a message telling me what command to use to have the upgrade continue.

[bash]sudo dpkg --configure -a[/bash]

which seemed to work fine.

Related: Bash Profile Adjustments for Scrolling History, Checklist: Setting Up a New Domain on VPS, ModSecurity: Adjustments for WordPress

Rsync to copy Files Between Servers and Computers

I found rsync when I wanted to use scp to copy files to a server without overwriting files already there. Rsync seems to be more efficient in general, and it is really great when many of the files are duplicates (rsync will just skip them).

To copy files from your current computer to a server:

[bash]rsync -azv -e ssh [directory] [username]@remotehost:[location][/bash]

[bash]$ sudo rsync -azv -e ssh directory-on-my-computer/ username@remotehost:~/files/go/here[/bash]

a = archive mode (preserves permissions, times and more)
z = compress file data
v = verbose
r = recurse through subdirectories and copy all of them (already implied by archive mode)

Compressing the file data saves bandwidth, so if that is an issue it is another big win over scp. And from my reading it seems rsync can resume a broken file transfer in process (with scp you have to redo the whole file transfer); rsync's --partial flag keeps partially transferred files around so they can be resumed.

To copy from the server to your computer, just reverse the order of the locations. You can even put in two remote addresses and copy between them.
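
For example, to pull files down from the server (using the same hypothetical host and paths as above, reversed):

[bash]$ sudo rsync -azv -e ssh username@remotehost:~/files/go/here/ directory-on-my-computer/[/bash]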

As a reminder, I realize this blog is made up of stuff that is obvious to a large number of people. It is really aimed at me (so I can quickly find what I found before), and to a lesser extent others like me (who use the cli some but are not system administrators or programmers to any significant extent).

Related: Making Sure You Don't Run Out of Space on Your VPS, Bash Profile Adjustments (Scrolling History), Customizing the Command Prompt for Terminal in Ubuntu

Turn It Off and On Again

I had a WordPress blog lose the database connection. As I tried to troubleshoot it I rapidly got to the point of thinking that maybe just rebooting the server would fix things – since no changes had been made that should cause the database connection to be lost. But also I figured I couldn’t be so lucky that such a simple thing would work.

I tried to log into the database using the cli and it was failing. MySQL was still running, according to top. I have automated security updates set up for the VPS server running Ubuntu. My thought now is those updates somehow messed things up to the point that the database connections weren't working. I sure hoped that was the case.
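
The sort of checks I mean, for reference (a rough sketch; service names can vary by setup):

[bash]mysql -u root -p
sudo service mysql status[/bash]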

I rebooted and hoped.

After a bit I was able to see that it was working.

When I went to reboot I noticed I hadn't rebooted in a long time. My guess is I might have the automated security updates set up wrong on this server (maybe it is installing more than just the security updates); I'll check. I have not had this problem before with other servers. Obviously, breaking a database this way would cause lots of problems on production machines, so I tend to think it is more likely I messed something up than that this is a likely outcome of using automated security updates. But such things are possible, which is why places with full time system admins and important servers rely on manual updates, with professionals watching everything to be sure nothing obvious breaks.

My feeling is that the longer you go without rebooting the server, the more likely issues are to crop up. But every reboot is also more likely to break something right then than doing nothing, so it is a tradeoff. And I could be wrong in that feeling; it is just a guess without much evidence to support it.

Anyway I was very happy turning it off and on again worked. Honestly I went ahead and updated the server before I rebooted, but my guess is just turning it off and on again would have worked.

Related: Making Sure You Don't Run Out of Space on Your VPS, Basic MySQL Performance Monitoring, WordPress error: Image could not be processed. Please go back and try again.

Top with Better Display Options

Scout Realtime is a Ruby gem that lets you view top in the browser. One huge advantage is being able to view charts of activity over time.

Scout Realtime is open source and free.
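
Getting it running is simple (from memory, so verify against the Scout Realtime docs; the daemon serves its charts on port 5585 by default):

[bash]gem install scout_realtime
scout_realtime[/bash]

Then browse to http://your-server:5585 to view the charts.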

Related: System Monitoring Tools for VPSBasic MySQL Performance MonitoringKeeping Your Hosted Ubuntu Web Server Software Up to Date

Making Sure You Don’t Run Out of Space on Your VPS

Over the years I have had web sites I work on go down because the server ran out of space (when I wasn’t responsible for the system administration). Well today I had it happen for some sites on one of my servers.

I have a remote monitoring service so I was notified when the sites were unavailable. Looking in the error logs I could see I ran out of space, which is a fairly lame error to make, I think. I cleared up some space quickly and things were in working order again.

A few tips from this experience. To see data on space used and available space from the command line (Linux/Ubuntu), use the df command.

[bash]df[/bash]

You could check that occasionally to see if anything strange is going on. But it is better to use something like Nagios to monitor the server and provide notice if usage goes past certain thresholds (75%, 80%…).
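
As a quick stopgap before setting up real monitoring, a simple threshold check can run from cron (a sketch of mine; the 80% threshold and the mail command are assumptions, and mail must be configured on your server):

[bash]#!/bin/sh
# warn if the root filesystem is more than 80% full
usage=$(df / | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$usage" -gt 80 ]; then
  echo "Disk usage on / is at ${usage}%" | mail -s "disk space warning" admin@example.com
fi[/bash]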

I found a nice way to list large files (which it may be possible to delete to free up space):

[bash]sudo find . -mount -ls | awk '{print $7, $11}' | sort -rn > large_file_list[/bash]

If you get a permission error creating the output file, you can log in as root (su), or create the file first so the redirect can overwrite the existing file.
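
Another approach I find handy is du, which totals space by directory rather than per file (-x keeps du on one filesystem; sort -rh needs GNU sort):

[bash]sudo du -xh /var | sort -rh | head -20[/bash]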

It is smart to have logrotate set up for your logs. I didn't have it set up for every log. To keep track of all your logs for domains you can do the following: create a file in the logrotate directory so it is run per your settings.

[bash]sudo vim /etc/logrotate.d/virtualhosts[/bash]

This article from Slicehost provides details on the options; a simple setup is to include text such as this in the file created in the line above.

[bash]/var/www/domain1.com/logs/*log /var/www/domain2.com/logs/*log {
rotate 14
daily
compress
delaycompress
sharedscripts
postrotate
/usr/sbin/apache2ctl graceful > /dev/null
endscript
}[/bash]
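
To check the new file without waiting for a rotation, logrotate's debug mode shows what it would do without actually changing anything:

[bash]sudo logrotate -d /etc/logrotate.d/virtualhosts[/bash]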

Related: System Monitoring Tools for Ubuntu VPS, Phusion Passenger Tips and Troubleshooting Ideas, Checklist: Moving WordPress site to a New Host
