Manage Your Log Files with Logstash

Your dedicated server has lots of logs. Almost every service and program running on a Linux or Unix server has a log file associated with it that includes relevant information about processes, errors and warnings. Sifting through all of those logs can be a pain, especially if you need to review old logs or compare them with newer ones. A tool called logstash may be the answer to your concerns.

logstash allows you to collect all of your logs, parse them, index them, store them and search them. For example, you could use it to find all instances of 404 File Not Found errors in your Apache HTTP Server logs, or you could find InnoDB warnings in your MySQL logs. You can get pretty specific with your searches and avoid a lot of false positives.
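As a rough sketch of what this looks like in practice (paths and the embedded-Elasticsearch output are assumptions that depend on your setup and logstash version), a minimal configuration that collects Apache access logs and parses them so you can search for 404s might be:

```
input {
  file {
    path => "/var/log/httpd/access_log"
    type => "apache-access"
  }
}

filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}

output {
  elasticsearch { embedded => true }
}
```

With the log lines parsed into fields, a query such as `response:404` finds the File Not Found errors without the false positives a plain text search would produce.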

logstash is free and open source software, released under the Apache 2.0 license. You can build it from source or install binaries from its APT and YUM repositories on Debian, Ubuntu, Red Hat Enterprise Linux, CentOS and Fedora.

Better Server Hosting Network Naming Schemes

It might be fun to name your networked servers something like “hiveship1” and “hiveship2”, but even if you do have some fun with it, you should still make sure you have a good naming system and stick to it. What follows are a few tips for server and network naming.

Hostname (also A record) – This should be a unique name for the machine you want to identify. It does not need to indicate what the machine does or even where it is located, but the physical device (or virtual one) should keep the same name. The hostname could therefore appear as:

hiveship1.domain.tld
Subdomains or CNAME records should specify more information about the location and purpose of the machines. For example, a production database machine located in building C could look like this:

sql.prd.buildc.domain.tld
Other CNAME records may be useful solely for the sake of convenience. For example, if you want your billing manager to be easily accessible through a subdomain, you might use:

billing.domain.tld
Your records might look like this:

hiveship1.domain.tld A

sql.prd.buildc.domain.tld CNAME hiveship1.domain.tld

billing CNAME hiveship1.domain.tld
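In BIND zone-file terms, the scheme above might be written like this (the IP address is a placeholder from the documentation range, and the zone's origin is assumed to be domain.tld):

```
hiveship1        IN  A      192.0.2.10
sql.prd.buildc   IN  CNAME  hiveship1.domain.tld.
billing          IN  CNAME  hiveship1.domain.tld.
```

Only the hostname carries an address; everything else is an alias, so renumbering the machine means changing exactly one record.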

By keeping your naming scheme unified and organized, you avoid the unfortunate scenario where you lose a server or simply have a hard time identifying the servers or virtual machines you have running and connected to a network.

Some Benefits of FUSE on a Linux Server

FUSE stands for Filesystem in Userspace. As the name implies, it allows a user with limited privileges to create a functional filesystem without requiring root (administrative) privileges. Because these filesystems run in userspace, they are technically virtual filesystems, but they behave as though they were ordinary ones. Does FUSE have any use on a server? In some cases, yes.

One example of a practical use for FUSE is for mounting network file systems. SSHFS, for example, is a remote file system that is accessible via SSH. If you have a folder on your server that you would like to function as a normal folder on your local machine, SSHFS allows you to accomplish that. In this case, FUSE would be necessary on the local machine.
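As an illustrative sketch (the user, host and paths here are placeholders, and sshfs must be installed on the local machine), mounting and unmounting a remote folder looks like this:

```
$ sshfs user@server.domain.tld:/var/www ~/remote-www

$ fusermount -u ~/remote-www
```

Between the two commands, ~/remote-www behaves like any local directory: you can browse it, edit files in it and copy to and from it, with SSH handling the transport underneath.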

FUSE might also be necessary on a server if you are mounting filesystems from other servers, such as a backup server. You can temporarily mount them using SSHFS, NFS or any other method you choose. In all cases, FUSE makes it convenient to mount without making changes to fstab. When you are done with your mounts, you can detach them and not have to worry about them anymore. You can find out more about FUSE in its online documentation.

Tips for Boosting VPS Performance

In some instances, the performance of a VPS is limited to the hardware and software specifications of the host machine. You have a finite amount of CPU and memory resources at your disposal, and you are not free to change the operating system’s global configurations. Nevertheless, there are a plethora of OS and software parameters that you can manipulate to boost your virtual private server’s performance.

Web server tweaking – You can enable features like dynamic module management, caching and limits on running processes. Remember, your VPS likely has a limited amount of RAM, and many web server processes and functions can drain a lot of that memory.

Optimize database memory usage – Database systems like MySQL can use a lot of memory, and some VPS platforms will essentially fall apart under heavy MySQL loads. A number of settings in my.cnf can help reduce memory usage, such as disabling the Berkeley DB and InnoDB engines if you do not use them, tuning key_buffer and enabling the query cache.
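To make the database point concrete, a memory-conscious my.cnf fragment might look like the following. This is only a sketch: the directive names are from MySQL 5.x (skip-bdb applies only to older versions), and the values are starting points to tune, not recommendations.

```
[mysqld]
skip-bdb               # disable the Berkeley DB engine (older MySQL only)
skip-innodb            # disable InnoDB if all your tables are MyISAM
key_buffer = 16M       # index buffer for MyISAM tables
query_cache_size = 8M  # cache repeated SELECT results
query_cache_type = 1
max_connections = 50   # each connection consumes per-thread buffers
```

After changing these settings, restart MySQL and watch memory usage under normal load before tightening or loosening them further.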

Finally, make sure you are only running applications and services that are necessary for your VPS. Eliminate the ones you do not need, and you will free up a lot of CPU cycles and memory for the applications your server actually uses.

Summer Cleaning for that Dusty Data Center

Summer is in full swing, and if you missed your opportunity for spring cleaning, you should seize the day and clean out your data center. What does data center cleaning mean? Many times the dirtiest areas are the places you cannot see at first glance, but all of that dust can affect the efficiency of your servers.

If you run a large enough operation, it might be a good idea to hire professional cleaners to clean your data center. These people should be more than maids for hire. Make sure you hire someone who has experience dealing with servers and other electronic equipment, especially when dealing with controlled environments and advanced cooling systems. Some of the problematic dust particles can be extremely small, so special HEPA filtered vacuum cleaners and air scrubbers will help.

Beyond general dust and dirt, cleaning should also entail moving out old equipment, replacing worn cables, checking electrical outlets and power supplies and recovering unused space. Since it is doubtful that every machine in your data center was there from the start, some equipment might be older than others. Take care to perform physical maintenance on the older equipment.

With your data center thoroughly cleaned, you can run at peak efficiency and also keep it looking nice for customers and other visitors.

Energy Efficient Data Center Cooling

Depending on where you are located, summer can get very hot. That means it is more important than ever to keep your servers running cool. The following are some innovative ways people are making new energy efficient data centers.

1. Free air cooling – Despite the name, “free” does not mean it costs nothing; free air cooling involves taking cooler air from outside and using it to chill the water that cools the data center. Obviously this does not work in summer, but the money it saves in the cooler months can offset summer cooling costs.

2. Water recycling – Google, HP and some others are actually treating the water they use for cooling and returning the purified water to the water supply. This is actually cheaper than bringing in fresh water and helps the environment as well.

3. Open air – A data center designed with the right orientation for maximum wind throughput and air flow can actually naturally cool servers to some extent. This would not be enough in extreme heat, but in moderate climates, it can certainly help reduce cooling costs.

4. Submersion – While it sounds like server suicide, some data centers actually submerge their servers in liquid cooling containers. The liquid naturally has a lower temperature than the open air, and as long as the enclosures stay closed, no circuitry is damaged.

How to Create and Change Linux System Passwords

Server and website security seems to be in the headlines often lately with news of government surveillance and the Heartbleed vulnerability in OpenSSL. More than ever, it is important that you have a strong password. If you need to change your password on your Linux dedicated server or virtual private server, this guide should help you.

Command Line

If you want to change your password from the command line, use the “passwd” command. To change your own password, type it without any arguments:

$ passwd

To change someone else’s password, type passwd followed by the username you want to change. You must have the necessary permissions to change that user.

# passwd bobdobalina

After you press Enter, Linux will prompt you for the current password (when run as root, this prompt may be skipped):

Changing password for bobdobalina.
(current) UNIX password:

Type the current password and press Enter. It will then prompt you for the new password and then ask you to confirm it:

Enter new UNIX password:
Retype new UNIX password:

You will not see the characters you type or any indication that you are even typing something. This is just a security precaution.

For more information on the passwd command, type “man passwd” from the command line, or see the online documentation. In the next part of this series, we will look at changing passwords from within cPanel and Webmin.

Get Rsync Functionality in Windows with DeltaCopy

If you run Linux on your server, rsync is a great option for incremental backup management. The problem is, if you are running Windows on your backup machine or on the machine you use to manage backups, rsync is not available. There are, however, ways to make rsync work in Windows. Most of those methods involve running Cygwin. DeltaCopy is one tool that makes it a little easier.

DeltaCopy gives you many of the benefits of rsync, such as fast incremental backup, task scheduling, email notification and easy restoration, but it does not require all of the libraries and configuration necessary to run Cygwin. Instead it functions as a Windows wrapper for rsync.

DeltaCopy also has a full GUI interface that makes it easy to schedule backups and manage them from your Windows machine. Best of all, DeltaCopy is free and open source, licensed under the GPL v3 and is available for most Windows versions (XP, 2000, 2003, Vista, 7 and 2008).

To learn more about DeltaCopy and how to use it, see the online documentation (PDF) and the FAQ. For more information about rsync on Linux, see this tutorial.

Why Use Linux for Dedicated Servers?

A recent report indicates that 97 percent of the world’s top 500 supercomputers run Linux. That statistic seems unfathomable, as if no other operating system were even worth mentioning. Most of the remaining 3 percent are other Unix variants, and Windows barely registers at all. The question one should ask is: why? Why do organizations, system administrators and programmers routinely turn to Linux for their server needs?

If you talk to any of these system administrators about Linux, they will throw out terms like scalability, high performance, reliability, security and even usability. Flexibility, above all, seems to be one of the most important factors. Linux, after all, is not a single operating system but rather the kernel that powers a myriad of operating systems. Red Hat Enterprise Linux, SUSE Linux Enterprise Server, CentOS, Scientific Linux, Debian, Ubuntu and many others power many of the world’s servers and supercomputers.

Cost is also a major factor. Even the most commercial of Linux distributions is more cost effective than its Unix and Windows competitors, and when you factor in that organizations can essentially develop their own unique Linux distribution for free to meet their large server and cluster needs, it becomes a winner by a landslide.

Finally, Linux is free and open source. Its development is transparent, so you know exactly what goes into it and what to expect to come out of it. It is easy to scale, cheap to deploy and made for customization.

Htop: An Advanced Alternative to Top

Top shows you a great deal of information about the top running programs on your server, load averages, memory usage and more. Now imagine Top a little more colorful with some graphical representations of CPU, memory and swap usage, and a full range of shortcuts and functions that can help you manage tasks and find exactly what you need. Enter Htop, an alternative to Top that greatly expands its functionality.

To install htop on your RHEL, CentOS or Fedora system, you need to use the RepoForge repository. The easiest way is to download the appropriate RPM and install it. For example, for the 64-bit version of RHEL 6:

# wget

# rpm -ivh rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Then install via yum:

# yum install htop

On a Debian or Ubuntu server, it should be in the main repository. To install, type:

# apt-get install htop

At the top, you will see meters for each CPU/core, memory and swap. It will also show you the number of running tasks, the load average and uptime. The middle section looks nearly identical to top, with PID, user, memory and CPU usage, and the commands. At the bottom, you have a menu of function key shortcuts that allow you to manipulate tasks. You can select tasks with your keyboard or mouse.

For more help with htop, press F1 within htop or see this documentation.

Find Out What Files Are Open On Your Linux Server

When managing your Linux server, you may encounter lag or other performance issues that lead you to question what files your server might be accessing at a given time. Or you might just want to run routine diagnostics to make sure your server is only running and managing the files it is supposed to. You may also need to remove a file or unmount a drive but cannot because the system tells you it is being accessed by a program. Regardless of the reason, you need a tool that can shed light on which files are being accessed. The command lsof is exactly what you need.

If a file is open on your server, lsof will tell you what user has it open, the type of file it is, the node associated with it and more. You should note that Linux will consider directories, devices, pipes, sockets and anything else on the file system as a file.

To use lsof, simply type it from the command line:

# lsof

The output will look like this:

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mysqld 1275 mysql 12u unix 0xffff8801fe724680 0t0 10184 /var/run/mysqld/mysqld.sock

mysqld 1275 mysql 13u REG 8,18 2048 21248418 /var/lib/mysql/mysql/host.MYI

mysqld 1275 mysql 14u REG 8,18 0 21248832 /var/lib/mysql/mysql/host.MYD

mysqld 1275 mysql 15u REG 8,18 2048 21247064 /var/lib/mysql/mysql/user.MYI

mysqld 1275 mysql 16u REG 8,18 928 21247125 /var/lib/mysql/mysql/user.MYD

mysqld 1275 mysql 17u REG 8,18 5120 21247493 /var/lib/mysql/mysql/db.MYI

mysqld 1275 mysql 18u REG 8,18 1760 21248784 /var/lib/mysql/mysql/db.MYD

mysqld 1275 mysql 19u REG 8,18 5120 21254689 /var/lib/mysql/mysql/proxies_priv.MYI

In the above example, the command is “mysqld”, which is a database server run by the “mysql” user. The first file being accessed is mysqld.sock. The FD column provides useful information. For example, the number 12 is the file descriptor, and “u” indicates that the file allows both read and write access.
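lsof gathers this information from the /proc filesystem, which you can also inspect directly. A quick way to see the idea, using the current shell as the example process:

```shell
# /proc/<pid>/fd holds one symlink per file the process has open.
# Open /etc/passwd on descriptor 3, then list this shell's descriptors.
exec 3< /etc/passwd
ls -l /proc/$$/fd
exec 3<&-   # close descriptor 3 again
```

Each entry in the listing points at the file behind that descriptor, which is exactly the NAME column lsof reports.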

For more information about lsof and all it can do, type “man lsof”, see the online documentation or read this tutorial.

How to Build Linux Server Programs with CMake

We have previously illustrated how to compile a program with Make. In this brief guide, you will learn a little bit about building with CMake, a useful alternative found in many Linux distributions.

First, you should install CMake on your system, if it is not already present. On RHEL and CentOS, type:

# yum install cmake

On Debian/Ubuntu type:

# apt-get install cmake

Next, install any dependencies required by the program. There should be a list of them in the README or INSTALL file. If not, you might need to check the program’s documentation.

Next, navigate to the directory containing the project’s CMakeLists.txt file, create a build directory and change into it:

$ mkdir build

$ cd build

Then, simply run CMake:

$ cmake ..

You can consult the CMake documentation for available options. For example, to select a build type, you would pass an option like this:

$ cmake .. -DCMAKE_BUILD_TYPE=Release

Once CMake has generated the build scripts, run Make as you normally would:

$ make

And finally, install the program:

# make install
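If you are curious what the project side of this process looks like, a minimal CMakeLists.txt for a hypothetical single-file C program (project and file names here are placeholders) is only a few lines:

```
cmake_minimum_required(VERSION 2.8)
project(hello C)

# Build one executable from one source file.
add_executable(hello hello.c)

# Support "make install" by declaring where the binary goes.
install(TARGETS hello DESTINATION bin)
```

CMake reads this file during the `cmake ..` step and generates the Makefiles that the subsequent `make` and `make install` commands use.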

For more information about CMake as well as full developer information, visit the CMake resources website.

3 Ways to Enable and Disable Linux Services

Linux services, or daemons, are programs that typically start when the system boots and remain running in the background until the system shuts down. What follows are three distinct ways to manage services on RHEL and CentOS servers.

1. chkconfig – You can use this simple command to list current services, enable them and disable them (use the “service” command to start and stop them immediately). You can also set the runlevels at which services start to make sure they come up when and how you want.

2. Setup – Red Hat-based distributions have an ncurses tool that you can navigate with your keyboard for a variety of configuration settings. Among those are services. To run setup, simply type “setup” as root and navigate to “System services”.

3. Hosting Automation – If you do not want to manage services through the console or SSH, you can rely on some form of hosting automation, such as cPanel WHM or Webmin. These tools provide full functionality for services management.
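For example, with chkconfig (method 1), managing a service such as httpd might look like this; the service name is only an illustration:

```
# chkconfig --list httpd

# chkconfig --level 35 httpd on

# chkconfig httpd off
```

The first command shows the current on/off state per runlevel, the second enables the service for runlevels 3 and 5, and the third disables it again at the default runlevels.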

For security and performance it is better to disable services you do not use. You can use any of the above three methods to accomplish that. You can also uninstall services if you are sure you will not need them in the near future.

Encrypt Your Linux Server Filesystem with eCryptfs

Encryption has become a highly requested feature on the web lately with all of the talk of government spying, heartbleed and general security concerns. While most discussion has centered around encrypting the transport of data (via SSL), you might also want your data encrypted on disk as well. On a Linux-based server, you can encrypt your filesystem using eCryptfs.

eCryptfs is a stacked cryptographic filesystem that stores encryption metadata with each file, allowing encrypted files to be copied between hosts. It is also relatively easy to use and is even employed by some desktop operating systems such as Ubuntu and Google Chrome OS.

eCryptfs is available for installation on most major Linux distributions. It is free and open source and can be applied to a home directory, partition or other distinct filesystem entity.

On Red Hat Enterprise Linux and CentOS, run:

# yum install ecryptfs-utils

On Debian and Ubuntu run:

# apt-get install ecryptfs-utils

The benefits of eCryptfs are clear, but there is one possible drawback. If for some reason you lose your password and are unable to access your files through the normal login process, it might be impossible to regain access. A user without the proper credentials cannot decrypt the filesystem.

For more information about eCryptfs, see the online documentation.

Working with Symbolic Links in Linux / Unix

In many cases, you may find yourself needing a particular file or directory in one location while it is actually stored in another. One solution to this in Linux and Unix operating systems is linking. There are two types of links: hard links and symbolic links.

Hard links point directly to the data of a file on disk; the data remains accessible through the link even if the original name is removed, but hard links cannot cross filesystem boundaries or link to directories. Symbolic links refer to a file by its path. If the target is moved or removed, the symbolic link is left dangling and no longer works. It can, however, cross filesystem boundaries and link to directories.

To make a hard link, simply run the “ln” command without options:

$ ln /filesystem/file /filesystem/file2

To make a symbolic link, use the “-s” option:

$ ln -s /filesystem/file /filesystem/file2
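The difference between the two link types is easy to see in practice. The following self-contained demonstration only touches a temporary directory, so it is safe to run anywhere:

```shell
# Create a scratch directory with one real file.
dir=$(mktemp -d)
echo "hello" > "$dir/file"

ln "$dir/file" "$dir/hard"      # hard link: another name for the same data
ln -s "$dir/file" "$dir/soft"   # symbolic link: points at the path

readlink "$dir/soft"            # prints the target path

# Remove the original name: the hard link still reads the data,
# while the symbolic link is left dangling.
rm "$dir/file"
cat "$dir/hard"                 # still prints: hello
rm -rf "$dir"
```

Deleting the original name never breaks a hard link, because the data is only freed once every hard link to it is gone.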

In most Linux shells, depending on the configuration, symbolic links appear in a different color than normal files, often light blue. This lets you know that the entry actually points to another file. If you list in long form with “ls -al”, you will see where each link points. For example:

lrwxrwxrwx 1 root root 16 Nov 15 2013 file2 -> /filesystem/file

The arrow indicates that file2 is actually a symbolic link pointing to /filesystem/file.

Hard and symbolic links are a quick and easy way to make sure programs and services can access files the way they need them, and it is built into most Linux file systems.

Increase Server Security by Restricting Cron Jobs

Cron is one of the outstanding features in Linux and Unix-like operating systems that many system administrators love. It provides a full range of automation capabilities by allowing admins to schedule programs, scripts or other processes for any time of any day. Allowing other users to do this, however, can pose a security risk. Therefore, it is a good idea to restrict cron jobs to only the necessary users.

There are two files that provide cron access control:

  • /etc/cron.allow
  • /etc/cron.deny
If you have a specific user you do not want using cron, enter that user into /etc/cron.deny. On the other hand, you can disable all users by putting “ALL” in /etc/cron.deny and then whitelisting the ones you want to give access to in /etc/cron.allow.
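A whitelist setup along the lines described above might therefore look like this (the usernames are placeholders; exact precedence between the two files can vary by cron implementation, so check your system’s crontab man page):

```
# /etc/cron.deny — deny everyone by default
ALL

# /etc/cron.allow — except these users
root
deploy
```

Any user not listed in /etc/cron.allow will be refused when running crontab.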

Once a user is denied in crontab, he will not be able to set or modify cron jobs. However, if root sets a cron job to run as that user, it will still work on that user’s behalf. This can be convenient if you want to grant limited cron access, or if you have a control panel that lets users set cron jobs only through your administrative approval.

The Benefits and Drawbacks of Using Aptitude for Package Management

If you spend enough time around Debian and Ubuntu folks, you are sure to see someone praising the benefits of Aptitude over apt-get for package management. Aptitude, which has both an ncurses semi-graphical interface and a command-line mode, is superior according to some, but there are plenty who still prefer apt-get. What follows are some benefits and disadvantages of aptitude.

Benefits:
  • Aptitude offers per-package flags so that you can determine if a package was automatically installed and configure aptitude to treat it however you want
  • Aptitude will automatically offer to do what “apt-get autoremove” does, removing unused packages
  • You can search with aptitude, just as you would with the separate apt-cache command.

Drawbacks:
  • It can sometimes be too specific. If you do not want to go through interactive screens fine-tuning your packages, apt-get might be simpler
  • Apt-get may hold advantages over aptitude when it comes to installing source packages
  • Aptitude may automate some things you want to do manually.

For some users the advantages of aptitude are compelling, and the disadvantages are rather small. Which package management tool you choose is really a matter of preference, and both are great choices.

Should I Use CentOS for Business?

If you ask someone at Red Hat about CentOS, they will likely tell you that it is a great “community” distribution, but for enterprise servers, you need Red Hat Enterprise Linux and the support that comes with it. This is their job, of course, to promote their own software, but there is also some truth to it. Are there situations, however, when it might be better for you to use CentOS for your business?

The first thing to understand about CentOS is that, in terms of technology, it is every bit as effective as Red Hat Enterprise Linux (RHEL). In fact, on an OS and software level, it is RHEL to the core: it is rebuilt from the same source packages and receives the same security updates. What it does not have is Red Hat’s backing in any shape or form. You cannot call Red Hat if your CentOS server stops working, nor can you consult Red Hat for help with optimizing it or anything else. What you do get is a large CentOS community that may be able to help in many cases.

In terms of cost to maintain, RHEL obviously has the upfront licensing costs and the support costs. For CentOS, you will need someone, possibly a team of people, depending on your business size, who can maintain a Linux server on their own. Having said that, there are also companies more than willing to offer paid CentOS support. You can save on licensing but still pay for a managed server.

Finally, RHEL may be overkill for a small business with one server. It is likely just not in your budget to pay for the licensing. On the other hand, a large enterprise that wants to keep everything in house for research and development may also like CentOS. There are many other scenarios where companies might prefer CentOS, but it ultimately is a preference. You have to decide for yourself if it is right for you.

Monitor MySQL Activities with MyTop

If you have been around Linux and/or Unix long enough, you have probably heard of “top”. It is a convenient program that can give you information about running processes, memory and CPU usage, load averages and a host of other details about your server. You can do almost the same thing with “mytop”, only specifically focusing on MySQL databases.

Mytop will tell you useful information that will help you optimize your databases and repair any problems you might find. You can install it by doing the following:

For Debian-based Linux distributions, type:

# apt-get install mytop

For Red Hat Enterprise Linux, CentOS or Fedora type:

# yum install mytop

To use mytop, you will need to enter this information:

mytop -u <username> -p <password>

Replace <username> with your database administrator’s username and <password> with the appropriate password. For example:

$ mytop -u admin -p mypassword

With no other options, mytop will display information about all the databases connected to the database server. If you want to focus only on a specific database, add the “-d” option.

$ mytop -u <username> -p <password> -d <database>

For more information about mytop and how to use it, see the program’s online documentation.

How to Convert MyISAM to InnoDB in phpMyAdmin

MySQL has a number of options for database storage. Two popular storage engines are MyISAM and InnoDB. Each engine has its advantages, and the purpose of this brief tutorial is not to debate which one is greater. If, however, you decide that you need to switch from one to the other, this guide should help.

You can perform this conversion from the command line, but you also have the option of using phpMyAdmin for graphical manipulation of your database. To convert from MyISAM to InnoDB, follow these steps:

  1. Log in to phpMyAdmin (usually at yourdomain.tld/phpmyadmin)

  2. Find your database and click on the table you want to convert

  3. Click the “Operations” tab at the top

  4. In the “Table options” box, change the Storage Engine to InnoDB

  5. Click “Go”

  6. Repeat for each table

Assuming all goes well, each conversion will give you a “successful” message at the top of the page in green. Once all tables are converted, check your database and corresponding web application to ensure everything is running smoothly. For a full comparison of MyISAM and InnoDB, see this detailed description.
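The command-line route mentioned above boils down to one SQL statement per table; the table name here is a placeholder:

```sql
ALTER TABLE my_table ENGINE=InnoDB;
```

Run it from the mysql client for each table you want to convert, and MySQL rebuilds the table under the new storage engine.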

How to Manage Linux Server Timezone Settings

Setting your date and time correctly on your Linux server is very important. Your server logs and other important information will all reflect the timezone of your server. In most cases, you will want to set your server’s time to match your own local time, but if your server is remote or hosting sites for people primarily located in another timezone, you might choose a different one.

On Red Hat Enterprise Linux or CentOS, you can use the setup program or system-config-date (formerly redhat-config-date) to set the timezone. The setup program, although run from the command line, uses a semi-graphical interface to make it easier.

On a Debian-based distribution, use dpkg-reconfigure tzdata to set the timezone.

All of these tools include onscreen instructions that walk you through the process. Once you have changed the timezone, you can verify that your settings are correct by running “date”. For example:

$ date

Thu Apr 17 14:47:12 EDT 2014

In this example, it is 2:47 PM on Thursday, April 17, 2014, and the timezone is Eastern Daylight Time.
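You can also preview any timezone without changing the system setting by overriding the TZ environment variable for a single command (zone names come from the tz database):

```shell
# Show the current time as it appears in specific zones.
TZ=America/New_York date
TZ=Asia/Tokyo date

# Print just the timezone abbreviation to confirm the override works.
TZ=UTC date +%Z
```

This is handy for double-checking what a cron job’s timestamps will look like before you commit to a system-wide change.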

Once you have your timezone set, you should not have to reset it ever again unless you decide to change it. More information about Red Hat timezones is available here, and Debian information is available here.

How to Perform an In-Place Upgrade on an Ubuntu VPS

In-place upgrades on any operating system can be tricky. You are essentially updating all of the software, including the kernel, while keeping all of your current data. This makes the process inherently risky, and some administrators argue that you should avoid upgrades altogether in favor of backups and clean installs.

For Ubuntu systems, the clean install route is troublesome if you are on the normal release cycle, which produces a new version every six months. Having to completely back up your system and reinstall a new image would mean far too much downtime. On the other hand, you could opt for an LTS (Long Term Support) release from the beginning. These Ubuntu releases come out every two years and are supported for up to five years.

If you still want to go the in-place upgrade route, you will need to check with your VPS provider for specifics on the kernel. Many virtual private servers have kernels controlled by the VPS platform itself, not the guest OS. For example, your upgrade path might look like this:

  1. sudo do-release-upgrade
  2. halt (to shutdown the VPS)
  3. Install VPS kernel matching your new version
  4. Boot VPS

For specifics, check your VPS provider’s documentation. During the upgrade, Ubuntu will start an additional SSH daemon on an alternate port, just in case your main SSH server stops working for some reason. For more information about upgrading, see this documentation.

Use Mpstat to Monitor Multiple Linux Server Processors

Probably the first Linux tool you think of when you want to monitor CPU usage is “top”. Top is lightweight and gives you a good glimpse of the processes that are consuming processing power. One thing top will not show you by default, however, is how hard each individual processor is working.

If, for example, you have 8 CPUs or cores, simply viewing the overall usage does not always give an accurate picture of how much of your resources are being consumed. Mpstat gives you each processor’s usage percentage, niceness and many other important details.

To use mpstat, type from the command line:

# mpstat -P ALL

This will show all of the processors/cores running on your system. The output will look something like this:

Linux 3.11.0-19-generic (serverschool) 04/14/2014 _x86_64_ (4 CPU)

11:52:47 AM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
11:52:47 AM  all  10.10   0.03  1.09     0.68  0.00   0.11    0.00    0.00    0.00  87.98
11:52:47 AM    0   9.97   0.03  1.11     0.62  0.00   0.19    0.00    0.00    0.00  88.08
11:52:47 AM    1  10.19   0.04  1.07     0.74  0.00   0.10    0.00    0.00    0.00  87.86
11:52:47 AM    2   9.82   0.03  1.11     0.71  0.00   0.08    0.00    0.00    0.00  88.25
11:52:47 AM    3  10.42   0.03  1.08     0.66  0.00   0.07    0.00    0.00    0.00  87.74

On this quad-core system, “all” represents total usage, and 0 through 3 represent the four processor cores and their individual usage. For more information about mpstat, see the online documentation or type “man mpstat” from the command line.
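Under the hood, mpstat derives those percentages from the per-CPU counters the kernel exposes in /proc/stat. If mpstat (part of the sysstat package) is not installed, you can peek at the raw counters directly; this sketch is Linux-only:

```shell
# Show the raw per-CPU counters that mpstat reads (Linux only).
# The fields after the cpuN label are cumulative jiffies spent in
# user, nice, system, idle, iowait, ... modes.
if [ -r /proc/stat ]; then
    awk '/^cpu[0-9]/ { print $1 ": user=" $2 " nice=" $3 " system=" $4 " idle=" $5 }' /proc/stat
else
    echo "/proc/stat not available (non-Linux system)"
fi
```

For ongoing monitoring, `mpstat -P ALL 2 5` prints a fresh report every two seconds, five times in a row, which is often more useful than the single since-boot average shown above.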

Setup and Configure CentOS Server Part 2

In part one, we began by changing the root password and then creating an account that will be able to escalate to root privileges when needed. Now, you need to make sure that only that user can become root and not anyone else.

The easiest way to do this is to use the “wheel” group. You can configure CentOS to allow only users in that group to run the “su” command to become root.

First, add your user to wheel:

usermod -a -G wheel <username>

Replace “<username>” with your actual username (the “-a” flag appends wheel to the user’s existing groups instead of replacing them). Next, you need to tell PAM (the Pluggable Authentication Modules system) to allow only wheel users to become root. Edit /etc/pam.d/su and uncomment the line:

auth required /lib/security/$ISA/pam_wheel.so use_uid

Now you have set up your system so that you can log in as your unique user and then become root. The final step for best security practice is to disable root logins completely. Test your new setup first to make sure you have access via your new user before you proceed.

To disable root logins via SSH, do the following:

1. Edit /etc/ssh/sshd_config

2. Remove the “#” from this line and change “yes” to “no”

#PermitRootLogin yes

Change to:

PermitRootLogin no

Save the file and restart SSH. The server will now refuse root login attempts entirely, protecting your system from brute force attacks and other methods of root password guessing.
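The edit itself can also be scripted. The sketch below works on a throwaway copy so you can see the effect safely; on a real server you would run the same sed command against /etc/ssh/sshd_config as root and then restart sshd (`service sshd restart` on CentOS 6, `systemctl restart sshd` on CentOS 7):

```shell
# Demonstrate the edit on a throwaway copy of the config file
cfg=$(mktemp)
echo '#PermitRootLogin yes' > "$cfg"

# Uncomment the directive (if commented) and force it to "no"
sed 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin no/' "$cfg" > "$cfg.new"
mv "$cfg.new" "$cfg"

grep '^PermitRootLogin no' "$cfg"   # prints: PermitRootLogin no
rm -f "$cfg"
```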

In part three of the series, we will move further into CentOS server setup and learn about some of the software you will need.

How to Update Your OpenSSL Version to Fix Heartbleed Bug

Heartbleed, the highly publicized OpenSSL bug with the unfortunate name, has a lot of system administrators scurrying to fix the problem. If you have not heard about it by now, it is a security hole in OpenSSL’s TLS heartbeat extension that a cyber criminal can use to reveal up to 64 KB of a connected client’s or server’s memory per request. The bug is present in OpenSSL versions 1.0.1 through 1.0.1f and in 1.0.2-beta. You can fix it by upgrading to 1.0.1g or 1.0.2-beta2.

On a Linux server, you can check your OpenSSL version from the command line and upgrade it with your package manager. Depending on your distribution, the procedure will vary.

First, run the following as root:

# openssl version -a

or, as a regular user, with sudo:

$ sudo openssl version -a

The output will look like:

OpenSSL 1.0.1e 11 Feb 2013
built on: Wed Jan 8 20:58:47 UTC 2014
platform: debian-amd64

To upgrade your Debian-based system, including Ubuntu, run:

$ sudo apt-get dist-upgrade

For Red Hat and CentOS, you would run:

# yum update

If your distribution does not update to a newer version of OpenSSL, you may be running a release that is no longer maintained, or your distribution may not have fixed the problem yet. Note that many distributions backport the fix without changing the version number, so check the build date in the output as well. Moreover, if you installed OpenSSL manually, you might need to recompile it with the “-DOPENSSL_NO_HEARTBEATS” flag.
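As a quick first pass, you can compare the reported version string against the known-vulnerable range. This is only a sketch: because distributions backport the fix without bumping the version, an “affected range” result means “investigate further”, not “definitely vulnerable”.

```shell
# Classify an "openssl version" string against the Heartbleed range.
heartbleed_status() {
  case "$1" in
    1.0.1|1.0.1[a-f]|1.0.2-beta|1.0.2-beta1) echo "affected range";;
    1.0.1g|1.0.2-beta2)                      echo "fixed";;
    *)                                       echo "outside known range";;
  esac
}

# Check the locally installed OpenSSL (second field of "openssl version")
heartbleed_status "$(openssl version | awk '{print $2}')"
```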

Setup and Configure CentOS Server Part 1

CentOS is essentially a free implementation of the open source code from Red Hat Enterprise Linux. Logos and other trademarks aside, CentOS is RHEL at its core without the licensing fees. As such, CentOS has become very popular among server administrators as an ideal Linux server solution. This brief tutorial will explain how to get started with a new CentOS installation.

1. Change your root password – When you sign up for a dedicated server, your host probably provided you with a randomly generated root password. If you installed CentOS yourself, you can skip this step. For security, however, it is best to change any password that was sent to you via email. Simply run this command over SSH:

# passwd

It will prompt you twice to type your new password.

2. Create an admin user

This account will act as your gateway to root. You do not want to have to log in as root ever again after this first time. Therefore, create a new user that can escalate to root. If you want your username to be “bobalina”, type:

# adduser bobalina

You then need to create a password for the new user that is different from the root password:

# passwd bobalina

In part two, we will continue the initial setup of users and authentication.

How to Back Up Configuration Files in Webmin

Most Linux items that need backing up are stored in user home directories. This usually includes virtual web server directories as well, depending on the web server and settings you are using. One exception to this rule is configuration files, which are stored in /etc and other places. When it comes time to back up, many people back up database files and home directories but forget about configuration files. Webmin has a solution.

To back up configuration files, log in to Webmin and navigate to “Webmin – Backup Configuration Files”. Next, select the Webmin modules that you want to back up. This might include the Apache web server, BIND DNS, etc. You then need to specify where you want the backups to be saved. You can save them locally to a file, use an FTP or SSH server, or download them through your web browser. Finally, specify whether you want to back up the server configuration files, the Webmin module config files or both. You can then click “Backup Now”.

In addition to manual backups, the second tab in the module allows you to schedule backups. The settings are much the same, except that the last section gives you a cron tool for specifying when backups should occur. You can set simple hourly, daily or monthly schedules, or specify exact dates and times. When you are finished, click Create. Your configuration files will now be backed up and safe.

How to Install Softaculous in cPanel/WHM

Softaculous is a handy add-on for cPanel that gives your users the ability to quickly and nearly effortlessly install web application scripts. You can administer it through WHM, but once it is installed, any of your cPanel users can do one-click script installations. Installing Softaculous is relatively easy. Just follow these steps.

First, you should make sure that your firewall will allow downloads from the Softaculous mirrors. You will also need to have ionCube loaders enabled for PHP.

From the command line, type:

wget -N

chmod 755


You should now be able to log in to WHM, navigate to Plugins and find Softaculous – Instant Installs. If all goes well, you should see information about your software and server. Your installation is complete, and your users can now access it from within cPanel according to your settings and hosting plans.

For more information about installing Softaculous, see the online documentation and help on the Softaculous website.

Can I Upgrade the Kernel on My VPS?

It is generally understood that a server running Linux needs to have a relatively recent kernel version, or at least one that has been securely patched to fix any vulnerabilities. For dedicated servers, a kernel upgrade is no big deal: a simple install and reboot and you are done. For a virtual private server, it differs depending on the virtualization technology and deployment method your provider uses.

On OpenVZ, for example, the virtual OS does not actually use its own kernel; it relies on the host’s kernel. Therefore, upgrading your kernel package will not actually have an effect and might even produce errors. Instead, you can either depend on your hosting provider to update the kernel periodically, or you may be able to use an internal method that the host provides to update it to the latest version.

Generally speaking, Xen has the same issue as OpenVZ: you are usually at the mercy of your host’s kernel version. If you find that the kernel is not being updated, you should contact your host to have it done. There is, however, a way to achieve kernel freedom on Xen using a bootloader called PyGrub, which lets a guest boot a kernel installed inside its own filesystem.
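For illustration, on a Xen host that permits it, pointing the guest’s configuration at PyGrub is what allows the domU to boot its own installed kernel rather than one supplied by the host. The file path and guest name below are hypothetical, and option names can differ between Xen versions and providers:

```
# /etc/xen/myvps.cfg on the Xen host (illustrative path and name)
bootloader = "pygrub"   # read the GRUB menu from inside the guest disk
# With pygrub set, no "kernel =" line is needed; the kernel packages
# installed inside the guest are booted directly.
```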

Because of the nature of a VPS, it is important to choose a host you know will stay on top of security. As with any hosting, a website is only as secure as the server hosting it. While much of your VPS’s security is in your hands, you still need the help of your host.

Parallels Summit – If you were thinking of going: GO.

This year’s New Orleans event showcased all the best qualities of hosting industry conventions: deep sessions, great keynotes, an interesting venue and, of course, excellent networking opportunities.

We typically post answers to server-related questions or how-tos for common server-related tasks, but today giving Parallels a little love seems well deserved: many of our readers deploy at least one Parallels product, and many more could benefit from a deeper understanding of APS and the future of cloud computing. To both audiences I say this: Parallels Summit 2015 will be in San Antonio next February (9th to 11th); add it to your Google calendar today.

Here is what you missed this week:

Training Workshops:

• Parallels Automation Workshop with structured Lab activities and Business Consulting Best Practices (with Exams)
• Application Packaging Standard (APS 2) Training Workshop including Labs and Go-To-Market Best Practices (with Exam)
• Parallels Plesk Panel: Professional Level Training Workshop
• Parallels Cloud Server: Professional Level Training Workshop

Keynotes:
• Birger Steen (CEO of Parallels)
Putting the cloud to work for real businesses. Proven models for success in a true multi-service world.
• Blake Irving (CEO and Board Director of GoDaddy)
Mashing Up The Future of Cloud Services
• Abhijit Dubey (Partner at McKinsey & Company)
Big Business in Small Business: 5 Strategies to Win in Cloud Services
• Serguei Beloussov (Exec. Chairman and Chief Architect)
Cloud computing is now IT. Play to your strengths to win the business customer
• Nicholas G. Carr (Best-Selling Author of The Big Switch: Rewiring the World, from Edison to Google)
Building A Bridge to the Cloud

Breakout Sessions:
Track 1
• Show and Tell: Exclusive First Look at New APS 2 Packages
• APS Roadmap: Creating New Channels and Enhancing Integration
• Technical Deep Dive: Building APS Packages that Expand Market Opportunity
• APS Lifecycle: Building Packages, Landing Deployments and Selling Services
• Technical Deep Dive: Build User Experiences that Drive Sales and Maximize Usage Like Never Before
• APS Lifecycle: Accelerate Developer Productivity with New APS Tools and Resources

Track 2
• Key Market Trends to Refine Your Parallels Automation-based Cloud Services Offerings
• Insights into the SMB Customer Experience to Increase Wallet Share for Cloud Services
• Building a Channel Strategy That Goes Beyond the Online Marketplace
• Taking Advantage of APS Ecosystem and Microsoft to Differentiate Your Cloud Portfolio
• The Parallels Automation Vision and Roadmap
• Beyond the Sale: What Parallels is Doing to Help You Grow

Track 3
• Long Live Hosting: Using Solution-based Offers to Re-position Your Business and Reach New Audiences with Plesk
• Protect Your Network and Grow from the WordPress Opportunity
• Protect Your Assets with Server-to-site Security for Hosting
• Parallels Plesk Technical Deep Dive: Tips & Tricks
• Parallels Plesk Automation Technical Deep Dive
• Websites that Sell: Top Digital Optimization Strategies to Increase Online Traffic and Sales
• Best Practices to Extend Plesk Using the SDK
• Build a Multi-service Cloud Business with Parallels Plesk Automation

Track 4
• How to Boost Infrastructure Performance with Parallels Cloud Server
• Your IaaS, Your Choice: Delivered Through APS
• Parallels and OpenStack: Making it Work for Service Providers
• Parallels Cloud Storage Workshop

And, of course, the ever-popular attendee party, this time featuring Ra Ra Riot at the House of Blues (which you can probably imagine is pretty good in a town like New Orleans).

While attending, I was able to catch up with a few exhibitors and attendees that had nothing but good things to say about their Parallels Summit experience:

“It’s been fantastic! I’ve been here with the Internet Infrastructure Coalition as a guest of Parallels and it’s amazing to see how many member companies are a part of the Parallels community. We’ve been able to come together at events and really build a broad community of people that care about the future of the Internet and care about Internet freedom, and it’s great.” – Christian Dawson, Co-Founder and Board Chair, Internet Infrastructure Coalition

“We knew Parallels Summit would be an integral part of our channel development efforts and this year’s event didn’t disappoint. The speed and ease of which we were able to engage with other attendees is almost unbelievable. We will absolutely make this event a staple in our overall channel development strategy.” – Jennifer Cunningham, Partner Manager, McAfee Secure

“I enjoy the show for networking … and I actually love the New Orleans location, I think it’s just the right place to do things. House of Blues was very fun; being dragged through Maple Leaf was probably the best jazz bar I’ve ever been to…” – Sharon Koifman, President, Distant Job

Check out a few pictures from the exhibit hall (if you’re anything like me, you will immediately appreciate how accessible the people are; always room to engage, never a Black-Friday-esque stampede):