How to Manage Linux Server Timezone Settings

Setting the date and time correctly on your Linux server is important: server logs and other time-stamped information will all reflect the server's timezone. In most cases, you will want to set your server’s time to match your own local time, but if your server is remote or hosting sites for people primarily located in another timezone, you might choose a different one.

On Red Hat Enterprise Linux or CentOS, you can use the setup program or system-config-date (called redhat-config-date on older releases) to set the timezone. The setup program, although run from the command line, uses a semi-graphical interface to make it easier.

On a Debian-based distribution, use dpkg-reconfigure tzdata to set the timezone.

All of these tools include onscreen instructions that walk you through the process. Once you have changed the timezone, you can verify that your settings are correct by running “date”. For example:

$ date

Thu Apr 17 14:47:12 EDT 2014

On my system, it is 2:47 PM on Thursday, April 17, 2014, and the timezone is Eastern Daylight Time (EDT).
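
If you prefer to skip the interactive tools, one low-level method that works on most distributions is to point /etc/localtime at the appropriate file under /usr/share/zoneinfo (the zone name below is just an example):

# ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

Some administrators copy the file instead of symlinking it; either way, run “date” again afterward to confirm the change.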

Once your timezone is set, you should not have to touch it again unless you decide to change it. More information is available in the Red Hat and Debian time-configuration documentation.

How to Perform an In-Box Upgrade on an Ubuntu VPS

In-box upgrades on any operating system can be tricky. You are essentially updating all of the software, including the kernel, while keeping all of the current data. This makes it inherently risky, and some administrators argue that you should avoid in-box upgrades entirely and instead rely on backups and clean installs.

For Ubuntu systems, the clean install route would be troublesome if you are on the normal release cycle, which is every six months. Having to completely back up your system and reinstall a new image that often would mean far too much downtime. On the other hand, you could opt for an LTS (“Long Term Support”) install from the beginning. These releases come out every two years and are supported for up to five years.

If you still want to go the in-box upgrade route, you will need to check with your VPS provider for specifics on the kernel. Many virtual private servers have kernels controlled by the VPS system itself, not the OS. For example, your upgrade path might look like this:

  1. sudo do-release-upgrade
  2. halt (to shutdown the VPS)
  3. Install VPS kernel matching your new version
  4. Boot VPS

For specifics, check your VPS provider’s documentation. During the upgrade, Ubuntu starts a second SSH daemon on an alternate port (typically 1022), just in case your main SSH server stops working partway through. For more information about upgrading, see the official Ubuntu upgrade documentation.
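
As a rough sketch, the commands leading up to the release upgrade on a typical Ubuntu server look like this (do-release-upgrade is normally provided by the update-manager-core package):

sudo apt-get update
sudo apt-get dist-upgrade
sudo do-release-upgrade

Bringing the current release fully up to date first tends to make the release upgrade itself less error prone.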

Use Mpstat to Monitor Multiple Linux Server Processors

Probably the first Linux tool you think of when you want to monitor CPU usage is “top”. Top is light and gives you a good glimpse of processes that are consuming processing power. One thing top will not do by default, however, is show how each individual processor is doing and how much of its capacity is being used.

If, for example, you have 8 CPUs or cores, simply viewing the overall usage does not always give an accurate picture of how much of your resources are being consumed. Mpstat gives you each processor’s usage percentage, niceness and many other important details.

To use mpstat, type from the command line:

# mpstat -P ALL

This will show all of the processors/cores running on your system. The output will look something like this:

Linux 3.11.0-19-generic (serverschool) 04/14/2014 _x86_64_ (4 CPU)

11:52:47 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
11:52:47 AM all 10.10 0.03 1.09 0.68 0.00 0.11 0.00 0.00 0.00 87.98
11:52:47 AM 0 9.97 0.03 1.11 0.62 0.00 0.19 0.00 0.00 0.00 88.08
11:52:47 AM 1 10.19 0.04 1.07 0.74 0.00 0.10 0.00 0.00 0.00 87.86
11:52:47 AM 2 9.82 0.03 1.11 0.71 0.00 0.08 0.00 0.00 0.00 88.25
11:52:47 AM 3 10.42 0.03 1.08 0.66 0.00 0.07 0.00 0.00 0.00 87.74

On this quad-core system, “all” represents total usage, and 0 through 3 represent the four processor cores and their individual usage. For more information about mpstat, see the online documentation or type “man mpstat” from the command line.
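
mpstat is part of the sysstat package, so if the command is not found you can most likely install it with your package manager, and you can also pass an interval and count to watch usage over time:

# yum install sysstat            (RHEL/CentOS)
$ sudo apt-get install sysstat   (Debian/Ubuntu)
# mpstat -P ALL 2 5              (report every 2 seconds, 5 times)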

Setup and Configure CentOS Server Part 2

In part one, we began by changing the root password and then creating an account that will be able to escalate to root privileges when needed. Now, you need to make sure that only that user can become root and not anyone else.

The easiest way to do this is to use the “wheel” group. You can configure CentOS to allow only users in that group to run the “su” command and become root.

First, add your user to wheel:

usermod -aG wheel <username>

Replace “<username>” with your actual username (the -a flag appends the wheel group without dropping any of the user’s existing groups). Next, you need to tell PAM (the Pluggable Authentication Modules framework) to allow only wheel users to become root. Edit /etc/pam.d/su and uncomment the line:

auth required /lib/security/$ISA/pam_wheel.so use_uid

(On newer CentOS releases the line may simply read “auth required pam_wheel.so use_uid”.)

Your system is now set up to let you log in as your own user and then become root. The final step for good security practice is to disable direct root logins completely. Before you proceed, test the new setup to make sure you can still gain root access through your new user, as shown below.
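
For example, from a second terminal (keeping your current root session open in case something goes wrong, and using your.server.address as a placeholder), you might test it like this:

ssh <username>@your.server.address
su -

If su prompts for the root password and drops you into a root shell, the wheel restriction is working; a user outside the wheel group should be refused.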

To disable root logins via SSH, do the following:

1. edit /etc/ssh/sshd_config

2. Remove the “#” from this line and change “yes” to “no”

#PermitRootLogin yes

Change to:

PermitRootLogin no

Save the file and restart SSH. Direct root logins over SSH will now be rejected outright, which helps protect your system from brute-force attacks and other attempts to guess the root password.
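
On CentOS, for example, restarting the SSH daemon looks like this:

# service sshd restart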

In part three of the series, we will move further into CentOS server setup and learn about some of the software you will need.

How to Update Your OpenSSL Version to Fix Heartbleed Bug

Heartbleed, the highly publicized OpenSSL bug with the unfortunate name, has a lot of system administrators scurrying to fix the problem. If you have not heard about it by now, it is a security hole in OpenSSL’s TLS heartbeat extension that a cyber criminal can use to read up to 64 KB of memory from a connected client or server with each malicious heartbeat request. The bug is present in OpenSSL versions 1.0.1 through 1.0.1f and in 1.0.2-beta1. You can fix it by upgrading to 1.0.1g or 1.0.2-beta2.

On a Linux server, you can check your OpenSSL version from the command line and update it through your package manager. Depending on your distribution, the procedure will vary slightly.

First, do the following:

# openssl version -a

or

$ sudo openssl version -a

The output will look like:

OpenSSL 1.0.1e 11 Feb 2013
built on: Wed Jan 8 20:58:47 UTC 2014
platform: debian-amd64

To upgrade your Debian-based system, including Ubuntu, refresh your package lists and then run:

$ sudo apt-get update

$ sudo apt-get dist-upgrade

For Red Hat and CentOS, you would run:

# yum update

If your distribution does not update to a newer version of OpenSSL, you may be running a release that is no longer maintained, or your distribution may not have fixed the problem yet. Moreover, if you compiled OpenSSL yourself, you might need to recompile it with the “-DOPENSSL_NO_HEARTBEATS” flag to disable the vulnerable heartbeat feature.
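
One more caveat worth checking after the update: many distributions backport the Heartbleed fix without changing the upstream version string, so the version may still read 1.0.1e. Look at the “built on” date instead, and restart anything linked against OpenSSL (or simply reboot). For example (the Apache restart line assumes a Debian-style service name):

$ openssl version -a
$ sudo service apache2 restart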

Setup and Configure CentOS Server Part 1

CentOS is essentially a free implementation of the open source code from Red Hat Enterprise Linux. Logos and other trademarks aside, CentOS is RHEL at its core without the licensing fees. As such, CentOS has become very popular among server administrators as an ideal Linux server solution. This brief tutorial will explain how to get started with a new CentOS installation.

1. Change your root password – When you sign up for a dedicated server, your host probably provided you with a randomly generated root password. If you installed CentOS yourself, you can skip this step. For security, however, it is best to change a password that was sent to you via email. Simply run this command through SSH:

# passwd

It will prompt you twice to type your new password.

2. Create an admin user

This account will act as your gateway to root. You do not want to have to log in as root ever again after this first time. Therefore, create a new user that can escalate to root. If you want your username to be “bobalina”, type:

# adduser bobalina

You then need to create a password for the new user that is different from the root password:

# passwd bobalina

In part two, we will continue the initial setup of users and authentication.

How to Back Up Configuration Files in Webmin

Most of the data that needs backing up on a Linux server is stored in user home directories. This usually includes virtual web server directories as well, depending on the web server and settings you are using. One exception to this rule is configuration files, which live in /etc and other places. When it comes time to back up, many people back up database files and home directories but forget about configuration files. Webmin has a solution.

To back up configuration files, log in to Webmin and navigate to “Webmin – Backup Configuration Files”. Next, select the Webmin modules that you want to back up, such as the Apache web server, BIND DNS, and so on. You then need to specify where you want the backups saved: locally to a file, to an FTP or SSH server, or downloaded through your web browser. Finally, specify whether you want to back up the server configuration files, the Webmin module config files, or both, and click “Backup Now”.

In addition to manual backups, the second tab in the module allows you to schedule backups. The settings are pretty much the same, except the last section gives you a cron tool that lets you specify when backups should occur. You can use simple hourly, daily, or monthly schedules, or pick specific dates and times. When you are finished, click Create, and your configuration files will be backed up automatically.

How to Install Softaculous in cPanel/WHM

Softaculous is a handy add-on for cPanel that gives your users the ability to quickly and nearly effortlessly install web application scripts. You can administer it through WHM, but once it is installed, any of your cPanel users can do one-click script installations. Installing Softaculous is relatively easy. Just follow these steps.

First, you should make sure that your firewall will allow downloads from *.softaculous.com. You will also need to have ionCube loaders enabled for PHP.
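
A quick way to check for the ionCube loader (assuming the server’s PHP binary is in your path) is:

php -v | grep -i ioncube

If the loader is enabled, the version banner mentions the ionCube PHP Loader; no output means it is not loaded.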

From the command line, type:

wget -N http://files.softaculous.com/install.sh

chmod 755 install.sh

./install.sh

You should now be able to log in to WHM, navigate to Plugins and find Softaculous – Instant Installs. If all goes well, you should see information about your software and server. Your installation is complete, and your users can now access it from within cPanel according to your settings and hosting plans.

For more information about installing Softaculous, visit softaculous.com for online documentation and help.

Can I Upgrade the Kernel on My VPS?

It is generally understood that a server running Linux needs to have a relatively recent kernel version or at least one that has been securely patched to fix any vulnerabilities. For dedicated servers, a kernel upgrade is no big deal: a simple install and reboot, and you are done. For a virtual private server, it can differ depending on the technology and method of deployment your provider uses.

On OpenVZ, for example, the virtual OS does not actually use its own kernel. It relies on the host’s kernel. Therefore, upgrading your kernel package will not actually have an effect and might even produce errors. Instead, you can either depend on your hosting provider to update the kernel periodically, or use whatever internal tool the host provides to update it to the latest version.
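
A quick way to see whether kernel packages inside the VPS matter at all is to compare the running kernel with what the package manager thinks is installed:

uname -r                   # kernel actually running
rpm -q kernel              # kernels installed via RPM (RHEL/CentOS)
dpkg -l 'linux-image*'     # kernels installed via dpkg (Debian/Ubuntu)

On an OpenVZ container, uname -r typically reports the host’s kernel, which no package inside the guest will match.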

Generally speaking, Xen has the same issue as OpenVZ: you are usually at the mercy of your host’s kernel version. If you find that the kernel is not being updated, you should contact your host to have it done. There is, however, a way to achieve some kernel independence on Xen using a tool called pygrub, which lets a paravirtualized guest boot a kernel installed inside the VM itself.

Because of the nature of VPS, it is important that you choose a host you know will stay on top of security. Like any hosting, a website is only as secure as the server hosting it. While much of the VPS security is in your hands, you still need the help of your host.

Parallels Summit – If you were thinking of going: GO.

This year’s New Orleans event showcased all the best qualities of hosting industry conventions: deep sessions, great keynotes, an interesting venue, and, of course, excellent networking opportunities.

We typically post answers to server-related questions or how-tos for common server tasks, but today Parallels deserves a little love: many of our readers deploy at least one Parallels product, and many more could benefit from a deeper understanding of APS and the future of cloud computing. To both audiences, I say this: Parallels Summit 2015 will be in San Antonio next February (9th to 11th); add that to your Google calendar today.

Here is what you missed this week:

Workshops:
• Parallels Automation Workshop with structured Lab activities and Business Consulting Best Practices (with Exams)
• Application Packaging Standard (APS 2) Training Workshop including Labs and Go-To-Market Best Practices (with Exam)
• Parallels Plesk Panel: Professional Level Training Workshop
• Parallels Cloud Server: Professional Level Training Workshop

Keynotes:
• Birger Steen (CEO of Parallels)
Putting the cloud to work for real businesses. Proven models for success in a true multi-service world.
• Blake Irving (CEO and Board Director of GoDaddy)
Mashing Up The Future of Cloud Services
• Abhijit Dubey (Partner at McKinsey & Company)
Big Business in Small Business: 5 Strategies to Win in Cloud Services
• Serguei Beloussov (Exec. Chairman and Chief Architect)
Cloud computing is now IT. Play to your strengths to win the business customer
• Nicholas G. Carr (Best-Selling Author of The Big Switch: Rewiring the World, from Edison to Google)
Building A Bridge to the Cloud

Breakout Sessions:
Track 1
• Show and Tell: Exclusive First Look at New APS 2 Packages
• APS Roadmap: Creating New Channels and Enhancing Integration
• Technical Deep Dive: Building APS Packages that Expand Market Opportunity
• APS Lifecycle: Building Packages, Landing Deployments and Selling Services
• Technical Deep Dive: Build User Experiences that Drive Sales and Maximize Usage Like Never Before
• APS Lifecycle: Accelerate Developer Productivity with New APS Tools and Resources

Track 2
• Key Market Trends to Refine Your Parallels Automation-based Cloud Services Offerings
• Insights into the SMB Customer Experience to Increase Wallet Share for Cloud Services
• Building a Channel Strategy That Goes Beyond the Online Marketplace
• Taking Advantage of APS Ecosystem and Microsoft to Differentiate Your Cloud Portfolio
• The Parallels Automation Vision and Roadmap
• Beyond the Sale: What Parallels is Doing to Help You Grow

Track 3
• Long Live Hosting: Using Solution-based Offers to Re-position Your Business and Reach New Audiences with Plesk
• Protect Your Network and Grow from the WordPress Opportunity
• Protect Your Assets with Server-to-site Security for Hosting
• Parallels Plesk Technical Deep Dive: Tips & Tricks
• Parallels Plesk Automation Technical Deep Dive
• Websites that Sell: Top Digital Optimization Strategies to Increase Online Traffic and Sales
• Best Practices to Extend Plesk Using the SDK
• Build a Multi-service Cloud Business with Parallels Plesk Automation

Track 4
• How to Boost Infrastructure Performance with Parallels Cloud Server
• Your IaaS, Your Choice: Delivered Through APS
• Parallels and OpenStack: Making it Work for Service Providers
• Parallels Cloud Storage Workshop

And, of course, the ever-popular attendee party, this time featuring Ra Ra Riot at the House of Blues (which you can probably imagine is pretty good in a town like New Orleans).

While attending, I was able to catch up with a few exhibitors and attendees that had nothing but good things to say about their Parallels Summit experience:

“It’s been fantastic! I’ve been here with the Internet Infrastructure Coalition as a guest of Parallels and it’s amazing to see how many member companies are a part of the Parallels community. We’ve been able to come together at events and really build a broad community of people that care about the future of the Internet and care about Internet freedom, and it’s great.” – Christian Dawson, Co-Founder and Board Chair, Internet Infrastructure Coalition

“We knew Parallels Summit would be an integral part of our channel development efforts and this year’s event didn’t disappoint. The speed and ease of which we were able to engage with other attendees is almost unbelievable. We will absolutely make this event a staple in our overall channel development strategy.” – Jennifer Cunningham, Partner Manager, McAfee Secure

“I enjoy the show for networking … and I actually love the New Orleans location, I think it’s just the right place to do things. House of blues was very fun; being dragged through Maple Leaf was probably the best jazz bar I’ve ever been to…” – Sharon Koifman, President, Distant Job

Check out a few pictures from the exhibit hall (if you’re anything like me, you will immediately appreciate how accessible the people are; always room to engage, never a Black-Friday-esque stampede):

Using Fdisk to View Partition and Disk Information

If you ever need information about your attached media, whether it is a hard disk, solid state drive, or something else, you can use fdisk to view it. It is quick, easy, and can give you a lot of information.

To view currently attached disks, run this command as root:

fdisk -l

(That is a lowercase L)

Your output should look something like this:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008cdc1

Device Boot Start End Blocks Id System
/dev/sda1 2048 17598463 8798208 82 Linux swap / Solaris
/dev/sda2 * 17598464 976771071 479586304 83 Linux

In this example, “/dev/sda” is the device name that Linux uses for the disk. It is a 500 GB disk with two partitions: /dev/sda1, a swap partition, and /dev/sda2, a bootable Linux partition. The “Id” number and the “System” column describe the partition type, not the specific file system stored on the partition.
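
If you need to know the actual file system on a partition, a tool such as blkid will report it:

# blkid /dev/sda2

The output includes a TYPE field (ext4, xfs, and so on) along with the partition’s UUID.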

You can also use fdisk to create and manipulate partitions. For more extensive information on fdisk, see the online documentation or type “man fdisk” from the command line.

Monitor System Services with Webmin

A dedicated server typically runs several background services or daemons. These may include but are not limited to your web server, database server, and mail server. All of these services need to be running 24/7 to keep them available for your users. When one of the services goes down, you need to know. Webmin has a built-in monitoring system that can alert you when one of them is not working.

To monitor services in Webmin, follow these steps:

  1. Login to Webmin using your administrative account
  2. Navigate to Others — System and Server Status
  3. If you see the service you want to monitor already listed, click on it. Otherwise, use “Add monitor of type” to add it
  4. Select number of failures before reporting (default is one)
  5. Check “Email” as notification method. It should use your system email address. If you want to use another, enter it on the next line
  6. Specify any commands you want to run if the service is down or comes back up
  7. Click “Save”

You should now have a working system monitor. Assuming you used the default Webmin installation, you can even monitor your web server, since Webmin runs on its own web server. For more information about system monitoring in Webmin, see the online documentation.

5 Dumb Mistakes to Avoid with a Dedicated Server

This list is essentially useless. It is useless because no one would actually do the things on the list. Right? I joke of course, but these mistakes are ones you should definitely avoid. Some may seem like common knowledge, but they still happen far too often.

1. Using “password” as a password – For that matter, do not use “1234”, “bigdaddy”, or anything else easy to guess (and quite embarrassing).

2. Leaving root logins enabled – You are just asking for someone to attempt to login as root. Even if they do not succeed, you could avoid the attempt simply by disabling root logins.

3. World-writable file permissions – When should you chmod something 777? Almost never. It should rarely be necessary, even though a few poorly designed applications make it hard to avoid. The more lenient you are with file permissions, the more likely you are to open up a huge security hole (a quick way to hunt down world-writable files appears after this list).

4. Running Telnet – Add to this list any number of antiquated unsecured protocols that make your server easy pickings for cyber criminals. Instead, favor secure protocols like SSH.

5. Going straight to production – If it is truly your first time, take a while to familiarize yourself with the OS and applications. Learn the best practices for security, and make sure you are fully prepared before you open up your server to the world.
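
As mentioned in point 3, here is one way to hunt down world-writable files under a web root (adjust the path for your layout):

# find /var/www -xdev -type f -perm -0002

Anything this prints deserves a second look before you decide the permissions really need to stay that loose.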

The Benefits of Multi-Core Processors in Servers

Boosting a server’s processing power by adding cores is nothing new. In the old days, you had to add multiple physical processors. Today, most server processors come with multiple cores. Installing two quad-core processors, for example, gives you eight cores, roughly eight times the processing capacity of a single core for well-parallelized workloads. But are there really noticeable benefits to having multiple cores? The following are a few advantages.

1. Application threads – Multi-core processors allow you to allocate resources to specific applications. You can dedicate entire cores to a single process if necessary.

2. Energy efficiency – As opposed to multiple physical processors, multi-core processors use less energy and require less cooling. The cores share I/O, cache, and some other features.

3. Virtualization – It is hard to find a server nowadays that is not using some form of virtualization. It just makes economic and technical sense. Virtualization works much better when you can dedicate cores to virtual machines. On a quad core system, you could theoretically assign each core to one of four virtual machines.

4. Cost – Ultimately, it will save you money because of points 1 through 3. You do not have to dedicate an entire machine to a resource-intensive application. You do not spend as much on energy costs. And you do not have to have as many physical servers because of virtualization.

 

Storing Web Cache on a RAM Disk: Part Two

In part one, you learned how to create a basic RAM disk and how to make it permanent by creating an entry in /etc/fstab. In this section, you will learn how to configure Apache to serve certain files from that RAM disk.

In this example, we will only send images to the RAM drive. Special thanks to NixCraft for the info.

First, copy your images to the RAM disk you created in part one:

cp /home/user/www/public_html/images/*.jpg /webcache

Next, you will need to edit your Apache configuration to point any visitors to images.yourdomain.tld to the right place. This assumes you want to cache your images and that you have created a subdomain for those image files.

<VirtualHost 1.2.3.4:80>
    ServerAdmin admin@yourdomain.tld
    ServerName images.yourdomain.tld
    DocumentRoot /webcache
    #ErrorLog /var/log/httpd/images.yourdomain.tld_error.log
    #CustomLog /var/log/httpd/images.yourdomain.tld_access.log combined
</VirtualHost>

Then, reload Apache so it picks up the new virtual host:

# service httpd reload

Finally, because the RAM disk is emptied on every reboot, you will need to create a script called initwebcache.sh that rebuilds it at boot and call that script from /etc/rc.local. You can use code similar to this:

#!/bin/sh
# Recreate the RAM disk, mount it, and repopulate the image cache at boot
mkfs -t ext2 -q /dev/ram1 8192
[ ! -d /webcache ] && mkdir -p /webcache
mount /dev/ram1 /webcache
/bin/cp /home/user/www/public_html/images/*.jpg /webcache

Make sure you make the script executable:

# chmod +x /path/to/initwebcache.sh
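
Then add a line to /etc/rc.local that calls the script at boot (on Debian-style systems, keep it above any closing “exit 0” line):

/path/to/initwebcache.sh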

You should now have a working cache for your images that loads them into RAM and gives your website viewers a much faster experience.

Storing Web Cache on a RAM Disk: Part One

One way to speed up your web server is to cache frequently accessed pages and content. This is much faster because dynamic pages do not have to be recreated every time someone accesses them. Instead, the cached HTML files are loaded at a much faster rate. Using a RAM disk, you can make that caching even faster.

A RAM disk is a virtual drive that is entirely located within the system’s random access memory (RAM). Nowadays, most servers have a ton of RAM, so allocating a small portion of it to cache web files is no big deal. In fact, it may actually cost more memory to dynamically create pages from the database than it does to store compressed HTML files.

On Linux, you can create a RAM disk with a single command, such as this one:

mount -t ramfs none /path/to/directory

Upon reboot, however, this RAM disk will be gone. To make it permanent, you need to add it to /etc/fstab:

webcache /path/to/directory ramfs defaults 0 0
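
Note that the mount point has to exist before you mount anything on it; a minimal sequence, sticking with the placeholder path above, might be:

mkdir -p /path/to/directory
mount -t ramfs none /path/to/directory
mount | grep /path/to/directory

The final command simply confirms that the ramfs mount is active.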

It should be noted that no matter what you do, a RAM disk, even a permanent one, will empty itself upon reboot. That is the nature of RAM. This should not be a problem with a web server, but if you need to keep cached files, even after a reboot, you can configure Linux to store them on disk before shutting down. In Part Two, you will learn how to configure Apache to use your RAM disk.

Glances System Monitoring Tool for Linux

There is certainly no shortage of system monitoring tools for Linux. If you are the type who likes to know exactly what is going on with your server, Linux is probably ideal for you. Many of these tools are very specific, focusing on one or two aspects like CPU, Memory or Disk usage. Others cover a wide range. Glances fits into the latter category, providing information about your kernel, CPU, system load, processes, memory usage and more.

Glances is free and open source and relies on curses to display semi-graphical information on the server console or in a terminal. It is written in Python and is available in many Linux distributions' package repositories.

To install on RHEL/CentOS, you will need the EPEL repository enabled. Then type:

yum -y install glances

Glances is available in the default Ubuntu repository. Simply type:

sudo apt-get install glances

You can run glances with no options and get quite a bit of information, such as load per CPU, memory usage, swap usage, network usage, disk I/O, disk space usage and running processes. The app updates dynamically, so you can see process usage in near real time. To exit, press “q”.
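
For example, you can change the refresh interval (in seconds) with the -t option:

glances -t 5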

Glances features a ton of options you can use to enhance your experience. For a full list of options and detailed help, see the online documentation.

Find Out When a Unix/Linux User Last Logged In

Keeping track of your users’ activities may seem a little intrusive, but it is necessary for security-conscious system administrators. You should know when users are logged onto the system, and if an account is up to anything suspicious, knowing when that user logged in might very well save your system. It may also be useful to know the last time a particular user logged on. This brief guide will explain how to do just that.

The command to view a user’s last login is simply “last”. If you just type “last” from the command line with no options, it will show you the last several logins of all users and also system reboots. Type “last” followed by the username of the user in question, and you will get only the logins from that user. For example:

last tavis

This will display the recent logins for my account on my system. The output will look like this:

tavis pts/1 :0 Mon Feb 10 22:07 still logged in
tavis pts/0 :0 Mon Feb 10 18:32 still logged in
tavis pts/0 :0 Mon Feb 10 18:23 – 18:23 (00:00)
tavis pts/0 :0 Mon Feb 10 12:56 – 18:22 (05:25)
reboot system boot 3.5.0-27-generic Mon Feb 10 12:54 – 22:07 (09:12)

The last command is useful for finding out exactly when a user was last online, which can help safeguard your system and protect the users themselves.
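
A related command, lastlog, prints the most recent login (or “Never logged in”) for every account on the system, which makes a handy quick audit:

lastlog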

 

How Low Can I Set My DNS TTL?

TTL (Time To Live) tells caching DNS resolvers how long they may keep a record before looking it up again. You may find that your zones’ default TTL is quite long, perhaps even 48 hours, which means that any changes to your DNS zone files can take up to 48 hours to reach all users. Because that can be very inconvenient when you want to change something fast, you might be tempted to make your TTL value lower. Is this a good idea, and how low can you go?

Ideally, you want to get the job done fast without causing any problems for your server. Making the TTL too long means you have to wait any time you create a new subdomain or perform other minor tasks. Making it too short adds unnecessary load to your DNS server, uses more bandwidth, and runs against common operational recommendations (some resolvers may even ignore extremely low values).

To remedy this dilemma, some system administrators will lower the TTL whenever they are doing DNS work and then raise it back to a reasonable level when they are finished. That way, you get your job done in a timely fashion but do not leave your DNS server in a constant state of flux when you are not even making changes.
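
If you manage a BIND-style zone file, the TTL can be set zone-wide with the $TTL directive or per record; a temporary five-minute TTL before planned changes might look like this (the address is a placeholder):

$TTL 300
www    IN    A    203.0.113.10

Remember to raise $TTL back to its normal value, and to bump the zone serial, once the changes have settled.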

For more information about this issue, you can read a quite detailed book on the subject online.

 

Linux OS Reboot and Service Restart Guide

Your brand new Linux server may seem a bit intimidating at first, but with a little training and practice, you will find it to be very manageable. The following is a quick guide to restarting services and your OS itself.

Reboot – Rebooting your server is quite simple. On the console, you can press CTRL+ALT+Delete. If you are accessing your server remotely via SSH, simply type the following while logged in as root:

reboot

All logged in users will be informed that your system is rebooting.

Another method of rebooting is to use the shutdown command with the “-r” option. This is useful if you want a delayed reboot and if you want to add a custom message to users. For example, to reboot in 5 minutes, type:

shutdown -r +5 "This server will reboot in 5 minutes. Please save your work and log off."

Service Restart – The exact command for restarting services can vary by distribution. For RHEL/CentOS/Fedora and Ubuntu, you would type:

service [servicename] restart

So for Apache on CentOS, it would be:

service httpd restart
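
On newer distributions that have moved to systemd, the equivalent command is:

systemctl restart httpd

(On Ubuntu the Apache service is named apache2 rather than httpd.)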

Always keep in mind that any service related to your connection must be available for you to stay connected. In other words, if you restart networking, you will lose your connection, so you had better hope it comes back online. The same goes for rebooting the entire server.

How to Add Repositories to Ubuntu Server

Ubuntu is well known for being one of the most popular Linux-based desktop operating systems, but it has also become popular for server use. Ubuntu is one of the most widely used platforms for deploying the highly acclaimed OpenStack cloud software, and many web hosts now offer it as an option for their VPS and dedicated server clients. This brief tutorial will explain how to add repositories to Ubuntu from the command line.

Presumably, if you have Ubuntu installed on a server, you will not have access to a graphical interface. Therefore, you need to know how to add repositories from SSH or directly from the console.

First, you should know where the software repository list is located. You can find it at /etc/apt/sources.list. Before you start editing it, you should make a backup copy:

sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup

Inside the file, you will find a list of repositories. Most likely you will only see the default ones, unless you have added others. To add repositories not listed, simply start on a new line and paste in the information so that it looks like this:

deb http://archive.canonical.com/ubuntu saucy partner
deb-src http://archive.canonical.com/ubuntu saucy partner

To add third-party PPAs (this requires the add-apt-repository tool, provided by the software-properties-common package on newer releases or python-software-properties on older ones), you can use the following command structure:

sudo add-apt-repository ppa:<repository-name>

After you add any repositories, you need to refresh the package list with:

sudo apt-get update

You can then install any packages available for your system in those repositories.
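
Putting it all together, a typical sequence for a PPA looks like this (ppa:example/ppa and example-package are placeholders, not a real archive):

sudo add-apt-repository ppa:example/ppa
sudo apt-get update
sudo apt-get install example-package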

Use VLC to Stream Live Media

You do not have to pay big money to stream video or audio over the Internet. If you already have a dedicated server or VPS, you can use VLC to stream media to your users.

VLC is a free and open source video player and streaming media server. On the client side, you can use it to view videos and listen to audio. On the server side, it becomes a powerful tool for on-the-fly media streaming.

First, you need to make sure VLC is installed on your server. You can follow the instructions at tecmint to get it installed on CentOS, RHEL or Fedora. For Debian and Ubuntu, type:

sudo apt-get install vlc

Once installed, you can set up your stream however you want using VLC’s advanced command line interface. For example, from the server you could type:

vlc -vvv input_stream --sout '#standard{access=http,mux=ogg,dst=server.example.org:8080}'

On the client side, your users would connect to http://server.example.org:8080
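
As a concrete sketch, streaming a single file over HTTP on port 8080 with the console-only cvlc binary might look like this (video.mp4 is a placeholder):

cvlc video.mp4 --sout '#standard{access=http,mux=ogg,dst=:8080}'

Leaving the host out of dst makes VLC listen on all interfaces.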

For more examples and more detailed explanations of command line functionality, see the VLC online documentation.

 

Is an In-Box OS Upgrade Worth the Risk?

Many Linux operating systems offer the ability to upgrade to a new version of the OS without wiping the hard drive and reinstalling. On a dedicated server, you might call this an in-box upgrade. Generally, a new version of your Linux distribution will include a newer kernel version and newer software. Is such an upgrade worth the risk?

First, it is important to understand what will happen when you upgrade your system. Generally, updates will install patched versions of your software. In some cases, you may even see slightly newer versions. An upgrade will install the latest version of your distribution, which usually has much later snapshots of your software. You might jump a whole version in Apache, PHP and other mission-critical applications. This can be risky.

You also need to consider how much maintenance time you will need for such an upgrade. You will have to dedicate a significant amount of resources to downloading and installing packages. It could take hours or even an entire day depending on network traffic.

Finally, it boils down to value vs. risk. A new upgrade may give you newer software, but it might not be enough of a change to warrant the risk. The only situation in which an upgrade is absolutely necessary is when your OS will no longer be supported or receive security updates from the developers. In that case, an upgrade is a must.

Is it Wise to Oversell to Your VPS Customers?

Overselling is not a new concept. Every web hosting company that offers “unlimited” anything (bandwidth, disk space, etc.) is overselling. There is always a limit, even if they do not tell their customers. Their bet is that at least some customers, maybe even most, will not use up all the resources at once. It is a similar bet the bank makes in hoping that all its customers do not withdraw their money at once.

Is it wise to oversell VPS resources? That is a difficult question to answer. On the one hand, it allows you to put more customers on a node, thus generating more revenue. On the other hand, it is slightly dishonest, while not an outright lie. Most VPS customers want a VPS so that they can have a more dedicated server feel. With shared hosting, they expect to share resources with others. With a VPS, they expect those resources to be guaranteed.

If played carefully, you can get away with overselling VPS resources with no complaints from your customers. One slip-up, however, can damage your reputation as a reliable host, especially if your overselling results in downtime for customers who thought they were getting dedicated resources. Ultimately, it becomes a percentages game. It might be safe to oversell resources by 15 percent but dangerous to oversell by 30 percent. You will have to decide on your own if it is worth the risk.

How to Tunnel a MySQL Connection Through SSH

In a previous post, we highlighted some of the benefits of tunneling with SSH. Now, you will learn how to use an SSH tunnel to connect to MySQL remotely. With this method, you will connect to SSH, forwarding all information on port 3306 (the MySQL port) through this encrypted connection.

To get started, connect via SSH:

ssh -L 3306:localhost:3306 user@yourdomain.tld

Once you have the tunnel open, you can connect to MySQL as you normally would to manage your database, whether that is MySQL Administrator, Query Browser, or whatever tool you prefer. Just make sure the client connects to 127.0.0.1 on port 3306 (a TCP connection, not a local socket) so the traffic actually passes through the tunnel.
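
If MySQL is already running on your workstation, local port 3306 will be taken; a common variation forwards a different local port instead (dbuser is a placeholder):

ssh -L 3307:localhost:3306 user@yourdomain.tld
mysql -h 127.0.0.1 -P 3307 -u dbuser -p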

This method is great if you want to use a graphical desktop program to manage MySQL (rather than phpMyAdmin or the command line) but do not want to risk having an unsecured connection. SSH can fix all of that for you.

 

The Benefits of SSH Tunneling

SSH tunneling is a method of connection that, as the name implies, tunnels your information stream through SSH. The result is an encrypted and secure transaction, even if the standard connection method itself is not normally secure. Many system administrators may use SSH tunneling for file transfer, database management and many other tasks.

The benefits of SSH tunneling are:

  1. A secure transaction even for connections that normally would not be
  2. The ability to access a connection that might be blocked by your place of employment or other institution. You can use a different port and also mask the information from prying eyes.
  3. As a result of #2, you have increased privacy
  4. Better security management – If you are allowing other users to connect to your server, you can rest a little easier knowing that they are using secure SSH accounts
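
As a generic illustration (every hostname here is a placeholder), forwarding local port 8080 through an SSH gateway to an otherwise unreachable internal web server looks like this:

ssh -L 8080:intranet.example.com:80 user@gateway.example.com

Browsing to http://localhost:8080 then carries the traffic through the encrypted tunnel.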

SSH tunneling is secure, affordable, and easy to set up. It is not the only method out there, but it is certainly one of the best.

PHP Troubleshooting Tips

PHP is a versatile server side scripting language that powers many of the world’s websites. Sometimes, however, things can go wrong. The following are tips to help you troubleshoot your PHP installation.

Scripts do not execute – If you load a PHP page and see the contents of the file rather than the proper script output, you might need to make sure the PHP module is enabled on your web server. You should also check that the “.php” extension is mapped to the PHP handler in your Apache configuration.

For example, to enable the PHP module on a Debian-based system, run one of the following (depending on how the module is named on your release):

# a2enmod php
or
# a2enmod php5
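
After enabling the module, restart Apache so the change takes effect:

# service apache2 restart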

The Apache configuration also needs to map the “.php” extension to the PHP handler. On a Red Hat-style mod_php setup, for example, the relevant lines look like this:

<IfModule mime_module>
AddHandler php5-script .php
AddType text/html .php
</IfModule>

Internal Server Error – In pretty much all cases, an internal server error is a server-side problem, not something caused by the user’s web browser. It could be related to permissions. Make sure the PHP files you are running (and any associated files) have the proper permissions that allow your web server to execute them.


It could also be another server-side problem. If you have php-cli installed, you can run the script directly from the command line to see any errors:

$ php your-code.php

If there is an error in execution, you will likely see it in the output. If there is not an error, it will simply display the HTML output.
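
You can also syntax-check a script without executing it using PHP’s built-in lint mode:

$ php -l your-code.php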

Come back soon for more helpful server troubleshooting tips.

 

How to Fix Out of Memory Script Errors

A common problem you might face when running PHP scripts is an “out of memory” error. It usually looks something like this:

PHP Fatal error: Out of memory (allocated 51795435) (tried to allocate 84524 bytes) in /home/user/public_html/randomdirectory/file.php on line 750

First, it is important to understand what this error means. PHP is configured to only allow a certain amount of memory to be allocated to a script. This is a security and performance precaution so that a single script does not exhaust all of your RAM. In most cases, the PHP memory limit is set much lower than it needs to be by default.

To change the global PHP memory limit, edit your php.ini file. In CentOS, it is usually located at: /etc/php.ini.

You should see a line in the file that looks like:

memory_limit = 16M

You can raise that amount to whatever your script needs to run properly, while keeping it low enough that a runaway script cannot exhaust your system’s memory. For example, you can set it to:

memory_limit = 64M
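
Keep in mind that with mod_php the new limit only takes effect after the web server reloads its configuration; on CentOS with Apache, for example:

# service httpd restart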

This should fix your problem. If your script continues to need more memory no matter how much you allocate to it, the script itself may be the problem, and you should investigate further.

 

What is a Virtual Storage Appliance (VSA)?

The “Information Age” has left us with more information than we can easily handle. As a result, storage has become an industry in itself, and it is no longer just for backup and recovery. Simple access and retrieval can be difficult when you have terabytes of data to deal with. Virtualization has presented its own unique challenges for storage. If you have saved on hardware by running a virtual machine, do you really want to go out and purchase hardware to add storage to it? A virtual storage appliance (VSA) can help.

A VSA is a storage controller that runs on the virtual machine (VM) and allows you to create shared storage without needing to purchase additional hardware. The VSA can tap into several virtual machines and utilize available storage on each. It can then create a shared virtual storage pool for all of the virtual machines. The result looks like a networked storage appliance without the cost of actually purchasing one or maintaining a physical machine.

Companies that offer virtualization systems such as VMware and HP now sell VSA technology to small and medium businesses that cannot afford additional hardware and large enterprises that need to consolidate storage.

Speed Up Your Web Applications with Memcached

Most modern websites are dynamic in nature. They are updated on the fly, rely on database backends and often feature some type of support for user interaction. While those aspects of dynamic websites are all positive, the negative side is that they require more system resources, especially memory. A more resource-intensive web application can slow down your server and make user experiences less pleasant. Memcached is a tool that is designed to help alleviate some of those slowdowns.

Its own website describes Memcached as a “free and open source, high-performance, distributed memory object caching system”. Memcached lifts some of the strain off your server by keeping small chunks of frequently accessed data in memory, which speeds up page rendering.

To install memcached on Red Hat Enterprise Linux or CentOS, you need the following packages:

  • memcached
  • memcached-selinux
  • perl-Cache-Memcached
  • php-pecl-memcache
  • python-memcached

The first step is to enable the EPEL and Remi repositories. On CentOS 6, for example, you would run the following commands:

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm

sudo rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm

You should then check /etc/yum.repos.d/remi.repo to make sure it is enabled.

Next, install the packages:

yum install memcached php-pecl-memcache memcached-selinux
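
Before moving on to configuration, you can start the daemon, set it to start at boot, and confirm it is answering on its default port, 11211 (CentOS 6-style commands shown):

# service memcached start
# chkconfig memcached on
# echo -e "stats\nquit" | nc 127.0.0.1 11211

The stats command should print a list of STAT lines if the daemon is reachable.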

Finally, you will need to configure memcached according to your preferences; the Memcached project provides examples and full documentation online. After configuration, Memcached will be ready to lighten the load on your server.