What are the Major Advantages of Business Colocation?

As a business, hosting is a critical concern. It may well be what keeps your operations running smoothly.

You have many different hosting options all the way from shared hosting to dedicated and even colocation. For many businesses, colocation makes quite a bit of sense, especially if you already own servers.

Colocation allows you to store your equipment (servers) in rack space within a secure data center. The provider supplies a public IP address, power and bandwidth.

Advantages of Colocation for Businesses

When it comes to using colocation for your business servers, there are several advantages. Here are just a few of the major advantages you will gain.

Better Network Security

In most cases, renting data center space will be far more secure than housing your servers at your actual place of business. Most data centers provide top-notch network security, with strong firewalls and intrusion detection systems (IDS), to help ensure nobody gets in without authorization.

Room for Growth

Maybe your business is growing and you have already outgrown the space where you house your servers. With colocation, you have plenty of room to expand as needed. Data centers are often very large and offer ample room for your company's IT infrastructure to grow, without requiring as much up-front investment.

Better Connectivity

With a colocation data center you will gain access to a fully redundant network. This provides better connections for your business applications and keeps everything running without interruption.

Excellent Capabilities

Data centers give you the ability to use higher levels of bandwidth when you face a huge amount of traffic. Maybe a video went viral, or you simply see more traffic during the holiday season; with colocation, you won't have to worry. Traffic spikes happen in business, and a colocation center will help you handle them without issue.

Redundant Power

Colocation centers also provide redundant power, another layer of protection that keeps you online when the power goes out. Many use diesel generators, battery backup systems and other backups to ensure your servers stay up.

Moving Towards Cloud Migration

If you plan to move to the cloud in the near future, colocation may be the first step. It lets you move your equipment out of your facility and gain increased capacity and performance. If you want a smoother transition to the cloud, this can be an important part of the process.

There are several benefits of colocation for businesses. When you want to grow, or you simply need the space your servers are taking up, colocation may be the answer. It allows you to run your servers without the overhead of a larger facility.

Biggest Advantages of a Dedicated Server

Compared to any other type of hosting, a dedicated server comes with plenty of advantages. While many are moving to the cloud or trusting cheap shared hosting, dedicated servers still provide the best hosting option for many companies and websites.

A dedicated server certainly provides more power than shared hosting, VPS hosting or cloud hosting. It gives you more control and plenty of resources to use for your website. Here are a few of the biggest advantages you gain with a dedicated server.


More Flexibility

It may be a bit overrated by some, but flexibility is vital to the success of many websites and applications. A dedicated server gives you the ability to customize the server however you see fit. You can adjust the CPU, disk space, RAM and software, even with managed dedicated hosting.

Shared hosting doesn’t allow much flexibility and you’re often stuck with the software they give you. VPS hosting offers some customization and flexibility, but it simply cannot compare to that of a dedicated server.

With a dedicated server, you can customize the server environment to fit your specific needs. You get to choose the software and platform best for your projects.

All the Resources are Yours

Dedicated hosting is the only hosting option that devotes all of the server's resources to one client and one client only. Shared hosting uses a free-for-all method of sharing, while VPS hosting partitions the resources. In both cases, you share the server with other clients and never get its full power.

If you need all of the CPU and RAM or you just need the security of knowing you are the only client on the server, a dedicated server is the only way to go.

Better Performance

Dedicated servers outperform all other types of hosting when properly utilized. They provide faster website load times, better uptime and better overall performance. You don’t have to worry about anybody else impacting the performance of the server with a traffic spike or a user error.

More Security

If you’re on a shared server, you count on hundreds, maybe thousands of other clients to keep their sites and accounts secure. A mistake made by someone else may cost you.

With a dedicated server, you don't have this type of security concern. You're the only one on the server, so any mistake made is your own rather than someone else's. You also get to customize the server security however you please.

Unique IP Address

A unique IP address helps with SEO and other parts of any online project. With shared hosting, you share the IP address with many other websites. If one of those sites turns out to be a spam site, your rankings may suffer.

Your dedicated server has a unique IP address reserved for you alone, which means you have full control over it. If you plan to run an eCommerce site, this is vital for security.

There are many benefits and advantages to a dedicated server. Whether you buy a server and rent rack space, or rent a server and let another company manage it, you gain all kinds of benefits compared to shared, VPS or cloud hosting.

Cloud Computing vs. Dedicated Servers – What You Should Know

Everything in the hosting industry seems to be about cloud computing these days. It provides scalability, redundancy and on-demand services, but is it really as good as many advertisers say it is?

While it may not fit with every business or website, it’s important to know the differences between cloud computing and dedicated servers. You want to be sure you get the right hosting and here’s what you should know about both.

What the Heck is the Cloud, Anyway?

Understanding what the cloud is will help in your search for the right hosting. The cloud is a shared service in which RAM, CPU and network resources are shared with other customers. As a result, it is prone to hardware problems, network issues and bottlenecks, and it is not always as good as advertised.

Which is Faster?

There is no doubt that a good dedicated server will outperform most, if not all, cloud services when it comes to speed. Cloud may provide excellent storage, but it’s not going to provide the same speed. Many use cloud computing due to the scalability. However, a dedicated server can easily be scaled and gives you plenty of speed compared to cloud services.

Disk IO Comparison

When a dedicated server is configured correctly, it often provides excellent value for your money, especially in disk IO. Cloud services, on the other hand, often suffer unpredictable disk IO: if one client starts to see a ton of traffic, everything slows down for everyone else on the platform. In addition, not all disk IO issues are easily solvable within a cloud framework.

Redundancy Comparison

Redundancy is one of the reasons many choose cloud services, and it is true that the cloud is more redundant than a single dedicated server. However, the node hosting your cloud instance isn't much more reliable than a single dedicated server. If the node fails, the effect is much the same as a CPU or RAM failure in a dedicated server.

Complexity Comparison

One of the murkier aspects of cloud computing is its complexity, and complexity adds cost. Dedicated hosting is the simpler choice: if you offer web design services, running WordPress, Drupal or Joomla with cPanel or Plesk is not as easy on cloud services as it is on a dedicated server. You will generally get more value out of a dedicated server as well.

Price Comparison

While cloud services may seem cheaper up front, they can get very expensive very quickly. A well-managed dedicated server (or several) can provide excellent performance with plenty of value. In most cases, when comparing the two, you get more value out of a dedicated server than out of cloud computing.

Overall, cloud computing vs. dedicated server hosting will remain a debate many disagree on. For most businesses, though, a dedicated server provides more bang for your buck, more speed and a better hosting choice overall.

An Introduction to Web Servers: Part 2

A web server will typically run as a daemon (system service) under a single application process. That initial process will then spawn child processes that handle virtual servers, individual websites, or even individual requests. As such, a web server could spawn hundreds or even thousands of processes per day, per hour, or even per minute.

For security, child processes normally run as unprivileged users (i.e. nobody) to prevent hackers from gaining access to the server operating system through a back door in the web server. Dynamic web applications can also serve as an added security risk, so a web server may run scripting modules as separate CGI programs, rather than as part of the web server itself.

SSL and TLS support is standard on virtually all web servers, facilitating secure transactions via the HTTPS protocol.

Web servers use compression to speed up requests and serve web pages and content using less bandwidth. Gzip compression is a standard method of delivering compressed data.
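As a rough illustration of what compression buys (this uses the command-line gzip tool rather than a web server, but the underlying compression is the same; the file name is made up):

```shell
# Generate ~19 KB of repetitive HTML-like text, then compress it.
printf '<p>Hello, web!</p> %.0s' $(seq 1 1000) > page.html
gzip -k page.html            # with a modern GNU gzip, -k keeps the original
wc -c page.html page.html.gz # compare byte counts before and after
```

Repetitive text compresses extremely well; real pages see smaller but still substantial savings.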

The standard HTTP port for web servers is port 80, and the standard port for HTTPS is 443; however, most web servers can be configured to run on virtually any port.
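For example, changing the listening port in Apache is usually a one-line edit (a sketch; the exact file path varies by distribution):

```apache
# In the Apache configuration --
#   /etc/httpd/conf/httpd.conf  on Red Hat-family systems
#   /etc/apache2/ports.conf     on Debian-family systems
# -- replace the default directive:
Listen 80
# with an alternate port such as:
Listen 8080
```

After restarting the web server, a request to http://localhost:8080/ should confirm it answers on the new port.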


An Introduction to Web Servers: Part 1

Once you have chosen an operating system, set up some basic security, and decided on a web-based control panel, you will need to decide what software to run on your server. Some control panels will install software for you, but it is worth choosing a web server that is right for your needs.

Web server software serves the HTML pages stored on a server. In addition, a web server may also process scripts, run additional processes, and connect to a database server. It is essentially the intermediary between your dedicated server and the Web: without it, no one will ever see what you have on your server. That makes the web server one of the most crucial components of a server, and choosing the right one is critical.

There are a few key factors to consider when selecting a web server:

1. Performance – speed, load averages, and memory consumption

2. Stability and security

3. Scalability

4. Features

Some web servers are fast and lightweight but may not provide as many features. Others offer a wide range of features but may not scale well. The following web servers work well in most situations and have proven reliable for small and large websites alike. They all have the basic features one would expect, like support for virtual servers and SSL (Secure Sockets Layer), as well as static and dynamic web pages. Additionally, each has unique features that make it ideal for specific types of websites.

How to Fix the Bash Shellshock Vulnerability on Linux

In the previous post, we explained how to check your Linux server for the highly publicized Shellshock vulnerability in Bash. Fortunately, most, if not all, major Linux distributions have already pushed a fix to their package repositories; all you have to do is install the latest version. Unfortunately, there is some evidence that those updates are currently incomplete. Nevertheless, keeping Bash updated will thwart most potential attackers. Red Hat and other vendors are working right now to roll out complete fixes, which may already be available by the time this article is published.

On Debian or Ubuntu distributions, run the following command to update bash:

$ sudo apt-get update && sudo apt-get install --only-upgrade bash

For Red Hat, Fedora, and CentOS, run:

# yum update bash

Once you have ensured that you have the latest version of bash, you should re-run the vulnerability check to see if your server is now safe from it.

You should not need to reboot your server or take any further action once you have completed the update. Nevertheless, over the next few days, more updates might be released as developers and system vendors learn more about the extent of the vulnerability and the types of malware that cyber criminals produce to exploit it.


How to Check Your Server for Bash Shellshock Vulnerability

The hosting world has been hit with yet another highly publicized server vulnerability. This one affects the ubiquitous shell program GNU Bash and is referred to as Shellshock. Most Linux, BSD and Mac OS X operating systems and variants use Bash or derivatives of it. Bash versions 1.14 through 4.3 are potentially vulnerable. Fortunately, the vulnerability is easy to check for and easy to fix.

To test your Linux server for the vulnerability, log in via SSH and type this command at the bash prompt:

env VAR='() { :;}; echo Bash is vulnerable!' bash -c "echo Bash Test"

If the vulnerability is present, the output will look like this:

Bash is vulnerable!

Bash Test

The "Bash is vulnerable!" line is where an attacker could potentially inject code through any service or program that uses bash scripting, and such programs may be more prevalent than you think. If your Bash installation is not vulnerable, the output will omit that line but should still print "Bash Test".
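The same probe can be wrapped in a small script that reports a clean yes/no answer, which is handy when checking several servers over SSH (a sketch; the probe string is the published Shellshock test):

```shell
#!/bin/sh
# Run the Shellshock probe; a patched bash never executes the trailing
# "echo vulnerable", so empty output means the shell is safe.
if env VAR='() { :;}; echo vulnerable' bash -c : 2>/dev/null | grep -q vulnerable; then
    echo "Bash is vulnerable"
    exit 1
else
    echo "Bash is not vulnerable"
fi
```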

In the next post, we will cover ways to fix the vulnerability on a number of Linux distributions.


Server Architecture: Past, Present, and Future – Part 3

The Future

One challenge that chip makers have juggled is the power requirements for faster chips and the heat output associated with them. In some cases, cooling fans and heat sinks have become inefficient ways to keep chips cool, leading a few to invest in liquid cooling and other unconventional methods.

As the world becomes more conscious of the need for energy conservation and corporate social responsibility becomes more of a requirement rather than a suggestion, many businesses with servers are looking for ways to reduce energy output and costs. It is, therefore, no surprise that AMD and Intel are introducing chips that are designed to be “green” and reduce energy usage, while still giving applications their necessary power.

Companies like VIA see this as an opportunity to introduce their low-powered embedded chips into the server market. Dell, for example, is using VIA’s nano chips for their mini servers.

Another embedded chip manufacturer, ARM Holdings, plans to introduce server processors based on their Advanced RISC Machine (ARM) chips in the coming years. ARM has become well known for powering mobile phones and other small devices, but the company believes its processors can usher in a new era of low-powered, energy-efficient servers.

Without a doubt, Intel, AMD, and others will have to find ways to compete with this new trend of smaller, more energy-efficient processors. In that regard, the future remains uncertain. What is certain is that server processors will continue to get smaller and faster. There may come a time when a server is kept in the drawer of an office desk, which is a future many business owners and IT professionals will welcome.

Server Architecture: Past, Present, and Future – Part 2

Other companies that produce x86 processors include Cyrix, AMD, and VIA. AMD, in particular, has become Intel’s biggest competitor.

At the turn of the century, Intel introduced their high-powered 64-bit processor line called IA-64 or Itanium. Originally produced at HP, Intel later joined in the development with the intention of making server-class processors to compete with IBM PowerPC and Sun SPARC processors. The Itanium mainly powered servers running the HP-UX operating system (HP’s UNIX-like OS), but Microsoft and others also supported it.

AMD’s release of their 64-bit extension for x86 processors (called x86-64), however, put a serious dent in the Itanium market, one that would eventually lead to its demise. Recently, Microsoft announced that it would stop supporting Itanium processors.

In order to compete with AMD’s server processors, such as their Opteron, Intel decided to join in on AMD’s party and create their own x86-64 processors, based on their previous Xeon server models.

Today, multi-core Xeon and Opteron processors are very popular choices for dedicated servers, particularly those that power web applications. Current clock speeds reach as high as 3.8 GHz. Although PowerPC, SPARC, and other architectures still exist, they are now prevalent only in more specialized markets.

Server Architecture: Past, Present, and Future – Part 1

Servers have evolved significantly over the past three decades. At one time, a single server filled an entire room, required its own cooling system, and had processors that ran slower than many of today's mobile phones. Today, servers are smaller, faster, and more energy efficient, but the most important element is still the processor.

Because server technology is constantly evolving, it is important to know the history of server processors, how they came to be, and where they are headed. Microprocessors have evolved from single-core, inefficient chips to multi-core powerful multi-taskers with on-chip cache and incredible speed.

Brief Processor History

Early server processors were primarily RISC (reduced instruction set computing) chips. By the standards of the time, these processors were extremely fast and revolutionary.

Later RISC-based processors would include IBM's PowerPC chips and Sun Microsystems' SPARC. From the 1980s until the present, these chips have powered many of the world's servers.

Intel’s 8086 processor introduced a new type of chip architecture, which would come to be known as x86. The 8086 was introduced in 1978 as Intel’s first 16-bit processor. That first chip had a maximum CPU clock of 10 MHz. Although most people think of these processors as being designed primarily for desktop computers, the 16-bit and later 32-bit versions have both powered a multitude of small and large servers.


How to Add/Remove Yum Repositories

Red Hat Enterprise Linux, CentOS, Fedora and other Linux distributions based on RHEL all use YUM as a package management system to install, remove, and update software. Each distribution has its own main repository, but you can also install or remove third-party repositories whenever you like.

To add a YUM repository, type as root:

yum-config-manager --add-repo repository_url

For example:

yum-config-manager --add-repo http://www.serverschool.com/serverschool.repo

And the output will look like:

Loaded plugins: product-id, refresh-packagekit, subscription-manager

adding repo from: http://www.serverschool.com/serverschool.repo

grabbing file http://www.serverschool.com/serverschool.repo to /etc/yum.repos.d/serverschool.repo

serverschool.repo | 413 B 00:00

repo saved to /etc/yum.repos.d/serverschool.repo

You can then enable the repository with:

yum-config-manager --enable serverschool

To disable a repository:

yum-config-manager --disable serverschool

For more information about repository management with YUM, see this Red Hat documentation.
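Alternatively, a repository can be defined by hand as a file in /etc/yum.repos.d/ (a sketch with illustrative values; the actual baseurl and gpgkey come from the repository vendor):

```ini
# /etc/yum.repos.d/serverschool.repo -- illustrative values only
[serverschool]
name=Server School Example Repository
baseurl=http://www.serverschool.com/repo/
enabled=1
gpgcheck=1
```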


How to Add and Remove APT Repositories

Ubuntu-based Linux distributions rely on a program called APT to handle package management. Using the command "apt-get", you can install, remove and update other programs. The packages APT installs are determined by software repositories, and while every distribution ships with default repositories, you can also add or remove third-party sources.

To add a Personal Package Archive (PPA) to your Ubuntu server or virtual private server (VPS), run this command:

sudo add-apt-repository [ppa name]

For example, if the PPA is ppa:server/school, you would type:

sudo add-apt-repository ppa:server/school

The output will look like this:

tavis@serverschool:~$ sudo add-apt-repository ppa:server/school

[sudo] password for tavis:

Server school is a fake program I made up for the purposes of this article

More info: http://www.serverschool.com

Press [ENTER] to continue or ctrl-c to cancel adding it

…followed by gpg information

Then, to refresh the repository list, you need to run:

sudo apt-get update

Finally, install the software you wanted:

sudo apt-get install server-school

To remove a PPA, you can use a similar command structure:

sudo add-apt-repository --remove [ppa name]

So, to remove the server school repository, you would type:

sudo add-apt-repository --remove ppa:server/school

As with any third-party software, be careful when adding PPAs, especially from sources you do not fully trust. If you are unsure about a PPA, it is best to leave it off of your server.


Top Web Server Software for Dedicated Servers

Netcraft publishes a list of the web’s most widely used web server software every month. Here is a brief look at each of those top web servers and what they can do.

Microsoft IIS (37% market share) – Microsoft Internet Information Services is the web server designed specifically for Windows Server operating systems. It has recently gained popularity thanks to Windows Azure cloud services and is now the most widely used web server on the web. It is proprietary and requires a Windows license to run legally.

Apache HTTP Server (35% market share) – Apache enjoyed the top spot for most used web server for over a decade, but that reign has come to an end. The free and open source web server software is available for installation on most Linux distributions, BSD, variants of Unix and even Windows.

Nginx (14% market share) – Nginx is known for being a high-performance web server that can handle high load and traffic. Many extremely popular websites with millions of daily visitors depend on it. It is now the third most popular web server and continues to chip away at Apache’s #2 spot. Like Apache, it is also free and open source.

There are a number of other web servers that make up a considerable percentage of the web, including Google’s own custom web server, but since these are not available to the public, we have not included them.

Understanding Systemd and How to Use It: Part 2

In part one, you learned a little about what systemd is and which Linux distributions plan to use it. In part 2, you will learn how to use systemd to start and stop services. We will use Red Hat Enterprise Linux, CentOS and Fedora in the explanation, but most of it will apply to other distributions that use systemd as well.

With the old init system, you could use the “service” command to start and stop services. For example, to restart Apache, you would type from the command line:

service httpd restart

The more direct way of doing it was to find the actual init script in /etc/rc.d/init.d and restart it using that script. The service command has now been replaced by the systemctl command. For now, you can still use the “service” command, and the OS will just remind you that it is no longer the standard way.

[root@serverschool ~]# service sshd restart

Redirecting to /bin/systemctl restart sshd.service

Service scripts now have the “.service” extension, and you can use them by executing the systemctl command:

systemctl restart httpd.service

The important thing to note is that the action now comes before the service name: you type "restart httpd.service" rather than "httpd restart".
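For reference, the common old-style commands map onto systemctl like this (illustrative; the unit is httpd on Red Hat-family systems but apache2 on Debian-family systems):

```shell
# systemd command                    # old equivalent
systemctl start httpd.service        # service httpd start
systemctl stop httpd.service         # service httpd stop
systemctl status httpd.service       # service httpd status
systemctl enable httpd.service       # chkconfig httpd on
systemctl disable httpd.service      # chkconfig httpd off
```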

For more information about the systemctl command, see this documentation.

Understanding Systemd and How to Use It: Part 1

Systemd has gradually made a name for itself in the Linux world and is or will eventually be the default service management system for a number of major Linux distributions. Those accustomed to the old init systems will not find Systemd to be horribly complex, but it does feature some significantly different approaches to service starting and management.

Systemd runs as a daemon, hence the "d" at the end of the name. It manages all other daemons from boot to shutdown. Rather than using a shell script to initialize each daemon, Systemd relies on declarative configuration files. It is also capable of starting processes concurrently, allowing for faster boot times.

While Systemd has found a home in numerous Linux distributions, it is not without its detractors, including Linux creator Linus Torvalds and Slackware founder Patrick Volkerding, who have criticized the way its development is handled and its use of config files rather than shell scripts. Despite this, Fedora, Arch Linux, CoreOS, openSUSE, Red Hat Enterprise Linux and CentOS all use it by default, with Debian and Ubuntu planning to do the same. It is therefore a good idea to learn how to use it, and part 2 of this introduction will start you on that journey.


How to Configure Linux Password Policies

One of your best weapons in the fight for server security is strong password management. With the password policies you set in Linux, you can enforce strong passwords, require password renewals and apply many other effective security measures.

First, you should install the cracklib module for PAM, which tests password strength. On RHEL, CentOS and Fedora it is installed by default. Password security options live under /etc/pam.d.

To set the minimum password length, edit /etc/pam.d/system-auth on Red Hat distributions or /etc/pam.d/common-password on Debian distros. The length setting will look something like this:

password requisite pam_cracklib.so retry=3 minlen=8 difok=3

Here, minlen is the minimum length in characters; in this example, it is 8. The "difok" setting specifies the number of characters that must differ from the previous password.

Next, you can set password complexity in a line that contains “password” and “pam_cracklib.so”. It will look like this:

password requisite pam_cracklib.so retry=3 minlen=10 difok=3 ucredit=-1 lcredit=-2 dcredit=-1 ocredit=-1

"ucredit", "lcredit", "dcredit" and "ocredit" control uppercase letters, lowercase letters, digits and symbols, respectively. A negative value sets a minimum: for example, ucredit=-1 requires at least one uppercase letter, and lcredit=-2 requires at least two lowercase letters. A positive value instead grants extra credit toward the minlen length requirement.
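Putting these settings together, a stricter policy line might look like this (a sketch; the values are illustrative, and the line goes in /etc/pam.d/system-auth on Red Hat-family systems or /etc/pam.d/common-password on Debian-family systems):

```
# minlen=12  : at least 12 characters overall
# difok=4    : at least 4 characters changed from the old password
# ucredit=-1 : at least 1 uppercase letter
# lcredit=-1 : at least 1 lowercase letter
# dcredit=-1 : at least 1 digit
# ocredit=-1 : at least 1 symbol
password requisite pam_cracklib.so retry=3 minlen=12 difok=4 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
```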

For more on PAM and all that it can do to manage your passwords, see the online documentation.



4 Common Open Source Licenses

As you enter the world of server management, you are likely going to encounter free and open source software. Even a Windows system administrator these days will likely have to at least run Linux in a virtual machine at some point. Therefore, having a little background knowledge on how Linux and other open source software differs from proprietary software can be useful.

The following are 4 common open source licenses:

1. GNU GPL – Perhaps the most widely used, the GPL is also the most strict. While it allows anyone to freely use, download, distribute, modify or even sell the software, it also strictly requires that any redistribution carries the same license. Linux is famously licensed under GPL v2.

2. BSD – This type of license is considered more permissive. With it, you can essentially create a proprietary version of the software after you have performed your modifications. You are not required to release anything under the same license.

3. Apache – Similar to the BSD license, the Apache license allows you to release your changes to the code under a different license. It also provides additional legal language about copyrights, trademarks and patents that the BSD license does not explicitly mention.

4. GNU LGPL – Designed as a compromise for linking with non-GPL software, the LGPL allows programs under other licenses, even proprietary ones, to link against LGPL-licensed libraries; however, changes to the LGPL-licensed code itself must still be released under the same license.


5 Signs You Need a Managed Server

Unmanaged servers are available all over the web for lease. They are cheap, rapidly deployed and usually connected to very fast networks inside of secure data centers. Nevertheless, an unmanaged server is not for everyone. Here are five signs you need a managed server rather than an unmanaged one.

  1. Your frustration level has reached an all-time high, and you are finding it difficult to set up server applications and troubleshoot problems.
  2. You want to spend more time on your business and less time on the technology that powers it.
  3. You have suffered some serious security issues and do not know how to fix them or prevent them from happening again in the future.
  4. You have an operational budget you can afford to spend on server management, and you find that cheaper than hiring full-time IT staff to keep your server running smoothly.
  5. You simply do not have the time to manage your server by yourself.

Managing your own server can be difficult, time consuming and ultimately expensive if you have to hire staff to take care of it. On the other hand, it might be worth the extra money to have a managed server and not have to spend time dealing with technical issues.


Send System Messages to Server Users

If your server has multiple users, you might want an easy way to send them messages and make sure they are received. The best way to do that is right in the console, and one tool for the job is "wall". With it, you can send a message to all logged-in users at once, and unless they have specifically disabled it, they will receive it.

To send a message to logged in users, type wall, press Enter, type the message you want to send and finally press CTRL+D. For example, to send a test message:

# wall
This is a test message

Broadcast Message from tavis@serverschool (/dev/pts/2) at 20:35 …

This is a test message

If you do not want the “broadcast message” banner, simply use the “-n” option (suppressing the banner requires root). This message will reach all users. It is particularly helpful if you want to tell them the server will be unavailable shortly due to maintenance.

# wall

This server will be unavailable in 5 minutes due to scheduled maintenance. Please save your work and log out.

For more information about wall, type “man wall” from the command line.

Is Mac OS X Good for Servers?

When you think of Mac OS X, you probably think of iTunes, graphic design, music production and other creative activities. It is primarily a desktop operating system, but Apple does sell a server add-on for its OS. The question is: Is that server version useful for real-world server operations?

Some of the advantages of OS X server include:

  • Ease of administration – Like many Apple products, it is designed to be relatively easy. It includes graphical administration tools and easy setup of client systems.
  • Low cost – Although it is obviously more expensive than a free Linux distribution, it is still less expensive than Windows Server or a commercial Unix license.
  • Unix strength – Underneath, OS X Server includes many Unix-like tools that give it a surprising amount of power.

Some of the disadvantages are:

  • Not an enterprise OS – Do not expect to easily deploy a cluster of OS X servers. It is not built like an enterprise OS and does not include many of the tools you might want if you were to think big.
  • Hardware – OS X Server runs only on Apple hardware, which is more costly and more difficult to support than the alternatives.
  • Vendor lock-in – If you build your server on OS X, you are locked into the hardware and software. As with any proprietary OS, you give up the freedom to easily migrate to something else.

OS X Server has its pros and cons. Ultimately, if you have a specialized product involving Apple systems, it might make sense. For web hosting or large-scale enterprise use, you will probably want to look elsewhere.

What is a Journaling File System?

As we discussed in a previous post, Linux servers offer many different types of file systems, and every other server operating system also offers a choice of file systems. One type of file system you might encounter is called a journaling file system. What is it and how does it differ from a standard Linux file system?

A journaling file system is designed to protect against data loss by recording disk transactions to a log in case of system failure. Upon reboot, the file system normally compares the log to the actual files and corrects any discrepancies. Without this type of journaling in place, a single crash could cause disk corruption.

The old default Linux file system, Ext2, did not have journaling. Newer Linux file systems such as Ext3 and Ext4 use journaling. XFS supports journaling as well. Similarly, older Windows file systems, such as FAT and FAT32, do not support journaling, whereas NTFS does.

The main disadvantage of a journaling file system is that it involves more disk access than other file systems. This could theoretically make it slower, but with many modern disks and file systems, you might not notice a difference. There is also some debate about how journaling should be used with solid state drives (SSDs), or whether it should be used on them at all.
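On a running Linux server you can check which file system type a given mount uses, and therefore whether it is likely journaled. A quick sketch (the device name in the commented command is hypothetical, and tune2fs requires root):

```shell
# Show the file system type of the root mount (e.g. ext4, xfs)
df -T / | tail -n 1

# For ext3/ext4, confirm the journal feature directly
# (run as root; the device name is only an example)
# tune2fs -l /dev/sda1 | grep has_journal
```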

For more information about journaling file systems, see this article.

What are binary and source packages?

While learning to use a Linux or BSD dedicated server, you are likely to encounter the terms binary and source software packages. Depending on your actual operating system, it may use one, the other or both as default methods of software installation.

A source package is a file archive that contains the full source code of a given piece of software. In order to install that software, you would need to unpack the archive and build the software from source using whatever build tools it requires (e.g. make, cmake or others).

Some operating systems, such as Gentoo or FreeBSD, will also provide package repositories that allow you to automatically build software from source. The advantage is that programs built from source are usually better optimized for your architecture and settings.

A binary package is one that is pre-compiled and built to general specifications that should be compatible with your OS and architecture but that may not be tailored to meet your specific settings. Most Linux distributions include binary package repositories that make installation easy. Binary packages require dependencies to match the specifications spelled out when the packages were originally built. Therefore, installing them manually can sometimes be a pain. When using a repository, however, they are easy to install and much faster than building from source.
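The difference is easiest to see at the command line. The following sketch creates a miniature, hypothetical "source package" and then installs it the way you would a real one: unpack the archive, build, and copy the result into place. A real package would use ./configure && make && make install rather than a one-line script:

```shell
# Create a tiny source archive to stand in for a real source package
mkdir -p hello-1.0
printf '#!/bin/sh\necho Hello from source\n' > hello-1.0/hello.sh
tar czf hello-1.0.tar.gz hello-1.0
rm -r hello-1.0

# The usual install-from-source steps: unpack, build, install
tar xzf hello-1.0.tar.gz            # step 1: unpack the source archive
chmod +x hello-1.0/hello.sh         # step 2: "build" (real software: ./configure && make)
mkdir -p bin
cp hello-1.0/hello.sh bin/hello     # step 3: "install" (real software: make install)
./bin/hello
```

A binary package skips the middle step entirely: someone else has already compiled the code, so the package manager only has to unpack and place the files.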

Differences Between Windows and Linux Servers

Linux is the kernel for many free and open source operating systems, while Windows is a proprietary, commercial operating system. Beyond licensing, there are many other differences between the two. When you are choosing an OS to run on your server, it is important to know some of the technical differences as well.

1. Access – Linux servers are almost always headless, meaning there is no graphical interface. Management is performed locally from the console or remotely via SSH. Windows systems are normally not headless, although a headless Server Core installation is possible. You typically access Windows via the graphical interface, either locally or remotely, while Server Core can be managed via MMC, TS RemoteApp, or a remote shell.

2. Software – On Linux, you normally install software via a package management system with local or remote software repositories. You can also install binary files manually or compile from source code. On Windows, you would primarily install binary software manually.

3. Web server and applications – The default web server software for Windows is Internet Information Services (IIS). It is generally the only web server designed to run the ASP.NET web application framework. Therefore, if you want to use ASP.NET, you should use a Windows server. Linux may run any number of web servers, and the default one installed may depend on your distribution. Apache HTTP Server and Nginx are among the most popular. Windows can also run other web servers, though not with official support.

These are just some surface differences between Windows and Linux. There are many others that we will explore in part 2 of this introduction.

Common Linux Commands You Should Know: Part 2

In our last post, we looked at 5 Linux commands (technically six) that are invaluable to any new system administrator. The following are a few more, some of which are critical to know.

cat – Short for “concatenate”, this program allows you to combine multiple files or parts into a whole. It can also print the contents of those files to the screen or, with redirection, to a file you specify.

crontab – Use crontab to view, create and manipulate cron jobs for your server, which allow you to automate other commands, services, scripts and more.
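A cron job is just one line in the crontab file (edited with crontab -e). As a sketch, this hypothetical entry runs a backup script every day at 2:30 a.m.:

```shell
# minute hour day-of-month month day-of-week  command
30 2 * * * /usr/local/bin/backup.sh
```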

ps – With this command you can list running processes, sort them the way you want and get detailed information about each process, the user running it and the location of the program that initiated it.

top – Top reveals a plethora of information relevant to system performance, including memory usage, CPU usage, load average and others. It will also reveal the top running processes that consume the most CPU, memory, etc.

cp – This command allows you to copy files and directories with many options and configurations.

rm – Remove files from your server with this command and its numerous options.
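A few of these commands in action (the file names here are just examples):

```shell
printf 'alpha\n' > part1.txt
printf 'beta\n'  > part2.txt
cat part1.txt part2.txt > whole.txt   # cat: combine the two files into one
cp whole.txt backup.txt               # cp: make a copy
rm part1.txt part2.txt                # rm: remove the originals
cat backup.txt                        # prints both lines, alpha then beta
```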

Knowing Linux commands allows you to communicate with your server, both to gather information and to take action. The more commands you know, and the better you understand them, the better your control will be over your server’s performance and security.

Common Linux Commands You Should Know

In the previous 3 posts, we have covered several Linux terms that you should know when getting started managing a Linux server. What follows are some actual commands that will help you as you begin your journey.

cd – Probably the command you will use most frequently, “cd” stands for “change directory”. From the command line interface (CLI), it is the only effective way to navigate through your filesystem.

ls – Another frequently used one, ls tells Linux to list the files in a directory.

chmod – Permissions are very important for access control and security, and chmod is the command you use to actually change those permissions.

sudo or su – Depending on your distribution, you might use one or the other. Both essentially do the same thing: elevate your system privileges to the level you specify, usually root (the administrator, or superuser).

touch – This is a quick way to create an empty file. Simply type “touch” followed by the filename. You can later edit the file and put whatever you want in it.
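Putting several of these together (the directory and file names are just examples):

```shell
mkdir -p demo        # create a directory to work in
cd demo              # cd: move into it
touch notes.txt      # touch: create an empty file
chmod 600 notes.txt  # chmod: owner may read/write; everyone else gets nothing
ls -l notes.txt      # ls: list the file and its permissions (-rw-------)
cd ..
```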

We will cover several more commands in part 2 of this series, some of which are critical to your server’s performance and stability.

Linux/Unix Terms You Should Know: Part 3

Part of the learning curve of a new Linux server is learning all of the terminology. You might not be familiar with some of the terms even if you have experience working on other operating systems. In part 3, we will look at some more of them.

daemon – This is a program that is often started at boot time and continues to run on the system in the background. Another term for a daemon is a service. If a daemon is not started at boot, you can usually still start or stop it on demand. Common daemons include web servers, database servers and DNS servers.

hostname – This is the formal name of the server that designates how it will be recognized on the network. Every server has a hostname, and it will generally be a unique name on that network. For internet-connected servers, their hostnames will be part of a fully qualified domain name (FQDN).
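To see your server’s hostname, just run the command of the same name (the -f flag, commented out here, prints the FQDN but needs working name resolution):

```shell
hostname        # print the short host name
# hostname -f   # print the fully qualified domain name (FQDN)
```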

kill – In part 2, you learned about processes. In the event that you need to stop an unresponsive process, the “kill” command can handle that. In general, killing a process involves typing “kill” followed by the process identification number (PID). If you instead want to kill by name, use “killall” followed by the process name, keeping in mind that it terminates every process with that name.

In part 4, we will explore some common Linux commands that you will likely need to know when you start work on your server.
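For example, using a harmless sleep process ($! holds the PID of the most recent background job):

```shell
sleep 300 &                        # start a long-running process in the background
pid=$!                             # remember its process identification number (PID)
kill "$pid"                        # politely ask it to terminate (SIGTERM)
wait "$pid" 2>/dev/null || true    # returns once the process is gone
echo "killed $pid"
```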

Linux/Unix Terms You Should Know: Part 2

In part one, we looked at some of the important terms a new system administrator should know when starting work on a Linux server. These next terms are equally important and will help you along the way as you begin to learn more about your server.

man – This command, short for manual, gives you access to documentation about any other command on your Linux server. For example, to learn about the command “ls”, simply type “man ls”, and it will display the command’s complete documentation, which you can scroll through and read.

permissions – Linux and Unix file systems have basic sets of permissions for all files and directories. Every file is owned by a user, and every file has a set of permissions determining which users can read, write, and/or execute them.

process – A process is a program or portion of a program that is currently running in memory. A process can also spawn another process called a child process. For example, your Apache web server may be configured to spawn child processes for each unique access of its websites.
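You can inspect the current shell’s own process with ps; $$ expands to the shell’s process ID, and PPID is the parent process that spawned it:

```shell
ps -o pid,ppid,comm -p $$   # this process, its parent, and the command name
```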

In part three of this series, we will explore even more in-depth terms that you should definitely know when diving into the world of dedicated servers.

Common Linux Terms Every SysAdmin Should Know: Part 1

A large percentage of the world’s servers run Linux, so it is a good idea to know some of the common terms you might encounter while using it. The following terms are a good starting point.

Command Line Interface (CLI) – On the server end, this is the main interface you will use to interact with your server. Unlike a graphical user interface (GUI), there is nothing to point, click or drag. You get a flashing cursor and the ability to type in commands.

Filesystem – This term is used to describe the system used to organize, access, create and delete files. Common Linux filesystems include Ext2, Ext3, Ext4, ReiserFS (nearly defunct now), XFS and JFS. Choosing a good filesystem for your server is very important for stability and effective storage.

GRUB – The GNU GRand Unified Bootloader is what your server needs to properly boot a Linux kernel. It also allows you to configure a wide range of boot options that can either help or hinder a system. Therefore, it is a good idea to know exactly what you are doing before messing with GRUB’s configuration.
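GRUB’s behavior is controlled by a configuration file; on many distributions you edit /etc/default/grub and then regenerate the boot menu. A hypothetical minimal fragment (the keys and the regeneration command vary by distribution and GRUB version):

```shell
# /etc/default/grub -- after editing, run update-grub (Debian/Ubuntu) to apply
GRUB_DEFAULT=0         # boot the first menu entry by default
GRUB_TIMEOUT=5         # seconds to wait before booting
GRUB_CMDLINE_LINUX=""  # extra kernel boot parameters
```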

In part two, we will look at more common Linux terms that you should definitely know when you become a system administrator.

Know Your SQL Databases

Relational database management systems (RDBMS) are very popular in the web hosting world. While non-relational databases, often called NoSQL, may be gaining popularity, most small and medium websites, and still many large ones, rely on SQL technology. This is just a brief intro to the various brands of SQL out there.

MySQL – One of the older systems on the market, MySQL is available both as free, open source software and under commercial licenses. It has gone through the hands of more than one vendor, including Sun Microsystems and now Oracle.

MariaDB – After Oracle purchased Sun, the original creator of MySQL forked the project to create MariaDB. It is open source and managed by the MariaDB Foundation.

PostgreSQL – An object-relational database management system, PostgreSQL is one of the oldest on this list, if not the oldest. Like the others, it is free and open source. It is also known to work on a variety of platforms, including Windows.

SQLite – Unlike the others on the list, SQLite does not require a database server. Instead, it is embedded, allowing it to run with very little software. It is only 658 KiB and is released into the public domain with no software license.

MSSQL – Completely proprietary and commercial, Microsoft SQL Server is the company’s answer to its open source competitors. Designed specifically for Windows Server, it supports Microsoft’s business applications, its cloud infrastructure and even web server offerings.

The database system you choose depends entirely on your licensing requirements and the type of work you want to do. The best choice for you may not work at all for others.

How to Manage Linux Kernel Modules

Linux is the kernel for a variety of operating systems that power many of the world’s servers. Although the operating systems themselves are often commonly called Linux, the actual term refers specifically to this kernel and all of its parts. In addition to the components that are compiled into the kernel, Linux also supports modules that can be loaded or unloaded on demand.

Linux modules are useful for hardware drivers, network interfaces and much more. To find out which modules are loaded on your system, type the following from the command line as root:

# lsmod

The output might look something like this:

Module                  Size  Used by
iptable_filter         12810  0
ip_tables              27239  1 iptable_filter
x_tables               34059  2 ip_tables,iptable_filter
kvm_intel             138567  0
kvm                   431754  1 kvm_intel
ppdev                  17671  0
binfmt_misc            17468  1
microcode              23656  0
psmouse                97655  0
serio_raw              13413  0
parport_pc             32701  0
parport                42299  2 ppdev,parport_pc
floppy                 69370  0

The first column lists the module name. For example, kvm is Linux’s virtualization module. The second column is the size of the module, and the third column shows how many other modules are using it, followed by their names. In the case of kvm, it is used by kvm_intel; in other words, kvm_intel depends on kvm.

To find out more about a module, run modinfo [modulename]. For example:

# modinfo kvm

The output will resemble this:

filename:       /lib/modules/3.11.0-19-generic/kernel/arch/x86/kvm/kvm.ko
license:        GPL
author:         Name
srcversion:     ****
intree:         Y
vermagic:       3.11.0-19-generic SMP mod_unload modversions
parm:           ****

To load a module into the kernel, type modprobe [modulename]. For example:

# modprobe kvm

To remove a module, type rmmod [modulename]. For example:

# rmmod kvm

Linux kernel modules make it easy to load and unload components and drivers without having to reboot or reconfigure software. You can learn more about module management at linux.com.