5 Common Server Setups For Your Web Application

Introduction

When deciding which server architecture to use for your environment, there are many factors to consider, such as performance, scalability, availability, reliability, cost, and ease of management.

Here is a list of commonly used server setups, with a short description of each, including pros and cons. Keep in mind that all of the concepts covered here can be used in various combinations with one another, and that every environment has different requirements, so there is no single, correct configuration.


1. Everything On One Server

The entire environment resides on a single server. For a typical web application, that would include the web server, application server, and database server. A common variation of this setup is a LAMP stack, which stands for Linux, Apache, MySQL, and PHP, on a single server.
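
For instance, on an Ubuntu server the entire LAMP stack can be installed in one step with the tasksel metapackage (a sketch assuming Ubuntu; the trailing caret is part of the command):

sudo apt-get install lamp-server^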

Use Case: Good for setting up an application quickly, as it is the simplest setup possible, but it offers little in the way of scalability and component isolation.

Everything On a Single Server

Pros:

  • Simple

Cons:

  • Application and database contend for the same server resources (CPU, memory, I/O, etc.), which can hurt performance and make it difficult to determine whether the application or the database is the source of that poor performance
  • Not readily horizontally scalable

2. Separate Database Server

The database management system (DBMS) can be separated from the rest of the environment to eliminate the resource contention between the application and the database, and to increase security by removing the database from the DMZ, or public internet.

Use Case: Good for setting up an application quickly, but keeps application and database from fighting over the same system resources.
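
As a minimal sketch of the separation with MySQL, the database server listens on the environment's private network and the application connects to that address instead of localhost (the 10.0.0.20 address and appuser name are placeholders):

# /etc/mysql/my.cnf on the database server (placeholder private IP)
[mysqld]
bind-address = 10.0.0.20

# From the application server, connect over the private network
mysql -u appuser -p -h 10.0.0.20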

Separate Database Server

Pros:

  • Application and database tiers do not contend for the same server resources (CPU, Memory, I/O, etc.)
  • You may vertically scale each tier separately, by adding more resources to whichever server needs increased capacity
  • Depending on your setup, it may increase security by removing your database from the DMZ

Cons:

  • Slightly more complex setup than single server
  • Performance issues can arise if the network connection between the two servers is high-latency (i.e. the servers are geographically distant from each other), or the bandwidth is too low for the amount of data being transferred

 

3. Load Balancer (Reverse Proxy)

Load balancers can be added to a server environment to improve performance and reliability by distributing the workload across multiple servers. If one of the servers that is load balanced fails, the other servers will handle the incoming traffic until the failed server becomes healthy again. It can also be used to serve multiple applications through the same domain and port, by using a layer 7 (application layer) reverse proxy.

Examples of software capable of reverse proxy load balancing: HAProxy, Nginx, and Varnish.
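
For example, a minimal Nginx configuration that balances requests across two application servers might look like the sketch below (the backend addresses are placeholders, not part of the original article):

# A minimal Nginx load balancing sketch (placeholder backend addresses)
http {
    upstream app_backend {
        server 10.0.0.11;
        server 10.0.0.12;
    }

    server {
        listen 80;

        location / {
            # Requests are distributed across the servers in app_backend
            proxy_pass http://app_backend;
        }
    }
}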

Use Case: Useful in an environment that requires scaling by adding more servers, also known as horizontal scaling.

Load Balancer

Pros:

  • Enables horizontal scaling, i.e. environment capacity can be scaled by adding more servers to it
  • Can protect against DDOS attacks by limiting client connections to a sensible amount and frequency

Cons:

  • The load balancer can become a performance bottleneck if it does not have enough resources, or if it is configured poorly
  • Can introduce complexities that require additional consideration, such as where to perform SSL termination and how to handle applications that require sticky sessions

 

4. HTTP Accelerator (Caching Reverse Proxy)

An HTTP accelerator, or caching HTTP reverse proxy, can be used to reduce the time it takes to serve content to a user through a variety of techniques. The main technique employed with an HTTP accelerator is caching responses from a web or application server in memory, so future requests for the same content can be served quickly, with less unnecessary interaction with the web or application servers.

Examples of software capable of HTTP acceleration: Varnish, Squid, Nginx.
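
As a rough sketch of the caching idea with Nginx (the cache path, zone name, and backend address below are illustrative assumptions):

# Minimal Nginx caching reverse proxy sketch; these directives live in the
# http context (placeholder paths and addresses)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        # Keep successful responses in the cache for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_pass http://10.0.0.11;
    }
}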

Use Case: Useful in an environment with content-heavy dynamic web applications, or with many commonly accessed files.

HTTP Accelerator

Pros:

  • Increases site performance by reducing CPU load on the web server, through caching and compression, thereby increasing user capacity
  • Can be used as a reverse proxy load balancer
  • Some caching software can protect against DDOS attacks

Cons:

  • Requires tuning to get best performance out of it
  • If the cache-hit rate is low, it could reduce performance

 

5. Master-Slave Database Replication

One way to improve performance of a database system that performs many reads compared to writes, such as a CMS, is to use master-slave database replication. Master-slave replication requires a master and one or more slave nodes. In this setup, all updates are sent to the master node and reads can be distributed across all nodes.

Use Case: Good for increasing the read performance for the database tier of an application.
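
As a rough sketch of what this involves with MySQL, the master and slave each get a distinct server ID, the master enables binary logging, and the slave is pointed at the master's binary log coordinates (all values below are placeholders; the real log file and position come from SHOW MASTER STATUS on the master):

# my.cnf on the master: unique server ID plus binary logging
[mysqld]
server-id = 1
log_bin   = /var/log/mysql/mysql-bin.log

# my.cnf on the slave: just a different server ID
[mysqld]
server-id = 2

-- Then, on the slave, point replication at the master and start it
CHANGE MASTER TO
  MASTER_HOST='10.0.0.10',
  MASTER_USER='repl',
  MASTER_PASSWORD='replica-password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;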

Here is an example of a master-slave replication setup, with a single slave node:

Master-Slave Database Replication

Pros:

  • Improves database read performance by spreading reads across slaves
  • Can improve write performance by using master exclusively for updates (it spends no time serving read requests)

Cons:

  • The application accessing the database must have a mechanism to determine which database nodes it should send update and read requests to
  • Updates to slaves are asynchronous, so there is a chance that their contents could be out of date
  • If the master fails, no updates can be performed on the database until the issue is corrected
  • Does not have built-in failover in case of failure of master node

 

Example: Combining the Concepts

It is possible to load balance the caching servers, in addition to the application servers, and use database replication in a single environment. The purpose of combining these techniques is to reap the benefits of each without introducing too many issues or complexity. Here is an example diagram of what a server environment could look like:

Load Balancer, HTTP Accelerator, and Database Replication Combined

Let’s assume that the load balancer is configured to recognize static requests (like images, css, javascript, etc.) and send those requests directly to the caching servers, and send other requests to the application servers.
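
In Nginx terms, that routing rule might look roughly like the sketch below (the upstream addresses and the list of static file extensions are illustrative assumptions):

# Hypothetical Nginx routing sketch for the combined environment;
# these blocks live inside the http context
upstream app_backend   { server 10.0.0.11; }
upstream cache_backend { server 10.0.0.21; }

server {
    listen 80;

    # Static requests (images, css, javascript) go to the caching servers
    location ~* \.(png|jpg|gif|css|js)$ {
        proxy_pass http://cache_backend;
    }

    # Everything else goes to the application servers
    location / {
        proxy_pass http://app_backend;
    }
}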

Here is a description of what happens when a user requests dynamic content:

  1. The user requests dynamic content from http://example.com/ (load balancer)
  2. The load balancer sends request to app-backend
  3. app-backend reads from the database and returns requested content to load balancer
  4. The load balancer returns requested data to the user

If the user requests static content:

  1. The load balancer checks cache-backend to see if the requested content is cached (cache-hit) or not (cache-miss)
  2. If cache-hit: return the requested content to the load balancer and jump to Step 7. If cache-miss: the cache server forwards the request to app-backend, through the load balancer
  3. The load balancer forwards the request through to app-backend
  4. app-backend reads from the database then returns requested content to the load balancer
  5. The load balancer forwards the response to cache-backend
  6. cache-backend caches the content then returns it to the load balancer
  7. The load balancer returns requested data to the user

This environment still has two single points of failure (load balancer and master database server), but it provides all of the other reliability and performance benefits that were described in each section above.

Conclusion

Now that you are familiar with some basic server setups, you should have a good idea of what kind of setup you would use for your own application(s). If you are working on improving your own environment, remember that an iterative process is best to avoid introducing too many complexities too quickly.

Let us know of any setups you recommend or would like to learn more about in the comments below!

Tutorial courtesy: Digital Ocean


How To Set Up SSH Keys

About SSH Keys

SSH keys provide a more secure way of logging into a virtual private server with SSH than using a password alone. While a password can eventually be cracked with a brute force attack, SSH keys are nearly impossible to decipher by brute force alone. Generating a key pair provides you with two long strings of characters: a public and a private key. You can place the public key on any server, and then unlock it by connecting to it with a client that already has the private key. When the two match up, the system unlocks without the need for a password. You can increase security even more by protecting the private key with a passphrase.

Step One—Create the RSA Key Pair

The first step is to create the key pair on the client machine (there is a good chance that this will just be your computer):

ssh-keygen -t rsa

Step Two—Store the Keys and Passphrase

Once you have entered the Gen Key command, you will get a few more questions:

Enter file in which to save the key (/home/demo/.ssh/id_rsa):

You can press enter here, saving the file to the user home (in this case, my example user is called demo).

Enter passphrase (empty for no passphrase):

It’s up to you whether you want to use a passphrase. Entering a passphrase does have its benefits: the security of a key, no matter how strongly encrypted, still depends on the fact that it is not visible to anyone else. Should a passphrase-protected private key fall into an unauthorized user’s possession, they will be unable to log in to its associated accounts until they figure out the passphrase, buying the hacked user some extra time. The only downside, of course, to having a passphrase is having to type it in each time you use the key pair.

The entire key generation process looks like this:

ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/demo/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/demo/.ssh/id_rsa.
Your public key has been saved in /home/demo/.ssh/id_rsa.pub.
The key fingerprint is:
4a:dd:0a:c6:35:4e:3f:ed:27:38:8c:74:44:4d:93:67 demo@a
The key's randomart image is:
+--[ RSA 2048]----+
|          .oo.   |
|         .  o.E  |
|        + .  o   |
|     . = = .     |
|      = S = .    |
|     o + = +     |
|      . o + o .  |
|           . o   |
|                 |
+-----------------+

The public key is now located in /home/demo/.ssh/id_rsa.pub. The private key (identification) is now located in /home/demo/.ssh/id_rsa.

Step Three—Copy the Public Key

Once the key pair is generated, it’s time to place the public key on the virtual server that we want to use.

You can copy the public key into the new machine’s authorized_keys file with the ssh-copy-id command. Make sure to replace the example username and IP address below.

ssh-copy-id user@12.34.56.78

Alternatively, you can paste in the keys using SSH:

cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

No matter which command you choose, you should see something like:

The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
RSA key fingerprint is b1:2d:33:67:ce:35:4d:5f:f3:a8:cd:c0:c4:48:86:12.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
user@12.34.56.78's password: 
Now try logging into the machine, with "ssh 'user@12.34.56.78'", and check in:

  ~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Now you can go ahead and log into user@12.34.56.78 and you will not be prompted for a password. However, if you set a passphrase, you will be asked to enter the passphrase at that time (and whenever else you log in in the future).

Optional Step Four—Disable the Password for Root Login

Once you have copied your SSH keys onto your server and ensured that you can log in with the SSH keys alone, you can go ahead and restrict the root login to only be permitted via SSH keys.

In order to do this, open up the SSH config file:

sudo nano /etc/ssh/sshd_config

Within that file, find the line that includes PermitRootLogin and modify it to ensure that root can only connect with an SSH key:

PermitRootLogin without-password

Put the changes into effect:

sudo service ssh reload

Tutorial courtesy: Digital Ocean


Play and Pause Buttons for YouTube and Vimeo Videos (via Their APIs)

Sometimes you just need to start a video playing by some user interaction on the page other than clicking right on that video itself. Perhaps a “Play Video” button of your own creation lives on your page and you want to start that video when that button is clicked. That’s JavaScript territory. And if that video is a YouTube or Vimeo video, we’ll need to make use of the APIs they provide. No problem.

For these examples, we’ll assume you’ve already picked out a video and you’re going to put it on the page in an <iframe>.

Tutorial source: CSS-Tricks

For YouTube

1. Make sure the iframe src URL has ?enablejsapi=1 at the end

Like this:

<iframe src="//www.youtube.com/embed/FKWwdQu6_ok?enablejsapi=1" frameborder="0" allowfullscreen id="video"></iframe>

I also put an id attribute on the iframe so it will be easy and fast to target with JavaScript.

2. Load the YouTube Player API

You could just link to it in a <script>, but all their documentation shows loading it async style, which is always good for third-party scripts, so let’s do that:

// Inject YouTube API script
var tag = document.createElement('script');
tag.src = "//www.youtube.com/player_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

3. Create a global function called onYouTubePlayerAPIReady

This is the callback function that the YouTube API will call when it’s ready. It needs to be named this.

function onYouTubePlayerAPIReady() {

}

It’s likely you have some structure to your JavaScript on your page, so generally I’d recommend just having this function call another function that is inside your organizational system and get going on the right track right away. But for this tutorial, let’s just keep it soup-y.

4. Create the Player object

This is the object that has the ability to control that video. We’ll create it using the id attribute on that iframe in our HTML.

var player;

function onYouTubePlayerAPIReady() {
  // create the global player from the specific iframe (#video)
  player = new YT.Player('video', {
    events: {
      // call this function when player is ready to use
      'onReady': onPlayerReady
    }
  });
}

Another callback!

5. Create the “player ready” callback and bind events

We named this function when we created the player object. It will automatically be passed the event object, in which event.target is the player, but since we already have a global for it let’s just use that.

Here we bind a simple click event to an element on the page with the id #play-button (whatever custom button you want) and call the player object’s playVideo method.

function onPlayerReady(event) {
  
  // bind events
  var playButton = document.getElementById("play-button");
  playButton.addEventListener("click", function() {
    player.playVideo();
  });
  
  var pauseButton = document.getElementById("pause-button");
  pauseButton.addEventListener("click", function() {
    player.pauseVideo();
  });
  
}

All Together Now

And that’ll do it! Here’s a demo:

I used a little SVG templating in there for the buttons just for fun.
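
The demo embed itself isn’t reproduced here, but the snippets above combine into roughly the following sketch (it assumes the #video iframe with ?enablejsapi=1, plus #play-button and #pause-button elements, already exist in your markup):

// A minimal combined sketch of the YouTube steps above

// 1. Load the YouTube Player API asynchronously
var tag = document.createElement('script');
tag.src = "//www.youtube.com/player_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

var player;

// 2. The API calls this global function once it has loaded
function onYouTubePlayerAPIReady() {
  player = new YT.Player('video', {
    events: {
      'onReady': onPlayerReady
    }
  });
}

// 3. Once the player is ready, wire up the custom buttons
function onPlayerReady(event) {
  document.getElementById("play-button").addEventListener("click", function() {
    player.playVideo();
  });
  document.getElementById("pause-button").addEventListener("click", function() {
    player.pauseVideo();
  });
}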

For Vimeo

1. Make sure the iframe src URL has ?api=1 at the end

<iframe src="//player.vimeo.com/video/80312270?api=1" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen id="video"></iframe>

I also put an id attribute on the iframe so it will be easy and fast to target with JavaScript.

2. Load the “froogaloop” JS library

The Vimeo player API actually works by sending commands through postMessage right to the iframe. You don’t need the froogaloop library to do that, but postMessage has some inherent complexities that this library (by Vimeo themselves) makes way easier. Plus it’s only 1.8kb so I’d recommend it.

<script src="froogaloop.min.js"></script>

3. Create the player object

var iframe = document.getElementById('video');

// $f == Froogaloop
var player = $f(iframe);

We target the iframe by the id attribute we added. Then we create the player using the special froogaloop $f.

4. Bind events

All we have to do now is call methods on that player object to play and pause the video, so let’s call them when our play and pause buttons are clicked.

var playButton = document.getElementById("play-button");
playButton.addEventListener("click", function() {
  player.api("play");
});

var pauseButton = document.getElementById("pause-button");
pauseButton.addEventListener("click", function() {
  player.api("pause");
});

All Together Now

That’ll do it for Vimeo. Here’s a demo:
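
As with the YouTube example, the demo embed isn’t reproduced here; combined, the Vimeo version is roughly this sketch (it assumes the #video iframe with ?api=1, the froogaloop script, and the same play/pause buttons are already on the page):

// A minimal combined sketch of the Vimeo steps above
var iframe = document.getElementById('video');
var player = $f(iframe);   // $f == Froogaloop

document.getElementById("play-button").addEventListener("click", function() {
  player.api("play");
});

document.getElementById("pause-button").addEventListener("click", function() {
  player.api("pause");
});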


You can do much more with the APIs for both of these services. It can be pretty fun, just dive in!