When choosing the right server setup for your domain, there are numerous things to consider: scalability, availability, performance, reliability, affordability, and simplicity are some characteristics to evaluate. The most popular server-side configurations are listed here, with a summary of each one's advantages and disadvantages.
Remember that there is no ideal setup: the concepts discussed here can be combined with one another and with other approaches. After all, every domain has different needs.
Here are the concepts covered below:
1. Separate database server
The database management system can be separated from the rest of the environment to stop the application and database from competing for resources, and to improve security by keeping the database off the open internet.
Useful for getting an application set up quickly while preventing the application and database from fighting over the same system resources.
– Can be configured to boost security by keeping your database off the open internet.
– The server's CPU, memory, I/O, and other resources are not shared between the application and database layers.
– Each layer can be vertically scaled separately by providing more resources to the servers that require more capacity.
– Performance problems can occur if the bandwidth is insufficient for the amount of data being transferred, or if the network link between the two servers has high latency (e.g., because the servers are geographically distant from one another).
– Setup is more complex than the one-server-for-everything setup.
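To illustrate how small the application-side change is, the sketch below shows an app switching its database connection from the local machine to a separate database server's private address. The hostnames, port, credentials, and the `build_db_url` helper are all hypothetical placeholders, not part of any real setup:

```python
def build_db_url(host: str, port: int, user: str, db: str) -> str:
    """Build a database connection URL for a given database host."""
    return f"postgresql://{user}@{host}:{port}/{db}"

# With everything on one server, the app connects to localhost.
LOCAL_DB = build_db_url("127.0.0.1", 5432, "app", "appdb")

# With a separate database server, the app connects over the
# private network to the database server's address instead.
REMOTE_DB = build_db_url("10.0.0.20", 5432, "app", "appdb")
# -> postgresql://app@10.0.0.20:5432/appdb
```

In practice the private address would come from configuration rather than being hard-coded, and the database server's firewall would only accept connections from the application server.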
2. One server for everything
In this setup, a single server hosts the entire environment. For a typical web application, that includes the web server, application server, and database server. A common variant of this configuration is a LAMP stack on a single server (LAMP is an acronym for Linux, Apache, MySQL, and PHP).
It's the simplest setup available, enabling applications to get up and running quickly, but it offers little in the way of component isolation or scalability.
– Simple and quick to set up.
– Horizontal scaling is difficult
– The application and database compete for the same server resources (CPU, memory, I/O, etc.), which can hurt performance and make it difficult to pinpoint whether the application or the database is the source of a problem.
3. Caching Reverse Proxy or HTTP Accelerator
An HTTP accelerator, also known as a caching HTTP reverse proxy, can speed up the process of serving content to a user in several ways. The basic method is caching responses from a web or application server in memory, so that subsequent requests for the same content can be fulfilled promptly, with less interaction with the web or application servers. Varnish, Squid, and Nginx are a few programs capable of HTTP acceleration.
Helpful in an environment with content-heavy dynamic web applications or many commonly accessed static files.
– Some caching software can help defend against DDoS attacks.
– Increases website speed by reducing the web server's CPU load through caching and compression, improving the user experience.
– Some caching reverse proxies can also act as load balancers.
– A low cache-hit rate can hurt performance.
– Requires fine-tuning to achieve the best performance.
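The core caching idea can be sketched in a few lines: keep responses in memory keyed by request path, expire them after a TTL, and only contact the backend on a miss. This is a minimal illustration of the mechanism, not how Varnish, Squid, or Nginx are actually implemented; the class and function names are hypothetical:

```python
import time


class ResponseCache:
    """Minimal in-memory response cache: entries are keyed by
    request path and expire after a time-to-live (TTL)."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # path -> (expires_at, body)

    def get(self, path):
        entry = self._store.get(path)
        if entry is None:
            return None           # cache miss
        expires_at, body = entry
        if time.monotonic() >= expires_at:
            del self._store[path]
            return None           # expired entry counts as a miss
        return body               # cache hit

    def put(self, path, body):
        self._store[path] = (time.monotonic() + self.ttl, body)


def handle_request(cache, path, fetch_from_backend):
    """Serve from cache when possible; otherwise hit the backend
    and store the response for subsequent requests."""
    body = cache.get(path)
    if body is None:
        body = fetch_from_backend(path)
        cache.put(path, body)
    return body
```

The fine-tuning mentioned above largely comes down to choices this sketch glosses over: the TTL, which responses are safe to cache at all, and how much memory the cache may use.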
4. Reverse proxy or Load Balancer
Load balancers can be added to a server environment to improve performance and reliability by distributing the workload across multiple servers. If one of the load-balanced servers fails, the other servers handle the incoming traffic until the failed server becomes healthy again. An application-layer (layer 7) reverse proxy can also be used to serve multiple applications through the same domain and port. HAProxy, Nginx, and Varnish are three examples of software with load-balancing capabilities.
Effective in an environment that needs to scale by adding more servers, commonly known as horizontal scaling.
– By regulating the number and frequency of client connections, it can help defend against DDoS attacks.
– Facilitates horizontal scalability, which increases the environment’s capacity by adding additional servers.
– Can introduce complications that require additional decisions, such as where to perform SSL termination and how to handle applications that require sticky sessions.
– Insufficient resources or improper configuration can make the load balancer itself a performance bottleneck.
– The load balancer is a single point of failure: if it goes down, your entire service can go down with it. A high-availability setup is one without a single point of failure.
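The two ideas in this section, spreading requests across a pool and routing around failed servers, can be sketched with a simple round-robin scheduler that skips backends marked unhealthy. This is a toy illustration under assumed names (`RoundRobinBalancer`, `mark_down`), not how HAProxy or Nginx implement balancing:

```python
import itertools


class RoundRobinBalancer:
    """Hand out backends in round-robin order, skipping any
    backend currently marked unhealthy."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        """Record a failed health check; traffic routes around it."""
        self.healthy.discard(backend)

    def mark_up(self, backend):
        """Backend recovered; it rejoins the rotation."""
        self.healthy.add(backend)

    def next_backend(self):
        # Try at most one full pass over the pool.
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends available")
```

Real load balancers offer other algorithms too (least-connections, source-IP hashing for sticky sessions), and run the health checks that drive `mark_down`/`mark_up` automatically.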
5. Primary-replica database replication
One way to improve the performance of a database system that performs many more reads than writes, such as a content management system (CMS), is primary-replica database replication. Replication requires one primary node and one or more replica nodes. In this setup, all updates are sent to the primary node, while reads can be distributed across all nodes.
Good for improving the read performance of an application’s database layer.
– Improves write performance by dedicating the primary node to updates, since it spends no time serving read requests.
– Improves read performance by distributing reads across the replicas.
– Because updates reach the replicas asynchronously, there is a chance that a replica will serve out-of-date content.
– Lacks a built-in failover mechanism for when the primary node fails.
– The application accessing the database needs a mechanism to determine which nodes to send updates and read requests to.
– If the primary node fails, no updates can be made to the database until the issue is resolved.
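The routing mechanism mentioned in the drawbacks above can be as simple as inspecting each statement: writes go to the primary, reads rotate across all nodes. The sketch below is a naive illustration (the `PrimaryReplicaRouter` name and the SELECT-prefix check are assumptions for the example; production setups typically use a proxy such as ProxySQL or driver-level support instead):

```python
import itertools


class PrimaryReplicaRouter:
    """Send every update to the primary node; spread reads across
    all nodes (primary plus replicas), as described above."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._read_cycle = itertools.cycle([primary, *replicas])

    def node_for(self, statement: str) -> str:
        # Naive classification: treat SELECTs as reads, all else as writes.
        is_read = statement.lstrip().upper().startswith("SELECT")
        return next(self._read_cycle) if is_read else self.primary
```

Note that this sketch ignores replication lag: a read sent to a replica immediately after a write may not see that write, which is the staleness drawback listed above.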
Combining Different Concepts
In a single environment, it is possible to load balance both the caching servers and the application servers, and to use database replication as well. Combining these techniques aims to reap the benefits of each without introducing too many issues or too much complexity. Here is an example of what such a combined server environment might look like:
Here is what happens when a user requests static content:
– The load balancer checks the cache backend to see whether the requested content is cached (a cache hit) or not (a cache miss).
– On a cache hit, the cache backend returns the requested content to the load balancer.
– On a cache miss, the cache server forwards the request, through the load balancer, to the app backend.
– The app backend reads from the database and returns the requested content to the load balancer.
– The load balancer forwards the response to the cache backend.
– The cache backend caches the content and then returns it to the load balancer.
– The load balancer returns the requested content to the user.
Here is what happens when a user requests dynamic content:
– The load balancer receives the user's request for dynamic content.
– The load balancer forwards the request to the app backend.
– The app backend reads from the database and returns the requested content to the load balancer.
– The load balancer returns the requested data to the user.
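The two flows can be condensed into a single routing function: static content goes through the cache backend, dynamic content goes straight to a load-balanced app backend. This is a toy sketch; `pick_app_backend` stands in for the load balancer's server choice and `query_backend` for an app backend reading the database, both hypothetical helpers:

```python
def serve(path, is_static, cache, pick_app_backend, query_backend):
    """Route one request through the combined environment."""
    if is_static:
        if path in cache:                 # ask the cache backend first
            return cache[path]            # cache hit: straight back to the user
        backend = pick_app_backend()      # cache miss: load balance to an app backend
        body = query_backend(backend, path)  # app backend reads the database
        cache[path] = body                # cache backend stores the response
        return body
    # Dynamic content skips the cache entirely.
    return query_backend(pick_app_backend(), path)
```

A real cache backend would also handle expiry and cache-control headers, as in the HTTP accelerator section above; the dictionary here is just a stand-in.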
Although the load balancer and the primary database server remain single points of failure, this setup still provides the reliability and performance benefits described in each of the sections above.
Now that you're acquainted with a few fundamental server configurations, you should have a good idea of which setup you would use for your own application. If you're working on improving an existing environment, remember that an iterative process is best, so you avoid introducing too much complexity too soon.
We are Team EMB, the voice behind this insightful blog.