There are many ways to increase the capacity of your application. You can, of course, tune a single server to make it ever more powerful, but this quickly becomes expensive and eventually hits hard limits. It is far easier and cheaper to parallelize the application and run it on several servers simultaneously. This approach is not only cost-effective and simple but, in theory, endlessly repeatable: the application scales linearly, becoming more performant with every server you add.
Along with performance, availability increases as well, since the load balancer itself is redundant. If a server fails, the load balancer detects the failure and distributes its tasks among the remaining active servers.
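The failover behaviour described above can be sketched in a few lines of Python. This is a simplified model, not the balancer's actual implementation; the server names and the health-marking interface are illustrative assumptions:

```python
# Simplified model of failover in a round-robin load balancer:
# requests are only ever dispatched to servers currently marked healthy.
class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)      # hypothetical backend names
        self.healthy = set(servers)
        self._i = 0                       # rotation counter

    def mark_down(self, server):
        # In practice a periodic health check would call this.
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        # Distribute requests among the remaining active servers only.
        candidates = [s for s in self.servers if s in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy servers available")
        server = candidates[self._i % len(candidates)]
        self._i += 1
        return server


lb = RoundRobinBalancer(["web1", "web2", "web3"])
print([lb.next_server() for _ in range(3)])   # each server gets a turn
lb.mark_down("web2")                          # simulate a server failure
print([lb.next_server() for _ in range(4)])   # web2 no longer receives traffic
```

Once `mark_up("web2")` is called again, the server simply re-enters the rotation; no client-visible reconfiguration is needed.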
We can integrate your servers and applications into our data center, where we run multiple redundant load-balancing systems. Compared to an in-house data center, you save cost and time while benefiting from our centralized, highly reliable cluster.
Load balancing | Direct Routing method
Our load balancing is based on the direct routing method. Unlike the NAT method, the destination and source IP addresses are not rewritten; instead, direct routing operates at layer 2 of the OSI model, forwarding packets based on the MAC address. With this approach you can run a server landscape with an unlimited number of machines, and it outperforms the NAT method because response traffic does not have to pass back through the load balancer.
How it works: a client sends a request (1) to our load balancer. The load balancer processes the request and forwards it into the network (2), rewriting only the destination MAC address. The selected server recognizes the request as its own and accepts the IP packet (3). Its response can then be sent to the client via any gateway (4), without passing through the load balancer again.
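On Linux, this style of layer-2 direct routing is commonly implemented with IPVS (Linux Virtual Server) in its gatewaying mode. The following fragment is an illustration only, not our platform's actual configuration; all addresses are placeholders:

```shell
# Illustrative LVS-DR setup. 192.0.2.10 is a placeholder virtual IP (VIP);
# 192.0.2.21 and 192.0.2.22 are placeholder real servers.

# On the load balancer: create the virtual service (round robin) and add
# the real servers with -g ("gatewaying", i.e. direct routing by MAC).
ipvsadm -A -t 192.0.2.10:80 -s rr
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.21 -g
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.22 -g

# On each real server: hold the VIP on the loopback interface so packets
# addressed to it are accepted (step 3), but suppress ARP replies for it
# so that only the load balancer answers for the VIP on the LAN.
ip addr add 192.0.2.10/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

Because the real servers own the VIP locally, they answer with the VIP as source address and can send the response through any gateway (step 4).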