Learn the difference between LVE limits & Apache RLimits
What makes both a shared hosting provider and its customers happy? Stable servers, of course!
The problem is that even a single site can easily bring a server to a halt by consuming its CPU, memory, and IO resources. The best way to improve server stability is to limit and partition resources per account, thus avoiding slowdowns and downtime.
That’s exactly what CloudLinux OS does with the help of its proprietary Lightweight Virtualized Environment (LVE) technology. It allows hosts to set up resource limits (amount of CPU, IO, memory, processes, etc.) per account, and ensures that a tenant can never use more resources than he or she is given. If a single site hits a set limit, other sites will not be impacted, because the ‘offender’ is immediately throttled. This prevents performance spikes, ensures your customers are happy, and helps you avoid the avalanche of calls to your support team.
For the past 20 years, Apache RLimits and system ulimits have helped administrators limit the processes and resources an account can consume. CloudLinux OS, however, was designed specifically for multi-tenancy, and its LVE limits are the superstar: they provide stricter and clearer resource limits per account.
Here are some core differences:
For CPU limits, Apache’s RLimitCPU (or the system ulimit -t) sets a number of CPU seconds per process, and hitting the limit kills that process; it does not cap the amount of CPU an account can use at any one moment. With CloudLinux OS LVE limits, CPU is allocated to the whole user account, as a fraction of a core shared by all of the account’s processes running at the same time. If the processes try to use more, they are throttled: user processes are not killed, just slowed down.
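The per-process vs. per-account distinction can be sketched with a few commands. The httpd.conf directive is shown as a comment, and the lvectl call, its LVE id 1001, and the 50% value are illustrative placeholders; lvectl only exists on CloudLinux OS:

```shell
# Apache side: RLimitCPU takes soft/hard limits in CPU seconds *per process*.
# httpd.conf fragment (shown as a comment):
#   RLimitCPU 60 120    # soft 60s, hard 120s of CPU time, then the process is killed

# Shell equivalent: cap CPU seconds for this shell's child processes.
ulimit -t 60

# CloudLinux side (hypothetical LVE id and value): allocate CPU to the whole
# account. --speed is a fraction of a core shared by ALL of the account's
# processes; exceeding it throttles rather than kills.
if command -v lvectl >/dev/null; then
    lvectl set 1001 --speed=50%
fi
```

Note the asymmetry: the ulimit applies to each process independently, while the LVE speed limit is one budget the whole account shares.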
For memory limits, RLimitMEM limits memory consumption per process, so if an account runs 100 processes, they can take 100x the set limit. In contrast, the LVE memory limit caps the total memory consumption of all processes within the account, regardless of their number, which ensures that an individual user cannot use more memory than is allocated to the account and slow down the rest of the websites on the server. With the CloudLinux memory limit you know exactly how much memory user processes can take.
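The same contrast applies to memory. Again the httpd.conf directive is a comment, and the lvectl invocation with LVE id 1001 and the 1G value are illustrative placeholders for a CloudLinux OS system:

```shell
# Apache side: RLimitMEM is *per process* (in bytes); 100 processes can
# collectively take 100x this value.  httpd.conf fragment (as a comment):
#   RLimitMEM 536870912 536870912   # 512 MB soft/hard, per process

# Shell equivalent: per-process virtual memory cap, in kilobytes.
ulimit -v 524288    # 512 MB per process

# CloudLinux side (hypothetical LVE id and value): one cap for the whole
# account -- --pmem limits the total physical memory of ALL its processes.
if command -v lvectl >/dev/null; then
    lvectl set 1001 --pmem=1G
fi
```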
And even though the Number of Processes limits look the same in RLimits and LVE limits, LVE counts and limits ALL of an account’s processes, not only those launched by Apache. That said, you can also control the total number of processes per user with the system ‘ulimit -u’ command.
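A quick sketch of the process-count side, with the same caveats as above (httpd.conf directive as a comment; lvectl, LVE id 1001, and the cap of 100 are illustrative placeholders for CloudLinux OS):

```shell
# Apache side: RLimitNPROC caps processes per user, but it is only applied
# to processes Apache itself spawns (CGI scripts, forks).
# httpd.conf fragment (as a comment):
#   RLimitNPROC 25 25

# System-wide equivalent: cap the total process count for the current user.
ulimit -u 100

# CloudLinux side (hypothetical LVE id and value): LVE counts ALL of the
# account's processes -- cron jobs, SSH sessions, etc., not just Apache's.
if command -v lvectl >/dev/null; then
    lvectl set 1001 --nproc=100
fi
```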
In a nutshell, LVE limits provide a more flexible way to manage resources and include features that are absent from RLimits altogether, such as limits on IO, IOPS, and Entry Processes.
We’ve outlined the differences between LVE limits and RLimits in the chart below:
To summarize the benefits:
LVE limits can be changed on the fly; RLimits require processes to be restarted.
It is easy to see when LVE limits are hit, while detecting RLimit hits requires some sophisticated debugging; indeed, RLimits are often the cause of 'unexplained' issues.
LVE limit statistics are collected, so you know how much a user is using, or was using, at any given time. Nothing comparable exists for RLimits.
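Those last two points can be illustrated with a short sketch. The LVE id and values are hypothetical, and the flag names should be checked against the lvectl and lveinfo documentation for your CloudLinux OS version:

```shell
# lvectl and lveinfo exist only on CloudLinux OS, so guard the calls.
if command -v lvectl >/dev/null; then
    # Change limits on the fly -- takes effect immediately, no restart needed.
    lvectl set 1001 --speed=100% --pmem=2G

    # Inspect the collected usage statistics for that account over the
    # last day, to see what the user is using or was using.
    lveinfo --id=1001 --period=1d
fi
```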
If you want to know how to set and optimize CloudLinux OS limits, listen to this webinar recording. Our support specialist Bogdan Shyshka goes over the dangers of setting limits too low or too high, covers defaults and starting points, and shares some tips and tricks that can help you maximize your server performance.