CloudLinux


Tony

It's not really a suggestion considering we're going to be using it, but I figured I'd make a post. As we become more popular, the number of people who abuse our services grows with it. We needed a better solution, as having technicians disable accounts and send out emails was a time-consuming process. Users were also randomly causing service degradation, which is difficult to stop.

So basically what it will allow us to do is limit users' CPU automatically, which will reduce the amount of time technicians spend dealing with issues. Users who abuse resources will get a slow web site, while the vast majority of users will operate as if nothing is happening. That's how it should be: if a guy loads up 100 WordPress plugins and has 25-second page load times, he should not be slowing down your site; he should be slowing down strictly his own. The same applies to many other cases: attacks, looped scripts, exploited scripts, etc. In most cases where technicians previously had to act, they should no longer have to.

So some links: http://cloudlinux.co...tions/overview/

We've been running it on Frog Host, which is a production environment, for almost two weeks now. We were testing it even before then to see exactly what would happen. Here's a great example from our testing: we loaded up a vanilla WordPress install with a single post, then replicated a huge amount of traffic by having 500 concurrent users visit the blog. We loaded CloudLinux onto our little Opteron 170 test server, and we also had a brand-new, not yet deployed dual Xeon 5620 system. On the CloudLinux server that web site slowed down, but everything else such as SSH and cPanel remained responsive, as the PHP processes only got a few percentage points of CPU. We ran the same test on the Xeon machine, with 16 CPUs visible to the OS versus 2 on the Opteron, and it was slowing down the whole machine, as each process was able to take as much CPU as it liked. It's not a great test, obviously, but it gives you an idea of just how effective this can be in a lot of situations.

I have no doubt some users may end up with slower sites as a result: a small minority who are attempting to use more than their fair share of resources, say a WordPress blog with a 100 MB error_log file full of problems and lots of max-execution-time errors. The person with a properly optimized site, using caching in WordPress and so on, will find their site remains just as fast as before. Basically it's going to encourage people to optimize their sites, or to look at a VPS or dedicated server where they could attempt to use entire CPUs for extended periods of time.

As for a deployment timeline: it's deploying on Mustang and Spitfire next week. The week after, it'll show up on the rest of our shared hosting boxes, assuming everything goes as well as it did in our testing. That week we'll also deploy it on our first few reseller / mixed servers to make sure it works the same in that environment. Finally, the week after that it'll be deployed on all machines.

Feel free to post your comments and concerns here :)


Users who abuse resources will get a slow web site, while the vast majority of users will operate as if nothing is happening. That's how it should be: if a guy loads up 100 WordPress plugins and has 25-second page load times, he should not be slowing down your site; he should be slowing down strictly his own. The same applies to many other cases: attacks, looped scripts, exploited scripts, etc.

...

Basically it's going to encourage people to optimize their sites, or to look at a VPS or dedicated server where they could attempt to use entire CPUs for extended periods of time.

Fair enough.

Questions:

1) Are the resource limits applied for each cPanel account or for each domain/website?

2) Will we get any info to let us know that our website has been slowed down because it exceeds the resource limits?


Fair enough.

Questions:

1) Are the resource limits applied for each cPanel account or for each domain/website?

2) Will we get any info to let us know that our website has been slowed down because it exceeds the resource limits?

1) It is limited per cPanel account, since the limit is applied at the Linux user level.

2) There is no notification via email or anything like that. There is integration in cPanel, however, which shows real-time usage. The per-account usage levels we're going to set right now are going to be really high: not reflective of what a user could actually expect if everyone used that many resources, but rather the point past which we believe there would be serious problems. It'll be around 1 full CPU, which very few people attempt to utilize right now, but those few cause the majority of our problems, like I said.


The per-account usage levels we're going to set right now are going to be really high: not reflective of what a user could actually expect if everyone used that many resources, but rather the point past which we believe there would be serious problems. It'll be around 1 full CPU, which very few people attempt to utilize right now, but those few cause the majority of our problems, like I said.

That's a relief; I was concerned that the limits might be too stringent.

3) Will the limits for each cPanel account be the same across all shared and reseller plans?

4) How does CloudLinux inform the system administrator that an account has exceeded its limit? I'm asking because it would be nice if we were informed as well. I mean, if CloudLinux can alert the admin, then it should be able to inform the affected user too. It would give us a chance to correct the script problem or whatever caused the spike.


That's a relief; I was concerned that the limits might be too stringent.

3) Will the limits for each cPanel account be the same across all shared and reseller plans?

4) How does CloudLinux inform the system administrator that an account has exceeded its limit? I'm asking because it would be nice if we were informed as well. I mean, if CloudLinux can alert the admin, then it should be able to inform the affected user too. It would give us a chance to correct the script problem or whatever caused the spike.

3) It is limited per cPanel account.

4) It does not inform us either; the way it works is that it restricts the amount of CPU you can use in real time. Here's an example:

You can use a maximum of 100% (1 CPU).

You launch one process that attempts to use 100% CPU; it is allowed to.

You launch another, and now the CPU is split between the two, so each gets 50%.

You launch two more processes, for a total of four; you're now getting 25% per process.

So basically you're capped: the more CPU you attempt to use, the slower all your processes become. If you launched 10 processes each trying to use 100% CPU, each process would get only 10%, as your total account is capped at 100%.

Does that clear up how it works? It basically works the same way Xen, OpenVZ, etc. do, where you're capped on CPU. With those you had your own operating system; in this case you're still part of the main system, and only the processes we choose to put into CloudLinux's lightweight virtual environment are limited by the maximum resources we set.
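To make the arithmetic concrete, here's a minimal Python sketch of the even-split model described above (a simplification for illustration: the real scheduler allocates CPU time dynamically rather than as a fixed even split):

```python
def per_process_cpu(account_cap_pct: float, busy_processes: int) -> float:
    """Evenly divide an account's CPU cap among processes all demanding 100% CPU.

    Simplified model of the capping behaviour described above; a real
    scheduler allocates shares dynamically rather than as a fixed even split.
    """
    if busy_processes <= 0:
        raise ValueError("need at least one busy process")
    return account_cap_pct / busy_processes

# Account capped at 100% (1 CPU):
print(per_process_cpu(100, 1))   # 100.0 (one process may use the full cap)
print(per_process_cpu(100, 2))   # 50.0
print(per_process_cpu(100, 4))   # 25.0
print(per_process_cpu(100, 10))  # 10.0
```

Whatever the process count, the shares always sum to the account cap, so no single account can take more than its 100% slice of the machine.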


Yes, the examples helped a lot. Thanks.

5) What are the limits for:

a.) Memory

b.) I/O

c.) Inodes

a) It's going to be greater than the amount it's possible to use. We've always limited the number of PHP processes you can spawn, so it will not be possible to go over the memory limit, which will probably be in the generous 768 MB+ range.

b) Right now I/O limiting is not supported, so there is no limit there.

c) Our control panel says 500,000, and I don't think a single user is even remotely close to that amount. It's not enforced by CloudLinux either, so you could go over it; we just added the display to the control panel since users have been asking for one.
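As a rough illustration of how capping the PHP process count bounds total memory use, here's a sketch with hypothetical figures (the process cap and per-process memory_limit below are assumptions for the example, not Hawk Host's actual settings):

```python
# Hypothetical figures, for illustration only.
max_php_processes = 6       # assumed cap on concurrent PHP processes
php_memory_limit_mb = 128   # assumed per-process PHP memory_limit in MB

# The worst case is every allowed process hitting its memory_limit at once,
# so total PHP memory is bounded by the product of the two caps.
max_total_php_memory_mb = max_php_processes * php_memory_limit_mb
print(max_total_php_memory_mb)  # 768
```

The point is that an explicit memory limit is redundant when both factors of the product are already capped.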


a) It's going to be greater than the amount it's possible to use. We've always limited the number of PHP processes you can spawn, so it will not be possible to go over the memory limit, which will probably be in the generous 768 MB+ range.

b) Right now I/O limiting is not supported, so there is no limit there.

c) Our control panel says 500,000, and I don't think a single user is even remotely close to that amount. It's not enforced by CloudLinux either, so you could go over it; we just added the display to the control panel since users have been asking for one.

So far, so good. Not as bad as I thought.

CloudLinux...quite an excellent idea, actually. Looks like things are just gonna get better at Hawk Host! :)


I'm curious: say you had a Basic shared plan and your traffic and so on slowly built up over a period of time, so you then moved to something like Advanced shared hosting (for instance, if you don't really need a VPS yet). Would the limits be different between Basic and Advanced? Or if you moved up to a Reseller account to create separate accounts for the different domains you had, would the limits increase?

Or are the limits more there just to protect the stability of the server? I'm just curious -- either way, it sounds like a great idea. I'd certainly feel bad if I made a stupid mistake somewhere and caused problems for other people -- it's nice to have a safety net there.

In other words, if your site is growing, could you make incremental steps within the shared/reseller environment before jumping to a VPS or dedicated server? That might be a really cool way to scale up without disturbing others, then make the jump to a VPS, etc., when you really need it.


I'm curious: say you had a Basic shared plan and your traffic and so on slowly built up over a period of time, so you then moved to something like Advanced shared hosting (for instance, if you don't really need a VPS yet). Would the limits be different between Basic and Advanced? Or if you moved up to a Reseller account to create separate accounts for the different domains you had, would the limits increase?

Or are the limits more there just to protect the stability of the server? I'm just curious -- either way, it sounds like a great idea. I'd certainly feel bad if I made a stupid mistake somewhere and caused problems for other people -- it's nice to have a safety net there.

In other words, if your site is growing, could you make incremental steps within the shared/reseller environment before jumping to a VPS or dedicated server? That might be a really cool way to scale up without disturbing others, then make the jump to a VPS, etc., when you really need it.

The limits would likely be the same, unless we decided to offer a different type of addon (e.g. guaranteeing up to a certain threshold of resources). I don't see that happening anytime soon, however, as the primary reason for this is to ensure the stability of the server and prevent crashes caused by a single user. Ultimately, if you're surpassing these limits, you're likely not suited to a shared environment in the first place. We're not setting the limit thresholds low to affect people or milk the machines' resources, but to ensure a single individual can't exhaust them. Think of it as a fail-safe: _if_ a user or script tries to hog a lot of resources, it will be limited.

We could, however, offer a whole new tiered "guaranteed resources hosting" line or something of that nature. Baby steps, however :).

Unfortunately, we've now run into some issues with our first CloudLinux deployment that didn't happen in our test environment, so we're investigating that and working with the developers of the software involved to get it ironed out.


Looks like things are just gonna get better at Hawk Host! :)

Unfortunately, we've now run into some issues with our first CloudLinux deployment that didn't happen in our test environment, so we're investigating that and working with the developers of the software involved to get it ironed out.

Oops, let me qualify my sentence with "...after some teething problems, naturally."


  • 2 months later...

Any updates on these issues? Resolved?

They are not resolved, and they affect any host running LiteSpeed or using the event MPM in Apache. My guess is it has something to do with how event-driven web servers work. We simply can't handle anywhere close to the capacity we'd have without CloudLinux before we run into trouble with it. They don't figure they'll have time to dig deep into this until their next feature release, and after that they figure it might be several months before they're able to figure it out and solve it.


  • 2 months later...

They are not resolved, and they affect any host running LiteSpeed or using the event MPM in Apache. My guess is it has something to do with how event-driven web servers work. We simply can't handle anywhere close to the capacity we'd have without CloudLinux before we run into trouble with it. They don't figure they'll have time to dig deep into this until their next feature release, and after that they figure it might be several months before they're able to figure it out and solve it.

Will these issues disappear with the new LiteSpeed 4.1?


  • 2 months later...

I like this proactive approach to monitoring resource usage. My site won't have to suffer while someone on the node hogs all the shared resources until someone complains and the techs track down the offending user. I imagine the only people who won't welcome this are the folks who need to be on a VPS or dedicated box but don't want to give up the lower cost of shared hosting. Thank you for running CloudLinux.


  • 3 months later...
  • 1 month later...

I like this proactive approach to monitoring resource usage. My site won't have to suffer while someone on the node hogs all the shared resources until someone complains and the techs track down the offending user. I imagine the only people who won't welcome this are the folks who need to be on a VPS or dedicated box but don't want to give up the lower cost of shared hosting. Thank you for running CloudLinux.

This is the best application of "prevention is better than cure": fewer customer complaints and less work for the admins. Happy life! Heheh.

