The quest for colocation

Finally, the post a lot of you have been waiting for! I must admit I haven’t written as much here as I was originally planning to: I won’t include the names of all the companies I contacted and their responses, for example. And I certainly won’t go naming and shaming the companies that didn’t even reply to my queries.

I started by perusing an Ask Slashdot post and the plethora of web sites Google suggests that try to help you answer the question of what to actually ask ISPs… That yielded the rather long list of questions below:

General Aspects

  • Can I visit the data centre where my server will be colocated before I enter into any agreement with you?

    I didn’t ask this question because I want to visit rooms full of computers (although it was really interesting when I did go), but more to judge the security of a facility. If a visit is allowed, it means they are likely to let in anyone who does a good enough job of pretending to be a prospective customer. Then again they do all have crazy complex security systems anyway…

  • Can you recommend any particular server? Are certain configurations to be avoided?

    What are they most comfortable working with? Have they or their customers had problems with particular setups?

  • Are there any limits as to what content can be provided through your network?

    Obviously illegal content and activities won’t be allowed, but this can give an indication as to how busy their Internet connections are likely to be. Porn, file sharing, and IRC all eat a lot of bandwidth.

Your Network

  • How many connections do you have to the Internet backbones?

    Traffic to and from your server can transit via any of these physical routes, thus if one fails another can take over. The more of these connections there are, the less likely you are to get bumped off the ‘net.

  • Do you use BGP peering? Who are your peers?

    BGP is a must for any large network, and the more peers the better. Peering means routes are exchanged between the local routers and those of the peers, and the more peers there are the more likely it is that routes will quickly adjust when physical connections go down.

  • Do you use switched ethernet or hubs? Do you use 10 Mbit, 100 Mbit, or 1 Gbit ethernet?

    Any network that uses Ethernet hubs or 10 Mbit Ethernet these days is a joke. Most colos will use Gbit Ethernet between switches, but only offer 100 Mbit to your server, which is more than enough.

  • What are your failover / backup strategies in case a pipe / router / switch should fail?

    What happens if any of these components fail? Physical connections (pipes) and routers can all be, er, routed around by other routers and usually aren’t a problem at all. Switches usually can’t simply fail over and have to be replaced, so they should always keep spares around just in case.

  • Is there a firewall protecting the entire network? Are any ports in particular blocked or monitored?

    I’m a techie. I know what I’m doing. A firewall that I can’t control will only get in my way.

  • How many IP addresses is a server assigned? Can you please indicate pricing for extra addresses?

    If you’re using SSL, you can only host one certificate per port. If you want to keep port numbers out of customer URLs, you need to have an IP address for each protected host / domain (unless you’re using a wildcard certificate).
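The failover behaviour that BGP peering buys you (a few questions back) can be sketched as a toy route table: each peer advertises an AS path to your prefix, the router prefers the shortest path, and when a peer session drops its routes are withdrawn and the next-best path takes over. This is a deliberate simplification (real BGP best-path selection has many more tie-breakers), and all the peer names and AS numbers here are made up:

```python
# Toy model of BGP best-path selection: prefer the shortest AS path,
# and fail over automatically when a peer's routes are withdrawn.
# (Real BGP has many more tie-breakers; this only shows the idea.)

def best_path(routes):
    """routes: dict of peer name -> AS path (tuple of AS numbers).
    Returns the peer whose advertised AS path is shortest."""
    return min(routes, key=lambda peer: len(routes[peer]))

# Hypothetical peers advertising paths to the same prefix.
routes = {
    "peer-a": (65001, 65010),         # two AS hops
    "peer-b": (65002, 65020, 65010),  # three AS hops
}

print(best_path(routes))  # peer-a wins: shorter path

# peer-a's session drops, so its routes are withdrawn...
del routes["peer-a"]
print(best_path(routes))  # ...and traffic shifts to peer-b
```

The point is simply that recovery is automatic: nobody has to reconfigure anything when a physical connection dies.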

Technical Support

  • Do you monitor the facility 24 hours a day, 7 days a week, 365 days a year without interruption?

    If the answer is not ‘yes’, run like hell! Having staff present all the time isn’t necessarily a must, but some sort of monitoring definitely is and should be capable of waking people up in the event of failure.

  • What sort of monitoring is performed on servers and other network equipment? How is everything monitored?

    It’s nice if they offer some sort of monitoring for your own servers, since getting a third party to do this can be very costly. Of course, monitoring of their own equipment is imperative.

  • Do you offer a remote hands / remote reboot facility? How does that work?

    Sometimes you have to pay extra for this, sometimes not. Also, remote reboot is sometimes done by someone physically pressing the reset button on your server, and sometimes there is a device that power cycles your server automatically. Remote hands can be handy if you’re a long way away and don’t fancy driving around the country to pop in an extra stick of RAM, for example. It does involve a certain amount of trust, however.

  • What kind of uptime can you guarantee?

    Most don’t like giving guarantees at all. Usually they will say that anything less than 100% isn’t good enough for them, and that’s very good. If they give a guarantee like 99.99%, that only allows them 52 minutes and 33 seconds of downtime per year.

  • Do you offer (partial) refunds if your SLA is broken?

    If they offer an SLA of course… If not, you tend to have cheaper fees. If so, would a small amount of money really compensate for hours of downtime? I don’t think so.
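Those uptime percentages are easy to sanity-check yourself. A quick sketch, assuming a 365-day year (the 99.99% figure matches the 52-odd minutes quoted above):

```python
# Rough downtime allowance for a given uptime guarantee,
# assuming a 365-day (non-leap) year.

def downtime_per_year(uptime_percent):
    """Seconds of downtime a given uptime percentage allows per year."""
    seconds_in_year = 365 * 24 * 3600  # 31,536,000 s
    return seconds_in_year * (100 - uptime_percent) / 100

for sla in (99.0, 99.9, 99.99, 99.999):
    secs = downtime_per_year(sla)
    print(f"{sla}% uptime allows {secs / 60:.1f} minutes down per year")
```

Even a 99.9% guarantee still allows nearly nine hours of downtime a year, which puts those partial refunds into perspective.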

The Facility

  • Do you have temperature and/or humidity control systems in place?

    Air conditioning is a definite must; humidity control usually isn’t all that important, but it’s a good indication of how seriously they take themselves.

  • Does the facility have raised flooring?

    A good indication of whether the facility was purpose built.

  • How high off the actual floor is the lowest server in a rack likely to be?

    If you’re in a region prone to flooding, this can be very important! If you’re right at the bottom of a rack, forget it. If you’re at the top, you have a good chance of survival! Of course a lot of the time you’re several floors up and this means nothing at all…

  • Do you use drop ceilings for cabling?

    Once again, if you’re somewhere that floods regularly having all the wiring under the floor might not be such a good idea.

  • What kind of fire safety / fire suppression systems do you have in place?

    Say a server in the rack next to yours fries and catches fire. How soon will they know about it? Can they put it out without drenching the place with water, and thus ruining all the kit? If they use gas, is it safe if there’s someone in the building?

  • What kind of flood safety can be expected?

    If you’re in a region that’s prone to flooding…

  • What kind of power backup systems are in place?

    You should expect at least some sort of battery backup system and power generators to keep you going for several hours without utility power.

  • How well marked are the emergency power cut-off switches?

    Can some twerp easily mistake them for posh light switches and make the room go very, very quiet?

  • Do you keep racks in cages? Are they locked?

    Physical security here. If customers are allowed in regularly, how easy is it for them to get at your server? Most places don’t use cages, but most do keep the servers in locked racks, though this does have an impact on cooling.

  • What sort of physical access can be achieved? Can I visit my server in person? How often can I enter the facility? How much notice is required to do so?

    If a disk in my RAID array dies, how soon can I get at my server to replace it and start the resync? Things in this world like to die in pairs or threes, so the shorter the notice the better. And what’s the point of even having hot swap if you aren’t allowed in and they have to take down your server and let you work on it in a separate room?

  • If somebody causes damage to my server, or happens to unplug it, what sort of compensation can I expect to get?

    If some klutz of a customer spills his tea / drops his server on mine, what happens?
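On the power backup question a few items back, you can get a rough feel for how long a battery bank carries the load before the generators must be running. A back-of-envelope sketch with entirely made-up figures:

```python
# Back-of-envelope UPS runtime: how long a battery bank carries a load
# before the generators have to take over. All figures are invented
# examples, not numbers from any real facility.

def ups_runtime_hours(battery_wh, load_watts, efficiency=0.9):
    """Hours of runtime from a battery bank of battery_wh watt-hours
    feeding load_watts through an inverter of the given efficiency."""
    return battery_wh * efficiency / load_watts

# e.g. a 100 kWh battery bank behind 20 kW of racks:
print(f"{ups_runtime_hours(100_000, 20_000):.1f} hours")  # 4.5 hours
```

In practice batteries are usually only expected to bridge the seconds-to-minutes gap until the generators spin up; the “several hours” comes from the diesel.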

So, how well did all the companies I contacted answer? I sent my questions to 15 different companies, all in the UK (although one was rather international). Only 9 of these bothered to get back to me at all, and, as I mentioned before, I was going to name and shame the non-respondents but I’ve decided against it now. By the time the 9th company got back to me I had already chosen my favourite candidate, and so I didn’t get all the details out of them.

Before I launch into a detailed discussion about various companies let me say that, below the surface, all of the companies offering colocation gave mostly the same service: a slot, in a rack, in a secure data centre with very good environmental control (temperature and humidity). In fact, most of the companies are located in the same series of data centres: Interhouse Redbus. Most of the differences appeared in the packages each offered, and there was also some variation in their Internet connections, so I will focus mostly on those.

My searching found packages ranging in price from £23/month right up to £100/month or more, although most were around the £50 mark. Some had a setup charge of up to £100, and some waive it if you pay quarterly or yearly. And, I must add, some of the cheaper companies were far more impressive than the much more expensive ones, so don’t rule out the cheap ones!

All of the companies that replied to me offered bandwidth of 100Mbps on their internal networks, and bursting to that speed over their Internet connections. I was in contact with one company before my questions, and that one only offered 10Mbps bursting throughout the network. Depending on what you’re doing, this could make a really big difference.
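The gap between link speed and actual transfer allowance is worth seeing in numbers. A quick sketch (decimal gigabytes, 30-day month assumed):

```python
# What a transfer allowance means as a sustained rate, and what a link
# could theoretically shift flat out. 30-day month, decimal GB.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def monthly_gb_at(mbps):
    """GB per month a link of the given megabits/second could carry flat out."""
    return mbps * 1e6 / 8 * SECONDS_PER_MONTH / 1e9

def average_mbps_for(gb_per_month):
    """Sustained megabits/second that a monthly GB allowance works out to."""
    return gb_per_month * 1e9 * 8 / SECONDS_PER_MONTH / 1e6

print(f"10 Mbps flat out: {monthly_gb_at(10):,.0f} GB/month")   # 3,240 GB
print(f"50 GB/month averages {average_mbps_for(50):.2f} Mbps")  # ~0.15 Mbps
```

So even a 10 Mbps cap could in theory move terabytes a month; what the 100 Mbps burst really buys you is the ability to serve a sudden rush of visitors quickly, not more total transfer.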

Actual monthly data transfer was another story altogether: most companies only offered data transfer in the low tens of gigabytes (less than 50GB/month), which may sound like a lot, but I’m heading for 15GB usage in my first month, and I haven’t posted any largish downloads nor do I have a very popular site, so be very careful about this. Only a couple of companies offered 100GB data transfer as standard. Also, it’s very interesting to know what happens when you go over your limit: do you get cut off or have to pay extra? How much extra? Some will charge a fixed amount per GB, some will bump you up to the next highest tier; it all depends what’s best for you.
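To see how the two overage schemes compare, here is a sketch with invented prices (none of these figures come from a real provider):

```python
# Comparing two hypothetical overage schemes: a fixed charge per extra
# GB versus being bumped up to the next tier. All prices are invented.

def per_gb_cost(base_fee, allowance_gb, per_gb, used_gb):
    """Monthly cost when overage is billed at a flat rate per GB."""
    over = max(0, used_gb - allowance_gb)
    return base_fee + over * per_gb

def tier_cost(tiers, used_gb):
    """Monthly cost when overage bumps you to the cheapest tier that
    covers your usage. tiers: list of (allowance_gb, fee), ascending."""
    for allowance, fee in tiers:
        if used_gb <= allowance:
            return fee
    return tiers[-1][1]  # off the top: assume the biggest tier applies

tiers = [(50, 29.0), (100, 39.0), (250, 59.0)]
for used_gb in (40, 70, 120):
    print(used_gb, "GB:",
          per_gb_cost(29.0, 50, 0.50, used_gb), "per-GB vs",
          tier_cost(tiers, used_gb), "tiered")
```

With these particular numbers a small overrun costs about the same either way, but a big one strongly favours the tier bump; with a different per-GB rate it flips, which is exactly why it pays to ask.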

Next, what about multiple IP addresses? Well, why would you want more than one IP address in the first place? I can think of two good reasons: the first and most popular is that you can only have one SSL certificate per port number, so if you want to host several HTTPS web sites each has to have its own port number. If you don’t want people to have to type in non-standard port numbers, you need more IP addresses. The second reason? Domains need at least two DNS servers attached to them, and you can only run one DNS server per IP address. OK, so it’s cheating to run the same server on both IP addresses, but it does make it work! So, how many do you get and how much does it all cost? Most places give you one IP address as standard, and most give you extra addresses for free (within reason). Some won’t give you extras (!), and some will charge you for each extra IP (up to £10/month in one case).
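The one-certificate-per-IP-and-port constraint turns into a simple bit of planning arithmetic: every HTTPS site on the standard port needs its own address, while all the plain HTTP sites can share a single address via name-based virtual hosting. A minimal sketch (the hostnames are invented examples):

```python
# How many IP addresses a box needs when every HTTPS site must sit on
# its own address (one certificate per IP on port 443), while plain
# HTTP sites can share one address via name-based virtual hosting.

def ips_needed(sites):
    """sites: list of (hostname, needs_https). Returns the number of
    addresses required: one per HTTPS site, plus one shared for HTTP."""
    https = sum(1 for _, needs_https in sites if needs_https)
    http = any(not needs_https for _, needs_https in sites)
    return https + (1 if http else 0)

sites = [
    ("www.example.org", False),
    ("blog.example.org", False),
    ("shop.example.org", True),
    ("mail.example.org", True),
]
print(ips_needed(sites))  # 3: one shared HTTP address plus two for HTTPS
```

Add a couple of HTTPS sites and the count climbs quickly, which is why a provider charging £10/month per extra IP is worth avoiding.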

I ended up going for a company called RapidSwitch, which trades under the name 49pence. They were the second cheapest offer at £29/month, but they offer 100GB data transfer, unlimited IP addresses, and all sorts of snazzy extras like a free monitoring service. They use Interhouse Redbus, which is really quite an impressive facility (I visited it), and their network gear certainly has something going for it: a pair of Extreme Networks BlackDiamond switches make up the backbone of their network. They also have two rather large Cisco routers connecting everything to the Big Bad Internet, whose names I can’t remember. Both the switches and the routers are redundant, i.e. if a part fails another is ready to take over, and parts can be replaced without powering down the device, right down to the power supplies. Should a whole router or switch topple over, the second one is there to take over automatically. I wish all ISPs did this! Thanks Ed and friends, you’re a load of cool people, and if you’re reading this please fill me in on details if I’ve missed any!

1 thought on “The quest for colocation”

  1. An interesting write up. Something I would like to look into when I can justify, and afford, it! Hope it keeps on going smoothly for you.

Comments are closed.