09 Mar 2009 10:38
So as you have probably already read in my previous blog post, we are thinking about re-designing the Wikidot infrastructure to make it more failure-proof, more elastic and more efficient. One of the concepts we have is to use one of the "cloud solutions", e.g. Amazon AWS (i.e. EC2 + S3).
So here are a few facts:
- In at most 6 months from now we need to reorganize our server infrastructure anyway — simply because demand for Wikidot is growing.
- We want to create a more self-configuring (self-healing) infrastructure, removing as many single-points-of-failure as possible.
- We want to be able to scale just by throwing additional servers into the cluster, possibly within minutes, and be able to throw them away when we do not need them.
- Managing hardware is a pain and we would rather move our effort into higher-level management.
- We want better separation of various services we are running (like daily tasks, log analysis etc.). Some of them deserve separate servers.
Some of the solutions that more or less comply with the above requirements:
- Simply add more hardware — it would work; it works for most applications. But it is costly and not elastic, and it is difficult to add resources on-the-fly. With SoftLayer server provisioning is really good: we can have a new server within 2-3 hours.
- Virtualize our own hardware — SoftLayer has an offering here, we would need to look deeper into this. But is it really what we need? Our hardware would still be a point of failure.
- Use virtualized instances — we could get virtual instances from a 3rd party provider (is SL going to offer virtual servers?). The problem however is that we also need good performance from our boxes, and thus we would need a good degree of control over them.
- Use a "pure cloud solution" like Google App Engine. GAE is out-of-question because we would need to rewrite our code, and I am not sure it can run something as complex as Wikidot, with a lot of background services etc.
- Use Amazon EC2 — guess what, it looks like an optimal solution.
There is more info about EC2 here. Basically you can rent instances (virtual servers) on a per-hour basis ($0.10 - $0.80 per hour), there is a nice API to manage your instances, storage, IP addresses etc. EC2 deserves a separate article obviously. I can only say that:
- I am using EC2 + S3 + SQS in one of my other projects and it rocks in terms of performance and scalability. A properly-designed application can handle millions of visits per day without much magic.
- Pricing is nice, but it does not necessarily mean we could save any $$ by moving to AWS. You pay only for what you are using, with no up-front fees or plans. Good for small startups that grow over time.
- AWS meets most of our above requirements.
One of the nice things about EC2 is that you can get a new server within 2-5 minutes, use it as long as you wish, and terminate it. Everything is automated. There are dozens of Linux images available, tons of documentation and support from the community and Amazon itself (this one is paid extra).
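Just to give a feel for how automated this is, here is a rough sketch in Python using the boto library. The AMI id, key pair name and credentials below are placeholders, not our actual setup:

```python
# Sketch: launch a virtual server via the EC2 API, wait for it to come up,
# then throw it away. All identifiers and credentials are placeholders.
import time
import boto

# Connect to EC2 with (placeholder) API credentials.
conn = boto.connect_ec2('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

# Launch one small instance from a (hypothetical) Linux AMI.
reservation = conn.run_instances('ami-00000000',
                                 instance_type='m1.small',
                                 key_name='wikidot-test')
instance = reservation.instances[0]

# Poll until the instance is running (usually takes a few minutes).
while instance.state != 'running':
    time.sleep(10)
    instance.update()
print('Server is up at %s' % instance.public_dns_name)

# ...use it for as long as needed, then terminate it (billing stops).
conn.terminate_instances(instance_ids=[instance.id])
```

The same API also covers storage volumes, elastic IP addresses, security groups and so on.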
Since we already had experience with AWS, we decided to run a couple of tests last Friday, just to get a glimpse of the situation. So what we did was:
- We set up a simple cluster configuration (1 front-end web server + 1 database server)
- We installed Wikidot on it using a fresh database dump.
- We tried to simulate read-only traffic by taking access logs from Wikidot.com and replaying the requests against the test server in parallel (a rough sketch of such a replay follows below).
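For illustration, this kind of replay can be done with a short script. The sketch below assumes a combined-format access log and a hypothetical test host; it is not the exact tool we used:

```python
# Sketch: replay GET requests from an access log against a test server,
# in parallel, to simulate read-only traffic. The host, log file and
# level of concurrency are illustrative assumptions.
import re
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = 'http://test-cluster.example.com'

def paths_from_log(filename):
    """Yield request paths of GET requests found in a combined-format log."""
    pattern = re.compile(r'"GET (\S+) HTTP')
    with open(filename) as log:
        for line in log:
            match = pattern.search(line)
            if match:
                yield match.group(1)

def replay(path):
    """Issue one read-only request and return the HTTP status (or the error)."""
    try:
        with urllib.request.urlopen(TARGET + path, timeout=30) as resp:
            return resp.status
    except Exception as exc:
        return exc

# Fire the recorded requests through a pool of workers.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(replay, paths_from_log('access.log')))

ok = sum(1 for r in results if r == 200)
print('%d of %d requests returned 200' % (ok, len(results)))
```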
The only thing we are still concerned about is database performance in the virtualized (Xen) environment, over network-attached drives. Although in our tests the database was doing really fine, we need to do more read+modify tests.
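For the read+modify side, the idea is simply to time transactions that both read and update rows. A minimal sketch using psycopg2; the connection string, table and column names are hypothetical placeholders, not the real Wikidot schema:

```python
# Sketch: time a batch of read+modify transactions against PostgreSQL.
# The connection string, table and columns are hypothetical placeholders.
import random
import time
import psycopg2

conn = psycopg2.connect('dbname=wikidot_test user=wikidot host=db.internal')

N = 1000
start = time.time()
with conn.cursor() as cur:
    for _ in range(N):
        page_id = random.randint(1, 100000)
        # Read a row...
        cur.execute('SELECT title FROM page WHERE page_id = %s', (page_id,))
        cur.fetchone()
        # ...then modify it within the same transaction.
        cur.execute('UPDATE page SET revision_counter = revision_counter + 1 '
                    'WHERE page_id = %s', (page_id,))
        conn.commit()
elapsed = time.time() - start
print('%d read+modify transactions in %.1fs (%.1f tx/s)' % (N, elapsed, N / elapsed))
conn.close()
```

The interesting part on EC2 is how numbers like these compare between the local disk and a network-attached volume.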
It looks like Wik.is (although a bit less popular than Wikidot) is using quite an interesting cluster on EC2.
Amazon AWS is getting a lot of attention recently, and the recent changes and future enhancements look really promising.
After performing the tests we terminated the servers — Amazon charged us about $3 ;-)
BTW: This opens one more interesting case for us. SaaS with Wikidot — would you like to get your very own Wikidot installation within minutes, hosted on a virtual server instance? Yes, we know how to do this, and we might automate the whole process someday.
Google App Engine supports neither PHP nor PostgreSQL. You need to write a Python application (it can be Django) and use their ORM with the Google database (non-relational, but nice anyway). They supply an automatic traffic-distribution network (clients get connected to the Google server closest to their location). Other than that, there are not too many nice things about it. Good for some fun things, probably not for a professional project. But I predict this will change once they test everything :).
Piotr Gabryjeluk
visit my blog
Outsourcing to a major company like Amazon is appealing. Wikidot can then scale easily and you can use your time more productively.
I have read the Terms of Service of Amazon: there are a lot of clauses under which they can switch off their service themselves… and then you have a lot of problems.
What about the content of the wikis (subdomains) on Wikidot? If there is abusive and forbidden content, who controls it?
Do you end up with a lot of OTHER work instead, paying for less operational work with more control work?
You give the last bit of control out of your hands to another company. This is outsourcing anyway, but no Service Level Agreement can help you if something goes wrong and you have perhaps only a few days to go back to your old environment.
Those are my thoughts about giving the service away to a third party.
Service is my success. My web tips: www.blender.org (open source), Wikidot-Handbook.
You can ask questions and contribute in the German-language » user community for Wikidot users or
in the German » Wikidot Handbook.
Thanks for the comments.
Amazon's Terms look quite OK to me — every hosting provider reserves the right to terminate agreements, but from what I can see Amazon mostly secures themselves against spamming activities. The entry cost of running services within EC2 is so low that it has been notoriously abused, e.g. to send spam. This is why they mention spamming so many times in their Terms.
However, the stories of "real people" using AWS are encouraging. And I have not heard about any particular case where Amazon terminated an agreement with a legitimate client. The AWS Blog is quite hype-free and a nice starting point.
EC2 also has a 99.95% SLA, but this one is tricky (on paper) — they only guarantee availability of the whole "availability zone" (datacenter), not individual instances nor any parts of it. But again — it looks like their services are available close to 100%. On the other hand most of "dedicated-hardware" datacenters only give you SLA on network availability — hardware failures are obviously an issue you need to care about yourself.
I guess we would rather wait until we have some real-project experience with AWS — some of my friends are just deploying there. Probably we will install a database mirror for Wikidot on EC2 next week too. At all costs I would like to avoid "improvements" that have not been thought through carefully. Especially since Wikidot is not starting from zero traffic; it is a high-traffic project with a lot of content and high load. If we do any kind of infrastructure change, we need to make sure it works from the very first second.
Michał Frąckowiak @ Wikidot Inc.
Visit my blog at michalf.me
Amazon EC2 with S3 is a great combination. Also check Mosso by Rackspace.
—
Thejesh GN