Milchi wrote:
stconrad wrote:
Thanks both for responding. Brian, I look forward to what you find out.
I have been doing more research and found architectures similar to what Dave described. One change was that the Joomla docroot was on a shared drive behind all the webservers. This essentially syncs their codebase instantly, without having to do an rsync.
When we finalize our architecture and it is load tested, I will diagram it and post it here.
We use the following configuration:
[load balancer] --> Webserver 1, Webserver 2, Webserver 3
Joomla docroot on an NFS share on a SAN cluster
1 database server for all operations
It works very well, with nice page times, but there is one big problem:
you can't work with the backend.
If the load balancer kicks you onto another server while you are working, you have to log in again and all your work is lost. We never found a solution to this problem. In the frontend, this problem never happens....
This is something that you can handle at the load balancer level. Our configuration:
Internet --> LB layer (2 x Dell R210 II servers with Red Hat load balancing, active/passive) --> Proxy layer (a bunch of Dell R210 II servers with Apache load balancing) --> Web layer (several Dell R710 servers) --> DB layer
On the proxy layer we add cookies (proxyserver=&lt;number&gt; and webserver=&lt;number&gt;), so the first request is load balanced, but all subsequent requests go to the same proxy server and web server. You will have to configure mod_proxy_balancer to check the cookies as described in the following manual:
http://httpd.apache.org/docs/2.2/mod/mo ... ancer.html
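The cookie-based pinning can be sketched roughly like this in mod_proxy_balancer terms — the hostnames, routes, and the ROUTEID cookie name here are illustrative placeholders, not our actual config:

```
# Sketch of cookie-based sticky sessions with mod_proxy_balancer
# (hostnames and cookie name are placeholders)
<Proxy balancer://webcluster>
    BalancerMember http://web1.internal:80 route=1
    BalancerMember http://web2.internal:80 route=2
</Proxy>

# Tie each request to the worker whose route matches the ROUTEID cookie
ProxyPass / balancer://webcluster/ stickysession=ROUTEID

# Set the cookie whenever the balancer picks (or changes) a route
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
```

With something like this in place, the first request gets balanced, the response sets the cookie, and every later request from that browser lands on the same worker — which is what keeps a backend session alive.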
Alternatively, you can force the load balancer to use a different configuration for the backend URI, pointing it to one server only - we have tried that, and it works well with rsync file replication between the web servers.
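The backend-URI variant can be as small as an explicit ProxyPass for the Joomla admin path ahead of the balanced rule — again a sketch, with placeholder hostnames:

```
# Sketch: send the Joomla backend to one fixed web server,
# while all other traffic stays load balanced (hostnames are placeholders)
ProxyPass        /administrator http://web1.internal/administrator
ProxyPassReverse /administrator http://web1.internal/administrator
```

Since every admin logs in to the same server, the session-loss problem described above disappears, at the cost of that one server being required for backend work.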
In terms of the database, we use bidirectional replication, well described in the following article:
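For reference, the core of a bidirectional (master-master) MySQL setup is a pair of my.cnf fragments like these — the server IDs and offsets are illustrative, and the remaining steps (replication users, CHANGE MASTER TO) are in the MySQL manual:

```
# my.cnf on DB server A (server B mirrors this with server-id=2, offset=2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto-increment-increment = 2   # two masters in the ring
auto-increment-offset    = 1   # A generates odd ids, B even ids
```

The auto-increment settings keep the two masters from generating colliding primary keys while both accept writes.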
Last but not least, by making all the traffic go through separate VIPs on the load balancer layer, the infrastructure is fully fault tolerant, without a single point of failure from the visitors' perspective. From the backend perspective, the proxy layer is pinned to a single web server...
Performance-related note: if you have a massive infrastructure with over 300 servers, shared storage is a "must have", but if you have fewer than 30 servers, rsync and the solution above will suffice in 99% of cases. Assuming you have more memory than the size of the contents of /var/www, I would also recommend putting the following command in cron:
find /var/www/ -type f -print0 | xargs -0 cat > /dev/null
It makes sure that all the files are kept in the system page cache, so they are served from RAM rather than disk.
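As a crontab entry this could look like the line below — the 15-minute interval is just an example; anything frequent enough to beat cache eviction will do:

```
# Hypothetical crontab entry: re-warm the page cache every 15 minutes
*/15 * * * * find /var/www/ -type f -print0 | xargs -0 cat > /dev/null
```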