…in the .NET development arena.
Consider a system with many distributed nodes gathering data that must be sent to a single, central enterprise-level system ("the core").
These remote data-gathering nodes can send up to 5 transactions per second to the core for processing, though the actual rate will likely be lower. Each transaction is a small serialized XML payload, probably around 50 bytes.
With 1,000 remote nodes in this application, the potential maximum is 5,000 TPS.
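A quick back-of-envelope check on those figures (Python used just for the arithmetic; the 50-byte size is the post's estimate, so framing and protocol overhead are ignored):

```python
# Back-of-envelope capacity check using the figures from the post.
NODES = 1000            # remote data-gathering nodes
TPS_PER_NODE = 5        # worst-case transactions per second per node
TX_BYTES = 50           # approximate size of one serialized XML transaction

peak_tps = NODES * TPS_PER_NODE          # worst-case aggregate rate at the core
inbound_bytes_per_sec = peak_tps * TX_BYTES

print(peak_tps)                  # 5000
print(inbound_bytes_per_sec)     # 250000 bytes/s (~244 KiB/s)
```

The raw inbound bandwidth is tiny; the real constraint is the per-message processing rate (connection handling, parsing, fan-out), not bytes on the wire.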
As a transaction is received at the core, it is processed (de-serialized) and its contents are re-broadcast to connected clients (the UI), so these clients are aware of the data originally sent from a remote node. At most 20 connected clients will be interested in receiving any single transaction broadcast from the core.
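The receive → de-serialize → re-broadcast path could look roughly like the following (a minimal Python sketch for illustration; the real core would be C#, and the XML shape, attribute names, and subscriber registry here are all assumptions):

```python
import xml.etree.ElementTree as ET

# Hypothetical subscriber registry: client id -> callback that delivers a message.
subscribers = {}

def handle_transaction(raw_xml: bytes) -> dict:
    """De-serialize one incoming transaction and re-broadcast it to interested clients."""
    root = ET.fromstring(raw_xml)                  # parse the ~50-byte XML payload
    tx = {"node": root.get("node"), "value": root.text}
    for deliver in subscribers.values():           # fan-out: at most ~20 UI clients
        deliver(tx)
    return tx

# Example: two subscribed UI clients receiving one transaction from node 42.
received = []
subscribers["ui-1"] = received.append
subscribers["ui-2"] = received.append
handle_transaction(b'<tx node="42">17.5</tx>')
print(received)   # both clients saw {'node': '42', 'value': '17.5'}
```

Since the fan-out multiplies outbound traffic by up to 20, the broadcast side is where most of the core's socket work happens.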
I have already developed a C++/C# IOCP socket server as a Windows service, which I believe will provide the best performance and the highest possible TPS processing capability, but I don't think any single socket server instance will be able to handle 5,000 TPS. I therefore expect to need load-balancing hardware up front to spread the transactions received at the core across a number of back-end application servers. The load-balancing hardware will need to be in an active/passive arrangement to avoid a single point of failure at this level, and I think it will also need to be intelligent enough to distribute transactions based on the CPU activity of the target app servers.
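The "distribute by CPU activity" policy amounts to least-loaded routing; a minimal sketch of the selection step (the backend names and the mechanism for reporting CPU, e.g. a health-check agent polled by the balancer, are assumptions):

```python
def pick_backend(cpu_load: dict) -> str:
    """Choose the app server currently reporting the lowest CPU utilisation.

    cpu_load maps backend name -> most recently reported CPU percentage,
    e.g. gathered by the balancer over a health-check/agent channel.
    """
    return min(cpu_load, key=cpu_load.get)

# Example: three back-end app servers behind the balancer.
load = {"app-1": 72.0, "app-2": 35.5, "app-3": 58.0}
print(pick_backend(load))   # app-2
```

In practice, hardware balancers offer this as a built-in method (often called dynamic or agent-based load balancing, alongside simpler least-connections schemes), so it is usually a configuration choice rather than something to implement.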
Has anyone got any experience of developing a system of this size/type?
Does anyone have any good suggestions for intelligent load balancing hardware?