15 MB/s is probably the minimum we want. We need to ensure there's enough capacity to stay ahead of the data rate to disk (which is between 8--12 MB/s depending on config, lower if you average over beam trips and periods when CODA isn't running). FWIW, with rcp the outbound bandwidth was 'self-capped' at around 220 Mbit/s, or ~27 MB/s.
It's very possible that the combined network and disk traffic on adaql2 is reaching the limit of the PCI bus. A 66 MHz/32-bit PCI bus peaks at 266 MB/s. All of the network and disk traffic must travel over this same bus, so there's a multiplying effect: DAQ traffic gets rerouted to the RAID, the software RAID imposes an additional traffic multiplier (the 'R'edundancy part), the analyzers all hit both the RAID and the network, and the periodic silo copy eats up whatever's left. Hmm, it looks like the video card (w/ dual displays) is on the same bus too.
If it's only a PCI/33 MHz bus (133 MB/s peak), then we're hitting its limit for certain.
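A rough back-of-envelope budget for the bus, using the rates quoted above. The traffic multipliers (RAID write amplification, number of analyzer streams) are illustrative assumptions, not measured values:

```python
# Back-of-envelope PCI bus budget. Bus peaks and the DAQ/silo rates
# come from the log entry; the multipliers are assumed for illustration.
PCI_66_32 = 266.0  # MB/s theoretical peak, 66 MHz / 32-bit PCI
PCI_33_32 = 133.0  # MB/s theoretical peak, 33 MHz / 32-bit PCI

daq_in = 12.0            # MB/s inbound DAQ rate (upper end of 8--12 quoted)
raid_write = daq_in * 2.5  # assumed: rerouting to RAID + software-RAID redundancy
analyzers = 2 * daq_in     # assumed: a couple of analyzers reading RAID/network
silo_copy = 15.0           # MB/s outbound silo copy (the minimum target above)

total = daq_in + raid_write + analyzers + silo_copy
print(f"estimated bus load: {total:.0f} MB/s")
print(f"fraction of 33 MHz bus peak: {total / PCI_33_32:.0%}")
print(f"fraction of 66 MHz bus peak: {total / PCI_66_32:.0%}")
```

Even with these modest assumptions the load is a large fraction of the 33 MHz bus peak, and real PCI buses deliver well under their theoretical maximum with mixed small transactions, so saturation there is plausible.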
I'm looking into ways to lower the inbound data rate by, for example,
shifting the common stop to the MWDC TDCs and reducing their accept
window. We might be able to shave a Mbit/s or more with some tuning...
A copy of this log entry has been emailed to: rom