[RADIATOR] Performance tuning

Jared Watkins JWatkins at acninc.com
Fri Feb 10 10:31:31 CST 2012


Replying to my own post, just so others might benefit. After wasting many hours trying to figure this out, I finally did this morning. My problem was that I was sending multiple streams of records from one IP to the same Radiator server, and it was discarding most of them because of overlapping sequence numbers. The network buffer *may* have been part of the original problem, but subsequent attempts to load historical data through the same server that was receiving live data were not working as expected.
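
For anyone who hits the same thing: the behaviour looks like Radiator's per-client duplicate detection, which treats requests arriving from the same source address with repeating identifiers as retransmissions and quietly drops them. Purely as a sketch (the address and secret below are made up, and you should check DupInterval against your own Radiator reference manual before relying on it), relaxing the duplicate window for the replay host would look something like this in radius.cfg:

    # Hypothetical Client clause for the host replaying CDR files.
    # DupInterval 0 turns off duplicate-request detection for this
    # client, so parallel streams that reuse the same identifiers are
    # not discarded as retransmissions. Address and secret are
    # placeholders, not values from my config.
    <Client 10.0.0.50>
            Secret          xxxxxxxx
            DupInterval     0
    </Client>

The safer option is probably to replay through a separate source address (and its own Client entry) rather than the one carrying live traffic, so the live side keeps its duplicate protection.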
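
And for the buffer side of it, these are the two places I touched per the note in the doc (the 12M figure is from my original mail below; the sysctl names assume a Linux host, so adjust for your OS):

    # radius.cfg: ask Radiator for a larger UDP receive queue
    # (value in octets; roughly the 12M mentioned below).
    SocketQueueLength 12582912

    # OS side -- on a Linux box the matching ceiling would be raised
    # with something like the following (other platforms differ):
    #   sysctl -w net.core.rmem_max=12582912
    #   sysctl -w net.core.rmem_default=12582912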




On Feb 9, 2012, at 11:07 AM, Jared Watkins wrote:

> Hello.. 
> 
> I'm attempting to load some old CDR accounting data into my dev environment through Radiator, and I'm seeing a problem with dropped records. I saw the note in the doc about increasing the SocketQueueLength and I've done that both in the Radiator config and in the OS, taking it from 128k to 12M (overkill?). But even with that, every time I reload the same day's data from disk I get more and more records into the DB for that day. It's not a big difference, maybe 200 more records out of 18k each time I resend the data. I'm using the unique record ID as the primary DB key to avoid duplicate entries.
> 
> I'm not seeing any errors or timeouts on the client side, which tells me the packets are being acknowledged as received, and I see a continuous stream of packets to and from the client. But when tailing a debug log from Radiator, where I should see one log entry for every record, I see periodic pauses in the log, sometimes of a few seconds.
> 
> Of course this isn't a normal level of activity; this is me blasting the server from CDR files stored on disk. Any insight on what else I might try here?
> 
> Thanks,
> J
> 
> _______________________________________________
> radiator mailing list
> radiator at open.com.au
> http://www.open.com.au/mailman/listinfo/radiator


