[RADIATOR] Trace level online changing

Hugh Irvine hugh at open.com.au
Fri Aug 6 05:47:02 CDT 2010


Hello Arthur -

Well, if you just use FarmSize on its own, the children take packets from the socket queue in round-robin fashion.
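
For reference, a minimal sketch of a farm configuration (the value 4 is just an illustration - you would normally match it roughly to the number of CPUs you want to use):

    # Global parameter: the parent forks this many children, which all
    # listen on the same sockets and take requests in turn
    FarmSize 4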

In your case I see you are only processing accounting requests, so I am guessing that with only accounting Starts and Stops to process, two of the child processes are handling the Starts and two are handling the Stops. Further, I am guessing that the overhead involved in processing an accounting Stop is much greater than for an accounting Start.

It is more usual to use a frontend/backend configuration, with some form of load balancing in the frontend distributing requests to the backends.

The frontend simply proxies the requests to the backends, where the processing occurs, using one of the "AuthBy *BALANCE" clauses (for example AuthBy HASHBALANCE or AuthBy LOADBALANCE).

You would set up multiple backends on different port numbers, so that all of your processors end up doing a similar amount of work.
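
For illustration only, such a setup might look something like the following sketch (hostnames, ports and secrets are placeholders, and the backend Handlers would contain your real accounting processing):

    # Frontend radiusd configuration (sketch)
    AuthPort 1812
    AcctPort 1813

    <Handler>
            <AuthBy HASHBALANCE>
                    # Two backend radiusd instances on this host,
                    # listening on different (example) ports
                    <Host 127.0.0.1>
                            Secret mysecret
                            AuthPort 11812
                            AcctPort 11813
                    </Host>
                    <Host 127.0.0.1>
                            Secret mysecret
                            AuthPort 21812
                            AcctPort 21813
                    </Host>
            </AuthBy>
    </Handler>

    # Backend radiusd configuration (sketch) - one copy per backend,
    # differing only in the ports it listens on
    AuthPort 11812
    AcctPort 11813

    <Handler>
            # ... your real accounting processing goes here ...
    </Handler>

Each backend is just another radiusd instance with its own configuration file, so you can run as many backends as you have spare CPUs.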

See section 5.44 in the Radiator 4.6 reference manual ("doc/ref.pdf") and the examples in "goodies/eapbalance.cfg", "goodies/hashbalance.cfg" and "goodies/proxyalgorithm.cfg".

regards

Hugh


On 6 Aug 2010, at 19:46, Arthur Konovalov wrote:

> OK,
> it's clear now. Thank You.
> 
>> The parent process does not handle any RADIUS requests itself, so although you have increased the debug level for the parent process, the children are still running at Trace 3.
>> 
>> 
> BTW, a question about farming: the CPU load behaviour is unclear to me. My
> Radiator is under heavy load (around 80% of the peak hour), but only a few
> CPUs are loaded, and one radiusd instance carries most of the load:
> 
> top - 12:36:56 up 94 days, 12:59,  1 user,  load average: 0.42, 0.41, 0.42
> Tasks: 184 total,   2 running, 182 sleeping,   0 stopped,   0 zombie
> Cpu0  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu2  :  1.3%us,  0.0%sy,  0.0%ni, 98.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu3  : 21.9%us,  0.7%sy,  0.0%ni, 76.5%id,  0.0%wa,  1.0%hi,  0.0%si,  0.0%st
> Cpu4  : 15.0%us,  0.3%sy,  0.0%ni, 84.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu7  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   6096408k total,  1834452k used,  4261956k free,   212452k buffers
> Swap:  3998392k total,        0k used,  3998392k free,  1363816k cached
> 
>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 10436 root      20   0  113m  34m 3516 R   37  0.6 373:18.71 radiusd
> 10435 root      20   0  113m  34m 3552 S    3  0.6  13:46.54 radiusd
> 10437 root      20   0 87460  26m 2744 S    0  0.4   0:00.02 radiusd
> 10438 root      20   0 87328  26m 2696 S    0  0.4   0:00.03 radiusd
> 10434 root      20   0 72804  23m 1100 S    0  0.4   0:00.09 radiusd
> 
> How can I spread the load evenly across the CPUs and the radiusd processes?
> 
> 
> br,
> Arthur
> 
> _______________________________________________
> radiator mailing list
> radiator at open.com.au
> http://www.open.com.au/mailman/listinfo/radiator



NB: 

Have you read the reference manual ("doc/ref.html")?
Have you searched the mailing list archive (www.open.com.au/archives/radiator)?
Have you had a quick look on Google (www.google.com)?
Have you included a copy of your configuration file (no secrets), 
together with a trace 4 debug showing what is happening?

-- 
Radiator: the most portable, flexible and configurable RADIUS server
anywhere. Available on *NIX, *BSD, Windows, MacOS X.
Includes support for reliable RADIUS transport (RadSec),
and DIAMETER translation agent.
-
Nets: internetwork inventory and management - graphical, extensible,
flexible with hardware, software, platform and database independence.
-
CATool: Private Certificate Authority for Unix and Unix-like systems.




