[RADIATOR] Proxy with Diameter [was Trace level online changing]

Arthur Konovalov kasjas at hot.ee
Fri Aug 6 07:56:23 CDT 2010


I have changed the thread name because I am interested in whether Diameter 
works in a Radiator proxy environment.
Say, one instance runs as a proxy with 'AuthBy *BALANCE' and the others run 
as child instances on different ports.
Without Farming.
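
For illustration, this is roughly the layout I mean. It is a minimal sketch 
only: the addresses, ports and secrets are made up, and the exact clause and 
parameter names should be checked against doc/ref.html:

    # Frontend instance: accepts requests and proxies them to the backends
    <Handler>
            <AuthBy BALANCE>
                    # Backend instance 1, listening on its own ports
                    <Host 127.0.0.1>
                            AuthPort        11645
                            AcctPort        11646
                            Secret          mysecret
                    </Host>
                    # Backend instance 2
                    <Host 127.0.0.1>
                            AuthPort        21645
                            AcctPort        21646
                            Secret          mysecret
                    </Host>
            </AuthBy>
    </Handler>

The question is whether the same frontend/backend split can carry Diameter 
traffic as well.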

br,
Arthur


Heikki Vatiainen wrote:
> On 08/06/2010 02:12 PM, Arthur Konovalov wrote:
>
>> Will the proxy work with the DIAMETER protocol too?
>
> Server farming was mentioned in this thread, so I thought I would say
> something about Diameter and farming. From experience I can tell that
> Diameter works with a server farm. Since Diameter runs over TCP, an
> incoming TCP connection from a peer is accepted by one child, and this
> child then handles all Diameter messages that arrive over that connection.
>
> In other words, the server farm works with Diameter, but it does not
> distribute the messages the way it distributes RADIUS over UDP.
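
For reference, this is roughly how I understand a farmed Diameter listener 
would be configured. It is a minimal sketch only: the FarmSize value and the 
<ServerDIAMETER> parameters below are from memory and should be checked 
against doc/ref.html:

    # Global parameter: fork a farm of worker children that share the
    # listen sockets
    FarmSize        4

    # Diameter listener; each accepted TCP connection is handled by the
    # one child that accepted it, so messages on that connection are not
    # spread across the farm
    <ServerDIAMETER>
            Port            3868
            OriginHost      radiator.example.com
            OriginRealm     example.com
    </ServerDIAMETER>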
>
>> br,
>> Arthur
>>
>>
>>> The frontend simply proxies the requests with an "AuthBy *BALANCE" to the backends where the processing occurs.
>>>
>>> You would set up multiple backends on different port numbers such that all of your processors are working in a similar fashion.
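
For completeness, a backend instance in that layout would just listen on its 
own, non-standard ports and do the real processing. This is a minimal sketch 
with made-up port numbers, and an AuthBy FILE standing in for whatever the 
backends actually do:

    # Backend instance: its own ports, the real authentication work
    AuthPort        11645
    AcctPort        11646

    <Handler>
            <AuthBy FILE>
                    # Flat users file in the database directory
                    Filename        %D/users
            </AuthBy>
    </Handler>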
>>>
>>>
>>>> OK,
>>>> it's clear now. Thank You.
>>>>
>>>>> The parent process does not handle any RADIUS requests itself, so although you have increased the debug level for the parent process, the children are still running at Trace 3.
>>>>>
>>>>>
>>>> BTW, about farming: the CPU load aspect is unclear to me. My Radiator
>>>> carries a heavy load (at about 80% of the peak-hour level), but only a
>>>> few CPUs are loaded and one radiusd instance takes most of the load:
>>>>
>>>> top - 12:36:56 up 94 days, 12:59,  1 user,  load average: 0.42, 0.41, 0.42
>>>> Tasks: 184 total,   2 running, 182 sleeping,   0 stopped,   0 zombie
>>>> Cpu0  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Cpu2  :  1.3%us,  0.0%sy,  0.0%ni, 98.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Cpu3  : 21.9%us,  0.7%sy,  0.0%ni, 76.5%id,  0.0%wa,  1.0%hi,  0.0%si,  0.0%st
>>>> Cpu4  : 15.0%us,  0.3%sy,  0.0%ni, 84.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Cpu7  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>>> Mem:   6096408k total,  1834452k used,  4261956k free,   212452k buffers
>>>> Swap:  3998392k total,        0k used,  3998392k free,  1363816k cached
>>>>
>>>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>>> 10436 root      20   0  113m  34m 3516 R   37  0.6 373:18.71 radiusd
>>>> 10435 root      20   0  113m  34m 3552 S    3  0.6  13:46.54 radiusd
>>>> 10437 root      20   0 87460  26m 2744 S    0  0.4   0:00.02 radiusd
>>>> 10438 root      20   0 87328  26m 2696 S    0  0.4   0:00.03 radiusd
>>>> 10434 root      20   0 72804  23m 1100 S    0  0.4   0:00.09 radiusd
>>>>
>>>> How can I achieve an even spread of load across the CPUs and radiusd processes?
>>>>
>>>>
>>>> br,
>>>> Arthur
>>>>
>>>
>>> NB: 
>>>
>>> Have you read the reference manual ("doc/ref.html")?
>>> Have you searched the mailing list archive (www.open.com.au/archives/radiator)?
>>> Have you had a quick look on Google (www.google.com)?
>>> Have you included a copy of your configuration file (no secrets), 
>>> together with a trace 4 debug showing what is happening?
>>>
>
>


