(RADIATOR) Port limit accuracy

David Miller dmiller at newportnet.com
Thu Jan 2 18:11:00 CST 2003


Marius:

At 11:34 PM 1/2/03 +0100, you wrote:
>First of all, thanks for your feedback, David
>
>Some questions:
>-how often do you do the backend replication? How often can the session
>database be replicated? The point is: let's suppose that at replication time x
>there are 100 ports in use (out of 200), and close to replication x+1 there
>are 200 ports in use, but a failure happens before the replication, so the
>RADIUS server will check the secondary database and will still allow 100
>extra connections!

The replication is controlled by the MySQL replication code. For all intents 
and purposes it is real time. Each master keeps a journal (the binary log), 
and each slave keeps track of where it is in its master's journal. After any 
loss of connectivity, the slave starts where it left off and brings itself 
up to date. We run a circular master/slave - slave/master configuration, so 
any changes applied to either database while connectivity is down are applied 
when connectivity is re-established. This has shown itself to work flawlessly 
with no intervention necessary.
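
In case it helps, a two-way setup like this boils down to my.cnf entries 
along the following lines. The host names (dbA/dbB), database name and 
credentials are placeholders, not our real ones, and the slave side can 
also be pointed at its master with CHANGE MASTER TO instead of my.cnf 
options:

    # --- my.cnf on dbA ---
    [mysqld]
    server-id       = 1
    log-bin                          # keep the binary log (the "journal")
    binlog-do-db    = radius         # only journal the session database
    # slave side: follow dbB's journal
    master-host     = dbB.example.com
    master-user     = repl
    master-password = replsecret
    replicate-do-db = radius

    # --- my.cnf on dbB ---
    [mysqld]
    server-id       = 2
    log-bin
    binlog-do-db    = radius
    # slave side: follow dbA's journal
    master-host     = dbA.example.com
    master-user     = repl
    master-password = replsecret
    replicate-do-db = radius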


>-does the script identify which database is primary or secondary (basically
>the one that is leading)? By the way, on which machine is it running?

The script does not know anything about the database replication as such. 
It simply queries the session database it's pointed at (in my case the 
"secondary") and verifies that each session is still active (via SNMP; 
other methods are available). If it finds a session that is no longer 
active, it deletes it. The database replication code then takes care of 
making the same change in the "master". The script runs as a cron job on 
the secondary database server.
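
The logic of the clean-up is roughly the following, sketched here in Python 
rather than pasted from the real script. The SNMP OID, community string and 
database credentials are placeholders, and the table/column names assume 
the stock RADONLINE session table layout:

    #!/usr/bin/env python
    # Sketch of the session clean-up run from cron: walk the session
    # database, ask each NAS over SNMP whether the recorded port is still
    # in use by that user, and delete any row that turns out to be stale.

    import subprocess
    import MySQLdb  # Python MySQL driver

    # Vendor-specific OID that returns the user name on a given port.
    # This value is a placeholder, not a real OID.
    PORT_USER_OID = "1.3.6.1.4.1.99999.1.1.%d"

    db = MySQLdb.connect(host="localhost", user="radius",
                         passwd="secret", db="radius")
    cur = db.cursor()
    cur.execute("SELECT NASIDENTIFIER, NASPORT, USERNAME FROM RADONLINE")

    for nas, port, user in cur.fetchall():
        snmp = subprocess.run(
            ["snmpget", "-v1", "-c", "public", "-Ovq", nas,
             PORT_USER_OID % port],
            capture_output=True, text=True)
        if snmp.returncode != 0:
            continue                  # NAS unreachable: leave the row alone
        if snmp.stdout.strip().strip('"') != user:
            # Port is idle or carries someone else: the session is stale.
            # Replication propagates this delete to the other database.
            cur.execute("DELETE FROM RADONLINE"
                        " WHERE NASIDENTIFIER=%s AND NASPORT=%s",
                        (nas, port))

    db.commit()
    db.close()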

>-if I got it correctly, the below-mentioned cron is basically doing what
>Simultaneous-Use does when the port limit is almost reached (checking
>whether the items shown in the session database are really set up on the
>NAS or not). The only difference is that this task is performed every 5
>minutes in your case. If this is the case, does this regular check have any
>impact on a network with 100,000 ports (SNMP all over) and above?

This is not correct. We use the standard <SessionDatabase SQL> declaration 
in radius.cfg with a primary and a secondary defined. Authentication is from 
a flat file (<AuthBy FILE>), for robustness, with a "DefaultSimultaneousUse" 
of 1. Radiator makes all simultaneous-use checks against the session 
database.
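
For reference, the relevant radius.cfg clauses come down to something like 
this; the DSNs, passwords and file name are placeholders, and the second 
DBSource/DBUsername/DBAuth set in the SessionDatabase clause is what gives 
the primary/secondary failover:

    <SessionDatabase SQL>
            # primary session database
            DBSource        dbi:mysql:radius:dbA.example.com
            DBUsername      radius
            DBAuth          secret
            # secondary, used if the primary cannot be reached
            DBSource        dbi:mysql:radius:dbB.example.com
            DBUsername      radius
            DBAuth          secret
    </SessionDatabase>

    <Realm DEFAULT>
            <AuthBy FILE>
                    # flat users file; %D expands to DbDir
                    Filename %D/users
                    # one session per user unless the users file
                    # says otherwise
                    DefaultSimultaneousUse 1
            </AuthBy>
    </Realm>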

The cron script is just there to correct the session database in the event 
of lost accounting Stop packets from the NAS. As Hugh has pointed out, the 
RADIUS protocol uses UDP and is therefore susceptible to lost packets. As 
Hugh also pointed out in his post, Radiator is very self-healing in that 
it performs a delete on the NAS/NASPORT combination before doing an insert. 
The situation will eventually correct itself when the NAS/NASPORT 
combination is re-used, but in the meantime the user is "locked out" because 
there is a stale entry in the session database. The cron script will detect 
and correct this situation within a few minutes. I have tried running the 
script more frequently, but the SNMP queries are slow and take a while to 
complete. Five minutes seems to be a good compromise between overhead and 
how long a user may find themselves "locked out" of the system.
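
The cron entry itself is nothing special, just something along these lines 
on the secondary database server (script and log paths are placeholders):

    # run the session clean-up every five minutes
    */5 * * * * /usr/local/bin/clean_sessions.py >> /var/log/clean_sessions.log 2>&1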


Regards,
Dave


