(RADIATOR) Duplicate detection doesn't use source port
Jonathan Kinred
jonathan.kinred at dot.com.au
Mon Sep 5 20:43:05 CDT 2005
Hi,
During testing we discovered that the source port of a request is not
taken into account for duplicate detection. Because the RADIUS Identifier
field is a single octet (256 possible values), with the default
DupInterval of 2 seconds this means that Radiator will start ignoring
packets as duplicates when a NAS sends more than ~128 requests per
second. We are using a Cisco SSG and want to turn on extended source
ports, so we're definitely going to run into the excessive memory
problem below.
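The arithmetic behind that ~128/s figure can be sketched as follows (illustrative Python, using the post's DupInterval value; RFC 2865 defines the Identifier as one octet):

```python
# RFC 2865: the Identifier field is a single octet, so a client has at
# most 256 distinct Identifier values available.
IDENTIFIER_SPACE = 2 ** 8   # 256
DUP_INTERVAL = 2            # seconds, Radiator's default DupInterval

# If duplicate detection keys only on (client address, Identifier),
# any Identifier reuse within DupInterval looks like a retransmission.
# A NAS therefore wraps its Identifier space, and starts getting its
# packets dropped as dupes, above this sustained request rate:
max_rate = IDENTIFIER_SPACE / DUP_INTERVAL
print(max_rate)  # 128.0 requests per second
```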
I'm wondering if there is a more elegant way for Radiator to handle this
than breaking the RFC by ignoring the source port. I'm thinking along
the lines of having requests purged from the RecentIdentifiers hash once
they exceed DupInterval. That sounds like an expensive operation, but as
I see it our only options for these NASes are to enable
NoIgnoreDuplicates or live with the ~128-per-second limit.
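The purge idea above can be sketched like this. This is illustrative Python, not Radiator's actual Perl implementation; the names RecentIdentifiers and DupInterval just mirror the post's terminology, and is_duplicate is a hypothetical helper:

```python
import time

DUP_INTERVAL = 2.0  # seconds, mirroring Radiator's default

# Key on (client address, source port, identifier) as RFC 2865 requires,
# and purge stale entries so the hash cannot grow without bound even when
# a NAS uses a fresh source port for every request.
recent_identifiers = {}  # (addr, port, identifier) -> arrival timestamp

def is_duplicate(addr, port, identifier, now=None):
    now = time.time() if now is None else now
    # Purge entries older than DUP_INTERVAL before checking.
    stale = [k for k, t in recent_identifiers.items() if now - t > DUP_INTERVAL]
    for k in stale:
        del recent_identifiers[k]
    key = (addr, port, identifier)
    if key in recent_identifiers:
        return True  # seen within DupInterval: treat as a retransmission
    recent_identifiers[key] = now
    return False
```

Purging by scanning the whole hash on every lookup is O(n); a production version would keep arrival times in an ordered structure (e.g. a deque) so expired entries can be dropped from the front in amortized constant time.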
I'd also like to hear if someone has a better solution using the current
Radiator versions.
--
Jonathan Kinred
System Administrator
Dot Communications
Revision 3.6 (2003-04-14: Significant improvements to wireless support)
Client duplicate detection now ignores the source port, due to some
clients (notably Cisco APs) using a different port for every request,
resulting in excessive memory usage.
Revision 2.16.2 (21/8/00) Minor fixes
Duplicate checking now takes the client port into account, as required
by RFC 2865.
--
Archive at http://www.open.com.au/archives/radiator/
Announcements on radiator-announce at open.com.au
To unsubscribe, email 'majordomo at open.com.au' with
'unsubscribe radiator' in the body of the message.