Some random non-scientific Avahi "scaling" figures [Feb. 27th, 2007|10:12 pm]
Trent 'Lathiat' Lloyd (トレント)
[Current Location |Home]
[Current Mood |accomplished]

Talking to sjoerd and others on IRC (for the benefit of the OLPC project), I decided to try to get some idea of the amount of traffic Avahi generates on a large network.

I booted up 80 UMLs (User-Mode Linux instances), running 2.6.20.2, on my AMD Athlon64 X2 4200+ (overclocked to 2.5GHz per core) with 2GB of RAM.

Each VM had 16MB of RAM and ran a base Debian Etch install with Avahi 0.6.16.

Interestingly with 80 VMs running my memory usage looked like this:
Mem: 2076124k total, 2012064k used, 64060k free, 18436k buffers
Swap: 996020k total, 8k used, 996012k free, 1476504k cached
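The 80 VMs fit more comfortably than the "used" figure suggests, because most of that is reclaimable page cache. A quick sanity check on the numbers above (plain arithmetic, nothing assumed beyond the top output):

```python
# Rough "real" memory footprint behind the top summary above: most of
# "used" is page cache and buffers, not process memory.
total_k, used_k = 2_076_124, 2_012_064
buffers_k, cached_k = 18_436, 1_476_504

real_used_k = used_k - buffers_k - cached_k
print(f"~{real_used_k // 1024} MB genuinely used")          # ~505 MB
print(f"vs 80 VMs x 16 MB = {80 * 16} MB of nominal guest RAM")
```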



I configured a 'UML Switch' with an attached tap device on the host (tun1) and told each VM to come up and use avahi-autoipd to obtain a link-local IP.

I had each VM set to advertise 3 services via static service advertisement files:

  • _olpc_presence._tcp
  • _activity._tcp (subtype _RSSActivity._sub._activity._tcp)
  • _activity._tcp (subtype _WebActivity._sub._activity._tcp)

plus it was configured with Avahi defaults, so it would also announce a workstation service (the default 'ssh' service was, however, NOT present) and the magic services that indicate what kinds of services are being announced.
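For reference, one of those static service files (dropped into /etc/avahi/services/) might look something like this; the type and subtype come from the list above, while the service name and port number are my own illustrative guesses:

```xml
<?xml version="1.0" standalone="no"?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the hostname; the name and port here are illustrative -->
  <name replace-wildcards="yes">%h WebActivity</name>
  <service>
    <type>_activity._tcp</type>
    <subtype>_WebActivity._sub._activity._tcp</subtype>
    <port>5298</port>
  </service>
</service-group>
```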

So I started Wireshark and iptraf and began booting the 80 VMs at a pace of one every 10 seconds. After roughly 10-15 minutes, the following packet counts were seen on the host tun1 interface:

704 UDP (56.3%)
390 ARP (31.2%)
156 OTHER (12.5%)
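As a sanity check, the shares follow directly from the raw counts (704 + 390 + 156 = 1250 packets total); note the ARP share works out to 31.2%:

```python
# Recompute the protocol shares from the raw packet counts above.
counts = {"UDP": 704, "ARP": 390, "OTHER": 156}
total = sum(counts.values())  # 1250 packets

for proto, n in counts.items():
    print(f"{proto}: {n} ({100 * n / total:.1f}%)")
# → UDP: 704 (56.3%), ARP: 390 (31.2%), OTHER: 156 (12.5%)
```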


The ARPs come from avahi-autoipd, and the UDP packets are avahi-daemon speaking mDNS. iptraf reported:

Incoming Bytes: 417,391

I then gave my local machine an IP, which bumped the packet counts to 712 UDP, 395 ARP and 157 other.

I then started 'avahi-browse _activity._tcp', which resulted in 2 services from each machine being returned. Once that had settled, the packet counts were:

935 UDP
Incoming Bytes: 496,901
Outgoing Bytes: 28,787 (30 packets according to iptraf)
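The incremental cost of that single browse can be read off the before/after counters (the 712 UDP / 417,391-byte baseline is from just before the browse):

```python
# Marginal cost of one avahi-browse across 80 hosts, from the counters above.
udp_before, udp_after = 712, 935
in_before, in_after = 417_391, 496_901
hosts = 80

udp_delta = udp_after - udp_before   # UDP packets attributable to the browse
in_delta = in_after - in_before      # incoming bytes attributable to it
print(f"{udp_delta} UDP packets, {in_delta} bytes in")
print(f"~{udp_delta / hosts:.1f} packets / ~{in_delta // hosts} bytes per host")
```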


Now this *really* gave my machine a heart attack: many of the 'linux' (UML) processes were each eating as much CPU as possible (around 20%), and it took a good 10+ seconds for my machine to start responding again. I suspect that if I were running the SKAS3 patch it might be a little less harsh.

After cancelling that, I ran 'avahi-browse -r _activity._tcp', which causes Avahi to resolve each of the services. Following that run:

1287 UDP
Incoming Bytes: 570,000 (1,384 packets)
Outgoing Bytes: 185,000 (227 packets)
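Taking deltas against the post-browse counters gives the incremental cost of the resolve pass (the byte totals above are rounded, so these are approximate):

```python
# Incremental traffic for the avahi-browse -r (resolve) pass,
# from the (rounded) counters quoted above.
udp_delta = 1287 - 935          # extra UDP packets
in_delta = 570_000 - 496_901    # extra incoming bytes (~73 kB)
out_delta = 185_000 - 28_787    # extra outgoing bytes (~156 kB)
print(udp_delta, in_delta, out_delta)
```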


In this case most of the services were cached and I just had to resolve each one.

I forgot to watch the traffic rates, so I re-ran the above test; iptraf claimed a peak of 165 kbit/second for one 5-second interval. During this I noticed a bunch of the service resolution queries timed out, which I suspect has to do with my machine locking up hard for a bit while it does its magic... ;)
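For scale, that peak is fairly modest in byte terms (plain unit conversion, nothing more):

```python
# Convert the iptraf peak of 165 kbit/s over one 5-second interval to bytes.
peak_bits_per_s = 165 * 1000
bytes_per_s = peak_bits_per_s // 8     # 20,625 B/s
interval_bytes = bytes_per_s * 5       # ~103 kB across the 5 s window
print(bytes_per_s, interval_bytes)
```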

So that's the end of my very simple, basic run: some real (rather than theoretical) measurements of the number of packets flying around a network of 80 Avahi hosts advertising a few services each, and of the impact of someone running a browse/resolve on a popular service type.

I'm going to try to commandeer some more hardware to run some faster tests and collect some more useful data.

Comments:
From: (Anonymous)
2007-02-27 02:27 pm (UTC)

Is it good or is it bad?

You've forgotten to actually make an interpretation of the data. Is this good, or is this bad?
From: (Anonymous)
2007-02-27 02:33 pm (UTC)

Re: Is it good or is it bad?

also, is it time to do some optimization work here?