Some random non-scientific Avahi "scaling" figures

Talking to sjoerd and others on IRC (for the benefit of the OLPC project), I decided to try to get some idea of how much traffic Avahi generates on a large network.

I booted up 80 UML (User-Mode Linux) instances running kernel 2.6.20.2 on my AMD Athlon64 X2 4200+ (overclocked to 2.5GHz per core) with 2GB of RAM.

Each was running with 16MB of RAM and a base Debian Etch install with Avahi 0.6.16.

Interestingly, with 80 VMs running, my memory usage looked like this:
Mem: 2076124k total, 2012064k used, 64060k free, 18436k buffers
Swap: 996020k total, 8k used, 996012k free, 1476504k cached



I configured a UML switch with a tap device (tun1) attached on the host, and told each VM to come up and use avahi-autoipd to obtain a link-local IP.
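
For the curious, the host-side plumbing looks roughly like this (the image names, umid values and control socket path here are illustrative, not the exact invocation I used):

  # create a tap device on the host and attach a uml_switch to it
  tunctl -t tun1
  ifconfig tun1 up
  uml_switch -tap tun1 &     # control socket defaults to /tmp/uml.ctl

  # boot one guest attached to the switch (repeated with different umids)
  ./linux umid=vm01 mem=16M ubd0=vm01.cow,etch.img \
      eth0=daemon,,unix,/tmp/uml.ctl con=null

  # inside each guest, claim a link-local (169.254/16) address
  avahi-autoipd --daemonize eth0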

I had each VM set to advertise three services via the static service advertisement files:

  • _olpc_presence._tcp
  • _activity._tcp (subtype _RSSActivity._sub._activity._tcp)
  • _activity._tcp (subtype _WebActivity._sub._activity._tcp)

It was otherwise configured with Avahi defaults, so each VM also announced a workstation service (the default 'ssh' service was, however, NOT present) and the magic services that indicate which service types are being announced.
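
The static service files live in /etc/avahi/services/ inside each VM; a sketch of one of them looks like the following (the service name and port are placeholders, and the RSS and presence files were much the same):

  cat > /etc/avahi/services/web-activity.service <<'EOF'
  <?xml version="1.0" standalone='no'?>
  <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
  <service-group>
    <!-- %h expands to the hostname, so each VM advertises a unique name -->
    <name replace-wildcards="yes">Web Activity on %h</name>
    <service>
      <type>_activity._tcp</type>
      <subtype>_WebActivity._sub._activity._tcp</subtype>
      <port>1234</port> <!-- placeholder port -->
    </service>
  </service-group>
  EOF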

So I started Wireshark and iptraf and began booting the 80 VMs at a pace of one every 10 seconds. After roughly 10-15 minutes, the following packet counts had been seen on the host tun1 interface:

704 UDP (56.3%)
390 ARP (31.2%)
156 other (12.5%)


The ARPs come from avahi-autoipd and the UDP packets are avahi-daemon speaking mDNS. iptraf reported:

Incoming Bytes: 417,391
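
If you want to eyeball the same breakdown yourself, mDNS is just multicast UDP on port 5353 and the avahi-autoipd probes are plain ARP, so something like this on the host will split them out:

  # mDNS traffic from the avahi-daemons (multicast 224.0.0.251, UDP port 5353)
  tcpdump -ni tun1 udp port 5353

  # ARP probes/announcements from avahi-autoipd claiming 169.254/16 addresses
  tcpdump -ni tun1 arp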

I then gave my local machine an IP address, which bumped the packet counts to 712 UDP, 395 ARP and 157 other.

I then started 'avahi-browse _activity._tcp', which results in two services being returned from each machine. After things settled down, the packet counts stood at:

935 UDP
Incoming Bytes: 496,901
Outgoing Bytes: 28,787 (30 packets according to iptraf)


Now this *really* gave my machine a heart attack: as many of the UML 'linux' processes as possible were eating 20% CPU, and it took a good 10+ seconds for my machine to start responding again. I suspect that if I were running the SKAS3 patch it might be a little less harsh.

After cancelling that, I ran 'avahi-browse -r _activity._tcp', which causes Avahi to resolve each of the services. Following that run:

1287 UDP
Incoming Bytes: 570,000 (1384 packets)
Outgoing Bytes: 185,000 (227 packets)


In this case most of the service records were already cached, so Avahi just had to resolve each one.
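
For reference, the resolved output looks something like this for each service (the name, address and port here are made up):

  $ avahi-browse -r -t _activity._tcp
  + eth0 IPv4 Web Activity on vm17 _activity._tcp local
  = eth0 IPv4 Web Activity on vm17 _activity._tcp local
     hostname = [vm17.local]
     address = [169.254.7.42]
     port = [1234]
     txt = []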

I forgot to watch the traffic rates, so I re-ran the above test; iptraf claimed a peak of 165kbit/s over one 5-second interval. During this run I noticed a bunch of the service resolution queries timed out; I suspect this has to do with my machine locking up hard for a bit while it does its magic... ;)

So that's the end of my very simple run of some real (rather than theoretical) tests: the number of packets seen flying around with 80 Avahi hosts on a network advertising a few services each, and the impact of someone running a browse/resolve on a popular service type.

I'm going to try to commandeer some more hardware to run some faster tests and collect some more useful data.
Tags: avahi, olpc, xo