Optimising Wi-Fi performance

We take a look at the approach adopted by a research project to deal with Wi-Fi performance problems.

Text: Kurt Baumann, published on 02.03.2016

IT departments are used to hearing university members complain about poor wireless performance because the websites they want to look at take ages to load, but why exactly does this happen? Are end-user devices not set up correctly? Are sudden bottlenecks happening because there is not enough bandwidth? Could there be other reasons?

Pinpointing the problems and offering a quick solution is anything but easy. Hardly any long-term data and histories are available for recording and assessing Wi-Fi performance problems on campus, which makes it very difficult to reconstruct problems after the fact.

With this in mind, SWITCH has taken the lead in a GÉANT task to develop a wireless crowdsourced performance monitoring and verification (WCSPMV) concept that can be used to track performance and the causes of problems affecting it with the aid of end-user feedback. The focus here is on non-invasive bandwidth tests on end-user devices. We gained some initial experience at the TERENA Network Conference 2015, where we presented the concept live. Further implementations and concept improvements are currently under way at a number of universities.

Concept components:

Key indicators for WCSPMV are the end users – i.e. mobile clients as traffic generators – and the defined data collectors gathering data from Wi-Fi controllers and DHCP/RADIUS log files as well as access point identifiers. The data needed to measure network performance (bandwidth and latency) are collected by small JavaScript snippets embedded in selected websites and then sent to an analytics engine. The architecture diagram below (see figure) shows the main components and mechanisms needed to build up a meaningful picture of how the network is performing.

How it works:

A mobile client (see figure) connects to the nearest access point (AP). It authenticates and authorises itself and receives an IP address from the DHCP server. This creates DHCP, system and/or RADIUS log files (see "Data Sources"), which make it possible to match the client MAC address to the client IP address and to record the access point identifier (AP-ID) and time stamp of the successful connection to the campus wireless network. These data are then entered into a relational database (RDB) and analysed, or prepared for visualisation, by an analytics engine (AE).
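The matching step described above can be sketched as a small log parser. The log line format and field names below are hypothetical illustrations; real DHCP and RADIUS logs vary by vendor and would each need their own parsing pattern:

```javascript
// Sketch: extract (timestamp, client IP, client MAC, AP-ID) from a
// DHCP-style log line. The format shown is a hypothetical example;
// actual dhcpd/RADIUS log layouts differ per vendor.
function parseLeaseLine(line) {
  // e.g. "Mar 02 10:15:01 dhcpd: DHCPACK on 10.0.0.5 to aa:bb:cc:dd:ee:ff via AP-17"
  const m = line.match(/^(\w{3} \d{2} [\d:]{8}) .*DHCPACK on (\S+) to (\S+) via (\S+)/);
  if (!m) return null; // line is not a lease acknowledgement
  return { ts: m[1], ip: m[2], mac: m[3], apId: m[4] };
}
```

Each parsed record gives one (time stamp, IP, MAC, AP-ID) tuple of the kind that is stored in the RDB.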

JavaScript snippets run in the end user's browser and deliver a series of network performance data: relative bandwidth (upload and download speeds of defined test images) and latency (pings measuring the round-trip time, RTT).
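The relative bandwidth figure follows from the size of the test image and the elapsed transfer time. The helpers below are a minimal sketch of that arithmetic; in the browser, the elapsed time would be measured around the download or upload of the defined image (for example via the Resource Timing API):

```javascript
// Relative bandwidth in KB/s, given the number of bytes transferred and
// the elapsed wall-clock time in milliseconds. In the browser, `ms`
// would be measured around the download/upload of a defined test image.
function throughputKBps(bytes, ms) {
  return (bytes / 1024) / (ms / 1000);
}

// Mean round-trip time (ms) from a series of ping-style samples.
function meanRtt(samples) {
  return samples.reduce((sum, rtt) => sum + rtt, 0) / samples.length;
}
```

For instance, a 1-megabyte image transferred in two seconds corresponds to 512 KB/s.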

We now have all the information needed to correlate the data source (log files), the time (time stamp) and the AP identifier (AP-ID) with the network performance data delivered by the JavaScript snippets.
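Put together, this correlation amounts to a join between the measurement records and the log-derived association records. The sketch below assumes simplified record shapes (numeric time stamps, pre-parsed lease entries) purely for illustration:

```javascript
// Attach the MAC address and AP-ID to each performance measurement by
// joining on client IP and picking the most recent lease/association
// record that is not newer than the measurement's own timestamp.
function correlate(measurements, leases) {
  return measurements.map((m) => {
    const match = leases
      .filter((l) => l.ip === m.clientIp && l.ts <= m.ts)
      .sort((a, b) => b.ts - a.ts)[0]; // newest matching lease first
    return { ...m, mac: match ? match.mac : null, apId: match ? match.apId : null };
  });
}
```

Picking the most recent lease per IP matters because the same address may be reassigned, or the client may roam between APs, over the course of a day.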

These data can now be called up as needed from the RDB and the AE via a graphical user interface (GUI) to produce reports for the wireless network operator. Attention must be paid here to data protection (see box). These are, after all, personal data that provide information on the end user's behaviour.

First tests and results:

We presented the concept described above for the first time at the TERENA Network Conference (TNC2015) in Porto, Portugal and tested it live. We deliberately chose a relatively large conference because the accuracy of measurement depends on the number of Wi-Fi users (crowdsourcing).

The JavaScript snippets were embedded in the TNC2015 main and sub-websites. The NetTest server ran on a virtual instance in Athens, Greece. Only performance measurements from selected IP subnets were allowed; all others were blocked. We also prevented the duplication of measurements by limiting the life of cookies to one hour.
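Both restrictions can be sketched with two small checks. The CIDR-style subnet filter and the one-hour window come from the setup described above; the function names themselves are illustrative:

```javascript
// Allow-list check: is an IPv4 address inside a given CIDR subnet?
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}
function inSubnet(ip, cidr) {
  const [net, prefix] = cidr.split('/');
  const mask = prefix === '0' ? 0 : (0xffffffff << (32 - Number(prefix))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
}

// Deduplication: measure again only if the previous measurement
// (its timestamp stored in a cookie) is more than one hour old.
const ONE_HOUR_MS = 60 * 60 * 1000;
function shouldMeasure(lastMeasuredMs, nowMs) {
  return lastMeasuredMs === null || nowMs - lastMeasuredMs > ONE_HOUR_MS;
}
```

In the live setup, the timestamp would be written to a cookie after each measurement, so repeated page loads within the hour do not distort the sample.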

During the conference, we were able to carry out initial rough analyses of the network bandwidth and latency data collected and trace the crowding effect that happens when lots of conference participants are in the same place. We took over 1,700 performance measurements at TNC2015, sorted them into complete data sets and correlated them.


The chart shows the download and upload speeds in the large conference hall where the plenary sessions and presentations were held. The hall was equipped with several APs. At first glance the results showed no clear patterns, but we were able to separate them by access point: the RADIUS logs told us which IP address was in use at which access point, and when. We recorded an average download speed of 662.9 KB/s and an average upload speed of 406.5 KB/s, with the individual measurements fluctuating considerably around these averages. As expected, we saw that the distance to the AP affects the quality of the measured data. It also became clear that the 1-megabyte test image was too small, and that larger images (e.g. 2-5 megabytes) would not compromise the available bandwidth.


The latency chart paints a somewhat clearer picture. Our test showed three things:

  1. Latency between the conference location in Porto and the NetTest server in Athens was 40 milliseconds.
  2. Clustering was in the 20-30 millisecond range, which shows a pretty healthy network.
  3. Anomalies were caused by various factors: incorrectly configured end-user devices, wireless network configuration, crowding, distance to AP etc.


Our first test confirmed our assumption that it is possible to collect information on Wi-Fi performance using non-invasive bandwidth tests on end-user devices. Several measures contributed to the quality of the results:

  • We limited the performance measurements to defined IP subnets.
  • We prevented the duplication of measurements with the aid of cookies.
  • We recorded the browser’s user-agent string, which allowed us to break our results down by browser, platform and mobile/desktop. We also extracted geolocation information from the browser.
  • We compared objective performance measurements from hardware samples with those from end-user devices.

Further test implementations are planned or already running at various locations, including Dublin City University (DCU) and a small Internet service provider.

Improvements and innovations are being incorporated into the concept in terms of measurement data verification, automation of data collection and processing, drawing up a suitable RDB/AE concept with appropriate software and the GUI as a front end for network administrators.

The work presented here was done as part of the GÉANT Project, GN4-1, Task GN4-1-SA3T3 under Grant Agreement No. 691567.
About the author
Kurt Baumann

Kurt Baumann gained a Master's in mathematics from the University of Zurich in 2001. After working for IBM, he joined SWITCH in 2005. He is a member of the Network Team and represents SWITCH’s interests in GÉANT.


Questions on protection of personal data

Geolocation information allows us to track the behaviour of mobile clients very precisely with our wireless crowdsourced performance monitoring approach. This means that end-user profiling is possible. We held an initial discussion on the protection of end users' data as part of eduPERT.

Test volunteers needed

A first draft of the implementation guide is now available. We need volunteers to take part in testing so that we can optimise the concept, and we would be delighted if members of the SWITCH community could sign up.

If you are interested, please contact Kurt Baumann