
August 29 2017


GSoC: Improving nodewatcher data representation capability (final report)


This summer was really exciting because it was my first Google Summer of Code! I worked for Freifunk, a non-commercial initiative for free wireless networks, and I would like to thank them for the opportunity to expand my programming skills and for a great summer. I am excited that I got to use Git on a serious multi-developer project and even created my first pull request. I developed a strong affection for Docker, which I now use for personal projects as well. After 5 years of programming in Python, I am happy to add Django to my programming skill set.


My work was split into two main parts: one was working on nodewatcher, a mesh networking tool; the other was updating a long-dead repository of the wlan slovenija webpage. My work can be seen in the following two pull requests:

IP space module

This was my first task. I had to create a module for nodewatcher that would draw a map of all the networks, similar to this. The module was needed because some huge networks had no idea which IP space was used and which was not; this map gives a better idea of how much space is left and how it is structured. I started off slowly, as I had very little experience with Django; it took me about a week just to find where all the modules were defined and how to implement a new one. Once I actually got to program in HTML/JavaScript, my work progressed very quickly, and I was basically done with it around mid-summer. While working I noticed that, compared to the small nodes, the network is huge, so you could barely see the nodes. This is why I also added the ability to zoom to top-level nodes, which adds extra clarity to how the nodes are distributed. The positioning is calculated using the Hilbert curve, which is a way to map 1D numbers to 2D space; it works great for numbers that are powers of 2. The drawing is done using d3.js, which is good at handling huge amounts of data and displaying it as SVG, which enabled me to implement zoom with ease.
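As a sketch of the positioning technique (not the actual nodewatcher code, which does this in JavaScript), the standard Hilbert curve index-to-coordinate mapping can be written like this:

```python
def d2xy(n, d):
    """Map a 1-D index d (0 <= d < n*n) to (x, y) on an n x n Hilbert curve.

    n must be a power of 2; nearby indices map to nearby cells, which is
    why the curve keeps adjacent IP ranges visually close on the map.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The first four indices trace the lowest-order curve:
# d2xy(2, 0) -> (0, 0), d2xy(2, 1) -> (0, 1),
# d2xy(2, 2) -> (1, 1), d2xy(2, 3) -> (1, 0)
```

Because consecutive indices always land in adjacent cells, an IP range drawn along the curve stays visually contiguous on the map.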

Automatically generated top-level nodes

Problems I had

I spent most of my time on the first task researching and learning… I had never worked with Docker, so it took some time just to boot up the nodewatcher container, and never having worked with Django it also took a rather long time to find out how things are done. Anyone working on nodewatcher in the future should really know these two fields better than I did going in.


I still have a few things to add; a better visual representation would be a good place to start.


This task, on the other hand, had a really fast start and then ground to a halt when I started to work on Django. My task was to revive a project not touched in years, still using virtualenv and the old PIL library, which hasn’t been maintained in 10 years. The wlan slovenija webpage is a great starting point for anyone trying to start making meshed WiFi nodes, but having been abandoned for so long, some of the page’s functionality was not operational, which meant that people who wanted to join the community would be deterred from doing so. The webpage is split into two containers: one holding the PostgreSQL database for the webpage, the other serving the webpage using uWSGI for Django and nginx for static files. Having struggled with Docker in my previous task, I was afraid my lack of Docker knowledge was going to slow me down, but it was quite the opposite: I managed to create both containers in a week or so, but then struggled when it came to Django.

Problems I had

My lack of Django knowledge really showed here; I spent days solving simple misconfiguration problems, and I wish I had had a better understanding of Django when working on this project. The second big problem was the fact that the project was already finished and I was just updating it, meaning I had no idea where anything was or how things worked. If I had started from the bottom up I am sure I would have learned more about Django, but the page was too big to do that in this time frame.


I grew to love Docker, even adapting it to a few of my own projects. When it comes to Docker I am sure I can help; however, with my current knowledge of Django I don’t think I can contribute to the project. I would really like a professional Django developer to take a look at it; I am sure I was missing something that would be really obvious to them, but not to me.

Finishing thoughts

I really enjoyed the opportunity to join an open source development team; it inspired me to work on more open-source projects… you never know who you might end up helping down the road. All in all it was a great summer and I really enjoyed learning about Docker and Django.

The post GSoC: Improving nodewatcher data representation capability (final report) first appeared on Freifunkblog.


GSoC: Libremesh Spectrum Analyzer – summary

Hello everybody!
This is my final report for GSoC 2017.
I have enjoyed this GSoC a lot. Having the chance to get involved with the LibreMesh development community has been a blessing, thank you Freifunk and GSoC for giving me this opportunity!
These months of coding for LibreMesh have allowed me to learn many new skills while contributing to the common project and getting more involved in its governance and community.
I have been working on many features, and most of them have been merged into the LibreMesh main branch, so in the following pages you can find all the technical details of the work done.

Principal contributions

This repository contains all the code related to the spectrum analyzer.

It is also an OpenWRT/LEDE feed, so it can be added to a feeds.conf file to be used as a source of packages.
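For example, an OpenWRT/LEDE build tree pulls a feed in through a line in feeds.conf — the feed name and repository URL below are placeholders, not the real address of this feed:

```
src-git spectrum_analyzer https://example.org/libremesh/spectrum-analyzer.git
```

After that, `./scripts/feeds update spectrum_analyzer` and `./scripts/feeds install -a -p spectrum_analyzer` make the feed’s packages selectable in menuconfig.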

In this repository you can find:

  • Spectral Scan Manager: manages ath9k states, recovers I/Q data from the Atheros modules and hands it over through ubus
  • Spectral Scan Decoder: an FFT_eval wrapper that receives the Spectral Scan Manager’s I/Q data and turns it into JSON
  • Spectral Analysis Collector: a configurable daemon that collects the Spectral Scan Decoder data for further analysis. The collection can be kept in memory or sent to a secondary server (like the OpenPAWS server)
  • Visualization Module: accesses the information handed over by the Decoder or the Collector (depending on which information we want to access) and visualizes it in a waterfall graph
  • OpenPAWS Server: OpenPAWS is an open implementation of the PAWS protocol, a TVWS database. The Spectrum Analyzer can talk to it to report on the usage of TVWS frequencies
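To illustrate what the Spectral Scan Manager drives underneath: ath9k exposes spectral sampling through debugfs. The following is a minimal Python sketch under assumptions — the function name is made up and the default path is only typical, not guaranteed on every device:

```python
from pathlib import Path

def grab_spectral_samples(phy_dir="/sys/kernel/debug/ieee80211/phy0/ath9k"):
    """Trigger one ath9k spectral sampling pass via debugfs and return raw FFT frames."""
    ctl = Path(phy_dir) / "spectral_scan_ctl"
    data = Path(phy_dir) / "spectral_scan0"
    ctl.write_text("chanscan")   # sample whenever the card tunes during a channel scan
    raw = data.read_bytes()      # binary FFT frames, decoded later (e.g. by FFT_eval) into JSON
    ctl.write_text("disable")    # stop sampling
    return raw
```

The raw frames are what the Decoder component turns into JSON for the Collector and the Visualization Module.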

I had the chance to contribute some changes to the LibreMesh core, namely:

  • Adding the first steps of Continuous Integration and Continuous Deployment
  • Enhancing LibreMesh LuCI web interface

Outreach activities

As part of my involvement in the LibreMesh team, I got the chance to take part in many outreach activities to spread the word:

Things I had to learn

These were the things that I had to dig deep into to get things done during GSoC:

  • OpenWRT/LEDE Build pipeline
  • ath9k module functionality
  • LuCI module creation
  • Data visualization and D3.js Visualization tool

Reports on Freifunk blog

During my GSoC I wrote some articles about the life of a GSoC and LibreMesh/OpenWRT/LEDE programmer:

Things done

Things to be done

  • Proper packaging: right now the packages are not ready yet, so manual installation is required. I’m getting into this in the upcoming weeks.
  • Variable frequencies: right now the Visualization Module only shows frequencies in the 5GHz range. Refactor the code to be able to display all frequencies.
  • Integration with LuCI: a LuCI module would be much more practical for integration with the rest of the architecture.

Future enhancements

  • Support for frequency shifters: there is a device that allows frequency shifting between 2.4GHz and TVWS frequencies by connecting it to the radio connector. Allow the system to support it: namely, configure that one is connected to a specific interface, and shift the detected frequencies accordingly.
  • Add support as an OpenPAWS agent: the scans done by this module could be used as input for the OpenPAWS server to monitor TVWS frequency use and be able to hand over frequencies based on current usage. For that, an agent needs to be developed that consumes the Spectral Analysis Collector data and sends it to the OpenPAWS server.

Final Thoughts

The project has been very successful for me to get more deeply involved with the LibreMesh community.

Also, it helped me understand the complexity and diversity of knowledge required to engage with FLOSS projects.

Moving forward, I commit to continue working with the LibreMesh project.

I will continue maintaining the packages I produced and learning from the community to better serve it.

The post GSoC: Libremesh Spectrum Analyzer – summary first appeared on Freifunkblog.


GSoC 2017 – wlan slovenija – Final report

What’s been done

The first blog post that describes the idea and goals can be read here, the first update here and the second one here.

So the Google Summer of Code came to a close. It was an interesting journey of learning, adapting and frustration. First I struggled with setting up the workspace to work on the LEDE platform. In the end it was successful, and the whole process is well documented, from setting up the virtual machines for running nodewatcher and nodewatcher-agent to actually coding, compiling and updating the agent with new packages. The end product is working HMAC signing of the agent’s report messages that are sent to nodewatcher. It can be used as a lightweight alternative to SSL certificates.
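The signing scheme can be sketched in Python — the key and the message framing here are illustrative, not the actual nodewatcher-agent implementation (which lives in the agent’s C code):

```python
import hmac
import hashlib

SHARED_KEY = b"per-node-secret"  # hypothetical pre-shared key known to node and server

def sign_report(body: bytes) -> str:
    """Return an HMAC-SHA256 tag the server can use to authenticate the report."""
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify_report(body: bytes, tag: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign_report(body), tag)
```

The appeal over SSL certificates is exactly what the sketch shows: one shared secret and one hash per message, with no handshake or certificate management on the node.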

After that I tackled the task of improving Tunneldigger, but was again met with outdated documentation that wasn’t helpful for setting things up. After much struggle and digging around Slack I managed to get things going. Unfortunately my health disagreed and prevented me from finishing the task fully.


What’s next

If possible, I intend to finish the last task anyway, so that my contribution to wlan-si and Google Summer of Code is complete.

I am contributing using my github account.

Thank you for the great opportunity and good luck!

The post GSoC 2017 – wlan slovenija – Final report first appeared on Freifunkblog.


geolocator (Software defined GPS) final evaluation

Hi everyone,

with this blog post I would like to wrap up the full Google Summer of Code project. For people who haven’t read about the geolocator (Software defined GPS) project before, it might be interesting to read these three blog posts first:

– geolocator (Software defined GPS) (english)[1] and (german)[2]

– geolocator (Software defined GPS) first evaluation (english)[3] and (german)[4]

– geolocator-software-defined-GPS-second-evaluation (english)[5] and (german)[6]

Otherwise, in the following I will give a short overview of the project structure to remind you of it. I structured the Google Summer of Code project into 3 main subprojects:

web backend,

– The web backend, named sgps-core, is a service which should give requesting clients their geo position.


gps-share,

– The idea of gps-share is to create a udev device which provides NMEA-format protocols over tty, depending on information received from the above-mentioned backend.

LEDE Package,

– The intention behind this subproject is to develop a new package for LEDE called geolocator, which should provide the geo position of LEDE devices.

Now I would like to give you the full state of each of the above-mentioned subprojects. First I will explain the web backend, and I will finish with a peroration including my valediction as a Google Summer of Code student.

web backend

Generally, the backend service receives, via the OpenWLANMap[7] app for mobile phones, the MAC addresses of surrounding wireless networks linked to GPS positions. This information is stored in a database. If a device like a WiFi router requests its position, it sends the surrounding wireless MAC addresses to the backend and gets back a geo position, which is calculated from this information in the database.

The new web backend, called sgps-core[8], is an API core which should replace the old backend. The old one consists of a collection of different programs in different programming languages. sgps-core is fully backwards compatible with the old openwifi API[9] for requesting a position. sgps-core is written in Go, which processes requests a lot faster than the old API, which is written in Ruby. sgps-core is also more secure, because it validates requests and only accepts strings containing comma-separated MAC addresses of 12 hex characters each.
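The accepted request format can be illustrated with a short sketch — written in Python here for brevity, while the actual sgps-core check is in Go:

```python
import re

# one or more comma-separated MAC addresses, each exactly 12 hex characters,
# with nothing else in the string
MAC_LIST = re.compile(r"[0-9a-fA-F]{12}(?:,[0-9a-fA-F]{12})*")

def is_valid_request(s: str) -> bool:
    """Accept only strings that are nothing but comma-separated 12-hex-digit MACs."""
    return MAC_LIST.fullmatch(s) is not None
```

Rejecting everything else at the door keeps malformed or malicious input out of the database queries.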

As a fallback feature, sgps-core is able to obtain coordinates for unknown WiFis by requesting them from the Mozilla Location Service (MLS)[10] if there are no database entries for those WiFis. The position is returned to clients in the form of latitude and longitude. As a quick reminder, here is the schema from the first post, which represented the functionality of sgps-core:

sgps-core also solves a problem with calculating the position. The old method simply took the average of all latitude values (and analogously for longitude). The new method uses the geographic midpoint calculation and needs 4 parameters, lat0, lon0, lat1, lon1 (given two points, it returns one), which is explained in detail in the following:

deg has to be replaced with the latitude or longitude value.

rad = deg * π / 180 <- general conversion from degrees to radians.

dlon = (lon1 – lon0) * π / 180

lat0 = lat0 * π / 180 <- lat0 from degrees to radians.

lat1 = lat1 * π / 180 <- lat1 from degrees to radians.

lon0 = lon0 * π / 180 <- lon0 from degrees to radians.

Converting into the Cartesian coordinate system:

Bx = cos(lat1) * cos(dlon)

By = cos(lat1) * sin(dlon)

Calculate the new position with reference to the sphere and convert back from the Cartesian coordinate system into the new latitude and longitude:

lat2 = atan2(sin(lat0) + sin(lat1), ((cos(lat0) + Bx)² + By²)^(1/2))

lon2 = lon0 + atan2(By, cos(lat0) + Bx)

At this point it would also be possible to use an ellipsoid to increase the accuracy of the positions. This may be interesting for long distances; for short ones, like those between seen wireless networks, it is not really relevant.

Converting back to degrees:

deg = rad / π * 180 <- general conversion from radians to degrees.

lat2 = lat2 / π * 180 <- lat2 from radians to degrees.

lon2 = lon2 / π * 180 <- lon2 from radians to degrees.
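Putting the steps above together, here is a direct Python transcription of the midpoint calculation — a sketch for clarity, not the sgps-core code, which is written in Go:

```python
import math

def geographic_midpoint(lat0, lon0, lat1, lon1):
    """Geographic midpoint of two points given in degrees, on a spherical Earth."""
    # degrees to radians
    lat0, lon0, lat1, lon1 = map(math.radians, (lat0, lon0, lat1, lon1))
    dlon = lon1 - lon0
    # Cartesian offsets of the second point relative to the first meridian
    bx = math.cos(lat1) * math.cos(dlon)
    by = math.cos(lat1) * math.sin(dlon)
    lat2 = math.atan2(math.sin(lat0) + math.sin(lat1),
                      math.sqrt((math.cos(lat0) + bx) ** 2 + by ** 2))
    lon2 = lon0 + math.atan2(by, math.cos(lat0) + bx)
    # radians back to degrees
    return math.degrees(lat2), math.degrees(lon2)

# the midpoint of (0°, 0°) and (0°, 90°) on the equator is (0°, 45°)
```

Unlike plain averaging, this stays correct near the poles and across the 180° meridian, which is exactly why the formula was adopted.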

In the last few weeks, I spent a lot of time discussing with the current server administrator how to deploy sgps-core on the server as a test environment. But he did not have much time, so we decided to migrate the service to our Nordwest Freifunk infrastructure to make the administration more accessible for other people. In my last report I wrote “I will release a first version in the next few days”. This could not be done because of the above-mentioned discussion. After the migration I can test the backend on huge databases and its compatibility with the DBs. The current code can be found here [11]. People who want to try sgps-core can check out the following URL[12].


gps-share

The idea at the beginning of GSoC17 was to write a program that provides GPS NMEA formats over a tty udev device. The information for the NMEA sentences should come from the above-mentioned sgps-core. As I mentioned in the first blog post, I discussed this with some people from the Mozilla Location Service mailing list, and it turned out that something similar already exists, called geoclue. To avoid developing redundant software I decided to drop this idea. Instead, the new plan was to build support for native GPS into gps-share[13], which is an add-on for geoclue. But during the Google Summer of Code I had to focus more on the other two subprojects, because they are more important, especially for Freifunk. In my peroration I will talk about the future plans, especially for gps-share.

LEDE Packages

The third subproject was to create some opkg packages for LEDE[14] and similar frameworks. The main package, called geolocator, provides the geo position of the device via UCI[15]. Positions are received from the above-explained sgps-core. The 4 other packages are only for Gluon[16]; they add the configuration options of the geolocator to the web interface.

This month I mainly worked on the LEDE packages. At the beginning of August I sent a merge request to Gluon for integrating the gluon-geolocator[17]. The contained geolocator program was written in ash shell code. While reviewing and discussing the merge request, I realized that I had to rewrite the program from shell to Lua code, because Gluon mainly works with programs written in Lua. You can find the shell code here[18] and the Lua version here[19]. At the moment I am waiting for another review and the subsequent merge.

The other packages, for the Gluon web interface, are also already in progress. The first idea was to detect installed packages and show the related configuration options in the web interface. This idea was dropped because I found a better solution: detecting packages at runtime means a lot of extra code on routers, which may only have 4MB of flash, for example. So I decided to generate the packages with their options at compile time. These packages are:





The main package is gluon-config-mode-geo-location, which already exists in Gluon, but with a different web interface. Each package should integrate either an OpenStreetMap map or the geolocator options; integrating both is also possible. For communities which would like to stay with the current variety of functionality, it is also no problem not to integrate any of these extra options.

Here is how the new packages look like:

I wrote some C++ programs which generate the Lua code for the Gluon web interface, which is written in Lua. Based on preprocessor variables, the set of options for each package is included in the Lua output of the C++ program. These preprocessor variables are set by selecting one of the above packages. PO files for the translation are also generated in the same way. A merge request for the above new packages can be found here[20]; I am still working on it.

Peroration and Future plans

Now I am coming to my peroration. The last 3 months were really awesome, just like last year as a student in the Google Summer of Code. I would recommend this great opportunity not only to students but also to open source organizations. Students can not only learn a lot of new things but also meet great people, make new friends and take part in many events. For example: at the beginning of August I was at SHA2017[21] (Still Hacking Anyway) and had a meetup with some Freifunk communities there. We had a great discussion about a lot of technical stuff and a nice time socializing. SHA2017 took place in the Netherlands, near Amsterdam. Another example: this week I flew to Spain to start my exchange semester. Coincidentally, a student from Germany whom I met at the beginning of GSoC17 in Berlin at the WCW[22] (Wireless Community Weekend) is also doing an exchange semester in Spain. We have already emailed each other and plan to meet up in the next months, probably in Barcelona or some other place. As I said above, this is my second time as a GSoC student, which means it is also my last time, and sadly I have to say goodbye to GSoC as a student now. But maybe next year I can work as a mentor to support other students in this great opportunity.

Back to the projects: as I said, I am still working on them. I will finish the integration into Gluon and LEDE and continue developing sgps-core, integrating new features and migrating the infrastructure to a better server. I would like to contact Zeeshan Ali, the maintainer of gps-share, and try to help on this project as well. I am also still working on the hoodselector, which is my Google Summer of Code project from last year; you can read about it here[23]. The hoodselector should also be integrated into Gluon, but that certainly requires a few weeks of work to integrate VXLAN into it. A merge request can be found here[24].

I would also like to say thank you to my mentors, Clemens John from Google Summer of Code 2016 and Johannes Rudolph from 2017, and especially to Andreas Bräu, who has worked hard in the Freifunk org for many years to give students these opportunities to be a part of the Freifunk community.


The post geolocator (Software defined GPS) final evaluation first appeared on Freifunkblog.


Implementing Pop-Routing in OSPF – Final evaluation updates

Hello again! Since the last update I have worked hard to finish my project and to reach its final milestone.

As I explained in my previous post[1], due to some issues we decided to change the topic of the project to implementing Pop-Routing in OLSRd instead of OSPF.

In this last month I completed the code for the OLSRd plugin[2], which I hope will be merged soon[3]. In order to allow PRINCE to interact with OLSRd I had to modify the PRINCE source code[4] and create a new plugin[5].

The last part of my GSoC was testing the functionality of my project.
To perform these tests I used a tool developed by the University of Trento called “NePA TesT”[6]. NePA allowed me to simulate a mesh network on my laptop and to perform tests on it. The network topology was defined using NetJSON, but for my purposes I modified it to use graph generators[7].

To ensure that PRINCE was working correctly on this virtual network, I measured the centrality and the tuned timer for each node. Then I compared these values to the ones calculated by the original algorithm. Since the simulated network behaves like a real one and needs a bit of time to converge, I took the last 10 values to avoid measuring errors. These are the maximum errors for each size and each kind of graph:

Maximum of percentage errors calculating nodes centrality

I also measured the “hello” messages’ rate to check whether it was being calculated correctly by PRINCE. As I did for the centrality, I took the mean of the last 10 values for each node and compared them against the ones calculated using the Python Pop-Routing algorithm.

Maximum of percentage errors calculating “Hello” messages’ emission rate

Hence, as we can see from these tables, PRINCE calculates the centrality, and the timer values, with a really small error. This test also highlighted a bug (*) in the c_graph_parser library with that particular kind of graph [8].

The last test I performed was to check whether the message to update the timers’ emission rate actually modified the emission rate of the messages.
I used a simple graph, 2 nodes connected by one link, and I captured the traffic with tcpdump before and after the update message.
After 30 seconds I sent a message to the OLSRd pop-routing plugin to update the hello timer to 5s. As you can see from the graph below, it is working correctly!

Hello messages measured emission rate

I can conclude that PRINCE is working correctly with OLSRd, and it can now be used to enhance the wireless community networks that are still using it.
I would like to thank Freifunk, Ninux and Google for giving me the opportunity to participate in GSoC.

Cheers, Gabriele Gemmi


The post Implementing Pop-Routing in OSPF – Final evaluation updates first appeared on Freifunkblog.

August 28 2017


PowQuty Live Log GSoC 2017 Final Update

This is the last blog entry in the series of Google Summer of Code project updates. It describes what has been done and what is left to improve in the PowQuty project.


PowQuty is a power quality monitoring tool which can be installed on a router running LEDE or OpenWrt. The router can be connected to a USB oscilloscope providing measurements, which powqutyd will process and present to the user in human-readable form.
All of this was tested on an x86-based LEDE router.

GSoC 2017

During this Google Summer of Code, live log functionality was added to PowQuty to provide information on power quality events. These events are:

  • voltage dip of 10% – 90% of the reference voltage on the measurement signal
  • voltage swell > 110% of the reference voltage on the measurement signal
  • voltage dip < 10% of the reference voltage on the measurement signal
  • > 5% of the measured values of one specific harmonic are over the defined threshold

On event occurrence, important information like time, duration and event type is written to a log file and presented in the extended LuCI app.

As shown in the picture above, the interface provides a traffic-light-like color system for these events: green indicates everything is within the EN50160 power quality norm, yellow means that 80% of the maximum time per week is already reached, and red means that the norm was violated during the last week.

In addition to log writes, notifications are sent out with Mosquitto. Mosquitto is a message broker using the MQTT protocol. It provides a publish/subscribe model, which allows a central server to subscribe to a topic and clients to send messages to the server under a topic. Mosquitto was already in use in PowQuty but was extended for EN50160 event notifications. This allows central logging of bigger power supply networks, monitored by multiple devices.

As another option, Slack messages can now be sent by PowQuty. Slack is a messaging program using (as one option among many) webhooks for interaction. Everyone with the webhook URL can send messages to the team. Sending out messages allows a user to react quickly to changing situations, or to be informed immediately when a power event occurs.

Beginning with pull request 20 [] I started to implement these features.
First, an option was developed to read measurements from a file, as most power supply networks are pretty stable and won’t provide many opportunities to test event handling in PowQuty.
Afterwards, Slack and MQTT notifications were added.
During testing of Mosquitto event messages, some seemed to get lost in intervals with many EN50160 events in a short period (sometimes more than 35 events per second). The solution seems to be to buffer all events before sending.
Something similar happened with Slack: Slack only allows one message per second (short bursts excluded) [rate-limits].
Buffering events would resolve this problem as well. An option for live email notification was considered at first, but was dropped, as spam protection would stop most of the messages and probably list users as spammers.
The last step was to add the traffic light system to the LuCI app, to enable users without knowledge of the norm to get an idea of the power quality of their power supply network.
In addition, a Slack library was written [libwebslack] to send Slack messages from PowQuty.
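The buffering idea could look roughly like this — a sketch, not PowQuty code (which is C); class and parameter names are made up:

```python
import time
from collections import deque

class EventBuffer:
    """Buffer power-quality events and flush them in batches, so bursts
    (e.g. >35 events/s) do not exceed a notifier's rate limit
    (roughly 1 message/s in Slack's case)."""

    def __init__(self, send, min_interval=1.0):
        self.send = send                # callable that takes a list of events
        self.min_interval = min_interval
        self.queue = deque()
        self.last_flush = 0.0

    def push(self, event):
        self.queue.append(event)

    def flush(self, now=None):
        now = time.monotonic() if now is None else now
        if self.queue and now - self.last_flush >= self.min_interval:
            batch = list(self.queue)    # send the whole backlog as one message
            self.queue.clear()
            self.send(batch)
            self.last_flush = now
```

Batching the backlog into a single message keeps every event while staying under the one-message-per-second ceiling.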

What can be improved

  • As mentioned before, event buffering is needed and will be added after GSoC
  • Email notification in the form of a weekly summary
  • More error checking and handling
  • Improving libwebslack to not use libcurl, to reduce its size
  • Providing libwebslack as an OpenWrt/LEDE package, for easier future use

Finally, I have to thank Dr. Thomas Huehn for being my mentor, and Freifunk for the work they do and especially for being a mentoring organisation for Google Summer of Code. Last but not least, I would like to thank Google for making this all possible.

If you want to review some of my earlier posts:

  • Introduction
  • First Update
  • Second Update

Best regards

The post PowQuty Live Log GSoC 2017 Final Update first appeared on Freifunkblog.


GSoC 2017 – Add MPTCP support in LEDE/OpenWRT trunk – Final

Brief summary

In the first post (at the beginning of the GSoC 2017 project) I set out a few checkpoints to complete by the end of the summer. I won’t copy them here, but the good news is that all of them were completed successfully. The main goal was a very simple, transparent multipath Wi-Fi link bandwidth aggregation. The proof is in the video above, and the details below.

What has been done in August

Because everything tested successfully in a virtual environment, the next step was to port everything onto a real, LEDE-based physical test environment. The first step was to build LEDE with MPTCP support for the routers. It went without any problem and I installed it on Netgear R7000 and Netgear R7800 routers. These are quite powerful SOHO routers, the R7000 with a 1.4GHz and the R7800 with a 1.7GHz dual-core CPU. But the R7800 uses a more recent architecture, so it seems more than twice as fast as the R7000. So I installed ss-redir on the R7800 and ss-server on the R7000 and configured them as before.
On the client, all TCP traffic is redirected to ss-redir in the iptables PREROUTING chain (except where the destination is the same LAN as the source). When this happens, the client’s TCP flow from the LAN gets split into two MPTCP sub-flows on the two WANs, which in our case are two Wi-Fi bridge connections. I used some old Ubiquiti devices (2 NanoStation M5 and 2 NanoStation Loco M5), as you can see in the video, just to try out whether it works. When I experimented with ss-server and ss-redir over a simple UTP cable connection, it turned out that the encryption is very slow even on these powerful CPUs: I get 700Mbps between the two routers (measured with iperf3), but with encryption turned on it slows down to 50Mbps or less (depending on the type of cipher). I decided to fork shadowsocks-libev and make a version that makes the encryption optional. I also created a custom package feed for my LEDE fork which contains that version. So if you clone the MPTCP LEDE tree and update the feeds, the shadowsocks-libev-nocrypto packages are available in menuconfig. This makes the connection over ss-redir/ss-server much faster.
On the server there is no special config, just an ss-server and static WAN IP addresses with a DHCP server. Every other device (the router with the client, and the 4 Wi-Fi bridges) gets its addresses and gateways from DHCP. This makes the configuration very comfortable.
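The PREROUTING redirect described above boils down to one rule of roughly this shape — the LAN subnet and the local ss-redir port are placeholders, not the configuration actually used in the testbed:

```
iptables -t nat -A PREROUTING -p tcp ! -d 192.168.1.0/24 -j REDIRECT --to-ports 1080
```

Every TCP connection not destined for the local LAN lands on the local ss-redir port, where MPTCP then splits it into sub-flows across both WANs.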

Simple topology of the multipath Wi-Fi bridging setup

I configured the Wi-Fi station pairs to different bands, 5180MHz and 5700MHz, to make sure they do not interfere with each other. Then I started the test! As you can see in the figure (and in the video, though it’s not as clear because of my small desk and the extra cables for the PoE injectors), I connected one LAN client to each router. One of them is my PC and the other is my notebook, and each of them runs iperf3. Very importantly, as I mentioned in my previous post, neither of them has any special configuration! Just plug it into the LAN port of the router and that’s it. During the iperf transmission, I unplugged one Wi-Fi bridge (from the path #1 VLAN) from the router: the iperf session continued, only the throughput dropped to half, from 40Mbps to 20Mbps. This is the expected result: one MPTCP sub-flow was torn down between the routers, but the other was still alive and functional. When I plugged the bridge back in and it got its IP address over DHCP, another MPTCP sub-flow was built back over the recovered Wi-Fi bridge and the throughput went back to 40Mbps.

Potential use-case and deployment

This is a small proof-of-concept testbed, but I think this project may work on real-life Wi-Fi mesh networks. It is not hard to imagine a mesh network with multiple available paths between the intermediate router devices. Another use-case is to speed up point-to-point rooftop Wi-Fi links: with this, you might beat the Ubiquiti airFiber24’s speed with multiple cheaper bridges :-). As I presented, there is a realizable gain for the user, with minimal configuration on the routers and no configuration on the end devices. In my opinion, the throughput depends on the CPU performance and not on the number of TCP flows; I also verified this in my virtual environment. So many clients and many TCP flows are completely fine, but for high throughput the setup requires powerful (x86 if possible) hardware.

    Future plans

The GSoC 2017 work is complete, but some things still have to be done in the future. The most important is UDP and other kinds of traffic: currently this traffic is single-path, routed through the default gateway. There is the MPT application (MultiPath Tunnel, like a multipath VPN without encryption) which is suitable for UDP traffic and handles many paths with different weight values (using the paths in different ratios). Another interesting approach is the MPUDP kernel module plus OpenVPN, but at this moment that is a small "hack" for research purposes.
Sadly, the current implementation of shadowsocks-libev is single-threaded, using only one CPU core. I would like to make it multithreaded, if possible, in the near future. I will maintain the MPTCP LEDE repo and my shadowsocks fork for as long as possible. Resources permitting, I would also like to set up a repo for my feed that contains the compiled packages. And yes, the feed currently contains only one application; I would like to extend it with other MPTCP-related packages. The ride never ends, the work continues!

I would like to thank Freifunk for adopting this project, and my mentors, Benjamin Henrion and Claudio Pisa, for their ideas and help! And of course Google, for making this project possible.

    MPTCP LEDE on github:
    Feed for packages on github:
    Shadowsocks-libev-nocrypto on github:
    Blogpost with the tutorial and detailed configuration:

The post GSoC 2017 – Add MPTCP support in LEDE/OpenWRT trunk – Final first appeared on Freifunkblog.


    OpenWifi – GSoC 2017 final report

    Hello everyone!
First things first: here is the code that I've written. You can find it in these repositories:
    OpenWifiCore (core server application)
    OpenWifiFeed (LEDE/OpenWRT feed with boot flasher and boot notifier)
    OpenWifiWeb (web frontend)
    OpenWifiTemplates (old templating system)
    OpenWifiLocation (plugin that detects the location of a node via google location api and nearby wifi aps)

I also worked together with Arne to interact with his SDWN controller and agent. We created a small website that explains how to use the two tools together. (As I'm writing this, it is still somewhat under construction, but I hope everything will be there soon 🙂 ) Furthermore, there is also specific documentation (WIP) for OpenWifi here.

To be a little more precise, here is the list of commits that were made:









Overview of what has been done

Everything that has been done can be put into four categories: authentication/authorization, API, database model and infrastructure. I want to give you a brief overview of each of these categories.


Docker images are now built automatically via TravisCI and deployed to Docker Hub. You can find more information in my first evaluation blog post. How to use the Docker images is described in the documentation. These images are also used for testing.


The graph-based database model was made a first-class citizen. It now converts automatically between the internal representation and the representation needed to sync to the AP. It is now also possible to create new configurations and create links between them. The query format is also used for authentication/authorization.


Providing all the new functionality via a REST-style API was the focus of this Google Summer of Code (web views are still needed for quite a few things). It is used for managing users, services and nodes, and for changing the graph-based configuration model. It is described in the documentation.

The concept of a service is also something new that I created together with Arne in August. If an external application needs to make changes to the configurations of specific nodes, it can simply subscribe as a service. A service consists of a list of database queries, a name, and a shell script with a compare string. If the output (stdout) of the shell script matches the compare value, the node gets the name of the service as its capability (capabilities are also something that was added during this GSoC). When a node has the capability, the job server applies the queries to its config at a regular interval.
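The capability check described above can be sketched in a few lines (an illustrative sketch, not OpenWifi's actual code; the script and service names are made up):

```python
import subprocess

def grant_capability(check_script, compare, service_name):
    # Run the service's shell script; if its stdout matches the
    # compare string, the node receives the service name as a
    # capability, and the service's queries will then be applied
    # to its config by the job server.
    out = subprocess.run(["sh", "-c", check_script],
                         capture_output=True, text=True).stdout.strip()
    return service_name if out == compare else None

# A node whose check script reports "mesh" gets the capability:
print(grant_capability("echo mesh", "mesh", "meshservice"))  # meshservice
```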


    This was the biggest part of this GSoC. There are two ways of giving access to a node – either by giving access to a path string of a configuration or by allowing a specific database query. For more details see my second evaluation blog post.

There were quite a few challenges, as this system allows for some really complex access configurations, and there are still some things to improve here (see below).
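For the path-string variant, authorization boils down to matching a requested configuration path against the patterns a user is allowed. A hedged sketch (OpenWifi's actual rule syntax may differ):

```python
import re

def authorized(allowed_patterns, requested_path):
    # Grant access when any allowed regular expression matches the
    # full path string of the requested configuration.
    return any(re.fullmatch(p, requested_path) for p in allowed_patterns)

rules = [r"network\.lan\..*"]        # this user may touch the LAN config only
print(authorized(rules, "network.lan.ipaddr"))   # True
print(authorized(rules, "system.hostname"))      # False
```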

    What else? Aka the smaller bits

There were also some smaller bits that don't really fit the other categories: the boot-notifier and boot-flasher scripts were improved, a way to abstract communication with the nodes was started (see documentation), and some small UI fixes were made to the graph view. And probably a lot of other small fixes I forgot 😉

    Future – or what needs to be done

I guess the most important thing is to expose all of these new features via the web views. That shouldn't be too hard, but it wasn't the highest priority during GSoC. Everything related to the graph should be made aware of all possible path strings that lead to a configuration; currently just one string is used, and this should be a list! The fine-grained authorization needs some more testing, and I also want to improve the pattern matching by combining the regular expressions used to describe a path string. This could be done with greenery. Authorization right now is also focused only on nodes and configuration; it would be nice to restrict access to some views and actions as well, for example access to LuCI, SSH keys, or executing commands on a node. Furthermore, there should be a per-node option for where "the truth" of a config lies (i.e. if the actual node configuration and the configuration on the server differ, which one is considered authoritative). It would then also be nice to disable the sync for some nodes (for example, when manual changes need to be made on the node).


A big thanks goes out to Google for organizing something as cool as the Google Summer of Code, to Freifunk for letting me do this project with them, and last but not least to Julius for being my mentor!

The post OpenWifi – GSoC 2017 final report first appeared on Freifunkblog.

    August 26 2017


    Luci2 and Libremesh – GSoC – Final


In these three months I worked on the implementation of Luci2 (the graphical interface of LEDE/OpenWrt). The project was to port the functionality that Libremesh currently uses in Luci to the new proposal. The new environment consists of a UBUS-based backend that exposes JSON with the data and the structure of the view.

As far as Google Summer of Code goes, I was able to write the ubus modules that expose information about bmx6, batman-adv, alignment, spectrum analysis, libremap and, finally, a series of utilities. The results can be found in the lime-packages-ui repository.


Each module has its own documentation of the calls and the expected responses.

For the future

Finally, I want to make clear that I intend to continue until a complete implementation with the front end is achieved, and to maintain the packages I made. I will adapt the current Luci packages to consume data from the UBUS modules, so they can be used immediately, without waiting for the complete development of Luci2.

Thanks to the Freifunk community and the Libremesh team for giving me the opportunity to participate in this GSoC. Without a doubt, I will continue to contribute to free software, to build more community and free networks.

The post Luci2 and Libremesh – GSoC – Final first appeared on Freifunkblog.


    netjsongraph.js – Google Summer of Code (GSoC) 2017 summary

Throughout the last three months, I was quite fortunate to work for Freifunk on netjsongraph.js under the guidance of my mentor Federico Capoano. Thanks for this invaluable experience, from which I learned a great deal and applied it in a practical project. Here is a summary of the work I have done during the Google Summer of Code (GSoC) 2017.

    Google Summer of Code project page


netjsongraph.js is a visualization library for NetJSON, a network topology data format. The main goals of netjsongraph.js can be summed up in the three lines below (more details in GSoC 2017-netjsongraph.js: visualization of NetJSON data):

    • Apply the modern front-end development tools and add tests workflow (#1, #45)
    • Rewrite it with WebGL (#11, #29, #39, #42, #47)
    • Improve the performance (#41, #44, #46)


    Github Repository :

    Examples on GitHub pages:

You can browse all examples on GitHub pages. Some screenshots of the application:
    basic example
    performance example
The force-directed layout is commonly used to visualize network data, as it offers insight into the relationships between nodes and links. The previous version of netjsongraph.js was implemented with d3 and rendered as SVG, which becomes very slow with thousands or tens of thousands of nodes or links. So I had to embrace GPU-accelerated WebGL for better performance.

    I have recorded my work in the blog every milestone:

By the way, having members submit weekly reports and blog posts is a great management method at Freifunk.

During the three months I made 116 commits. I created one big pull request that includes them:
    netjsongraph.js #48
    netjsongraph.js project panels
Almost all goals have been achieved:

    • Published a minor version
    • Improved development workflow
• Added tests
• Refactored the visualization with Three.js and d3-force
• Added more interactions, like hover (node tooltips), click (node or link information panels), pan and zoom
    • Improved performance

Especially on the performance side: it runs efficiently in Chrome, reaching 60 FPS with 5k nodes and 10k links. And if you don't want animation, you can choose the static rendering layout.


I also encountered some challenges I had never met before.

    Event binding and handling

As you know, WebGL renders all objects in one canvas tag. So how do you bind events to each geometry? You use raycasting. Raycasting is used for mouse picking (working out which objects in the 3D space the mouse is over), among other things, so you can tell which geometry the mouse hovers over and add interaction effects.
With thousands of objects, each with several events to handle, I had to develop an event controller to manage them.
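The controller's core job can be sketched as a dispatch table keyed by picked object and event type (a Python sketch of the idea; the real implementation is JavaScript on top of Three.js raycasting, and all names here are illustrative):

```python
class EventController:
    # One canvas-level listener feeds picked object ids (coming from
    # the raycaster's intersection test) into this registry, which
    # dispatches to the per-object handler for that event type.
    def __init__(self):
        self._handlers = {}          # (object_id, event) -> callback

    def on(self, object_id, event, callback):
        self._handlers[(object_id, event)] = callback

    def dispatch(self, picked_id, event):
        handler = self._handlers.get((picked_id, event))
        if handler:
            handler(picked_id)

ctrl = EventController()
ctrl.on("node-42", "hover", lambda oid: print(f"tooltip for {oid}"))
ctrl.dispatch("node-42", "hover")    # prints "tooltip for node-42"
```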


The bottleneck of this visualizer is performance (#41). I tried many methods to improve it, including:

    Reuse geometry and material

However, the color of every node is different, and a link should highlight itself when hovered, so each material must be independent and cannot be shared.

    Combine the mesh

Same problem as above: it's not flexible to combine everything into one mesh, because different nodes and links need different positions.

    Static rendering

Do the calculation before rendering, so there is no animation and no repaint.

    Using Web Worker

Web Workers are a simple means for web content to run scripts in background threads. A worker thread can perform tasks without interfering with the user interface, so putting the static layout calculation there is efficient.
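The pattern, with a background thread standing in for the Web Worker (names are illustrative; the real page and worker communicate via postMessage):

```python
import threading
import queue

def compute_layout(nodes):
    # Stand-in for the expensive static force-layout calculation.
    return {name: (i, -i) for i, name in enumerate(nodes)}

def layout_off_main_thread(nodes):
    # Run the calculation in a background "worker" and collect the
    # finished node positions, keeping the main (UI) thread free
    # while the work happens.
    results = queue.Queue()
    worker = threading.Thread(target=lambda: results.put(compute_layout(nodes)))
    worker.start()
    worker.join()        # the page would instead await the worker's message
    return results.get()

print(layout_off_main_thread(["a", "b"]))   # {'a': (0, 0), 'b': (1, -1)}
```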

    Force-directed algorithm

Different force-directed algorithms differ in complexity and cost. The Force-Atlas2 algorithm has some benefits over the force layout implemented in d3-force, so the current version may be refactored to use a more advanced algorithm in the future.

    What is left to be done

    • Add optional geographic map (#40)
    • Using Force-Atlas2 algorithm

More interactions and features should be added, and performance may be optimized using a new algorithm. I'd like to continue developing this project after GSoC.

In the end, thanks for the great patience and guidance of my mentors, and thanks to Google for providing me with this rare chance to contribute to an open-source community together with awesome members from all over the world. I really appreciate the invaluable experience accumulated this summer, and I believe it will have a profound impact on my career and life.

The post netjsongraph.js – Google Summer of Code (GSoC) 2017 summary first appeared on Freifunkblog.

    May 31 2017


    Spectrum Analyzer in the context of LibreRouter

    Hello to all!

My name is Nicolas Pace and this is my first time participating in GSoC for Freifunk.

For this opportunity I'm working with the LibreMesh community, in the context of the LibreRouter project, implementing a spectrum analyzer for LibreMesh and also for LEDE/OpenWRT.

Spectrum analysis is a very powerful tool for anyone who wants to improve the quality of the links they create, but it can also serve as a base for more complex functions, like diagnosis of the physical layer or many other things that have been implemented in other firmwares.


    What has been done already

During these last weeks I've had the chance to engage the community, and also to deepen my understanding of the problem at hand.

Also, I've got a working prototype of the command-line interface, and prototype code that has been used to display that information.

    Next steps

    • To create a lightweight service that shares the information with the web
    • To make a nice interface for the output

    Thanks for the opportunity of joining you, and hope to deliver as expected!

The post Spectrum Analyzer in the context of LibreRouter first appeared on Freifunkblog.


    Bringing a Little SDN to LEDE Access Points

    Hi everyone!

    My name is Arne Kappen and this is the beginning of my second participation in GSoC for Freifunk.

Last year, I implemented an extension for LEDE's netifd which enabled network-device handling logic to be outsourced to an external program while still integrating with netifd via ubus. My work included a proof-of-concept implementation of such an external device handler, allowing the creation and configuration of Open vSwitch OpenFlow switches from the central /etc/config/network file [1].

Sticking with Software-Defined Networking (SDN), this year I am going to provide the tools to build SDN applications which manage wireless access points via OpenFlow. The main component will be establishing the necessary message types for the control channel; I am going to extend LoxiGen to achieve this. In the end, there should be OpenFlow libraries for C and Java for the development of SDN applications and of their agents running on LEDE.
I will also write one such agent and a control application for the ONOS platform to test my implementation.

My ideal outcome would be a REST interface putting as many of the AP's configuration parameters as possible under the control of the SDN application. Such a system could provide comfortable management of a larger deployment of LEDE access points and be a stepping stone for more complex use cases in the future.

    I am looking forward to working with Freifunk again. Last year’s GSoC was a great experience during which I learned a lot.

    [1] Last Year’s GSoC Project

The post Bringing a Little SDN to LEDE Access Points first appeared on Freifunkblog.

    May 30 2017


    Implementing Pop-Routing in OSPF

    Hello everyone.

I'm Gabriele Gemmi, you may remember me for… Implementing Pop-Routing [1].
This is the second time I participate in GSoC, and first of all I'd like to thank the organization for giving me this opportunity.
Last year I implemented PR for OLSR2. The daemon, called Prince [2], is now available in the LEDE and OpenWRT feeds.

    What is Pop-Routing

PR is an algorithm that calculates the betweenness centrality [3] of every node in a network and then uses these values to compute the optimal message timers for the routing protocol on each node. In this way a central node sends messages more frequently and an outer one less frequently.
In the end the overall overhead of the network doesn't change, but convergence gets faster.
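The timer scaling can be illustrated with a simplified sketch (a linear toy model, not Prince's exact optimization): each node's message rate is made proportional to its centrality, while the network-wide rate stays equal to what every node using the default timer would produce.

```python
def pop_routing_timers(centrality, default_timer):
    # Give each node a message rate proportional to its betweenness
    # centrality while keeping the aggregate network rate identical
    # to "every node uses default_timer". Central nodes therefore
    # get shorter timers (they send more often), outer nodes longer.
    n = len(centrality)
    total = sum(centrality.values())
    aggregate_rate = n / default_timer
    return {node: 1.0 / (aggregate_rate * c / total)
            for node, c in centrality.items()}

timers = pop_routing_timers({"core": 3.0, "leaf": 1.0}, default_timer=2.0)
# The central node ends up with a smaller timer than the leaf, and the
# summed message rates still equal 2 nodes / 2.0 s = 1 message per second.
```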


My project focuses on extending Prince's functionality to use Pop-Routing with OSPF. I decided to work with BIRD, since it's written in C and it's already available for OpenWRT/LEDE.
In order to do this I need to develop 2 components:
— A plugin for BIRD that exposes the OSPF topology in NetJSON and allows updating the timers
— A plugin for Prince that communicates with the BIRD plugin

I have already started developing the former [4], and I'm looking forward to implementing the latter.
I'll keep posting my updates here, so stay tuned if you want to hear more.

    Cheers, Gabriele


The post Implementing Pop-Routing in OSPF first appeared on Freifunkblog.


    GSoC 2017 – RetroShare mobile improvements

Hi readers! I am Angela Mazzurco and I am very grateful to the GSoC community (Google, Freifunk, RetroShare etc.) for giving me the opportunity to participate as a GSoC student this year!
I study Architecture and Engineering at Pisa University, and here in Pisa I am involved in the local community network (eigenNet/
Thanks to the local community I got to know RetroShare, and now I use it in my daily life when I am in front of my laptop. Remote communication today is very often displaced from the personal computer to the smartphone; because of this, I often have to downgrade to less ethical and less secure communication platforms, since most of my friends are reachable only on their smartphones. This unfortunate situation inspired me to help develop RetroShare for mobile phones.
The RetroShare community has already made some effort in this direction, but the RetroShare Android app is still at an early stage and needs much improvement. I'll give my contribution to this big project, trying to solve issues with the interface and helping to develop it, to make it user-friendly and easy to use for all users.
During the community bonding period I started to prepare the development environment with suggestions from my mentors. I have been meeting them remotely on RetroShare, I have successfully compiled RetroShare for desktop, and now I am preparing the toolchain to compile RetroShare for Android, which is not as easy as it may seem.
The application interface is written in QML, a language that is part of the Qt framework, so my first steps have been to set up the Qt Creator IDE and to create my own fork of the RetroShare project [0].
The app communicates with the RetroShare API to get its information, using Unix sockets, and also with the native Android operating system, using JNI (the Java Native Interface).
After getting the toolchain working, I'm going to start improving the QML interface: adding features, improving the integration with the Android operating system, improving usability, and fixing a bunch of bugs.

The post GSoC 2017 – RetroShare mobile improvements first appeared on Freifunkblog.


    GSoC 2017 – RetroShare via Freifunk

    Hello, my name is Stefan Cokovski and I’m an undergraduate student at the Faculty of Computer Science and Engineering, Saints Cyril and Methodius University of Skopje. My field is Computer Networks Technologies.

Firstly, I would like to thank Google and the team responsible for organizing GSoC. GSoC is a wonderful opportunity for many students all over the world to gain some real experience working on open-source projects, but also to expand their network with new friends and potential colleagues. I would also like to thank Freifunk (for taking many projects related to computer networks under their wing, including RetroShare) and the lead developers of RetroShare (also my mentors) for being there for me during this community bonding period, answering my questions and helping me improve my ideas. I'm sure they will continue to help me during the later parts of GSoC.

    Before I tell you what my project involves, I would like to introduce you to what exactly RetroShare is and maybe convince you to start using it (if you don’t use it already) and possibly join the development process.

RetroShare is a decentralized, private and secure communication and sharing platform which provides many interesting features like file sharing, chat, messages, forums and channels. RetroShare is a free and open-source project, completely free of any costs, ads and terms of service. RetroShare is available on several operating systems, including various GNU/Linux distributions, FreeBSD, Microsoft Windows and Mac OS X.

    Sounds interesting? Read more.

    Why should you use and recommend RetroShare to your friends?

With the recent disclosures involving violations of privacy, exercising the right to have secrets and secure communication between friends has never been more difficult. Information is often intercepted by various agencies, and the need for a secure communication and sharing platform has never been greater. This is where RetroShare comes into play.


RetroShare:

• is a completely decentralized, friend-to-friend network designed for people who don't want to depend on centralized systems which often invade their users' privacy.
• provides you the means to exercise your right to have secrets and to control what you share and with whom you share it.
• makes use of strong cryptographic algorithms while keeping simplicity of use, which is very important for average computer users.
• can be the all-in-one alternative you're looking for to replace the dozen other communication methods you're using at the moment.

    Technical specifications:

RetroShare's network topology is, by definition, a decentralized friend-to-friend (F2F) network. RetroShare uses a DHT (distributed hash table) to locate friends and make the initial connection process easier. Transport is provided by IPv4 TCP and UDP, Tor and I2P, while IPv6 support is still in development. Authentication uses PGP keys, and the traffic is encrypted with TLS (OpenSSL). UPnP and NAT-PMP provide port-forwarding support, while UDP support helps to connect to friends behind NAT. RetroShare can be extended via plugins.

    Alright, so now since you’re at least partly familiar with what RetroShare is, let me tell you about my project.

Currently, RetroShare is mostly used as a desktop application. It has a Qt-based GUI which has been polished over time. RetroShare also has a web interface (bingo, this is what I'm interested in; sorry for keeping you in suspense there). The web interface is a bit behind the main Qt GUI in terms of functionality and appearance. In an age where mobile and portable devices dominate the share of online devices, it's absolutely crucial for RetroShare's web interface to be improved in both appearance and functionality.

Being a RetroShare user myself, I (and also many others) have felt the need for an improved web interface, which could drastically improve RetroShare's usability on devices other than desktops and open up many possibilities. What I mean is the following: with a solid web interface, RetroShare users can host the core of the application on a machine that runs 24/7, thus supporting the network as a constantly active node, and still enjoy the features that RetroShare provides through an interface suitable for mobile and portable devices.

By now you should be wondering: "Well then, why is there no version of RetroShare for Android?". Due to the nature of RetroShare (network bandwidth, hardware and other requirements), simply porting the application to Android (or any other mobile operating system) would not result in a stable and usable solution for the end user. Many other applications provide decent web interfaces which allow control and use via mobile and portable devices, and my goal with this project is to bring RetroShare's web interface to this point.

    Just for a reference, I will show you how RetroShare’s web interface (just on the home screen, as to not leak any information from the chats and etc) looks at the moment. And if I’m successful, I will get to show you how it will look at the end of this Google Summer of Code.

The post GSoC 2017 – RetroShare via Freifunk first appeared on Freifunkblog.


    Summer of code: Week 1

I am a student of computer science, but most of my knowledge comes from my DIY projects. I am a jack-of-all-trades kind of guy; I have tinkered with low-level stuff like add-ons and FPGAs,
but I have also worked with everything from the UE4 game engine to Blender and other high-level programs. I like creating visual things such as music visualizations, graphs and other, more interactive ways of displaying data. This summer I will help improve the visualization capabilities of nodewatcher.

My project has two main branches: improving existing graphs, and developing a brand-new way to visualize nodewatcher's used IP space. Nodewatcher has many graphs and a lot of data to visualize, and understanding the data is hard without proper representation. My knowledge of d3.js will help me create interactive, smooth and slick graphs that will improve nodewatcher's presentation of node data. Creating more dynamic and visually appealing graphs is important and very useful, especially for new users who might otherwise get lost in the data.
My second contribution will be a brand-new map of the used IP space. At the moment nodewatcher has no way to represent the IP pools that nodes are using, and looking at hundreds of subnets in text form is time-consuming and hard to visualize. This map will show which space is already used and which isn't. I have already made a small prototype visualizing the wlanslo IP pools ( ). It takes a while to load, as it has to draw over 7 thousand subnets, but when it's done you can clearly see which space is used and which isn't. The map also provides a way to zoom in on different subnets of wlanslovenia, giving a better look into how they are composed.
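The map's layout rests on converting a 1-D position along the address space into 2-D coordinates; the finished module does this with a Hilbert curve, which keeps consecutive addresses on adjacent cells. A minimal sketch of the standard distance-to-coordinates conversion (illustrative, not nodewatcher's actual code):

```python
def hilbert_d2xy(order, d):
    # Map a distance d along a Hilbert curve of the given order onto
    # (x, y) coordinates in a 2**order x 2**order grid, so consecutive
    # addresses land on adjacent cells (standard iterative algorithm).
    x = y = 0
    t = d
    s = 1
    side = 1 << order
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# 256 consecutive "subnets" on a 16x16 grid: every cell used exactly once.
cells = [hilbert_d2xy(4, d) for d in range(256)]
assert len(set(cells)) == 256
```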
Reports about progress will be posted to the developers mailing list ( ).
Me, my sombrero and I are ready for the summer of code

The post Summer of code: Week 1 first appeared on Freifunkblog.


    GSoC 2017 Attended Sysupgrade


Hello, my name is Paul Spooren and I'll be working on attended sysupgrades this Google Summer of Code. I'm 24 years old and studying computer science at the University of Leipzig. With this blog post I'll try to explain my project, its advantages and its challenges.

Topic change from captive portals

When I applied to GSoC, my first application covered the implementation of "captive portals" for LibreMesh. After discussing the details with my mentors, we decided to switch the project.
The main shortcomings were the following:
* Captive portals need testing on all kinds of devices: Apple devices use a different approach than Android, Linux distributions differ, and so do all kinds of Microsoft Windows. Testing would take too much effort to provide a stable solution.
* Captive portals usually intercept HTTP traffic and change its content with a redirect to the login provider's splash page. This does not work with encrypted traffic (HTTPS) and would result in certificate errors.

Discussing what would be of generic use to OpenWRT/LEDE and LibreMesh, we came up with the topic of a simple sysupgrade solution and settled on that.

    What are attended sysupgrades?

Performing updates on routers is quite different from a full Linux distribution. It's not always sustainable to do a release upgrade via a package manager; instead, it's usually required to re-flash the system image. Depending on the installed packages, an image rebuild may be too complex for regular users. A more convenient way is needed.

The main idea is to provide a simple function within the web interface to automatically download a custom sysupgrade image with all currently installed packages preinstalled.
An opt-in option would check for new releases and notify via luci(-ng) or the command line.

This approach would also help to upgrade a router without a full computer at hand. The web interface can be accessed from mobile phones, and as no complicated image building is required, all users can perform sysupgrades on their own.

Distributions like LibreMesh may have a more frequent package release cycle, and devices may not offer opkg due to limited flash storage. The simple sysupgrade approach could be used as an opkg replacement for these special cases and keep devices up to date.

    How does it work?

The web interface will have a new menu entry called "Attended Upgrade". The page sends the currently installed release to the server and checks its response. If an upgrade is available, a notification is shown. A click on the download button sends a request to the server and downloads the image. Another click uses the sysupgrade mechanism and installs the image. After reboot, the system should run as expected, with all previously installed packages included.

This project will implement an "image as a service" server side which provides custom-built images depending on the installed packages. A JSON API will enable routers to send requests for custom images. Built images will be stored and reused for other requests with the same package selection and device model.
A simple FIFO queue will manage all build requests. Created images will be stored in a priority-queue system, so the most requested combinations are always in the cache.
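The queue-plus-cache behavior described above can be sketched like this (class and file names are made up for illustration; the real server would store actual sysupgrade images):

```python
from collections import deque

class ImageCache:
    # Requests are keyed by device model plus the exact package set;
    # unseen combinations are queued for building (FIFO), and finished
    # images are kept so identical requests are served immediately.
    def __init__(self):
        self.build_queue = deque()
        self.images = {}

    def request(self, model, packages):
        key = (model, frozenset(packages))
        if key in self.images:
            return self.images[key]           # cache hit
        if key not in self.build_queue:
            self.build_queue.append(key)      # schedule a build
        return None                           # caller polls again later

    def build_next(self):
        key = self.build_queue.popleft()
        self.images[key] = f"sysupgrade-{key[0]}.bin"   # stand-in for a build
        return self.images[key]

cache = ImageCache()
assert cache.request("tl-wdr4300", ["opkg", "luci"]) is None   # not built yet
cache.build_next()
# Same model and package set (order irrelevant): served from the cache.
assert cache.request("tl-wdr4300", ["luci", "opkg"]) == "sysupgrade-tl-wdr4300.bin"
```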


* With new releases, packages may be renamed. This can happen after a split when a package grows in size as more and more features are added, or when different versions of a tool exist. The update server has to know about all renamed packages and create an image with all needed programs. Therefore a replacement table will be created which can be managed by the community; merges, splits and new naming conventions will be covered. To make updating easy, the server will try to handle changed names as automatically as possible. If there are different possibilities to choose from, there will be a menu in the web interface.

    * Currently LuCI is the de facto web interface of LEDE/OpenWRT. Eventually it will be replaced by luci-ng with a modern JavaScript framework. All router-side logic has to be easily portable to the new web interface.

    Implementation details

    The main logic will happen within the browser, which can use secure HTTPS to communicate with the update server. The user’s browser mediates between the router and the upgrade server. The following diagram tries to illustrate the idea.

    Once opened, the upgrade view will ask the router via an rpcd call for the installed release and send the version to the update server as an *update availability request*. The server will answer with an *update availability response* containing information about the update if one exists, or a simple status 204 (No Content) code. If a new release exists, the web interface will perform another rpcd request to get details of the device, installed package versions and flash storage. This information is then combined and sent as a JSON request to the update server as an *image request*.

    The update availability request should look like this:

        {
            "distro": "LEDE",
            "version": "17.01.0",
            "packages": {
                "opkg": "2017-05-03-outdated"
            }
        }

    The update server will check the request and answer with an *update availability response*:

        {
            "version": "17.01.1",
            "packages": {
                "opkg": "2017-05-04-new",
                "ppp-mod-pppoe2": "2.0"
            },
            "replacements": {
                "ppp-mod-pppoe": "ppp-mod-pppoe2"
            }
        }

    The response contains the new release version and the packages that will be updated. Note that even if there is no new release, packages could be updated via a sysupgrade. The idea is that devices without opkg installed can receive package updates as well.
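    The availability check on the server side could look roughly like this. The `KNOWN_RELEASES` table and the `check_update` helper are illustrative assumptions based on the request/response examples above, not an existing API:

```python
# Hypothetical table of known releases per distribution,
# ordered from oldest to newest.
KNOWN_RELEASES = {
    "LEDE": ["17.01.0", "17.01.1"],
}

def check_update(request):
    """Return (http_status, body): 200 with update details when a
    newer release exists, otherwise 204 (No Content)."""
    releases = KNOWN_RELEASES.get(request["distro"], [])
    current = request["version"]
    if current in releases and releases[-1] != current:
        return 200, {"version": releases[-1]}
    return 204, None
```

    A router on 17.01.0 would get a 200 response announcing 17.01.1, while a router already on the latest release gets 204.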

    All changes will be shown within the web interface to let the user know what will change. If the user accepts the upgrade, a request will be sent to the server. The image request would look something like this:

        {
            "distro": "LEDE",
            "version": "17.01.0",
            "revision": "48d71ab502",
            "target": "ar71xx",
            "subtarget": "generic",
            "machine": "TP-LINK CPE510/520",
            "packages": {
                "ppp-mod-pppoe2": "2.0",
                "kmod-ipt-nat": "4.9.20-1"
            }
        }

    Once the update server receives the request, it will check whether the image was created before. If so, it will deliver the image straight away. If the request (meaning the device and package combination) is made for the first time, several checks determine whether the image can be created. If all checks pass, the wrapper around the LEDE ImageBuilder will be queued, and a build status API is polled by the web interface. Once the image is created, a download link is provided.
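    The build-status flow the web interface polls could be modeled like this. The status names (`queued`, `building`, `created`), the `BuildJob` class and the `poll_status` helper are made up for illustration:

```python
import itertools

class BuildJob:
    """One queued ImageBuilder run; advances through its lifecycle
    as the (hypothetical) build worker makes progress."""
    _ids = itertools.count(1)

    def __init__(self):
        self.id = next(self._ids)
        self.status = "queued"       # queued -> building -> created
        self.download_url = None

    def advance(self):
        if self.status == "queued":
            self.status = "building"
        elif self.status == "building":
            self.status = "created"
            self.download_url = f"/download/{self.id}.bin"

def poll_status(job):
    """What a polling web interface would see for this job."""
    body = {"status": job.status}
    if job.download_url:
        body["url"] = job.download_url
    return body
```

    The web interface would keep polling until the status becomes `created` and a download link appears in the response.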

    In the unlikely event of an unsolvable package problem that the replacement table can’t fix on its own, the user will be asked to choose from a list. The new combination of packages will be sent to the server as a new request, resulting in a sysupgrade image. This approach still needs some evaluation to see whether it is practical and really needed.

    Using the ImageBuilder offers a generic way to provide sysupgrades for different distributions. The ImageBuilder feeds can be extended to include distribution-specific packages like the LibreMesh package feed.

    The replacement table could be implemented as follows:

        # ./lede/replacements/17.01.1
        {
            "libmicrohttpd": {
                "libmicrohttpd-no-ssl": {
                    "default": true
                },
                "libmicrohttpd": {}
            },
            "openvpn": {
                "openvpn-openssl": {
                    "default": true
                },
                "openvpn-mbedtls": {
                    "installed": ["mbedtls", "polarssl"]
                },
                "openvpn-nossl": {}
            },
            "polarssl": {
                "mbedtls": {
                    "default": true
                }
            }
        }

    libmicrohttpd was replaced by libmicrohttpd-no-ssl (installed as default) and libmicrohttpd.
    openvpn was split into various packages depending on the installed crypto library; openvpn-openssl is the default, while openvpn-mbedtls is only installed if mbedtls (or its prior name polarssl) was installed before.

    For better readability, the YAML format could be preferred.
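    Applying such a replacement table to a device’s installed package list could be sketched as below. The `resolve` function is an illustrative assumption; for simplicity it only handles `default` entries and ignores `installed` conditions, and the embedded table is a trimmed copy of the example above:

```python
# Trimmed replacement table following the structure shown above.
REPLACEMENTS = {
    "libmicrohttpd": {
        "libmicrohttpd-no-ssl": {"default": True},
        "libmicrohttpd": {},
    },
    "polarssl": {
        "mbedtls": {"default": True},
    },
}

def resolve(installed):
    """Replace renamed packages automatically; return the new list
    plus the names needing a user decision (no default candidate)."""
    result, ambiguous = [], []
    for pkg in installed:
        options = REPLACEMENTS.get(pkg)
        if options is None:
            result.append(pkg)       # name unchanged, keep as-is
            continue
        defaults = [name for name, meta in options.items()
                    if meta.get("default")]
        if defaults:
            result.append(defaults[0])
        else:
            ambiguous.append(pkg)    # offer a menu in the web interface
    return result, ambiguous
```

    Packages with an ambiguous replacement end up in the second list, which is where the web interface menu mentioned earlier would come into play.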

    LibreMesh introduced a simple way to manage community-specific configurations. This configuration method is flexible for other communities as well and should be integrated into the update server. An optional parameter could contain the profile name, which will be automatically integrated into new images.

    "community": "",

    The parameter could also contain a full domain which leads to the needed files; this feature needs more evaluation.

    Possible features

    * The current design is an attended upgrade triggered by and dependent on the web interface. A possible feature would be to add this logic to the command line as well.

    * Once the sysupgrade is possible via shell, an unattended sysupgrade would be possible. A testing and a release channel could enable unattended upgrades for tested (device-specific) images only. If an image works after an attended upgrade, it could be tagged and offered via the release channel.

    * Mesh protocols may change and outdated routers lose connectivity. A possible solution for upgrading devices losing contact could be to automatically connect the outdated routers to updated routers’ open access points, perform an upgrade and reconnect to the mesh.

    Final thoughts

    All thoughts above are not final; consider them an RFC. I’m very happy to receive comments and criticism. My goal is a generic update service from which all communities, and LEDE/OpenWRT itself, can benefit.
    Feel free to contact me at paul [a-t) spooren (do-t] de or on freenode/matrix as aparcar.

    The post GSoC 2017 Attended Sysupgrade first appeared on Freifunkblog.


    GSoC 2017-netjsongraph.js: visualization of NetJSON data

    Project intro

    NetJSON is a data format based on JSON (What is NetJSON?), and netjsongraph.js (GitHub) is a visualization library for it. This library has attracted quite some interest from around the world, but it has some shortcomings, such as a lack of tests and of a modern build process.

    Therefore our goal is to improve the features and development workflow of netjsongraph.js. To be specific:

    • make it faster with large numbers of nodes
    • make it more mobile friendly
    • use modern tools that are familiar to JS developers, so they can contribute more easily
    • add automated tests so we can be more confident about introducing changes
    • get rid of complex features
    • make it easy to extend, so users can experiment and build their own derivatives
    • make it easy to redraw/update the graph as new data comes in; at least at the library level we should support this
    • geographic visualization (like the nodeshot project)


    About me

    I’m a graduate student from China and also a front-end developer with more than one year of working experience. I am now interested in data visualization and have already made several visualization projects of network structures. Luckily, my proposal was selected by Freifunk for Google Summer of Code 2017. It’s a great opportunity to participate in a promising open source project. Thanks to my mentor for the guidance; I hope I can do an excellent job. I have listed the following plan:

    Tasks and Schedule

    • create a new branch: build the project with yarn, Webpack and Babel. 1 week
    • build a (mostly) backward compatible version. 1 week
    • draw a demo graph using canvas or WebGL. 2 weeks
    • make an example page to show visualization results. 1 week
    • add tests (using Ava and XO) and CI. 1 week
    • discuss and design the visualization view. 1 week
    • import and integrate OpenStreetMap or Mapbox to make a map. 1 week
    • implement the visualization. 8 weeks
    • beautify the visualization. 1 week
    • improve the visualization and tests. 4 weeks
    • design an interface for plugins (to make this library extensible). 2 weeks

    The post GSoC 2017-netjsongraph.js: visualization of NetJSON data first appeared on Freifunkblog.


    GSoC 2017 – wlan slovenija – HMAC signing of Nodewatcher data and IPv6 support for Tunneldigger


    I am a student at the Faculty of Computer and Information Science in Ljubljana, Slovenia. Like (almost) every “computer enthusiast” I liked gaming and later found myself developing an OpenGL graphics engine. All engrossed in C++ and all sorts of algorithmic challenges, I slowly came to realize that something was missing: my knowledge of anything network related. So, combining my two other interests, information security and an inexplicable love of tunnels, I applied to Google Summer of Code with the following ideas. As a participant in this year’s Google Summer of Code I will develop some new goodies for two projects of the wlan slovenija open wireless network.

    The first one is for nodewatcher, which is an open source system for the planning, deployment and monitoring of the wireless network. It is a centralized web interface which is also used for generating OpenWrt-based firmware images for specific nodes. After flashing the wireless router with the generated image, it just needs to be fed some electricity and it automatically connects into the network using VPN, or wirelessly in case of an existing nearby node. Nodewatcher then collects all the data about a node’s performance, either by connecting to nodes to obtain data, or by nodes pushing their data to nodewatcher. This data is not sensitive, but we can still worry about it being manipulated or faked while in transit between the node and nodewatcher. The problem is that all the monitoring reports are currently unsigned. This poses a security risk in the form of a spoofing attack, where anyone could falsify the messages sent to nodewatcher. The solution is to assign a unique nodewatcher signing key to every node. The node will then sign the monitoring output using a hash function in HMAC (hash-based message authentication code) mode. This means that a computed “signature” will be sent along with every message, and nodewatcher can check whether the data was altered in any way. In the event of a signature verification failure, a warning will be generated within the nodewatcher monitoring system. This is important because it assures the integrity of received data and inspires confidence in using it to plan the deployment of new nodes in the future.
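    The signing scheme described above boils down to a few lines. This is a minimal sketch, assuming SHA-256 as the hash and a hex-encoded digest on the wire; the actual nodewatcher key handling and message format may differ:

```python
import hashlib
import hmac

def sign_report(key: bytes, report: bytes) -> str:
    """Node side: compute the HMAC 'signature' sent along with
    every monitoring report."""
    return hmac.new(key, report, hashlib.sha256).hexdigest()

def verify_report(key: bytes, report: bytes, signature: str) -> bool:
    """Nodewatcher side: recompute the digest with the node's key.
    compare_digest avoids leaking timing information."""
    return hmac.compare_digest(sign_report(key, report), signature)
```

    Any alteration of the report in transit changes the digest, so verification fails and nodewatcher can raise its warning.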

    The second contribution will be to Tunneldigger, which is a simple VPN tunneling solution based on L2TPv3 tunnels. It is used to connect nodes which do not have a wireless link between them into a common network. Using existing network connectivity, it creates L2TP tunnels between nodes. The current limitation is that tunnels can only be established over IPv4. This poses a problem because, due to the dramatic growth of the internet, the depletion of the pool of unallocated IPv4 addresses has been anticipated for some time now. The solution is the use of its successor, IPv6. Since the tunnels are already capable of carrying IPv6 traffic, the capability of establishing them over IPv6 will be developed. Tunneldigger will also support a mixed IPv4/IPv6 environment where both server and client have some form of IPv6 connectivity. That way Tunneldigger will finally be made future proof!
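    One common building block for such a mixed IPv4/IPv6 environment is a dual-stack socket. The sketch below is only an illustration of that idea with a plain UDP socket; Tunneldigger itself speaks L2TPv3 and its actual socket handling may differ:

```python
import socket

def dual_stack_udp_socket(port=0):
    """An AF_INET6 socket with IPV6_V6ONLY disabled accepts both
    address families on one port: IPv4 peers appear as IPv4-mapped
    addresses (::ffff:a.b.c.d)."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    # 0 = also accept IPv4 connections on this IPv6 socket
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", port))
    return sock
```

    With `port=0` the kernel picks a free port, which is convenient for testing; a real server would bind its fixed control port.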

    Reports about my work will be available on the development mailing list.

    Yay for the free internet!

    The post GSoC 2017 – wlan slovenija – HMAC signing of Nodewatcher data and IPv6 support for Tunneldigger first appeared on Freifunkblog.

    May 29 2017


    GSoC 2017 – LuCI2 on LibreMesh

    My name is Marcos Gutierrez, I am from Argentina, and this year I am participating in GSoC 2017 with Freifunk. My main task is to incorporate LuCI2 into LibreMesh and to adapt or rewrite the modules that are currently used.

    LuCI2 – UI

    In my first approach to LuCI2 I realized that there is much more to do than it seemed. The development of LuCI2 is still looking for a more stable path; there are good ideas, but the implementation, to my understanding, is incomplete. The base UI build alone weighs 1.4 MB, which far exceeds what LibreMesh requires. So I should explore some alternatives to drastically reduce the size.

    LuCI2 – UBUS

    It seems to me the right choice to interact with ubus through a REST API, avoiding the rendering of LuCI2 on the router. For now the interaction with the frontend is programmed as an AngularJS service, but it could be abstracted from the framework and published as a separate package, strengthening the possibility of using lighter or more up-to-date frameworks.

    AngularJS, Gulp, Bootstrap, Icons….

    JavaScript development in recent years has a pace that is difficult to follow, and even more difficult to maintain. Decisions about which frameworks to use may be correct at the beginning of development and look outdated by the time a stable release is published. This happened to LuCI2; the solution is to modularize as much as possible, making small parts reusable, agnostic and maintainable. That way, versions developed on different frameworks can coexist, as can mobile applications or even command-line tools. In addition, most web libraries are not designed to take up little space. In the context of embedded devices it is problematic to choose libraries developed for the web only because they are the most popular.


    • Make the core elements of LuCI2 modular and abstract them from the web framework.
    • Look for alternatives to AngularJS that better fit the limitations of the routers.
    • Try to implement backward compatibility with the modules already developed.
    • Document the changes necessary to migrate components.

    The post GSoC 2017 – LuCI2 on LibreMesh first appeared on Freifunkblog.
