
August 14 2018

15:58

VRConfig Final

Hi,

This is the final blog post about my project, VRConfig.
VRConfig aims to improve the accessibility and usability of OpenWrt and its web interface LuCI, especially for inexperienced users.
It achieves this by introducing a graphical configuration option: users can configure their router by interacting with a picture of the router model they are using, instead of digging through menus full of technical terms they do not understand.
In order to present every user with the correct picture of the more than 1,000 supported router models, the help of the community is needed.
Everyone can take a picture of the backside of their router and annotate the ports on that picture using the annotation app I developed (https://vrconfig.gitlab.io/annotator/).

The annotator can be used to mark the location of all ports of the router

 

You can then submit the JPG file together with the annotation file (a JSON file) to the LuCI app via a merge request here: https://gitlab.com/vrconfig/luci-app-vrconfig.
The Makefile automatically chooses the right JPG/JSON pair based on the file name during the build process.
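For illustration only, here is a minimal Python sketch of how such an annotation file might be structured and read; the field names ("ports", "name", "x", "y", "width", "height") are assumptions for this example, not necessarily the annotator's actual output format.

import json

# Hypothetical annotation file for a router picture, named after the router model:
# {"ports": [{"name": "lan1", "x": 120, "y": 80, "width": 40, "height": 30}, ...]}
with open("router-model.json") as f:
    annotation = json.load(f)

for port in annotation["ports"]:
    print(port["name"], port["x"], port["y"], port["width"], port["height"])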

The LuCI application is currently a demo and will be improved in the future.
Currently, it looks like this:

You can hover over the different ports, and a click takes you to the corresponding configuration. LAN ports that currently have a cable plugged in are marked green.
For that I developed a Lua daemon which monitors the corresponding ports in real time and provides the interface with their status.
There is also a list of all currently configured virtual interfaces; clicking on one marks the associated physical ports on the image.

Future Plans

In the future I plan to continue polishing the LuCI interface. One extension could be to mark the ports that currently have Internet access. Another could revolve around making it possible to configure some settings via drag and drop on the image.

Acknowledgments

Thanks a lot to my mentor Thomas for his excellent support and his long-term vision that made this project possible in the first place.
Thanks also to my colleague Benni for his extremely helpful suggestions throughout the project.
Thanks as well to Freifunk for letting me work on this project, and to Google for organizing GSoC.

The full source code of everything related to this project can be found here: https://gitlab.com/vrconfig


13:49

OpenWLANMap App Final Update

Hi,

This is my final update for GSoC.

In this blog post I would like to summarize all the work I have done in the last three months, as well as the remaining problems and future plans.

An introduction, my progress and further information can be found under [0], [1] and [2].

The new app is compatible with the old app in all basic functionality [3]. Besides that, the code follows the Google checkstyle rules, contains full Javadoc and clear interfaces, and the app's performance is partly improved.

Final architecture and app design:

Basic changes in comparison to the old app

  • The old, broken UI is replaced by a newly designed UI.
  • Storing: The old app uses a non-standard database, writing multiple access points as raw bytes to a file, which stores redundant data and is difficult to maintain. The new app uses SQLite, which is easier to maintain and extend. There are no redundant access point entries, since the BSSID is used as the primary key and an entry is only updated if the new RSSI is higher. Storing is done not by the scan thread but by a separate thread (WifiStorer), which reads from a blocking queue (WifiQueue). A list of 50 APs (which can be raised if necessary) is put into the blocking queue as one item, and the storer thread blocks while the queue is empty, to prevent writing to storage the whole time (see the sketch after this list).
  • Uploading: As in the old app, uploading depends on the user's settings: manual, automatic on any Internet connection, or automatic on wireless connections only. The user can also set the number of APs that triggers an automatic upload, from 5,000 to 50,000, and can only trigger a manual upload with at least 250 APs. The new app uploads at most 5,000 APs per request in order to prevent out-of-memory problems on devices with little RAM. A message containing an upload summary, the new rank or an error is reported back to the user. The WifiUploader uses UploadingQueryUtils for openwifi.su and can be swapped out quickly if the backend changes.
  • Scanning: The scan period is set dynamically, depending not only on speed but also on night mode, and I am working on movement detection based on sensors. Every 2 s by default, the scanner thread sends WifiLocator a request for the current position and the scanned WiFis. WifiLocator uses GPS to determine the position; if there is no GPS fix, the scanned WiFis are used to determine the user's position, which no longer works in the old app. The location method in use can be shown by the color of the overlay number, as configured in the settings.
  • Resources are managed and can be controlled through user options (kill the app on low battery, after a long time without GPS, etc.). While the old app checks for a missing GPS fix on every scan, the new app starts a separate resource-checking thread only if the user enables it.
  • The old app exports only the user's own BSSIDs and puts the export file (and expects the import file) in external storage by default. The new app allows the user to import/export the account with team and tag information, as well as reset all settings back to default. The user can browse for the import file and save the export file at any writable location in storage.
  • A map of all data from openwifi.su and a map of the user's contributed data are integrated directly into the app (using osmdroid), as well as a ranking list.
  • The minimum API level is raised to 19, which currently covers over 96% of devices [4]. Permissions are checked at runtime as required.
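The following is a minimal sketch of the storing idea described above, using plain Python and SQLite purely for illustration; the table and column names are assumptions, and the real app implements this in Java with Android's Room library rather than with this code.

import sqlite3

conn = sqlite3.connect("wifis.db")
conn.execute("""CREATE TABLE IF NOT EXISTS access_point (
                    bssid TEXT PRIMARY KEY,   -- one row per BSSID, no redundant entries
                    lat   REAL,
                    lon   REAL,
                    rssi  INTEGER)""")

def store(bssid, lat, lon, rssi):
    # Insert a new AP, or update an existing row only if the new RSSI is stronger.
    # The UPSERT syntax requires SQLite >= 3.24.
    conn.execute("""INSERT INTO access_point (bssid, lat, lon, rssi)
                    VALUES (?, ?, ?, ?)
                    ON CONFLICT(bssid) DO UPDATE
                    SET lat = excluded.lat, lon = excluded.lon, rssi = excluded.rssi
                    WHERE excluded.rssi > access_point.rssi""",
                 (bssid, lat, lon, rssi))
    conn.commit()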

What I learned

I learned a lot about Android development.

  • Life cycles: The UI and the service communicate via LocalBroadcast; the BroadcastReceiver has to be registered and unregistered in onResume/onPause, as does the SettingPreferenceListener, to get proper lifecycle management of the activity.
  • Using LocalBroadcast instead of a global broadcast to keep data inside the app.
  • From API 23 on, "dangerous" permissions have to be requested at runtime. Many system flags and parameters differ between API versions, which requires a lot of version checks at runtime.
  • Working with and managing a service with a lot of parallel processes.
  • osmdroid: an open-source library for working with OSM that is compatible with the Google Maps API.
  • SQLite databases with the Room library.
  • etc.

I also learned how important architecture is, since I jumped into coding too fast and my mentor had to stop me and give me some helpful advice. We redesigned the architecture with a component controller at the center. Every other component should only do its own job and communicate with the controller, not with other components directly, which makes it easy to extend or replace any component.
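As a rough Python illustration of that controller-centric idea (the class and message names are invented for this sketch and do not correspond to the app's actual components):

class Controller:
    # Central hub: components talk to the controller, never to each other.
    def __init__(self):
        self.components = {}

    def register(self, name, component):
        self.components[name] = component
        component.controller = self

    def dispatch(self, sender, message):
        # Route a message from one component to the component named in "dest".
        target = self.components.get(message["dest"])
        if target is not None and target is not sender:
            target.handle(message)

class Scanner:
    controller = None
    def found_access_point(self, ap):
        # The scanner knows nothing about storage; it only informs the controller.
        self.controller.dispatch(self, {"dest": "storer", "payload": ap})

class Storer:
    controller = None
    def handle(self, message):
        print("storing", message["payload"])

controller = Controller()
scanner, storer = Scanner(), Storer()
controller.register("scanner", scanner)
controller.register("storer", storer)
scanner.found_access_point({"bssid": "aa:bb:cc:dd:ee:ff"})

Replacing or extending a component then only means registering a different object under the same name; nothing else has to change.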

Difficulties I met

It was hard to work on the app while having no access to the backend. I had to test all the APIs while analyzing the old app, which is not nicely documented or implemented either. Furthermore, the backend is quite unstable and sometimes unreachable. Another problem was testing: since the app works with collected WiFi access points, testing and debugging at home became very hard.

Future plan

There are still some points of app performance I want to optimize further. I have already started working with the Android sensors to detect movement, in order to scale the scan interval more effectively and save resources, since WiFi scanning and GPS are two of the services that drain the phone battery the most.

The app is currently only in development mode since I do not yet have a Google Play Store account, but as soon as I do, I will release it. Until then, if you want to try it, an .apk can be downloaded here [5].

Acknowledgement

Many thanks to the Freifunk community and my mentor Jan-Tarek Butt for this amazing opportunity. Even though there are still some small things to do and fix, I am glad that a new wardriving app is coming soon for openwifi.su. Many thanks to the Google Summer of Code team for making this happen.

[0] https://blog.freifunk.net/2018/05/14/introduction-openwlanmap-app/

[1] https://blog.freifunk.net/2018/06/10/openwlanmap-app-update-1/

[2] https://blog.freifunk.net/2018/07/09/openwlanmap-app-update-2/

[3] https://github.com/openwifi-su/OpenWLANMap-App

[4] https://developer.android.com/about/dashboards/

[5] https://androidsmyadventure.wordpress.com/2018/06/03/openwlanmap/

 


13:00

Meshenger – P2P local network messenger – final update

Meshenger is meant to be an open-source P2P audio and video communication application that works without centralized servers, and thus without a connection to the Internet, does not need DHCP servers, and can be used in LAN networks such as Freifunk community networks.

It was brought to life to demonstrate uses of such networks beyond simple Internet access, as well as to explore the decentralized use of WebRTC in conjunction with IPv6.

I spent the last few weeks polishing and improving my project, getting it to a usable and stable state.

An APK with version 1.0.0 can be found here, as well as the whole source code.

In the last month I fixed some bugs, like a wrong serialization of IPv6 addresses, making the phone ring even with the screen off, preventing duplicate contact entries, preventing the app from freezing, and some more.

Of course, the app gained some new features, including a 'settings' page with the language, the username etc., additional information for each contact, the possibility to share contacts through third-party messengers or a QR code, ignoring calls from unsaved contacts, and several more.

Oh, and if you suddenly dislike someone, you can now simply delete him.

Settings / Contact options

The app now has an ‘about’ page containing some meta-data about Meshenger as well as the license:

About page

 

I extracted a lot of hard-coded strings in order to make it easier to translate the app into different languages.

For the future, it is planned to implement profile photos, file transfer and asynchronous messaging.

All in all, I would conclude that Meshenger was a successful project and reached most of its goals.

It gave me the chance to dive into new subjects and learn a lot about VoIP and IPv6 as well as get to know the Freifunk community and learn about other interesting ideas.


07:53

A module for OLSRv2 to throughput estimation of 2-hop wireless links

Hi to community members!

Here is the final report! In this project, we introduced throughput estimation strategies in OLSRv2-based networks. We followed two strategies: the first relies on iperf3, the second on packet timestamping.

We prototyped the iperf3 strategy in PRINCE. The basic idea is that each node runs an iperf3 server, and a node can estimate the throughput towards a neighbor by running an iperf3 measurement. The code is available at https://github.com/pasquimp/prince/tree/iperf.

We set up an emulation environment in CORE and tested PRINCE with the iperf client/server there. We built a simple three-node topology (n1, n2, n3) where n1 is directly connected (within wireless coverage) to n2 and n2 is directly connected to n3; n1 reaches n3 through OLSR. The estimated neighbor throughput at IP level is about 43 Mbps on a physical link of 54 Mbps (the figure shows the throughput estimated from n2 towards n1).

In order to introduce a lightweight measurement strategy (without an additional server process), we worked on an OONF plugin for throughput estimation based on packet timestamps. The code is available at https://github.com/pasquimp/OONF/tree/neighbor-throughput. The basic idea is that the plugin sends a pair of probe packets towards each neighbor. The neighbor can estimate the throughput from the difference between the reception times of the second and the first probe: probe-size / (t2 − t1).

We tested the plugin in our CORE environment. Unfortunately, the reception times of the probe packets as seen by the plugin do not fit our needs, since the two probes arrive about 20 µs apart, which leads to an overestimated throughput close to a Gbps on a 54 Mbps link.
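To make the numbers concrete, here is the packet-pair arithmetic in a few lines of Python, with an assumed probe size of 1500 bytes (the plugin's actual probe size may differ):

PROBE_SIZE_BITS = 1500 * 8        # assumed probe size of 1500 bytes

def estimated_throughput(t1, t2):
    # Packet-pair estimate: probe size divided by the inter-arrival time.
    return PROBE_SIZE_BITS / (t2 - t1)

# Reception times only 20 microseconds apart, as observed in the plugin:
print(estimated_throughput(0.0, 20e-6) / 1e6)   # 600 Mbps, far above the 54 Mbps link

# On a 54 Mbps link, two back-to-back 1500-byte frames should be spaced by roughly
# the serialization time of one frame:
print(PROBE_SIZE_BITS / 54e6 * 1e6)             # about 222 microseconds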

We experimented with taking socket timestamps in the reception phase (which required several changes in the OONF socket code), but the results were mainly unchanged. An approach based entirely on oonf_rfc5444 (the messaging system used by the plugin) is therefore not accurate, due to possible delays or message manipulation in the sending phase. A reliable procedure in OONF would thus require a different messaging system, probably in both the transmission and reception phases.

Thank you for the opportunity, and thanks in particular to my mentors for their suggestions!


August 13 2018

23:37

GSoC 2018 – Kernel-space SOCKS proxy for Linux – Final

Short description

The original plan was a full kernel-space SOCKS proxifier, but that would be a little too complex for the goal: a faster TCP proxy. Then I found a very elegant solution for the problem: eBPF sockmap support. There is an API for redirecting packets between sockets in kernel space using a sockmap eBPF program. I decided to extend my shadowsocks-libev fork with eBPF support. The disabled encryption already gives some additional performance, so anyone already using this fork now has a new option to get even more.

The results

I did some performance and functional tests in my test environment (described here). The eBPF performance is fairly good compared to the regular version. I included two screenshots, one with and one without eBPF enabled. Sockmap support requires at least a 4.14 kernel! (The iperf3 tests were performed on a 4.14.41 MPTCP-capable kernel with one network path.) The host1 machine is connected to the ubu1 machine and uses it as a router to ubu2 (see the diagram here). For more information about using a machine as a router and transparent proxy, please take a look at this.

iperf3 -c ubu2                                            iperf -s
+-----------+              +------------+               +----------+
|           |              |            |               |          |
|   host1   +------------> |    ubu1    +-------------> |   ubu2   |
|           |              |            |               |          |
+-----------+              +------------+               +----------+
                          ss-redir --ebpf                 ss-server
eBPF dataplane disabled

 

eBPF dataplane enabled

The branch with eBPF sockmap support: https://github.com/SPYFF/shadowsocks-libev-nocrypto/tree/ebpf

Future work

Sadly, in practice there is an issue with the current version. For example, in a long FTP file transfer, it will skip some bytes at the end or push additional (duplicated) bytes to the receiver. That is because in the current kernel implementation, if an error happens during transmission, the packet size is sometimes returned instead of the error code. I consulted John Fastabend (the creator of the eBPF sockmap) about the issue and he told me he will send a patch for it, which should be merged into the kernel soon. After that, if everything works fine, I will put the whole work into an OpenWrt package.

Links


20:23

GSoC 2018: qaul.net changes and experiences (final report)

This is my final report for Google Summer of Code, working on the userspace, backend-agnostic routing protocol for qaul.net. Alternatively entitled: how not to go about writing a userspace, backend-agnostic routing protocol (in general).

The work that was done

If you’ve been reading my first three blog posts, you will know that we had some issues designing and coming up with plausible ways for such a routing core to interact with network layers. The biggest challenge is the removal of ad-hoc WiFi mode from Android, which means an app needs root to provide its own kernel module for that. Before specifying what I would be working on this summer we had a very idealised view of what a routing protocol could depend on, made a lot of assumptions about the availability (i.e. connection reliability) and usability of WiFi Direct, and were negatively surprised when we ran into various issues with both the WiFi Direct standard and its implementation.

Secondly, I built a prototype that uses Bluetooth mesh networking to allow multiple phones to communicate, which has given us much better results from the beginning. Connections are more stable, although their range is more limited than with WiFi. It does, however, come with the benefit of power saving.

These prototypes will serve as a good base to play around with larger networks and more devices, but they won’t end up being part of the qaul.net code base. Relatively little of the code written will remain in qaul.net. There is the routing-core repository, which provides a shim API between a generalized routing adapter and a generic network backend that can be Bluetooth, WiFi Direct, or even ad-hoc or Ethernet. We ended up not focusing on this code very much because there were too many open questions about the technologies at hand to proceed with confidence.

The code for both prototypes is available here; the routing core shims can be found here.

What wasn’t done

We didn’t end up writing a userspace, network-agnostic routing protocol along the lines of B.A.T.M.A.N. V. This is very unfortunate and probably comes down to the fact that, when Summer of Code started, we had only worked with WiFi Direct in theory, making a lot of assumptions that were ultimately wrong (and based on the way ad-hoc works).

The next steps

We will proceed with Bluetooth meshing as our primary network backend, where we still have to figure out a few questions about the captive portal functionality, how to subdivide a network into smaller chunks, and how moving between subnetworks will work. Bluetooth meshing isn’t exactly made for what we’re trying to do, but it’s a close approximation.

When it comes to the actual qaul.net code, we need to write a Bluetooth mesh adapter which plugs into the routing core, at which point we can start testing the protocol layout that we designed and work on the actual routing heuristics. The groundwork for this is largely done, based mostly on the BATMAN protocol documentation.

Acknowledgements

I want to thank the Freifunk organisation and community, my mentor Mathias who worked with me on figuring out how to get around the problems we encountered. We managed to get a good step closer to moving qaul.net away from adhoc networking, even though we didn’t reach all the goals we set out to. Finally, I would like to thank Google for the Summer of Code and its efforts during all these years and for its commitment to the development of open source software.


20:06

The Turnantenna – Final evaluation update

We are at the end of the journey. Today is the last day of the 2018 version of the Google Summer of Code.

So, here is what I have done during this month of hard (and hot) work!

State Machine

The state machine presented in the previous article has evolved into a newer and more complete version. The whole machine is defined through the following states and transitions:

# In controller.py

class Controller(object):
    states = ["INIT", "STILL", "ERROR", "MOVING"]
    transitions = [
        {"trigger": "api_config", "source": "INIT", "dest": "STILL", "before": "setup_environment"},
        {"trigger": "api_init", "source": "STILL", "dest": "INIT", "after": "api_config"},
        {"trigger": "api_move", "source": "STILL", "dest": "MOVING", "conditions": "correct_inputs",
         "after": "engine_move"},
        {"trigger": "api_move", "source": "STILL", "dest": "ERROR", "unless": "correct_inputs",
         "after": "handle_error"},
        {"trigger": "api_error", "source": "STILL", "dest": "ERROR", "after": "handle_error"},
        {"trigger": "engine_reached_destination", "source": "MOVING", "dest": "STILL",
         "before": "check_position"},
        {"trigger": "engine_fail", "source": "MOVING", "dest": "ERROR", "after": "handle_error"},
        {"trigger": "error_solved", "source": "ERROR", "dest": "STILL", "after": "tell_position"},
        {"trigger": "error_unsolved", "source": "ERROR", "dest": "INIT", "after": ["reconfig", "tell_position"]}
    ]

There are not many differences from the older graph but, behind the appearance, there is a lot of work. Now every arrow corresponds to a series of defined actions, and the scheme is implemented as a real working program.

The structure of the Turnantenna’s brain

During the last week I worked on refactoring all the work done up to that point. The final code is available in the new dedicated “refactor” branch on GitHub.

The state machine above is implemented in the main process, which communicates with two other processes: the engine driver and the RESTful server.

# In turnantenna.py

from multiprocessing import Process, Queue
from controller import Controller      # import the states machine structure
from stepmotor import engine_main      # import the engine process
from api import run                    # import the api process

def main():
    engine_q = Queue()
    api_q = Queue()
    api_reader_p = Process(target=run, args=(api_q, ))
    engine_p = Process(target=engine_main, args=(engine_q, ))
    controller = Controller(api_q, engine_q)                  # start the SM

    api_reader_p.start()                                      # start the api process
    engine_p.start()                                          # start the engine process
    controller.api_config()

The processes communicate with each other through messages in the queues. Messages are JSON and have the following format:

{
    'id': '1',
    'dest': 'controller',
    'command': 'move',
    'parameter': angle
}

The “id” key is needed in order to control more than one engine; this is useful for future upgrades. “dest” specifies the process that should read the message and avoids wrong deliveries. “command” is the central content of the message, while “parameter” contains detailed (optional) information.

Processes are infinite loops, where the queues are checked continuously. An example of this loop is:

# In api.py

from queue import Empty

while True:
    try:
        msg = queue.get(block=False)
        if msg["dest"] != "api":
            queue.put(msg)       # send back the message
            msg = None
    except Empty:
        msg = None

    if msg and msg["id"] == "1":
        command = msg["command"]
        parameter = msg["parameter"]
        if command == "known_command":
            # do something

API

In order to interact with the Turnantenna, I defined three methods: get_position(), init_engine() and move().

It is possible to call them through HTTP requests. A JSON body needs to be attached to the request in order to make things work: the API needs some critical data, e.g. the id of the specific engine targeted, or a valid angle value to move the engine by that amount. If the request comes without a JSON body, or with a wrong one, the RESTful service responds with error 400.

Here is an example of the input checks:

from flask import abort, request   # assuming a Flask-based RESTful server

if not request.json or 'id' not in request.json:
    abort(400)
id = request.json['id']
if id != '1':            # still mono-engine
    abort(404)

For the moment the system works with only one engine, but in the future it will be very simple to handle more motors:

...
# if id != '1':
if id not in ('1', '2'):
    abort(404)
...

Final results

In these months we started from an idea and a basic implementation, and we built up a complete system ready to be tested. You can see the Turnantenna logic run by cloning the Turnantenna code from GitHub at Musuuu/punter_node_driver/tree/refactor.
Following the instructions in the readme file, you can run the turnantenna.py file and observe how it reacts to HTTP requests made with curl.
The full documentation of the project can be found at turnantenna.readthedocs.io.
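As a usage illustration, the same kind of request could also be sent from Python; the host, port and route below are placeholders (check the readme for the real ones), and any field besides "id" is an assumption for this example.

import requests

payload = {"id": "1", "angle": 45}   # "id" is required by the API; "angle" is assumed here
response = requests.post("http://localhost:5000/move", json=payload)   # hypothetical URL
print(response.status_code, response.json())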

We are proud of the work done, and we’re ready to implement the whole system onto the hardware and make the Turnantenna turn!


19:49

DAWN – Final Post

So did I achieve my aims with DAWN?

GSoC Aims

  1. Simple installation
  2. All patches upstream
  3. Configuration of the nodes should be simplified
  4. Visualize the information of the participating nodes
  5. Improve the controller functionality by adding mechanisms like channel interference detection and other useful features

1 and 2:


Everything is upstream!
All hostapd patches are merged. I even added a patch that extends the hostapd ubus functionality.
The iwinfo patches are merged too, although in the end the other developer's patch, which contained my patch #1210, was the one accepted.
You can now simply add the feed and compile DAWN.

3 and 4:

I added a LuCI app called luci-app-dawn, where you can configure the daemon. If you do this, the daemon configuration is sent to all participating nodes, so you don't have to change the config on every node.
Furthermore, you can see in the app all participating WiFi clients in the network and the corresponding WiFi APs, as well as the hearing map for every client.

 

5:

So I’m still refactoring my code. Some code snippets are ugly. :/
I read up on 802.11k and 802.11v.
802.11v is very interesting for DAWN. It would allow DAWN a better handover for the clients: instead of disassociating the client, the client can be guided to the next AP using a BSS Transition Management Request frame.
This request can be sent by an AP or station (?) in response to a BSS Transition Management Query frame, or autonomously.

I want to send this request autonomously instead of disassociating clients, if they support 802.11v.
For that I would set the Disassociation Timer (the time after which the AP disassociates the client if it has not roamed to another AP) and add another AP as a candidate. Furthermore, I should enable 802.11r for fast roaming…
If you want to play around with 802.11v you need a full hostapd installation and have to enable BSS transition in the hostapd config.

bss_transition=1

A station indicates in its association frame whether it supports BSS transition when associating with an AP.
My plan is to extend the hostapd ubus call get_clients with this information, like it is already done for the 802.11k flags.
After that I need a new ubus call in which I build such a BSS Transition Management Request, like it is done in the neighbor report ubus call.
I found a patch on a mailing list that adds a function to build such a BSS transition frame in an easy way.

wnm_send_bss_tm_req2

Sadly, it was never merged. The 802.11v implementation can be found in hostapd.

Furthermore, I could use 802.11k to ask a client to report which APs it can see. This is a better approach than collecting all the probe entries. The hearing map is very problematic, because clients do not continuously scan in the background (or don't scan at all), and a client can move around. A typical question is how long a probe entry can be considered valid. If the validity window is set too large and the client moves around, it cannot leave the AP although its RSSI is very bad (and a bad RSSI is the worst thing you can have!). A bad RSSI can trigger the client's internal roaming algorithm, so the client keeps trying to roam to another AP and gets denied, because there is still a hearing map entry with a very good RSSI; but that entry is no longer valid, because the client moved quickly.
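As a rough illustration of that validity-window trade-off (a Python sketch only; the field names, the 60-second window and the decision logic are invented for this example and are not DAWN's actual implementation):

import time

PROBE_VALID_SECONDS = 60.0   # assumed validity window for a hearing-map entry

hearing_map = {}             # (client_mac, ap_id) -> (rssi, timestamp)

def record_probe(client_mac, ap_id, rssi):
    hearing_map[(client_mac, ap_id)] = (rssi, time.time())

def best_ap_for(client_mac):
    # Pick the AP with the strongest *still valid* probe entry for this client.
    now = time.time()
    candidates = [
        (rssi, ap_id)
        for (mac, ap_id), (rssi, ts) in hearing_map.items()
        if mac == client_mac and now - ts <= PROBE_VALID_SECONDS
    ]
    # With a window that is too large, an old but strong entry can still win here,
    # even though the client has long since moved away from that AP.
    return max(candidates)[1] if candidates else None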

My Merged Pull Requests:

My Open Pull Requests:

My Declined Pull Requests:


16:37

GSoC 2018 – Better map for nodewatcher (Final update)

Hello everyone,

In my last update I presented solutions for most of the goals that I set in my first post. There was still one feature to implement, and I worked hard to have it finished in time for GSoC.

Problem

The last feature I am talking about is the ability to show recently offline nodes on the map. This was the hardest part to implement but also the most important, because with it you can see which nodes are offline and need maintenance, and exactly where they are located. Until now there was only an email alert system, but it sent out an email for every change to a node. There was no filtering option and it did this for every node, so the inbox would get cluttered really fast. With this feature you get a list of all nodes that went offline in the past 24 hours, and that list is updated alongside the map.

Solution

In my last post I talked about adding a sidebar with a list of all nodes that are currently online and shown on the map. So I added a new tab for the recently offline nodes. The hardest part of adding this was that I had to use nodewatcher's API v2, which was still in development and has not been fully documented. I still wanted to use it because in the newest nodewatcher version every API v1 request will be replaced by v2; this way there will be less work in the future. I also took some time to document everything I have learned about it: the document contains everything I was able to gather from the nodewatcher code, with examples of how to use it. In the picture below you can see how the sidebar currently looks, along with the list of recently offline nodes. It has the same functionality as the online node list, such as the search bar, the option to show the selected node on the map and to go to that node's page.
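Purely as an illustration of the filtering idea (the node structure and field names below are made up for this sketch; the actual nodewatcher API v2 schema is different):

from datetime import datetime, timedelta, timezone

def recently_offline(nodes, hours=24):
    # Nodes that are down now but were last seen within the given window.
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return [n for n in nodes if n["status"] == "down" and n["last_seen"] >= cutoff]

nodes = [
    {"name": "node-1", "status": "down",
     "last_seen": datetime.now(timezone.utc) - timedelta(hours=3)},
    {"name": "node-2", "status": "up",
     "last_seen": datetime.now(timezone.utc)},
]
print([n["name"] for n in recently_offline(nodes)])   # ['node-1']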

What’s next?

GSoC has provided me with a unique opportunity to work on a large-scale open source project, and I have learned a lot in the past three months, mostly about time management and not putting too much on my plate. It was truly an experience that will help me later in life. I will for sure work on other open source projects and continue my work with nodewatcher, because I have analysed and figured out most of the code. It would be a shame to just let that knowledge go and move on to another project before being sure that someone else takes over and continues the work.

Important links:

Freifunk blog posts:

https://blog.freifunk.net/2018/05/14/gsoc-2018-better-map-for-nodewatcher/

https://blog.freifunk.net/2018/06/11/gsoc-2018-better-map-for-nodewatcher-1st-update/

https://blog.freifunk.net/2018/07/09/gsoc-2018-better-map-for-nodewatcher-2nd-update/

Github pull requests:

Main map code: https://github.com/wlanslovenija/nodewatcher/pull/69

API v2 documentation: https://github.com/wlanslovenija/nodewatcher/pull/70

 


16:34

nodewatcher: Build system rework and package upstreaming – Final update

Hi, everybody.

This is my last post regarding my GSoC project for 2018.
My work can be found here:

A quick summary of what this project was about: move away from building nodewatcher-supported imagebuilders from source and instead use upstream-provided OpenWrt imagebuilders. Also, build our custom, not yet upstreamed packages using the OpenWrt SDK.

Current status

Part of the code was merged into the relevant wlanslovenija repositories, but most of it is still waiting to be merged.

Nodewatcher

Various fixes to make nodewatcher run on newer kernels and distributions like Ubuntu 18.04 were merged into the main branch of the wlanslovenija/nodewatcher repository.
This includes fixes for known issues with the newer pip tool as well as for multiple packages with new names.
Still to be submitted is an update of the various Python packages, which are currently outdated; this is waiting for thorough testing.

firmware-packages-opkg

firmware-packages-opkg is the wlanslovenija repository with the custom packages used by nodewatcher, such as Tunneldigger.
A big part of the changes is already merged into wlanslovenija/firmware-packages-opkg.
This was the first big cleanup in a long time: a lot of packages that were not used, and a lot of those that relied on custom patches, were dropped.
I manually verified that packages with custom patches had those patches upstreamed before they were dropped; this now enables us to use new iwinfo versions that include many fixes.
It also enables compiling packages such as curl with GCC 7.3.
Fixes for packages refusing to compile or with dead sources are currently waiting in my tree on GitHub.

firmware-core and the build process

firmware-core is the wlanslovenija repository where all files pertaining to building nodewatcher-compatible, Docker-based imagebuilders are located.
This repository received the bulk of my effort.
The existing code was almost completely dropped or significantly reworked, which in the end resulted in removing 3,964 lines of code while adding only 255.
This significantly reduces the maintenance burden, as almost no maintenance is needed except for adding or removing required Ubuntu packages in our Docker images.

Big changes that were made are:

  • LEDE and OpenWrt are remerged in our build process.
  • Building of old OpenWrt versions prior to 17.01 (CC 15.05 etc.) was completely removed.
    This was unnecessary and only caused legacy code to stick around.
    There is no reason to use OpenWrt Chaos Calmer or even older versions now that OpenWrt and LEDE have merged.
    Those versions have numerous known exploits that have been fixed in 17.01 and now in 18.06.
  • Both our build and runtime Docker base images now use Ubuntu 18.04 instead of the old 14.04.
    This lets us fully utilize the fact that OpenWrt uses GCC 7.3 as the default compiler, since Ubuntu 18.04 finally ships with it as default too.
    The size of the base image has shrunk, because fewer unnecessary packages are shipped with it.
  • We now use imagebuilders provided by the upstream OpenWrt project.
    This significantly reduces the build time, as most of the packages and the whole toolchain are not built anymore.
    The fact that we can no longer patch the sources with custom patches does not matter, as we were not using any important patches.
    Unfortunately, because most of the packages needed for nodewatcher to function are custom written and were never upstreamed, we still need to build them ourselves.
    Thankfully, upstream OpenWrt provides an SDK next to the imagebuilders; it is meant for exactly what we need, building packages only.
    It provides an already built toolchain and all of the tools needed, which saves a lot of time, but since our packages have a lot of dependencies it still takes some time to build them.
    The packages are then simply copied into the imagebuilder and we manually trigger regeneration of the package index, because that index is used to generate metadata so nodewatcher knows which packages, in which versions, are inside each imagebuilder. This enables configuring packages on a per-version basis.
    Since we can now easily download all of the community packages, we don't have to compile them in like we did so far.
    This completely removes the need for us to maintain package mirrors. In the end, this has reduced the time needed for each target by around 3-4 times.
  • The configuration of the build process, as well as its complexity, was greatly reduced.
    There is no more need for many Dockerfiles and per-target configuration.

Currently, these changes have not yet made it into the main repository but sit in my repository on GitHub.
A pull request for merging all of these changes has been created and approved, so they should be merged rather soon.
It can be found here: PR to wlanslovenija

Future

I did not have time to do all of the things I wanted.
This is mainly upstreaming as many of our packages as possible, as they are the biggest time consumer during building.
This will be dealt with after GSoC.

nodewatcher needs to be updated for the LEDE and OpenWrt merge, as we have some checks that enable more advanced features only on LEDE because OpenWrt did not have them at the time.
This will be dealt with after GSoC too.

I also wanted to add some new features to our imagebuilders, but since I hit a lot of bugs and unexpected issues during development I did not have time for these; like the previous two points, this will be dealt with after GSoC.

So to sum this up, this was a really good experience.
I got to focus on two things I enjoy working on: FOSS software and OpenWrt.
It enabled me to learn a lot about the inner workings of nodewatcher, the OpenWrt imagebuilder and especially the OpenWrt SDK.

Thanks to Google for organizing GSoC and to Freifunk for enabling me to give back to the community in a useful way.
And special thanks to my mentor Valent Turković.

Best regards
Robert Marko


July 09 2018

20:34

A module for OLSRv2 to throughput estimation of 2-hop wireless links

Hi to community members!

In the phase 2 period, we set up an emulation environment in CORE and tested PRINCE with the iperf client/server (https://github.com/pasquimp/prince/tree/iperf) there. We built a simple three-node topology (n1, n2, n3) where n1 is directly connected (within wireless coverage) to n2 and n2 is directly connected to n3; n1 reaches n3 through OLSR. The estimated neighbor throughput at IP level is about 43 Mbps on a physical link of 54 Mbps (the figure shows the throughput estimated from n2 towards n1).

We also tested the initial version of the OONF plugin in CORE (https://github.com/pasquimp/OONF/tree/neighbor-throughput). The plugin is now able to send a pair of probe packets towards each neighbor and to read the reception times of the packets. I am now investigating a problem in the reception of the probes.

In the next weeks we will perform further tests with PRINCE with iperf and with the OONF plugin to resolve the problems in the reception phase, and then we will perform timestamp-based throughput estimation in order to compare the results obtained with PRINCE/iperf and with the OONF plugin. We will update you in the coming weeks!


20:15

GSoC 2018 – Kernel-space SOCKS proxy for Linux – July progress

What we have so far

Last month I introduced my test setup intended for fast kernel trials and network development. After that I updated my shadowsocks-libev fork to version 3.2.0, the latest upstream stable release. This fork does not do any encryption, which is less secure but faster, and in our new approach we can put the data plane into the kernel (because we cannot do any data modification in userspace).

Possible solutions

The same problem has emerged in a different environment recently: the cloud/datacenter scope. In the cloud, transmission between containers (like Docker) happens exactly like in our SOCKS proxy case: from user space to the kernel, back to user space (through the proxy), back to the kernel, and to user space again. Lots of unnecessary copies. There was an attempt to solve this: kproxy. This solution works pretty well, but it has two drawbacks: it is not merged into the kernel (the main part is a module, but it also modifies kernel headers), and in my tests it is slower than a regular proxy with the extra copies. Sadly I don't know the exact cause, but my loopback tests on a patched 4.14 kernel were about 30% slower than with a regular proxy.

As far as I know, kproxy is currently not in development, because with TCP zero-copy there is a better solution, zproxy, but it has not been released yet. However, part of the original kproxy code has already been merged into the kernel as part of the eBPF socket redirect function: https://lwn.net/Articles/730011/
This is nice because it is standard and already in the vanilla 4.14 kernel, but it is a bit more complicated to instrument, so I will test it later.

The backup solution, if none of these works, is to try a netfilter hook with the skb_send_sock function, but that version is very fragile and hacky.


18:43

GSoC 2018 – Ground Routing in LimeApp – 2nd update

Hello! In this past month I have been working on the validation of the configuration in both the frontend and the backend.

Basically, it confirms that the minimum parameters needed to generate the basic configuration are selected and are of the corresponding type. The validation is done twice because the ubus module can be used by other applications in the future, and this way its correct use is guaranteed, while validation in the frontend allows a faster response to the user.
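As a rough sketch of that kind of check (the parameter names and expected types below are invented for illustration; the real module validates its own ground-routing options):

REQUIRED = {"interface": str, "vlan_id": int, "protocol": str}   # assumed fields

def validate(config):
    # Return a list of problems; an empty list means the configuration is acceptable.
    errors = []
    for key, expected_type in REQUIRED.items():
        if key not in config:
            errors.append("missing parameter: " + key)
        elif not isinstance(config[key], expected_type):
            errors.append(key + " must be of type " + expected_type.__name__)
    return errors

print(validate({"interface": "eth0", "vlan_id": "12", "protocol": "static"}))
# ['vlan_id must be of type int']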

View for LuCI

While doing all this I started to develop the basic view for LuCI; although the goal of GSoC is to develop the view for the LimeApp, I can do both by reusing much of the code. In the next few days I will upload some screenshots.


15:48

GSoC 2018 – Better map for nodewatcher (2nd update)

Hello everyone,

I am very happy to say that since my last update I was able to implement most of the features that I have talked about and was able to test them with real data.

In the last update I talked about how I started my own local Leaflet map with which I wanted to test every feature before implementing it. While doing that I also needed to go through most of the nodewatcher code to see how the map is generated. The problem here was that nodewatcher uses Django templates and many custom scripts placed in multiple locations. It took some time to figure out what each part was doing, because the map was made at the start of nodewatcher and wasn't documented well. So this took most of my time, but after I figured out where everything was I was able to start implementing most of my code.

The implementation went surprisingly fast, so I was able to test everything on my own nodewatcher server that I set up at the beginning of GSoC. The only problem was that I didn't have any nodes to see on my map. I was able to work around this by redirecting my API call to gather node data from the nodes.wlan-si.net server, the wlan slovenija nodewatcher server, which has over 300 active nodes. In the pictures below you can see what I have currently implemented:

  • The fullscreen map option
  • A popup with some general information about the node that appears when you click on it; clicking the name in the popup takes you to that node's page
  • A sidebar that gives you a list of all currently online nodes, with a search bar and the ability to show each one on the map.

The next thing for me is to try to implement one more feature: the ability to see nodes that have gone offline in the past 24 hours. I say try because I have already looked into it, and the problem is that the current API doesn't have a filtering option, so I can't get only the nodes that have the location parameter set. I will also focus on writing good documentation, because that is something nodewatcher is currently lacking and it would have helped me a lot.


15:00

LibreNet6 – update 2

This is a quick update on my work on LibreNet6 and LibreMesh in the last weeks. The exam period in Tokyo started and I had a cold, which slowed me down a bit; once both have passed I will focus on the project with doubled concentration again!

Multiple servers

The approach of using Tinc allows more than one IPv6 server, making it possible to connect the servers of multiple communities with different IPv6 subnets. Babeld automatically detects where to route traffic when using external subnetworks. This is fortunate, as there may be high latency between a mesh gateway and an IPv6 server, which would slow down traffic. Using Tinc and babeld, I ran a setup with two mesh gateways using two different IPv6 subnets. While pings to the other network had high latencies at first (me in Tokyo, one IPv6 server in London and one in Argentina), Tinc automatically exchanged the IPv6 addresses of the mesh gateways, which could then connect directly, lowering the latencies. Summarizing this experiment: using Tinc makes the network independent of the public IPv6 addresses.

No lime-app plugin

Initially I thought of creating a lime-app plugin that lets users easily request access to a Tinc mesh. However, after an evaluation with my mentor and reading more about Tinc, we decided against it: the new 1.1 release of Tinc not only simplifies joining a mesh by offering the invite and join commands, but also allows all configuration to be done automatically with the help of an invitation file. These new features simplify the project much more than I thought when following the Spanish documentation on Altermundi.

Adding some security

As some parts were easier than expected, I thought of looking into additional tasks for the project. Currently the use of babeld requires all users of the mesh to fully trust one another, as babeld does not provide any security (that I could find) regarding announced routes. Mesh routing with security is offered by BMX7, which introduces a model to set (dis)trust between nodes. For this reason I have been in contact with Axel Neumann, the developer of BMX7, to fix a long-standing error in OpenWrt which led to wrong MTU values in BMX7. The fix was merged upstream and thereby allows testing BMX7 over Tinc as a secure babeld alternative.

English documentation

Besides the experiments, I have started to translate (and simplify) the Spanish documentation of LibreNet6 and will upload it to the GitHub repository once finished. An important part is also how to configure 6to4 tunnels, as surprisingly few VM providers offer IPv6 connectivity by default, but only a single public IPv4 address.


14:32

nodewatcher: Build system rework and package upstreaming – Second update

Hi,

The last weeks have been spent solely on reworking the build system.

First, it was a matter of rebranding the current LEDE back into OpenWrt and fixing a couple of hard-coded names that would cause issues with the OpenWrt name. It also involved dropping the old OpenWrt build system, which has not been used for years and most likely never will be again; that removes unnecessary code to maintain.

After rebranding, I spent some time verifying that the whole system still works.
Fortunately, there were only small bugs which were simple to fix.

And then came the main task of this project: to completely rework and massively simplify the whole imagebuilder build job, making it a lot easier and less resource intensive.

Firstly, since I was still going to use Docker images for the build environment, the base image (which is the actual build environment) needed to be updated from the old Trusty 14.04 to the fresh Bionic 18.04. This proved to be mostly trial and error, as a lot fewer packages are included by default in 18.04, so getting all dependencies working took a while. The base image now works fine and is relatively small, actually smaller than the 14.04 base image.
This is due to fewer unnecessary packages.

Once the base image was sorted out I finally got to work on dropping the unnecessary scripts, Dockerfiles and all of the hardcoded build files.

This proved to be not so hard, so work on a new Docker-based build system started.

So far it is broken into only four separate scripts:

  1. docker-prepare-build system: as its name hints, it builds the base image and installs the needed packages. I am still considering pulling this from the automatically built image on Docker Hub instead.
  2. generate-docker files: generates the temporary Dockerfiles needed for building inside a Docker 18.04 base image.
  3. docker-build: actually “builds” the imagebuilder and SDK.
  4. build: the main script, which simply calls the others to configure and build everything.

The number of scripts will most likely grow by one or two, since the built imagebuilder with all of the packages needs to be packaged and then deployed in a runtime-specific image which will only contain the bare minimum of packages, to keep it as lightweight as possible.

Currently, building works fine for most custom packages using the SDK, but it is stuck at building ncurses with a weird LC_TIME assertion error which I need to fix.

So the next period will be strictly for fixing the bugs and finishing the build system.
After that is done I will update the custom packages and try to get them upstreamed.


14:30

GSoC 2018 – DAWN a decentralized WiFi controller (2nd update)

Hi,
I am still trying to get my patches upstream.
For the libiwinfo patch I had to add the Lua bindings. I had never used Lua, so first I had to get comfortable with it. Additionally, I wanted to add the channel utilization to the LuCI statistics app, but suddenly LuCI is giving me a null pointer exception in the dev branch.


Additionally, I tried to get comfortable with LuCI for developing my own app.
Meanwhile another developer created nearly the same patch for iwinfo that adds the survey data for the nl80211 driver… This patch is still not accepted. The only difference is that it returns the survey data for all channels (like iw dev wlan0 survey dump)…
Furthermore, my pull request for the hostapd ubus bindings that adds information about the HT and VHT capabilities had to be rewritten (https://github.com/openwrt/openwrt/pull/898). Again I have to wait for some feedback. While rewriting this patch, I had a new idea: if you subscribe to hostapd via ubus and want to be notified of its messages, you have to activate that. It would be possible to add flags in hostapd_ubus_bss to select what information should be published via the ubus bus. Before doing so, I want some feedback on whether this is a good idea.

If somebody wonders why I am interested in the capabilities: I want to create a hearing map for every client. I am building this hearing map from probe request messages, which contain information such as the RSSI, capabilities, HT capabilities, VHT capabilities, and so on. VHT gives clients the opportunity to transfer up to 1.75 Gbit/s (theoretically…). If you want to select an AP, you should consider capabilities… In the normal hostapd configuration you can even set a flag that forbids 802.11b rates. If you are interested in what happens when an 802.11b client joins your network, search for “WiFi performance anomaly”. 🙂

Summarizing, I spent a lot of time waiting for feedback, debugging, modifying my patches and replying on the mailing lists. It is a bit frustrating.
The cool part was that I received my first pull request. 🙂 (It was just a typo fix ^^) But somebody took the time to fork my project and create a pull request. 😉
Furthermore, it is exam time and I have a lot of stuff to do for the university.

Actually I wanted to move on to more interesting things, like connecting to the netifd daemon to get more information.

Or to look at PLC; there is an interesting paper, “EMPoWER Hybrid Networks: Exploiting Multiple Paths over Wireless and ElectRical Mediums”.

 


11:26

VRConfig Update 2

Hi,

I spent the last weeks mainly developing the LuCI application for VRConfig. As soon as you want to do advanced things with LuCI, it gets cumbersome.
As the API is mostly undocumented, you have to dig through LuCI's source code, trying out functions which could be useful according to their name.
It's a bit of a trial-and-error game.
Currently the LuCI app does the following:
it displays an image of the router and parses the JSON file which contains the locations of the components.
With this information it can mark the physical ports associated with the currently selected network interface and display the network ports which have a cable connected. You can also hover over the components and click on them, which leads you to their respective settings page.

I also improved the annotation app. It now lets you choose the router name from a list of all router models currently supported by OpenWrt. I got that list from the OpenWrt git repository with a series of grep and sed commands.
For your information, there are currently around 1100 different router models supported. 🙂

In the next weeks I will polish the LuCI application and try to integrate VRConfig into the OpenWrt build system, to be able to select the correct router image and JSON file at build time.


00:33

OpenWLANMap App: Update 2

Hi,

In the last weeks I was working on the storing process, as described in the architecture in the last blog post [0].

Storage Handler:

Old app: the old app saves the data as bytes in a file. A data entry is 28 bytes: a MAC address (12 bytes for 12 characters), latitude (8 bytes for a double) and longitude (8 bytes for a double). An entry can be saved more than once in the file. There are two files, one for data which should be updated and one for data which should be deleted from the backend.

New app: At first I wanted to adopt the structure of the old app. But since I saw some unreasonable points, such as redundant data, flash workload, maintenance problems and unstructured storage, I decided on a standard, structured and easy-to-maintain database: SQLite. I am also using the new persistence library Room, which provides an abstraction layer over the database. It was released last year as part of the Android Architecture Components, with a lot of bugs fixed since then, and it has many advantages when working with an SQLite database: queries are verified at compile time, and a lot of duplicate code is removed compared to the previous approach with DbHelper, etc. In order to store the access points in the database, I implemented a separate thread which reads data from a blocking queue and saves it to the database; it runs in parallel with the scan thread and is blocked when there is nothing in the queue to store. To save energy and not force the store thread to run the whole time, a list of access points is put into the blocking queue as one element. To prevent redundant data in storage, a data entry for a BSSID is not saved many times as in the old app, but only once: the BSSID is used as the primary key in the SQLite table, and the entry is updated the next time only if the received signal strength is better than the last entry in the database. An explicit transaction is implemented for this case, since Room only supports annotations for standard update/insert. To decide whether an access point should be deleted or updated on the backend, a flag is set.

Upload Handler:

The WifiUploader is in progress. I took a look at the uploading format of the old app and how it communicates with the current backend. The upload sequence is already defined: the scanning thread is interrupted, all remaining access points are stored, and the store thread is interrupted before the uploading process starts, to prevent a conflict where two threads try to access the same database at the same time. The WifiUploader also reads at most a fixed number of data entries from the database and uploads them, not the whole database at once like the old app but one batch after another, in order to prevent out-of-memory problems on devices with little RAM (see the diagram and the sketch below).

flowchart of uploading process
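A minimal sketch of that batched upload idea (pure illustration; the batch size and function names are assumptions, and the real app does this in Java with its own database layer):

BATCH_SIZE = 5000   # assumed maximum number of APs uploaded per request

def upload_in_batches(read_batch, upload):
    # Repeatedly read at most BATCH_SIZE entries and upload them, so the whole
    # database never has to be held in memory at once.
    while True:
        batch = read_batch(BATCH_SIZE)   # e.g. SELECT ... LIMIT 5000 on the database
        if not batch:
            break
        upload(batch)

# Example with in-memory stand-ins for the database and the backend:
data = ["ap-%d" % i for i in range(12000)]

def read_batch(n):
    batch = data[:n]
    del data[:n]
    return batch

upload_in_batches(read_batch, lambda batch: print("uploading", len(batch), "APs"))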

 

But since I am in the middle of my final exam period, there will be a small delay until this weekend before the WifiUploader is published. From next week on I will work full time on finishing the other features, including all resource-saving features such as adaptive scanning and all settings options. Clean and full documentation will be provided at the end as well.

Open issue: permission requests and handling.

[0] https://blog.freifunk.net/2018/06/10/openwlanmap-app-update-1/


July 08 2018

16:06

WiFi Direct and Bluetooth Meshing

As hinted in my last blog article, for us to really be able to move forward we needed to do some experimentation with the new technologies we have to adopt. The primary candidate at the beginning of that phase was WiFi Direct, a WiFi mode which is an official standard published by the Wi-Fi Alliance and is meant to replace ad-hoc WiFi mode. But only partially: WiFi Direct was mostly designed to make integration with IoT products easier. As such, using it for meshing applications is a bit outside its primary use case. The idea behind it is to make two WiFi devices talk to each other without needing a router as the middleman for negotiation and frequency selection. Even groups of devices can be created, electing a group leader that then manages the group.

Unfortunately…that sounded a lot better in theory than it turned out to be in practice.

As also hinted in my last blog post, we built some little prototype applications to test WiFi Direct between multiple devices and ran into some issues. The APIs provided by Android are okay to use, but not super convenient, and most of the issues come from bugs that we haven't been able to track down yet. The system WiFi Direct interface (System Settings > WiFi > Advanced > WiFi Direct) detects all devices in the vicinity, whereas our application, using the WiFi Direct interface in the Android SDK, would sometimes (nondeterministically) fail to detect devices or open sessions between them. We also had some bad experiences creating groups between the devices.

All in all… it was underwhelming. WiFi Direct really wasn't meant for the kind of networking we're trying to do with it, and even if we can figure out the bugs we encountered, there are other concerns to work out. Debugging these issues isn't easy, but there are a few things we can do. For one, there are other (open source) applications (Serval, Briar, …) that use this technology and that we can study to see how they solved these issues. There is also the option of capturing the packets transmitted between the two devices with Wireshark to get a better understanding of where the handshakes go wrong. Simple debugging via the Android/Java debugger unfortunately hasn't yielded many useful results.

We need a convenient way for people to join the network, and we need to figure out how to create a captive portal for people who connect without the software. There is also the handoff between a WiFi Direct network section and a legacy ad-hoc section that might be created between infrastructure nodes that don't support WiFi Direct. For the last week or so I've had my head in the WiFi Direct specification, trying to answer these questions, and while I think we have solved most problems, there are still a few left to answer.

The second technology we are investigating, to complement WiFi Direct wherever it isn't applicable, is Bluetooth P2P meshing. In contrast to WiFi Direct, it was actually developed for devices to mesh with each other, which makes adopting it easier for us in the long run. So far we've only done some simple experiments with two devices (due to a lack of Android devices in one location 😉 ), but these have been a lot more promising than what WiFi Direct has offered.

The biggest take-away from the last 2 weeks of experimentation is that we can’t dedicate the routing core to a single networking backend.

For the design of the actual code interface that I built in the first few weeks of GSoC, this means there are some adjustments to be made before writing more code. This includes being more generic when binding interfaces and allowing a client to use multiple backends at the same time, which was not part of the initial design. For the time being those interfaces will simply be mocked by some stub methods, or maybe a simple simulation, so we can test the actual routing algorithms. This is an interesting challenge because so many parts of qaul.net will have to change in lockstep with each other to make it all work.

There are some corner cases to test when it comes to Bluetooth mesh networking, such as groups and how they handle devices joining and leaving them.

