
July 09 2018

20:34

An OLSRv2 module for throughput estimation of 2-hop wireless links

Hi to community members!

In the phase 2 period, we set up an emulation environment in CORE and tested PRINCE with the iperf client/server (https://github.com/pasquimp/prince/tree/iperf) in CORE. We built a simple three-node topology (n1, n2, n3) where n1 is directly connected (within wireless coverage) to n2 and n2 is directly connected to n3; n1 is connected to n3 through OLSR. The estimated neighbor throughput at the IP level is about 43 Mbit/s on a physical link of 54 Mbit/s (the figure shows the throughput estimated from n2 towards n1).

We also tested the initial version of the OONF plugin (https://github.com/pasquimp/OONF/tree/neighbor-throughput) in CORE. The plugin is now able to send a pair of probe packets towards each neighbor and to read the reception time of the packets. I'm now exploring a problem in the reception of the probes.

In the next weeks, we will perform further tests of PRINCE with iperf and of the OONF plugin, resolve the problems in the reception phase, and then perform timestamp-based throughput estimation, in order to compare the results obtained with PRINCE plus iperf against those of the OONF plugin. We will update you in the coming weeks!
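To illustrate the timestamp-based idea (a minimal Python sketch, not the plugin's actual code): if the two probe packets are sent back to back, the link throughput can be approximated by dividing the probe size by the dispersion between their reception timestamps.

def estimate_throughput_bps(probe_size_bytes, t_rx_first, t_rx_second):
    # Packet-pair estimate: capacity ~ probe size / reception-time dispersion
    dispersion = t_rx_second - t_rx_first  # seconds
    if dispersion <= 0:
        raise ValueError("reception timestamps must be strictly increasing")
    return probe_size_bytes * 8 / dispersion

# Example: 1500-byte probes received 0.28 ms apart -> ~42.9 Mbit/s
print(estimate_throughput_bps(1500, 0.0, 0.00028))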


20:15

GSoC 2018 – Kernel-space SOCKS proxy for Linux – July progress

What we have so far

Last month I introduced my test setup for fast kernel trials and network development. After that I updated my shadowsocks-libev fork to version 3.2.0, the latest upstream stable release. This fork doesn't do any encryption, which is less secure but faster, and it enables our new approach: since we no longer modify any data in userspace, we can put the data plane into the kernel.

Possible solutions

The problem recently emerged in a different environment: the cloud/datacenter scope. In the cloud, transmission between containers (like Docker) happens exactly as in our SOCKS proxy case: from user space to kernel, then back to user space (through the proxy), then back to kernel, and to user space again. Lots of unnecessary copies. There was an attempt to solve that: kproxy. This solution works pretty well, but there are two drawbacks: it is not merged into the kernel (the main part is a module, but it also modifies kernel headers), and in my tests it is slower than the regular proxy with the extra copies. Sadly I don't know the exact cause, but my loopback tests on a patched 4.14 kernel were about 30% slower than a regular proxy.

AFAIK kproxy is currently not in development, because with TCP zero-copy there is a better solution, zproxy, but it is not released yet. However, some parts of the original kproxy code have already been merged into the kernel as part of the eBPF socket-redirect feature: https://lwn.net/Articles/730011/
This is nice because it is standard and already in the vanilla 4.14 kernel, but it is a bit more complicated to instrument, so I will test it later.

The backup solution, if none of these works, is a netfilter hook using the skb_send_sock function, but that version is very fragile and hacky.


18:43

GSoC 2018 – Ground Routing in LimeApp – 2nd update

Hello! This past month I was working on the validation of the configuration in both the front-end and the backend.

Basically, the goal is to confirm that the minimum parameters needed to generate the basic configuration are selected and are of the corresponding types. The validation is done twice because the ubus module can be used in the future by other applications, and in this way its correct use is guaranteed, while validation in the frontend allows a faster response to the user.
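As a rough illustration in Python (parameter names invented for this sketch; the real checks live in the ubus module and the frontend), the validation boils down to presence and type checks:

REQUIRED = {"protocol": str, "vlan": int, "interface": str}  # hypothetical fields

def validate(config):
    errors = []
    for key, expected_type in REQUIRED.items():
        if key not in config:
            errors.append("missing required parameter: " + key)
        elif not isinstance(config[key], expected_type):
            errors.append(key + " must be of type " + expected_type.__name__)
    return errors  # an empty list means the configuration can be generated

print(validate({"protocol": "bmx6", "vlan": "13"}))
# ['vlan must be of type int', 'missing required parameter: interface']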

View for LuCI

While doing all this I started to develop the basic view for LuCI; although the goal of GSoC is to develop the view for the LimeApp, I can do both by reusing much of the code. In the next few days I will upload some screenshots.


15:48

GSoC 2018 – Better map for nodewatcher (2nd update)

Hello everyone,

I am very happy to say that since my last update I was able to implement most of the features I talked about and to test them with real data.

In the last update I talked about how I started my own local Leaflet map with which I wanted to test every feature before implementing it. While doing that I also needed to go through most of the nodewatcher code to see how the map is being generated. The problem here was that nodewatcher uses Django templates and many custom scripts placed in multiple locations. It took some time to figure out what each part was doing, because the map was made at the start of nodewatcher and wasn't documented well. So this took most of my time, but after I figured out where everything was, I was able to start implementing most of my code.

The implementation went surprisingly fast, so I was able to test everything on my own nodewatcher server that I set up at the beginning of GSoC. The only problem was that I didn't have any nodes to see on my map. I worked around this by redirecting my API call to gather node data from the nodes.wlan-si.net server, which is the wlan slovenija nodewatcher server and has over 300 active nodes. In the pictures below you can see the things I have currently implemented:

  • The fullscreen map option
  • A popup with some general information about the node, shown when you click on it; by clicking the name in the popup you can go to that node's website
  • A sidebar that gives you a list of all currently online nodes, with a search bar and the ability to show each one on the map

The next thing for me is to try to implement one more feature: the ability to see nodes that have gone offline in the past 24 hours. I say try because I have already looked into it, and the problem is that the current API doesn't have a filtering option, so I can't fetch only the nodes that have the location parameter set. I will also mostly focus on writing good documentation, because that is something nodewatcher is currently lacking and it would have really helped me a lot.


15:00

LibreNet6 – update 2

This is a quick update on my work on LibreNet6 and LibreMesh within the last weeks. The exam period in Tokyo started and I had a cold, which slowed me down a bit; once both have passed I will focus on the project with doubled concentration again!

Multiple servers

The approach of using Tinc allows the use of more than one IPv6 server, making it possible to connect the servers of multiple communities with different IPv6 subnets. Babeld automatically detects where to route traffic when external subnetworks are used. This is fortunate, as there may be high latency between a mesh gateway and an IPv6 server, which would slow down traffic. Using Tinc and babeld, I ran a setup with two mesh gateways, each using a different IPv6 subnet. While pings to the other network had high latencies at first (me in Tokyo, one IPv6 server in London and one in Argentina), Tinc automatically exchanged the IPv6 addresses of the mesh gateways, which could then connect directly, lowering the latencies. Summarizing this experiment: using Tinc makes the network independent of the public IPv6 addresses.

No lime-app plugin

Initially I thought of creating a lime-app plugin which would make it easy to request access to a Tinc mesh. However, after an evaluation with my mentor and reading more about Tinc, we decided against it: the new 1.1 release of Tinc not only simplifies joining a mesh by offering the invite and join commands, but also does all the configuration automatically with the help of an invitation file. These new features simplify the project much more than I thought, following the Spanish documentation on Altermundi.

Adding some security

As some parts turned out to be easier than expected, I thought of looking into additional tasks for the project. Currently the usage of babeld requires all users of the mesh to fully trust one another, as babeld does not provide any security (that I could find) regarding announced routes. Mesh routing with security is offered by BMX7, which introduces a model to set (dis)trust between nodes. For this reason I've been in contact with Axel Neumann, the developer of BMX7, to fix a long-standing error in OpenWrt which led to false MTU rates in BMX7. The fix was merged upstream and thereby allows testing BMX7 over Tinc as a secure babeld alternative.

English documentation

Besides the experiments, I've started to translate (and simplify) the Spanish documentation of LibreNet6 and will upload it to the GitHub repository once finished. An important part is also how to configure 6to4 tunnels, as surprisingly few VM providers offer IPv6 connectivity by default, but only a single public IPv4 address.


14:32

nodewatcher: Build system rework and package upstreaming – Second update

Hi,

Last weeks have been spent solely on reworking the build system.

First, it was a matter of rebranding the current LEDE back to OpenWrt and fixing a couple of hard-coded names that would cause issues with the OpenWrt name. It also involved dropping the old OpenWrt build system, which has not been used for years and most likely never will be again; that removes unnecessary code to maintain.

After rebranding, I spent some time verifying that the whole system still works. Fortunately, there were only small bugs, which were simple to fix.

And then came the main task of this project: completely reworking and massively simplifying the whole image-builder job, to make it a lot easier to run and less resource-intensive.

Firstly, since I was still going to use Docker images for the build environment, the base image (which is the actual build environment) needed updating from the old Trusty 14.04 to fresh 18.04 Bionic. This proved to be mostly trial and error, as far fewer default packages are included in 18.04, so all dependencies had to be tracked down again. The base image is now working fine and is relatively small, actually smaller than the 14.04 base image, thanks to fewer unnecessary packages.

Once the base image was sorted out, I finally got to work on dropping the unnecessary scripts, Dockerfiles and all of the hardcoded build files.

This proved to be not so hard, so work on a new Docker-based build system started.

So far it’s broken into only 4 separate scripts:

  1. docker-prepare-buildsystem: as its name hints, it builds the base image and installs the needed packages. I am still considering pulling this from the auto-built image on Docker Hub instead.
  2. generate-dockerfiles: generates the temporary Dockerfiles needed for building inside a Docker 18.04 base image.
  3. docker-build: actually “builds” the image builder and SDK.
  4. build: the main script, which simply calls the others to configure and build everything.

The number of scripts will most likely grow by one or two, since the built image builder with all of the packages needs to be packaged and then deployed in a runtime-specific image which will contain only the bare minimum of packages, to keep it as lightweight as possible.

Currently, building works fine for most custom packages using the SDK, but it's stuck at building ncurses with a weird LC_TIME assertion error which I need to fix.

So the next period will be strictly for fixing the bugs and finishing the build system.
After that is done I will update the custom packages and try to get them upstreamed.


14:30

GSoC 2018 – DAWN a decentralized WiFi controller (2nd update)

Hi,
I am still trying to get my patches upstream.
For the libiwinfo patch I had to add the Lua bindings. I had never used Lua, so first I had to get comfortable with it. Additionally, I wanted to add the channel utilization to the LuCI statistics app. But suddenly LuCI is giving me a null-pointer exception in the dev branch.


Additionally, I tried to get comfortable with LuCI in order to develop my own app.

Meanwhile another developer created nearly the same patch for iwinfo that adds the survey data for the nl80211 driver… That patch has still not been accepted. The only difference is that it returns the survey data for all channels (like iw dev wlan0 survey dump)…

Furthermore, my pull request for the hostapd ubus bindings that adds information about the HT and VHT capabilities had to be rewritten (https://github.com/openwrt/openwrt/pull/898). Again I have to wait for feedback. While rewriting this patch, I had a new idea: if you subscribe to hostapd via ubus and want to be notified of its messages, you have to activate that. It would be possible to add flags in the hostapd_ubus_bss struct to select which information should be published via the ubus bus. Before doing so, I want some feedback on whether this is a good idea.

If somebody wonders why I am interested in the capabilities: I want to create a hearing map for every client, built from probe-request messages, which contain information like RSSI, capabilities, HT capabilities, VHT capabilities, … VHT gives clients the opportunity to transfer up to 1.75 Gbit/s (theoretical…). If you want to select an AP for a client, you should consider capabilities… In the normal hostapd configuration you can even set a flag that forbids 802.11b rates. If you are interested in what happens when an 802.11b client joins your network, search for: WiFi performance anomaly. 🙂 A sketch of the hearing-map idea follows below.
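To make the hearing-map idea concrete, here is a hedged Python sketch (DAWN itself is written in C; all names here are illustrative): every AP records the probe requests it overhears, so the controller knows which APs can "hear" each client and can pick one.

from collections import defaultdict

hearing_map = defaultdict(dict)  # client MAC -> {AP id: latest probe info}

def on_probe_request(ap_id, client_mac, rssi, ht_caps, vht_caps):
    hearing_map[client_mac][ap_id] = {"rssi": rssi, "ht": ht_caps, "vht": vht_caps}

def pick_ap(client_mac):
    # Naive policy: strongest RSSI wins; a real controller would also weigh
    # HT/VHT capabilities and channel utilization.
    heard_by = hearing_map[client_mac]
    return max(heard_by, key=lambda ap: heard_by[ap]["rssi"]) if heard_by else None

on_probe_request("ap1", "aa:bb:cc:dd:ee:ff", -70, True, False)
on_probe_request("ap2", "aa:bb:cc:dd:ee:ff", -55, True, True)
print(pick_ap("aa:bb:cc:dd:ee:ff"))  # ap2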

Summarizing, I spent a lot of time waiting for feedback, debugging, modifying my patches and replying on the mailing lists. It is a bit frustrating.
The cool part was that I received my first pull request. 🙂 (it was just a typo ^^) But somebody took the time to fork my project and create a pull request. 😉
Furthermore, it is exam time and I have a lot of work to do for university.

Actually, I wanted to move on to more interesting things, like connecting to the netifd daemon to get more information.

Or looking at PLC. There is an interesting paper: EMPoWER Hybrid Networks: Exploiting Multiple Paths over Wireless and ElectRical Mediums.

 


11:26

VRConfig Update 2

Hi,

I spent the last weeks mainly developing the LuCI application for VRConfig. As soon as you want to do advanced things with LuCI, it gets cumbersome.
As the API is mostly undocumented, you have to dig through LuCI's source code, trying out functions which could be useful according to their names.
It's a bit of a trial-and-error game.
Currently the LuCI app does the following: it displays an image of the router and parses the JSON file which contains the locations of the components.
With this information it can mark the physical ports associated with the currently selected network interface and display the network ports that have a cable connected. You can also hover over the components and click on them, which leads you to their respective settings pages.
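The exact schema of that JSON file isn't shown in this post; conceptually it maps components to pixel regions on the photo. A hypothetical parsed entry might look like this in Python (field names invented for illustration):

component_locations = {
    "model": "tl-wr841n-v11",          # hypothetical router model
    "components": [
        {"type": "port",    "name": "lan1", "x": 410, "y": 220, "w": 40, "h": 30},
        {"type": "port",    "name": "wan",  "x": 470, "y": 220, "w": 40, "h": 30},
        {"type": "antenna", "name": "ant0", "x": 60,  "y": 15,  "w": 25, "h": 90},
    ],
}

# The app can then highlight, say, the region of the port that belongs
# to the currently selected network interface.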

I also improved the annotation app. It now lets you choose the router name from a list of all router models currently supported by OpenWrt. I got that list from the OpenWrt git repository with a series of grep and sed commands.
For your information, there are currently around 1100 different supported router models. 🙂

In the next weeks I will polish the LuCI application and try to integrate VRConfig into the OpenWrt build system, to be able to select the correct router image and JSON file at build time.


00:33

OpenWLANMap App: Update 2

Hi,

In the last weeks I was working on the storing process, as described in the architecture in the last blog post [0].

Storage Handler:

Old app: the old app saves the data as bytes in a file. A data entry is 28 bytes: a MAC address (12 bytes for 12 characters), latitude (8 bytes, double) and longitude (8 bytes, double). An entry could be saved more than once in the file. There are two files: one for data which should be updated and one for data which should be deleted from the backend.

New app: at first I wanted to adopt the structure of the old app. But since I saw some unreasonable points, such as redundant data, flash workload, maintenance problems and unstructured storage, I decided on a standard database that is more structured and easier to maintain: SQLite. I am also using the new persistence library Room, which provides an abstraction layer over the database. It was released last year as part of the Android Architecture Components and has had a lot of bugs fixed since then. It brings a lot of advantages when working with an SQLite database: queries are verified at compile time, and it removes a lot of duplicate code compared with the previous DbHelper approach.

To store the access points in the database, I implemented a separate thread which reads data from a blocking queue and saves it to the database. It runs in parallel with the scan thread and is interrupted when there is nothing left in the queue to store. To save energy and not keep the store thread running the whole time, a whole list of access points is put into the blocking queue as a single element. To prevent redundant data in storage, an entry is not saved many times as in the old app but only once: the BSSID is used as the primary key of the SQLite table, and an entry is only updated when the received signal strength is better than that of the stored entry. An explicit transaction implements this case, since Room only provides annotations for standard update/insert. To decide whether an access point should be deleted from or updated on the backend, a flag is set.
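The app itself uses Room on Android; this plain-Python/sqlite3 sketch only illustrates the storage rule: one row per BSSID, updated only when a stronger signal arrives.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ap (
    bssid TEXT PRIMARY KEY, lat REAL, lon REAL,
    rssi INTEGER, delete_flag INTEGER DEFAULT 0)""")

def store(bssid, lat, lon, rssi):
    with db:  # explicit transaction, like the one implemented around Room
        row = db.execute("SELECT rssi FROM ap WHERE bssid = ?", (bssid,)).fetchone()
        if row is None:
            db.execute("INSERT INTO ap (bssid, lat, lon, rssi) VALUES (?,?,?,?)",
                       (bssid, lat, lon, rssi))
        elif rssi > row[0]:  # stronger signal: position is probably more accurate
            db.execute("UPDATE ap SET lat=?, lon=?, rssi=? WHERE bssid=?",
                       (lat, lon, rssi, bssid))

store("aa:bb:cc:dd:ee:ff", 52.52, 13.40, -70)
store("aa:bb:cc:dd:ee:ff", 52.53, 13.41, -60)  # updates: -60 dBm > -70 dBm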

Upload Handler:

The WifiUploader is in progress. I took a look at the upload format in the old app and how it communicates with the current backend. The upload sequence is also already defined: the scanning thread is interrupted, the remaining access points are stored, and the store thread is interrupted before the upload process starts, to prevent a conflict where two threads access the same database at the same time. The WifiUploader will also read at most a fixed number of data entries from the database and upload them, batch after batch, instead of the whole database at once like the old app, to prevent out-of-memory problems on devices with little RAM (see the diagram below).

Flowchart of the uploading process
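Continuing the storage sketch above, the batched reading that keeps memory bounded could look like this (the batch size is an arbitrary example):

def read_batches(db, batch_size=100):
    # Yield at most batch_size rows per round, so the whole table never
    # has to sit in memory at once.
    offset = 0
    while True:
        rows = db.execute(
            "SELECT bssid, lat, lon, delete_flag FROM ap LIMIT ? OFFSET ?",
            (batch_size, offset)).fetchall()
        if not rows:
            break
        yield rows
        offset += batch_size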

 

But since I am in the middle of my final exam period, there will be a small delay, until this weekend, before the WifiUploader is published. From next week on I will work full time on getting the other features done, which includes implementing all the resource-saving features, such as adaptive scanning, and implementing all the settings options. Clean and full documentation will be provided at the end as well.

Open issue: permission request and handling

[0] https://blog.freifunk.net/2018/06/10/openwlanmap-app-update-1/


July 08 2018

16:06

WiFi Direct and Bluetooth Meshing

As hinted in my last blog article, for us to really be able to move forward we needed to do some experimentation with the new technologies we have to adapt. The primary candidate at the beginning of that phase was WiFi Direct, a WiFi mode setting that is an official standard published by the Wi-Fi Alliance and is meant to replace ad-hoc WiFi mode, but only partially: WiFi Direct was mostly designed to make integration with IoT products easier. As such, using it for meshing applications is a bit outside its primary use case. The idea behind it is to make two WiFi devices talk to each other without needing a router as the middleman for negotiation and frequency selection. Even groups of devices can be created, electing a group leader that then manages the group.

Unfortunately… that sounded a lot better in theory than it turned out to be in practice.

As also hinted in my last blog post, we built some little prototype applications to test WiFi Direct between multiple devices and ran into some issues. The APIs that Android provides are okay to use, but not super convenient. Most of the issues, however, come from bugs that we haven't been able to trace down yet. The system WiFi Direct interface (System Settings > WiFi > Advanced > WiFi Direct) detects all devices in the vicinity, whereas our application, using the WiFi Direct interface of the Android SDK, would sometimes (nondeterministically) fail to detect devices or open sessions between them. We also had some bad experiences creating groups between the devices.

All in all… it was underwhelming. WiFi Direct really wasn't meant for the kind of networking we're trying to do with it, and even if we can figure out the bugs we encountered, there are other concerns to work out. Debugging these issues isn't easy, but there are a few things we can do. For one, there are other (open-source) applications (Serval, Briar, …) that use this technology, and we can study how they solved these issues. There is also the option of wireshark-ing the packets transmitted between two devices to get a better understanding of where the handshakes go wrong. Simple debugging via the Android/Java debugger unfortunately hasn't yielded many useful results.

We need a convenient way for people to join the network; we need to figure out how to create a captive portal for people connecting without the software; and we need a handoff between a WiFi Direct network section and a legacy ad-hoc section that might be created between infrastructure nodes that don't support WiFi Direct. For the last week or so I've had my head in the WiFi Direct specification, trying to answer these questions. And while I think we have solved most problems, there are still a few left to answer.

The second technology we are investigating, to complement WiFi Direct wherever it isn't applicable, is Bluetooth P2P meshing. In contrast to WiFi Direct, it was actually developed for devices to mesh with each other, which makes adapting it easier for us in the long run. So far we've only done some simple experiments with two devices (due to a lack of Android devices in one location 😉 ), but these have been a lot more promising than what WiFi Direct has offered.

The biggest take-away from the last two weeks of experimentation is that we can't tie the routing core to a single networking backend.

For the design of the actual code interface that I built in the first few weeks of GSoC, this means there are some adjustments to be made before writing more code. This includes being more generic when binding interfaces and allowing a client to use multiple backends at the same time, which the initial design did not anticipate. For the time being those interfaces will simply be mocked by stub methods, or maybe a simple simulation, so we can test the actual routing algorithms. This is an interesting challenge because so many parts of qaul.net will have to change in lockstep with each other to make it all work.

There are also some corner cases to test in Bluetooth mesh networking, such as groups and how they handle devices joining and leaving them.


11:24

The Turnantenna – Second evaluation update

Time is passing, and work is proceeding.

Last month I reported a problem concerning the speed of our beloved Turnantenna: the acceleration was not constant during the movement of the stepper engine, as I wanted it to be. The error was caused by the implementation of a bad algorithm. A constant acceleration is important to provide a smoother movement, and is needed to reduce the load on the engine. Force is equal to mass times acceleration; if the acceleration is constant, so is the force; but if the acceleration grows, the stepper's force grows as well, as long as it can keep up. Uncontrollable acceleration leads to unpredictable forces (or better, torques).

To understand the issue, a brief summary should be given: the way to control the stepper's speed consists of changing the time between two consecutive steps. The shorter the time, the faster the movement. The previous (and wrong) algorithm is documented in the older post. It wasn't a good way to control a torque-limited engine because, as said before, the acceleration was not constant. In the previous algorithm, the speed was computed like this:

v_n = v_(n-1) + const

namely, at time t_n the speed was a fixed amount more than at t_(n-1).
The time between two steps was

dt = (n - (n-1)) / v_n = 1 / v_n

It may appear correct, but the resulting graph was the following:

As can be seen, the speed is not linear. This means that the acceleration is not constant, but increasing.

I found a solution thanks to a document written by Atmel Corporation. It made me think about the relationships between speed (v), space (s), time (t) and acceleration (a) that come from the laws of physics:

s = ½ a t² + v₀ * t + s₀

This equation is always true: when accelerating, when the speed is constant, and even when decelerating. The quantities inside the formula change, but the equation always remains true.

Now, to keep it simple, let's consider the first phase: the acceleration. At the beginning of its movement the engine is still (v₀ = 0), and it starts without having done a single step yet (s₀ = 0). The resulting equation, evaluated at v₀ = 0 and s₀ = 0, is:

s = ½ a t² + 0 * t + 0
s = ½ a t²

Now, let's think about what is known: the acceleration a, which is constant (because I want it so); s, the number of steps already done at time t; and t itself. If I know how many steps s I have to do, I can find how much time t I have to wait, and vice versa.

To find the time between two steps (step n and step n+1), the formula is:

s = ½ a t²
==>  t = sqrt(2 s / a)

# at step number 'n'
t_n = sqrt(2 n / a)

# time between step 'n' and 'n+1'
dt = t_(n+1) - t_n = sqrt(2 / a) * (sqrt(n+1) - sqrt(n))

Using this calculation, the acceleration is constant and the speed increases linearly, as can be seen in the graph below:

AAAAAH.. a perfect blue line! 😀

Problem: Solved!
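A minimal Python sketch of the corrected timing (not the actual Turnantenna driver code; the acceleration value is just an example) makes the rule concrete:

import math

def step_delays(n_steps, accel):
    # t_n = sqrt(2*n/accel) is when step n must happen, so the delay
    # between consecutive steps is t_(n+1) - t_n.
    c = math.sqrt(2.0 / accel)
    return [c * (math.sqrt(n + 1) - math.sqrt(n)) for n in range(n_steps)]

delays = step_delays(1000, accel=500.0)  # 500 steps/s^2, hypothetical value
print(sum(delays))  # total ramp time: sqrt(2 * 1000 / 500) = 2.0 s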

Working on tests

Now, more progress has been made on the tests. For those who don't know, I started my programming adventure with this project. Everything is an exciting discovery for me, and during this month I learned and implemented the "argparse" and "logging" libraries. Now it is possible to execute the tests with three verbosity levels: the first is silent, the second shows debugging information and the third shows the info level.
It could appear trivial, but I'd never done it before, and now the tests are smarter!

That's not all: I reviewed all the tests, fixed problems and improved their reliability. They're still not perfect, but I'm working on them daily to get the details right.

Fly across borders

It was time to go beyond those boundaries and think about an interface that brings the web interface and the driver into communication. This is what I'm working on these days.

To achieve that goal, the problem has to be studied starting from a high level. The main process, which is constantly running, floats between a small number of well-defined states: initialising, still, moving and error handling. Together with the Ninux Florence developers community I built the following state machine graph:

This was realized with the GraphMachine module of the "transitions" library (a sketch follows below). Now I'm working on the full representation of this map in code. But there is something more: at this point, multiprocessing became necessary to provide a safe environment. When the engine is in the MOVING state, for example, and a new command arrives requesting a different rotation, the main process must be able to manage the ongoing movement and the newer request simultaneously.

That's why we chose to keep the main process always active and let it decide when to run the movement procedure in a dedicated process, like a traffic light.
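A minimal sketch of such a machine with the transitions library (pip install transitions[diagrams]; the state and trigger names here are illustrative, not the exact ones used in the Turnantenna code):

from transitions.extensions import GraphMachine

states = ["init", "still", "moving", "error"]
transitions_table = [
    {"trigger": "ready",   "source": "init",   "dest": "still"},
    {"trigger": "move",    "source": "still",  "dest": "moving"},
    {"trigger": "done",    "source": "moving", "dest": "still"},
    {"trigger": "fail",    "source": ["init", "still", "moving"], "dest": "error"},
    {"trigger": "recover", "source": "error",  "dest": "still"},
]

class Turnantenna:
    pass

antenna = Turnantenna()
machine = GraphMachine(model=antenna, states=states,
                       transitions=transitions_table, initial="init")
antenna.ready()
antenna.move()
print(antenna.state)  # "moving"
antenna.get_graph().draw("states.png", prog="dot")  # render the graph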

Turnantenna.me

The greatest effort this month went into writing down full, detailed documentation of the project: 80 pages on what the Turnantenna is, how it works, and when and why to use it.

Many people have expressed interest in the project, and some have offered to support us, but without complete documentation available it is difficult to provide a starting point.

The whole doc will be available soon, and this post will be updated with the dedicated link. So, if you are interested in the project, let us know! For the moment, the GitHub repository is available here.

See you next month!


July 07 2018

01:02

Meshenger – P2P local network messenger – Update 2

Just a few days after the first update I figured out how to use WebRTC, which I implemented into Meshenger shortly after.

To briefly describe how signalling works in my app:

  • Phone A issues a call to phone B, already knowing its address
  • A sends a call request to B, which B may accept or decline
  • On acceptance, A creates a session description, also called an offer, which it transmits to B
  • B creates an answer, which it transmits back to A
  • Using the exchanged offer and answer, a peer connection is established, and the data is transmitted through separate DataChannels/streams
  • From then on, the phones have a peer connection which they use to send audio/video as well as service messages, e.g. when a camera is connected

 

Besides the sheer implementation of WebRTC, the front/back camera can now be turned on and even switched on the fly.

The app has undergone some graphical improvements: all buttons containing text have been replaced with ImageButtons, and many of them even show an animation depending on their effect.

 

I even tried to cover the case of the user switching networks, where the link-local address loses its reachability. Instead of simply trying to reach the last known address, the app now examines every address the phone has and tries to replace its own MAC address inside those addresses with the MAC of the target. This leads to a higher chance of re-finding a contact even if the phones have switched networks.
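The trick works because link-local IPv6 addresses usually embed the interface MAC as a modified EUI-64. Here is a hedged Python sketch of the substitution idea (the app does this in Java on whole addresses; this version simply rebuilds a candidate address from the target's MAC):

def mac_to_link_local(mac):
    # Modified EUI-64: flip the universal/local bit, insert ff:fe in the middle
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02
    eui64 = b[:3] + [0xFF, 0xFE] + b[3:]
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(mac_to_link_local("aa:bb:cc:dd:ee:ff"))  # fe80::a8bb:ccff:fedd:eeff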

 

As you can see in the following screenshots, bidirectional video transmission is enabled, and the buttons at the bottom now have cute icons.

 

Phone A / Phone B

From now on I will focus on polishing the app, cleaning up the source code, finding and resolving bugs and, last but not least, writing sufficient documentation. I may even write a blog post somewhere explaining how to achieve a serverless WebRTC connection, since the documentation I have found so far was not really helpful and mainly focused on JavaScript.


June 11 2018

21:51

Routing and WiFi experimentation

The beginning of my work period was pretty busy, not always with Summer of Code things. My mentor math and I had already talked about a lot of the things that needed to happen in order to move qaul.net away from an OLSR-based routing protocol and make it extendable as well.

As previously hinted, we are using Rust for the protocol implementation, allowing easy integration into the existing C code as well as giving us the option to rewrite the entire software bit by bit in Rust, a much more modern and forgiving language. The first thing I tackled was designing a common API for the qaul.net library (libqaul) to talk to any networking backend. The routing code holds the state of the network and allows sending direct and flooded messages into a network (regardless of the implementation under the hood). But to do that we also had to define some common characteristics for nodes and messages.

In the end, a lot of the work was sitting down, going through old notes and determining what our protocol was supposed to do. We looked a lot at existing protocols, thinking about extensibility and backwards compatibility. The protocol itself will be binary encoded, although we are not yet sure in which format: there are msgpack and capnproto/protobuf, as well as some Rust-specific options (Rust Object Notation, to mention one) to look at. But that shouldn't actually matter for now. All the versioning and extensibility is done at the struct level of the protocol, meaning that we could even switch binary encodings halfway through. With the encoding and decoding written in Rust, this is actually incredibly easy with the `From` traits. But I digress…

The protocol we ended up designing can handle any type that already exists in qaul.net, and also allows for custom user extensions: messages that have a type field and an arbitrary binary payload, which allows plugins on both sides to interact with them.

 

So… so far the routing core isn't doing much routing. But that's okay, that comes later 🙂 With the networking API in place, we actually have something we've wanted for the last two years: a hardware abstraction over any networking backend. The API is implemented in Rust as a trait (think of a Java interface), which makes implementations, and even implementation-specific code, very easy.
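The real interface is a Rust trait; purely to illustrate its shape, here is a Python analogue (the method names are guesses, not libqaul's actual API):

from abc import ABC, abstractmethod

class NetworkBackend(ABC):
    """What the routing core expects from any networking backend."""

    @abstractmethod
    def send(self, node_id, payload):
        """Send a direct message to a single known node."""

    @abstractmethod
    def flood(self, payload):
        """Flood a message to every reachable node."""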

The next thing on our todo list is working out how WiFi Direct behaves. This is somewhat disconnected from the rest of the project, but it has to happen. For this purpose I've written a small demo app (still WIP at the time of writing) which will let us explore the way WiFi Direct works, how to build mesh groups, etc. These experiments are still ongoing, and we hope to have something to show by the end of the week. I will probably publish a small article on my blog about it; check it out (if you're reading this in the future 😉 )

All in all, the amount of code written in the first section of GSoC 2018 is moderate. We have, however, answered a lot of open questions, have a good plan for how to continue, and hope to have more to show off by the time of the next evaluation.

If you’re curious about the progress being made, check out the github repository.

 

Until next time,
Katharina


21:44

An OLSRv2 module for throughput estimation of 2-hop wireless links

Hi to community members!

In the phase 1 period, we designed and prototyped a throughput-estimation client/server in PRINCE based on the iperf3 library. The basic idea is that each node runs an iperf3 server, and a node can estimate the neighbor throughput by running an iperf3 evaluation. The code is available at https://github.com/pasquimp/prince/tree/iperf.

In order to bring the throughput estimation into OONF, we are evaluating the best strategy: forwarding the estimation from PRINCE to OONF, or introducing a new OONF plugin that performs the throughput evaluation itself. A possible prototype of a new OONF plugin is available at https://github.com/pasquimp/OONF/tree/neighbor-throughput.

In the next weeks, we will decide on the best strategy to keep the neighbor throughput estimation reliable and to keep OONF aware of the estimates. We will update you in the coming weeks!

 


20:07

nodewatcher: Build system rework and package upstreaming – First update

Since the last update I have spent most of my time on fully understanding the current build system and nodewatcher internals.

Build system

During the time spent looking into how the whole system works, I believe I was able to figure out what every step and script does in the current build system.
Along the way I found a lot of relatively simple improvements that can really reduce the amount of custom stuff we maintain. Most of it was added 3-5 years ago, when OpenWrt wasn't really in the state it is in now. A custom mirror for sources was added, which is now useless: it was never updated, and for some really old custom packages it is really slow. Also, building all packages that are added to the package list at build time is not really efficient, because during the build OpenWrt's default feeds are replaced with our custom package feeds.
This causes users to be stuck with really old and quite limited numbers of packages. It will be reworked to replace only the target feed, as that one contains all of the kernel modules that are tied to a specific kernel version.
Other packages have no such requirements, and the upstream versions can be used.
Also, wget was used at build time to pull dependencies instead of curl, which is the recommended tool. Wget is fine for simple downloads, but a lot of packages are pulled from behind CDNs and through lots of redirects from mirrors such as SourceForge; curl can handle those fine, but wget can't.

I have started dropping unused and old packages, as well as those that had custom patches that were upstreamed a long time ago, stuff like iwinfo from 2015 and an old curl.

Also, I have started to move both the build-system Docker image and the runtime Docker image from Ubuntu 14.04 to 18.04 Bionic.
This does not fully work yet, as the ImageBuilder does not detect GCC and ncurses as working in the runtime image, despite the fact that GCC works fine. This will be hard to diagnose, but I feel the cause is quite simple.

Custom Wlan Slovenija packages are being prepared for upstreaming.

This is all for now; the next two weeks should bring solutions to most of these issues.

Robert Marko


17:07

VRConfig Update

Hi,

I have some quick updates about VRConfig for you.
Short recap: VRConfig aims to introduce a graphical configuration mode for OpenWrt's web interface LuCI.
For that we need to collect pictures of the backside of all supported routers. The idea is to do this in a crowdsourcing manner: the community can submit pictures of their routers together with a metadata file which contains the locations of the components in the picture.

I spent the last weeks developing a web application that provides the annotation functionality for the router components.
A working prototype is now ready and can be tested at the following URL: https://vrconfig.gitlab.io/annotator/
Source code: https://gitlab.com/vrconfig/annotator

The annotator produces a JSON file which in turn can be parsed by the LuCI Application to provide the graphical configuration mode.

The LuCI application is being developed right now and will be provided shortly at the following URL: https://gitlab.com/vrconfig/luci-app-vrconfig

More info about that in the next blog post.


16:35

GSoC 2018 – DAWN a decentralized WiFi controller (1st update)

DAWN uses the ubus bindings of hostapd. Ubus is a messaging system in OpenWrt through which processes can subscribe to and publish information or services. The hostapd ubus bindings allow collecting probe, auth and assoc requests; furthermore, it is possible to deny these requests. Additionally, we can gather client information or deauthenticate clients.
I made my life easy by just extending the hostapd ubus calls with all the information I need. I wanted to get these changes upstream, but some of my pull requests were rejected. I had added the BSSID, the ESSID and fields like that to the hostapd notifications. The pull requests were rejected because that information can be gathered through the netlink socket; the ubus bindings of hostapd should only spread information that cannot be gathered in other ways. Now I only have two pull requests left:

Already accepted pull requests:

Added channel survey data in libiwinfo

I had to decide whether to use nl80211 directly or some library. I already used libiwinfo to continuously update the RSSI of the connected clients. Furthermore, the libiwinfo library is often installed on OpenWrt devices. With libiwinfo it was possible to gather the ESSID and the BSSID of the WiFi interface. The only information I missed is the channel utilization: a value between 0 and 255 that measures how heavily a channel is used and what capacity is left.
The channel utilization can be calculated from the channel survey data as

channel_utilization = channel_busy_time / channel_active_time * 255

Unfortunately, the needed information is not exposed by libiwinfo, so I extended the lib with the necessary fields: https://github.com/PolynomialDivision/iwinfo/tree/feature/channel_util
There is some weird behavior of the ath10k driver that I tried to debug; the ath9k driver works very smoothly. If I try to obtain channel survey data without waiting a short time, the survey results come back as 0. Just waiting between two calls fixes the problem.
Then I had to figure out how to contribute to the OpenWrt projects (https://git.openwrt.org/project/iwinfo.git). This is done via the mailing list. There is a nice tutorial on how to send patches using git (https://burzalodowa.wordpress.com/2013/10/05/how-to-send-patches-with-git-send-email/). I'm still waiting for the patch to be merged.

Now I can calculate the channel utilization. Instead of always using the latest value, the channel utilization should be averaged, since the raw value can be very dynamic.
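For example, a simple exponential moving average would do (a Python sketch; the weighting factor is arbitrary, not DAWN's actual choice):

def ema(samples, alpha=0.25):
    # A new sample counts alpha, the history counts (1 - alpha): smooths spikes.
    avg = None
    for s in samples:
        avg = s if avg is None else alpha * s + (1 - alpha) * avg
        yield avg

# Raw utilization samples (0..255) from consecutive survey reads:
print(list(ema([40, 200, 60, 220, 80])))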

That's it. Then I had to rewrite the daemon to gather this information from libiwinfo.

Bootstrapping

I want to implement bootstrapping: if a router joins the decentralized controller, it should automatically get the configuration from one router of the decentralized group. Different solutions are possible. I could use scp or rsync to fetch a configuration from another node, but I wanted a different solution. With UCI (Unified Configuration Interface) you can configure daemons, and I use UCI to read the configuration into the daemon. My idea was to send the daemon configuration over the network as a string and use UCI to write the daemon's configuration file. Unfortunately, I had some trouble with the UCI lib, so this approach is not finished yet.

Lesson Learned – Use calloc instead of malloc!

I spent a lot of time trying to fix a stupid mistake.
The ubus C library has a function called ubus_add_subscriber which expects a ubus_subscriber. Everything was fine in my old implementation because I used a global variable (which is zero-initialized by default). Now I wanted to add more subscribers using an array of pointers. What I did was:
struct ubus_subscriber *sub = malloc(sizeof(struct ubus_subscriber)); /* contents uninitialized! */
ubus_add_subscriber(ctx, sub); /* reads pointer fields of *sub */

This crashed all the time and I was very confused. Finally, a friend of mine said that I should try calloc. It worked! The function ubus_add_subscriber walks the existing pointers in this struct if they are not NULL, and malloc'ed memory contains garbage, while calloc returns zeroed memory.
Lesson learned: use calloc. 😉

Lesson Learned 2 – Read header files carefully if they exist!

If you use uci_lookup_ptr(ptr, "bla.@bla.bla=blavalue", true) with a constant string, it will not work:
uci_lookup_ptr needs a string that can be edited and that is not constant! 😉


15:23

GSoC 2018 – Better map for nodewatcher (1st update)

Since my last update I have made a lot of progress in understanding how nodewatcher works, mostly around Django, and in implementing some of the elements I mentioned in my last post.

My progress in the beginning was very slow because I had never used Django in the capacity it is used in nodewatcher. But after a couple of trial-and-error moments and a lot of help from Django forums and wlan-si members, I was able to get a grip on the things I needed. There should be a more detailed description of how some parts of the nodewatcher system work: currently only a handful of people know how the whole system works, and that shouldn't be the case. I will try to document most of my findings and contribute them to the project to help others later on.

I started my own Leaflet map in order to make progress on the map while I learned everything around the current nodewatcher schema. I tried to implement the basic functionality first, to see how the whole code is laid out. As you can see from the picture below, I tested the fullscreen option of the map and also the color representation of the different nodes. In the top-right corner I also added support for selecting which nodes to show.

This is just a test example, and there is a lot of work to implement this in nodewatcher, because here I wrote my own script and added the markers by hand. The biggest part is figuring out where to add this code in nodewatcher and making it work with real nodes. I hope to have this figured out by the next update, so that some of the features get added. These features are subject to change and will most likely change in appearance.

I also had some problems implementing new scripts, but that shouldn't pose a problem in the future. As I said earlier, the main challenge is adapting to the current code and maintaining its structure, so that there isn't any confusion when someone else takes over. This involves a lot of asking around for help, and it will be helpful later on, because I hope to continue with this project after GSoC.

After that I will start working on the side menu and anything else that turns out to be a good addition to the map. For now it is important to learn how the system works and add basic upgrades, so that later on it is easier to focus on adding new elements instead of wasting time learning how everything works again. I hope there won't be any more problems in the future.


03:17

GSoC 2018 – Kernel-space SOCKS proxy for Linux – June progress

Assembling the testbed

I decided to give you a brief introduction to the development of my testbed. In the past month I spent most of my time experimenting with different virtual environments for kernel development. The pros of virtualization:

  • Fast test cycles: multiple virtual machines (VMs) can use the same freshly compiled kernel
  • No physical devices: you don't have to reboot your machine every time you want to test your recent kernel changes, and VMs reboot very fast (about 6-7 seconds in my current setup)
  • Flexible network virtualization: you can connect your VMs with virtual Ethernet links to virtual switches

My current workflow looks like this:
1. Make changes in the kernel code or configuration (make menuconfig or the .config file)
2. Compile the modified kernel
3. Boot the virtual machines with the new kernel
4. Test if it works, debug, etc.
5. Go to 1.

In the following you can find a detailed intro on how to set up the kernel development and test environment with QEMU and virtual networking.

The key components

On my host machine I use the following software for the setup:

  • Ubuntu 18.04 Desktop
  • Default 4.15 kernel
  • QEMU 2.12
  • nmcli, the NetworkManager console interface, for bridge creation

Some info about the VMs:

  • Ubuntu Server 18.04 qcow2 cloud images
  • 4.14 kernel with MPTCP support
  • cloud-init for customizing the cloud images

My current testbed

The picture above shows the main components of the network configuration of my development environment. I will try to explain the steps for reproducing it; this section shows how I set up the virtualization environment on the host machine. QEMU brings lots of neat features for easy virtualization, like connecting your VMs to bridges on your host, forwarding ports from the guest to the host, loading an external kernel into the guest, etc. We will need all of them for the development.

Get the dependencies

Step 1) Install the required softwares for kernel compilation

I use Ubuntu 18.04, where most of the required components are available from the default repository, and you can install them with a simple apt command. First I installed the packages for kernel compilation; you can find lots of resources on the internet about the current dependencies. In my case:

$ sudo apt install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc bison flex libelf-dev

Step 2) Install the softwares for the virtualization

Now we will install QEMU and some additional tools for maximum compatibility. This installs the QEMU 2.11 version. For my setup I compiled and installed the 2.12 version from source; you can find more info here: https://www.qemu.org/download/#source This version contains a simplified -nic networking option, described here: https://www.qemu.org/2018/05/31/nic-parameter/

$ sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager cloud-utils

Step 3) Get the MPTCP kernel source code

After lots of effort it looks like MPTCP will be merged into the mainline kernel soon, so in the near future this step will be deprecated, but until then you can get the MPTCP kernel source from GitHub:

$ git clone -b mptcp_v0.94 git://github.com/multipath-tcp/mptcp

Step 4) Get the Ubuntu Cloud image (what we will use as a rootfs)

With QEMU we can boot cloud images, which are very common in cloud environments where the installation of a Linux distribution might be difficult for the end user or require lots of resources. With cloud images you can skip the installation of the Linux distribution (for example Ubuntu Server): you get a minimal set of software and can install more with the package manager. I got the latest Ubuntu Server cloud image from here: https://cloud-images.ubuntu.com/bionic/current/ There are lots of architectures and formats; I use https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img, which is for QEMU (qcow2 format).

Preconfigure the networking

In the following we will take a look at the network setup. This is very easy: we will make two bridges for the VMs. You can imagine these as switches into which you plug the UTP cables of the VMs. This method is also very flexible: you could set up these bridges with Mininet or the ns-3 network simulator and plug your VMs into those, simulating Wi-Fi, LTE or LoRa links instead of error-free, reliable Ethernet links. But for simplicity we will use regular Linux bridges now.

Step 1) Create two bridges with nmcli

With a few commands we can make two Linux bridges with the NetworkManager command-line interface, nmcli, which is probably already preinstalled on your machine. Important note: we use nmcli because it makes permanent changes, so the bridges will survive a reboot of the machine. This method also avoids any config file modification.

$ #Setup the bridge interfaces
$ nmcli con add ifname br0 type bridge con-name br0
$ nmcli con add ifname br1 type bridge con-name br1
$
$ #Disable the STP because we need both path later
$ nmcli con modify br0 bridge.stp no
$ nmcli con modify br1 bridge.stp no
$
$ #Disable DHCP on the bridges
$ nmcli device modify br0 ipv4.method disabled
$ nmcli device modify br1 ipv4.method disabled
$ nmcli device modify br0 ipv6.method ignore
$ nmcli device modify br1 ipv6.method ignore
$
$ #Activate the bridges
$ nmcli con up br0
$ nmcli con up br1

Step 2) Configure the qemu-bridge-helper so QEMU knows about the bridges

You have two options here. It depends on the QEMU version and the Linux distribution on your host machine, but there are two config files you can modify. The content should be the same in both cases, to tell QEMU "hello, we have br0 and br1 bridges, use them as you wish":

allow br0
allow br1

Method #1: create a config file in /etc/

$ sudo mkdir /etc/qemu/
$ sudo gedit /etc/qemu/bridge.conf
$ sudo chmod 640 /etc/qemu/bridge.conf
$ sudo chown root:libvirt-qemu /etc/qemu/bridge.conf

Method #2: modify the content of the /usr/local/etc/qemu/bridge.conf file (which was empty in my case). I use this method to keep my /etc/ clean.

Compile the kernel

Now we prepare the kernel image for the VMs. If we want to use our kernel for network development, for example, we have to enable some networking-related features in the config. We will also use some debugging and tracing tools to inspect the operation, so we have to enable the debug information as well.

Step 1) Make the initial config (defconfig)

With the following commands we create a .config file which we can use as a starting point of the configuration. Then we can modify this file or make further changes with make menuconfig:

$ #Assuming you already cloned the MPTCP kernel at the beginning of the tutorial
$ cd mptcp
$ make x86_64_defconfig
$ make kvmconfig
$ make -j `nproc --all`

This gives you the compiled kernel, which you can find at arch/x86/boot/bzImage.

Step 2) Enable the MPTCP and debugging

Now we have to enable MPTCP and the debug features, because both are disabled by default. I will also enable the tc netem module, which will be useful for limiting the traffic rate to a lower bandwidth. I will use eBPF (more info here: http://www.brendangregg.com/ebpf.html), ftrace (https://lwn.net/Articles/370423/) and perf (https://perf.wiki.kernel.org/index.php/Main_Page) for tracing and debugging. Modify the .config file (or search for all the features in make menuconfig, but in this case I don't recommend that).

#Common debug parameters
CONFIG_BLK_DEBUG_FS=y
CONFIG_CIFS_DEBUG=y
CONFIG_DEBUG_BOOT_PARAMS=y
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_DEVRES=y
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_PM_DEBUG=y
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PNP_DEBUG_MESSAGES=y
CONFIG_SLUB_DEBUG=y
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_X86_DEBUG_FPU=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_KPROBE_EVENTS=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_KPROBES=y
CONFIG_KRETPROBES=y
CONFIG_OPTPROBES=y
CONFIG_PROBE_EVENTS=y
CONFIG_UPROBE_EVENTS=y
CONFIG_UPROBES=y


#eBPF related parameters
CONFIG_BPF_EVENTS=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y

#perf related parameters
CONFIG_PERF_EVENTS_INTEL_CSTATE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=y
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS=y

#tracefs related parameter
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_FTRACE=y
CONFIG_KPROBES_ON_FTRACE=y

#Enable MPTCP
CONFIG_MPTCP_BINDER=y
CONFIG_MPTCP_FULLMESH=y
CONFIG_MPTCP_NDIFFPORTS=y
CONFIG_MPTCP_PM_ADVANCED=y
CONFIG_MPTCP_REDUNDANT=y
CONFIG_MPTCP_ROUNDROBIN=y
CONFIG_MPTCP_SCHED_ADVANCED=y
CONFIG_MPTCP=y

Save the modified .config file.

Now we will enable netem in menuconfig, just to show this method as an example. In the kernel folder, type:

$ make menuconfig

Then in the menu navigate to Network emulator (NETEM) and enable it by pressing the y key:

-> Networking support
-> Networking options
    -> QoS and/or fair queueing
        -> Network emulator (NETEM)

Step 3) Recompile the kernel with the new features

Now we have to recompile the kernel to include the new features. The kernel image (bzImage) should be larger in file size because of the debug information:

make -j `nproc --all`

Booting the guests

This is the most important part of the tutorial, because we have to take care of lots of details. If the following steps don't work as expected on your machine, or you run into trouble, you can find lots of resources on the web (like: https://www.collabora.com/news-and-blog/blog/2017/01/16/setting-up-qemu-kvm-for-kernel-development/ or https://www.youtube.com/watch?v=PBY9l97-lto).

Step 1) Create the cloud-init input images

We have a fresh .img file to boot, but think about it for a second: what are the username and the password for the first boot? How can we change them, or add an SSH public key to the authorized keys? Can we change the username and the hostname on boot? The answer to all these questions: yes, all of this is possible with cloud-init (http://cloudinit.readthedocs.io/en/latest/)

Create a file with the host info (hostname, username, SSH public key, etc.) in the following format. Save it under any name you wish, cloud-init-data.txt for example. (Replace the ssh-authorized-keys parameter with your own public key, from .ssh/id_rsa.pub for example.)

#cloud-config
hostname: ubu1
users:
  - name: test
    ssh-authorized-keys:
      - ssh-rsa AAAAB3[...] spyff@pc
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash

Now you can create a cloud-init image file from the configuration above, which you will attach to the VM. The cloud-init module, which is preinstalled on every Ubuntu cloud image, will find it and configure the guest.

$ cloud-localds ubu1.img cloud-init-data.txt

The output of the command is a small image file with the cloud config.
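On Ubuntu the cloud-localds tool comes from the cloud-image-utils package (an assumption worth checking on other distros), and the result is simply a small ISO 9660 volume labelled cidata that cloud-init searches for at boot:

$ sudo apt install cloud-image-utils
$ file ubu1.img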

Step 2) Boot the guest first time

In this step we just try out if everything works. If the guest VM boots, we can install additional software, kernel modules and headers, etc.

$ sudo qemu-system-x86_64 \
-kernel mptcp/arch/x86/boot/bzImage \
-append "root=/dev/sda1 single console=ttyS0 systemd.unit=graphical.target" \
-hda bionic-server-cloudimg-amd64.img \
-hdb ubu1.img \
-m 2048 \
--nographic \
--enable-kvm \
-nic user,hostfwd=tcp::2222-:22

In this command we boot the guest VM from the cloud image with the MPTCP kernel. The systemd.unit=graphical.target kernel command line parameter is important, otherwise we would boot into rescue mode. -hdb ubu1.img passes our cloud config information to the guest. The parameter -nic user,hostfwd=tcp::2222-:22 forwards the guest's SSH port to us as local TCP port 2222. This is useful if we have more than one guest VM: we can forward each guest's SSH port to a different local port.

Important note: the -nic QEMU parameter only works with versions >= 2.12; with 2.11 you can use -netdev user,id=net0,hostfwd=tcp::2222-:22 -device e1000,netdev=net0 instead. See this for details: https://wiki.qemu.org/Documentation/Networking#The_new_-nic_option

We can SSH into the guest VM:

$ ssh test@127.0.0.1 -p 2222
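Optionally, to avoid remembering port numbers, you can add client-side aliases (the host names ubu1/ubu2 are arbitrary; port 3333 is the forward we will set up for the second VM later in this post):

$ cat <<EOF >> ~/.ssh/config
Host ubu1
    HostName 127.0.0.1
    Port 2222
    User test
Host ubu2
    HostName 127.0.0.1
    Port 3333
    User test
EOF

After that, a plain ssh ubu1 is enough.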

Step 3) Prepare the second VM

Repeat Step 1), but modify the hostname to ubu2 in the cloud-init-data.txt first. Then create a new ubu2.img file with cloud-localds, which we will pass to the second VM with the -hdb parameter.
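For example, assuming you saved the modified config under the (arbitrary) name cloud-init-data2.txt:

$ sed 's/ubu1/ubu2/' cloud-init-data.txt > cloud-init-data2.txt
$ cloud-localds ubu2.img cloud-init-data2.txt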

If we want to boot the same .img file with two QEMU guests, we get the following error: qemu-system-x86_64: -hda bionic-server-cloudimg-amd64.img: Failed to get "write" lock Is another process using the image?. We have two options now: copy the .img file as a new one, or use backing files. With backing files we can use the base image as a common "root" of the two VMs. To get a brief intro to backing files I recommend this article: https://dustymabe.com/2015/01/11/qemu-img-backing-files-a-poor-mans-snapshotrollback/ Let's create two images:

$ qemu-img create -f qcow2 -b bionic-server-cloudimg-amd64.img ubuntu1.img
$ qemu-img create -f qcow2 -b bionic-server-cloudimg-amd64.img ubuntu2.img

Now we can pass the backing files to the VMs, which they can read and write; they don't touch the original bionic-server-cloudimg-amd64.img file and save only the differences. Keep in mind that those changes will be lost if you delete the backing files, and without the base image your backing files don't work anymore.
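You can inspect the chain at any time with qemu-img info, and if you ever want to make the accumulated changes permanent, qemu-img commit merges an overlay back into its base image (this rewrites bionic-server-cloudimg-amd64.img, so do it deliberately):

$ qemu-img info ubuntu1.img
$ qemu-img commit ubuntu1.img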

Step 4) Boot both VM without network config

To configure the network interfaces of the virtual machines, just boot both of them and check whether we can SSH into both consoles at the same time. Open four terminal windows (2 SSH + 2 QEMU) and type the commands

$ sudo qemu-system-x86_64 \
-kernel mptcp/arch/x86/boot/bzImage \
-append "root=/dev/sda1 single console=ttyS0 systemd.unit=graphical.target" \
-hda ubuntu1.img \
-hdb ubu1.img \
-m 2048 \
--nographic \
--enable-kvm \
-nic user,hostfwd=tcp::2222-:22

$ sudo qemu-system-x86_64 \
-kernel mptcp/arch/x86/boot/bzImage \
-append "root=/dev/sda1 single console=ttyS0 systemd.unit=graphical.target" \
-hda ubuntu2.img \
-hdb ubu2.img \
-m 2048 \
--nographic \
--enable-kvm \
-nic user,hostfwd=tcp::3333-:22

Then log in:

$ ssh test@127.0.0.1 -p 2222
test@ubu1:~$

$ ssh test@127.0.0.1 -p 3333
test@ubu2:~$

Step 5) Configure the networking on the guest machines

This is a slightly tricky step. We don't know the names of the network interfaces yet, so we can only guess. For example, we can check the default interface name on the guests with ip a

$ sudo -i
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
   valid_lft 86157sec preferred_lft 86157sec
inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr noprefixroute
   valid_lft 86395sec preferred_lft 14395sec
inet6 fe80::5054:ff:fe12:3456/64 scope link
   valid_lft forever preferred_lft forever

It looks like we use enp0s3, so if we add two more ethernet interfaces we can assume they will get the names enp0s4 and enp0s5. With this assumption, configure the guests:

On ubu1 guest VM:

# touch /etc/systemd/network/20-wired-enp0s4.network
# touch /etc/systemd/network/21-wired-enp0s5.network
#
# cat <<EOF > /etc/systemd/network/20-wired-enp0s4.network
[Match]
Name=enp0s4
[Network]
Address=10.1.1.1/24
Gateway=10.1.1.2
EOF
#
# cat <<EOF > /etc/systemd/network/21-wired-enp0s5.network
[Match]
Name=enp0s5
[Network]
Address=10.2.2.1/24
Gateway=10.2.2.2
EOF

On ubu2 guest VM:

# touch /etc/systemd/network/20-wired-enp0s4.network
# touch /etc/systemd/network/21-wired-enp0s5.network
#
# cat <<EOF > /etc/systemd/network/20-wired-enp0s4.network
[Match]
Name=enp0s4
[Network]
Address=10.1.1.2/24
Gateway=10.1.1.1
EOF
#
# cat <<EOF > /etc/systemd/network/21-wired-enp0s5.network
[Match]
Name=enp0s5
[Network]
Address=10.2.2.2/24
Gateway=10.2.2.1
EOF

If you scroll back and take a look at the figure, you can verify that the IP addresses match.
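One detail the files alone don't cover: systemd-networkd has to be running and must re-read the new files before they take effect. On the Ubuntu cloud images it should already be active, but to be safe, run on both guests:

# systemctl enable systemd-networkd
# systemctl restart systemd-networkd
# networkctl list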

Step 6) Start the guest VMs with additional network interfaces

Now we will start both guests so they can connect to each other on both paths through the bridges. We have to tell QEMU that we want to add two additional ethernet interfaces to each VM and connect them to the host bridges br0 and br1 (see the figure at the top of the post). Open up two terminals and run the following commands

Start ubu1 VM

sudo qemu-system-x86_64 \
-kernel mptcp/arch/x86/boot/bzImage \
-append "root=/dev/sda1 single console=ttyS0 systemd.unit=graphical.target" \
-hda ubuntu1.img \
-hdb ubu1.img \
-m 2048 \
--nographic \
--enable-kvm \
-nic user,hostfwd=tcp::2222-:22 \
-nic bridge,br=br0,mac=52:54:00:10:11:01 \
-nic bridge,br=br1,mac=52:54:00:10:22:01

Start ubu2 VM

sudo qemu-system-x86_64 \
-kernel mptcp/arch/x86/boot/bzImage \
-append "root=/dev/sda1 single console=ttyS0 systemd.unit=graphical.target" \
-hda ubuntu2.img \
-hdb ubu2.img \
-m 2048 \
--nographic \
--enable-kvm \
-nic user,hostfwd=tcp::3333-:22 \
-nic bridge,br=br0,mac=52:54:00:10:11:02 \
-nic bridge,br=br1,mac=52:54:00:10:22:02

Important: you should specify different MAC addresses with the mac= key, otherwise the VM you started second will get stuck, because at boot it sees its own MAC address already present on the bridge. That's because a single QEMU process only generates distinct MAC addresses for its own network interfaces; if you start two QEMU processes, both will get the same MAC addresses on their ethernet interfaces.
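Another common stumbling block (in case your host is not already prepared from the earlier bridge setup): the setuid qemu-bridge-helper only attaches guests to bridges whitelisted in /etc/qemu/bridge.conf, so if QEMU complains about bridge access, whitelist them on the host:

# mkdir -p /etc/qemu
# printf 'allow br0\nallow br1\n' > /etc/qemu/bridge.conf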

If everything is right, you should see the following output on the ubu1 VM

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
   valid_lft 85004sec preferred_lft 85004sec
inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr noprefixroute
   valid_lft 86397sec preferred_lft 14397sec
inet6 fe80::5054:ff:fe12:3456/64 scope link
   valid_lft forever preferred_lft forever
3: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:10:11:01 brd ff:ff:ff:ff:ff:ff
inet 10.1.1.1/24 brd 10.1.1.255 scope global enp0s4
   valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe10:1101/64 scope link
   valid_lft forever preferred_lft forever
4: enp0s5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:10:22:01 brd ff:ff:ff:ff:ff:ff
inet 10.2.2.1/24 brd 10.2.2.255 scope global enp0s5
   valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe10:2201/64 scope link
   valid_lft forever preferred_lft forever
5: teql0: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 100
link/void
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0

Note: modify the content of the /etc/systemd/network/ files if you got different interface names in your VMs. Now you can verify the connectivity between the VMs:

# ping -c 4 10.1.1.2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
64 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=0.340 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=0.338 ms
64 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=0.489 ms
64 bytes from 10.1.1.2: icmp_seq=4 ttl=64 time=0.422 ms

--- 10.1.1.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3055ms
rtt min/avg/max/mdev = 0.338/0.397/0.489/0.064 ms
#
#
# ping -c 4 10.2.2.2
PING 10.2.2.2 (10.2.2.2) 56(84) bytes of data.
64 bytes from 10.2.2.2: icmp_seq=1 ttl=64 time=0.353 ms
64 bytes from 10.2.2.2: icmp_seq=2 ttl=64 time=0.360 ms
64 bytes from 10.2.2.2: icmp_seq=3 ttl=64 time=0.429 ms
64 bytes from 10.2.2.2: icmp_seq=4 ttl=64 time=0.362 ms

--- 10.2.2.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3064ms
rtt min/avg/max/mdev = 0.353/0.376/0.429/0.030 ms
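Since the whole point of the two links is MPTCP, it is also worth checking that the guests actually run the MPTCP kernel and negotiate subflows on both paths. On the multipath-tcp.org kernels the knobs below exist; treat the exact paths as an assumption, since they vary between MPTCP versions:

# sysctl net.mptcp.mptcp_enabled
# cat /proc/net/mptcp_net/mptcp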

A few other tricks

Extend the default disk space in the VM

Soon

Limit the bandwidth of the guest VM interfaces

Soon

Install kernel modules and kernel headers in the VM

Soon

Install eBPF BCC on the VM from source

Soon

The post GSoC 2018 – Kernel-space SOCKS proxy for Linux – June progress appeared first on Freifunkblog.

June 10 2018

21:19

GSoC – Ground Routing in LiMe app

Overview

In this past month I was working on updating the lime-app dependencies (they were quite outdated). I also worked on the view and the ubus module that reads and saves ground routing settings in the LiMe config file.

The view: (Github LimeApp branch)

It is the minimal configuration of a plugin for lime-app. It defines the constants, the store, the actions (set and get) and basic epics to obtain the data using uhttpd-mod-ubus.

Lime-app uses Preact for rendering the views, redux for state management and redux-observable as middleware for asynchronous events. For now it only gets the settings as JSON and exposes them to the user.

Ubus (Github lime-package-ui branch)

I created the lime-groundrouting package that exposes and sets the ground routing configuration for LiMe. For the time being, it just exposes the settings:

ubus call lime-groundrouting get

To do this I use the Lua library lime.config.
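If you want to inspect the registered object and its method signatures directly on the node, ubus can list them verbosely (assuming the package is installed and the service is running):

$ ubus -v list lime-groundrouting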

Next step: Save changes.

In the coming weeks I will build the form and the validation schema in both the app and the ubus module.

The post GSoC – Ground Routing in LiMe app appeared first on Freifunkblog.
