tl;dr (I'm just a big fan of Unix :))
(thanks @motomuman for the gif animations)

NFV and Unix

Network Function Virtualization (NFV), Service Function Chaining (SFC), Function as a Service (FaaS): in late 2017, these are no longer brand-new buzzwords. They have been actively discussed, and many technologies have been crafted and developed around them. Function chaining in particular is treated as a magical technology that will solve huge problems: enterprises will no longer have to pay tons of dollars for expensive middleboxes, the options available to Internet users will expand as much as possible, and so on.
Function chaining in the NFV context is usually discussed in terms of the interconnect, or the routing scheme that forms a chain across multiple functions (middleboxes). The interconnect question leads to the design of fast software switches (or channels for inter-process communication); the routing scheme dynamically dispatches particular flows along a path of middleboxes.
I had been wondering whether somebody would implement this chaining functionality with Unix pipes. The world of Unix is now dominant not only in the backend of the Internet, but also in the user-visible terminals that everybody lives with in daily life. Since the interconnection concept is so similar to these chains, and the handy, composable combinations of programs might be helpful, connecting network functions (or virtual machines) with Unix pipes felt natural to me.
Unix pipes, championed by Douglas McIlroy, are just one part of the system's design, yet they have become a symbol of its huge success. The principle, "do one thing and do it well," goes hand in hand with writing minimal, modular programs that combine to do a larger job. The design is really simple: give every program a universal interface, standard input and output (stdin and stdout), and connect programs with pipes.
But so far this idea had not surfaced in the NFV context, at least not on my radar, so we tried it.
My coworker @motomuman quickly prototyped this idea by extending the Linux Kernel Library (commit lkl/linux@b31aa0d) so that multiple processes, each a virtual Linux instance bundled with a userspace program (something like a unikernel), can be connected together. The prototype is named /dev/stdpkt.
So what is this?
To show what this does, here is a quick example with the ping command (a universal diagnostic tool for anything network-related). When you run ping on your laptop, it normally transmits packets through one of your network devices. With stdpkt, however, outgoing packets are written to standard output (stdout) and incoming ones are read from standard input (stdin). So if nothing is reading the stream other than your terminal, the raw binary packet data is printed to the terminal.
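As a sketch only (the wrapper script name follows the LKL hijack examples shown later in this post; the exact flags and addresses are illustrative assumptions, not the prototype's documented interface):

```shell
# Run ping inside an LKL instance; with /dev/stdpkt the outgoing frames
# go to stdout, so piping them through xxd renders the bytes readable
# instead of garbling the terminal.
# (hypothetical invocation; path, flags, and address are assumptions)
./bin/lkl-hijack.sh ping -c 1 10.0.0.1 | xxd | head
```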
NAT and IP address filtering in a shell one-liner
So you could quickly create a chain with the following set of commands:
% nat.sh | filter.sh
where nat.sh, which translates the source IP address, looks like
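A minimal sketch of what nat.sh might contain (the SNAT rule, the address, and the use of the hijack wrapper are assumptions inferred from the filter.sh example below, not the actual script):

```shell
#!/bin/sh
# Hypothetical nat.sh: run iptables inside the LKL network stack (via the
# hijack wrapper) and rewrite the source address of outgoing packets.
# The address 192.0.2.1 and the rule itself are illustrative assumptions.
./bin/lkl-hijack.sh iptables -t nat -A POSTROUTING -j SNAT --to-source 192.0.2.1
```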
and filter.sh, which drops packets destined to 10.0.0.0/8, looks like
% ./bin/lkl-hijack.sh iptables -A OUTPUT -d 10.0.0.0/8 -j DROP
(Caveat: these are simplified command lines; additional configuration would be required.)
In this example, the iptables command configures the network stack inside LKL (not the host kernel), and the process then behaves as a middlebox configured with two network devices.
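One way to convince yourself which stack was configured (a hypothetical invocation, assuming the hijack wrapper exposes the LKL instance's own iptables):

```shell
# List the OUTPUT rules as seen by the LKL instance, not the host kernel;
# the host's own `iptables -L` output stays unchanged.
./bin/lkl-hijack.sh iptables -L OUTPUT -n
```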
packet filtering by grep
You can actually filter packets with the grep command (or whatever you like), as EtherPIPE (by @sora) does.
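As a self-contained analogue (no LKL involved; the "packets" here are just text records, an assumption made so the pipeline runs anywhere):

```shell
# grep acts as the filtering function in the chain: only records matching
# the pattern are passed downstream; everything else is dropped.
printf 'DST=10.0.0.5\nDST=172.16.0.9\nDST=10.0.0.7\n' | grep 'DST=10\.0\.0\.'
# prints the two 10.0.0.x records
```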
(thehajime changed the title from "Poem: NFV/SFC, unikernels, and Unix pipes" to "NFV/SFC, unikernels, and Unix pipes" on Nov 19, 2017.)
port mirroring by tee
You can mirror all packets with the tee command and dump everything with another tcpdump command.
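A self-contained analogue of the mirroring step (plain text records instead of packets, an assumption made so it runs without LKL):

```shell
# tee copies every record to mirror.log (the "mirror port") while the
# original stream continues down the pipe to the next function,
# here a simple wc -l that counts the records.
printf 'pkt1\npkt2\npkt3\n' | tee mirror.log | wc -l
```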
Expand usages to what you wish
The examples are not limited to the above: if you wish to use your preferred commands for packet processing, we would love to see broader usage. It would also be great if you shared such usage below.
The details of this tool are presented in a paper at AINTEC 2017. The paper will be posted here, and we can share it with you directly upon request.