Kernel-based (Linux) data relay between two TCP sockets


I wrote a TCP relay server that works as a peer-to-peer router (supernode).

In the simplest case there are two open sockets with data relayed between them:

clienta <---> server <---> clientb

However, the server has to serve about 2000 such A-B pairs, i.e. 4000 sockets...

There are two well-known ways to implement a data-stream relay in userland (both based on socketa.recv() --> socketb.send() and socketb.recv() --> socketa.send()):

  • using select / poll (non-blocking method)
  • using threads / forks (blocking method)
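For reference, both approaches end up driving a copy loop like the sketch below: one chunk is read into a userland buffer and then written out, with short writes handled. Every byte crosses the kernel/userland boundary twice, which is exactly the overhead in question. (The function name is mine, for illustration.)

```c
#define _GNU_SOURCE
#include <unistd.h>

/* Relay one chunk from src to dst through a userland buffer.
 * Returns bytes moved, 0 on EOF, -1 on error. */
ssize_t pump(int src, int dst) {
    char buf[4096];
    ssize_t n = read(src, buf, sizeof buf);
    if (n <= 0)
        return n;                       /* EOF or read error */
    for (ssize_t off = 0; off < n; ) {
        ssize_t m = write(dst, buf + off, (size_t)(n - off));
        if (m < 0)
            return -1;                  /* write error */
        off += m;                       /* handle short writes */
    }
    return n;
}
```

The select/poll server calls this only when a socket is readable; the thread-per-pair server calls it in a blocking loop. Either way the copy itself is the same.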

I used threads, so in the worst case the server creates 2*2000 threads! I had to limit the stack size. It works, but is it the right solution?

The core of my question:

Is there a way to avoid actively relaying data between the two sockets in userland?

It seems there might be a passive way. For example, could I take the file descriptor of each socket, create two pipes, and use dup2() - the same method used for stdin/stdout redirection? The two data-relay threads would then be useless and could be finished/closed. The question is whether the server should ever close the sockets and pipes, and how it would know (so it can log the fact) when a pipe is broken.

I've also found "socket pairs", but I'm not sure what they are for.
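On the side question: a socket pair (socketpair(2)) just creates two pre-connected AF_UNIX sockets inside one process - a bidirectional pipe for IPC. It cannot join two already-open TCP sockets, so it does not solve the relay problem by itself. A minimal sketch (the wrapper name is mine):

```c
#include <sys/socket.h>

/* Create a connected AF_UNIX stream pair: whatever is written to
 * sv[0] can be read from sv[1], and vice versa.
 * Returns 0 on success, -1 on error. */
int make_local_pair(int sv[2]) {
    return socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
}
```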

What solution would you advise to off-load the work from userland and limit the number of threads?

Some explanations:

  • The server has a statically defined routing table (e.g. id_a id_b - paired identifiers). A client connects to the server and sends its id_a. The server then waits for client B. When A and B are paired (both sockets open), the server starts the data relay.
  • The clients are simple devices behind symmetric NAT, so the n2n protocol or other NAT-traversal techniques are too complex for them.
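The pairing step described above can be sketched as a small waiting table: when a client announces itself, check whether its peer is already parked; if so, hand back the peer's descriptor so the relay can start. All names and sizes here are illustrative, not from the original server.

```c
/* Hypothetical pairing table for the static routing scheme. */
#define MAX_PENDING 2000

struct pending { int id; int fd; };
static struct pending waiting[MAX_PENDING];
static int nwaiting = 0;

/* Register my_fd under my_id. If the peer with peer_id is already
 * waiting, remove it from the table and return its fd so the relay
 * can start; otherwise park this connection and return -1. */
int register_client(int my_id, int peer_id, int my_fd) {
    for (int i = 0; i < nwaiting; i++) {
        if (waiting[i].id == peer_id) {
            int peer_fd = waiting[i].fd;
            waiting[i] = waiting[--nwaiting];   /* swap-remove */
            return peer_fd;
        }
    }
    if (nwaiting < MAX_PENDING)
        waiting[nwaiting++] = (struct pending){ my_id, my_fd };
    return -1;   /* peer not connected yet; caller must wait */
}
```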

Thanks to Gerhard Rieger, I have a hint:

I am aware of two kernel-space ways to avoid read/write, recv/send in user space:

  • sendfile
  • splice

Both have restrictions regarding the type of file descriptor.
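splice(2) in particular requires one side of the transfer to be a pipe, so a socket-to-socket relay needs a pipe in the middle: splice from socket A into the pipe, then from the pipe into socket B. Userland still drives the loop (typically from a poll/epoll readiness event), but the payload never leaves kernel buffers. A sketch, with one pipe per direction per A-B pair assumed:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Move one chunk from fd_in to fd_out through the pipe (p_r, p_w)
 * without copying the data through userland.
 * Returns bytes moved, 0 on EOF, -1 on error. */
ssize_t splice_chunk(int fd_in, int fd_out, int p_r, int p_w) {
    /* fill the pipe from the input descriptor */
    ssize_t n = splice(fd_in, NULL, p_w, NULL, 65536,
                       SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
    if (n <= 0)
        return n;
    /* drain the pipe into the output descriptor */
    for (ssize_t left = n; left > 0; ) {
        ssize_t m = splice(p_r, NULL, fd_out, NULL, (size_t)left,
                           SPLICE_F_MOVE);
        if (m <= 0)
            return -1;
        left -= m;
    }
    return n;
}
```

This is still "active" in the sense that a userland loop has to call it, but it removes the double copy of the recv/send approach, and one thread (or one epoll loop) can drive many pairs.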

dup2() will not help here, AFAIK - it only makes two descriptors refer to the same open file; the kernel will not relay data between them.

Man pages: splice(2), vmsplice(2), sendfile(2), tee(2)

Related links:

BSD implements SO_SPLICE:

Does Linux support something similar, or does it have a kernel-module solution of its own?

