Limit bandwidth on a specific port in CentOS 7?

I am running CentOS 7 on my VPS and I would like to limit bandwidth on a specific port. I have looked around extensively, and out of the solutions I can find, either it’s a limit placed on an interface, or it’s a vaguely described iptables setup that seems to have only been tried on CentOS 6.

In my case, my Shadowsocks (a proxy application) serverside is listening on port 1080, 1081, and 1082 on eth0. I would like to allow 1080 unlimited bandwidth, but limit both 1081 and 1082 to around 1MBps. Since it’s a proxy application the inbound and outbound traffic is roughly equal. Note that it is a single instance of Shadowsocks listening on 3 ports, NOT 3 instances listening on 1 port each, so limiting bandwidth by process is not applicable.

But otherwise any solution is on the table for me, whether it’s something CentOS supports out of the box, or some kind of intermediate monitoring layer. As long as it gets the job done I’m open to it.

Thanks in advance.

Here is a solution:

Solution 1

Traffic can be limited using only Linux’s Traffic Control.

Just to clarify: shadowsocks creates a tunnel with one side acting as a SOCKS5 proxy (sslocal; I’m assuming that’s what is running on the OP’s server, given the ports), communicating with a remote endpoint (ssserver) which will itself communicate with the actual target servers. shadowsocks handles SOCKS5 UDP ASSOCIATE, and then uses (SOCKS5) UDP on the same port as the (SOCKS5) TCP port.

This solution works as is (see note 1) for both TCP and UDP, except that UDP poses an additional challenge: if a source creates “bigger than MTU” sized UDP packets (which a well-behaving client or server probably shouldn’t do), they get fragmented. tc, which works earlier than netfilter on ingress and later than netfilter on egress, will see the fragments. The UDP port is not available in fragments, so no filter will be able to catch them and almost no limitation will happen. TCP, which naturally limits packet size to the MTU (and does path MTU discovery anyway), doesn’t suffer from this issue in most settings.

Here’s a packet flow ascii picture (the whole picture would typically represent one client activity resulting in two flows, one to the left and one to the right of the proxy):

              traffic controlled      TCP self-adjusting / no UDP control
             ------------->               <-------------
           /                \           /                \
  clients |                  |  proxy  |                  |  remote  ====== real servers
           \                / (sslocal) \                / (ssserver)
             <-------------               ------------->
              traffic controlled       already rate limited

There’s no need to worry about the traffic with the remote server:

  • outgoing from proxy to remote server will of course be limited by the clients’ incoming traffic,
  • incoming from remote server to proxy:
    • TCP will typically adjust and behave like the traffic on the clients’ side.
    • UDP has no such ability, unless the application protocol provides it. E.g. if two video feeds over plain UDP arrive from the server side and exceed the limit on the client side, both client flows will likely be corrupted. There would have to be application-level feedback to reduce bandwidth, which is out of scope here.

In any case, linking the remote/server side’s traffic to the client side for tc purposes would become much more complex, probably involving changes inside shadowsocks.

For SOCKS5 clients that only send data, limiting ingress from them is what limits bandwidth; for SOCKS5 clients that only receive data, limiting egress to them is what limits bandwidth. Unless the application in use is well known, both directions should be traffic controlled.

Traffic Control is a complex topic that I can barely scratch the surface of here. I’ll give two kinds of answers: a simple and crude one doing policing only (dropping the excess), and a more complex one doing shaping (including delaying packets before having to drop them), with an IFB interface to work around the limitations of ingress.

The documentation below should be read to understand the concepts and the Linux implementation:

http://www.tldp.org/HOWTO/Traffic-Control-HOWTO/

This tool, implemented as a shell script (and using mechanisms similar to those in this answer), can really do wonders too:

https://github.com/magnific0/wondershaper

Simple and crude

A police action is used to drop any excess packet matching the ports (a crude method). It’s usually used on ingress but works on egress too. Traffic is rate limited, but there may be fluctuations and unfair sharing among the various rate-limited clients (especially if UDP and TCP are mixed).
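
As a sanity check on the units used below: tc’s mibit is 1024×1024 bits per second, so the OP’s ~1 MB/s target corresponds to rate 8mibit. A throwaway shell calculation (plain POSIX arithmetic, nothing tc-specific):

```shell
#!/bin/sh
# Sanity check: express a 1 MiB/s target in tc's "mibit" unit.
bytes_per_sec=$((1024 * 1024))           # 1 MiB/s target
bits_per_sec=$((bytes_per_sec * 8))      # 8388608 bit/s
mibit=$((bits_per_sec / (1024 * 1024)))  # tc unit => "rate 8mibit"
echo "${bits_per_sec} bit/s = ${mibit} mibit"
```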

  • egress (outgoing packets)

The simplest qdisc that allows attaching filters is the prio qdisc, whose specific features won’t really be used here.

    tc qdisc add dev eth0 root handle 1: prio
    

    Simply adding the following filters (8 mibit/s ≈ 1 MB/s), one per port (u16 at 0 layer transport means “source port”), will get it done for TCP and UDP (see also note 2):

    tc filter add dev eth0 parent 1: protocol ip basic match 'cmp(u16 at 0 layer transport eq 1081)' action police rate 8mibit burst 256k
    tc filter add dev eth0 parent 1: protocol ip basic match 'cmp(u16 at 0 layer transport eq 1082)' action police rate 8mibit burst 256k
    

    In case I misunderstood and there should be only one common limit for 1081 and 1082, use this instead of the two commands above; it groups both ports into the same action (easy with the basic/ematch filter), which then handles them with a single token bucket:

    tc filter add dev eth0 parent 1: protocol ip basic match 'cmp(u16 at 0 layer transport eq 1081) or cmp(u16 at 0 layer transport eq 1082)' action police rate 8mibit burst 256k
    
  • ingress (incoming packets)

    Ingress is more limited than egress (it can’t do shaping), but shaping wasn’t done in the simple case anyway. Using it just requires adding the ingress qdisc (see note 3):

    tc qdisc add dev eth0 ingress
    

    The equivalent filters (u16 at 2 layer transport means “destination port”):

    tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1081)' action police rate 8mibit burst 256k
    tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1082)' action police rate 8mibit burst 256k
    

    or for a single limit, instead of the two above:

    tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1081) or cmp(u16 at 2 layer transport eq 1082)' action police rate 8mibit burst 256k
    
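To check that the simple setup is in place and see whether the policers are actually dropping packets, these read-only inspection commands can be used (a sketch, assuming eth0 as above and root privileges):

```shell
# Read-only inspection of the tc state set up above (run as root).
tc qdisc show dev eth0              # should list the prio root qdisc and the ingress qdisc
tc filter show dev eth0 parent 1:   # the egress basic/police filters
tc -s filter show dev eth0 ingress  # -s adds per-policer packet and drop counters
```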

Cleaning tc

The egress and/or ingress settings above can be replaced with their improved versions below. The previous settings should be cleaned first.

To remove previously applied tc settings, simply delete the root and ingress qdiscs. Everything below them, including filters, will also be removed. The default interface root qdisc with the reserved handle 0: will be put back.

tc qdisc del dev eth0 root
tc qdisc del dev eth0 ingress
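
These two deletions can be wrapped in a small helper script so that re-applying a setup is idempotent; a sketch (the helper name and the DEV parameter are mine, and 2>/dev/null || : tolerates the case where no qdisc had been added yet):

```shell
#!/bin/sh
# Hypothetical helper: reset tc state on an interface before re-applying a setup.
DEV=${1:-eth0}
tc qdisc del dev "$DEV" root    2>/dev/null || :  # removes root qdisc, classes and filters
tc qdisc del dev "$DEV" ingress 2>/dev/null || :  # removes ingress qdisc and its filters
```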

More complex setup with classful qdiscs and IFB interface

The use of shaping, which can delay packets before having to drop them, should improve overall results. Hierarchy Token Bucket (HTB), a classful qdisc, will handle bandwidth, while below it Stochastic Fairness Queueing (SFQ) will improve fairness between clients competing within the restricted bandwidth.

  • egress

    Here’s an ascii picture describing the next settings:

                        root 1:   HTB classful qdisc
                          |
                        / | \
                       /  |  \
                      /   |   \
                     /    |    \
                    /    1:20  1:30  HTB classes
                   /    8mibit  8mibit
                  /       |       \
                 /        |        \
                /        20:       30:
               /         SFQ       SFQ
         still 1:
         default         port         port
    incl. port 1080      1081         1082
    

    The limited bandwidths will not borrow extra available bandwidth (the OP didn’t ask for that): that’s why they aren’t subclasses of a “whole available bandwidth” default class. The remaining default traffic, including port 1080, just stays at 1:, with no special handling. In settings where classes are allowed to borrow available bandwidth, those classes should be put below a parent class whose rate is set to an accurate value of the maximum available bandwidth, so they know how much can be borrowed. Such a configuration would require fine-tuning for each case, so I kept it simple.

    The htb classful qdisc:

    tc qdisc add dev eth0 root handle 1: htb
    

    The htb classes, attached sfq, and filters directing to them:

    tc class add dev eth0 parent 1: classid 1:20 htb rate 8mibit
    tc class add dev eth0 parent 1: classid 1:30 htb rate 8mibit
    
    tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
    tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
    
    tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1081)' flowid 1:20
    tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1082)' flowid 1:30
    

    or for a single limit, instead of the 6 commands above:

    tc class add dev eth0 parent 1: classid 1:20 htb rate 8mibit
    tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
    tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1081)' flowid 1:20
    tc filter add dev eth0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 1082)' flowid 1:20
    
  • ingress

    The ingress qdisc can’t be used for shaping (e.g. delaying packets), only for dropping packets with filters as in the simple case. To get better control, a trick is available: the Intermediate Functional Block, which appears as an artificial egress interface where ingress traffic can be redirected with filters, but which otherwise has little interaction with the rest of the network stack. Once in place, egress features can be applied on it, even if some of them might not always be helpful, considering that the real control of incoming traffic is not in the hands of the receiving system. So here I set up the ifb0 interface and then duplicate the above (egress) settings on it, to get a sort of ingress shaping that behaves better than plain policing.

    Creating ifb0 (see note 4) and applying the same settings as the egress setup above:

    ip link add name ifb0 type ifb 2>/dev/null || :
    ip link set dev ifb0 up
    
    tc qdisc add dev ifb0 root handle 1: htb
    

    Classes and filters directing to them:

    tc class add dev ifb0 parent 1: classid 1:20 htb rate 8mibit
    tc class add dev ifb0 parent 1: classid 1:30 htb rate 8mibit
    
    tc qdisc add dev ifb0 parent 1:20 handle 20: sfq perturb 10
    tc qdisc add dev ifb0 parent 1:30 handle 30: sfq perturb 10
    
    tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1081)' flowid 1:20
    tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1082)' flowid 1:30
    

    or for a single limit, instead of the 6 commands above:

    tc class add dev ifb0 parent 1: classid 1:20 htb rate 8mibit     
    tc qdisc add dev ifb0 parent 1:20 handle 20: sfq perturb 10
    tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1081)' flowid 1:20
    tc filter add dev ifb0 parent 1: protocol ip prio 1 basic match 'cmp(u16 at 2 layer transport eq 1082)' flowid 1:20
    

    The redirection from eth0’s ingress to ifb0’s egress is set up below. To optimize, redirect only the intended ports instead of all traffic; the actual filtering and shaping is done above on ifb0 anyway.

    tc qdisc add dev eth0 ingress
    tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1081)' action mirred egress redirect dev ifb0
    tc filter add dev eth0 ingress protocol ip basic match 'cmp(u16 at 2 layer transport eq 1082)' action mirred egress redirect dev ifb0
    

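For completeness, the borrowing variant mentioned earlier (not what the OP asked for) would place the limited classes under a parent class sized to the real link. A sketch, where the 100mbit uplink and the 84mbit default share are my assumptions and must be tuned per case:

```shell
# Borrowing variant (assumption: ~100mbit uplink; tune rate/ceil to your link).
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 84mbit ceil 100mbit  # default traffic, incl. port 1080
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 8mibit ceil 100mbit  # port 1081, may borrow idle bandwidth
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 8mibit ceil 100mbit  # port 1082, may borrow idle bandwidth
```

The same port filters as above would still be needed to direct traffic into 1:20 and 1:30.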
Notes:

1. Tested using a few network namespaces on Debian 10 / kernel 5.3. Command syntax was also tested on a CentOS 7.6 container / kernel 5.3 (rather than 3.10).

2. u32 match ip sport 1081 0xffff could have been used instead to match source port 1081, but it wouldn’t handle the presence of an IP option. u32 match tcp src 1081 0xffff could handle it, but that actually requires the complex usage of three u32 filters, as explained in the man page. So I chose basic match in the end.

3. ingress has the reserved handle ffff: whether specified or not (the specified handle value is ignored), so I’d rather not specify it. Referencing ingress by parent ffff: can be replaced with just ingress, which is what I chose.

4. When creating an IFB interface for the first time, the ifb module gets loaded, which by default automatically creates the ifb0 and ifb1 interfaces in the initial namespace. This results in an error when the interface name ifb0 is requested, even though it was actually created as a side effect of the command. At the same time, this interface doesn’t appear in a network namespace (e.g. a container) just from loading the module, so the command is still needed there. Adding 2>/dev/null || : solves both cases. Of course I’m assuming IFB support is actually available.
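
The u32 alternative discussed in note 2 would look like this for one egress port (a sketch; it matches correctly only as long as the IP header carries no options):

```shell
# u32 equivalent of one egress filter (fails to match if IP options are present).
tc filter add dev eth0 parent 1: protocol ip u32 match ip sport 1081 0xffff action police rate 8mibit burst 256k
```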


This method was sourced from stackoverflow.com or stackexchange.com and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
