Bandwidth Allocation

Hello

I am developing a server and I want to view and change the clients'
TCP connection bandwidth.
Please help me.

Salman Ali
 
It seems the problem is not trivial. I think the issue of I/O through
sockets and ports is complex...that's one reason it's hardly ever
covered in most software books. I did a quick Google search and found a
lot of Cisco hits...something their software engineers work on every
day, I'm sure.

Good luck, Ali,

RL
 
salman said:
I am developing a server and I want to view and change the clients'
TCP connection bandwidth.

It's not really clear from your question what you want to do. However,
assuming you want to do the possible rather than the impossible, the
basic idea is simple:

* Displaying bandwidth requires that you count how many bytes have
been transferred over a certain amount of time. How exactly to
calculate this depends on what you want to display to the user. If you
want to display total cumulative bandwidth, you'll just note the
starting time for a transfer, count total bytes, and calculate the
current bandwidth based on the ratio of the two.
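
To make that concrete, here's a minimal sketch of the cumulative
calculation (Python, purely illustrative; the class and method names
are made up):

    import time

    class CumulativeBandwidth:
        # Tracks total cumulative bandwidth for a single transfer.
        def __init__(self):
            self.start = time.monotonic()
            self.total_bytes = 0

        def record(self, nbytes):
            # Call whenever a chunk of bytes is sent or received.
            self.total_bytes += nbytes

        def bytes_per_second(self):
            # Current bandwidth is the ratio of total bytes to elapsed time.
            elapsed = time.monotonic() - self.start
            return self.total_bytes / elapsed if elapsed > 0 else 0.0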

If you want some sort of trailing average, you'll have to decide how
often to update the average and how many intervals back the average
should cover. One technique is to keep a list of the byte count for
each interval, so that you can subtract the oldest byte count as each
new interval completes (the time part will be constant, based on the
interval length and the number of intervals).
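
Here's one way to sketch that fixed-interval list in Python (assuming a
single-threaded transfer loop; all names and defaults are illustrative):

    import time
    from collections import deque

    class TrailingAverage:
        # Trailing average over the last num_intervals fixed-length intervals.
        def __init__(self, interval_seconds=1.0, num_intervals=10):
            self.interval = interval_seconds
            # a deque with maxlen drops the oldest count automatically
            self.counts = deque(maxlen=num_intervals)
            self.current = 0
            self.boundary = time.monotonic() + interval_seconds

        def record(self, nbytes):
            now = time.monotonic()
            while now >= self.boundary:
                # an interval has elapsed: bank its count and start a new one
                self.counts.append(self.current)
                self.current = 0
                self.boundary += self.interval
            self.current += nbytes

        def bytes_per_second(self):
            if not self.counts:
                return 0.0
            # the time part is constant: interval length times intervals kept
            return sum(self.counts) / (len(self.counts) * self.interval)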

Alternatively, you can do some sort of weighted average, in which all of
your bytes are counted, but the most recent ones are weighted more
heavily; this would allow you to calculate the trailing average without
keeping track of all of the byte counts for some fixed number of intervals.
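
The weighted-average variant can be as simple as an exponentially
weighted moving average, which needs only one stored value instead of a
list of counts (again a sketch; the alpha value is an arbitrary choice):

    class WeightedAverage:
        # Exponentially weighted moving average of the transfer rate.
        def __init__(self, alpha=0.2):
            self.alpha = alpha  # higher alpha favors recent samples
            self.rate = None

        def update(self, bytes_this_interval, interval_seconds):
            sample = bytes_this_interval / interval_seconds
            if self.rate is None:
                self.rate = sample
            else:
                # recent sample weighted by alpha, older history by (1 - alpha)
                self.rate = self.alpha * sample + (1 - self.alpha) * self.rate
            return self.rate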

* Changing the bandwidth is a little more complicated, but not
much. Obviously you can only restrict bandwidth; if the user wants more
bandwidth than their network will support, you can't provide that.

For restricting bandwidth, you'll simply calculate a maximum number of
bytes to transfer in some period of time, and when you reach that limit,
wait until the period of time has expired. The period of time should be
long enough that you can still send a reasonable amount of data in each
"burst", but short enough that the network transfer is reasonably smooth.

The basic idea here is that if you have some requirement to not send
data faster than, say, 100K per second then you include logic in your
code that counts bytes sent and only allows 100K to go through for any
one second interval (or 50K in a half second interval, or 200K in a two
second interval, etc.).
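
As a sketch of that counting logic (Python, assuming a blocking socket;
the limit, period, and chunk size are arbitrary example values):

    import time

    def send_throttled(sock, data, max_bytes_per_sec=100 * 1024, period=1.0):
        # Sends data, never exceeding the byte budget in any one period.
        budget = int(max_bytes_per_sec * period)
        sent = 0
        period_start = time.monotonic()
        bytes_this_period = 0
        while sent < len(data):
            if bytes_this_period >= budget:
                # budget used up: wait out the rest of the period
                remaining = period - (time.monotonic() - period_start)
                if remaining > 0:
                    time.sleep(remaining)
                period_start = time.monotonic()
                bytes_this_period = 0
            chunk_size = min(budget - bytes_this_period, 8192)
            n = sock.send(data[sent:sent + chunk_size])
            sent += n
            bytes_this_period += n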

Pete
 
Thanks, Peter.
I am working on a trading application, and there is a 1 Mbit link on the
trading server. Normally 25 to 30 TCP clients are connected to the
server at a time. If more than 30 clients (TCP connections) connect to
the server, then performance goes down.

The trading server sends stock market feeds to clients on request; on
the client side there is a timer which sends a 'request for feed' to the
server after a specific time interval, and then the server sends the
feed to the client.
Hope you all understand the problem.

Salman Ali
 
salman said:
Thanks, Peter.
I am working on a trading application, and there is a 1 Mbit link on the
trading server. Normally 25 to 30 TCP clients are connected to the
server at a time. If more than 30 clients (TCP connections) connect to
the server, then performance goes down.

The trading server sends stock market feeds to clients on request; on
the client side there is a timer which sends a 'request for feed' to the
server after a specific time interval, and then the server sends the
feed to the client.

If by "going down" you mean "using up all the bandwidth", the simplest
solution is on the client side: simply send the requests less often.

From the server side, if the problem is really that more than
1 Mbit/sec of data is required by the protocol, there's not much to be
done besides making the protocol more efficient or buying more
bandwidth. If the problem is instead that the requests come in bursts,
queueing them and handling them one at a time might help.
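
A bare-bones version of that queueing idea (Python sketch; send_feed is
a hypothetical stand-in for whatever routine writes the feed to a
client):

    import queue
    import threading

    request_queue = queue.Queue()

    def feed_worker():
        # Drain client feed requests one at a time instead of in bursts.
        while True:
            client_sock, request = request_queue.get()
            try:
                send_feed(client_sock, request)  # hypothetical send routine
            finally:
                request_queue.task_done()

    threading.Thread(target=feed_worker, daemon=True).start()

    # The receive path just enqueues instead of replying immediately:
    #   request_queue.put((client_sock, request))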

Good luck!

----Scott.
 
My apologies, but for some reason salman's reply didn't make it to my
news server. So I'm going to reply to the reply to his post. :)


In addition to what Scott already said, I'll ask this: for whom is
performance going down?

If you have some other critical application on the network that is being
affected, then I can see why you'd want to limit the bandwidth of the
(apparently?) non-critical application. As Scott says, given what
you've written so far it sounds like just extending the timer for the
requests would be beneficial.

Perhaps introduce some sort of randomization to the timer so that you
minimize the effects of clients all hitting the network at the same
time, or do as Scott suggests and use some sort of queueing mechanism
that ensures the clients never all make the request at once.
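
The randomization can be as simple as jittering the client's timer
interval (a sketch; the base interval and jitter fraction are made-up
values):

    import random

    BASE_INTERVAL = 5.0  # seconds between feed requests (assumed value)
    JITTER = 0.2         # plus or minus 20% randomization

    def next_request_delay():
        # Spread client requests out so they don't all fire at once.
        return BASE_INTERVAL * random.uniform(1 - JITTER, 1 + JITTER)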

Beyond that, a couple of non-software suggestions:

* If you have a critical application that you want to avoid
interfering with, perhaps the best solution is to ensure that
application is on its own network, separate from the other stuff.

* If you don't have a critical application and it's just
performance of these clients that suffers, then perhaps letting that
happen is truly the simplest solution. After all, limiting bandwidth
artificially is going to cause performance to suffer anyway, but it will
be all the time instead of just when you have a lot of clients present.
It's true that network congestion can make things worse than just
limiting bandwidth, but IMHO you'd have to get far more clients than
just a few over the limiting number before that starts to be a really
bad problem.

If you do need to limit bandwidth, I'd go with Scott's suggestion for
something like this. The more general technique of actually monitoring
throughput and using timing to throttle your network I/O is probably
better suited to streaming applications, like file transfers, media
applications, etc. If you have a well-defined protocol, you can
essentially pre-compute all the timing work and just tune a single
interval that controls how often the protocol uses the network.

Pete
 
