reading into the buffer


puzzlecracker

Say I want to arrange bytes in the internal buffer in a certain way. I
receive those bytes from the socket.

One solution is to read from the socket in pieces:

byte[] buffer = new byte[65536];
int index = 0;
m_socket.Receive(buffer, 8, SocketFlags.None);
index += 8;
// read the rest of the message
// int msgLength is the remaining number of bytes in the buffer
m_socket.Receive(buffer, index, msgLength - index, SocketFlags.None);


Questions:

1. How do I calculate msgLength?
2. How do I do this when reading the data asynchronously, using
BeginReceive()?

Thanks
 

puzzlecracker

Say I want to arrange bytes in the internal buffer in a certain way. I
receive those bytes from the socket.

One solution is to read from the socket in pieces:

byte[] buffer = new byte[65536];
int index = 0;
m_socket.Receive(buffer, 8, SocketFlags.None);
index += 8;
// read the rest of the message
// int msgLength is the remaining number of bytes in the buffer
m_socket.Receive(buffer, index, msgLength - index, SocketFlags.None);
Questions:

1. How do I calculate msgLength?

If by "remaining number of bytes in the buffer", you mean ready to be read  
 from the socket, the answer is "you don't".  For the same reason that  
using a method like Poll() or Select() cannot provide any guarantees about  
what will happen when you actually try to read from the socket, there's no  
reliable way to calculate the bytes yet unread in the socket buffer in a  
way that will allow you to depend on the calculation in a later call to  
Receive().

The best you can do is provide a buffer to Receive() that is just large  
enough for the remaining bytes you expect to get for a given message (I'm 
assuming that the data in the first eight bytes in some way allow you to  
calculate that).  Then just keep reading in a loop until you've read that  
many bytes, decreasing the length you offer to Receive() with each  
iteration to take into account the number of bytes read already.
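
For illustration, here is a minimal sketch of that loop. The assumption
that the 8-byte header holds the total message length as a big-endian
long is mine, purely for the example; use whatever your protocol
actually defines.

static void ReceiveExact(Socket socket, byte[] buffer, int offset, int count)
{
    while (count > 0)
    {
        int n = socket.Receive(buffer, offset, count, SocketFlags.None);
        if (n == 0) // remote side closed the connection mid-message
            throw new SocketException((int)SocketError.ConnectionReset);
        offset += n;
        count -= n;
    }
}

// usage: read the 8-byte header, then the rest of the message
byte[] buffer = new byte[65536];
ReceiveExact(m_socket, buffer, 0, 8);
long msgLength = IPAddress.NetworkToHostOrder(BitConverter.ToInt64(buffer, 0));
ReceiveExact(m_socket, buffer, 8, (int)msgLength - 8);

(Socket and SocketException live in System.Net.Sockets; IPAddress in
System.Net.)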

That said, this is only something you should be doing if you don't need  
efficient i/o.  There's a lot of overhead going back and forth between  
your code and the network buffers, and if there's a lot of data coming  
through, you can wind up forcing the network driver to have to throw out  
incoming data, and making the other end send it again.

A much better approach is for the network i/o code to always just read as
much data as it can, and let another layer of your code deal with parsing
that out into usable messages. Doing so won't necessarily address the
above question directly (after all, the network i/o layer is probably
still going to be delivering data as a stream), but it does shift the
problem into a domain where you have a bit more control.
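
For illustration, a tiny version of such a parsing layer might look like
this. The 4-byte big-endian length prefix is an assumed wire format, not
anything from the original post:

using System;
using System.Collections.Generic;
using System.Net;

class MessageFramer
{
    private readonly List<byte> _pending = new List<byte>();

    // feed in whatever Receive() returned; yields each complete message body
    public IEnumerable<byte[]> Append(byte[] data, int count)
    {
        for (int i = 0; i < count; i++)
            _pending.Add(data[i]);

        while (_pending.Count >= 4)
        {
            int length = IPAddress.NetworkToHostOrder(
                BitConverter.ToInt32(_pending.GetRange(0, 4).ToArray(), 0));
            if (_pending.Count < 4 + length)
                break; // partial message; wait for more data
            byte[] body = _pending.GetRange(4, length).ToArray();
            _pending.RemoveRange(0, 4 + length);
            yield return body;
        }
    }
}

The network i/o code just calls Append() with each chunk it reads, and
complete messages fall out the other side no matter how the stream was
split up in transit. (No validation of the length field is done here;
real code would want some.)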
2. How do I do this when reading the data asynchronously, using
BeginReceive()?

The same way you'd deal with any transition in design from synchronous to 
asynchronous.  State that is kept in local variables simply has to be  
moved into instance variables that can be accessed as each network  
operation completes.
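
For example, the fixed-length read above might come out something like
this with BeginReceive(); a rough sketch, with made-up names:

using System;
using System.Net.Sockets;

class AsyncReader
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[65536];
    private int _offset;    // was a local in the synchronous version
    private int _remaining; // likewise

    public AsyncReader(Socket socket) { _socket = socket; }

    public void Start(int messageLength)
    {
        _offset = 0;
        _remaining = messageLength;
        _socket.BeginReceive(_buffer, _offset, _remaining, SocketFlags.None,
                             OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int n = _socket.EndReceive(ar);
        if (n == 0)
            return; // connection closed
        _offset += n;
        _remaining -= n;
        if (_remaining > 0) // not done yet; issue the next read
            _socket.BeginReceive(_buffer, _offset, _remaining, SocketFlags.None,
                                 OnReceive, null);
        else
            OnMessageComplete(); // the whole message is now in _buffer
    }

    private void OnMessageComplete()
    {
        // hand _buffer off to the parsing layer
    }
}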

Pete

Is there another .NET newsgroup dedicated to networking issues related to
C# and .NET in general?
 

Steve

puzzlecracker said:
Say I want to arrange bytes in the internal buffer in a certain way. I
receive those bytes from the socket.

One solution is to read from the socket in pieces:

byte[] buffer = new byte[65536];
int index = 0;
m_socket.Receive(buffer, 8, SocketFlags.None);
index += 8;
// read the rest of the message
// int msgLength is the remaining number of bytes in the buffer
m_socket.Receive(buffer, index, msgLength - index, SocketFlags.None);


Questions:

1. How do I calculate msgLength?
2. How do I do this when reading the data asynchronously, using
BeginReceive()?

Thanks

The Receive function returns the number of bytes received. The actual
number received may be (and often is) less than the number of bytes
requested (what I consider to be a quirk of BSD sockets). Here is a small
example of some working code that receives the next 4 bytes from a socket
stream.

byte[] headerBuffer = new byte[4];
int bytesReceived = 0;
int bytesToReceive = 4;
while (bytesToReceive > bytesReceived)
{
    int n = clientSocket.Receive(headerBuffer, bytesReceived,
                                 bytesToReceive - bytesReceived,
                                 SocketFlags.None);
    if (n == 0) // zero means the peer closed; without this check the loop spins forever
        throw new SocketException((int)SocketError.ConnectionReset);
    bytesReceived += n;
}

The basic idea is to repeat receiving until the number of bytes asked for
has been received. The Receive function takes an offset into the buffer at
which to append received data, which is convenient: you can just keep
passing the same base buffer address.

I can't help with BeginReceive; I've never used it. I usually run separate
send and receive threads. The send thread takes messages from a queue and
sends them out the socket, and the receive thread receives messages from the
socket and makes them available via a queue that is read by the rest of the
application.
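
A compact sketch of that receive-thread arrangement might look like the
following (BlockingCollection is a newer convenience type, and all names
here are made up):

using System;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

class ReceiveLoop
{
    private readonly Socket _socket;

    // the rest of the application reads from this queue
    public BlockingCollection<byte[]> Chunks { get; } =
        new BlockingCollection<byte[]>();

    public ReceiveLoop(Socket socket) { _socket = socket; }

    public void Start()
    {
        new Thread(() =>
        {
            byte[] buffer = new byte[8192];
            int n;
            // read whatever arrives and queue it for the application
            while ((n = _socket.Receive(buffer, 0, buffer.Length,
                                        SocketFlags.None)) > 0)
            {
                byte[] copy = new byte[n];
                Array.Copy(buffer, copy, n);
                Chunks.Add(copy);
            }
            Chunks.CompleteAdding(); // connection closed
        }) { IsBackground = true }.Start();
    }
}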

Regards,
Steve
 

Steve

Peter Duniho said:
[snip]

But that's hardly a universal condition for network code. It's not
something one would want generally built into the network API itself, and
I don't really think that "quirk" is the right word to describe how TCP
and the APIs available to use it work.

Pete

The behavior you describe (and the way the receive works) is what I would
have expected from non-blocking sockets. With non-blocking sockets I would
expect to use "select" if I want to wait for data to show up, which permits a
timeout, and then use recv to get whatever data is available.

With blocking sockets I would expect that if I say I want to receive 128
characters, then I know I am expecting 128 characters, and don't bother me
until they show up. I know that isn't the way it works, but that was the
behavior I was expecting before I learned how things really work.

In fact I think the API would be easier to use if I could say "receive n
bytes of data, or time out if no new data shows up for u milliseconds or the
entire message doesn't show up in v milliseconds."

But, we live with what we have... and it works.
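
That combination can at least be approximated at the application level:
Socket.ReceiveTimeout bounds each individual Receive() call (the per-call
"u"), and a Stopwatch can enforce an overall deadline for the whole
message (the "v"). A sketch, with made-up names:

using System;
using System.Diagnostics;
using System.Net.Sockets;

static void ReceiveWithTimeouts(Socket socket, byte[] buffer, int count,
                                int idleTimeoutMs, int totalTimeoutMs)
{
    // Receive() throws SocketError.TimedOut if nothing arrives in time
    socket.ReceiveTimeout = idleTimeoutMs;
    var clock = Stopwatch.StartNew();
    int offset = 0;
    while (count > 0)
    {
        if (clock.ElapsedMilliseconds > totalTimeoutMs)
            throw new TimeoutException("message did not arrive within the deadline");
        int n = socket.Receive(buffer, offset, count, SocketFlags.None);
        if (n == 0) // remote side closed the connection
            throw new SocketException((int)SocketError.ConnectionReset);
        offset += n;
        count -= n;
    }
}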

Regards,
Steve
 

Steve

[snip]
So, it's your assertion that only protocols which advertise the number of
bytes to be received in advance should be allowed to be implemented using
blocking sockets?

It would certainly be more intuitive.
There are a number of things about network programming that prove
non-intuitive to the beginner. That doesn't mean those things are wrong.
It just means that the beginner doesn't have a broad enough view of the
API to understand how things _have_ to work in order for the API to be
consistent and usable.

Either that or there wasn't enough thought put into the API.

When I read a file from disk and ask for 128 bytes of data, I expect not to
return from the read until I receive 128 bytes of data. When I read 128
bytes of data from a serial port, I realize that the data may never show up,
so I may want to read 128 bytes of data with a specified timeout, and
receive a status if not all of the data shows up. I may also want the
ability to just read the number of bytes that are currently available.

[snip]
It works quite well, and it wouldn't work at all if designed without the
behavior that you say is "a quirk".

The thing that I find to be a quirk is that when I say receive 128 bytes of
data and block until those 128 bytes of data show up... I have to see how
many bytes came back from the call to recv, and loop to fill the data buffer
in my application.

I understand that there are cases where this is not the behavior you want,
but when you do want that behavior you have to add it at the application
level. To me it seems like the kind of behavior that belongs in the
driver, not at the application level. When the logic is in the driver it
also tends not to incur as much application overhead.

Of course my view is different from the view of someone writing one of the
higher-level protocols where sizes are not known. Most of the applications
I have worked with have been in a quasi-realtime environment where I expect
to wait until data shows up, and I have to time out when it doesn't so I can
resolve issues.

I don't buy the argument that "it wouldn't work at all if designed without
the behavior you say is a quirk". If you had to wait on one call that
tells you when and how much data is available, and then do a recv call to
read that much data, it would work just as well as the existing recv.
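
For what it's worth, that pattern can be spelled with the existing API,
with the caveat Pete raised earlier that Available is only a snapshot and
makes no promises about what the next Receive() will return. A sketch:

using System;
using System.Net.Sockets;

static byte[] ReceiveWhatIsThere(Socket socket, int timeoutMicroseconds)
{
    // wait until the socket is readable, or give up
    if (!socket.Poll(timeoutMicroseconds, SelectMode.SelectRead))
        return null; // timed out

    byte[] data = new byte[socket.Available];
    int n = socket.Receive(data, 0, data.Length, SocketFlags.None);
    if (n != data.Length)
        Array.Resize(ref data, n); // trim if less arrived than reported
    return data; // an empty array means the peer closed the connection
}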

Regards,
Steve
 

puzzlecracker

There are no networking issues related to C#.  The API is  
language-agnostic.

As far as a newsgroup specific to network code in .NET, none that I'm  
aware of.  However, note that -- as I've mentioned before -- to a large 
extent these issues aren't even specific to .NET; they are either inherent  
in networking generally, or in Winsock, on which the .NET networking API  
is built.

If you want a newsgroup that is more specific to networking questions than  
this one, you might try alt.winsock.programming or  
comp.os.ms-windows.programmer.tools.winsock.  There are other newsgroups  
that are even more general than those, not even being specific to Winsock.

Pete

Thanks, I'm taking a dive into learning Winsock.
 

Steve

[snip]
The entire Internet is founded on some fundamental design choices about
how TCP/IP works, including the option of endpoints to drop received data
and force a retry if they become overloaded or otherwise need to clear
some buffer space. These very basic design choices affect a number of
things, including how an API like sockets _has_ to work.

Clearly your position is that anything other than the way things are
currently implemented will not work.

I'll agree to disagree.

Regards,
Steve
 
