Question about Socket.Receive

Jon Berry

I set up a socket to receive an 8k byte array on its own thread in my C# application.

I found that the Receive method did not read all 8k and I had to loop and
read in chunks in order to receive all 8k.

Why doesn't the Receive method block until all data is read?

Is there another method that will "ReadFully" or is there a buffer I need to
set?

Thanks.
 
Peter Duniho

Jon said:
I set up a socket to receive an 8k byte array on its own thread in my C# application.

I found that the Receive method did not read all 8k and I had to loop
and read in chunks in order to receive all 8k.

Why doesn't the Receive method block until all data is read?

The primary answer: because that's how it's defined to work.

The elaboration: it is actually impossible for Receive() to guarantee
that it will fill the buffer you passed to it. If the end-of-stream is
reached before the buffer is filled, your choices are either:

-- block indefinitely, or
-- return before the buffer is filled

Presumably you would not like the method to block indefinitely. So,
your code _always_ has to be able to handle the situation where the
buffer you passed in is not completely filled.

This is true for all types of streams; for network streams, you also
have the issue that for a given stream, not all of the data necessarily
will even be sent all at once, nor is it possible to know the length of the
data that has been sent. It's quite common to have a single connection
over which some kind of interactive communication is performed. If
Receive() waited until its buffer was filled before it returned,
arbitrary, inefficient requirements would have to be imposed on
application protocols, just so that the receiver could know the size of
the buffer to be passed.

Instead, applications are allowed to implement whatever application
protocol they want or need to, and the method is simply defined to do
the best it can.
Is there another method that will "ReadFully" or is there a buffer I
need to set?

Not in the Socket class. Depending on what you're doing, you may find
the WebClient.DownloadFile() method useful, or it may well be that you
are mistaken in your belief that your code can be correctly written
while still assuming that the buffer you pass to the Receive() method
will always be filled.
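
For what it's worth, rolling your own "ReadFully" is only a few lines. A
minimal sketch, assuming a connected TCP Socket named socket and that you
already know how many bytes to expect (both are assumptions of the sketch,
not part of the Socket API):

using System.IO;
using System.Net.Sockets;

static class SocketReadHelper
{
    // Sketch only: keeps calling Receive() until `buffer` is completely
    // filled, or throws if the remote end closes the connection first.
    public static void ReadFully(Socket socket, byte[] buffer)
    {
        int total = 0;
        while (total < buffer.Length)
        {
            int read = socket.Receive(buffer, total, buffer.Length - total, SocketFlags.None);
            if (read == 0)
                throw new EndOfStreamException("Connection closed before the buffer was filled.");
            total += read;
        }
    }
}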

Pete
 
Jon Berry

Thanks Pete.

Why is the behavior different for UDP socket connections?

With UDP connections, the Receive method blocks until all data is received.
 
Peter Duniho

Jon said:
Thanks Pete.

Why is the behavior different for UDP socket connections?

With UDP connections, the Receive method blocks until all data is received.

Define "until all data is received".

Yes, normally with UDP you get an entire datagram at once. And the
reason is because UDP is a message-oriented protocol while TCP is a
stream-oriented protocol. In other words, with UDP every datagram is a
complete message unto itself. Not only would it not make much sense to
deliver part of a datagram, the network stack has enough information to
know where the datagram starts and ends, and so there's no risk of some
data arriving that the program needs to process but failing to return to
the caller because the caller asked for more.

So datagrams are always delivered as complete entities.

But even in that scenario, the datagram that is received by UDP will not
necessarily fill the buffer you passed to the Receive() or ReceiveFrom()
method. You'll get whatever data was in the datagram, and you only receive
one datagram at a time.
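
To illustrate, a minimal sketch of receiving a single datagram in C# (the
port number and buffer size here are arbitrary assumptions):

using System;
using System.Net;
using System.Net.Sockets;

class UdpReceiveSketch
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        socket.Bind(new IPEndPoint(IPAddress.Any, 9000));   // arbitrary local port

        byte[] buffer = new byte[8192];
        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);

        // Blocks until one datagram arrives, then returns that whole datagram --
        // but `received` can be anything from 0 up to buffer.Length bytes.
        int received = socket.ReceiveFrom(buffer, ref remote);
        Console.WriteLine("Got one datagram of {0} bytes from {1}", received, remote);
    }
}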

Pete
 
Tom Shelton

Define "until all data is received".

Yes, normally with UDP you get an entire datagram at once. And the
reason is because UDP is a message-oriented protocol while TCP is a
stream-oriented protocol. In other words, with UDP every datagram is a
complete message unto itself. Not only would it not make much sense to
deliver part of a datagram, the network stack has enough information to
know where the datagram starts and ends, and so there's no risk of some
data arriving that the program needs to process but failing to return to
the caller because the caller asked for more.

So datagrams are always delivered as complete entities.

But even in that scenario, the datagram that is received by UDP will not
necessarily fill the buffer you passed to the Receive() or ReceiveFrom()
method. You'll get whatever data was in the datagram, and you only receive
one datagram at a time.

Nor are you guaranteed that the datagrams will be received in the order
sent...
 
Peter Duniho

Tom said:
Nor are you guaranteed that the datagrams will be received in the order
sent...

Nor are you guaranteed that datagrams will be received at all, nor are
you guaranteed that any given datagram will be received only once.

But none of that has anything to do with the _length_ of the datagram.
 
Tom Shelton

Nor are you guaranteed that datagrams will be received at all, nor are
you guaranteed that any given datagram will be received only once.

But none of that has anything to do with the _length_ of the datagram.

Correct. I was just pointing out differences between UDP and TCP :)
 
Jon Berry

I'm having another problem with Socket.Receive.

I'm using a TCP connection and the socket is blocking until it gets some
data, but it's not closing when the remote host closes the connection.

The remote host in this case is a Java application.

Once or twice I've seen this exception but it's not consistent:

"An existing connection was forcibly closed by the remote host"

A socket is a socket, right? Does it matter if one is Java and the other is
.NET?

Perhaps there is a socket setting I need to adjust?
 
Peter Duniho

Jon said:
I'm having another problem with Socket.Receive.

I'm using a TCP connection and the socket is blocking until it gets some
data, but it's not closing when the remote host closes the connection.

How is the remote host closing the connection?
The remote host in this case is a Java application.

Once or twice I've seen this exception but it's not consistent:

"An existing connection was forcibly closed by the remote host"

A socket is a socket, right? Does it matter if one is Java and the other is
.NET?

No. In fact, the other end doesn't even need to be using a socket API.
For a TCP connection, the other end just has to support TCP somehow.

This is almost always via some socket API, but it doesn't have to be.
Perhaps there is a socket setting I need to adjust?

There's a bug somewhere, either in the remote host or your client.
Based on the description, it _might_ be due to the remote host failing
to shut down the socket correctly (i.e. calling Socket.shutdownOutput()),
or it might be due to your client failing to detect the shutdown
correctly (i.e. looking for a 0 byte return value from the
Socket.Receive() method).

But unless you post a concise-but-complete code example that reliably
demonstrates the problem, it's impossible to say for sure.
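
For what it's worth, the receiving side of that usually looks something like
the sketch below (socket is assumed to be a connected TCP Socket, and
HandleData is a hypothetical application method, not part of the framework):

byte[] buffer = new byte[8192];
while (true)
{
    int read = socket.Receive(buffer);
    if (read == 0)
    {
        // The remote host has shut down its sending half; no more
        // incoming data will arrive on this connection.
        break;
    }

    // Process however many bytes actually arrived on this call.
    HandleData(buffer, read);
}
socket.Close();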

Pete
 
Jon Berry

I added the check for a 0 byte return value but it never gets that far.

I will double check that all the streams are getting closed on the remote
host.

There is no "shutdown" method in Java, but I can make sure to close all the
streams.
 
Jon Berry

You were right Pete.

I had to explicitly close ALL the input and output streams on the remote
host, not just the connection.

Seems to be working now. :)

Thanks!
 
Peter Duniho

Jon said:
I added the check for a 0 byte return value but it never gets that far.

I will double check that all the streams are getting closed on the
remote host.

There is no "shutdown" method in Java, but I can make sure to close all the
streams.

Yes, there is a "shutdown" method in Java. I even mentioned it in my
previous reply. If you have an instance of java.net.Socket, you can
call the shutdownOutput() method to close the sending half of the
socket, and shutdownInput() method to close the receiving half.
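
The .NET side has the same concept. A minimal sketch of a graceful close
from the C# end, assuming a connected Socket named socket:

// Tell the peer we're done sending; its Receive()/read will return 0 (end of stream).
socket.Shutdown(SocketShutdown.Send);

// Optionally drain whatever the peer still has to send before closing.
byte[] buffer = new byte[8192];
while (socket.Receive(buffer) > 0)
{
    // discard or process the remaining data
}

socket.Close();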

Pete
 
Jon Berry

Thanks Pete. I've got one more problem with the Socket Receive method.

Is there a way to close the socket when the remote host does not close the
connection gracefully?

For example, the remote host loses power or is using a wifi connection and
it suddenly drops.

I don't think I want to use a timeout because I'm trying to keep the
connection open as long as possible.

Is there any other way to detect a lost connection in this scenario?
 
Peter Duniho

Jon said:
Thanks Pete. I've got one more problem with the Socket Receive method.

Is there a way to close the socket when the remote host does not close
the connection gracefully?

For example, the remote host loses power or is using a wifi connection
and it suddenly drops.

I don't think I want to use a timeout because I'm trying to keep the
connection open as long as possible.

Is there any other way to detect a lost connection in this scenario?

You will automatically detect a lost connection if you try to send data
to the other end. Other than that, no. And you probably don't want to
either, given that you are "trying to keep the connection open as long
as possible".

A broken connection can happen for a variety of reasons, many of which
have nothing to do with either endpoint. But it's not possible for one
endpoint to distinguish between a problem in the middle from a problem
at the other end (and "in the middle" is a very broad description...a
lost wireless connection may or may not result in a permanently broken
connection, depending on the wireless implementation). If it's a problem
in the middle, as long as neither end tries to send data while the
connection is broken, the connection will remain viable and still be
usable when the connection is restored.

For most network applications, this is a desirable feature. Detecting
broken connections prematurely (i.e. before it really matters) simply
decreases the overall reliability of the connection.

Instead, you should be using an application protocol in which there are
rules for dealing with connections that are lost for real. For example,
a client may need to try to reconnect if it tries to send data and that
fails. Or if the client is the one that disconnects forcefully for
whatever reason, the server should have a way of identifying that client
and eliminating the previous connection if and when that client tries to
reconnect.
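
As a rough illustration of the client-side rule, a sketch that replaces the
connection if a send fails (the server endpoint and the single-retry policy
are assumptions of the sketch, not a prescribed pattern):

using System.Net;
using System.Net.Sockets;

static class ReconnectingSender
{
    // Sketch only: if a send fails, assume the connection is dead,
    // replace it, and retry the send once on the new connection.
    public static void SendWithReconnect(ref Socket socket, byte[] data, EndPoint server)
    {
        try
        {
            socket.Send(data);
        }
        catch (SocketException)
        {
            socket.Close();
            socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            socket.Connect(server);
            socket.Send(data);
        }
    }
}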

Pete
 
Jon Berry

If it's a problem in the middle, as long as neither end tries to send data
while the connection is broken, the connection will remain viable and
still be usable when the connection is restored.

Interesting. I sort of assumed the connection would be lost.

As you mentioned, it seems that the only way to detect a broken connection
in the middle is to send data on the connection and see if it goes through.
It takes ~90 seconds for the connection to time out, however.
Instead, you should be using an application protocol in which there are
rules for dealing with connections that are lost for real. For example, a
client may need to try to reconnect if it tries to send data and that
fails. Or if the client is the one that disconnects forcefully for
whatever reason, the server should have a way of identifying that client
and eliminating the previous connection if and when that client tries to
reconnect.

That sounds like exactly what I need.

Is there anything in the framework to help me out or will I have to roll my
own?

I really only need one connection at a time, but if there was a way for the
client to reconnect and dispose of the previous connection, that would
probably work well.
 
Peter Duniho

Jon said:
Interesting. I sort of assumed the connection would be lost.

As you mentioned, it seems that the only way to detect a broken connection
in the middle is to send data on the connection and see if it goes through.
It takes ~90 seconds for the connection to time out, however.

The exact manifestation of the error will vary according to the type of
disconnect. But yes, in some cases you have to wait long enough for the
network stack to know for sure that data hasn't just been delayed, which
means waiting for a timeout to expire.

In other cases, the network system can identify immediately that the
data can't possibly be delivered, and you'll get an error right away.
Instead, you should be using an application protocol in which there
are rules for dealing with connections that are lost for real. For
example, a client may need to try to reconnect if it tries to send
data and that fails. Or if the client is the one that disconnects
forcefully for whatever reason, the server should have a way of
identifying that client and eliminating the previous connection if and
when that client tries to reconnect.

That sounds like exactly what I need.

Is there anything in the framework to help me out or will I have to roll
my own? [...]

I don't know. .NET includes "remoting" and WCF (successor to remoting).
But I haven't really used either, so I don't know what kind of
features they include. I suspect that even with those, you'll have to
track your client identity yourself.
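
For the single-connection case you describe, you may not need much from the
framework at all. A minimal sketch of a server that simply drops the old
connection whenever the client reconnects (the port is an arbitrary
assumption):

using System.Net;
using System.Net.Sockets;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Any, 9000));   // arbitrary port
listener.Listen(1);

Socket current = null;
while (true)
{
    Socket next = listener.Accept();

    // Any new connection replaces the previous one, even if the old client
    // disappeared without closing gracefully (power loss, dropped wifi, etc.).
    if (current != null)
        current.Close();
    current = next;

    // Hand `current` off to the receive loop / worker thread here.
}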

Pete
 
