Why send() doesn't return an error when the server is closed

My prog (client side) uses TCP to connect to the server. It's strange that the send() function sometimes doesn't return an error (<= 0) when the server is closed. I use netstat to check: the server state is LISTEN and the client state is ESTABLISHED, so my prog can't tell that the socket is closed.

Does the server do a clean shutdown? TCP/IP often acts in unexpected ways. While packets that arrive are guaranteed to be in order and error free, packet arrival itself is not guaranteed.

For example, if the network connection to the server went down, or the server just powered off, the client would not know this, and a send would not return an error.

Thank you. Can I use another way to know that the connection is broken on the client side? Sorry, I can't change the server's program, for it was written by another company.

Can you write code that runs on this server? If not, I don’t know how you could find out.
If you wrote a program for the server that sends a heartbeat message, then when the heartbeat stops you would know that the connection was broken.
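
For what it's worth, here is a minimal client-side sketch of that idea, assuming the server could be made to send a one-byte heartbeat every few seconds. The 10-second HEARTBEAT_TIMEOUT value and the function name are made up for illustration.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <errno.h>

#define HEARTBEAT_TIMEOUT 10   /* seconds without a heartbeat before giving up */

/* Returns 0 if a heartbeat byte arrived in time, -1 if the link looks dead. */
int wait_for_heartbeat(int sock)
{
    struct timeval tv = { HEARTBEAT_TIMEOUT, 0 };
    char beat;
    ssize_t n;

    /* make recv() give up after HEARTBEAT_TIMEOUT seconds */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1)
        return -1;

    n = recv(sock, &beat, 1, 0);
    if (n > 0)
        return 0;                               /* heartbeat arrived, link is up */
    if (n == 0)
        return -1;                              /* peer closed the connection    */
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return -1;                              /* timed out: assume it's broken */
    return -1;                                  /* some other error              */
}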

Xuyong,

You can set the SO_LINGER option on the socket and then ‘close’ it. This will block your program until the data is received on the server side.

There are probably a couple of other options to set as well. You should google around for "reliable delivery" + sockets to hopefully find some examples.

Tim

Tim,

I think he is saying that he has no control over the server end. Unless you are always expecting data back from the server after the send, which you can time out on, I don't think this will work.

You could set the socket as non-blocking and get output notification when the output buffer is empty. If the output buffer doesn't become empty after a certain period of time, you know there is a problem.
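
Something like this, for example (only a rough sketch; SEND_TIMEOUT and check_writable() are names I just made up, and select() actually reports the socket writable once there is room in the output buffer):

#include <sys/select.h>
#include <sys/time.h>
#include <fcntl.h>

#define SEND_TIMEOUT 5          /* seconds to wait for the output buffer to drain */

/* Returns 0 when the socket is writable again, -1 on timeout or error. */
int check_writable(int sock)
{
    fd_set wfds;
    struct timeval tv = { SEND_TIMEOUT, 0 };
    int flags, rc;

    /* put the socket into non-blocking mode */
    flags = fcntl(sock, F_GETFL, 0);
    if (flags == -1 || fcntl(sock, F_SETFL, flags | O_NONBLOCK) == -1)
        return -1;

    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);

    /* wait until the socket becomes writable or the timeout expires */
    rc = select(sock + 1, NULL, &wfds, NULL, &tv);
    if (rc > 0)
        return 0;               /* room again: safe to keep calling send()         */
    if (rc == 0)
        return -1;              /* buffer never drained: the peer is probably gone */
    return -1;                  /* select() itself failed                          */
}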

You can also set the send timeout via setsockopt().
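
Something along these lines (only a sketch; the 5-second value is arbitrary, and not every stack lets you set SO_SNDTIMEO, so check the setsockopt() return):

#include <sys/socket.h>
#include <sys/time.h>
#include <string.h>
#include <errno.h>

/* Returns 0 on success, -1 if the send timed out or failed. */
int send_with_timeout(int sock, const char *msg)
{
    struct timeval tv = { 5, 0 };   /* give send() at most 5 seconds */

    if (setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)) == -1)
        return -1;

    if (send(sock, msg, strlen(msg), 0) == -1) {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return -1;              /* timed out: the peer is probably gone */
        return -1;                  /* some other send error                */
    }
    return 0;
}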

Mario, could you give me one example? Thanks.

Maschoen,

I googled around on SO_LINGER and this is what I found:

The effect of a setsockopt(…, SO_LINGER, …) call depends on the values in the linger structure (the third parameter passed to setsockopt()):

Case 1: linger->l_onoff is zero (linger->l_linger has no meaning):
This is the default.

On close(), the underlying stack attempts to gracefully shutdown the connection after ensuring all unsent data is sent. In the case of connection-oriented protocols such as TCP, the stack also ensures that sent data is acknowledged by the peer. The stack will perform the above-mentioned graceful shutdown in the background (after the call to close() returns), regardless of whether the socket is blocking or non-blocking.

Case 2: linger->l_onoff is non-zero and linger->l_linger is zero:

A close() returns immediately. The underlying stack discards any unsent data, and, in the case of connection-oriented protocols such as TCP, sends a RST (reset) to the peer (this is termed a hard or abortive close). All subsequent attempts by the peer’s application to read()/recv() data will result in an ECONNRESET.

Case 3: linger->l_onoff is non-zero and linger->l_linger is non-zero:

A close() will either block (if a blocking socket) or fail with EWOULDBLOCK (if non-blocking) until a graceful shutdown completes or the time specified in linger->l_linger elapses (time-out). Upon time-out the stack behaves as in case 2 above.

I believe that case 3 is the one xuyong wants. This means that your TCP/IP stack will block on the close() call until the remote side acks all the packets (reliable delivery). If the remote side isn't there, it can't ack them, so you'll eventually time out on your close() call. If you time out, then you know the remote side is not there and you can do your error processing.
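
In code, the case-3 setup would look roughly like this (a sketch only; the 10-second linger value is arbitrary, and exactly how close() reports a time-out varies between stacks):

#include <sys/socket.h>
#include <unistd.h>

/* Returns 0 if the peer acknowledged everything before close() returned,
 * -1 if close() failed or timed out. */
int linger_close(int sock)
{
    struct linger lng;

    lng.l_onoff  = 1;    /* enable lingering on close()               */
    lng.l_linger = 10;   /* wait up to 10 seconds for the peer to ack */

    if (setsockopt(sock, SOL_SOCKET, SO_LINGER, &lng, sizeof(lng)) == -1)
        return -1;

    /* blocks until all data is acked or the 10 seconds elapse */
    if (close(sock) == -1)
        return -1;       /* e.g. EWOULDBLOCK on time-out: remote side not there */

    return 0;
}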

Tim