There is no direct and fast method to detect an interrupted connection - interrupted in the sense of one side closing the connection without announcement, or even a network disconnect. That is why all event waiting calls should have a timeout. If a timeout occurs, the server should close the corresponding socket and go into the listen state again. Similarly for the client: close the socket and try to connect again.
When using CAsyncSocket, you have to implement your own timer that is reset when data are received.
With non-blocking calls you usually wait for events with WaitFor(Single|Multiple)Object(s), which have a timeout parameter. I recommend this when using a worker thread for receiving, because the call can also catch a terminate event to stop the thread when the connection is closed by intention, e.g. when terminating the application.
Once a timeout is detected, close the related connections/sockets as you would when terminating the application. Afterwards, enter the listen state again on the server and try to (re-)connect on the client, as when starting the applications.
A complete example - even knowing the used socket type - would be far too much for this forum. But there are a lot of tutorials on the net and here at CP about the topic.
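Still, the waiting pattern described above can be sketched in portable C++, with std::condition_variable::wait_for standing in for the Win32 WaitForMultipleObjects call. The Receiver name and the data_ready/terminate flags are illustrative only, not part of any socket class:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative receiver loop: wake on data, on terminate, or on timeout.
// With Win32 you would instead pass a data event and a terminate event to
// WaitForMultipleObjects() and use its timeout parameter.
struct Receiver {
    std::mutex m;
    std::condition_variable cv;
    bool data_ready = false;   // set by the socket's data notification
    bool terminate = false;    // set when closing the connection by intention

    // Returns true if the loop ended because of a timeout (connection lost),
    // false if it ended because of an intentional terminate request.
    bool run(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(m);
        for (;;) {
            bool woken = cv.wait_for(lock, timeout,
                [this] { return data_ready || terminate; });
            if (!woken)          // timeout: treat the connection as lost
                return true;
            if (terminate)       // intentional shutdown, e.g. app exit
                return false;
            data_ready = false;  // consume the data notification
            // ... read and process the received data here ...
        }
    }
};
```

The important point is that one wait covers all three cases, so the thread can never block forever on a dead connection.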
I am using a CSocket implementation for server and client with serialization, using a class derived from CObject. As already mentioned, I use a thread to send and receive data continuously every second on the server side, and similarly a thread to send and receive data continuously every second on the client side.
Can I implement the timer you suggested for CAsyncSocket in my code, which uses serialization and continuous send/receive operations? Please suggest how to do it, especially how to handle lost-connection issues (detecting connection failure and reconnecting properly) and how to avoid crashes.
On the client side:
I used OnReceive() and a timer variable that is reset to zero in that function. In a separate thread running at a 1-second interval, I increment the timer variable by one. So the timer variable is incremented every second, but it is reset to zero in OnReceive() because the server sends data every second. When the timer variable exceeds 30 counts, the timeout has elapsed. When it elapses, I close the client socket using shutdown and delete the socket.
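For reference, the counter scheme just described boils down to something like the following plain standard C++ (the Watchdog name, tick interval and threshold are illustrative; in the real code the reset happens in OnReceive() and the elapsed handler would do the socket shutdown/delete):

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Watchdog: a worker thread increments a counter every tick; receiving data
// resets it. If the counter reaches the threshold, the connection is
// considered lost and on_elapsed is invoked once.
class Watchdog {
public:
    Watchdog(int threshold, std::chrono::milliseconds tick,
             std::function<void()> on_elapsed)
        : threshold_(threshold), on_elapsed_(std::move(on_elapsed)),
          worker_([this, tick] {
              for (;;) {
                  std::this_thread::sleep_for(tick);
                  if (stop_)
                      break;                 // intentional shutdown
                  if (++counter_ >= threshold_) {
                      on_elapsed_();         // e.g. ShutDown() + delete socket
                      break;
                  }
              }
          }) {}

    void reset() { counter_ = 0; }           // call from OnReceive()

    ~Watchdog() {
        stop_ = true;
        worker_.join();
    }

private:
    int threshold_;
    std::function<void()> on_elapsed_;
    std::atomic<int> counter_{0};
    std::atomic<bool> stop_{false};
    std::thread worker_;
};
```

As long as data keeps arriving, reset() keeps the counter below the threshold and on_elapsed never fires.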
But when I tested the server and the client, the server hangs after some 2 hours. Previously my client hung or lost the connection.
Any suggestion as to why the server is hanging this time?
Just to be clear
1. Works in environment A
2. Doesn't work in environment B
Obviously then the environment, not your code, is where the problem originates.
One possibility is a firewall rule that disconnects/drops the connection after 2 hours. If that is the problem, then the solution is either to fix the rule or to alter your application so that it recreates the connection more frequently than the rule disconnects it - say every hour, although I might go with less than that. That said, defensive programming suggests that the connection could be lost arbitrarily, for any number of reasons, so you should be attempting to restore the connection anyway.
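That defensive reconnect can be sketched as a simple retry loop. connect_with_retry and connect_once here are made-up names, with connect_once standing in for whatever Create()/Connect() sequence the application actually uses:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Keep trying to (re-)establish a connection, waiting between attempts.
// Returns the number of attempts it took, or 0 if max_attempts ran out.
int connect_with_retry(const std::function<bool()>& connect_once,
                       int max_attempts,
                       std::chrono::milliseconds delay)
{
    for (int attempt = 1; attempt <= max_attempts; ++attempt) {
        if (connect_once())                 // placeholder for Create()+Connect()
            return attempt;
        std::this_thread::sleep_for(delay); // back off before retrying
    }
    return 0;                               // caller decides how to report failure
}
```

Calling this from the timeout handler (and again after an intentional periodic disconnect) covers both the firewall case and arbitrary connection loss.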
FYI, you might want to verify "crash" versus exit. I worked with one system where it turned out there was some sort of monitor that was externally terminating the application after a certain amount of time. As a Windows client app, all threads should have a generic global catch which catches "system" exceptions and logs them. Normal requests to exit should be logged as well.
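A minimal shape for such a catch-all thread wrapper might look like this (run_guarded is a made-up helper; logging just goes to stderr here, and catching true Win32 "system"/SEH exceptions additionally requires /EHa or _set_se_translator, which this sketch does not cover):

```cpp
#include <exception>
#include <functional>
#include <iostream>
#include <stdexcept>
#include <string>

// Run a thread body under a catch-all so unexpected exceptions are logged
// instead of silently terminating the process.
// Returns true if the body completed normally.
bool run_guarded(const std::string& name, const std::function<void()>& body)
{
    try {
        body();
        std::cerr << name << ": normal exit\n";  // log intentional exits too
        return true;
    } catch (const std::exception& e) {
        std::cerr << name << ": caught exception: " << e.what() << '\n';
    } catch (...) {
        std::cerr << name << ": caught unknown exception\n";
    }
    return false;
}
```

With every thread body routed through a wrapper like this, a log entry distinguishes a real crash from a normal (or externally requested) exit.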
Because of the comparison with the C# version I expected it to be about the same control, but you may be right.
In CMFCRibbonBar, tabs are called categories, and the active one can be set with SetActiveCategory().
// async-unwrapping.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>
using namespace concurrency;
using namespace std;

int wmain()
{
    auto t = create_task([]() {
        wcout << L"Task A" << endl;
        // Create an inner task that runs before any continuation
        // of the outer task.
        return create_task([]() {
            wcout << L"Task B" << endl;
        });
    });

    // Run and wait for a continuation of the outer task.
    t.then([](task<void>& t3) // or task<void> t3
    {
        wcout << L"Task C" << endl;
        t3.wait(); // or .get()
    }).wait();
}
I expected the output to be
But the output is
Why isn't t3.wait(); running the B task again, or at least throwing an error?