These days I am reading a book about sockets. There is a question: when we bind the address in the server, we must use the function htons to convert the port to network byte order, but why don't we need to use that function in the send/recv calls?
I can guess at some reasons, but I am not sure about them:
1. Because the TCP/IP protocol stack will do the conversion behind the scenes.
2. Because the parameter is a char*, which treats the buffer as an array of bytes, so no conversion is needed.
Does anyone know the answer? I would very much appreciate your help.
I guess you meant UTF-16 when you said Unicode. There are actually little and big endian versions of UTF-16, and each has a unique byte order mark at the beginning of the file. If you send UTF-16 basically "as a file", with the byte order mark, and process it accordingly (for example using a UTF library), then there is no problem: the library will do the byte swap for you if necessary after interpreting the byte order mark. However, if you interpret it as a raw sequence of 16-bit integers, then you have to take care. With UTF-8 there are no endianness problems, but the same is not true for UTF-16 and UTF-32, which are basically just series of uint16/uint32 integers - you decide what byte order to use for transferring them.
When you are filling in socket API data structures (like the port field of sockaddr_in), you have to specify the port number in big endian, because the socket API expects port numbers and IPv4 addresses in big endian format. The reason for this is probably just that some guys decided to use big endian. (I think requiring the use of htons and its friends was a bad idea, as this transformation from host to network byte order could be done by the socket API implementation itself, but that's another subject for debate...) htons() always returns the 16-bit integer in big endian format regardless of the endianness of your machine: if your machine is big endian it does nothing, and if your machine is little endian it swaps the bytes. Anyway, htons is an acronym for Host TO Network Short - I didn't know that for some time, and without it these functions were quite hard to memorize: HostToNetworkShort and NetworkToHostShort (16-bit), HostToNetworkLong and NetworkToHostLong (32-bit). Since these days most user machines and a lot of server machines are little endian, you have to use these functions whenever the socket API you are calling requires them.
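As a minimal sketch of the point above (assuming a POSIX system with arpa/inet.h; on Windows the same functions live in winsock2.h), this is where htons/ntohs come into play when filling in sockaddr_in - the port value 8080 is just an example:

```cpp
#include <arpa/inet.h>   // htons, htonl, ntohs (POSIX)
#include <netinet/in.h>  // sockaddr_in
#include <cstdint>
#include <cstring>

// Fill in a sockaddr_in for binding: the API expects the port in
// network byte order (big endian), so we must convert with htons().
sockaddr_in make_listen_addr(uint16_t host_order_port) {
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY); // 0.0.0.0, also big endian
    addr.sin_port = htons(host_order_port);   // host -> network byte order
    return addr;
}

// Going the other way (e.g. after accept()/getpeername()),
// convert back with ntohs() before printing or comparing.
uint16_t extract_port(const sockaddr_in& addr) {
    return ntohs(addr.sin_port);
}
```

Note that the data you later pass to send()/recv() is untouched by these functions; only the fields the socket API itself interprets (port, address) need the conversion.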
When you are transferring your own data, you decide what byte order to use. For example, if both of your machines (server and client) are little endian, then it is logical to transfer 16-bit and wider integers in little endian. However, if the endianness of the machines differs, then you have to choose whether to use little or big endian format in your data stream. If you choose little endian, then you have to do nothing on the little endian machine when sending/receiving network data, but you have to swap byte order on the big endian machine.
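One way to make your own wire format explicit, independent of either machine's endianness, is to serialize integers byte by byte with shifts; a small sketch, choosing little endian for the wire as in the example above:

```cpp
#include <cstdint>

// Encode a 16-bit value into a byte buffer in little-endian wire order.
// Shifts behave the same on any host, so no htons/swap logic is needed.
void put_u16_le(uint8_t* buf, uint16_t v) {
    buf[0] = static_cast<uint8_t>(v & 0xFF);        // low byte first
    buf[1] = static_cast<uint8_t>((v >> 8) & 0xFF); // then high byte
}

// Decode it back on the receiving side, again with shifts.
uint16_t get_u16_le(const uint8_t* buf) {
    return static_cast<uint16_t>(buf[0] | (buf[1] << 8));
}
```

Because the byte positions are fixed by the code rather than by the host CPU, the same encode/decode pair works unchanged on both little and big endian machines.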
Since you called this whole stuff "the big-ending and little-ending", I assume you don't know too much about endianness, so I'll try to give you some help here. The two most important parts of the computer from the programmer's perspective are the processor and the memory. Imagine the memory as a big byte array (pointers are basically just indexes into this byte array). What the computer does is basically the following: it reads instructions from memory and executes them. During instruction execution the processor loads values from memory into its own internal storage ("variables"/registers, see http://en.wikipedia.org/wiki/Processor_register), performs calculations with the loaded values, and later stores the results back to memory. The difference between big endian and little endian processors is the way they load/store integers that consist of more than 1 byte. Let's demonstrate that with an example: say you have a 16-bit integer in memory as the 2 hexadecimal bytes 01 00.
If you load these 2 bytes on a little endian machine, the processor will load them as 0x0001, but a big endian machine will load these same 2 bytes as 0x0100.
The same is true not only for loading data from memory but also for storing it. Let's say you perform a complex calculation and the result is a 16-bit number that you want to store somewhere in memory. If the result of the calculation is 1, then a little endian processor stores 01 00 into memory, while a big endian one stores 00 01.
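This load/store behavior can be observed directly by storing a 16-bit value and inspecting its bytes in memory; a minimal sketch:

```cpp
#include <cstdint>
#include <cstring>

// Store a 16-bit value and look at which byte lands first in memory.
// On a little endian machine the low byte comes first;
// on a big endian machine the high byte comes first.
bool host_is_little_endian() {
    uint16_t value = 0x0001;
    uint8_t bytes[2];
    std::memcpy(bytes, &value, sizeof(value)); // safe way to inspect the bytes
    return bytes[0] == 0x01;
}
```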
TCP does nothing with the order of bytes when you transfer them over the wire. However, if the endianness of the 2 machines connected by TCP differs, then you have to take care.
Let's say you connect a little endian Windows machine with a big endian Linux machine using TCP. You calculate this and that on your Windows machine, and say the result of the calculation is 1. You store this to memory on your little endian Windows machine (01 00) and send it over TCP to the Linux machine. If you load it on the big endian machine without a byte swap, then the big endian machine will see 0x0100 and not 0x0001. You can instead decide to use big endian in your transferable data, but in that case you have to convert on the Windows machine (instead of the Linux one) every time you load or send network data.
You are welcome! There is one more thing I forgot to mention: when I have machines with different endianness, I usually choose the endianness of the server for transferring data over the network. This way the byte swaps must be done by the clients - the server is usually the performance bottleneck, as it has to communicate with a lot of clients, while a client has not much to do beyond communicating with the server. In general it's good practice to push every kind of computation/task to the clients whenever possible.
If you want your program to be portable, then any time you send an integer larger than 1 byte over the network, you must first convert it to network byte order using htons or htonl, and the receiving computer must convert it to host byte order using ntohs or ntohl. Your sending computer may be an Intel x86 and the receiving one a Sun SPARC, and your program will fail if you don't use htons.
Just add some code so that every time a dialog opens it calls into a central function which can keep track of them. If these are all modal dialogs, I would be interested to know how the app has multiple ones open at the same time.
Derive all of your dialogs from a common class. In that base class, override OnInitDialog() and OnCancel() to adjust your counters accordingly.
More specifically, write a new class MyCDialog which derives from CDialog and implements OnInitDialog() and OnCancel() as DavidCrow describes, then change all of your existing dialog classes that derive from CDialog to derive from MyCDialog instead. [If a class doesn't derive from CDialog directly, don't change it.]
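The counting pattern itself is framework-independent; here is a minimal sketch in plain C++. In MFC the hooks would be the OnInitDialog() and OnCancel()/OnOK() overrides mentioned above - the open()/close() methods here just stand in for them so the idea is visible outside MFC:

```cpp
// Common base class that bumps a shared counter when a dialog opens
// and decrements it when the dialog closes. Every dialog class then
// derives from this base instead of directly from the framework class.
class CountedDialog {
public:
    void open()  { ++s_openCount; }   // call from your OnInitDialog() override
    void close() { --s_openCount; }   // call from your OnCancel()/OnOK() overrides
    static int openCount() { return s_openCount; }
private:
    static int s_openCount;
};
int CountedDialog::s_openCount = 0;
```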
I have no idea where to ask this since it covers multiple areas, though the base code is in C++, so I thought I'd post it here.
In short, in part of our product we import AOL emails. The user can then export an email as a [structured storage] MSG file. However, when we then bring that file into Outlook, the From field in the list of emails is not populated. When you click on the message, the sender information is there in the data at the top. I've been experimenting with various streams in MSG files and it seems Outlook picks up the "From" for the list from various places. I've populated ALL of those places in my exported file and Outlook still won't display anything.
Does anyone have any experience with this or have any idea how Outlook processes MSG files?
I may have found a solution. It turns out that an MSG file not only has multiple streams, but also a [stream] properties table which lists which streams are valid along with their lengths. While I'd created the correct stream, I hadn't created a corresponding entry in the properties table.
The truly baffling part is that the "from" data is listed multiple times. Why Outlook couldn't pick up one of the other copies is beyond me.
BTW, the stream id in question was, wait for it, 0x0042!
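For context on that stream id: a MAPI property tag is a 32-bit value combining the 16-bit property ID (0x0042, which MAPI defines as PR_SENT_REPRESENTING_NAME) with a 16-bit type code such as PT_UNICODE (0x001F). My understanding (worth verifying against the [MS-OXMSG] spec) is that each string property in an MSG file lives in a stream named "__substg1.0_" followed by the tag in hex; a small sketch of assembling that name:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// A MAPI property tag: 16-bit property ID in the high word,
// 16-bit property type in the low word.
constexpr uint32_t make_prop_tag(uint16_t prop_id, uint16_t prop_type) {
    return (static_cast<uint32_t>(prop_id) << 16) | prop_type;
}

constexpr uint16_t PT_UNICODE = 0x001F;                 // UTF-16 string type
constexpr uint16_t ID_SENT_REPRESENTING_NAME = 0x0042;  // the id from the post

// Build the structured-storage stream name for a property tag,
// e.g. 0x0042001F -> "__substg1.0_0042001F".
std::string prop_stream_name(uint32_t tag) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "__substg1.0_%08lX",
                  static_cast<unsigned long>(tag));
    return std::string(buf);
}
```

As the post found, writing the stream alone is not enough - the properties stream must also carry a matching entry for the tag, or Outlook ignores it.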
I have an application that performs some processing on files dropped into a 'watch folder'. The problem I'm having is detecting when the file copy to the folder is complete.
The application needs to run on both Windows and OSX, so ideally I'd like a cross-platform solution, although there is no reason (other than future maintainability) for using the same algorithm on both platforms.
The way I'm doing this at the moment is to wait until I can get a read lock on the file (by just trying to open it), then periodically checking the size of the file to make sure it's not getting any bigger.
Once both of those criteria have been met, I process the file.
The problem I'm seeing is that once in a blue moon, the two criteria will be met when the file is still being copied. My gut feeling is that it's caused by momentary network congestion, but I've been unable to reproduce the issue whilst debugging.
I can probably make it more stable by just throwing a few more checks at the problem and tweaking the time-out values, but the software engineer in me would like a more definitive solution.
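The approach described above (wait until the file can be opened for reading, then poll until the size stops changing) can be sketched with std::filesystem; the poll counts and interval below are arbitrary illustration values, and, as the post notes, the heuristic can still fire early under network congestion:

```cpp
#include <chrono>
#include <filesystem>
#include <fstream>
#include <thread>

namespace fs = std::filesystem;

// Heuristic "is the copy finished?" check: the file must be openable
// for reading and its size must be unchanged for several consecutive
// polls. Returns false if that never happens within max_polls tries.
bool wait_until_stable(const fs::path& file,
                       int stable_polls = 3,
                       int max_polls = 50,
                       std::chrono::milliseconds interval = std::chrono::milliseconds(50)) {
    std::uintmax_t last_size = static_cast<std::uintmax_t>(-1);
    int stable = 0;
    for (int i = 0; i < max_polls && stable < stable_polls; ++i) {
        std::ifstream probe(file, std::ios::binary); // try to get a read handle
        std::error_code ec;
        std::uintmax_t size = fs::file_size(file, ec);
        if (!probe || ec)           stable = 0;               // not readable yet
        else if (size == last_size) ++stable;                 // size unchanged
        else { stable = 0; last_size = size; }                // still growing
        std::this_thread::sleep_for(interval);
    }
    return stable >= stable_polls;
}
```

Lengthening the interval or requiring more consecutive stable polls trades latency for robustness, which is exactly the tuning knob the post is unhappy about.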
One solution I've used is to write the new file to one directory and, when done, move it to the target directory (on the same volume) using the Win32 API, which makes the move an atomic operation.
For one variation, we wrote the file to the same directory, but with an extra extension like ".~tmp" and then did the rename/move operation.
Exactly. That is how browser file downloads and most other file copy programs that I know of work. The destination filename does not appear until the file is complete.
I'd change your file copy program to copy to a temp file with a 'guid-like' name, or at least a .~nonsense extension, and rename it after the transfer is complete. There are ways to do this that keep such things as the file timestamp intact.
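The write-then-rename idea above can be sketched with std::filesystem (the ".~tmp" suffix follows the convention mentioned earlier in the thread; note that rename is atomic only when source and destination are on the same volume):

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Write the payload to a temporary name in the target directory, then
// rename it into place. A watcher never sees the final filename until
// the contents are complete, because the same-volume rename is atomic.
void publish_file(const fs::path& target, const std::string& contents) {
    fs::path tmp = target;
    tmp += ".~tmp";                        // name the watcher ignores
    {
        std::ofstream out(tmp, std::ios::binary);
        out << contents;
    }                                      // close the file before renaming
    fs::rename(tmp, target);               // atomic move into place
}
```

The watcher side then only needs to filter out the .~tmp names; no size polling or lock probing is required.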