|
Sorry, but that does not make much sense. Which code map are you referring to, and where are you right-clicking?
Use the best guess
|
|
|
|
|
Uhh... I watched a VS2012 video, and there is a "Code Map" item in the VS2012 toolbar in the video, but why don't I have it? Thanks for the reply.
|
|
|
|
|
|
What happens when you press Ctrl+` ? Are you able to see it in any of the menus, or when you right-click on a function?
-Sarath.
Rate the answers and close your posts if it's answered
|
|
|
|
|
I guess you need VS 2012 Ultimate to get the full code map feature.
I tried this with VS 2012 Update 2 and it sure works.
|
|
|
|
|
Hi, everyone.
These days I am reading a book about sockets. I have a question: when we bind the address in the server, we must use the function htons to transform the port to network byte order, but why don't we need to use that function on the data in the send/recv functions?
I can guess at some reasons, but I am not sure about them:
1. Because the TCP/IP protocol will do the transformation behind the scenes.
2. Because of the char* parameter: the buffer is treated as an array of char, which does not need to be transformed.
Does anyone know the answer? I would really appreciate your help.
|
|
|
|
|
Probably because send/recv handle char (7/8-bit bytes), and single bytes obviously don't have a byte-order problem.
It only becomes a problem when transmitting 16-bit or larger values. I would also hazard a guess that Unicode is transmitted MSB first.
"It's true that hard work never killed anyone. But I figure, why take the chance." - Ronald Reagan
That's what machines are for.
Got a problem?
Sleep on it.
|
|
|
|
|
I guess you meant UTF-16 when you said Unicode. Actually there are little-endian and big-endian versions of UTF-16, and both versions have a unique byte order mark (BOM) at the beginning of the file. Of course, if you send UTF-16 basically "as a file", with the byte order mark, and process it accordingly (for example using a UTF library), then there is no problem, because the library will do the byte swap for you if necessary after interpreting the byte order mark. However, if you interpret it as a sequence of 16-bit integers, then you have to take care. With UTF-8 there are no endianness problems, but the same is not true for UTF-16 and UTF-32, which are basically just a series of uint16/uint32 integers - you decide what byte order to use for transferring them.
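Just to make the BOM part concrete, here is a minimal sketch of the check (the enum and function names are mine, not from any particular library):

#include <cstddef>

// A UTF-16 stream may start with a byte order mark: FF FE for little
// endian, FE FF for big endian. Without one, the byte order has to be
// agreed on out of band.
enum Utf16Order { Utf16LittleEndian, Utf16BigEndian, Utf16Unknown };

Utf16Order DetectBom(const unsigned char* buf, std::size_t len)
{
    if (len >= 2 && buf[0] == 0xFF && buf[1] == 0xFE) return Utf16LittleEndian;
    if (len >= 2 && buf[0] == 0xFE && buf[1] == 0xFF) return Utf16BigEndian;
    return Utf16Unknown;
}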
|
|
|
|
|
From what I've recently read/heard about UTF-16 (plus more JavaScript issues), and the space for 1 million code points, it is now effectively UTF-20, but nobody bothered to rebless the name.
see this --> mathiasbynens.be/notes/javascript-encoding
You're correct about the BOM, I just forgot.
"It's true that hard work never killed anyone. But I figure, why take the chance." - Ronald Reagan
That's what machines are for.
Got a problem?
Sleep on it.
|
|
|
|
|
UTF-20 LOL 
|
|
|
|
|
You don't have to use the function on the data that you are transferring.
If you don't know what endianness means then read this first: http://en.wikipedia.org/wiki/Endianness[^]
When you are filling in some of the data structures of the socket API (like the port field of sockaddr_in), you have to specify the port number in big endian, because the socket API expects port numbers and IPv4 addresses in big-endian format. The reason for this is probably just that some guys decided to use big endian. (I think requiring the use of htons and its friends was a bad idea, since this transformation from host to network byte order could be done by the socket API implementation itself, but that's another subject for debate...) htons() always returns the 16-bit integer in big-endian format regardless of the endianness of your machine: if your machine is big endian then it does nothing; if your machine is little endian then it swaps the bytes. Anyway, htons is an acronym for Host TO Network Short - I didn't know that for some time, and without it these functions were quite hard to memorize: host-to-network short, network-to-host short (16 bit), host-to-network long, network-to-host long (32 bit). Since these days most user machines and a lot of server machines are little endian, you have to use these functions whenever the socket API you are calling requires them.
When you are transferring your own data, you decide what byte order to use. For example, if both of your machines (server and client) are little endian, then it is logical to transfer 16-bit and wider integers in little endian. However, if the endianness of the machines differs, then you have to choose whether to use little-endian or big-endian format in your data stream. If you choose little endian, then you have to do nothing on the little-endian machine when sending/receiving network data, but you have to swap byte order on the big-endian machine.
Since you called this whole stuff "the big-ending and little-ending", I assume you don't know too much about endianness, so I'll try to give you some help here. The two most important parts of the computer from the programmer's perspective are the processor and the memory. Imagine the memory as a big byte array (pointers are basically just indexes into this byte array!). What the computer does is basically the following: it reads instructions from memory and executes them. During instruction execution the processor loads values from memory into its own internal storage ("variables"/registers: http://en.wikipedia.org/wiki/Processor_register[^]), performs calculations with these loaded values, and later stores the results back to memory. The difference between big-endian and little-endian processors is the way they load/store integers that consist of more than one byte. Let's demonstrate that with an example: say you have a 16-bit integer in memory (2 bytes, in hexadecimal):
01 00
If you load these 2 bytes on a little-endian machine, the processor will load them as 0x0001, but a big-endian machine will load them as 0x0100.
The same is true not only for loading data from memory but also for storing it. Let's say you make complex calculations and the result is a 16-bit number that you want to store somewhere in memory. If the result of the calculation is 1, then a little-endian processor stores 01 00 into memory, while a big-endian one stores 00 01.
TCP does nothing with the order of bytes when you transfer them over the wire. However, if the endianness of the 2 machines (connected by TCP) differs, then you have to take care.
Let's say you connect a little-endian Windows machine to a big-endian Linux machine using TCP. You calculate this and that on your Windows machine, and say the result of the calculation is 1. You store this to memory on your little-endian Windows machine (01 00) and send it with TCP to the Linux machine. If you load it on the big-endian machine without a byte swap, the big-endian machine will see 0x0100 and not 0x0001. You can decide to use big endian in your transferred data, but in that case you have to convert on the Windows machine (instead of the Linux one) every time you load or send network data.
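To tie this back to the original question, here is a minimal sketch of the bind() setup (Winsock flavour; WSAStartup and all error handling omitted, and port 5000 is arbitrary):

#include <winsock2.h>   // POSIX equivalents: <sys/socket.h>, <netinet/in.h>, <arpa/inet.h>

void BindExample()
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    // The socket API contract: sin_port and sin_addr must be in network
    // (big endian) byte order, so htons/htonl are required here...
    addr.sin_port = htons(5000);              // 5000 sits in memory as 13 88
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    // ...but the payload given to send() is just an opaque byte array:
    // the API never reinterprets it, which is why no htons is needed there.
    const char payload[] = "hello";
    send(s, payload, sizeof(payload) - 1, 0);
}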
|
|
|
|
|
Thank you for your help, I got it.
|
|
|
|
|
You are welcome! There is one more thing I forgot to mention: when I have machines with different endianness, I usually choose the endianness of the server for transferring data over the network. This way the byte swaps must be done by the clients - the server is usually the performance bottleneck, as it has to communicate with a lot of clients, while a client has not much to do beyond communicating with the server. In general, it's good practice to push every kind of computation/task to the clients whenever possible.
|
|
|
|
|
Because you are sending a data stream between two little-endian (ENDIAN, not ENDING) machines, and the sockets don't care what the data is or what order it is in.
This is assuming you are using Windows, of course.
If you were doing Windows to xxx (insert your favourite big-endian OS here), one end would have to translate the data.
==============================
Nothing to say.
|
|
|
|
|
ITboy_Lemon wrote: Does anyone know the answer?
First step with sockets - define the protocol. That specifies how and what is sent. One then uses that to determine how to code the client/server.
|
|
|
|
|
If you want your program to be portable, then any time you send an integer larger than 1 byte over the network, you must first convert it to network byte order using htons or htonl , and the receiving computer must convert it to host byte order using ntohs or ntohl . Your sending computer may be an Intel x86 and the receiving one a Sun SPARC, and your program will fail if you don't use htons .
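A minimal sketch of what that looks like for one 32-bit value (Winsock flavour; sock is assumed to be an already-connected socket, and a real implementation must also loop on partial sends/receives):

#include <winsock2.h>   // POSIX: <arpa/inet.h> for htonl/ntohl

// Sender: convert to network byte order before the bytes hit the wire.
void SendCount(SOCKET sock, unsigned long count)
{
    unsigned long wire = htonl(count);   // host -> network (big endian)
    send(sock, reinterpret_cast<const char*>(&wire), sizeof(wire), 0);
}

// Receiver: convert back to host byte order after reading.
unsigned long RecvCount(SOCKET sock)
{
    unsigned long wire = 0;
    recv(sock, reinterpret_cast<char*>(&wire), sizeof(wire), 0);
    return ntohl(wire);                  // network -> host
}

On a big-endian SPARC, htonl/ntohl do nothing; on a little-endian x86 they swap bytes. Either way the wire format is the same, which is the whole point.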
-Sarath.
Rate the answers and close your posts if it's answered
|
|
|
|
|
Hello All,
I have an MFC app developed in VS 6.0.
User can open multiple dialogs in this app. Is there a way of keeping track of how many dialogs are open at a given time?
Also, is there a way of getting the handle to each dialog from a central location?
Because the dialogs are opened from different parts of the app, i.e., different files, it's hard to keep track of all the dialogs.
Any sample code will help.
Thanks in advance.
|
|
|
|
|
Just add some code so that every time a dialog opens it calls into a central function which can keep track of them. If these are all modal dialogs, I would be interested to know how the app has multiple ones open at the same time.
Use the best guess
|
|
|
|
|
Derive all of your dialogs from a common class. In that base class, override OnInitDialog() and OnCancel() to adjust your counters accordingly.
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"Show me a community that obeys the Ten Commandments and I'll show you a less crowded prison system." - Anonymous
|
|
|
|
|
DavidCrow wrote: Derive all of your dialogs from a common class. In that base class, override OnInitDialog() and OnCancel() to adjust your counters accordingly.
More specifically, write a new class MyCDialog which derives from CDialog and implements OnInitDialog() and OnCancel() as DavidCrow describes, then change all of your existing dialog classes that derive from CDialog to derive from MyCDialog instead. [Specifically, if a class doesn't derive from CDialog directly, then don't change it.]
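Here is a minimal sketch of that base class (class and member names are mine; modal dialogs close through OnOK/OnCancel, so both are overridden, and a modeless dialog destroyed directly would need extra handling):

#include <afxwin.h>
#include <afxcoll.h>

// Every dialog derived from this class registers itself on open and
// unregisters on close, giving one central list of open dialogs.
class CTrackedDialog : public CDialog
{
public:
    CTrackedDialog(UINT nIDTemplate, CWnd* pParent = NULL)
        : CDialog(nIDTemplate, pParent) {}

    static int OpenCount() { return s_open.GetSize(); }
    static CWnd* OpenDialogAt(int i) { return (CWnd*)s_open[i]; } // central access

protected:
    virtual BOOL OnInitDialog()
    {
        s_open.Add(this);                 // dialog is now open
        return CDialog::OnInitDialog();
    }
    virtual void OnOK()     { Untrack(); CDialog::OnOK(); }
    virtual void OnCancel() { Untrack(); CDialog::OnCancel(); }

private:
    void Untrack()
    {
        for (int i = 0; i < s_open.GetSize(); ++i)
            if (s_open[i] == this) { s_open.RemoveAt(i); break; }
    }
    static CPtrArray s_open;              // all currently open dialogs
};

CPtrArray CTrackedDialog::s_open;         // goes in a .cpp file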
--
Harvey
|
|
|
|
|
I have no idea where to ask this since it covers multiple areas, though the base code is in C++, so I thought I'd post it here.
In short, in part of our product we import AOL emails. The user can then export an email as a [structured storage] MSG file. However, when we then bring that file into Outlook, the From field in the list of emails is not populated. When you click on the message, there is data at the top where the sender information is listed. I've been experimenting with various streams in MSG files, and it seems Outlook picks up the "From" for the list from various places. I've populated ALL of those places in my exported file and Outlook still won't display anything.
Does anyone have any experience with this or have any idea how Outlook processes MSG files?
|
|
|
|
|
Joe Woodbury wrote: However, when we then bring that file into Outlook... Using code, or with Outlook itself?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"Show me a community that obeys the Ten Commandments and I'll show you a less crowded prison system." - Anonymous
|
|
|
|
|
With Outlook.
I may have found a solution. It turns out that an MSG file not only has multiple streams, but also a [stream] properties table which lists which streams are valid, along with their lengths. While I'd created the correct stream, I hadn't created a corresponding entry in the properties table.
The truly baffling part is that the "from" data is listed multiple times; why Outlook couldn't pick up one of the others is beyond me.
BTW, the stream id in question was, wait for it, 0x0042!
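For anyone who hits the same wall, this is roughly the shape of the entry that has to go into the __properties_version1.0 stream for each variable-length property stream you create (a sketch from my reading of the [MS-OXMSG] layout; the struct and field names are mine, and the exact size accounting for string types is worth double-checking against the spec):

#include <windows.h>

#pragma pack(push, 1)
struct MsgPropertyEntry          // one 16-byte record per property
{
    DWORD propertyTag;           // e.g. 0x0042001F: prop id 0x0042, type PT_UNICODE
    DWORD flags;                 // readable/writable attribute bits
    DWORD valueSize;             // byte count of the data in the matching
                                 // __substg1.0_0042001F stream
    DWORD reserved;
};
#pragma pack(pop)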
modified 13-Apr-13 22:12pm.
|
|
|
|
|
Hi,
I have an application that performs some processing on files dropped into a 'watch folder'. The problem I'm having is detecting when the file copy to the folder is complete.
The application needs to run on both Windows and OSX, so ideally I'd like a cross-platform solution, although there is no reason (other than future maintainability) for using the same algorithm on both platforms.
The way I'm doing this at the moment is to wait until I can get a read lock on the file (by just trying to open it), then periodically checking the size of the file to make sure it's not getting any bigger.
Once both of those criteria have been met, I process the file.
The problem I'm seeing is that once in a blue moon, the two criteria are met while the file is still being copied. My gut feeling is that it's caused by momentary network congestion, but I've been unable to reproduce the issue whilst debugging.
I can probably make it more stable by just throwing a few more checks at the problem and tweaking the time-out values, but the software engineer in me would like a more definitive solution.
Does anyone have any suggestions?
Cheers,
Charles
Charles Blessing
Software Engineer
NUGEN Audio
|
|
|
|
|
Charles Blessing wrote: The way I'm doing this at the moment is to wait until I can get a read lock on the file (by just trying to open it)... What about an exclusive read lock, or maybe a write lock?
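On the Windows side, a minimal sketch of that test (the function name is mine; note this is Windows-only, so the OSX build would need a different probe, e.g. open with O_EXLOCK):

#include <windows.h>

// Try to open the file with no sharing allowed. While the copier still
// has the file open this fails (typically ERROR_SHARING_VIOLATION);
// once it succeeds, the writer has closed its handle.
bool IsCopyComplete(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ,
                           0 /* no sharing: exclusive access */,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return false;            // still locked by the copier (or another error)
    CloseHandle(h);
    return true;
}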
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"Show me a community that obeys the Ten Commandments and I'll show you a less crowded prison system." - Anonymous
|
|
|
|