I first saw this technique in the STL, where collections return a pair from their insert method, the bool indicating whether the item was inserted or already existed.
Personally I dislike this. I find the names first and second, which identify the pair's elements, very misleading, and I'm constantly having to look at the doco to see what something is. The worst case I've seen was along the lines of
pair<bool, pair<bool, bool>>
. How could anyone be expected to remember what result.second.first is?
This has the disadvantage that the caller has to allocate a bool and pass it in even if they're not interested in the result. (This can be overcome by using a pointer with a default value of NULL, but that's also ugly.)
I blame python and its tuples for this outrage. What do others think?
Well, as long as we're talking about personal preferences.
I think that templates, in general, have uglied up the language by spraying < and > all over the place. I think somebody went nuts with this stuff and talked the language committee into using it. And it's not just the template definition that's affected: the coder has to use that syntax when referencing it too, which has contributed to the unreadability of code.
I mean, > is already used in pointer access (->), in inequality tests (greater than), and in the right-shift (>>) operator. Then somebody comes along and overloads it for cin and other stream I/O. And don't get me started on <.
I mean really, try writing code that takes some template result as a pointer to an object that you want to stream data into. Count the >'s. If they were right parens ) instead, it would remind me of early Lisp.
Rich syntax has nothing to do with maintainable code.
"Clever" coding tricks demonstrate that one has mastered a language while at the same time demonstrating that one hasn't mastered software development (which of course includes much more than just producing code).
Oh, I'm sure it saved somebody some debugging time. "saved" is a bit of a stretch unless somebody would have released something with insufficient testing. And if they do that, there'd be plenty of other bugs that will bite them due to the lack of testing.
Personally, I hate it too. I tend to read code like English, left to right, and reading it as "if my pointer is null" is more natural for me than the other way around. Speakers of other native languages may find the other order more natural.
In my opinion, if you are writing code that will be maintained by someone else, a support team or other engineers as you move on to bigger projects, then "readability" is more important. Readability includes code structure / indentation, meaningful comments, reasonably mnemonic variable and function names, and making if statements readable, as they convey the "branching logic" of the program.
This has helped me detect the problem, but I cannot say the bug would've made it into production code, because it would surely have been detected during testing.
But nevertheless, it was a good trick to detect the typo.
However, it was not foolproof, because sometimes we would need to compare two variables instead of a variable and a constant.
In such cases a single equals sign would perform the assignment and you would get unexpected results.
Having said all this, I don't think it is necessary to write the constant first anymore with the current compilers because it would surely give you a warning.
For example, VS 2010 shows the C4706 warning.
I'm not sure about other compilers like GCC.
«_Superman_» I love work. It gives me something to do between weekends.
Yes, that is in fact a very good alternative, specifically if you treat warnings as errors for production code. At the time I started using the practice there were no such warnings, and thus no such elegant alternative. Also, I remember quite a few typical constructs that deliberately used assignments inside conditions.
Even I love this.
Putting the constant on the left-hand side surely saves you from a missing '='. The earlier compilers were not intelligent enough to raise a warning like the latest ones, which might have forced people like me to cultivate this habit. Otherwise one had to debug the code to find this kind of bug (some great minds might have found these kinds of bugs with their open eyes, I won't deny that). With the constant first, you never get the chance to have to debug the code for such bugs. That's my personal opinion.
If the variable is declared to be bool (C++) or BOOL (C/Microsoft) then I don't mind the if (bCondition) or even if (!bCondition). What kills me is when the variable is an int or similar and that shortcut is used just because it evaluates to a zero/non-zero test. Damnit, write:
if (iVar != 0)
or
if (iVar == 0)
if that's what you're testing for, and don't use the "boolean variable" if notation. Writing
if (!iVar)
to mean "if iVar is equal to zero" just drives me crazy.
What? Hungarian Notation going out of style? Hey, I'm just getting into it, having finally stopped following the Fortran II convention that variables starting with I through N are integers and everything else is floating point (aka "real").
I have no problem at all with, and in fact prefer, the form if (bCondition), provided bCondition is of type bool. I hate it though when the variable in question is, in fact, an int or similar, especially when it can take more than 2 values (e.g. when interpreting error codes, where sometimes 0 indicates failure, and on other occasions 0 indicates success and anything else is an enumerated code...).
In my experience, this is not true. Typing '=' instead of '==' is as often a real typing error as it is a coding oversight. I even go so far as to always put the constants on the left hand side in all comparisons, so I don't forget to do so on equality operators. I never ever had a '='/'==' error since I started using this practice some time around 1990. (I did fix quite a few such errors caused by others though, and I can tell you that kind of work is not pretty!)