|
I should add, I agree with you about C++ changes being frustrating, but I lean on the -std=c++17 compiler option, as it gives me a good mix of features without overwhelming me with language features I'm not familiar with, or restricting me to toolchains that support the newer standards.
The saving grace of C++ is that the standards are stamped every few years, with compilers letting you choose between them. It helps a lot.
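For example (the exact spelling varies by toolchain, but these flags do exist): g++ -std=c++17 main.cpp with GCC or Clang, or /std:c++17 with MSVC.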
To err is human. Fortune favors the monsters.
|
|
|
|
|
Back in the day, at a NASA contractor, we were working on graphical representation of data from the engineering (FORTRAN) programs, which had to be done in C on massive UNIX machines. OOP was still a new concept, but we used the flexibility of C to create object-like arrays of data (okay, just the parameter part of objects, but with a suite of functions to support each). OOP was the natural progression, but I never did learn C++. Perhaps I should. The freedom of C is both scary and appealing to the megalomaniac in me; I always delighted in writing in it.
|
|
|
|
|
My main language is C, although I've coded in C++ far longer. I don't use classes unless I have a very good reason to, and on only one very specific occasion have I used templates (I hate generics). That said, I have to confess I come from the opposite end of programming: C is a very nice abstraction of assembler, with the added benefit of being portable, and I have emulated objects in C using function pointers within structs.
Despite the supposed (never seen it) security implications of void pointers, those are about as close as I can imagine to generic data types in your functions (I do videogames, so performance and efficiency always trump readability... and security will always be an afterthought).
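The function-pointers-in-structs trick looks roughly like this (a minimal sketch with made-up names, not from any real game; it compiles as either C or C++):

#include <stdio.h>

/* A hypothetical "entity": plain data plus function pointers, with an
   explicit 'self' argument playing the role of 'this'. */
typedef struct entity entity;
struct entity {
    float x, y;
    void (*update)(entity *self, float dt);
    void (*draw)(const entity *self);
};

static void player_update(entity *self, float dt) {
    self->x += 10.0f * dt;              /* move right at 10 units/sec */
}

static void player_draw(const entity *self) {
    printf("player at (%.1f, %.1f)\n", self->x, self->y);
}

int main(void) {
    entity player = { 0.0f, 0.0f, player_update, player_draw };
    player.update(&player, 0.016f);     /* "method" call, spelled out */
    player.draw(&player);
    return 0;
}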
Anyways, welcome!
|
|
|
|
|
I spent my career as an embedded developer (retired now), and I've only had one project with enough RAM to be able to use something like the STL. That project was basically done when I got it, so all I did was add and fix miscellaneous features.
For most of my projects in the last 15 years of my career, I was using C++ but stayed away from dynamic object creation (all objects instantiated at startup) and inheritance. Since compiler tech had gotten so good, I did use templates for common things like FIFOs, queues, and components for some DSP (filters, tone generators, etc.). Don't be afraid to use C++ features; just make sure you know the memory and time cost.
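A fixed-capacity FIFO in that style might look something like this (a minimal sketch of the idea, not code from any of my actual projects):

#include <cstddef>
#include <cstdint>

// Fixed-capacity FIFO: storage lives inside the object, so the memory
// cost is known at compile time and nothing ever touches the heap.
template <typename T, size_t Capacity>
class fifo {
    T m_data[Capacity];
    size_t m_head = 0;
    size_t m_tail = 0;
    size_t m_count = 0;
public:
    bool push(const T& value) {
        if (m_count == Capacity) return false;  // full; caller decides what to do
        m_data[m_tail] = value;
        m_tail = (m_tail + 1) % Capacity;
        ++m_count;
        return true;
    }
    bool pop(T& out) {
        if (m_count == 0) return false;         // empty
        out = m_data[m_head];
        m_head = (m_head + 1) % Capacity;
        --m_count;
        return true;
    }
    size_t size() const { return m_count; }
};

// One instance, created at startup; no dynamic object creation afterward.
static fifo<uint8_t, 64> uart_rx_queue;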
|
|
|
|
|
Sounds like we're very similar in how we approach C++ on embedded and IoT.
In projects like htcw_gfx[^] I rarely allocate memory for you. Temporary allocations are sometimes necessary for performance, but they are few and far between, and they let you specify custom allocators.
I don't use the STL in such projects. In fact, I've gone out of my way to avoid it, even implementing my own streams and basic data structures, like a hashtable and a vector that you can't remove items from aside from clearing the whole thing. The reason is flash size and memory usage - primarily heap fragmentation.
I keep my constructors inlineable by my compiler and generally use template-based "interfaces" at the source level rather than inheritance-based interfaces at the binary level, the idea being that flash size is at less of a premium than CPU cycles.
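The template-based interface pattern, very roughly (a hypothetical sketch, not actual htcw_gfx code):

#include <cstdio>

// Binary-level interface: virtual dispatch, a vtable in flash, and an
// indirect call the compiler generally can't inline.
struct draw_target {
    virtual void point(int x, int y) = 0;
    virtual ~draw_target() {}
};

// Source-level "interface": any type with a point(x, y) method works,
// and each call is resolved at compile time, so it can be inlined.
template <typename Destination>
void draw_diagonal(Destination& dst, int length) {
    for (int i = 0; i < length; ++i) {
        dst.point(i, i);    // direct, inlineable call
    }
}

struct console_target {
    void point(int x, int y) { printf("(%d,%d) ", x, y); }
};

int main() {
    console_target screen;
    draw_diagonal(screen, 5);  // one instantiation per target type:
    printf("\n");              // more flash spent, fewer cycles burned
}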
To err is human. Fortune favors the monsters.
|
|
|
|
|
As a late-comer to this thread, all I can say is that somewhere inside C++ is a beautiful language waiting to get out.
C/C++ both suffer from readability issues. When I look at a piece of code, I want to be able to "grok" it in a few seconds.
My most serious complaint about both is that the very things that make them usable, flexibility and conciseness, also work against them.
I trip through a lot of third-party code, and it takes waaaaay too much time to understand what the original author wanted to do and how he went about it. Then I try to find the error, or the spot that needs to be tweaked to add or modify behavior... days turn into weeks, then months.
Simply put, I want to look at the code and comprehend its intent and organization in minutes.
Bottom line: C/C++ allow a programmer too many variations to accomplish a task. Good and bad.
|
|
|
|
|
I think you have to be more careful to write readable C++ code, but it's totally doable. It's just that a lot of people (sadly, including myself) don't bother.
On the other hand, reading C++ is a bit like listening to Aesop Rock. Absolutely unintelligible at first, each for similar reasons, actually, but you start to get an organic feel for the various ways it tends to get put together and then it clicks.
To err is human. Fortune favors the monsters.
|
|
|
|
|
I agree on both counts: It's doable, ...don't bother.
|
|
|
|
|
If you like C, wait until you see C#! It's what C++ was meant to be.
|
|
|
|
|
No it isn't. Just sayin'
To err is human. Fortune favors the monsters.
|
|
|
|
|
I am not asking for absolute knowledge, and I am not asking you to Google for me. And I know I could measure the answer myself; I just wonder whether you share my gut feeling on this.
During a review yesterday I came across the following. (The language here is Go, but I would argue the same way in C, etc. And uint64() is compile-time.)
gopNr = reqGopNr - entryStart + uint64(entry.Offset)
if gopNr >= uint64(entry.assetLen) {
    gopNr = gopNr % uint64(entry.assetLen)
}
I commented:
I do not think % can have a measurable cost for small divisors. I would skip the if. It is a single IDIV instruction on x86. If that were expensive, there might even be an if inside the operator already...
Shooting from the hip, what is your gut feeling?
** Update! **
Thanks for all the interesting feedback.
So I did measure it: Go Playground - The Go Programming Language[^]. For some reason the code always measures zero or times out on that playground, but it measures fine locally.
The verdict is:
Running with the if is in fact faster, if we stick to the original assumption that the value is almost always smaller than the divisor.
The difference is a blazing 10 nanoseconds, or if you prefer, a factor of ~4x on an old x86 laptop. I was wrong in thinking that it would not be measurable. But this will run on a monster server, and this is not the most frequently visited code, so I still vote to remove the if for the sake of readability.
"If we don't change direction, we'll end up where we're going"
modified 14-Sep-22 12:15pm.
|
|
|
|
|
Mispredicted ifs flush the instruction pipeline and prefetch queue on most processors. Branch prediction mitigates this somewhat, but it's literally hit or miss.
I'd use almost any instruction as an alternative to a conditional jmp.
To err is human. Fortune favors the monsters.
|
|
|
|
|
I wouldn't use it on a PC.
I would use it on the MCUs I usually deal with.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
I'm in two minds: part of me agrees with Carlos - it depends on the target processor. ARM, for example, has conditional execution on almost every instruction, so the if becomes a "skip" rather than a full-on jump.
But ... the modulus operator is an integer divide with knobs on (unless the divisor is always a power of two), and those aren't cheap, so it could be that it's worth the comparison cost even if it breaks branch prediction.
And since the condition requires address calculation as well as a comparison, I'd probably say "dump it" even then. Optimization may improve it if it's in a tight loop, but I'd want to look at the assembly code before making a final decision.
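To make that concrete, here are three ways the wrap could be written (a hypothetical C++ analogue of the Go snippet above, not the reviewed code):

#include <cstdint>
#include <cstdio>

// 1. Branch plus modulo, as in the original review.
uint64_t wrap_branch(uint64_t gopNr, uint64_t assetLen) {
    if (gopNr >= assetLen) gopNr %= assetLen;
    return gopNr;
}

// 2. Unconditional modulo: shortest source, pays the divide every time.
uint64_t wrap_mod(uint64_t gopNr, uint64_t assetLen) {
    return gopNr % assetLen;
}

// 3. Conditional subtract: only valid if gopNr < 2*assetLen is
// guaranteed, but it needs neither a divide nor a jump; compilers
// typically emit a conditional move (x86 CMOV) or conditional select
// (ARM CSEL) for it.
uint64_t wrap_sub(uint64_t gopNr, uint64_t assetLen) {
    return gopNr >= assetLen ? gopNr - assetLen : gopNr;
}

int main() {
    // Same inputs through all three: identical results while gopNr < 2*assetLen.
    printf("%llu %llu %llu\n",
           (unsigned long long)wrap_branch(13, 10),
           (unsigned long long)wrap_mod(13, 10),
           (unsigned long long)wrap_sub(13, 10));
}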
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Way more in depth than I've ever gotten. I just avoid ifs in tight code!
But maybe I shouldn't. I do know that if you can make, say, an entire DFA traversal run without conditional branching (and I think it's possible?), it should be significantly faster than the traditional method, which requires a ton of branching. But then you probably wouldn't be using idiv instructions in the first place with such a beast.
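Something like this minimal sketch, say (a made-up example that checks whether a string is all binary digits):

#include <cstdint>
#include <cstdio>
#include <cstring>

// 2-state DFA: state 1 = still valid, state 0 = rejected (a dead state
// that only maps back to itself).
static uint8_t transition[2][256];

static void build_table() {
    memset(transition, 0, sizeof transition); // default: everything rejects
    transition[1]['0'] = 1;                   // valid stays valid on '0'/'1'
    transition[1]['1'] = 1;
}

// The inner loop is branchless apart from the loop condition itself:
// every input byte becomes a table lookup, never a conditional jump.
static int run(const char *s, size_t n) {
    uint8_t state = 1;
    for (size_t i = 0; i < n; ++i) {
        state = transition[state][(uint8_t)s[i]];
    }
    return state;
}

int main() {
    build_table();
    printf("%d\n", run("0110", 4)); // 1: accepted
    printf("%d\n", run("01a0", 4)); // 0: rejected
}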
So I guess ultimately it depends, as you suggest.
I did not know that about the ARMs. I've been mostly dealing with Tensilica Xtensa LX chips, but I'm getting sick of them.
The trouble with ARMs is they're as rare as hen's teeth. Out of stock everywhere for the ones I want.
To err is human. Fortune favors the monsters.
|
|
|
|
|
I wouldn't care one way or the other unless this code was executed frequently. Very frequently.
|
|
|
|
|
The conditional branch should be slower on modern desktop CPUs.
Have a look at this table: Instruction tables
Scroll down to the Intel 11th-generation Tiger Lake: the IDIV only costs 4 µops, and the JGE and two MOVs for the conditional will exceed that.
It depends on the CPU; older architectures benefit from the branch.
|
|
|
|
|
CMP+JGE (these macro-fuse) and two MOVs is only 3 µops, and they're fast µops. Why are there two MOVs anyway? Only entry.assetLen should be getting loaded here; we already have gopNr, and I don't see any immediate reason to copy it to another register. These µops are also not in the dependency chain of gopNr; they're only there for the compare-and-branch, so the following code could execute at the same time as this condition is being evaluated (subject, of course, to throughput limitations).
At least one of the µops in IDIV has a bad latency and moderately bad throughput, and they're in the dependency chain from computing gopNr to using it (not shown).
|
|
|
|
|
Yeah,
If it makes you feel better, I would probably use the if statement, simply because not everyone is on Ice/Tiger Lake.
|
|
|
|
|
I suppose both operands are known to always be positive?
If not, you understand that the meaning of the % operator varies between languages?
I probably wouldn't use the if, though I might test both ways just out of curiosity.
It looks like a sophomoric inclusion.
A freshman doesn't know an issue may exist.
A sophomore thinks an issue may exist -- and adds protection.
A master knows the suspected issue doesn't exist.
A little knowledge is a dangerous thing.
It's like when junior developers test an index value every time rather than simply catching an Exception (C#) when something goes awry.
|
|
|
|
|
My gut feeling is that code that's clever at the expense of readability should only be allowed when it has a demonstrated performance impact. Show me something that indicates that it will significantly increase application performance or the PR gets rejected.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
|
|
|
|
|
Yeah! If you can't do it in VB then don't bother.
- I would love to change the world, but they won’t give me the source code.
|
|
|
|
|
I'd put the if there, especially if wrapping is unusual.
Even on modern chips with so-called "fast division", a 64-bit div (surely we're talking about unsigned numbers here?) takes over a dozen cycles at best. Sure, it has only a few µops today, but they're µops with a high latency (or at least one of them is, anyway). Going further back, div only gets worse. Computers with "slow division" are still extremely common; Cascade Lake still had slow division, and those are high-end computers that are only a couple of years old.
By contrast, a branch can be bad, but this one won't be, if the comment is to be believed. If wrapping is unusual, then the branch will usually be correctly predicted non-taken. The comparison (and associated loads, if any) that happens before the branch is also nearly irrelevant in that case, because that dependency chain ends in the branch. Code after it does not need to wait until the comparison is done. In the normal case where there is no wrapping, an instruction that uses the new value of gopNr may be able to execute back-to-back with the instruction that produced it (doesn't mean it will, but it could). That is of course impossible if there was a div between them.
megaadam wrote: If that is expensive, there might even be an if in the operator already...
Doesn't happen on any compiler I'm familiar with. I'm not familiar with the Go compiler, but still. It's not really a thing.
modified 14-Sep-22 11:58am.
|
|
|
|
|
megaadam wrote: blazing 10 nanoseconds
That's about 10 feet (3 meters) at light speed. Whenever I see 'nanoseconds', I am reminded of USN Rear Admiral Grace Hopper - one of the greats of early computing history. See, for example, Grace Hopper's Nanoseconds[^]
<edit>Mis-spelt 'Hopper'
modified 15-Sep-22 5:28am.
|
|
|
|
|
Tyrant works with badly paid journalist (9)
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|