|
Well, to be clear, I started with C++, not C, but yeah. =) CString, std::string and the like are a mess.
To err is human. Fortune favors the monsters.
|
Another good argument for using C++ templates is their capability to "inject" code into algorithms. I mean that C is forced to use callbacks to extend the functionality of an algorithm; with C++ you can pass the "portion" of the algorithm that extends the base algorithm as a template argument to a template parameter. A functor (or lambda, they are the same beast) whose code is declared inline is generally injected straight into the template of the base algorithm.
For example, the classic C library qsort algorithm, ignoring the fact that it forces you to play dangerously with casts to connect arguments to the callback function parameters, produces a function call for each comparison. If the algorithm (which is never a pure quicksort but a modified version, e.g. introsort) has to deal with many elements and the compare function is inexpensive, it turns out that the biggest waste of time is the invocation of the callback. In fact, especially on processors without caches or sophisticated branch prediction mechanisms, which is almost always the case with small MCUs, a function call can cost a lot.
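To make the contrast concrete, a minimal sketch (the identifiers are mine, purely for illustration): qsort goes through a function pointer for every comparison, while std::sort receives the comparator as a template argument and can inline it.

#include <cstddef>
#include <cstdlib>
#include <algorithm>

// C: every comparison is an indirect call through this callback,
// plus the dangerous void* casts mentioned above.
static int cmp_int(const void* a, const void* b)
{
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_c(int* data, std::size_t n)
{
    std::qsort(data, n, sizeof(int), cmp_int);
}

// C++: the lambda's type is part of the instantiation, so the compare
// code is typically injected straight into the sorting loop.
void sort_cpp(int* data, std::size_t n)
{
    std::sort(data, data + n, [](int x, int y) { return x < y; });
}

With optimization on, the comparison usually disappears into the sort loop entirely; qsort cannot do that across a function pointer.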
This means that, apart from code bloat (which with modern MCUs is no longer a big problem since they have a lot of flash), C++ is often faster than C, unless the C programmer writes everything by hand and doesn't use any library facility.
It also makes it possible to use the ready-made STL or Boost algorithms that need neither exceptions nor dynamic allocation, customizing them efficiently. I want to remind all of you that such library code is usually written better than we could manage ourselves and, in general, it is already debugged. And yes, using custom allocators and the placement-new operator, C++ can do what C does, and many times it does even better: that is, with a little care, C++ can happily be used on small embedded systems.
Cheers
|
Absolutely, although I would not use STL and Boost on most IoT-class MCUs, and the reason is heap fragmentation.
When you're dealing with say 360KB of usable SRAM (the amount available for use on an ESP32) or less, the struggle is real. STL just takes a big steamer all over your heap.
To err is human. Fortune favors the monsters.
|
honey the codewitch wrote: Absolutely, although I would not use STL and Boost on most IoT-class MCUs, and the reason is heap fragmentation.
I understand, but I wasn't talking about STL containers: I was talking about "algorithms". Many ready-to-use algorithms only work on data in place. Moreover, you can use STL algorithms on standard C arrays (basically, pointers are "random access iterators" in the iterator-category sense, aren't they?). If you like templates, you can always use std::array, which has zero overhead with respect to a C array.
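For example, something like this needs no containers and no heap at all (a minimal sketch):

#include <algorithm>
#include <array>

int main()
{
    int raw[8] = {5, 3, 7, 1, 8, 2, 6, 4};
    std::sort(raw, raw + 8);           // plain pointers act as random access iterators

    std::array<int, 8> arr = {5, 3, 7, 1, 8, 2, 6, 4};
    std::sort(arr.begin(), arr.end()); // same layout as the C array, zero overhead
}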
honey the codewitch wrote: When you're dealing with say 360KB of usable SRAM (the amount available for use on an ESP32) or less, the struggle is real. STL just takes a big steamer all over your heap.
I started programming MCUs like the 6805 in pure assembly, up through the MC68000 in "pure C", so I know well what you are talking about.
Regards.
|
Fair enough. When you said algorithms I thought you were including things like hashtables.
To err is human. Fortune favors the monsters.
|
"Also, using the preprocessor freely is kind of liberating"
C will set you free. the oldest dogmas i've heard were: goto is evil and macros are evil
i remember you once said (probably here on The Lounge) that you prefer C++ and even that you work with it like it is C# (translating C++ to C# in your head and vice versa)
what i don't like about C is the standards. there are nice things in every standard that are of benefit, but they push Undefined Behavior as a way to shame and discipline the coder. what the creators of C didn't put in the language, they try to force through the standards
the spirit of C is kind of hippie, uncertain. that made me search for the first edition of The C Programming Language:
"C is a general-purpose programming language with features economy of expression...", "its absence of restrictions and its generality make it more convenient and effective for many tasks than supposedly more powerful languages."
"...the run-time library required to implement self-contained programs is tiny.", "...efficient enough that there is no compulsion to write assembly language instead." - this seems like something that is not important now, but lets think of the energy consumption.
"Existing compilers provide no run-time checking of array subscripts, argument types, etc." - wooow, you just put an int where the function takes a float , the sizeof float bytes are copied from the address of the integer object to the stack frame. the function treats the data as float .
the most astonishing for me has always been "The arguments to functions are passed by copying the value of the argument, and it is impossible for the called function to change the actual argument in the caller", which i interpret as Ritchie's intention towards pure functions. yes, you can pass a pointer to a Person to a function, but the default is passing a copy of the Person.
i chose C because it doesn't change, although C++ was my first choice. C++ now changes every 3 years? i cannot even recognize the language. anyway, i'm not a competent programmer.
|
Martin ISDN wrote: C will set you free. the oldest dogmas i've heard were: goto is evil and macros are evil
There's a time and a place for nearly everything (except Python). Macros make certain impossible things possible in C, like conditionally compiling code for different platforms, or setting compile-time configuration parameters in the absence of templates. Gotos are pretty handy for state machines, where higher-level constructs don't work because you can't jump in and out of while loops.
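For instance, something like this, with made-up names (RX_BUFFER_SIZE, data_ready() and consume() are just stand-ins):

#include <cstdio>

// Compile-time configuration parameter, overridable with -DRX_BUFFER_SIZE=...
#ifndef RX_BUFFER_SIZE
#define RX_BUFFER_SIZE 256
#endif

static char rx_buffer[RX_BUFFER_SIZE];
static int remaining = 3;  // stand-in for real device state

static bool data_ready() { return remaining > 0; }  // hypothetical
static bool consume() { return --remaining > 0; }   // hypothetical

// goto-based state machine: you can hop between states in ways
// a structured while loop won't let you.
void pump()
{
idle:
    if (remaining == 0) return;
    if (!data_ready()) goto idle;
reading:
    std::printf("consumed (buffer of %zu)\n", sizeof rx_buffer);
    if (consume()) goto reading;
    goto idle;
}

int main() { pump(); }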
Martin ISDN wrote: the most astonishing for me has always been "The arguments to functions are passed by copying the value of the argument, and it is impossible for the called function to change the actual argument in the caller"
I'm surprised this astonished you, as it's the default in most any programming language, including asm, where the most natural way to call is to put a *copy* of a value in a register or onto the stack. Indeed, to pass by reference you need to put the *address* of the object in a register or on the stack. BASIC facilitates this using the ByRef keyword, C# with the ref and out keywords, but it's pretty much always extra typing. The exception is arrays, including strings, because you *reference* them (in C by referencing the first element); while in theory you could push each element onto the stack, in practice that's prohibitive.
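In C terms it looks like this (a trivial sketch):

#include <cstdio>

void by_value(int x) { x = 42; }      // mutates only the callee's copy
void by_address(int* x) { *x = 42; }  // caller must explicitly pass &n

int main()
{
    int n = 0;
    by_value(n);
    std::printf("%d\n", n);  // still 0
    by_address(&n);
    std::printf("%d\n", n);  // now 42
}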
To err is human. Fortune favors the monsters.
|
honey the codewitch wrote:
I'm surprised this astonished you, as it's the default in most any programming language including asm, where the most natural way to call is to put a copy of a value in a register or onto the stack. Indeed to pass by reference you need to put the address of the object in a register or on the stack. BASIC facilitates this using the Byref keyword, C# with the ref and out keywords, but it's pretty much always extra typing. The exception is arrays including strings, because you reference them (in C by referencing the first element), and while in theory you could push each element onto the stack in practice that's prohibitive.
probably i wanted to give Dennis more credit than he deserves. once an idea like "i have underrated C, Dennis was more clever and foreseeing than i thought. he made the right compromises" appeared in my mind, it has been constantly working in the background trying to find new proof of greatness.
cannot test it, but the BASICs on the home computers may have defaulted to passing by reference. that set the intuition, at a very young age, that the function changes the caller's arguments
though, he made copy by value the only way to pass. except for that array!
i often wonder at length why he did so. arrays default to passing by reference, i suppose for economical reasons. but there is a way to pass one by copy: put it inside a struct. or simply cast the array to a struct.
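the struct trick looks like this (a tiny sketch, names are mine):

#include <cstdio>

struct Wrap { int a[4]; };  // wrapping the array changes the calling convention

void mutate(Wrap w)         // the whole array is copied into the callee
{
    w.a[0] = 99;
}

int main()
{
    Wrap w = {{1, 2, 3, 4}};
    mutate(w);
    std::printf("%d\n", w.a[0]);  // prints 1: the caller's array is untouched
}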
i wish i could find some paper written by Ritchie about this or "the other things, not because they are easy, but because they are hard"
|
I should add, I agree with you about C++ changes being frustrating, but I lean on the -std=c++17 compiler option, as it gives me a good mix of features without overwhelming me with language features I'm not familiar with, or restricting me to toolchains that support the newer standards.
The saving grace of C++ is the standards are stamped every few years with compilers allowing you to choose between them. It helps a lot.
To err is human. Fortune favors the monsters.
|
Back in the day, working for a NASA contractor, we were working on graphic representation of data from the engineering (FORTRAN) programs, which had to be done in C on massive UNIX machines. OOP was still a new concept, but we used the flexibility of C to create object-like arrays of data (okay, just the parameter part of objects, but with a suite of functions to support each). OOP was the natural progression, but I never did learn C++. Perhaps I should. The freedom of C is both scary and appealing to the megalomaniac in me; I always delighted in writing in it.
|
My main language is C, although I've coded in C++ far longer. I don't use classes unless I have a very good reason to, and on only one very specific occasion have I used templates (I hate generics). That said, I have to confess I come from the opposite end of programming. C is a very nice abstraction of assembler, with the added benefit of being portable, and I have emulated objects in C using function pointers within structs.
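Roughly this pattern (a minimal sketch, names invented):

#include <cstdio>

// an "object": data plus a hand-rolled virtual function
struct Shape
{
    double w, h;
    double (*area)(const Shape*);
};

static double rect_area(const Shape* s) { return s->w * s->h; }
static double tri_area(const Shape* s) { return s->w * s->h / 2.0; }

int main()
{
    Shape r = {3.0, 4.0, rect_area};
    Shape t = {3.0, 4.0, tri_area};
    std::printf("%g %g\n", r.area(&r), t.area(&t));  // dispatch through the pointer
}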
Despite the supposed (never seen it) security implications of void pointers, those are about as close as I can imagine to using generic data types in your functions (I do videogames, so performance and efficiency always trump readability... and security will always be an afterthought).
Anyways welcome 
|
I spent my career as an embedded developer (retired now) and I've only had one project that had enough RAM to be able to use something like STL. The project was basically done when I got it so all I did to it was add/fix miscellaneous features.
For most of my projects in the last 15 years of my career, I was using C++ but stayed away from dynamic object creation (all objects instantiated at startup) and inheritance. Since compiler tech had gotten so good, I did use templates for common things like FIFOs, queues, and components for some DSP (filters, tone generators, etc.). Don't be afraid to use C++ features, just make sure you know the memory and time cost.
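Something along these lines (a minimal sketch: fixed capacity, no heap, no exceptions; names are mine):

#include <cstddef>

template <typename T, std::size_t N>
class Fifo
{
    T buf_[N];
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
public:
    bool push(const T& v)
    {
        if (count_ == N) return false;  // full; caller decides how to react
        buf_[tail_] = v;
        tail_ = (tail_ + 1) % N;
        ++count_;
        return true;
    }
    bool pop(T& out)
    {
        if (count_ == 0) return false;  // empty
        out = buf_[head_];
        head_ = (head_ + 1) % N;
        --count_;
        return true;
    }
    std::size_t size() const { return count_; }
};

static Fifo<short, 32> samples;  // lives entirely in static storage

Returning false instead of throwing keeps it usable with exceptions disabled.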
|
Sounds like we're very similar in how we approach C++ on embedded and IoT.
In projects like htcw_gfx[^] I rarely allocate memory for you. Temporary allocations are sometimes necessary for performance, but they are few and far between, and they let you specify custom allocators.
I don't use STL in such projects. In fact, I've gone out of my way to avoid it, even implementing my own streams and basic data structures like a hashtable and a vector that you can't remove items from aside from clearing the whole thing. The reason is flash size and memory usage - primarily heap frag.
I keep my constructors inlineable by my compiler and generally use template-based "interfaces" at the source level rather than inheritance-based interfaces at the binary level, the idea being that flash size is at less of a premium than CPU cycles.
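Roughly this pattern (a sketch with invented names):

// Binary-level interface: vtable in flash, indirect call per pixel.
struct DrawTarget
{
    virtual void point(int x, int y, unsigned color) = 0;
    virtual ~DrawTarget() = default;
};

// Source-level "interface": any Target with a matching point() works,
// and the call can be inlined into the loop. No vtable anywhere.
template <typename Target>
void hline(Target& t, int x0, int x1, int y, unsigned color)
{
    for (int x = x0; x <= x1; ++x)
        t.point(x, y, color);
}

struct FakeScreen
{
    void point(int, int, unsigned) { /* poke the framebuffer here */ }
};

int main()
{
    FakeScreen s;
    hline(s, 0, 99, 10, 0xFFFFu);
}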
To err is human. Fortune favors the monsters.
|
As a late-comer to this thread, all I can say is that somewhere inside C++ is a beautiful language waiting to get out.
C/C++ both suffer from readability issues. When I look at a piece of code, I want to be able to "grok" it in a few seconds.
My most serious complaint about both is that the very things that make them usable, flexibility and conciseness, also work against them.
I trip through a lot of third-party code and it takes waaaaay too much time to understand what the original author wanted to do and how he went about it. Then try to find the error, or where it needs to be tweaked to add/modify the code... days turn into weeks, then months.
Simply put, I want to look at the code and comprehend its intent and organization in minutes.
Bottom line: C/C++ allow a programmer too many variations to accomplish a task. Good and bad.
|
I think you have to be more careful to write readable C++ code, but it's totally doable. It's just that a lot of people (sadly, including myself) don't bother.
On the other hand, reading C++ is a bit like listening to Aesop Rock. Absolutely unintelligible at first, each for similar reasons, actually, but you start to get an organic feel for the various ways it tends to get put together and then it clicks.
To err is human. Fortune favors the monsters.
|
I agree on both counts: It's doable, ...don't bother.
|
If you like C, wait until you see C#! It's what C++ was meant to be.
|
No it isn't. Just sayin'
To err is human. Fortune favors the monsters.
|
I am not asking for absolute knowledge, I am not asking you to Google for me. And I know I could measure the answer myself. I just wonder whether you share my gut feeling on this.
During a review yesterday I came across: (The language here is Go, but I would argue the same way in C etc. And uint64() is compile-time)
gopNr = reqGopNr - entryStart + uint64(entry.Offset)
if gopNr >= uint64(entry.assetLen) {
    gopNr = gopNr % uint64(entry.assetLen)
}
I commented:
I do not think % can have a measurable cost for small divisors. I would skip the if. It is a single IDIV operation in X86. If that is expensive, there might even be an if in the operator already...
Shooting from the hip, what is your gut feeling?
** Update! **
Thanks for all the interesting feedback.
So I did measure it: Go Playground - The Go Programming Language[^]. For some reason the code always measures zero or times out on that playground, but it measures fine locally.
The verdict is:
Running with the if is in fact faster, if we stick to the original assumption that the dividend is almost always smaller than the divisor.
The difference is a blazing 10 nanoseconds, or if you prefer, a factor of ~4x on an old x86 laptop. I was wrong in thinking that it would not be measurable. But this will run on a monster server, and this is not the most frequently visited code. So I still vote to remove the if for the sake of readability.
"If we don't change direction, we'll end up where we're going"
modified 14-Sep-22 12:15pm.
|
Ifs can flush the instruction prefetch queue on most processors when the branch is mispredicted. Branch prediction can mitigate this somewhat, but it's literally hit or miss.
I'd use almost any instruction as an alternative to a conditional jmp.
To err is human. Fortune favors the monsters.
|
I wouldn't use it on a PC.
I would use it on the MCUs I usually deal with.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
I'm in two minds: part of me agrees with Carlos - it depends on the target processor. ARM for example has conditional execution on almost every instruction, so the if becomes a "skip" rather than a full-on jump.
But ... the modulus operator is an integer divide with knobs on (unless the divisor is always a power of two), and they aren't cheap, so it could be that it's worth the comparison cost even if it breaks branch prediction.
And since the condition requires address calculation as well as a comparison, I'd probably say "dump it" even then. Optimization may improve it if it's in a tight loop, but I'd want to look at the assembly code before making a final decision.
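On the power-of-two point: for unsigned operands the compiler strength-reduces the modulus to a mask, so there is no divide at all (tiny sketch):

unsigned mod_pow2(unsigned x)
{
    return x % 8u;  // typically compiled to x & 7u: one AND, no divide, no branch
}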
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
Way more in depth than I've ever gotten. I just avoid ifs in tight code!
But maybe I shouldn't be. I do know that if you can make, say, an entire DFA traversal without conditional branching (and I think it's possible?) it should be significantly faster than the traditional method, which requires a ton of branching, but then you probably wouldn't be using idiv instructions in the first place with such a beast.
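Something like this is what I have in mind, sketched as a made-up two-state matcher:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Branchless DFA traversal: each step is a table lookup, no if per input byte.
static uint8_t next_state[3][256];

static void build()  // tiny DFA that recognizes the substring "ab"
{
    std::memset(next_state, 0, sizeof next_state);          // everything back to start
    for (int s = 0; s < 3; ++s) next_state[s]['a'] = 1;     // 'a' begins a match
    next_state[1]['b'] = 2;                                 // "ab" seen: accept
    for (int c = 0; c < 256; ++c) next_state[2][c] = 2;     // accept is sticky
}

static int matches(const char* s)
{
    uint8_t state = 0;
    for (; *s; ++s)
        state = next_state[state][(unsigned char)*s];       // the entire hot loop
    return state == 2;
}

int main()
{
    build();
    std::printf("%d %d\n", matches("xxabyy"), matches("xxba"));  // prints: 1 0
}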
So I guess ultimately it depends, as you suggest.
I did not know that about the ARMs. I've been mostly dealing with Tensilica XTensa LX chips, but I'm getting sick of them.
The trouble with ARMs is they're as rare as hen's teeth. Out of stock everywhere for the ones I want.
To err is human. Fortune favors the monsters.
|
I wouldn't care one way or the other unless this code was executed frequently. Very frequently.
|
The conditional branch should be slower on modern desktop CPUs.
Have a look at this table: Instruction tables
Scroll down to the Intel 11th generation Tiger Lake. The IDIV only costs 4 ops. The JGE and two MOVs for the conditional will exceed that.
It depends on the CPU; older architectures benefit from the branch.
|