I disagree about allocating ALL arrays on the heap... that's not a good practice at all... the heap is significantly slower for allocating resources. I think this is a special case because of the size of his arrays.
I have worked in the kernel for 14 years. If the code isn't 100%, it is crap. I never use stack-based buffers because they can be overrun, leading to an impossible-to-debug stack trace. If you believe one thing I say, it is this: all arrays go on the heap, where Verifier can catch overruns for you.
Seriously, my kernel code has shipped in 30% of global products. I know what I am talking about here.
As much as I hate jumping into another man's argument, I feel compelled to answer this one.
When you have as much kernel experience as I have, I might listen to you.
The problem with saying things like this is that somebody comes along with more experience. I'll see your 14 years and raise you 31 more. I've been doing this since 1967 on various machine types, mostly on kernel / monitor / core / OS, whatever you wish to call it. And yes, some of my kernel code has been into space.
Allocating from something called heap / common storage / pool / whatever is always slower than just adding / subtracting a size from the stack pointer. Plus, the pool is usually one shared resource so for multithreaded or multiprocessor kernels some sort of locking mechanism is needed to prevent corruption of the basic allocation / deallocation structures. Add in the possibility of doing allocations at interrupt level too and you complicate the locking mechanism further.
I've also done a fair amount of kernel performance analysis over the years, and guess what it reveals as a common bottleneck in kernel programming: heap allocation / deallocation. These are the places where one concentrates the effort to find smarter and faster allocation methods, split pools, etc. And even though better algorithms have been developed over the years, carving small arrays / structures / blocks off the stack just flies past calling pool allocation.
Now it is true that you need kernel pool for longer-lived objects, and maybe for things too large to be of practical use on the stack, but please don't argue that all things must always go in one place over another; that's just too many absolutes. Sounds more like a religious argument than a logical one.
I don't know if you followed the rest of this thread, but what I recommended was using the heap so that Verifier can see where your buffers are getting overrun, then going back to a stack buffer when the code is good.
Yes, heap allocation is slow; that's why preallocation is a good idea for frequently used memory blocks of a known size. Lookaside lists, for example. I mentioned this too. That obviates any performance issues.
What you say about locking mechanisms isn't the case though. The memory manager deals with that, not the code, so it doesn't add complication. Shared data between threads will, but that won't be the case here, since the guy is only using the buffer in one function.
Re kernel performance: lookaside lists. Plus, IO is the real bottleneck.
But here is my central point, which is always valid: stability is more important than speed. Always.
This is why I don't like jumping into someone else's argument: it becomes mine.
Yes, I've followed the posts, and just to be sure, I went back and re-read all of yours specifically (you should do the same to refresh your memory). You never mentioned going back to a stack buffer "after the code is good". In fact, quite the opposite: you were adamant that buffers always come from the heap, always, always. You repeated that quite often. So the responses were to those statements.
What you say about locking mechanisms isn't the case though. The memory manager
deals with that, not the code, so it doesn't add complication.
Well, that's a convenient hand wave, blaming the underlying function rather than the caller who invokes it. The point is that memory allocation at that level is costly, and even if the "memory manager" has to deal with the locking, etc., you are still responsible for choosing a methodology that invokes that call over stack allocation. So the introduction of the overhead is your choice; the kernel code is just giving you what you asked for. Don't blame it.
Plus, IO is the real bottleneck
IO is a "wait state" event that isn't chewing up CPU cycles, which is what this discussion was all about. The kernel / application is free to do other things while IO is going on, using any number of asynchronous IO techniques. If you now wish to have a discussion on all the things that affect application / kernel performance, we can do that too.
Stability is more important than speed. Always.
Only an idiot would argue in favor of "instability". Of course stability is important. In fact, if you're getting paid to write code, your client / employer will assume stability and will find someone else to deliver it if you fail. So most don't even bother listing stability as a priority; it's assumed you will deliver it. On the other hand, many will list speed as the priority, depending on the application. Imagine trying to defend a radar application that is too slow to catch all the incoming phased radar data by saying "but it's stable!!".
Vectors use the heap effectively because they allocate memory ahead of time instead of doing a new on every insertion. They get around the slowness at the expense of holding some extra memory allocated ahead of time. I think he refuses to hear other people's arguments at all.
Oh, and by the way, if you regularly need a set size of heap buffer, you can preallocate and manage it yourself. In the kernel you can use lookaside lists to do the same. Very good they are too. I work a lot in the kernel. Believe me, stability is more important.